Welcome to benchbuild’s documentation!

benchbuild package

Subpackages

benchbuild.experiments package

Experiments module.

By default, only experiments that are listed in the configuration are loaded automatically. See configuration variables:

*_PLUGINS_AUTOLOAD
*_PLUGINS_EXPERIMENTS
benchbuild.experiments.discover()[source]

Import all experiments listed in PLUGINS_EXPERIMENTS.

Tests:
>>> from benchbuild.settings import CFG
>>> from benchbuild.experiments import discover
>>> import logging as lg
>>> import sys
>>> lg.getLogger('benchbuild').setLevel(lg.DEBUG)
>>> lg.getLogger('benchbuild').handlers = [lg.StreamHandler(stream=sys.stdout)]
>>> CFG["plugins"]["experiments"] = ["benchbuild.non.existing", "benchbuild.experiments.raw"]
>>> discover()
Could not find 'benchbuild.non.existing'
ImportError: No module named 'benchbuild.non'
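Under the hood, discovery amounts to attempting an import for every configured module name and logging the failures. A minimal stand-alone sketch of that mechanism (not benchbuild's actual implementation):

```python
import importlib
import logging

LOG = logging.getLogger(__name__)


def discover(plugin_names):
    """Try to import each configured plugin module; log and skip failures."""
    loaded = []
    for name in plugin_names:
        try:
            loaded.append(importlib.import_module(name))
        except ImportError as err:
            LOG.error("Could not find '%s'", name)
            LOG.error(str(err))
    return loaded
```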
Subpackages
benchbuild.experiments.polly package
Submodules
benchbuild.experiments.polly.openmp module

The ‘polly-openmp’ Experiment.

This experiment applies polly’s transformations with openmp code generation enabled to all projects and measures the runtime.

This forms the baseline numbers for the other experiments.

Measurements
Three metrics are generated during this experiment:

time.user_s - The time spent in user space in seconds (aka virtual time)
time.system_s - The time spent in kernel space in seconds (aka system time)
time.real_s - The time spent overall in seconds (aka wall clock)
class benchbuild.experiments.polly.openmp.PollyOpenMP(projects=None, group=None)[source]

Bases: benchbuild.experiment.RuntimeExperiment

Timing experiment with Polly & OpenMP support.

NAME = 'polly-openmp'
actions_for_project(project)[source]

Build & Run each project with Polly & OpenMP support.

benchbuild.experiments.polly.openmpvect module

The ‘polly-openmp-vectorize’ Experiment.

This experiment applies polly’s transformations with openmp code generation enabled to all projects and measures the runtime.

This forms the baseline numbers for the other experiments.

Measurements
Three metrics are generated during this experiment:

time.user_s - The time spent in user space in seconds (aka virtual time)
time.system_s - The time spent in kernel space in seconds (aka system time)
time.real_s - The time spent overall in seconds (aka wall clock)
class benchbuild.experiments.polly.openmpvect.PollyOpenMPVectorizer(projects=None, group=None)[source]

Bases: benchbuild.experiment.RuntimeExperiment

Timing experiment with Polly & OpenMP+Vectorizer support.

NAME = 'polly-openmpvect'
actions_for_project(project)[source]

Compile & Run the experiment with -O3 enabled.

benchbuild.experiments.polly.polly module
The ‘polly’ Experiment

This experiment applies polly’s transformations to all projects and measures the runtime.

This forms the baseline numbers for the other experiments.

Measurements
Three metrics are generated during this experiment:

time.user_s - The time spent in user space in seconds (aka virtual time)
time.system_s - The time spent in kernel space in seconds (aka system time)
time.real_s - The time spent overall in seconds (aka wall clock)
class benchbuild.experiments.polly.polly.Polly(projects=None, group=None)[source]

Bases: benchbuild.experiment.RuntimeExperiment

The polly experiment.

NAME = 'polly'
actions_for_project(project)[source]

Compile & Run the experiment with -O3 enabled.

benchbuild.experiments.polly.pollyperformance module
The ‘pollyperformance’ Experiment

This experiment applies polly’s transformations to all projects and measures the runtime.

This forms the baseline numbers for the other experiments.

Measurements
Three metrics are generated during this experiment:

time.user_s - The time spent in user space in seconds (aka virtual time)
time.system_s - The time spent in kernel space in seconds (aka system time)
time.real_s - The time spent overall in seconds (aka wall clock)
class benchbuild.experiments.polly.pollyperformance.PollyPerformance(projects=None, group=None)[source]

Bases: benchbuild.experiment.RuntimeExperiment

The polly performance experiment.

NAME = 'pollyperformance'
actions_for_project(project)[source]
exception benchbuild.experiments.polly.pollyperformance.ShouldNotBeNone[source]

Bases: RuntimeWarning

User warning, if config var is null.

benchbuild.experiments.polly.vectorize module
The ‘polly-vectorize’ Experiment

This experiment applies polly’s transformations with stripmine vectorizer enabled to all projects and measures the runtime.

This forms the baseline numbers for the other experiments.

Measurements
Three metrics are generated during this experiment:

time.user_s - The time spent in user space in seconds (aka virtual time)
time.system_s - The time spent in kernel space in seconds (aka system time)
time.real_s - The time spent overall in seconds (aka wall clock)
class benchbuild.experiments.polly.vectorize.PollyVectorizer(projects=None, group=None)[source]

Bases: benchbuild.experiment.RuntimeExperiment

The polly experiment with vectorization enabled.

NAME = 'polly-vectorize'
actions_for_project(project)[source]

Compile & Run the experiment with -O3 enabled.

Submodules
benchbuild.experiments.compilestats module

The ‘compilestats’ experiment.

This experiment is a basic experiment in the benchbuild study. It simply runs all projects after compiling them with -O3 and collects all statistics emitted by LLVM.

class benchbuild.experiments.compilestats.CompilestatsExperiment(projects=None, group=None)[source]

Bases: benchbuild.experiment.RuntimeExperiment

The compilestats experiment.

NAME = 'cs'
actions_for_project(project)[source]
class benchbuild.experiments.compilestats.PollyCompilestatsExperiment(projects=None, group=None)[source]

Bases: benchbuild.experiment.RuntimeExperiment

The compilestats experiment with polly enabled.

NAME = 'p-cs'
actions_for_project(project)[source]
benchbuild.experiments.empty module

The ‘empty’ Experiment.

This experiment is for debugging purposes. It only prepares the basic directories for benchbuild; no compilation and no runs are performed.

class benchbuild.experiments.empty.Empty(projects=None, group=None)[source]

Bases: benchbuild.experiment.Experiment

The empty experiment.

NAME = 'empty'
actions_for_project(project)[source]

Do nothing.
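The actions_for_project hook is the single point a custom experiment has to implement. A simplified stand-in (the Action and experiment classes here are illustrative, not benchbuild's real API) shows the shape of an 'empty'-style experiment:

```python
class Action:
    """Illustrative stand-in for benchbuild.utils.actions.Step."""

    def __init__(self, name):
        self.name = name


class EmptyExperiment:
    """Mirrors the 'empty' experiment: bookkeeping steps only, no compile/run."""

    NAME = "empty"

    def actions_for_project(self, project):
        # Prepare the basic directories, nothing else.
        return [Action("MKDIR"), Action("CLEAN")]


actions = EmptyExperiment().actions_for_project(project="bzip2")
```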

class benchbuild.experiments.empty.NoMeasurement(projects=None, group=None)[source]

Bases: benchbuild.experiment.Experiment

Run everything but do not measure anything.

NAME = 'no-measurement'
actions_for_project(project)[source]

Execute all actions but don’t do anything as extension.

benchbuild.experiments.papi module

PAPI based experiments.

These types of experiments (papi & papi-std) need to instrument the project with libbenchbuild support to work.

class benchbuild.experiments.papi.Analyze(project_or_experiment, action_fn=None)[source]

Bases: benchbuild.utils.actions.Step

DESCRIPTION = 'Analyze the experiment after completion.'
NAME = 'ANALYZE'
class benchbuild.experiments.papi.Calibrate(project_or_experiment, action_fn=None)[source]

Bases: benchbuild.utils.actions.Step

DESCRIPTION = 'Calibrate libpapi measurement functions.'
NAME = 'CALIBRATE'
class benchbuild.experiments.papi.PapiScopCoverage(projects=None, group=None)[source]

Bases: benchbuild.experiment.RuntimeExperiment

PAPI-based dynamic SCoP coverage measurement.

NAME = 'papi'
actions()[source]

Do the postprocessing, after all projects are done.

actions_for_project(project)[source]

Create & Run a papi-instrumented version of the project.

This experiment uses the -jitable flag of libPolyJIT to generate dynamic SCoP coverage.

class benchbuild.experiments.papi.PapiStandardScopCoverage(projects=None, group=None)[source]

Bases: benchbuild.experiments.papi.PapiScopCoverage

PAPI Scop Coverage, without JIT.

NAME = 'papi-std'
actions_for_project(project)[source]

Create & Run a papi-instrumented version of the project.

This experiment uses the -jitable flag of libPolyJIT to generate dynamic SCoP coverage.

benchbuild.experiments.pj_sequence module

The ‘sequence analysis’ experiment suite.

Each experiment generates sequences of compiler flags using an algorithm that searches for a best sequence in its own way. To calculate the value of a sequence (called its fitness), regions and SCoPs are compared with each other; the metric used for this comparison depends on the experiment the fitness is calculated for.

The fittest generated sequences and the compile statistics of the whole run are then written into a database for further analysis.
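As an illustration of the suite's idea, here is a toy greedy search over compiler flags. The flag pool and the fitness function are invented for the example; the real experiments score a sequence by comparing region and SCoP compile statistics:

```python
def greedy_sequence(flags, fitness, length):
    """Grow a flag sequence one element at a time, always picking the
    flag that maximizes the fitness of the extended sequence."""
    sequence = []
    for _ in range(length):
        best_flag, best_fit = None, float("-inf")
        for flag in flags:
            candidate = fitness(sequence + [flag])
            if candidate > best_fit:
                best_flag, best_fit = flag, candidate
        sequence.append(best_flag)
    return sequence


def toy_fitness(seq):
    # Invented scoring: reward '-polly' anywhere, '-O3' at the end.
    score = 0
    if "-polly" in seq:
        score += 2
    if seq and seq[-1] == "-O3":
        score += 1
    return score


best = greedy_sequence(["-O3", "-polly", "-unroll"], toy_fitness, 2)
```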

class benchbuild.experiments.pj_sequence.FindFittestSequenceGenetic1(project, experiment, *extensions, config=None)[source]

Bases: benchbuild.extensions.RuntimeExtension

class benchbuild.experiments.pj_sequence.FindFittestSequenceGenetic2(project, experiment, *extensions, config=None)[source]

Bases: benchbuild.extensions.RuntimeExtension

class benchbuild.experiments.pj_sequence.FindFittestSequenceGreedy(project, experiment, *extensions, config=None)[source]

Bases: benchbuild.extensions.RuntimeExtension

class benchbuild.experiments.pj_sequence.FindFittestSequenceHillclimber(project, experiment, *extensions, config=None)[source]

Bases: benchbuild.extensions.RuntimeExtension

class benchbuild.experiments.pj_sequence.Genetic1Sequence(projects=None, group=None)[source]

Bases: benchbuild.experiments.polyjit.PolyJIT

This experiment is part of the sequence generating suite.

The sequences for Polly are generated using the first of two genetic algorithms. Only the compile statistics are written into a database for further analysis.

NAME = 'pj-seq-genetic1-opt'
actions_for_project(project)[source]

Execute the actions for the test.

class benchbuild.experiments.pj_sequence.Genetic2Sequence(projects=None, group=None)[source]

Bases: benchbuild.experiments.polyjit.PolyJIT

An experiment that executes all projects with PolyJIT support.

It is part of the sequence generating experiment suite.

The sequences for Polly are generated using a second genetic algorithm. The compile statistics are written into a database for further analysis.

NAME = 'pj-seq-genetic2-opt'
actions_for_project(project)[source]

Execute the actions for the test.

class benchbuild.experiments.pj_sequence.GreedySequences(projects=None, group=None)[source]

Bases: benchbuild.experiments.polyjit.PolyJIT

This experiment is part of the sequence generating experiment suite.

Instead of executing the actual actions, only the compile statistics for them are written into the database. The sequences are generated with the greedy algorithm. This is intended to become the default experiment for sequence analysis.

NAME = 'pj-seq-greedy'
actions_for_project(project)[source]

Execute the actions for the test.

class benchbuild.experiments.pj_sequence.HillclimberSequences(projects=None, group=None)[source]

Bases: benchbuild.experiments.polyjit.PolyJIT

This experiment is part of the sequence generating suite.

The sequences for Polly are generated using a hillclimber algorithm. The output is discarded; the compile statistics are written into a database to be analyzed later on.

NAME = 'pj-seq-hillclimber'
actions_for_project(project)[source]

Execute the actions for the test.

class benchbuild.experiments.pj_sequence.RunSequence(project, experiment, *extensions, config=None)[source]

Bases: benchbuild.extensions.ExtractCompileStats

Compile and execute a given sequence to calculate its fitness value with a given function and metric.

class benchbuild.experiments.pj_sequence.SequenceReport(exp_name, exp_ids, out_path)[source]

Bases: benchbuild.reports.Report

Handles the view of the sequences in the database.

QUERY_TOTAL = <sqlalchemy.sql.selectable.Select object>
SUPPORTED_EXPERIMENTS = ['pj-seq-hillclimber', 'pj-seq-genetic1-opt', 'pj-seq-genetic2-opt', 'pj-seq-greedy']
generate()[source]

Generate the report output from what is stored in the database.

report()[source]
benchbuild.experiments.pj_sequence.create_ir()[source]

Read out the IR to compare it before and after adding a flag from the pass to the sequence, or to work with its results.

benchbuild.experiments.pj_sequence.filter_compiler_commandline(cmd, predicate=<function <lambda>>)[source]

Filter unnecessary arguments for the compiler.

benchbuild.experiments.pj_sequence.filter_invalid_flags(item)[source]

Filter out all flags not needed for getting the compilestats.

benchbuild.experiments.pj_sequence.get_args(cmd)[source]

Returns the arguments of a command. Asserts if the given command is not part of the experiment.

Args:
cmd: The clang command whose arguments are returned.
benchbuild.experiments.pj_sequence.get_defaults()[source]

Return the defaults for the experiment.

benchbuild.experiments.pj_sequence.get_genetic_defaults()[source]

Returns the needed defaults for the genetic algorithms.

Connect the intermediate representation of llvm with the files that are to be compiled.

benchbuild.experiments.pj_sequence.persist_sequence(run, sequence, fitness_val)[source]

Persist the sequence and its fitness value in the database.

Args:
run: The current run we are attached to, with all its information.
sequence: The fittest sequence generated by an algorithm.
fitness_val: The fitness value of that sequence.
benchbuild.experiments.pj_sequence.set_args(cmd, new_args)[source]

Sets the arguments of a command. Also asserts if the command is empty.

Args:
cmd: The clang command that gets its arguments set.
new_args: The new additional arguments of the command.
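Both helpers can be mirrored on a plain list-based command representation. This is only a sketch of their contract; the real functions operate on plumbum command objects:

```python
def get_args(cmd):
    """Return the arguments of a clang command; assert it is one."""
    assert cmd and cmd[0].endswith("clang"), "not a clang command"
    return cmd[1:]


def set_args(cmd, new_args):
    """Return the command with its arguments replaced; assert it is non-empty."""
    assert cmd, "empty command"
    return [cmd[0]] + list(new_args)


cmd = ["/usr/bin/clang", "-O3", "main.c"]
```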
benchbuild.experiments.pj_sequence.unique_compiler_cmds(run_f)[source]

Verifies that compiler commands are not executed twice.

benchbuild.experiments.pjtest module

A test experiment for PolyJIT.

This experiment should only be used to test various features of PolyJIT. It provides only 1 configuration (maximum number of cores) and tests 2 run-time execution profiles of PolyJIT:

  1. PolyJIT enabled, with specialization
  2. PolyJIT enabled, without specialization
class benchbuild.experiments.pjtest.EnableDBExport(*extensions, config=None, **kwargs)[source]

Bases: benchbuild.experiments.polyjit.PolyJITConfig, benchbuild.extensions.Extension

Call the child extensions with an activated PolyJIT.

class benchbuild.experiments.pjtest.JitExportGeneratedCode(projects=None, group=None)[source]

Bases: benchbuild.experiments.polyjit.PolyJIT

An experiment that executes all projects with PolyJIT support.

This is our default experiment for speedup measurements.

NAME = 'pj-db-export'
actions_for_project(project)[source]
class benchbuild.experiments.pjtest.Test(projects=None, group=None)[source]

Bases: benchbuild.experiments.polyjit.PolyJIT

An experiment that executes all projects with PolyJIT support.

This is our default experiment for speedup measurements.

NAME = 'pj-test'
actions_for_project(project)[source]
class benchbuild.experiments.pjtest.TestReport(exp_name, exp_ids, out_path)[source]

Bases: benchbuild.reports.Report

Writes report to the database.

QUERY_REGION = <sqlalchemy.sql.selectable.Select object>
QUERY_TOTAL = <sqlalchemy.sql.selectable.Select object>
SUPPORTED_EXPERIMENTS = ['pj-test']
generate()[source]
report()[source]
sa = <module 'sqlalchemy'>
benchbuild.experiments.pollytest module

The ‘pollytest’ experiment.

This experiment uses four different configurations to analyse how the compilestats and the timings behave depending on the position of polly and unprofitable processes.

class benchbuild.experiments.pollytest.PollyTest(projects=None, group=None)[source]

Bases: benchbuild.experiment.Experiment

An experiment that executes projects with different configurations.

The time and the compilestats are collected.

NAME = 'pollytest'
actions_for_project(project)[source]
class benchbuild.experiments.pollytest.PollyTestReport(exp_name, exp_ids, out_path)[source]

Bases: benchbuild.reports.Report

QUERY_EVAL = <sqlalchemy.sql.selectable.Select object>
SUPPORTED_EXPERIMENTS = ['pollytest']
generate()[source]
report()[source]
benchbuild.experiments.polyjit module

The ‘polyjit’ experiment.

This experiment uses likwid to measure the performance of all binaries when running with polyjit support enabled.

class benchbuild.experiments.polyjit.ClearPolyJITConfig(*extensions, config=None, **kwargs)[source]

Bases: benchbuild.experiments.polyjit.PolyJITConfig, benchbuild.extensions.Extension

class benchbuild.experiments.polyjit.DisableDelinearization(*extensions, config=None, **kwargs)[source]

Bases: benchbuild.experiments.polyjit.PolyJITConfig, benchbuild.extensions.Extension

Deactivate delinearization for the following extensions.

class benchbuild.experiments.polyjit.DisablePolyJIT(*extensions, config=None, **kwargs)[source]

Bases: benchbuild.experiments.polyjit.PolyJITConfig, benchbuild.extensions.Extension

Deactivate the JIT for the following extensions.

class benchbuild.experiments.polyjit.EnableJITDatabase(*args, project=None, experiment=None, **kwargs)[source]

Bases: benchbuild.experiments.polyjit.PolyJITConfig, benchbuild.extensions.Extension

The run and the given extensions store polli’s statistics in the database.

class benchbuild.experiments.polyjit.EnablePolyJIT(*extensions, config=None, **kwargs)[source]

Bases: benchbuild.experiments.polyjit.PolyJITConfig, benchbuild.extensions.Extension

Call the child extensions with an activated PolyJIT.

class benchbuild.experiments.polyjit.EnablePolyJIT_Opt(*extensions, config=None, **kwargs)[source]

Bases: benchbuild.experiments.polyjit.PolyJITConfig, benchbuild.extensions.Extension

Call the child extensions with an activated PolyJIT.

class benchbuild.experiments.polyjit.PolyJIT(projects=None, group=None)[source]

Bases: benchbuild.experiment.RuntimeExperiment

The polyjit experiment.

actions_for_project(project)[source]
classmethod init_project(project)[source]

Execute the benchbuild experiment.

We perform this experiment in 2 steps:
  1. with likwid disabled.
  2. with likwid enabled.
Args:
project: The project we initialize.
Returns:
The initialized project.
class benchbuild.experiments.polyjit.PolyJITConfig[source]

Bases: object

Object that stores the configuration of the JIT.

argv

Getter for the configuration held by the config object.

clear()[source]
value_to_str(key)[source]

Prints the value of a given key.

class benchbuild.experiments.polyjit.PolyJITFull(projects=None, group=None)[source]

Bases: benchbuild.experiments.polyjit.PolyJIT

An experiment that executes all projects with PolyJIT support.

This is our default experiment for speedup measurements.

NAME = 'pj'
actions_for_project(project)[source]
class benchbuild.experiments.polyjit.PolyJITSimple(projects=None, group=None)[source]

Bases: benchbuild.experiments.polyjit.PolyJIT

Simple runtime-testing with PolyJIT.

NAME = 'pj-simple'
actions_for_project(project)[source]
class benchbuild.experiments.polyjit.RegisterPolyJITLogs(*extensions, config=None, **kwargs)[source]

Bases: benchbuild.experiments.polyjit.PolyJITConfig, benchbuild.extensions.LogTrackingMixin, benchbuild.extensions.Extension

Extends the following RunWithTime extensions with extra PolyJIT logs.

benchbuild.experiments.polyjit.verbosity_to_polyjit_log_level(verbosity: int)[source]

Translates the verbosity level into a usable polyjit log level.
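The translation is essentially a clamp from benchbuild's verbosity scale onto the log levels polyjit understands. The exact range below is an assumption for illustration:

```python
def verbosity_to_polyjit_log_level(verbosity: int) -> int:
    """Clamp a verbosity value into an assumed polyjit log-level range 0..3."""
    return max(0, min(int(verbosity), 3))
```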

benchbuild.experiments.raw module

The ‘raw’ Experiment.

This experiment is the basic experiment in the benchbuild study. It simply runs all projects after compiling them with -O3. The binaries are wrapped with the time command and the results are written to the database.

This forms the baseline numbers for the other experiments.

Measurements
Three metrics are generated during this experiment:

time.user_s - The time spent in user space in seconds (aka virtual time)
time.system_s - The time spent in kernel space in seconds (aka system time)
time.real_s - The time spent overall in seconds (aka wall clock)
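Since the binaries are wrapped with the time command, the three metrics can be recovered by parsing its output. A sketch under the assumption that the wrapper prints 'user system real' in seconds on one line (benchbuild's actual format may differ):

```python
def parse_time_output(line):
    """Split a 'user system real' line into the three time metric names."""
    user, system, real = (float(tok) for tok in line.split())
    return {
        "time.user_s": user,
        "time.system_s": system,
        "time.real_s": real,
    }


metrics = parse_time_output("1.25 0.10 1.40")
```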
class benchbuild.experiments.raw.RawRuntime(projects=None, group=None)[source]

Bases: benchbuild.experiment.RuntimeExperiment

The raw runtime experiment.

NAME = 'raw'
actions_for_project(project)[source]

Compile & Run the experiment with -O3 enabled.

benchbuild.projects package

Projects module.

By default, only projects that are listed in the configuration are loaded automatically. See configuration variables:

*_PLUGINS_AUTOLOAD
*_PLUGINS_PROJECTS
benchbuild.projects.discover()[source]
Subpackages
benchbuild.projects.apollo package
Submodules
benchbuild.projects.apollo.group module
class benchbuild.projects.apollo.group.ApolloGroup(exp)[source]

Bases: benchbuild.project.Project

GROUP = 'apollo'
path_suffix = 'src'
benchbuild.projects.apollo.rodinia module
class benchbuild.projects.apollo.rodinia.BFS(exp)[source]

Bases: benchbuild.projects.apollo.rodinia.RodiniaGroup

NAME = 'bfs'
config = {'dir': 'openmp/bfs', 'src': {'bfs': ['bfs.cpp']}, 'flags': ['-fopenmp', '-UOPEN']}
select_compiler(_, cxx)[source]
class benchbuild.projects.apollo.rodinia.BPlusTree(exp)[source]

Bases: benchbuild.projects.apollo.rodinia.RodiniaGroup

NAME = 'b+tree'
config = {'dir': 'openmp/b+tree', 'src': {'b+tree.out': ['./main.c', './kernel/kernel_cpu.c', './kernel/kernel_cpu_2.c', './util/timer/timer.c', './util/num/num.c']}, 'flags': ['-fopenmp', '-lm']}
class benchbuild.projects.apollo.rodinia.Backprop(exp)[source]

Bases: benchbuild.projects.apollo.rodinia.RodiniaGroup

NAME = 'backprop'
config = {'dir': 'openmp/backprop', 'src': {'backprop': ['backprop_kernel.c', 'imagenet.c', 'facetrain.c', 'backprop.c']}, 'flags': ['-fopenmp', '-lm']}
class benchbuild.projects.apollo.rodinia.CFD(exp)[source]

Bases: benchbuild.projects.apollo.rodinia.RodiniaGroup

NAME = 'cfd'
config = {'dir': 'openmp/cfd', 'src': {'euler3d_cpu': ['euler3d_cpu.cpp']}}
select_compiler(_, cxx)[source]
class benchbuild.projects.apollo.rodinia.HeartWall(exp)[source]

Bases: benchbuild.projects.apollo.rodinia.RodiniaGroup

NAME = 'heartwall'
config = {'dir': 'openmp/heartwall', 'src': {'heartwall': ['./AVI/avimod.c', './AVI/avilib.c', './main.c']}, 'flags': ['-I./AVI', '-fopenmp', '-lm']}
class benchbuild.projects.apollo.rodinia.Hotspot(exp)[source]

Bases: benchbuild.projects.apollo.rodinia.RodiniaGroup

NAME = 'hotspot'
config = {'dir': 'openmp/hotspot', 'src': {'hotspot': ['hotspot_openmp.cpp']}, 'flags': ['-fopenmp']}
select_compiler(_, cxx)[source]
class benchbuild.projects.apollo.rodinia.Hotspot3D(exp)[source]

Bases: benchbuild.projects.apollo.rodinia.RodiniaGroup

NAME = 'hotspot3D'
config = {'dir': 'openmp/hotspot3D', 'src': {'3D': ['./3D.c']}, 'flags': ['-fopenmp', '-lm']}
class benchbuild.projects.apollo.rodinia.KMeans(exp)[source]

Bases: benchbuild.projects.apollo.rodinia.RodiniaGroup

NAME = 'kmeans'
config = {'dir': 'openmp/kmeans', 'src': {'./kmeans_openmp/kmeans': ['./kmeans_openmp/kmeans_clustering.c', './kmeans_openmp/kmeans.c', './kmeans_openmp/getopt.c', './kmeans_openmp/cluster.c'], './kmeans_serial/kmeans': ['./kmeans_serial/kmeans_clustering.c', './kmeans_serial/kmeans.c', './kmeans_serial/getopt.c', './kmeans_serial/cluster.c']}, 'flags': ['-lm', '-fopenmp']}
class benchbuild.projects.apollo.rodinia.LUD(exp)[source]

Bases: benchbuild.projects.apollo.rodinia.RodiniaGroup

NAME = 'lud'
config = {'dir': 'openmp/lud', 'src': {'./omp/lud_omp': ['./common/common.c', './omp/lud_omp.c', './omp/lud.c']}, 'flags': ['-I./common', '-lm', '-fopenmp']}
class benchbuild.projects.apollo.rodinia.LavaMD(exp)[source]

Bases: benchbuild.projects.apollo.rodinia.RodiniaGroup

NAME = 'lavaMD'
config = {'dir': 'openmp/lavaMD', 'src': {'lavaMD': ['./main.c', './util/timer/timer.c', './util/num/num.c', './kernel/kernel_cpu.c']}, 'flags': ['-lm', '-fopenmp']}
class benchbuild.projects.apollo.rodinia.Leukocyte(exp)[source]

Bases: benchbuild.projects.apollo.rodinia.RodiniaGroup

NAME = 'leukocyte'
config = {'dir': 'openmp/leukocyte', 'src': {'leukocyte': ['./meschach_lib/memstat.c', './meschach_lib/meminfo.c', './meschach_lib/version.c', './meschach_lib/ivecop.c', './meschach_lib/matlab.c', './meschach_lib/machine.c', './meschach_lib/otherio.c', './meschach_lib/init.c', './meschach_lib/submat.c', './meschach_lib/pxop.c', './meschach_lib/matop.c', './meschach_lib/vecop.c', './meschach_lib/memory.c', './meschach_lib/matrixio.c', './meschach_lib/err.c', './meschach_lib/copy.c', './meschach_lib/bdfactor.c', './meschach_lib/mfunc.c', './meschach_lib/fft.c', './meschach_lib/svd.c', './meschach_lib/schur.c', './meschach_lib/symmeig.c', './meschach_lib/hessen.c', './meschach_lib/norm.c', './meschach_lib/update.c', './meschach_lib/givens.c', './meschach_lib/hsehldr.c', './meschach_lib/solve.c', './meschach_lib/qrfactor.c', './meschach_lib/chfactor.c', './meschach_lib/bkpfacto.c', './meschach_lib/lufactor.c', './meschach_lib/iternsym.c', './meschach_lib/itersym.c', './meschach_lib/iter0.c', './meschach_lib/spswap.c', './meschach_lib/spbkp.c', './meschach_lib/splufctr.c', './meschach_lib/spchfctr.c', './meschach_lib/sparseio.c', './meschach_lib/sprow.c', './meschach_lib/sparse.c', './meschach_lib/zfunc.c', './meschach_lib/znorm.c', './meschach_lib/zmatop.c', './meschach_lib/zvecop.c', './meschach_lib/zmemory.c', './meschach_lib/zmatio.c', './meschach_lib/zcopy.c', './meschach_lib/zmachine.c', './meschach_lib/zschur.c', './meschach_lib/zhessen.c', './meschach_lib/zgivens.c', './meschach_lib/zqrfctr.c', './meschach_lib/zhsehldr.c', './meschach_lib/zmatlab.c', './meschach_lib/zsolve.c', './meschach_lib/zlufctr.c', './OpenMP/detect_main.c', './OpenMP/misc_math.c', './OpenMP/track_ellipse.c', './OpenMP/find_ellipse.c', './OpenMP/avilib.c']}, 'flags': ['-DSPARSE', '-DCOMPLEX', '-DREAL_FLT', '-DREAL_DBL', '-I./meschach_lib', '-lm', '-lpthread', '-fopenmp']}
class benchbuild.projects.apollo.rodinia.NN(exp)[source]

Bases: benchbuild.projects.apollo.rodinia.RodiniaGroup

NAME = 'nn'
config = {'dir': 'openmp/nn', 'src': {'nn': ['./nn_openmp.c']}, 'flags': ['-lm', '-fopenmp']}
class benchbuild.projects.apollo.rodinia.NW(exp)[source]

Bases: benchbuild.projects.apollo.rodinia.RodiniaGroup

NAME = 'nw'
config = {'dir': 'openmp/nw', 'src': {'needle': ['./needle.cpp']}, 'flags': ['-lm', '-fopenmp']}
select_compiler(_, cxx)[source]
class benchbuild.projects.apollo.rodinia.ParticleFilter(exp)[source]

Bases: benchbuild.projects.apollo.rodinia.RodiniaGroup

NAME = 'particlefilter'
config = {'dir': 'openmp/particlefilter', 'src': {'particle_filter': ['./ex_particle_OPENMP_seq.c']}, 'flags': ['-lm', '-fopenmp']}
class benchbuild.projects.apollo.rodinia.PathFinder(exp)[source]

Bases: benchbuild.projects.apollo.rodinia.RodiniaGroup

NAME = 'pathfinder'
config = {'dir': 'openmp/pathfinder', 'src': {'pathfinder': ['./pathfinder.cpp']}, 'flags': ['-fopenmp']}
select_compiler(_, cxx)[source]
class benchbuild.projects.apollo.rodinia.RodiniaGroup(exp)[source]

Bases: benchbuild.project.Project

DOMAIN = 'rodinia'
GROUP = 'rodinia'
SRC_FILE = 'rodinia_3.1.tar.bz2'
VERSION = '3.1'
build()[source]
config = {}
configure()[source]
download()[source]
run_tests(experiment, runner)[source]
select_compiler(cc, cxx)[source]
src_dir = 'rodinia_3.1'
src_uri = 'http://www.cs.virginia.edu/~kw5na/lava/Rodinia/Packages/Current/rodinia_3.1.tar.bz2'
class benchbuild.projects.apollo.rodinia.SRAD1(exp)[source]

Bases: benchbuild.projects.apollo.rodinia.RodiniaGroup

NAME = 'srad-1'
config = {'dir': 'openmp/srad/srad_v1', 'src': {'srad': ['./main.c']}, 'flags': ['-I.', '-lm', '-fopenmp']}
class benchbuild.projects.apollo.rodinia.SRAD2(exp)[source]

Bases: benchbuild.projects.apollo.rodinia.RodiniaGroup

NAME = 'srad-2'
config = {'dir': 'openmp/srad/srad_v2', 'src': {'srad': ['./srad.cpp']}, 'flags': ['-lm', '-fopenmp']}
select_compiler(_, cxx)[source]
class benchbuild.projects.apollo.rodinia.StreamCluster(exp)[source]

Bases: benchbuild.projects.apollo.rodinia.RodiniaGroup

NAME = 'streamcluster'
config = {'dir': 'openmp/streamcluster', 'src': {'./sc_omp': ['./streamcluster_omp.cpp']}, 'flags': ['-lpthread', '-fopenmp']}
select_compiler(_, cxx)[source]
benchbuild.projects.apollo.scimark module
class benchbuild.projects.apollo.scimark.SciMark(exp)[source]

Bases: benchbuild.projects.apollo.group.ApolloGroup

DOMAIN = 'scientific'
NAME = 'scimark'
SRC_FILE = 'scimark2_1c.zip'
VERSION = '2.1c'
build()[source]
configure()[source]
download()[source]
prepare()[source]
run_tests(experiment, run)[source]
src_uri = 'http://math.nist.gov/scimark2/scimark2_1c.zip'
benchbuild.projects.benchbuild package
Submodules
benchbuild.projects.benchbuild.bzip2 module
class benchbuild.projects.benchbuild.bzip2.Bzip2(exp)[source]

Bases: benchbuild.projects.benchbuild.group.BenchBuildGroup

DOMAIN = 'compression'
NAME = 'bzip2'
SRC_FILE = 'bzip2-1.0.6.tar.gz'
VERSION = '1.0.6'
build()[source]
configure()[source]
download()[source]
prepare()[source]
run_tests(experiment, run)[source]
src_dir = 'bzip2-1.0.6'
src_uri = 'http://www.bzip.org/1.0.6/bzip2-1.0.6.tar.gz'
testfiles = ['text.html', 'chicken.jpg', 'control', 'input.source', 'liberty.jpg']
benchbuild.projects.benchbuild.ccrypt module
class benchbuild.projects.benchbuild.ccrypt.Ccrypt(exp)[source]

Bases: benchbuild.projects.benchbuild.group.BenchBuildGroup

ccrypt benchmark

DOMAIN = 'encryption'
NAME = 'ccrypt'
SRC_FILE = 'ccrypt-1.10.tar.gz'
VERSION = '1.10'
build()[source]
configure()[source]
download()[source]
run_tests(experiment, run)[source]
src_dir = 'ccrypt-1.10'
src_uri = 'http://ccrypt.sourceforge.net/download/ccrypt-1.10.tar.gz'
benchbuild.projects.benchbuild.crafty module
class benchbuild.projects.benchbuild.crafty.Crafty(exp)[source]

Bases: benchbuild.projects.benchbuild.group.BenchBuildGroup

crafty benchmark

DOMAIN = 'scientific'
NAME = 'crafty'
SRC_FILE = 'crafty-25.2.zip'
VERSION = '25.2'
build()[source]
configure()[source]
download()[source]
run_tests(experiment, run)[source]
src_dir = 'crafty-25.2'
src_uri = 'http://www.craftychess.com/downloads/source/crafty-25.2.zip'
benchbuild.projects.benchbuild.crocopat module
class benchbuild.projects.benchbuild.crocopat.Crocopat(exp)[source]

Bases: benchbuild.projects.benchbuild.group.BenchBuildGroup

crocopat benchmark

DOMAIN = 'verification'
NAME = 'crocopat'
SRC_FILE = 'crocopat-2.1.4.zip'
VERSION = '2.1.4'
build()[source]
configure()[source]
download()[source]
run_tests(experiment, run)[source]
src_dir = 'crocopat-2.1.4'
src_uri = 'http://crocopat.googlecode.com/files/crocopat-2.1.4.zip'
benchbuild.projects.benchbuild.ffmpeg module
class benchbuild.projects.benchbuild.ffmpeg.LibAV(exp)[source]

Bases: benchbuild.projects.benchbuild.group.BenchBuildGroup

LibAV benchmark

DOMAIN = 'multimedia'
NAME = 'ffmpeg'
SRC_FILE = 'ffmpeg-3.1.3.tar.bz2'
VERSION = '3.1.3'
build()[source]
configure()[source]
download()[source]
fate_dir = 'fate-samples'
fate_uri = 'rsync://fate-suite.libav.org/fate-suite/'
run_tests(experiment, run)[source]
src_dir = 'ffmpeg-3.1.3'
src_uri = 'http://ffmpeg.org/releases/ffmpeg-3.1.3.tar.bz2'
benchbuild.projects.benchbuild.group module
class benchbuild.projects.benchbuild.group.BenchBuildGroup(exp)[source]

Bases: benchbuild.project.Project

GROUP = 'benchbuild'
path_suffix = 'src'
benchbuild.projects.benchbuild.gzip module
class benchbuild.projects.benchbuild.gzip.Gzip(exp)[source]

Bases: benchbuild.projects.benchbuild.group.BenchBuildGroup

DOMAIN = 'compression'
NAME = 'gzip'
SRC_FILE = 'gzip-1.6.tar.xz'
VERSION = '1.6'
build()[source]
configure()[source]
download()[source]
prepare()[source]
run_tests(experiment, run)[source]
src_dir = 'gzip-1.6'
src_uri = 'http://ftpmirror.gnu.org/gzip/gzip-1.6.tar.xz'
testfiles = ['text.html', 'chicken.jpg', 'control', 'input.source', 'liberty.jpg']
benchbuild.projects.benchbuild.js module
class benchbuild.projects.benchbuild.js.SpiderMonkey(exp)[source]

Bases: benchbuild.projects.benchbuild.group.BenchBuildGroup

SpiderMonkey requires a legacy version of autoconf: autoconf-2.13

DOMAIN = 'compilation'
NAME = 'js'
VERSION = ''
build()[source]
configure()[source]
download()[source]
run_tests(experiment, run)[source]
src_dir = 'gecko-dev.git'
src_uri = 'https://github.com/mozilla/gecko-dev.git'
version = ''
benchbuild.projects.benchbuild.lammps module
class benchbuild.projects.benchbuild.lammps.Lammps(exp)[source]

Bases: benchbuild.projects.benchbuild.group.BenchBuildGroup

LAMMPS benchmark

DOMAIN = 'scientific'
NAME = 'lammps'
SRC_FILE = 'lammps.git'
build()[source]
download()[source]
run_tests(experiment, run)[source]
src_dir = 'lammps.git'
src_uri = 'https://github.com/lammps/lammps'
benchbuild.projects.benchbuild.lapack module
class benchbuild.projects.benchbuild.lapack.Lapack(exp)[source]

Bases: benchbuild.projects.benchbuild.group.BenchBuildGroup

DOMAIN = 'scientific'
NAME = 'lapack'
SRC_FILE = 'clapack.tgz'
VERSION = '3.2.1'
build()[source]
configure()[source]
download()[source]
run_tests(experiment, run)[source]
src_dir = 'CLAPACK-3.2.1'
src_uri = 'http://www.netlib.org/clapack/clapack.tgz'
class benchbuild.projects.benchbuild.lapack.OpenBlas(exp)[source]

Bases: benchbuild.projects.benchbuild.group.BenchBuildGroup

DOMAIN = 'scientific'
NAME = 'openblas'
SRC_FILE = 'OpenBLAS'
build()[source]
configure()[source]
download()[source]
run_tests(experiment, run)[source]
src_uri = 'https://github.com/xianyi/OpenBLAS'
benchbuild.projects.benchbuild.leveldb module
class benchbuild.projects.benchbuild.leveldb.LevelDB(exp)[source]

Bases: benchbuild.projects.benchbuild.group.BenchBuildGroup

DOMAIN = 'database'
NAME = 'leveldb'
SRC_FILE = 'leveldb.src'
build()[source]
configure()[source]
download()[source]
run_tests(experiment, run)[source]

Execute LevelDB’s runtime configuration.

Args:
experiment: The experiment’s run function.
src_uri = 'https://github.com/google/leveldb'
benchbuild.projects.benchbuild.linpack module
class benchbuild.projects.benchbuild.linpack.Linpack(exp)[source]

Bases: benchbuild.projects.benchbuild.group.BenchBuildGroup

Linpack (C-Version)

DOMAIN = 'scientific'
NAME = 'linpack'
build()[source]
configure()[source]
download()[source]
src_uri = 'http://www.netlib.org/benchmark/linpackc.new'
benchbuild.projects.benchbuild.lulesh module
class benchbuild.projects.benchbuild.lulesh.Lulesh(exp)[source]

Bases: benchbuild.projects.benchbuild.group.BenchBuildGroup

DOMAIN = 'scientific'
NAME = 'lulesh'
SRC_FILE = 'LULESH.cc'
build()[source]
configure()[source]
download()[source]
run_tests(experiment, run)[source]
src_uri = 'https://codesign.llnl.gov/lulesh/LULESH.cc'
benchbuild.projects.benchbuild.luleshomp module
class benchbuild.projects.benchbuild.luleshomp.LuleshOMP(exp)[source]

Bases: benchbuild.projects.benchbuild.group.BenchBuildGroup

Lulesh-OMP

DOMAIN = 'scientific'
NAME = 'lulesh-omp'
SRC_FILE = 'LULESH_OMP.cc'
build()[source]

Build process for OpenMP enabled LULESH code:

Required: openmp (omp.h) needs to be available.

configure()[source]
download()[source]
run_tests(experiment, run)[source]
src_uri = 'https://codesign.llnl.gov/lulesh/LULESH_OMP.cc'
benchbuild.projects.benchbuild.mcrypt module
class benchbuild.projects.benchbuild.mcrypt.MCrypt(exp)[source]

Bases: benchbuild.projects.benchbuild.group.BenchBuildGroup

MCrypt benchmark

DOMAIN = 'encryption'
NAME = 'mcrypt'
SRC_FILE = 'mcrypt-2.6.8.tar.gz'
VERSION = '2.6.8'
build()[source]
configure()[source]
download()[source]
libmcrypt_dir = 'libmcrypt-2.5.8'
libmcrypt_file = 'libmcrypt-2.5.8.tar.gz'
libmcrypt_uri = 'http://sourceforge.net/projects/mcrypt/files/Libmcrypt/2.5.8/libmcrypt-2.5.8.tar.gz'
mhash_dir = 'mhash-0.9.9.9'
mhash_file = 'mhash-0.9.9.9.tar.gz'
mhash_uri = 'http://sourceforge.net/projects/mhash/files/mhash/0.9.9.9/mhash-0.9.9.9.tar.gz'
run_tests(experiment, run)[source]
src_dir = 'mcrypt-2.6.8'
src_uri = 'http://sourceforge.net/projects/mcrypt/files/MCrypt/2.6.8/mcrypt-2.6.8.tar.gz'
benchbuild.projects.benchbuild.minisat module
class benchbuild.projects.benchbuild.minisat.Minisat(exp)[source]

Bases: benchbuild.projects.benchbuild.group.BenchBuildGroup

minisat benchmark

DOMAIN = 'verification'
NAME = 'minisat'
SRC_FILE = 'minisat.git'
build()[source]
configure()[source]
download()[source]
run_tests(experiment, runner)[source]
src_uri = 'https://github.com/niklasso/minisat'
benchbuild.projects.benchbuild.openssl module
class benchbuild.projects.benchbuild.openssl.LibreSSL(exp)[source]

Bases: benchbuild.projects.benchbuild.group.BenchBuildGroup

OpenSSL

BINARIES = ['aeadtest', 'aes_wrap', 'asn1test', 'base64test', 'bftest', 'bntest', 'bytestringtest', 'casttest', 'chachatest', 'cipherstest', 'cts128test', 'destest', 'dhtest', 'dsatest', 'ecdhtest', 'ecdsatest', 'ectest', 'enginetest', 'evptest', 'exptest', 'gcm128test', 'gost2814789t', 'hmactest', 'ideatest', 'igetest', 'md4test', 'md5test', 'mdc2test', 'mont', 'pbkdf2', 'pkcs7test', 'poly1305test', 'pq_test', 'randtest', 'rc2test', 'rc4test', 'rmdtest', 'sha1test', 'sha256test', 'sha512test', 'shatest', 'ssltest', 'timingsafe', 'utf8test']
DOMAIN = 'encryption'
NAME = 'libressl'
SRC_FILE = 'libressl-2.1.6.tar.gz'
VERSION = '2.1.6'
build()[source]
configure()[source]
download()[source]
run_tests(experiment, run)[source]
src_dir = 'libressl-2.1.6'
src_uri = 'http://ftp.openbsd.org/pub/OpenBSD/LibreSSL/libressl-2.1.6.tar.gz'
benchbuild.projects.benchbuild.postgres module
benchbuild.projects.benchbuild.povray module
class benchbuild.projects.benchbuild.povray.Povray(exp)[source]

Bases: benchbuild.projects.benchbuild.group.BenchBuildGroup

povray benchmark

DOMAIN = 'multimedia'
NAME = 'povray'
SRC_FILE = 'povray.git'
boost_src_dir = 'boost_1_59_0'
boost_src_file = 'boost_1_59_0.tar.bz2'
boost_src_uri = 'http://sourceforge.net/projects/boost/files/boost/1.59.0/boost_1_59_0.tar.bz2'
build()[source]
configure()[source]
download()[source]
prepare()[source]
run_tests(experiment, run)[source]
src_uri = 'https://github.com/POV-Ray/povray'
benchbuild.projects.benchbuild.python module
class benchbuild.projects.benchbuild.python.Python(exp)[source]

Bases: benchbuild.projects.benchbuild.group.BenchBuildGroup

python benchmarks

DOMAIN = 'compilation'
NAME = 'python'
SRC_FILE = 'Python-3.4.3.tar.xz'
VERSION = '3.4.3'
build()[source]
configure()[source]
download()[source]
run_tests(experiment, run)[source]
src_dir = 'Python-3.4.3'
src_uri = 'https://www.python.org/ftp/python/3.4.3/Python-3.4.3.tar.xz'
benchbuild.projects.benchbuild.rasdaman module
class benchbuild.projects.benchbuild.rasdaman.Rasdaman(exp)[source]

Bases: benchbuild.projects.benchbuild.group.BenchBuildGroup

DOMAIN = 'database'
NAME = 'Rasdaman'
SRC_FILE = 'rasdaman.git'
build()[source]
configure()[source]
download()[source]
gdal_dir = 'gdal'
gdal_uri = 'https://github.com/OSGeo/gdal'
run_tests(experiment, run)[source]
src_uri = 'git://rasdaman.org/rasdaman.git'
benchbuild.projects.benchbuild.ruby module
class benchbuild.projects.benchbuild.ruby.Ruby(exp)[source]

Bases: benchbuild.projects.benchbuild.group.BenchBuildGroup

DOMAIN = 'compilation'
NAME = 'ruby'
SRC_FILE = 'ruby-2.2.2.tar.gz'
VERSION = '2.2.2'
build()[source]
configure()[source]
download()[source]
run_tests(experiment, run)[source]
src_dir = 'ruby-2.2.2'
src_uri = 'http://cache.ruby-lang.org/pub/ruby/2.2.2/ruby-2.2.2.tar.gz'
benchbuild.projects.benchbuild.sdcc module
class benchbuild.projects.benchbuild.sdcc.SDCC(exp)[source]

Bases: benchbuild.projects.benchbuild.group.BenchBuildGroup

DOMAIN = 'compilation'
NAME = 'sdcc'
SRC_FILE = 'sdcc'
build()[source]
configure()[source]
download()[source]
run_tests(experiment, run)[source]
src_uri = 'svn://svn.code.sf.net/p/sdcc/code/trunk/sdcc'
benchbuild.projects.benchbuild.sevenz module
class benchbuild.projects.benchbuild.sevenz.SevenZip(exp)[source]

Bases: benchbuild.projects.benchbuild.group.BenchBuildGroup

7Zip

DOMAIN = 'compression'
NAME = '7z'
SRC_FILE = 'p7zip_16.02_src_all.tar.bz2'
VERSION = '16.02'
build()[source]
configure()[source]
download()[source]
run_tests(experiment, run)[source]
src_dir = 'p7zip_16.02'
src_uri = 'http://downloads.sourceforge.net/project/p7zip/p7zip/16.02/p7zip_16.02_src_all.tar.bz2'
benchbuild.projects.benchbuild.sqlite3 module
class benchbuild.projects.benchbuild.sqlite3.SQLite3(exp)[source]

Bases: benchbuild.projects.benchbuild.group.BenchBuildGroup

DOMAIN = 'database'
NAME = 'sqlite3'
SRC_FILE = 'sqlite-amalgamation-3080900.zip'
build()[source]
build_leveldb()[source]
configure()[source]
download()[source]
fetch_leveldb()[source]
run_tests(experiment, run)[source]
src_dir = 'sqlite-amalgamation-3080900'
src_uri = 'http://www.sqlite.org/2015/sqlite-amalgamation-3080900.zip'
benchbuild.projects.benchbuild.tcc module
class benchbuild.projects.benchbuild.tcc.TCC(exp)[source]

Bases: benchbuild.projects.benchbuild.group.BenchBuildGroup

DOMAIN = 'compilation'
NAME = 'tcc'
SRC_FILE = 'tcc-0.9.26.tar.bz2'
VERSION = '0.9.26'
build()[source]
configure()[source]
download()[source]
run_tests(experiment, run)[source]
src_dir = 'tcc-0.9.26'
src_uri = 'http://download-mirror.savannah.gnu.org/releases/tinycc/tcc-0.9.26.tar.bz2'
benchbuild.projects.benchbuild.x264 module
class benchbuild.projects.benchbuild.x264.X264(exp)[source]

Bases: benchbuild.projects.benchbuild.group.BenchBuildGroup

x264

DOMAIN = 'multimedia'
NAME = 'x264'
SRC_FILE = 'x264.git'
build()[source]
configure()[source]
download()[source]
inputfiles = {'Sintel.2010.720p.raw': ['--input-res', '1280x720'], 'tbbt-small.y4m': []}
prepare()[source]
run_tests(experiment, run)[source]
src_uri = 'git://git.videolan.org/x264.git'
benchbuild.projects.benchbuild.xz module
class benchbuild.projects.benchbuild.xz.XZ(exp)[source]

Bases: benchbuild.projects.benchbuild.group.BenchBuildGroup

DOMAIN = 'compression'
NAME = 'xz'
SRC_FILE = 'xz-5.2.1.tar.gz'
VERSION = '5.2.1'
build()[source]
configure()[source]
download()[source]
prepare()[source]
run_tests(experiment, run)[source]
src_dir = 'xz-5.2.1'
src_uri = 'http://tukaani.org/xz/xz-5.2.1.tar.gz'
testfiles = ['text.html', 'chicken.jpg', 'control', 'input.source', 'liberty.jpg']
benchbuild.projects.gentoo package

Import all gentoo based modules.

All manually entered modules can be placed in the following import section. Portage_Gen based projects will be generated automatically as soon as we can find an index generated by portage info.

Submodules
benchbuild.projects.gentoo.autoportage module
class benchbuild.projects.gentoo.autoportage.AutoPortage(exp)[source]

Bases: benchbuild.projects.gentoo.gentoo.GentooGroup

Generic portage experiment.

build()[source]
run_tests(*args, **kwargs)[source]
benchbuild.projects.gentoo.bzip2 module

bzip2 experiment within gentoo chroot.

class benchbuild.projects.gentoo.bzip2.BZip2(exp)[source]

Bases: benchbuild.projects.gentoo.gentoo.GentooGroup

app-arch/bzip2

DOMAIN = 'app-arch'
NAME = 'gentoo-bzip2'
VERSION = '1.0.6'
build()[source]
prepare()[source]
run_tests(experiment, run)[source]
test_archive = 'compression.tar.gz'
test_url = 'http://lairosiel.de/dist/'
testfiles = ['text.html', 'chicken.jpg', 'control', 'input.source', 'liberty.jpg']
benchbuild.projects.gentoo.crafty module

crafty experiment within gentoo chroot.

class benchbuild.projects.gentoo.crafty.Crafty(exp)[source]

Bases: benchbuild.projects.gentoo.gentoo.GentooGroup

games-board/crafty

DOMAIN = 'games-board'
NAME = 'gentoo-crafty'
build()[source]
download()[source]
run_tests(experiment, run)[source]
benchbuild.projects.gentoo.eix module

eix experiment within gentoo chroot

class benchbuild.projects.gentoo.eix.Eix(exp)[source]

Bases: benchbuild.projects.gentoo.gentoo.GentooGroup

Represents the package eix from the portage tree.

DOMAIN = 'app-portage'
NAME = 'eix'
build()[source]

Compiles and installs eix within the gentoo chroot.

run_tests(experiment, run)[source]

Runs runtime tests for eix

benchbuild.projects.gentoo.gentoo module

The Gentoo module for running tests on builds from the portage tree.

This will install a stage3 image of gentoo together with a recent snapshot of the portage tree. For building / executing arbitrary projects successfully it is necessary to keep the installed image as close to the host system as possible. In order to speed up your experience, you can replace the stage3 image that we pull from the distfiles mirror with a new image that contains all necessary dependencies for your experiments. Make sure you update the hash alongside the gentoo image in benchbuild’s source directory.

class benchbuild.projects.gentoo.gentoo.GentooGroup(exp)[source]

Bases: benchbuild.project.Project

Gentoo ProjectGroup is the base class for every portage build.

CONTAINER = <benchbuild.utils.container.Gentoo object>
GROUP = 'gentoo'
SRC_FILE = None
build()[source]
configure()[source]
download()[source]
benchbuild.projects.gentoo.gentoo.write_bashrc(path)[source]

Write a valid gentoo bashrc file to :path:.

Args:
path - The output path of the bashrc
benchbuild.projects.gentoo.gentoo.write_layout(path)[source]

Write a valid gentoo layout file to :path:.

Args:
path - The output path of the layout.conf
benchbuild.projects.gentoo.gentoo.write_makeconfig(path)[source]

Write a valid gentoo make.conf file to :path:.

Args:
path - The output path of the make.conf
benchbuild.projects.gentoo.gentoo.write_wgetrc(path)[source]

Write a valid gentoo wgetrc file to :path:.

Args:
path - The output path of the wgetrc
benchbuild.projects.gentoo.gzip module

gzip experiment within gentoo chroot.

class benchbuild.projects.gentoo.gzip.GZip(exp)[source]

Bases: benchbuild.projects.gentoo.gentoo.GentooGroup

app-arch/gzip

DOMAIN = 'app-arch'
NAME = 'gentoo-gzip'
build()[source]
prepare()[source]
run_tests(experiment, run)[source]
test_archive = 'compression.tar.gz'
test_url = 'http://lairosiel.de/dist/'
testfiles = ['text.html', 'chicken.jpg', 'control', 'input.source', 'liberty.jpg']
benchbuild.projects.gentoo.info module

Get package infos, e.g., specific ebuilds for given languages, from gentoo chroot.

class benchbuild.projects.gentoo.info.Info(exp)[source]

Bases: benchbuild.projects.gentoo.autoportage.AutoPortage

Info experiment to retrieve package information from portage.

DOMAIN = 'debug'
NAME = 'gentoo-info'
build()[source]
benchbuild.projects.gentoo.info.get_string_for_language(language_name)[source]

Maps language names to the corresponding string for qgrep.

benchbuild.projects.gentoo.lammps module

LAMMPS (sci-physics/lammps) project within gentoo chroot.

class benchbuild.projects.gentoo.lammps.Lammps(exp)[source]

Bases: benchbuild.projects.gentoo.gentoo.GentooGroup

sci-physics/lammps

DOMAIN = 'sci-physics'
NAME = 'gentoo-lammps'
build()[source]
prepare()[source]
run_tests(experiment, run)[source]
test_archive = 'lammps.tar.gz'
test_url = 'http://lairosiel.de/dist/'
benchbuild.projects.gentoo.portage_gen module

Generic experiment to test portage packages within gentoo chroot.

class benchbuild.projects.gentoo.portage_gen.FuncClass(name, domain, container)[source]

Bases: object

Finds out the current version number of a gentoo package.

The package name is created by combining the domain and the name. Then uchroot is used to switch into a gentoo shell, where the 'emerge' command is used to retrieve the version number. The function then parses the version number back into the file.

Args:
name: Name of the project.
domain: Category of the package.
benchbuild.projects.gentoo.portage_gen.PortageFactory(name, NAME, DOMAIN, BaseClass=<class 'benchbuild.projects.gentoo.autoportage.AutoPortage'>)[source]

Create a new dynamic portage project.

Auto-generated projects can only be used for compile-time experiments, because there simply is no run-time test defined for them. Therefore, we implement the run symbol as a noop (with minor logging).

This way we avoid the default implementation for run() that all projects inherit.

Args:
name: Name of the dynamic class.
NAME: NAME property of the dynamic class.
DOMAIN: DOMAIN property of the dynamic class.
BaseClass: Base class to use for the dynamic class.
Returns:
A new class with NAME and DOMAIN properties set, unable to perform run-time tests.
Examples:
>>> from benchbuild.projects.gentoo.portage_gen import PortageFactory
>>> from benchbuild.experiments.empty import Empty
>>> c = PortageFactory("test", "NAME", "DOMAIN")
>>> c
<class '__main__.test'>
>>> i = c(Empty())
>>> i.NAME
'NAME'
>>> i.DOMAIN
'DOMAIN'
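The dynamic-class machinery behind PortageFactory can be approximated with the three-argument form of type(). The following is a self-contained sketch of the idea, not benchbuild's actual implementation; the helper name make_project is hypothetical:

```python
def make_project(name, NAME, DOMAIN, base=object):
    """Create a class with NAME/DOMAIN set and a no-op run_tests,
    analogous to PortageFactory (illustrative sketch only)."""
    def run_tests(self, *args, **kwargs):
        # Auto-generated projects support compile-time experiments only,
        # so the run-time test is a noop.
        pass

    return type(name, (base,), {
        "NAME": NAME,
        "DOMAIN": DOMAIN,
        "run_tests": run_tests,
    })
```

Calling `make_project("test", "NAME", "DOMAIN")` yields a class whose instances carry the given NAME and DOMAIN, mirroring the doctest above.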
benchbuild.projects.gentoo.postgresql module

postgresql experiment within gentoo chroot.

class benchbuild.projects.gentoo.postgresql.Postgresql(exp)[source]

Bases: benchbuild.projects.gentoo.gentoo.GentooGroup

dev-db/postgresql

DOMAIN = 'dev-db/postgresql'
NAME = 'gentoo-postgresql'
build()[source]
outside(chroot_path)[source]

Return the path with the outside prefix.

Args:
chroot_path: the path inside the chroot.
Returns:
Absolute path outside this chroot.
run_tests(experiment, run)[source]
benchbuild.projects.gentoo.sevenz module

p7zip experiment within gentoo chroot.

class benchbuild.projects.gentoo.sevenz.SevenZip(exp)[source]

Bases: benchbuild.projects.gentoo.gentoo.GentooGroup

app-arch/p7zip

DOMAIN = 'app-arch'
NAME = 'gentoo-p7zip'
build()[source]
run_tests(experiment, run)[source]
benchbuild.projects.gentoo.x264 module

media-video/x264-encoder within gentoo chroot.

class benchbuild.projects.gentoo.x264.X264(exp)[source]

Bases: benchbuild.projects.gentoo.gentoo.GentooGroup

media-video/x264-encoder

DOMAIN = 'media-libs'
NAME = 'gentoo-x264'
build()[source]
inputfiles = {'Sintel.2010.720p.raw': ['--input-res', '1280x720'], 'tbbt-small.y4m': []}
prepare()[source]
run_tests(experiment, run)[source]
test_url = 'http://lairosiel.de/dist/'
benchbuild.projects.gentoo.xz module

xz experiment within gentoo chroot.

class benchbuild.projects.gentoo.xz.XZ(exp)[source]

Bases: benchbuild.projects.gentoo.gentoo.GentooGroup

app-arch/xz

DOMAIN = 'app-arch'
NAME = 'gentoo-xz'
build()[source]
prepare()[source]
run_tests(experiment, run)[source]
test_archive = 'compression.tar.gz'
test_url = 'http://lairosiel.de/dist/'
testfiles = ['text.html', 'chicken.jpg', 'control', 'input.source', 'liberty.jpg']
benchbuild.projects.lnt package
Submodules
benchbuild.projects.lnt.lnt module

LNT based measurements.

class benchbuild.projects.lnt.lnt.LNTGroup(exp)[source]

Bases: benchbuild.project.Project

LNT ProjectGroup for running the lnt test suite.

DOMAIN = 'lnt'
GROUP = 'lnt'
NAME_FILTERS = ['(?P<name>.+)\\.simple', '(?P<name>.+)-(dbl|flt)']
VERSION = '9.0.1.13'
after_run_tests(sandbox_dir)[source]
before_run_tests(experiment, run)[source]
build()[source]
configure()[source]
download()[source]
src_dir = 'lnt'
src_uri = 'http://llvm.org/git/lnt'
test_suite_dir = 'test-suite'
test_suite_uri = 'http://llvm.org/git/test-suite'
class benchbuild.projects.lnt.lnt.MultiSourceApplications(exp)[source]

Bases: benchbuild.projects.lnt.lnt.LNTGroup

DOMAIN = 'LNT (MSA)'
NAME = 'MultiSourceApplications'
run_tests(experiment, run)[source]
class benchbuild.projects.lnt.lnt.MultiSourceBenchmarks(exp)[source]

Bases: benchbuild.projects.lnt.lnt.LNTGroup

DOMAIN = 'LNT (MSB)'
NAME = 'MultiSourceBenchmarks'
run_tests(experiment, run)[source]
class benchbuild.projects.lnt.lnt.Povray(exp)[source]

Bases: benchbuild.projects.lnt.lnt.LNTGroup

DOMAIN = 'LNT (Ext)'
NAME = 'Povray'
download()[source]
povray_src_dir = 'Povray'
povray_url = 'https://github.com/POV-Ray/povray'
run_tests(experiment, run)[source]
class benchbuild.projects.lnt.lnt.SPEC2006(exp)[source]

Bases: benchbuild.projects.lnt.lnt.LNTGroup

DOMAIN = 'LNT (Ext)'
NAME = 'SPEC2006'
download()[source]
run_tests(experiment, run)[source]
class benchbuild.projects.lnt.lnt.SingleSourceBenchmarks(exp)[source]

Bases: benchbuild.projects.lnt.lnt.LNTGroup

DOMAIN = 'LNT (SSB)'
NAME = 'SingleSourceBenchmarks'
run_tests(experiment, run)[source]
benchbuild.projects.polybench package
Submodules
benchbuild.projects.polybench.polybench module
class benchbuild.projects.polybench.polybench.Adi(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'adi'
class benchbuild.projects.polybench.polybench.Atax(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'atax'
class benchbuild.projects.polybench.polybench.BicG(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'bicg'
class benchbuild.projects.polybench.polybench.Cholesky(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'cholesky'
class benchbuild.projects.polybench.polybench.Correlation(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'correlation'
class benchbuild.projects.polybench.polybench.Covariance(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'covariance'
class benchbuild.projects.polybench.polybench.Deriche(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'deriche'
class benchbuild.projects.polybench.polybench.Doitgen(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'doitgen'
class benchbuild.projects.polybench.polybench.Durbin(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'durbin'
class benchbuild.projects.polybench.polybench.FDTD2D(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'fdtd-2d'
class benchbuild.projects.polybench.polybench.FloydWarshall(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'floyd-warshall'
class benchbuild.projects.polybench.polybench.Gemm(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'gemm'
class benchbuild.projects.polybench.polybench.Gemver(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'gemver'
class benchbuild.projects.polybench.polybench.Gesummv(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'gesummv'
class benchbuild.projects.polybench.polybench.Gramschmidt(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'gramschmidt'
class benchbuild.projects.polybench.polybench.Heat3D(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'heat-3d'
class benchbuild.projects.polybench.polybench.Jacobi1D(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'jacobi-1d'
class benchbuild.projects.polybench.polybench.Jacobi2Dimper(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'jacobi-2d'
class benchbuild.projects.polybench.polybench.Lu(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'lu'
class benchbuild.projects.polybench.polybench.LuDCMP(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'ludcmp'
class benchbuild.projects.polybench.polybench.Mvt(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'mvt'
class benchbuild.projects.polybench.polybench.Nussinov(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'nussinov'
class benchbuild.projects.polybench.polybench.PolyBenchGroup(exp)[source]

Bases: benchbuild.project.Project

DOMAIN = 'polybench'
GROUP = 'polybench'
SRC_FILE = 'polybench-c-4.2.tar.gz'
VERSION = '4.2'
build()[source]
configure()[source]
download()[source]
path_dict = {'ludcmp': 'linear-algebra/solvers', 'gemm': 'linear-algebra/blas', 'gramschmidt': 'linear-algebra/solvers', 'gesummv': 'linear-algebra/blas', 'fdtd-2d': 'stencils', 'deriche': 'medley', 'covariance': 'datamining', 'cholesky': 'linear-algebra/solvers', 'correlation': 'datamining', 'floyd-warshall': 'medley', 'syr2k': 'linear-algebra/blas', 'trisolv': 'linear-algebra/solvers', 'lu': 'linear-algebra/solvers', 'bicg': 'linear-algebra/kernels', '2mm': 'linear-algebra/kernels', 'symm': 'linear-algebra/blas', 'heat-3d': 'stencils', 'seidel-2d': 'stencils', 'gemver': 'linear-algebra/blas', 'trmm': 'linear-algebra/blas', 'syrk': 'linear-algebra/blas', 'doitgen': 'linear-algebra/kernels', '3mm': 'linear-algebra/kernels', 'jacobi-1d': 'stencils', 'atax': 'linear-algebra/kernels', 'nussinov': 'medley', 'adi': 'stencils', 'jacobi-2d': 'stencils', 'durbin': 'linear-algebra/solvers', 'mvt': 'linear-algebra/kernels'}
run_tests(experiment, run)[source]
src_dir = 'polybench-c-4.2'
src_uri = 'http://downloads.sourceforge.net/project/polybench/polybench-c-4.2.tar.gz'
class benchbuild.projects.polybench.polybench.Seidel2D(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'seidel-2d'
class benchbuild.projects.polybench.polybench.Symm(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'symm'
class benchbuild.projects.polybench.polybench.Syr2k(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'syr2k'
class benchbuild.projects.polybench.polybench.Syrk(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'syrk'
class benchbuild.projects.polybench.polybench.ThreeMM(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = '3mm'
class benchbuild.projects.polybench.polybench.Trisolv(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'trisolv'
class benchbuild.projects.polybench.polybench.Trmm(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = 'trmm'
class benchbuild.projects.polybench.polybench.TwoMM(exp)[source]

Bases: benchbuild.projects.polybench.polybench.PolyBenchGroup

NAME = '2mm'
benchbuild.projects.polybench.polybench.get_dump_arrays_output(data)[source]

benchbuild.reports package

Register reports for an experiment

class benchbuild.reports.Report(exp_name, exp_ids, out_path)[source]

Bases: object

SUPPORTED_EXPERIMENTS = []
class benchbuild.reports.ReportRegistry(name, bases, dict)[source]

Bases: type

reports = {'pj-seq-greedy': [<class 'benchbuild.experiments.pj_sequence.SequenceReport'>], 'pj-seq-genetic2-opt': [<class 'benchbuild.experiments.pj_sequence.SequenceReport'>], 'pj-test': [<class 'benchbuild.experiments.pjtest.TestReport'>], 'pollytest': [<class 'benchbuild.experiments.pollytest.PollyTestReport'>], 'pj-seq-hillclimber': [<class 'benchbuild.experiments.pj_sequence.SequenceReport'>], 'pj-seq-genetic1-opt': [<class 'benchbuild.experiments.pj_sequence.SequenceReport'>]}
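The registry pattern used here, a metaclass that records each Report subclass under the experiment names it supports, can be sketched in a few lines. The names mirror the API above, but this is a simplified illustration, not benchbuild's implementation:

```python
class ReportRegistry(type):
    """Collect Report subclasses keyed by supported experiment name."""
    reports = {}

    def __init__(cls, name, bases, attrs):
        super().__init__(name, bases, attrs)
        # Every class created with this metaclass registers itself
        # under each experiment name it declares support for.
        for exp_name in getattr(cls, "SUPPORTED_EXPERIMENTS", []):
            ReportRegistry.reports.setdefault(exp_name, []).append(cls)


class Report(metaclass=ReportRegistry):
    SUPPORTED_EXPERIMENTS = []


class RawReport(Report):
    SUPPORTED_EXPERIMENTS = ["raw"]
```

After these definitions, `ReportRegistry.reports` maps `"raw"` to `[RawReport]`, which is how discover() below can look up reports by experiment name.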
benchbuild.reports.discover()[source]

Import all experiments listed in *_PLUGINS_REPORTS.

Tests:
>>> from benchbuild.settings import CFG
>>> from benchbuild.reports import discover
>>> import logging as lg
>>> import sys
>>> l = lg.getLogger('benchbuild')
>>> l.setLevel(lg.DEBUG)
>>> l.handlers = [lg.StreamHandler(stream=sys.stdout)]
>>> CFG["plugins"]["reports"] = ["benchbuild.non.existing", "benchbuild.reports.raw"]
>>> discover()
Could not find 'benchbuild.non.existing'
Found report: benchbuild.reports.raw
benchbuild.reports.load_experiment_ids_from_names(session, names)[source]
Submodules
benchbuild.reports.raw module
class benchbuild.reports.raw.RawReport(exp_name, exp_ids, out_path)[source]

Bases: benchbuild.reports.Report

SUPPORTED_EXPERIMENTS = ['raw']
generate()[source]
report()[source]

benchbuild.utils package

Module handler that makes sure the modules for our commands are built similarly to plumbum. The built modules are only active during an experiment run and are deleted afterwards.

benchbuild.utils.cmd

Module-hack, adapted from plumbum.

Submodules
benchbuild.utils.actions module

This defines classes that can be used to implement a series of Actions.
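The composition pattern behind these classes can be illustrated with a minimal, self-contained sketch. The names mirror the API listed below (Step, Any, Echo, StepResult), but this is a simplified illustration, not benchbuild's actual implementation:

```python
from enum import IntEnum


class StepResult(IntEnum):
    """Mirrors benchbuild.utils.actions.StepResult."""
    UNSET = 0
    OK = 1
    CAN_CONTINUE = 2
    ERROR = 3


class Step:
    """A single action; subclasses override __call__."""
    NAME = None
    DESCRIPTION = None

    def __call__(self):
        return StepResult.OK


class Any(Step):
    """Just run all actions, no questions asked."""
    NAME = "ANY"

    def __init__(self, actions):
        self.actions = actions

    def __call__(self):
        # The aggregate result is the worst child result; IntEnum
        # ordering makes ERROR compare greater than OK.
        results = [action() for action in self.actions]
        return max(results, default=StepResult.OK)


class Echo(Step):
    """Print a message."""
    NAME = "ECHO"

    def __init__(self, message):
        self.message = message

    def __call__(self):
        print(self.message)
        return StepResult.OK
```

A plan such as `Any([Echo("configure"), Echo("build")])()` runs each child action in order and reports the combined result.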

class benchbuild.utils.actions.Any(actions)[source]

Bases: benchbuild.utils.actions.Step

DESCRIPTION = 'Just run all actions, no questions asked.'
NAME = 'ANY'
class benchbuild.utils.actions.Build(project)[source]

Bases: benchbuild.utils.actions.Step

DESCRIPTION = 'Build the project'
NAME = 'BUILD'
class benchbuild.utils.actions.Clean(project_or_experiment, action_fn=None, check_empty=False)[source]

Bases: benchbuild.utils.actions.Step

DESCRIPTION = 'Cleans the build directory'
NAME = 'CLEAN'
class benchbuild.utils.actions.CleanExtra(project_or_experiment, action_fn=None)[source]

Bases: benchbuild.utils.actions.Step

DESCRIPTION = 'Cleans the extra directories.'
NAME = 'CLEAN EXTRA'
class benchbuild.utils.actions.Configure(project)[source]

Bases: benchbuild.utils.actions.Step

DESCRIPTION = 'Configure project source files'
NAME = 'CONFIGURE'
class benchbuild.utils.actions.Download(project)[source]

Bases: benchbuild.utils.actions.Step

DESCRIPTION = 'Download project source files'
NAME = 'DOWNLOAD'
class benchbuild.utils.actions.Echo(message)[source]

Bases: benchbuild.utils.actions.Step

DESCRIPTION = 'Print a message.'
NAME = 'ECHO'
class benchbuild.utils.actions.Experiment(experiment, actions)[source]

Bases: benchbuild.utils.actions.Any

DESCRIPTION = 'Run a experiment, wrapped in a db transaction'
NAME = 'EXPERIMENT'
begin_transaction()[source]
end_transaction(experiment, session)[source]
class benchbuild.utils.actions.MakeBuildDir(project_or_experiment, action_fn=None)[source]

Bases: benchbuild.utils.actions.Step

DESCRIPTION = 'Create the build directory'
NAME = 'MKDIR'
class benchbuild.utils.actions.Prepare(project)[source]

Bases: benchbuild.utils.actions.Step

DESCRIPTION = 'Prepare project build folder'
NAME = 'PREPARE'
class benchbuild.utils.actions.RequireAll(actions)[source]

Bases: benchbuild.utils.actions.Step

class benchbuild.utils.actions.RetrieveFile(project_or_experiment, filename, run_group)[source]

Bases: benchbuild.utils.actions.Step

DESCRIPTION = 'Retrieve a file from the database'
NAME = 'RETRIEVEFILE'
class benchbuild.utils.actions.Run(project)[source]

Bases: benchbuild.utils.actions.Step

DESCRIPTION = 'Execute the run action'
NAME = 'RUN'
class benchbuild.utils.actions.SaveProfile(project_or_experiment, filename)[source]

Bases: benchbuild.utils.actions.Step

DESCRIPTION = 'Save a profile in llvm format in the DB'
NAME = 'SAVEPROFILE'
class benchbuild.utils.actions.Step(project_or_experiment, action_fn=None)[source]

Bases: object

DESCRIPTION = None
NAME = None
ON_STEP_BEGIN = []
ON_STEP_END = []
onerror()[source]
status
class benchbuild.utils.actions.StepClass[source]

Bases: abc.ABCMeta

class benchbuild.utils.actions.StepResult[source]

Bases: enum.IntEnum

An enumeration.

CAN_CONTINUE = 2
ERROR = 3
OK = 1
UNSET = 0
benchbuild.utils.actions.log_before_after(name: str, desc: str)[source]
benchbuild.utils.actions.notify_step_begin_end(f)[source]
benchbuild.utils.actions.prepend_status(f)[source]
benchbuild.utils.actions.to_step_result(f)[source]
benchbuild.utils.bootstrap module

Helper functions for bootstrapping external dependencies.

benchbuild.utils.bootstrap.check_uchroot_config()[source]
benchbuild.utils.bootstrap.find_package(binary)[source]
benchbuild.utils.bootstrap.install_package(pkg_name)[source]
benchbuild.utils.bootstrap.install_uchroot()[source]
benchbuild.utils.bootstrap.linux_distribution_major()[source]
benchbuild.utils.bootstrap.provide_package(pkg_name)[source]
benchbuild.utils.bootstrap.provide_packages(pkg_names)[source]
benchbuild.utils.compiler module

Helper functions for dealing with compiler replacement.

This provides a few key functions to deal with varying/measuring the compilers used inside the benchbuild study. From a high-level view, there are two interesting functions:

  • lt_clang(cflags, ldflags, func)
  • lt_clang_cxx(cflags, ldflags, func)

These generate a wrapped clang/clang++ in the current working directory and hide the given cflags/ldflags from the calling build system. Both give you a working plumbum command that calls a python script which redirects to the real clang/clang++ with the additional cflags and ldflags.

The wrapper-script generated for both functions can be found inside:
  • wrap_cc()
The remaining methods:
  • llvm()
  • llvm_libs()
  • clang()
  • clang_cxx()

are convenience methods for interacting with the configured llvm/clang source directories.
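The flag-hiding mechanism can be sketched as follows. This is an illustrative stand-in for what lt_clang generates, not benchbuild's actual wrapper script; the function name and script layout are assumptions:

```python
import os
import stat

def make_wrapper(real_compiler, hidden_cflags, hidden_ldflags, out_path):
    """Write a tiny wrapper script that injects hidden flags.

    The build system calls the wrapper with its own arguments; the
    wrapper forwards them to the real compiler together with the
    flags we kept out of the build system's sight.
    """
    script = "\n".join([
        "#!/usr/bin/env python3",
        "import os, sys",
        "argv = [{!r}] + {!r} + sys.argv[1:] + {!r}".format(
            real_compiler, list(hidden_cflags), list(hidden_ldflags)),
        "os.execvp(argv[0], argv)",
    ])
    with open(out_path, "w") as handle:
        handle.write(script)
    # Mark the generated script executable, like the real wrapper.
    os.chmod(out_path, os.stat(out_path).st_mode | stat.S_IEXEC)
    return out_path
```

The calling build system only ever sees the wrapper path, so its own flag handling never observes the hidden cflags/ldflags.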

class benchbuild.utils.compiler.ExperimentCommand(cmd, args, exp_args)[source]

Bases: plumbum.commands.base.BoundCommand

args
cmd
benchbuild.utils.compiler.clang()[source]

Get a usable clang plumbum command.

This searches for a usable clang in the llvm binary path (See llvm()) and returns a plumbum command to call it.

Returns:
plumbum Command that executes clang
benchbuild.utils.compiler.clang_cxx()[source]

Get a usable clang++ plumbum command.

This searches for a usable clang++ in the llvm binary path (See llvm()) and returns a plumbum command to call it.

Returns:
plumbum Command that executes clang++
benchbuild.utils.compiler.llvm()[source]

Get the path where all llvm binaries can be found.

Environment variable:
BB_LLVM_DIR
Returns:
LLVM binary path.
benchbuild.utils.compiler.llvm_libs()[source]

Get the path where all llvm libraries can be found.

Environment variable:
BB_LLVM_DIR
Returns:
LLVM library path.
benchbuild.utils.compiler.lt_clang(cflags, ldflags, func=None)[source]

Return a clang that hides CFLAGS and LDFLAGS.

This will generate a wrapper script in the current directory and return a complete plumbum command to it.

Args:

cflags: The CFLAGS we want to hide.
ldflags: The LDFLAGS we want to hide.
func (optional): A function that will be pickled alongside the compiler.
It will be called before the actual compilation takes place. This way you can intercept the compilation process with arbitrary python code.
Returns (benchbuild.utils.cmd):
Path to the new clang command.
benchbuild.utils.compiler.lt_clang_cxx(cflags, ldflags, func=None)[source]

Return a clang++ that hides CFLAGS and LDFLAGS.

This will generate a wrapper script in the current directory and return a complete plumbum command to it.

Args:

cflags: The CFLAGS we want to hide.
ldflags: The LDFLAGS we want to hide.
func (optional): A function that will be pickled alongside the compiler.
It will be called before the actual compilation takes place. This way you can intercept the compilation process with arbitrary python code.
Returns (benchbuild.utils.cmd):
Path to the new clang++ command.
benchbuild.utils.compiler.wrap_cc_in_uchroot(cflags, ldflags, func=None, cc_name='clang')[source]

Generate a clang wrapper that may be called from within a uchroot.

This basically does the same as lt_clang/lt_clang_cxx. However, we do not create a valid plumbum command. The generated script will only work inside a uchroot environment that has its root at the current working directory at the time this function is called.

Args:

cflags: The CFLAGS we want to hide.
ldflags: The LDFLAGS we want to hide.
func (optional): A function that will be pickled alongside the compiler.
It will be called before the actual compilation takes place. This way you can intercept the compilation process with arbitrary python code.
uchroot_path: Prefix path of the compiler inside the uchroot.
cc_name: Name of the generated script.

benchbuild.utils.compiler.wrap_cxx_in_uchroot(cflags, ldflags, func=None)[source]

Delegate to wrap_cc_in_uchroot().

benchbuild.utils.container module

Container utilites.

class benchbuild.utils.container.Container[source]

Bases: object

filename
local

Finds the current location of a container. Also unpacks the project if necessary.

Returns:
target: The path, where the container lies in the end.
name = 'container'
remote
class benchbuild.utils.container.Gentoo[source]

Bases: benchbuild.utils.container.Container

latest_src_uri(*args, **kwargs)[source]
name = 'gentoo'
remote

Get a remote URL of the requested container.

class benchbuild.utils.container.Ubuntu[source]

Bases: benchbuild.utils.container.Container

name = 'ubuntu'
remote

Get a remote URL of the requested container.

benchbuild.utils.container.cached(func)[source]
benchbuild.utils.container.is_valid_container(container, path)[source]

Checks if a container exists and is unpacked.

Args:
path: The location where the container is expected.
Returns:
True if the container is valid, False if the container needs to be unpacked or if the path does not exist yet.
benchbuild.utils.container.unpack_container(container, path)[source]

Unpack a container usable by uchroot.

Method that checks if a directory for the container exists, checks if erlent support is needed and then unpacks the container accordingly.

Args:
path: The location of the container that needs to be unpacked.
benchbuild.utils.db module

Database support module for the benchbuild study.

benchbuild.utils.db.create_run(cmd, project, exp, grp)[source]

Create a new ‘run’ in the database.

This creates a new transaction in the database and creates a new run in this transaction. Afterwards we return both the transaction as well as the run itself. The user is responsible for committing it when the time comes.

Args:
cmd: The command that has been executed.
project: The project this run belongs to.
exp: The experiment this run belongs to.
grp: The run_group (uuid) we belong to.
Returns:
The inserted tuple representing the run and the session opened with the new run. Don’t forget to commit it at some point.
benchbuild.utils.db.create_run_group(prj)[source]

Create a new ‘run_group’ in the database.

This creates a new transaction in the database and creates a new run_group within this transaction. Afterwards we return both the transaction as well as the run_group itself. The user is responsible for committing it when the time comes.

Args:
prj: The project for which we open the run_group.
Returns:
A tuple (group, session) containing both the newly created run_group and the transaction object.
benchbuild.utils.db.extract_file(filename, outfile, exp_id, run_group)[source]

Extract a previously stored file from the database.

Args:
filename (str):
The name of the file associated to the content in the database.
outfile (str):
The filepath we want to store the content to.
exp_id (uuid):
The experiment uuid the file was stored in.
run_group (uuid):
The run_group the file was stored in.
benchbuild.utils.db.persist_compilestats(run, session, stats)[source]

Persist the run results in the database.

Args:
run: The run we attach the compilestats to.
session: The db transaction we belong to.
stats: The stats we want to store in the database.
benchbuild.utils.db.persist_config(run, session, cfg)[source]

Persist the configuration in as key-value pairs.

Args:
run: The run we attach the config to.
session: The db transaction we belong to.
cfg: The configuration we want to persist.
benchbuild.utils.db.persist_experiment(experiment)[source]

Persist this experiment in the benchbuild database.

Args:
experiment: The experiment we want to persist.
benchbuild.utils.db.persist_file(f, experiment_id, run_group)[source]

Persist a file in the FileContent relation.

Args:
f (str):
The filename we want to persist.
experiment_id (uuid):
The experiment uuid this file needs to be assigned to.
run_group (uuid):
The run group uuid this file needs to be assigned to.
benchbuild.utils.db.persist_likwid(run, session, measurements)[source]

Persist all likwid results.

Args:
run: The run we attach our measurements to.
session: The db transaction we belong to.
measurements: The likwid measurements we want to store.
benchbuild.utils.db.persist_perf(run, session, svg_path)[source]

Persist the flamegraph in the database.

The flamegraph exists as a SVG image on disk until we persist it in the database.

Args:
run: The run we attach these perf measurements to.
session: The db transaction we belong to.
svg_path: The path to the SVG file we want to store.
benchbuild.utils.db.persist_project(project)[source]

Persist this project in the benchbuild database.

Args:
project: The project we want to persist.
benchbuild.utils.db.persist_time(run, session, *args, **kwargs)[source]
benchbuild.utils.db.validate(func)[source]
benchbuild.utils.downloader module

Downloading helper functions for benchbuild.

The helpers defined in this module provide access to common download methods for the source code of benchbuild projects. All downloads are cached in BB_TMP_DIR and locked down with a hash that is generated after the first download. If the hash matches the file/folder found in BB_TMP_DIR, nothing is downloaded at all.

Supported methods:
Copy, CopyNoFail, Wget, Git, Svn, Rsync
benchbuild.utils.downloader.Copy(From, To)[source]

Small copy wrapper.

Args:
From (str): Path to the SOURCE.
To (str): Path to the TARGET.
benchbuild.utils.downloader.CopyNoFail(src, root=None)[source]

Copy src into the current working directory, if it exists.

No action is taken if src does not exist. No hash is checked.

Args:

src: The filename we want to copy to ‘.’.
root: The optional source dir we should pull src from. Defaults to benchbuild.settings.CFG["tmpdir"].
Returns:
True, if we copied something.
benchbuild.utils.downloader.Git(src_url, tgt_name, tgt_root=None)[source]

Get a shallow clone of the given repo.

Args:

src_url (str): Git URL of the SOURCE repo.
tgt_name (str): Name of the repo folder on disk.
tgt_root (str): TARGET folder for the git repo. Defaults to CFG["tmpdir"].
benchbuild.utils.downloader.Rsync(url, tgt_name, tgt_root=None)[source]

RSync a folder.

Args:

url (str): The url of the SOURCE location.
tgt_name (str): The name of the TARGET.
tgt_root (str): Path of the target location. Defaults to CFG["tmpdir"].
benchbuild.utils.downloader.Svn(url, fname, to=None)[source]

Checkout the SVN repo.

Args:

url (str): The SVN SOURCE repo.
fname (str): The name of the repo on disk.
to (str): The name of the TARGET folder on disk. Defaults to CFG["tmpdir"].
benchbuild.utils.downloader.Wget(src_url, tgt_name, tgt_root=None)[source]

Download url, if required.

Args:

src_url (str): Our SOURCE url.
tgt_name (str): The filename we want to have on disk.
tgt_root (str): The TARGET directory for the download. Defaults to CFG["tmpdir"].
benchbuild.utils.downloader.get_hash_of_dirs(directory)[source]

Recursively hash the contents of the given directory.

Args:
directory (str): The root directory we want to hash.
Returns:
A hash of all the contents in the directory.
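A recursive directory hash of this kind can be sketched with the stdlib; this is illustrative only, and the hash algorithm and traversal details are assumptions rather than benchbuild's exact implementation:

```python
import hashlib
import os

def hash_directory(directory):
    """Recursively hash file paths and contents below `directory`.

    A stable walk order makes the digest reproducible, so it can be
    compared against a cached value to decide whether a download is
    still required.
    """
    digest = hashlib.sha256()
    for root, dirs, files in os.walk(directory):
        dirs.sort()  # deterministic traversal order
        for name in sorted(files):
            path = os.path.join(root, name)
            # Mix in the relative path so renames change the digest.
            digest.update(os.path.relpath(path, directory).encode())
            with open(path, "rb") as handle:
                for chunk in iter(lambda: handle.read(65536), b""):
                    digest.update(chunk)
    return digest.hexdigest()
```

Comparing two such digests is what lets source_required() skip a download when nothing below the cached directory changed.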
benchbuild.utils.downloader.source_required(src_file, src_root)[source]

Check, if a download is required.

Args:
src_file: The filename to check for.
src_root: The path we find the file in.
Returns:
True, if we need to download something, False otherwise.
benchbuild.utils.downloader.update_hash(src, root)[source]

Update the hash for the given file.

Args:
src: The file name.
root: The path of the given file.
benchbuild.utils.log module
benchbuild.utils.log.configure()[source]

Load logging configuration from our own defaults.

benchbuild.utils.log.set_defaults()[source]

Configure the loggers default settings.

benchbuild.utils.path module

Path utilities for benchbuild.

benchbuild.utils.path.determine_path()[source]

Borrowed from wxglade.py

benchbuild.utils.path.list_to_path(pathlist)[source]

Convert a list of path elements to a path string.

benchbuild.utils.path.mkdir_uchroot(dirpath, root='.')[source]

Create a directory inside a uchroot env.

You will want to use this when you need to create a directory with appropriate rights inside a uchroot container with subuid/subgid handling enabled.

Args:
dirpath:
The dirpath that should be created. Absolute inside the uchroot container.
root:
The root PATH of the container filesystem as seen outside of the container.
benchbuild.utils.path.mkfile_uchroot(filepath, root='.')[source]

Create a file inside a uchroot env.

You will want to use this when you need to create a file with appropriate rights inside a uchroot container with subuid/subgid handling enabled.

Args:
filepath:
The filepath that should be created. Absolute inside the uchroot container.
root:
The root PATH of the container filesystem as seen outside of the container.
benchbuild.utils.path.path_to_list(pathstr)[source]

Convert a path string to a list of path elements.

benchbuild.utils.path.template_files(path, exts=[])[source]

Return a list of filenames found at @path.

The list of filenames can be filtered by extensions.

Arguments:
path: Existing filepath we want to list.
exts: List of extensions to filter by.
Returns:
A list of filenames found in the path.
benchbuild.utils.path.template_path(template)[source]

Return path to template file.

benchbuild.utils.path.template_str(template)[source]

Read a template file from the resources and return it as str.

benchbuild.utils.run module

Experiment helpers.

class benchbuild.utils.run.RunInfo(**kwargs)[source]

Bases: object

commit()[source]
has_failed

Check, whether this run failed.

class benchbuild.utils.run.UchrootEC[source]

Bases: enum.Enum

An enumeration.

MNT_DEV_FAILED = 253
MNT_FAILED = 255
MNT_PROC_FAILED = 254
MNT_PTS_FAILED = 251
MNT_SYS_FAILED = 252
exception benchbuild.utils.run.UnmountError[source]

Bases: BaseException

benchbuild.utils.run.begin_run_group(project)[source]

Begin a run_group in the database.

A run_group groups a set of runs for a given project. This models a series of runs that form a complete binary runtime test.

Args:
project: The project we begin a new run_group for.
Returns:
(group, session) where group is the created group in the database and session is the database session this group lives in.
benchbuild.utils.run.end_run_group(group, session)[source]

End the run_group successfully.

Args:
group: The run_group we want to complete.
session: The database transaction we will finish.
benchbuild.utils.run.exit_code_from_run_infos(run_infos: typing.List[benchbuild.utils.run.RunInfo])[source]
benchbuild.utils.run.fail_run_group(group, session)[source]

End the run_group unsuccessfully.

Args:
group: The run_group we want to complete.
session: The database transaction we will finish.
benchbuild.utils.run.fetch_time_output(marker, format_s, ins)[source]

Fetch the output of /usr/bin/time from a list of input lines.

Args:
marker: The marker that limits the time output.
format_s: The format string used to parse the timings.
ins: A list of lines we look for the output in.
Returns:
A list of timing tuples
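The parsing step can be sketched as follows. The marker string and the `'%U %S %e'`-style field order are hypothetical; benchbuild's actual format string may differ:

```python
def parse_time_lines(marker, lines):
    """Extract (user, system, real) seconds from marker-prefixed lines.

    Assumes the timing line was produced by /usr/bin/time with a
    format such as '<marker> %U %S %e' (user, system, wall-clock
    seconds). Non-matching lines are ignored.
    """
    timings = []
    for line in lines:
        if line.startswith(marker):
            user_s, system_s, real_s = line[len(marker):].split()
            timings.append((float(user_s), float(system_s), float(real_s)))
    return timings
```

Delimiting the timing output with a marker keeps the parser robust against arbitrary program output interleaved on the same stream.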
benchbuild.utils.run.in_builddir(sub='.')[source]

Decorate a project phase with a local working directory change.

Args:
sub: An optional subdirectory to change into.
benchbuild.utils.run.retry(pb_cmd, retries=0, max_retries=10, retcode=0, retry_retcodes=None)[source]
benchbuild.utils.run.run(command, retcode=0)[source]

Execute a plumbum command, depending on the user’s settings.

Args:

command: The plumbum command to execute, as command & TEE(retcode=retcode).
retcode: The expected return code.
benchbuild.utils.run.store_config(func)[source]

Decorator for storing the configuration in the project’s builddir.

benchbuild.utils.run.track_execution(cmd, project, experiment, **kwargs)[source]

Guard the execution of the given command.

Args:
cmd: The command we guard.
project: The project we run under.
experiment: The experiment this run belongs to.
run_group: The run group this execution will belong to.
Raises:
RunException: If the cmd encounters an error we wrap the exception
in a RunException and re-raise. This ends the run unsuccessfully.
benchbuild.utils.run.uchroot(*args, **kwargs)[source]

Return a customizable uchroot command.

Args:
args: List of additional arguments for uchroot (typical: mounts)
Return:
chroot_cmd
benchbuild.utils.run.uchroot_env(mounts)[source]

Compute the environment of the change root for the user.

Args:
mounts: The mountpoints of the current user.
Return:
A tuple (paths, ld_libs) for the changed root.
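Deriving those environment entries from a list of mountpoints can be sketched like this. The bin/ and lib/ layout is an assumption for illustration; the directories benchbuild actually exposes may differ:

```python
import os

def env_from_mounts(mounts):
    """Derive PATH / LD_LIBRARY_PATH entries for a set of mountpoints.

    For every mountpoint we expose its bin/ and lib/ subdirectories,
    mirroring what a change root needs in order to find the tools and
    libraries mounted into it.
    """
    paths, ld_libs = [], []
    for mount in mounts:
        paths.append(os.path.join(mount, "bin"))
        ld_libs.append(os.path.join(mount, "lib"))
    return paths, ld_libs
```

The two lists can then be joined with os.pathsep and exported as PATH and LD_LIBRARY_PATH inside the container.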
benchbuild.utils.run.uchroot_mounts(prefix, mounts)[source]

Compute the mountpoints of the current user.

Args:
prefix: Define where the job was running if it ran on a cluster.
mounts: All mounts the user currently uses in their file system.
Return:
mntpoints
benchbuild.utils.run.uchroot_no_args()[source]

Return the uchroot command without any customizations.

benchbuild.utils.run.uchroot_no_llvm(*args, **kwargs)[source]

Return a customizable uchroot command.

The command will be executed inside a uchroot environment.

Args:
args: List of additional arguments for uchroot (typical: mounts)
Return:
chroot_cmd
benchbuild.utils.run.uchroot_with_mounts(*args, **kwargs)[source]

Return a uchroot command with all mounts enabled.

benchbuild.utils.run.unionfs(base_dir='./base', image_dir='./image', image_prefix=None, mountpoint='./union')[source]

Decorator for the UnionFS feature.

This configures a unionfs for projects. The given base_dir and/or image_dir are layered as follows:

image_dir=RW:base_dir=RO

All writes go to the image_dir, while base_dir delivers the (read-only) versions of the rest of the filesystem.

The unified version will be provided in the project’s builddir. Unmounting is done as soon as the function completes.

Args:
base_dir: The unpacked container of a project delivered by a method out of the container utils.
image_dir: Virtual image of the actual file system represented by the
build_dir of a project.
image_prefix: Useful prefix if the projects run on a cluster,
to identify where the job came from and where it runs.
mountpoint: Location where the filesystems merge; defaults to ‘./union’.
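The layering described above maps directly onto unionfs-fuse's branch syntax. A sketch of how such a mount command could be assembled (the 'cow' option and argument order are assumptions for illustration):

```python
def unionfs_branches(rw_image, ro_base):
    """Build a unionfs-fuse branch specification.

    The writable image layer is listed first, the read-only base
    second, matching the image_dir=RW:base_dir=RO layering: all
    writes land in rw_image, reads fall through to ro_base.
    """
    return "{}=RW:{}=RO".format(rw_image, ro_base)

def unionfs_command(rw_image, ro_base, mountpoint):
    """Assemble an argument vector for a copy-on-write unionfs mount."""
    return ["unionfs", "-o", "cow",
            unionfs_branches(rw_image, ro_base), mountpoint]
```

Passing such a vector to a process runner would perform the mount; the decorator's job is then only to run the wrapped phase inside mountpoint and unmount afterwards.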
benchbuild.utils.run.unionfs_is_active(root)[source]
benchbuild.utils.run.unionfs_set_up(ro_base, rw_image, mountpoint)[source]

Setup a unionfs via unionfs-fuse.

Args:
ro_base: base directory of the project.
rw_image: virtual image of the actual file system.
mountpoint: location where ro_base and rw_image merge.
benchbuild.utils.run.uretry(cmd, retcode=0)[source]
benchbuild.utils.run.with_env_recursive(cmd, **envvars)[source]

Recursively updates the environment of cmd and all its subcommands.

Args:
cmd: A plumbum command-like object.
**envvars: The environment variables to update.
Returns:
The updated command.
benchbuild.utils.schema module
Database schema for benchbuild

The schema should initialize itself on an empty database. For now, we do not support automatic upgrades on schema changes. You might encounter some roadbumps when using an older version of benchbuild.

Furthermore, for now, we are restricted to postgresql databases, although we already support arbitrary connection strings via config.

If you want to use reports that use one of our SQL functions, you need to initialize the functions first using the following command:

> BB_DB_CREATE_FUNCTIONS=true benchbuild run -E empty -l

After that you normally do not need to do this again, unless we supply a new version that you are interested in. As soon as we have alembic running, we can provide automatic up/downgrade paths for you.

class benchbuild.utils.schema.CompileStat(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Store compilestats as given by LLVM’s ‘-stats’ option.

component
id
name
run_id
value
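Turning raw '-stats' output into the (value, component, name) triples this relation stores can be sketched as follows. The line format shown is typical LLVM output; the exact parsing benchbuild performs may differ:

```python
import re

# One line of LLVM '-stats' output looks roughly like:
#   "  42 instcombine - Number of insts combined"
STAT_LINE = re.compile(r"^\s*(\d+)\s+(\S+)\s+-\s+(.+)$")

def parse_llvm_stats(lines):
    """Turn '-stats' output lines into (value, component, name) tuples.

    Lines that do not match the statistic format are skipped, so the
    parser tolerates interleaved compiler diagnostics.
    """
    stats = []
    for line in lines:
        match = STAT_LINE.match(line)
        if match:
            value, component, name = match.groups()
            stats.append((int(value), component, name))
    return stats
```

Each resulting tuple corresponds to one CompileStat row, keyed by the run it belongs to.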
class benchbuild.utils.schema.Config(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Store customized information about a run.

You can store arbitrary configuration information about a run here. Use it for extended filtering against the run table.

name
run_id
value
class benchbuild.utils.schema.Event(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Store PAPI profiling based events.

duration
id
name
run_id
start
tid
type
class benchbuild.utils.schema.Experiment(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Store metadata about experiments.

begin
description
end
id
name
class benchbuild.utils.schema.FileContent(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Store the content of a file so it can be retrieved later.

content
experience_id
filename
rungroup_id
class benchbuild.utils.schema.GUID(*args, as_uuid=False, **kwargs)[source]

Bases: sqlalchemy.sql.type_api.TypeDecorator

Platform-independent GUID type.

Uses Postgresql’s UUID type, otherwise uses CHAR(32), storing as stringified hex values.

as_uuid = False
impl

alias of CHAR

load_dialect_impl(dialect)[source]
process_bind_param(value, dialect)[source]
process_result_value(value, dialect)[source]
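The CHAR(32) fallback described above can be sketched with the stdlib uuid module. This mirrors the documented behavior, not the class's actual code; the helper names are hypothetical:

```python
import uuid

def guid_to_char32(value):
    """Render a UUID as the 32-character hex string used when the
    database backend lacks a native UUID type (the CHAR(32) fallback)."""
    if not isinstance(value, uuid.UUID):
        value = uuid.UUID(value)
    # Zero-padded 32-digit hex, no dashes: fits exactly in CHAR(32).
    return "%.32x" % value.int

def char32_to_guid(value):
    """Reverse the CHAR(32) encoding back into a uuid.UUID."""
    return uuid.UUID(value)
```

On PostgreSQL the native UUID column type is used instead, so this conversion only happens on other backends.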
class benchbuild.utils.schema.GlobalConfig(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Store customized information about a run.

You can store arbitrary configuration information about a run here. Use it for extended filtering against the run table.

experiment_group
name
value
class benchbuild.utils.schema.IslAst(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Store default metrics, simple name value store.

ast
function
run_id
class benchbuild.utils.schema.Likwid(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Store measurement results of likwid based experiments.

core
metric
region
run_id
value
class benchbuild.utils.schema.Metadata(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Store metadata information for every run.

If you happen to have some free-form data that belongs to the database, this is the place for it.

name
run_id
value
class benchbuild.utils.schema.Metric(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Store default metrics, simple name value store.

name
run_id
value
class benchbuild.utils.schema.Project(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Store project metadata.

description
domain
group_name
name
src_url
version
class benchbuild.utils.schema.Regions(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Store region metadata generated by libpjit.

duration
events
id
name
run_id
class benchbuild.utils.schema.RegressionTest(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Store regression tests for all projects.

This relation is filled from the PolyJIT side, not from benchbuild-study. We collect all JIT SCoPs found by PolyJIT in this relation for regression-test generation.

module
name
project_name
run_id
class benchbuild.utils.schema.Run(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Store a run for each executed test binary.

begin
command
end
experiment_group
experiment_name
id
project_group
project_name
run_group
status
class benchbuild.utils.schema.RunGroup(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Store information about a run group.

begin
end
experiment
id
status
class benchbuild.utils.schema.RunLog(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Store log information for every run.

Properties like, start time, finish time, exit code, stderr, stdout are stored here.

begin
config
end
run_id
status
stderr
stdout
class benchbuild.utils.schema.Schedule(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Store default metrics, simple name value store.

function
run_id
schedule
class benchbuild.utils.schema.ScopDetection(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Store results of polli-profile-scops

count
invalid_reason
run_id
class benchbuild.utils.schema.Sequence(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Store the fittest sequence of an opt command and its fitness value.

name
run_id
value
benchbuild.utils.schema.Session()
class benchbuild.utils.schema.SessionManager[source]

Bases: object

get()[source]
benchbuild.utils.schema.init_functions(connection)[source]

Initialize all SQL functions in the database.

benchbuild.utils.slurm module

SLURM support for the benchbuild study.

This module can be used to generate bash scripts that can be executed by the SLURM controller either as batch or interactive script.

benchbuild.utils.slurm.dump_slurm_script(script_name, benchbuild, experiment, projects)[source]

Dump a bash script that can be given to SLURM.

Args:

script_name (str): Name of the bash script.
commands (list(benchbuild.utils.cmd)): List of plumbum commands to write to the bash script.
**kwargs: Dictionary with all environment variable bindings we should map in the bash script.
benchbuild.utils.slurm.prepare_directories(dirs)[source]

Make sure that the required directories exist.

Args:
dirs: The directories we want.
benchbuild.utils.slurm.prepare_slurm_script(experiment, projects)[source]

Prepare a slurm script that executes the experiment for a given project.

Args:
experiment: The experiment we want to execute.
projects: All projects we generate an array job for.
benchbuild.utils.user_interface module

User interface helpers for benchbuild.

benchbuild.utils.user_interface.ask(question, default_answer=False, default_answer_str='no')[source]
benchbuild.utils.user_interface.query_yes_no(question, default='yes')[source]

Ask a yes/no question via raw_input() and return their answer.

Args:

question (str): Question that is presented to the user.
default (str): The presumed answer if the user just hits <Enter>. It must be “yes” (the default), “no”, or None (meaning an answer is required of the user).
Returns (boolean):
True, if ‘yes’, False otherwise.
benchbuild.utils.versions module

Gather version information for BB.

benchbuild.utils.versions.get_git_hash(from_url)[source]

Get the git commit hash of HEAD from :from_url.

Args:
from_url: The file system url of our git repository.
Returns:
git commit hash of HEAD, or empty string.
benchbuild.utils.versions.get_version_from_cache_dir(src_file)[source]

Creates a version for a project out of the hash.

The hash is taken from the directory of the source file.

Args:
src_file: The source file of the project using this function.
Returns:
Either the first 8 digits of the hash as a string, the entire hash as a string if the hash is shorter than 8 characters, or None if the path is incorrect.
benchbuild.utils.wrapping module

Wrapper utilities for benchbuild.

This module provides methods to wrap binaries with extensions that are pickled alongside the original binary. In place of the original binary a new python module is generated that loads the pickle and redirects the program call with all its arguments to it. This allows interception of arbitrary programs for experimentation.

Examples:
TODO
Compiler Wrappers:
The compiler wrappers substitute the compiler call with a script that first produces the expected output of the original compiler call. Afterwards the pickle is loaded and the original call is forwarded to the pickle. This way the user is not obligated to produce valid output during their own experiment.
Runtime Wrappers:
These directly forward the binary call to the pickle without any execution of the binary. We cannot guarantee that repeated execution is valid, therefore, we let the user decide what the program should do.
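The wrap-and-pickle mechanism can be sketched with the stdlib. benchbuild uses cloudpickle so arbitrary closures can be wrapped; this simplified sketch uses plain pickle, which only handles importable functions, and its script layout is an assumption:

```python
import os
import pickle
import stat

WRAPPER_TEMPLATE = """#!/usr/bin/env python3
import pickle, sys
with open({pickle_path!r}, "rb") as f:
    runner = pickle.load(f)
sys.exit(runner(sys.argv[1:]) or 0)
"""

def wrap_binary(name, runner, workdir):
    """Replace `name` with a script that unpickles and calls `runner`.

    The pickle sits next to the generated script; invoking the script
    forwards all command-line arguments to the unpickled function.
    """
    pickle_path = os.path.join(workdir, name + ".pickle")
    with open(pickle_path, "wb") as handle:
        pickle.dump(runner, handle)
    script_path = os.path.join(workdir, name)
    with open(script_path, "w") as handle:
        handle.write(WRAPPER_TEMPLATE.format(pickle_path=pickle_path))
    os.chmod(script_path, os.stat(script_path).st_mode | stat.S_IEXEC)
    return script_path
```

Because the runner travels as a pickle, the experiment logic can be defined once and executed wherever the wrapped binary is later invoked.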
benchbuild.utils.wrapping.strip_path_prefix(ipath, prefix)[source]

Strip prefix from path.

Args:
ipath: input path.
prefix: the prefix to remove, if it is found in :ipath:.
Examples:
>>> strip_path_prefix("/foo/bar", "/bar")
'/foo/bar'
>>> strip_path_prefix("/foo/bar", "/")
'foo/bar'
>>> strip_path_prefix("/foo/bar", "/foo")
'/bar'
>>> strip_path_prefix("/foo/bar", "None")
'/foo/bar'
benchbuild.utils.wrapping.unpickle(pickle_file)[source]

Unpickle a python object from the given path.

benchbuild.utils.wrapping.wrap(name, runner, sprefix=None, python='/home/docs/checkouts/readthedocs.org/user_builds/pprof-study/envs/v2.0.3/bin/python')[source]

Wrap the binary :name: with the function :runner:.

This module generates a python tool that replaces :name:. The function in runner only accepts the replaced binary’s name as argument. We use the cloudpickle package to perform the serialization; make sure :runner: can be serialized with it and you’re fine.

Args:
name: Binary we want to wrap.
runner: Function that should run instead of :name:.
Returns:
A plumbum command, ready to launch.
benchbuild.utils.wrapping.wrap_cc(filepath, cflags, ldflags, compiler, extension, compiler_ext_name=None, python='/home/docs/checkouts/readthedocs.org/user_builds/pprof-study/envs/v2.0.3/bin/python')[source]

Substitute a compiler with a script that hides CFLAGS & LDFLAGS.

This will generate a wrapper script in the current directory and return a complete plumbum command to it.

Args:

filepath (str): Path to the wrapper script.
cflags (list(str)): The CFLAGS we want to hide.
ldflags (list(str)): The LDFLAGS we want to hide.
compiler (benchbuild.utils.cmd): Real compiler command we should call in the script.
extension: A function that will be pickled alongside the compiler. It will be called before the actual compilation takes place. This way you can intercept the compilation process with arbitrary python code.
compiler_ext_name: The name that we should give to the generated dill blob for :func:
Returns (benchbuild.utils.cmd):
Command of the new compiler we can call.
benchbuild.utils.wrapping.wrap_dynamic(self, name, runner, sprefix=None, python='/home/docs/checkouts/readthedocs.org/user_builds/pprof-study/envs/v2.0.3/bin/python', name_filters=None)[source]

Wrap the binary :name with the function :runner.

This module generates a python tool :name: that can replace a yet unspecified binary. It behaves similarly to the :wrap: function. However, the first argument is the actual binary name.

Args:

name: Name of the python module.
runner: Function that should run the real binary.
sprefix: Prefix that should be used for commands.
python: The python executable that should be used.
name_filters: List of regex expressions that are used to filter the real project name. Make sure to include a match group named ‘name’ in the regex, e.g., [r’foo(?P<name>.)-flt’].

Returns:
A plumbum command, ready to launch.

benchbuild.utils.wrapping.wrap_dynamic_in_uchroot(self, name, runner, sprefix=None)[source]
benchbuild.utils.wrapping.wrap_in_uchroot(name, runner, sprefix=None)[source]

Submodules

benchbuild.bootstrap module

class benchbuild.bootstrap.BenchBuildBootstrap(executable)[source]

Bases: plumbum.cli.application.Application

Bootstrap benchbuild external dependencies, if possible.

main(*args)[source]
store_config

Sets an attribute

benchbuild.container module

class benchbuild.container.BashStrategy[source]

Bases: benchbuild.container.ContainerStrategy

The user interface for setting up a bash inside the container.

run(context)[source]
class benchbuild.container.Container(exe)[source]

Bases: plumbum.cli.application.Application

Manage uchroot containers.

VERSION = '2.0.1-$Id: b9b13c9242c155f6fda66e981d19e23e39941f36 $'
builddir(tmpdir)[source]

Set the current builddir of the container.

input_file(container)[source]

Find the input path of a uchroot container.

main(*args)[source]
mounts(user_mount)[source]

Save the current mount of the container into the settings.

output_file(container)[source]

Find and write the output path of a uchroot container.

shell(custom_shell)[source]

The command to run inside the container.

verbosity

Sets an attribute

class benchbuild.container.ContainerBootstrap(executable)[source]

Bases: plumbum.cli.application.Application

Check for the needed files.

install_cmake_and_exit()[source]

Tell the user to install cmake and aborts the current process.

main(*args)[source]
class benchbuild.container.ContainerCreate(executable)[source]

Bases: plumbum.cli.application.Application

Create a new container with a predefined strategy.

We offer a variety of creation policies for a new container. By default a basic ‘spawn a bash’ policy is used. This just leaves you inside a bash that is started in the extracted container. After customization you can exit the bash and pack up the result.

main(*args)[source]
strategy(strategy)[source]
class benchbuild.container.ContainerList(executable)[source]

Bases: plumbum.cli.application.Application

Prints a list of the known containers.

main(*args)[source]
class benchbuild.container.ContainerRun(executable)[source]

Bases: plumbum.cli.application.Application

Execute commands inside a prebuilt container.

main(*args)[source]
class benchbuild.container.ContainerStrategy[source]

Bases: object

Interfaces for the different containers chosen by the experiment.

run(context)[source]
class benchbuild.container.MockObj(**kwargs)[source]

Bases: object

class benchbuild.container.SetupPolyJITGentooStrategy[source]

Bases: benchbuild.container.ContainerStrategy

Strategy for using Gentoo as a container for an experiment.

configure()[source]

Configure the gentoo container for a PolyJIT experiment.

run(context)[source]

Setup a gentoo container suitable for PolyJIT.

write_bashrc(path)[source]

Write a bashrc file to the given path and update the shell if necessary.

write_layout(path)[source]

Create a layout from the given path.

write_makeconfig(path)[source]

Create the string to be written to the settings.

write_wgetrc(path)[source]

Write a wgetrc configuration to the given path.

benchbuild.container.clean_directories(builddir, in_dir=True, out_dir=True)[source]

Remove the in and out directories of the container, if confirmed by the user.

benchbuild.container.find_hash(container_db, key)[source]

Find the first container in the database with the given key.

benchbuild.container.main(*args)[source]
benchbuild.container.pack_container(in_container, out_file)[source]
benchbuild.container.run_in_container(command, container_dir, mounts)[source]

Run a given command inside a container.

Mounts a directory as a container at the given mountpoint and tries to run the given command inside the new container.

benchbuild.container.set_input_container(container, cfg)[source]

Save the input for the container in the configuration.

benchbuild.container.setup_bash_in_container(builddir, container, outfile, mounts, shell)[source]

Setup a bash environment inside a container.

Creates a new chroot in which the user gets a bash to run the wanted projects inside the mounted container; the container is returned afterwards.

benchbuild.container.setup_container(builddir, container)[source]

Prepare the container and return the path where it can be found.

benchbuild.container.setup_directories(builddir)[source]

Create the in and out directories of the container.

benchbuild.driver module

class benchbuild.driver.PollyProfiling(executable)[source]

Bases: plumbum.cli.application.Application

Frontend for running/building the benchbuild study framework.

VERSION = '2.0.1-$Id: b9b13c9242c155f6fda66e981d19e23e39941f36 $'
debug

Sets an attribute

main(*args)[source]
verbosity

Sets an attribute

benchbuild.driver.main(*args)[source]

Main function.

benchbuild.experiment module

BenchBuild’s skeleton for experiments.

A benchbuild.experiment defines a series of phases that constitute a benchbuild-compatible experiment. This is the default implementation of an experiment.

Clients can derive from the class benchbuild.experiment.Experiment and override the methods relevant to their experiment.

An experiment can have a variable number of phases / steps / substeps.

Phases / Steps / Substeps

All phases/steps/substeps support being used as a context manager. All three catch any ProcessExecutionError that plumbum may throw, without aborting the whole experiment; the error is tracked, however.
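A minimal sketch of this error-swallowing behavior, using a stand-in exception in place of plumbum's ProcessExecutionError (the real step classes in benchbuild differ):

```python
class ProcessExecutionError(Exception):
    """Stand-in for plumbum's ProcessExecutionError."""


class Step:
    """A phase that records process errors instead of aborting."""

    def __init__(self, name):
        self.name = name
        self.failed = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is not None and issubclass(exc_type, ProcessExecutionError):
            self.failed = True  # track the error ...
            return True         # ... but do not abort the experiment
        return False            # anything else propagates
```

A failing command raised inside `with Step(...)` marks the step as failed while letting the surrounding experiment continue.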

Actions

An experiment performs the following actions in order:
  1. clean - Clean any previous runs that collide with our directory
  2. prepare - Prepare the experiment, this is a good place to copy relevant
    files over for testing.
  3. run (run_tests) - Run the experiment. The ‘meat’ lies here. Override
    this to perform everything your experiment needs.
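The clean/prepare/run sequence can be sketched with simplified stand-ins (the class and the tuple representation of actions here are illustrative only, not benchbuild's real types):

```python
class Experiment:
    """Simplified stand-in for benchbuild.experiment.Experiment."""

    NAME = None

    @staticmethod
    def default_runtime_actions(project):
        # The stock sequence described above: clean, prepare, run.
        return [("clean", project), ("prepare", project), ("run", project)]

    def actions_for_project(self, project):
        raise NotImplementedError


class MyTiming(Experiment):
    """A custom experiment that reuses the stock runtime actions."""

    NAME = "my-timing"

    def actions_for_project(self, project):
        return self.default_runtime_actions(project)
```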
class benchbuild.experiment.Configuration(project=None, config=None)[source]

Bases: object

Build a set of experiment actions out of a list of configurations.

class benchbuild.experiment.Experiment(projects=None, group=None)[source]

Bases: object

A series of commands executed on a project that form an experiment.

The default implementation should provide a sane environment for all derivatives.

One important task executed by the basic implementation is setting up the default set of projects that belong to this experiment. As every project gets registered in the ProjectFactory, the experiment gets a list of project names that works as a filter.

NAME = None
actions()[source]
actions_for_project(project)[source]

Get the actions a project wants to run.

Args:
project (benchbuild.Project): the project we want to run.
static default_compiletime_actions(project)[source]

Return a series of actions for a compile time experiment.

static default_runtime_actions(project)[source]

Return a series of actions for a run time experiment.

class benchbuild.experiment.ExperimentRegistry(name, bases, dict)[source]

Bases: type

Registry for benchbuild experiments.

experiments = {}
class benchbuild.experiment.RuntimeExperiment(projects=None, group=None)[source]

Bases: benchbuild.experiment.Experiment

Additional runtime only features for experiments.

get_papi_calibration(project, calibrate_call)[source]

Get calibration values for PAPI based measurements.

Args:
project (Project):
Unused (deprecated).
calibrate_call (benchbuild.utils.cmd):
The calibration command we will use.
persist_calibration(project, cmd, calibration)[source]

Persist the result of a calibration call.

Args:
project (benchbuild.Project):
The calibration values will be assigned to this project.
cmd (benchbuild.utils.cmd):
The command we used to generate the calibration values.
calibration (int):
The calibration time in nanoseconds.
benchbuild.experiment.get_group_projects(group: str, experiment) → typing.List[benchbuild.project.Project][source]

Get a list of project names for the given group.

Filter the projects assigned to this experiment by group.

Args:

group (str): The group.
experiment (benchbuild.Experiment): The experiment we draw our projects to filter from.
Returns (list):
A list of project names for the group that are supported by this experiment.

benchbuild.extensions module

Extension base-classes for compile-time and run-time experiments.

class benchbuild.extensions.Extension(*extensions, config=None, **kwargs)[source]

Bases: object

call_next(*args, **kwargs)[source]

Call all child extensions with the same arguments.

print(indent=0)[source]

Print a structural view of the registered extensions.
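The call_next mechanism composes extensions into a chain: each extension does its own work, then forwards the same arguments to its children. A simplified sketch (the Tag subclass is hypothetical; benchbuild's real Extension carries more state):

```python
class Extension:
    """Simplified stand-in for benchbuild.extensions.Extension."""

    def __init__(self, *extensions, config=None, **kwargs):
        self.next_extensions = list(extensions)
        self.config = config

    def call_next(self, *args, **kwargs):
        # Call all child extensions with the same arguments.
        results = []
        for ext in self.next_extensions:
            results.extend(ext(*args, **kwargs))
        return results

    def __call__(self, *args, **kwargs):
        return self.call_next(*args, **kwargs)


class Tag(Extension):
    """Hypothetical extension that contributes its label before recursing."""

    def __init__(self, label, *extensions, **kwargs):
        super().__init__(*extensions, **kwargs)
        self.label = label

    def __call__(self, *args, **kwargs):
        return [self.label] + self.call_next(*args, **kwargs)
```

Nesting `Tag("time", Tag("papi"))` yields `['time', 'papi']` when called, mirroring how wrapper extensions such as RunWithTime nest around inner ones.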

class benchbuild.extensions.ExtractCompileStats(project, experiment, *extensions, config=None)[source]

Bases: benchbuild.extensions.Extension

get_compilestats(prog_out)[source]

Get the LLVM compilation stats from :prog_out:.

class benchbuild.extensions.LogAdditionals(*extensions, config=None, **kwargs)[source]

Bases: benchbuild.extensions.Extension

Log any additional log files that were registered.

class benchbuild.extensions.LogTrackingMixin[source]

Bases: object

Add log-registering capabilities to extensions.

add_log(path)[source]
logs
class benchbuild.extensions.RunCompiler(project, experiment, *extensions, config=None)[source]

Bases: benchbuild.extensions.Extension

class benchbuild.extensions.RunWithTime(*extensions, config=None, **kwargs)[source]

Bases: benchbuild.extensions.Extension

Wrap a command with time and store the timings in the database.

class benchbuild.extensions.RunWithTimeout(*extensions, limit='10m', **kwargs)[source]

Bases: benchbuild.extensions.Extension

class benchbuild.extensions.RuntimeExtension(project, experiment, *extensions, config=None)[source]

Bases: benchbuild.extensions.Extension

class benchbuild.extensions.SetThreadLimit(*extensions, config=None, **kwargs)[source]

Bases: benchbuild.extensions.Extension

benchbuild.likwid module

Likwid helper functions.

Extract information from likwid’s CSV output.

benchbuild.likwid.fetch_cols(fstream, split_char=', ')[source]

Fetch columns from likwid’s output stream.

Args:
fstream: The filestream with likwid’s output.
split_char (str): The character we split on, default ‘,’.
Returns (list(str)):
A list containing the elements of fstream, after splitting at split_char.
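A plausible minimal reading of fetch_cols’ documented behavior (a sketch, not the actual implementation):

```python
def fetch_cols(fstream, split_char=","):
    # Read the next line of likwid output and split it into columns.
    return fstream.readline().strip().split(split_char)
```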
benchbuild.likwid.get_likwid_perfctr(infile)[source]

Get a complete list of all measurements.

Args:
infile: The filestream containing all likwid output.
Returns:
A list of all measurements extracted from likwid’s file stream.
benchbuild.likwid.get_measurements(region, core_info, data, extra_offset=0)[source]

Get the complete measurement info from likwid’s region info.

Args:
region: The region we took a measurement in.
core_info: The core information.
data: The raw data.
extra_offset (int): default = 0
Returns (list((region, metric, core, value))):
A list of measurement tuples, a tuple contains the information about the region, the metric, the core and the actual value.
benchbuild.likwid.read_struct(fstream)[source]

Read a likwid struct from the text stream.

Args:
fstream: Likwid’s filestream.
Returns (dict(str: str)):
A dict containing all likwid’s struct info as key/value pairs.
benchbuild.likwid.read_structs(fstream)[source]

Read all structs from likwid’s file stream.

Args:
fstream: Likwid’s output file stream.
Returns:
A generator that can be used to iterate over all structs in the fstream.
benchbuild.likwid.read_table(fstream)[source]

Read a likwid table info from the text stream.

Args:
fstream: Likwid’s filestream.
Returns (dict(str: str)):
A dict containing likwid’s table info as key/value pairs.
benchbuild.likwid.read_tables(fstream)[source]

Read all tables from likwid’s file stream.

Args:
fstream: Likwid’s output file stream.
Returns:
A generator that can be used to iterate over all tables in the fstream.

benchbuild.log module

Analyze the BB database.

class benchbuild.log.BenchBuildLog(executable)[source]

Bases: plumbum.cli.application.Application

Frontend command to the benchbuild database.

experiment(experiments)[source]

Set the experiments to fetch the log for.

experiment_ids(experiment_ids)[source]

Set the experiment ids to fetch the log for.

log_type(types)[source]

Set the output types to print.

main()[source]

Run the log command.

project(projects)[source]

Set the projects to fetch the log for.

project_ids(project_ids)[source]

Set the project ids to fetch the log for.

benchbuild.log.print_logs(query, types=None)[source]

Print status logs.

benchbuild.log.print_runs(query)[source]

Print all rows in this result query.

benchbuild.project module

Project handling for the benchbuild study.

class benchbuild.project.Project(exp, group: str = None)[source]

Bases: object

benchbuild’s Project class.

A project defines how run-time testing and cleaning is done for this IR project.
CONTAINER = <benchbuild.utils.container.Gentoo object>
DOMAIN = None
GROUP = None
NAME = None
SRC_FILE = None
VERSION = None
build()[source]

Build the project.

clean()[source]

Clean the project build directory.

clone()[source]

Create a deepcopy of ourself.

compiler_extension

Return the compiler extension registered to this project.

configure()[source]

Configure the project.

download()[source]

Download the input source for this project.

prepare()[source]

Prepare the build directory.

run(experiment)[source]

Run the tests of this project.

This method initializes the default environment and takes care of cleaning up the mess we made after a successful run.

Args:
experiment: The experiment we run this project under
run_tests(experiment, run)[source]

Run the tests of this project.

Clients override this method to provide customized run-time tests.

Args:
experiment: The experiment we run this project under.
run: A function that takes the run command.
run_uuid

Get the UUID that groups all tests for one project run.

Args:
create_new: Create a fresh UUID (Default: False)
runtime_extension

Return the runtime extension registered for this project.

setup_derived_filenames()[source]

Construct all derived file names.

class benchbuild.project.ProjectDecorator(name, bases, attrs)[source]

Bases: benchbuild.project.ProjectRegistry

Decorate the interface of a project with the in_builddir decorator.

This is just a small safety net for benchbuild users, because we make sure to run every project method in the project’s build directory.

decorated_methods = ['build', 'configure', 'download', 'prepare', 'run_tests']
class benchbuild.project.ProjectRegistry(name, bases, attrs)[source]

Bases: type

Registry for benchbuild projects.

projects = Trie()
benchbuild.project.populate(projects_to_filter=None, group=None)[source]

Populate the list of projects that belong to this experiment.

Args:
projects_to_filter (list):
List of projects we want to assign to this experiment. We intersect the list of projects with the list of supported projects to get the list of projects that belong to this experiment.
group (str):
In addition to the project filter, we provide a way to filter whole groups.

benchbuild.report module

class benchbuild.report.BenchBuildReport(executable)[source]

Bases: plumbum.cli.application.Application

Generate Reports from the benchbuild db.

experiment_ids(ids)[source]
experiments(experiments)[source]
main(*args)[source]
outfile(outfile)[source]

benchbuild.run module

benchbuild’s run command.

This subcommand executes experiments on a set of user-controlled projects. See the output of benchbuild run --help for more information.

class benchbuild.run.BenchBuildRun(executable)[source]

Bases: plumbum.cli.application.Application

Frontend for running experiments in the benchbuild study framework.

experiment_tag(description)[source]
experiments(experiments)[source]
full()[source]
group(group)[source]
list_experiments()[source]
list_projects()[source]
main()[source]

Main entry point of benchbuild run.

pretend

Sets an attribute

projects(projects)[source]
show_config

Sets an attribute

store_config

Sets an attribute

benchbuild.run.print_projects(exp)[source]

Print a list of projects registered for that experiment.

Args:
exp: The experiment to print all projects for.

benchbuild.settings module

Settings module for benchbuild.

All settings are stored in a simple dictionary. Each setting should be modifiable via environment variable.

class benchbuild.settings.Configuration(parent_key, node=None, parent=None, init=True)[source]

Bases: object

Dictionary-like data structure to contain all configuration variables.

This serves as a configuration dictionary throughout benchbuild. You can use it to access all configuration options that are available. Whenever the structure is updated with a new subtree, all variables defined in the new subtree are updated from the environment.

Environment variables are generated from the tree paths automatically.
CFG[“build_dir”] becomes BB_BUILD_DIR
CFG[“llvm”][“dir”] becomes BB_LLVM_DIR

The configuration can be stored/loaded as JSON.

Examples:
>>> from benchbuild import settings as s
>>> c = s.Configuration('bb')
>>> c['test'] = 42
>>> c['test']
BB_TEST=42
>>> str(c['test'])
'42'
>>> type(c['test'])
<class 'benchbuild.settings.Configuration'>
filter_exports()[source]
has_default()[source]

Check, if the node contains a ‘default’ value.

has_value()[source]

Check, if the node contains a ‘value’.

init_from_env()[source]

Initialize this node from environment.

If we’re a leaf node, i.e., a node containing a dictionary that consists of a ‘default’ key, compute our env variable and initialize our value from the environment. Otherwise, init our children.

is_leaf()[source]

Check, if the node is a ‘leaf’ node.

load(_from)[source]

Load the configuration dictionary from file.

store(config_file)[source]

Store the configuration dictionary to a file.

update(cfg_dict)[source]

Update the configuration dictionary with new content.

This just delegates the update down to the internal data structure. No validation is done on the format; be sure you know what you are doing.

Args:
cfg_dict: A configuration dictionary.
value()[source]

Return the node value, if we’re a leaf node.

Examples:
>>> from benchbuild import settings as s
>>> c = s.Configuration("test")
>>> c['x'] = { "y" : { "value" : None }, "z" : { "value" : 2 }}
>>> c['x']['y'].value() == None
True
>>> c['x']['z'].value()
2
>>> c['x'].value()
TEST_X_Y=null
TEST_X_Z=2
exception benchbuild.settings.InvalidConfigKey[source]

Bases: RuntimeWarning

Warn, if you access a non-existing key in benchbuild’s configuration.

class benchbuild.settings.UUIDEncoder(skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, sort_keys=False, indent=None, separators=None, default=None)[source]

Bases: json.encoder.JSONEncoder

Encoder module for UUID objects.

default(o)[source]

Encode UUID objects as string.
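Such an encoder can be written in a few lines on top of json.JSONEncoder; this sketch shows the idea:

```python
import json
import uuid


class UUIDEncoder(json.JSONEncoder):
    """Serialize UUID objects as their canonical string form."""

    def default(self, o):
        if isinstance(o, uuid.UUID):
            return str(o)
        return super().default(o)  # defer everything else
```

With `json.dumps(obj, cls=UUIDEncoder)`, UUID values become plain strings instead of raising TypeError.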

benchbuild.settings.available_cpu_count()[source]

Get the number of available CPUs.

Number of available virtual or physical CPUs on this system, i.e. user/real as output by time(1) when called with an optimally scaling userspace-only program.

Returns:
Number of available CPUs.
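One common way to implement this with the standard library alone (a sketch, assuming the scheduler affinity mask is the most accurate answer where it is available):

```python
import os


def available_cpu_count():
    # Prefer the CPUs this process is actually allowed to run on
    # (Linux affinity mask); fall back to the total count, then to 1.
    try:
        return len(os.sched_getaffinity(0))
    except AttributeError:  # e.g. on macOS or Windows
        return os.cpu_count() or 1
```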
benchbuild.settings.escape_json(raw_str)[source]

Shell-Escape a json input string.

Args:
raw_str: The unescaped string.
benchbuild.settings.find_config(test_file=None, defaults=['.benchbuild.yml', '.benchbuild.yaml', '.benchbuild.json'], root='.')[source]

Find the path to the default config file.

We look at :root: for the :default: config file. If we can’t find it there we start looking at the parent directory recursively until we find a file named :default: and return the absolute path to it. If we can’t find anything, we return None.

Args:
default: The name of the config file we look for.
root: The directory to start looking in.
Returns:
Path to the default config file, None if we can’t find anything.
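The recursive parent-directory search described above can be sketched as follows (the filenames mirror the documented defaults; the real implementation may differ):

```python
import os

DEFAULTS = [".benchbuild.yml", ".benchbuild.yaml", ".benchbuild.json"]


def find_config(defaults=DEFAULTS, root="."):
    # Walk upward from :root: until we hit a config file
    # or the filesystem root.
    cur = os.path.abspath(root)
    while True:
        for name in defaults:
            candidate = os.path.join(cur, name)
            if os.path.isfile(candidate):
                return candidate
        parent = os.path.dirname(cur)
        if parent == cur:  # reached '/', nothing found
            return None
        cur = parent
```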
benchbuild.settings.is_yaml(cfg_file)[source]
benchbuild.settings.to_env_dict(config)[source]

Convert configuration object to a flat dictionary.
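The flattening follows the env-variable naming scheme described for Configuration (nested keys joined with underscores under a BB_ prefix). A sketch, assuming a plain nested dict rather than the real Configuration object:

```python
def to_env_dict(config, prefix="BB"):
    # Flatten {"llvm": {"dir": "/opt"}} into {"BB_LLVM_DIR": "/opt"}.
    flat = {}
    for key, value in config.items():
        name = "{}_{}".format(prefix, key.upper())
        if isinstance(value, dict):
            flat.update(to_env_dict(value, prefix=name))
        else:
            flat[name] = str(value)
    return flat
```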

benchbuild.settings.update_env()[source]

benchbuild.slurm module

Dump SLURM script that executes the selected experiment with all projects.

This provides basically the same functionality as benchbuild run, except that it just dumps a SLURM batch script that executes everything as an array job on a configurable SLURM cluster.

class benchbuild.slurm.Slurm(executable)[source]

Bases: plumbum.cli.application.Application

Generate a SLURM script.

experiment(cfg_experiment)[source]

Specify experiments to run

experiment_tag(description)[source]

A description for this experiment run

group(groups)[source]

Run a group of projects under the given experiments

main()[source]

Main entry point of benchbuild run.

projects(projects)[source]

Specify projects to run

benchbuild.test module

class benchbuild.test.BenchBuildTest(executable)[source]

Bases: plumbum.cli.application.Application

Create regression tests for polyjit from the measurements database.

get_check_line(name, module)[source]
main()[source]
opt_flags()[source]
prefix(prefix)[source]

Indices and tables