Testing

A genome scale model is almost never finished. Thousands of hours of manual curation can go into a model and, as a result, changes can break things. For this reason it is good practice to work in a test driven manner. Creating good test cases ensures that if a model meets the criteria for experimental validation once, it continues to meet those criteria after subsequent changes.

Running the default tests

A test report can be generated by the tester, a command line utility which, by default, loads each model, set of conditions and design and performs FBA simulations. This ensures that any changes to the project files do not break existing designs, models and conditions.

$ gsmodutils test

The output from the terminal should look something like this:

------------------------- gsmodutils test results -------------------------
Running tests: ....
Default project file tests (models, designs, conditions):
Counted 4 test assertions with 0 failures
Project file completed all tests without error
    --model::e_coli_core.json

    --design::mevalonate_calvin

    --design::mevalonate_cbb

    --design::cbb_cycle

Ran 4 test assertions with a total of 0 errors (100.0% success)

Custom tests

Whilst the default tests provided by the tool are a useful way of ensuring that the project files remain valid after manual curation, they cannot capture all design goals. Such design goals are usually based on a data driven approach to genome scale model development and often require a deeper understanding of how an organism functions.

The simplest way to do this is in Python:

from gsmodutils import GSMProject
project = GSMProject('./') # insert path to project

reactions = ["RXID_1", ...] # List of essential reactions

flux = dict(PYR=[0.5, 1000]) # Required flux for a given reaction id

project.add_essential_pathway('pathway_x', description='Example pathway', reactions=reactions, reaction_fluxes=flux)

This will create the file tests/test_pathway_x.json. Alternatively, tests can be created by adding JSON files to the tests directory, as long as they follow the naming convention test_NAME.json. These JSON files have the following required fields:

conditions - JSON array (list of project conditions to be loaded and tested)
models - JSON array (list of project models to be loaded and tested)
designs - JSON array (list of project design ids to be loaded and tested)
reaction_fluxes - JSON object (mapping of reaction ids to [lower, upper] allowed flux ranges)
required_reactions - JSON array (list of reaction ids that must carry flux)
description - JSON string (human readable description of the test)

For example, the file tests/test_example.json might look like this:

{
    "conditions": [],
    "models": [],
    "designs": ["my_pathway_01"],
    "reaction_fluxes": {
        "Biomass": [0.21, 1000]
    },
    "required_reactions": ["reaction_1"],
    "description": "Make sure reaction 1 is carrying flux. Make sure Biomass is above 0.21"
}

This adds a test that ensures that, with the design my_pathway_01 applied, the reaction with the id reaction_1 carries flux and the flux through the Biomass reaction is at least 0.21.
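The checks such a test performs can be pictured in plain Python. Below is a minimal sketch of the flux-range and required-reaction logic, assuming a solved flux distribution is available as a plain dict (the helper name is illustrative and not part of gsmodutils):

```python
def check_fluxes(fluxes, reaction_fluxes, required_reactions):
    """Check a {reaction_id: flux} mapping against a JSON test spec.

    reaction_fluxes maps reaction ids to [lower, upper] allowed ranges;
    required_reactions lists reactions that must carry non-zero flux.
    Returns a list of failure messages (empty means the test passes).
    """
    failures = []
    for rid, (lower, upper) in reaction_fluxes.items():
        flux = fluxes.get(rid, 0.0)
        if not lower <= flux <= upper:
            failures.append("{} flux {} outside [{}, {}]".format(rid, flux, lower, upper))
    for rid in required_reactions:
        if fluxes.get(rid, 0.0) == 0.0:
            failures.append("{} carries no flux".format(rid))
    return failures

# The example spec above: Biomass in [0.21, 1000], reaction_1 must carry flux
print(check_fluxes({"Biomass": 0.5, "reaction_1": 1.2},
                   {"Biomass": [0.21, 1000]}, ["reaction_1"]))  # -> []
```

An empty list means the design passes; each failure message corresponds to one failed assertion in the report.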

These tests will automatically be picked up and run by gsmodutils test. Note that badly formatted test files will not run and will throw an error.
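Since a badly formatted file causes an error, it can help to sanity-check a test file before running the tester. A minimal sketch (this helper is illustrative and not part of the gsmodutils API; the field list mirrors the required fields documented above):

```python
import json

# Required fields for test_NAME.json files, as documented above
REQUIRED_FIELDS = {"conditions", "models", "designs",
                   "reaction_fluxes", "required_reactions", "description"}

def check_test_file(text):
    """Return a list of problems with a test JSON document (empty = OK)."""
    try:
        doc = json.loads(text)
    except ValueError as exc:
        return ["not valid JSON: {}".format(exc)]
    return ["missing field: {}".format(f)
            for f in sorted(REQUIRED_FIELDS - set(doc))]

print(check_test_file('{"conditions": [], "models": []}'))
```

Running this over each tests/test_*.json file catches syntax errors (such as single quotes or missing commas) before the test run fails.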

Writing python test cases

Many use cases require more complex functionality. For this reason, gsmodutils allows users to write fully featured Python test cases. Any Python code can be used, and assertion statements can be written and included in the test reports.

Any file of the format tests/test_*.py will be included in the test cases run by the project tester instance.

Only functions with the prototype def test_<name>(model, project, log) will be called by the tester.

In test cases, use the method log.assertion(<bool: statement>, <str: success>, <str: failure>) to record the result of a given test assertion.

log is always an instance of the gsmodutils.testutils.TestRecord class.

For example, create the python module tests/test_my_model.py and then add the code:

def test_model(model, project, log):
    solution = model.optimize()
    log.assertion(solution.objective_value > 0.0, "Model grows", "Model does not grow")

When creating tests, the class gsmodutils.test.utils.ModelTestSelector can be used as a helper decorator to load models, conditions and designs with specific names. The same test function will be called repeatedly with all combinations of the models, designs and conditions specified.

from gsmodutils.test.utils import ModelTestSelector

@ModelTestSelector(models=[], conditions=[], designs=[])
def test_func(model, project, log):
    log.assertion(True, "Works", "Does not work", "Test")

As with JSON tests, these will be picked up automatically by gsmodutils test. Any output to standard out (e.g. using Python's print) is also captured with this approach. Note that running project tests executes arbitrary Python code, so this should not be used in untrusted environments.

Performing tests on loaded models

If you have an in-memory model object that has been modified, gsmodutils supports running existing tests against it. For example:

from gsmodutils import GSMProject

project = GSMProject()
# Load a model or design
model = project.load_design("<some_design>")
# Make some changes
model.reactions.get_by_id("SOME_REACTION").knock_out()
model.run_tests()

The progress and results are reported only for those tests that apply to the given loaded model or design.

Please note that tests for designs downstream of a given model (i.e. designs based on this model, or child designs) will not be run in this setting. Testing these requires the changes to be saved to disk so that they can be loaded into other designs/models within the project.