How to generate a test with Kolo

  1. Run pip install "kolo"

  2. Add "kolo.middleware.KoloMiddleware" to your MIDDLEWARE

  3. Make a request to your local Django app

  4. Browse to localhost:8000/_kolo/ to view your request

  5. Generate your integration test 🚀
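Step 2 looks something like this in a typical settings.py. This is a sketch: the DEBUG guard is our suggestion for keeping Kolo out of production, and placing the middleware first lets it observe as much of the request/response cycle as possible.

```python
# settings.py (sketch): enable Kolo's middleware for local development only
DEBUG = True

MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "django.contrib.sessions.middleware.SessionMiddleware",
    "django.middleware.common.CommonMiddleware",
]

if DEBUG:
    # Put KoloMiddleware first so it can trace the whole request
    MIDDLEWARE = ["kolo.middleware.KoloMiddleware"] + MIDDLEWARE
```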

Customising test generation

Today, Kolo supports pytest and unittest out of the box. You can generate a pytest-style test by specifying the django_request_pytest.py.j2 template:

kolo generate-test <trace_id> --template=django_request_pytest.py.j2

You can customise the name of the generated test class and test method:

kolo generate-test <trace_id> --test-class="GeneratedTestCase" --test-name="test_generated"

We’re launching new ways every week to fully customise tests, and would love to hear your use case!

Django Model Factories

If you use factory boy you can tell Kolo where to find your factories:

# .kolo/config.toml

[test_generation]

factories = [
    { path = "tests.factories.UserFactory" },
    { path = "tests.factories.ArticleFactory", pk = true },
    # ...
]

Here path is the dotted path to the factory class, and pk tells Kolo to include the recorded primary key value in the factory create call.

Templates

If you want more control, you can define your own Jinja template:

kolo generate-test trc_01GZZWS4D4TA8PVJ9D040KEKZ3 --template="path/to/template.py.j2"

For example, this cut-down version of Kolo’s pytest template:

{% if sql_fixtures or asserts or test_client_call_pytest %}
import pytest

{% for import in imports %}{{ import }}
{% endfor %}

@pytest.mark.django_db()
def {{ test_name }}(client):
    {% for fixture in sql_fixtures %}
    {% for line in fixture.template_parts %}
    {{ line }}
    {% endfor %}
    {% endfor %}

    {% if test_client_call_pytest %}
    response = {{ test_client_call_pytest }}

    assert response.status_code == {{ response.status_code }}

    {% for fixture in asserts %}
    {% for line in fixture.pytest_template_parts %}
    {{ line }}
    {% endfor %}
    {% endfor %}
    {% endif %}
{% endif %}

Trace processors

For complete flexibility, you can define custom “trace processors”. These are functions that take a context and return data that will be merged into the context. For example:

def my_function_processor(context):
    frames = context["_frames"]
    my_function_frames = [
        f for f in frames if f["type"] == "frame" and f["co_name"] == "my_function"
    ]
    call_args = [f["locals"] for f in my_function_frames if f["event"] == "call"]
    return_values = [f["arg"] for f in my_function_frames if f["event"] == "return"]
    
    return {"my_function_calls": list(zip(call_args, return_values))}

Your custom processors should be added to your .kolo/config.toml file:

# .kolo/config.toml

[test_generation]

trace_processors = [
    "path.to.my_function_processor",
]

You can now use my_function_calls in a custom template, or in another processor later in the list.
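You can see what a processor returns by running it against a hand-made context. The frame dicts below imitate the shape of Kolo's recorded frames, inferred from the fields the processor above reads:

```python
def my_function_processor(context):
    frames = context["_frames"]
    my_function_frames = [
        f for f in frames if f["type"] == "frame" and f["co_name"] == "my_function"
    ]
    call_args = [f["locals"] for f in my_function_frames if f["event"] == "call"]
    return_values = [f["arg"] for f in my_function_frames if f["event"] == "return"]
    return {"my_function_calls": list(zip(call_args, return_values))}

# Hand-crafted frames imitating a trace of my_function(x=1) returning 2
context = {
    "_frames": [
        {"type": "frame", "co_name": "my_function", "event": "call",
         "locals": {"x": 1}, "arg": None},
        {"type": "frame", "co_name": "my_function", "event": "return",
         "locals": {"x": 1}, "arg": 2},
    ]
}
print(my_function_processor(context))  # {'my_function_calls': [({'x': 1}, 2)]}
```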

Field parsers

Kolo supports many model fields provided by Django. Sometimes you may have a third-party field which isn’t being parsed correctly. In this case you can define a custom field parser:

def parse_custom_field(value, field):
    if field != "dotted.field.path.CustomField":
        return value

    # custom parsing logic

    return parsed_value
Then register your field parsers in your .kolo/config.toml file:

# .kolo/config.toml

[test_generation]

field_parsers = [
    "path.to.parse_custom_field",
]
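As a concrete (entirely hypothetical) example, suppose a third-party MoneyField records its value as a string and you want Decimal values in your generated tests. The field path and parser name here are invented for illustration:

```python
from decimal import Decimal

def parse_money_field(value, field):
    # Only handle our hypothetical field type
    if field != "myapp.fields.MoneyField":
        # Not our field: return the value unchanged so other parsers can run
        return value
    return Decimal(value)
```

Calling `parse_money_field("9.99", "myapp.fields.MoneyField")` returns `Decimal("9.99")`, while any other field path passes its value through untouched.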

Known limitations

  • Kolo autogenerates test fixtures, but in some cases there isn’t enough data to generate them accurately. In those cases, you will need to either add additional fixtures manually or tweak the ones we generate.

  • time-machine or freezegun is used to make sure dealing with time is easier and more consistent. If you’re not using one of these, you will need to install one, or define a custom template or processor for your preferred library.

  • httpretty is used to mock expected outbound HTTP requests. If you’re not using httpretty, you will need to install it or manually swap httpretty for your preferred HTTP mocking library.