Hi! Thanks for contributing. This page contains all the details about setting up your development environment.
This is documentation for contributors developing Nunavut. If you are a user of this software you can ignore everything here.
When committing to main you must bump at least the patch number of the version
or the build will fail on the upload step.
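For example (a hypothetical sketch for illustration only, not code from this repository), a patch bump follows ordinary semantic-versioning rules:

```python
def bump_patch(version: str) -> str:
    """Increment the patch component of a MAJOR.MINOR.PATCH version string."""
    major, minor, patch = version.split(".")
    return f"{major}.{minor}.{int(patch) + 1}"


# A commit to main would take, e.g., version 1.2.3 to at least 1.2.4.
print(bump_patch("1.2.3"))  # 1.2.4
```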
tox -e local
------------
I highly recommend using the local tox environment when doing python development. It'll save you hours of lost productivity the first time it keeps you from pulling in an unexpected dependency from your global python environment. You can install tox from brew on macOS or apt-get on GNU/Linux. I'd recommend the following environment for vscode:
.. code-block:: bash

    git submodule update --init --recursive
    tox -e local
    source .tox/local/bin/activate
Our language generation verification suite uses CMake to build and run unit tests. If you are working with a native language, see Nunavut Verification Suite for details on manually running these builds and tests.
Visual Studio Code
------------------
To use vscode you'll need:

- the vscode command line (Shell Command: Install 'code' command in PATH)
- cmake (and an available GCC or Clang toolchain, or Docker to use our toolchain-as-container)
.. code-block:: bash

    cd path/to/nunavut
    git submodule update --init --recursive
    tox -e local
    source .tox/local/bin/activate
    code .
Then install the recommended extensions.
Running The Tests
-----------------
To run the full suite of tox tests locally you’ll need docker. Once you have docker installed and running do:
.. code-block:: bash

    git submodule update --init --recursive
    docker pull uavcan/toxic:py35-py39-sq
    docker run --rm -v $PWD:/repo uavcan/toxic:py35-py39-sq tox
To run a limited suite using only locally available interpreters directly on your host machine,
skip the docker invocations and run ``tox`` directly.
To run the language verification build you’ll need to use a different docker container:
.. code-block:: bash

    docker pull uavcan/c_cpp:ubuntu-20.04
    docker run --rm -it -v $PWD:/repo uavcan/c_cpp:ubuntu-20.04
    ./.github/verify.py -l c
    ./.github/verify.py -l cpp
The ``verify.py`` script is a simple command-line generator for our cmake scripts. Use its ``--help`` option for details.
Files Generated by the Tests
----------------------------
Given that Nunavut is a file generator, our tests do have to write files. Normally these files are temporary and are automatically deleted after the test completes. If you want to keep the files so you can debug an issue, provide the ``--keep-generated`` argument:
.. code-block:: bash

    pytest -k test_namespace_stropping --keep-generated
You will see each test's output under ``build/{test name}``.
.. warning::

    Don't use this option when running tests in parallel. You will get errors.
This project makes extensive use of Sybil doctests. These take the form of docstrings with a structure like this:
.. invisible-code-block: python

    from nunavut.lang.c import filter_to_snake_case

.. code-block:: python

    # an input like this:
    input = "scotec.mcu.Timer"

    # should yield:
    >>> filter_to_snake_case(input)
    'scotec_mcu_timer'
The invisible-code-block is executed but not displayed in the generated documentation, while the
code-block is both rendered with proper syntax formatting in the documentation and executed. REPL
syntax works the same as it does for doctest, and an ``assert`` is also a valid way to ensure the
example is correct, especially if used in a trailing invisible-code-block:
.. invisible-code-block: python

    assert 'scotec_mcu_timer' == filter_to_snake_case(input)
These tests are run as part of the regular pytest build. You can see the Sybil setup in the
``conftest.py`` found under the ``src`` directory but otherwise shouldn't need to worry about
it. The simple rule is: if the docstring ends up in the rendered documentation then your
code-block tests will be executed as unit tests.
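To illustrate (a hypothetical sketch, not actual Nunavut source; ``to_snake_case`` is a stand-in for a real filter), a docstring like the following carries a REPL example that Sybil would collect and execute during the pytest run; plain ``doctest`` can exercise the same example:

```python
import doctest


def to_snake_case(value: str) -> str:
    """
    Convert a dotted name into snake_case.

    .. code-block:: python

        >>> to_snake_case("scotec.mcu.Timer")
        'scotec_mcu_timer'

    """
    return value.replace(".", "_").lower()


# Sybil finds the REPL line in the rendered docstring; doctest can
# verify the same example directly (prints nothing when it passes):
doctest.run_docstring_examples(to_snake_case, {"to_snake_case": to_snake_case})
```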
import file mismatch
--------------------
If you get an error like the following:
.. code-block:: text

    _____ ERROR collecting test/gentest_dsdl/test_dsdl.py _____
    import file mismatch:
    imported module 'test_dsdl' has this __file__ attribute:
      /my/workspace/nunavut/test/gentest_dsdl/test_dsdl.py
    which is not the same as the test file we want to collect:
      /repo/test/gentest_dsdl/test_dsdl.py
    HINT: remove __pycache__ / .pyc files and/or use a unique basename for your test file modules
Then you are probably a wonderful developer who is running the unit tests locally. Pytest's cache is interfering with your docker test run. To work around this, simply delete the pycache files. For example:
.. code-block:: bash

    #! /usr/bin/env bash

    clean_dirs="src test"

    for clean_dir in $clean_dirs
    do
        find $clean_dir -name __pycache__ | xargs rm -rf
        find $clean_dir -name \.coverage\* | xargs rm -f
    done
Note that we also delete the .coverage intermediates since they may contain different paths between the container and the host build.
Alternatively, just nuke everything temporary using git clean:
.. code-block:: bash

    git clean -X -d -f
Building The Docs
-----------------
We rely on Read the Docs to build our documentation from GitHub, but we also verify this build
as part of our tox build. This means you can view a local copy after completing a full, successful
test run (see Running The Tests) or by running the docs target directly:

.. code-block:: bash

    docker run --rm -t -v $PWD:/repo uavcan/toxic:py35-py39-sq /bin/sh -c "tox -e docs"

You can then open the ``index.html`` under ``.tox/docs/tmp`` or run a local web-server:

.. code-block:: bash

    python3 -m http.server --directory .tox/docs/tmp &
    open http://localhost:8000/index.html
Of course, you can just use Visual Studio Code to build and preview the docs using the
``> reStructuredText: Open Preview`` command.
We manually generate the API documentation using ``sphinx-apidoc``. To regenerate it use ``tox -e gen-apidoc``.

.. warning::

    ``tox -e gen-apidoc`` will start by deleting the ``docs/api`` directory.
Coverage and Linting Reports
----------------------------
We publish the results of our coverage data to SonarCloud, and the tox build will fail for any mypy
or black errors, but you can view additional reports locally under the ``.tox`` directory.
We generate a local HTML coverage report. You can open the ``index.html`` under ``.tox/report/tmp`` or run a local web-server:

.. code-block:: bash

    python -m http.server --directory .tox/report/tmp &
    open http://localhost:8000/index.html
At the end of the mypy run we generate the following summaries: