The Docker image to automatically analyze Uiua solutions submitted to Exercism.
To get started:

- Build the analyzer, conforming to the analyzer interface specification.
- Update the files to match your track's needs. At the very least, you'll need to update `bin/run.sh` (sketched below), `Dockerfile` and the test solutions in the `tests` directory.
- Tip: look for `TODO:` comments to point you towards code that needs updating
- Tip: look for `OPTIONAL:` comments to point you towards code that could be useful
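For orientation, here is a minimal sketch of the shape a `bin/run.sh` conforming to the analyzer interface generally takes: three positional arguments in, an `analysis.json` out. It is an illustrative stand-in, not this repository's actual script, and the empty-result placeholder is just the simplest valid output.

```bash
#!/usr/bin/env bash
# Illustrative sketch only; the real bin/run.sh will differ.
set -euo pipefail

slug="$1"          # exercise slug, e.g. "two-fer"
solution_dir="$2"  # directory containing the submitted solution
output_dir="$3"    # directory to write analysis.json to

mkdir -p "$output_dir"

# A real analyzer inspects the solution here; this placeholder just emits
# an empty (but valid) result: a JSON object with a "comments" array.
echo '{"comments": []}' > "$output_dir/analysis.json"
```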
To analyze an arbitrary exercise, do the following:
- Open a terminal in the project's root
- Run `./bin/run.sh <exercise-slug> <solution-dir> <output-dir>`

Once the analyzer has finished, its results will be written to `<output-dir>/analysis.json`.
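As a concrete example (the slug and paths here are hypothetical):

```bash
# Analyze a solution for the hypothetical slug "two-fer", reading the
# solution from ./solution and writing results to ./output.
./bin/run.sh two-fer ./solution ./output

# An analyzer with nothing to report writes something like {"comments": []}.
cat ./output/analysis.json
```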
The Docker-based script below is provided for testing purposes, as it mimics how analyzers run in Exercism's production environment.
To analyze an arbitrary exercise using the Docker image, do the following:
- Open a terminal in the project's root
- Run `./bin/run-in-docker.sh <exercise-slug> <solution-dir> <output-dir>`

Once the analyzer has finished, its results will be written to `<output-dir>/analysis.json`.
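Usage mirrors the local script; a hypothetical invocation:

```bash
# Same three arguments as bin/run.sh, but the analysis itself runs inside
# the Docker image rather than directly on the host.
./bin/run-in-docker.sh two-fer ./solution ./output
cat ./output/analysis.json
```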
To run the tests to verify the behavior of the analyzer, do the following:
- Open a terminal in the project's root
- Run `./bin/run-tests.sh`
These are golden tests that compare the `analysis.json` generated by running the current state of the code against the "known good" `tests/<test-name>/expected_analysis.json`. All files created during the test run itself are discarded.

When you've made modifications to the code that will result in a new "golden" state, you'll need to generate and commit a new `tests/<test-name>/expected_analysis.json` file.
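As a sketch of how the pieces fit together, assume a hypothetical test named `example-1` whose directory also serves as the solution directory (check the existing entries under `tests/` for this repository's actual layout). One possible way to refresh its golden file:

```bash
# tests/example-1/
# ├── <solution files for the exercise under test>
# └── expected_analysis.json   # the "known good" output
#
# Re-run the analyzer and promote its output to the new golden state:
./bin/run.sh example-1 tests/example-1 tests/example-1
mv tests/example-1/analysis.json tests/example-1/expected_analysis.json
git add tests/example-1/expected_analysis.json
```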
The Docker-based script below is provided for testing purposes, as it mimics how analyzers run in Exercism's production environment.
To run the tests to verify the behavior of the analyzer using the Docker image, do the following:
- Open a terminal in the project's root
- Run `./bin/run-tests-in-docker.sh`
These are golden tests that compare the `analysis.json` generated by running the current state of the code against the "known good" `tests/<test-name>/expected_analysis.json`. All files created during the test run itself are discarded.

When you've made modifications to the code that will result in a new "golden" state, you'll need to generate and commit a new `tests/<test-name>/expected_analysis.json` file.
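For intuition, scripts like this usually build the image and then execute the test suite inside a container. A rough sketch under that assumption; the image tag and in-image path are hypothetical, not taken from this repository:

```bash
# Not this repository's actual script; just the usual pattern.
docker build --rm -t uiua-analyzer-dev .

# Run the golden tests inside the freshly built image, assuming the
# analyzer is installed at a hypothetical /opt/analyzer in the image.
docker run --rm --entrypoint /opt/analyzer/bin/run-tests.sh uiua-analyzer-dev
```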
There are two scripts you can use to benchmark the analyzer:
- `./bin/benchmark.sh`: benchmark the analyzer code
- `./bin/benchmark-in-docker.sh`: benchmark the Docker image
These scripts can give a rough estimate of the analyzer's performance. Bear in mind, though, that performance on Exercism's production servers is often lower.
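If you just want a quick wall-clock number for a single solution outside those scripts, a general-purpose benchmarking tool such as hyperfine also works (slug and paths are hypothetical):

```bash
# Time repeated runs of the analyzer on one solution, with a warmup run.
hyperfine --warmup 1 './bin/run.sh two-fer ./solution ./output'
```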