By default, system tests will build a weblog image that ships the latest production version of the specified tracer language library.
But we often want to run system tests against unmerged changes. The general approach is to identify the git commit hash that contains your changes and use it to download a targeted build of the tracer. Note: ensure the commit is pushed to a remote branch first, and use the full commit hash, not the short form. You can find the hash with `git log` or from the GitHub UI.
- Add a file `agent-image` in `binaries/`. The content must be a valid docker image name containing the datadog agent, like `datadog/agent` or `datadog/agent-dev:master-py3`.
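As a sketch, the step above is a single-line file write (the image name is one of the examples from the text):

```shell
# Point system-tests at a specific agent image by writing its name
# into binaries/agent-image (run from the system-tests repo root).
mkdir -p binaries
printf '%s\n' "datadog/agent-dev:master-py3" > binaries/agent-image
```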
- Tracer: There are two ways to run the C++ library tests with a custom tracer:
  - Create a file `cpp-load-from-git` in `binaries/`. Content examples:
    - `https://github.com/DataDog/dd-trace-cpp@main`
    - `https://github.com/DataDog/dd-trace-cpp@<COMMIT HASH>`
  - Clone the dd-trace-cpp repo inside `binaries`
- Profiling: add a ddprof release tar to the `binaries` folder. Name the file `install_ddprof`.
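The first Tracer option above amounts to writing a git reference into a file (content example taken from the list above):

```shell
# Pin the C++ tracer build to a branch by writing a git reference
# into binaries/cpp-load-from-git.
mkdir -p binaries
printf '%s\n' "https://github.com/DataDog/dd-trace-cpp@main" > binaries/cpp-load-from-git
```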
There are three ways to run system-tests with a custom Kong plugin:
- Place a `kong-plugin-ddtrace*.rock` file (`.src.rock` artifact from CI) in `binaries/`. The build will extract the source files from the rock package.
- Clone the kong-plugin-ddtrace repo inside `binaries/`: `cd binaries && git clone https://github.com/DataDog/kong-plugin-ddtrace.git`
- Use `load-binary.sh` to download the latest CI artifact automatically: `./utils/scripts/load-binary.sh cpp_kong`
To test with a custom dd-trace-cpp C binding, you can additionally:
- Create a file `cpp-load-from-git` in `binaries/` (e.g. `https://github.com/DataDog/dd-trace-cpp@main`)
- Clone dd-trace-cpp inside `binaries/`
- Place a pre-built `libdd_trace_c.so` in `binaries/`
- Add a file `datadog-dotnet-apm-<VERSION>.tar.gz` in `binaries/`. `<VERSION>` must be a valid version number.
  - One way to get that file is from an Azure pipeline (either a recent one from master if the changes you want to test were merged recently, or the one from your PR if it's open).
Create a file `golang-load-from-go-get` under the `binaries` directory that specifies the target build. The content of this file will be installed by the weblog or parametric app via `go get` when the test image is built.
- Content examples:
  - `github.com/DataDog/dd-trace-go/v2@main`: Test the main branch
  - `github.com/DataDog/dd-trace-go/v2@v2.0.0`: Test the 2.0.0 release
  - `github.com/DataDog/dd-trace-go/v2@<commit_hash>`: Test un-merged changes
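As a sketch, the file creation above is a one-liner (the `@main` target is one of the content examples listed):

```shell
# Pin dd-trace-go by writing a go-get target into
# binaries/golang-load-from-go-get.
mkdir -p binaries
printf '%s\n' "github.com/DataDog/dd-trace-go/v2@main" > binaries/golang-load-from-go-get
```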
To change the Orchestrion version, create a file `orchestrion-load-from-go-get` under the `binaries` directory that specifies the target build. The content of this file will be installed by the weblog or parametric app via `go get` when the test image is built.
- Content examples:
  - `github.com/DataDog/orchestrion@latest`: Test the latest release
  - `github.com/DataDog/orchestrion@v1.1.0`: Test the 1.1.0 release
  - `github.com/DataDog/orchestrion@<commit_hash>`: Test un-merged changes
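Likewise for Orchestrion, the target build is a one-line file (the `@latest` target is one of the content examples listed):

```shell
# Select an Orchestrion build by writing a go-get target into
# binaries/orchestrion-load-from-go-get.
mkdir -p binaries
printf '%s\n' "github.com/DataDog/orchestrion@latest" > binaries/orchestrion-load-from-go-get
```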
Follow these steps to run tests with a custom Java Tracer version:
To run a custom Tracer version from a local branch:
- Clone the repo and check out the branch you'd like to test:

  ```bash
  git clone git@github.com:DataDog/dd-trace-java.git
  cd dd-trace-java
  ```

  By default you will be on the master branch; if you'd like to run system-tests on the changes you made to a local branch, `git checkout` that branch before proceeding.
- Build the Java Tracer artifacts:

  ```bash
  ./gradlew :dd-java-agent:shadowJar :dd-trace-api:jar
  ```
- Copy both artifacts into the `system-tests/binaries/` folder:
  - The Java tracer agent artifact `dd-java-agent-*.jar` from `dd-java-agent/build/libs/`
  - Its public API `dd-trace-api-*.jar` from `dd-trace-api/build/libs/`
Note: you should have only TWO jar files in `system-tests/binaries`. Do NOT copy the sources or javadoc jars.
- Build your selected weblog:

  ```bash
  ./build.sh java [--weblog-variant spring-boot]
  ```

- Run tests from the `system-tests` folder:

  ```bash
  TEST_LIBRARY=java ./run.sh test_span_sampling.py::test_single_rule_match_span_sampling_sss001
  ```

To run a custom tracer version from a remote branch:
- Find your remote branch on GitHub and navigate to the `ci/circleci: build_lib` test.
- Open the details of the test in CircleCI and click on the `Artifacts` tab.
- Download `libs/dd-java-agent-*-SNAPSHOT.jar` and `libs/dd-trace-api-*-SNAPSHOT.jar` and move them into the `system-tests/binaries/` folder.
- Follow Step 4 from above to run the Parametric tests.
Follow these steps to run the OpenTelemetry drop-in test with a custom drop-in version:
- Download the custom version from https://repo1.maven.org/maven2/io/opentelemetry/javaagent/instrumentation/opentelemetry-javaagent-r2dbc-1.0/
- Copy the downloaded `opentelemetry-javaagent-r2dbc-1.0-{version}.jar` into the `system-tests/binaries/` folder
Then run the OpenTelemetry drop-in test from the repo root folder:

```bash
./build.sh java
TEST_LIBRARY=java ./run.sh INTEGRATIONS -k Test_Otel_Drop_In
```
There are three ways to run system-tests with a custom Node.js tracer:
- Using a custom tracer existing in a remote branch:
  - Create a file `nodejs-load-from-npm` in `binaries/`
  - In the file, add the path to the branch of the custom tracer. The content will be installed by `npm install`.
  - Content examples:
    - `DataDog/dd-trace-js#master`
    - `DataDog/dd-trace-js#<commit-hash>`
  - Run any scenario normally with `./build.sh nodejs` and `./run.sh` and your remote changes will be in effect
- Using a custom tracer existing in a local branch:
  - Create a file `nodejs-load-from-local` in `binaries/`
  - In the file, add the relative path to the `dd-trace-js` repo. Content example:
    - If the `dd-trace-js` repo is in the same directory as the `system-tests` repo, add `../dd-trace-js` to the file.
  - This method disables installing `dd-trace` with `npm install`; instead, the content of the file is used as the location of the `dd-trace-js` repo, which is mounted as a volume and linked via `npm link`. This also removes the need to rebuild the weblog image, since the code is mounted at runtime.
- Cloning a custom tracer into `binaries`:
  - Clone the `dd-trace-js` repo inside `binaries`.
  - Check out the remote branch with the custom tracer in the `dd-trace-js` repo that was just cloned.
  - Run any scenario normally with `./build.sh nodejs` and `./run.sh` and your remote changes will be in effect
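The remote-branch option above reduces to a one-line file write (the `#master` target is one of the content examples listed):

```shell
# Install the Node.js tracer from a remote branch: write an npm target
# into binaries/nodejs-load-from-npm.
mkdir -p binaries
printf '%s\n' "DataDog/dd-trace-js#master" > binaries/nodejs-load-from-npm
```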
- Place `datadog-setup.php` and `dd-library-php-[X.Y.Z+commitsha]-*-linux-gnu.tar.gz` in the `binaries/` folder
  - You can download the `.tar.gz` from the `package extension: [arm64, aarch64-unknown-linux-gnu]` (or the `amd64` one if you're not on ARM) job artifacts (from the `package-trigger` sub-pipeline), from a CI run of your branch.
  - The `datadog-setup.php` can be copied from the dd-trace-php repository root into the binaries folder.
Then run the tests from the repo root folder:

```bash
./build.sh -i runner
TEST_LIBRARY=php ./run.sh PARAMETRIC
```

or, to run a single test:

```bash
TEST_LIBRARY=php ./run.sh PARAMETRIC -k <my_test>
```
⚠️ If you are seeing DNS resolution issues when running the tests locally, add the following config to the Docker daemon:

```json
"dns-opts": [
    "single-request"
],
```

Use one of the four options:
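As a hedged sketch (the daemon config location varies by platform; on Linux it is typically `/etc/docker/daemon.json`, and Docker Desktop exposes it in its settings), the fragment above slots into the top-level JSON object:

```json
{
  "dns-opts": [
    "single-request"
  ]
}
```

Restart the Docker daemon after changing this file.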
- Add a `.tar.gz` or a `.whl` file in `binaries`; pip will install it
- Add a `python-load-from-pip` file in `binaries`; its content will be sent to `pip install`
- Add a `python-load-from-s3` file in `binaries`, with a dd-trace-py commit ID or branch inside; the corresponding wheel will be loaded from S3
- Clone the dd-trace-py repo inside `binaries`: `cd binaries && git clone https://github.com/DataDog/dd-trace-py.git`
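For the `python-load-from-pip` option, the file content is passed to `pip install`, so any valid requirement works; a sketch using a pip VCS requirement (the branch name is illustrative):

```shell
# Install dd-trace-py from a git branch via binaries/python-load-from-pip.
mkdir -p binaries
printf '%s\n' "git+https://github.com/DataDog/dd-trace-py.git@main" > binaries/python-load-from-pip
```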
For fast local development (for `PARAMETRIC`, `INTEGRATION_FRAMEWORKS`, otel and end-to-end scenarios):
- Prerequisites (for most use cases, a one-time setup): Make sure the native extensions are built for the Python version being used by the scenario you are running. For example, the `PARAMETRIC` and `INTEGRATION_FRAMEWORKS` scenarios require Python 3.11.14 from the `python:3.11-slim` image.
  - If they are not available (for example, if `ddtrace/internal/_encoding.cpython-311-aarch64-linux-gnu.so` does not exist), you will need to build them:
    - Ensure Docker is running. In `dd-trace-py`, run `scripts/ddtest` to start a shell based on the `testrunner` image.
    - Run `pyenv local [PYTHON_VERSION] && pip install -e .` to install the dd-trace-py package in development mode, which will build the native extensions. Replace `[PYTHON_VERSION]` with the appropriate version for the weblog you want to run (for example `3.11` for flask-poc). The required version can be found in the base image dockerfile of the weblog.
    - Verify the native extensions are built by checking for the existence of `ddtrace/internal/_encoding.cpython-311-aarch64-linux-gnu.so`.
    - For any of these steps, swap out the Python version used/checked and the architecture (e.g. `aarch64-linux-gnu` or `x86_64-linux-gnu`) as needed.
- Add a `python-load-from-local` file in `binaries`, with its contents being the relative path to the dd-trace-py repo on your machine
- Build and run system-tests as normal. The scenarios will add a volume mount for the dd-trace-py repo from the relative path in the `python-load-from-local` file, and also add it to the `PYTHONPATH` environment variable for the client container.
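The local-development setup above reduces to one file write (the path assumes dd-trace-py is checked out as a sibling of system-tests):

```shell
# Mount a local dd-trace-py checkout into the test containers by writing
# its relative path into binaries/python-load-from-local.
mkdir -p binaries
printf '%s\n' "../dd-trace-py" > binaries/python-load-from-local
```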
You have two ways to run system-tests with a custom Ruby Tracer version:
- Create `ruby-load-from-bundle-add` in the `binaries` directory with the content that should be added to the `Gemfile`. Content example: `gem 'datadog', git: 'https://github.com/Datadog/dd-trace-rb', branch: 'master', require: 'datadog/auto_instrument'`. To point to a specific branch, replace `branch: 'master'` with `branch: '<your-branch>'`. To point to a specific commit, replace the `branch: 'master'` entry with `ref: '<commit-hash>'`.
- Clone the dd-trace-rb repo inside `binaries` and check out the branch that you want to test against.
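The first option amounts to writing the Gemfile line into a file (content example from the text above):

```shell
# Add a custom Gemfile entry for the Ruby tracer via
# binaries/ruby-load-from-bundle-add.
mkdir -p binaries
printf '%s\n' "gem 'datadog', git: 'https://github.com/Datadog/dd-trace-rb', branch: 'master', require: 'datadog/auto_instrument'" > binaries/ruby-load-from-bundle-add
```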
You can also use the `utils/scripts/watch.sh` script to sync your local dd-trace-rb repo into the binaries folder:

```bash
./utils/scripts/watch.sh /path/to/dd-trace-rb
```

You have two ways to run system-tests with a custom Rust Tracer version:
- Create `rust-load-from-git` in the `binaries` directory with the name of the branch or the ref you want to test.
- Clone the dd-trace-rs repo inside `binaries` and check out the branch that you want to test against.
Note: you cannot have `rust-load-from-git` and a `dd-trace-rs` folder at the same time, or the build will fail with exit code 128.
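The first option is a one-line file write (the branch name here is illustrative):

```shell
# Pin the Rust tracer to a branch or ref via binaries/rust-load-from-git.
mkdir -p binaries
printf '%s\n' "main" > binaries/rust-load-from-git
```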
You can also use the `utils/scripts/watch.sh` script to sync your local dd-trace-rs repo into the binaries folder:

```bash
./utils/scripts/watch.sh /path/to/dd-trace-rs
```

- Copy a file `waf_rule_set` in `binaries/`
Most of the ways to run system-tests with a custom tracer version involve modifying the `binaries` directory. Modifying the binaries will alter the tracer version used across your local machine, so once you're done testing with the custom tracer, ensure you remove it. For example, for Python:

```bash
rm -rf binaries/python-load-from-pip
```

Hint: for components that allow having the repo in `binaries`, use `mount --bind src dst` to mount your local repo; any build of system-tests will then use it.