# End-to-end testing your Rust service
If you’re building a web API in Rust, you need a way to test your endpoints end to end. Unit tests ensure your logic is correct, but a proper end-to-end test can also verify that your infrastructure, routing, database migrations, and security settings are correct. Since most modern services manage these pieces with code, it’s a good idea to test them just like your application code. One of the best ways is with an end-to-end test in your CI/CD process. For Rust services, `cargo` makes this painless.
## The point of an end-to-end test
With an end-to-end test, your goal is to experience your service from the perspective of a client. You want to verify that the client won’t have any problems with your latest deploy (other than versioned, planned breaking changes). Unit tests and localized use-case tests can verify your business logic, but an end-to-end test also checks that your software, hardware, networking, and permissions all work together on the infrastructure your service runs on and interacts with. A robust end-to-end suite with good coverage also helps you detect regressions in that infrastructure, which lets you make infrastructure changes with more confidence.
## Idiomatic Rust testing
Part of the reason Rust is a great language is that it comes with the toolbox included. By "toolbox included" I mean it’s much more than just a language: if you install Rust using `rustup` (the canonical, preferred method), you also get `cargo`, a package manager that wraps a few other standard tools, including a formatter, a linter, and, most salient for us, test tools. With `cargo test` we can handle all the tests we might want using the built-in annotations and support Rust provides, including unit tests in our source files and integration tests in the `tests` directory.
Writing a unit test is easy: add a test module to the same file as your application code and mark functions with the `#[test]` annotation. Here’s an example:
```rust
#[cfg(test)]
mod tests {
    #[test]
    fn run_test() {
        // foo
    }
}
```
Because of the `#[cfg(test)]` annotation, the compiler knows not to include this code in your actual builds; it is only compiled and run when you execute `cargo test`.
For integration tests, we only want to test the publicly exposed bits of our API, so idiomatically these tests go in a separate `tests` directory where they can’t access any private code. Like unit tests, these need to be annotated with `#[test]` and are only run by `cargo test`. You can read more about unit and integration testing in the Rust book.
While our code should always have unit tests, our primary focus in this article is actually on those integration tests. If you’re writing a Rust library that you intend for other Rust code to use, you might have a public Rust API you need to test. But in our case, we’re talking about a service providing a web API; we don’t have a Rust API to hit. Instead, we will be making HTTP requests over the network, and they will probably be `async`. That changes how we’ll approach writing these tests.
## Running unit tests and end-to-end tests separately
The first change that comes with running end-to-end tests is that you’ll probably want to run your unit tests and integration tests separately. By default, `cargo test` runs unit tests and integration tests at the same time. This isn’t a problem when the code you’re testing is all available locally: it’s fast because there’s no network latency involved, and you can always test your latest code. When you’re running tests against an actual nonprod environment, though, the tests take much longer, and your new code has to be deployed to that environment before you can test it.
Since deploying can be a lengthy process (at least a few minutes), you probably want to run your unit tests before you bother deploying; it doesn’t make sense to put the code out there if the logic isn’t correct. Only once the unit tests pass are you free to deploy, and only after deploying do you want to run your end-to-end tests.
There are two ways to achieve this separation:

- use Rust’s built-in `#[ignore]` annotation, or
- configure your integration tests to skip themselves using environment variables.
Using the `#[ignore]` annotation is the simpler route. As you’re writing end-to-end tests in your `tests` directory and annotating them with `#[test]`, just go ahead and add `#[ignore]` on the next line, like so:
```rust
#[test]
#[ignore]
fn run_test() {
    // foo
}
```
Now, when you run `cargo test`, any tests with `#[ignore]` will be skipped! If we’ve annotated our tests properly, that means `cargo test` will run only your unit tests. Then, when you want to run your end-to-end tests, you can do `cargo test -- --ignored` and it will run only the ignored tests!
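For reference, the relevant invocations look like this (`--include-ignored`, which runs both groups at once, has been stable since Rust 1.51):

```shell
cargo test                        # unit tests only; #[ignore] tests are skipped
cargo test -- --ignored           # only the ignored (end-to-end) tests
cargo test -- --include-ignored   # run everything, ignored or not
```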
However, this method can break down if there are other tests you want to ignore. Some tests may be in development, and you don’t want them run as part of either your unit or integration testing steps. In that case, you can’t rely on `#[ignore]` to distinguish between your unit and integration tests. As an alternative, I like to use environment variables to control when my tests run.
## Configuring your test run with env vars
Configuring your tests with environment variables can be useful for a whole slew of reasons, including controlling when they run. I like to prefix my config variables with `E2E_` just to keep everything organized and readable. I use environment variables to enable or disable my end-to-end tests, and also to add some delay before they start, giving my new servers time to spin up.
Since you’ll be reusing this config across multiple integration tests, I like to break the config code out into its own module. Setting up modules in your integration tests directory is pretty easy: simply add a new directory and put a `mod.rs` file in that directory with your code. If your module is complicated, you can put multiple files in that new directory and export any code you want from the `mod.rs` file, just like you would with normal source code. You then need to reference that module in your top-level integration test files like you would declare modules in your `main.rs` or `lib.rs` file. Check out the Rust book for more info about file structure.
Here’s some actual test config code I’ve used in my own applications:
```rust
use std::env;

pub struct TestConfig {
    pub is_enabled: bool,
    pub delay_start_min: u64,
    pub env_name: String,
}

pub fn test_config() -> TestConfig {
    // Tests are disabled unless E2E_ENABLE is explicitly "true" or "1".
    let is_enabled = env::var("E2E_ENABLE")
        .map(|s| s.to_lowercase() == "true" || s == "1")
        .unwrap_or(false);
    // Minutes to wait before starting; defaults to 0 if unset or non-numeric.
    let delay_start_min = env::var("E2E_DELAY_START_MIN")
        .unwrap_or_else(|_| String::from("0"))
        .parse::<u64>()
        .unwrap_or(0);
    // Target environment name, e.g. "dev" or "staging"; defaults to "local".
    let env_name = env::var("E2E_ENV_NAME").unwrap_or_else(|_| String::from("local"));
    TestConfig {
        is_enabled,
        delay_start_min,
        env_name,
    }
}
```
In my test functions, I can then call this `test_config()` function to get a `TestConfig` struct with my configuration in it. The config uses sensible default values if the environment variables aren’t set. If I haven’t explicitly set `E2E_ENABLE` to `true` or `1`, the tests will be disabled. If `E2E_DELAY_START_MIN` isn’t set (or if it’s set to a non-numeric value), it defaults to no delay. Lastly, if `E2E_ENV_NAME` isn’t set to something (for example, `dev` or `staging`), it will default to `local`, just in case I want to run my tests against `localhost` or something like that. This `TestConfig` can then be referenced in test functions to skip, delay, or generate environment-specific mock data (for instance, getting user ids that exist in the `staging` database).
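As a sketch, here is how a test might consult that flag to skip itself. For illustration I’ve inlined the `E2E_ENABLE` check into a standalone helper (in a real project you’d call the shared `test_config()` instead), and the names `e2e_enabled` and `get_apples_returns_list` are mine, with a placeholder body:

```rust
use std::env;

// Standalone stand-in for the shared config: mirrors the E2E_ENABLE
// parsing shown above. In a real project, call test_config() instead.
fn e2e_enabled() -> bool {
    env::var("E2E_ENABLE")
        .map(|s| s.to_lowercase() == "true" || s == "1")
        .unwrap_or(false)
}

#[test]
fn get_apples_returns_list() {
    if !e2e_enabled() {
        eprintln!("E2E_ENABLE not set; skipping end-to-end test");
        return;
    }
    // ... make HTTP requests against the target environment here ...
}
```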
(A side note: if you only want to delay when your tests first start running, consider having each test set `E2E_DELAY_START_MIN` to `0` once the initial delay has finished. All of your tests can do this, since setting it to `0` multiple times won’t hurt anything. That way, as additional tests pull in the config and start running, they won’t keep delaying.)
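A minimal sketch of that trick as a helper each test could call first (`delay_once` is my name for it, not part of any crate):

```rust
use std::env;
use std::thread;
use std::time::Duration;

// Sleep for the configured start-up delay, then zero out the variable so
// tests that read the config afterwards don't delay again.
// (`delay_once` is a hypothetical helper, not from any library.)
fn delay_once() {
    let minutes = env::var("E2E_DELAY_START_MIN")
        .unwrap_or_else(|_| String::from("0"))
        .parse::<u64>()
        .unwrap_or(0);
    if minutes > 0 {
        thread::sleep(Duration::from_secs(minutes * 60));
        // set_var is unsafe as of the 2024 edition; at worst, tests racing
        // on this variable delay one extra time.
        unsafe { env::set_var("E2E_DELAY_START_MIN", "0") };
    }
}
```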
With the setup I’ve shown, the default is that running `cargo test` without setting `E2E_ENABLE` runs only your unit tests. Then, when you’re ready to run your end-to-end tests, set `E2E_ENABLE` to `true` or `1` and run `cargo test` again. If you want to run only your integration tests at that point, you may also want to pass in your module names to filter down to just the tests you want. This is pretty easy if you group your tests into cohesive units and then put those units in their own files.
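Put together, a CI pipeline might drive the two phases like this (the environment name `dev` is just an example):

```shell
# Phase 1: before deploying, run unit tests only.
# The end-to-end tests skip themselves because E2E_ENABLE is unset.
cargo test

# Phase 2: after deploying, enable the end-to-end tests and point
# them at the freshly deployed environment.
E2E_ENABLE=true E2E_ENV_NAME=dev cargo test
```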
## Grouping tests
Grouping your tests and keeping the groups separate is straightforward. By default, each file at the top level of the `tests` directory is compiled as its own crate, so each file is totally independent. You can lump related tests together into those files.
Let’s say you have a REST API with two different resources called `apple` and `orange`. You could organize your tests into two files, `apple_tests.rs` and `orange_tests.rs`. Each of those test files can rely on your `test_config` module and pull in the test configuration using that public `test_config()` function. On top of that, you can use those file names when running `cargo test` to filter down to only those tests, by passing each one with the `--test` flag:

```shell
cargo test --test apple_tests --test orange_tests
```
## Async
Last but not least, we need to talk about running tests using `async`. Many Rust libraries for making HTTP requests are `async`, so you need to call them from `async` functions. However, Rust tests don’t support `async` by default; we need to pull in another library to make this possible.
Probably the most well-known option is the `test` macro provided by the `tokio` library. This macro sets up a Tokio runtime to wrap your test, allowing your test function to be `async`. It’s very easy to use: simply swap in `#[tokio::test]` for the `#[test]` annotation you were using and you’re all set!
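For example, here is a sketch of such a test, assuming `tokio` (with its `macros` and `rt` features) and `reqwest` are dev-dependencies, and with a placeholder URL standing in for your nonprod environment:

```rust
// Hypothetical async end-to-end test: the URL is a placeholder, and
// tokio + reqwest are assumed to be in [dev-dependencies].
#[tokio::test]
async fn health_endpoint_returns_200() {
    // reqwest::get is async, which is why the #[tokio::test] runtime
    // wrapper is needed here.
    let response = reqwest::get("https://dev.example.com/health")
        .await
        .expect("health request failed");
    assert_eq!(response.status(), 200);
}
```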
A similar alternative, which can be easier if you’re already using `actix-web` to serve your web API, is to use the `actix_rt` crate. This crate is suggested for running `async` unit tests for your `actix-web` code, which means you likely already have it in your project. In that case, just use the `#[actix_rt::test]` annotation on your tests instead!
## Conclusion
You should be all set to start running end-to-end tests using nothing other than `cargo test`! I’ve found this to be a very lightweight, simple approach. It’s easy for other developers working on your codebase to write tests too, since it’s all written in the same language.
If you find that this approach doesn’t suit your needs and you want something a little more feature-rich, I’ve also had a lot of success writing API tests with Playwright. A few coworkers were using it for UI tests, and I found it was pretty straightforward to adapt it to only make HTTP requests. That’s a subject for another article, though!