(Cross-post) Serverless integration testing at Freetrade

(Cross-posted from https://blog.freetrade.io/serverless-integration-testing-at-freetrade-5359fb0b0e57)

Our tech stack at Freetrade is 99% serverless. This means that for the vast majority of our applications, the physical infrastructure, deployment and other operational concerns are largely abstracted away by the cloud provider (Google Cloud in our case).

We focus on building what our users need, and our cloud provider focuses on what we need to keep it running smoothly at scale.

Serverless computing has huge benefits in terms of allowing more resources to be devoted to solving user problems, but it also presents its own challenges. One of those is how to apply automated testing to your software systems in a serverless environment.

Automated testing is essential to produce and maintain high-quality software. With traditional infrastructure where you’re responsible for the actual server instances and processes that run your applications, it can be easier to implement automated testing because you have direct control over more of the stack. That extra responsibility is exactly what you’re trying to abstract away with serverless computing, though, so you have to adapt your testing strategy.

Unit testing serverless applications

Unit testing is the bedrock of an automated testing strategy. Unit tests are the most numerous and lowest-level tests, which isolate tiny parts of application logic and test them on their own.

With a well-architected application, the infrastructure that will ultimately run the application shouldn’t impact unit testing at all. Each individual unit is oblivious to what server environment it might be running in. Serverless computing facilitates this, as units of code cannot rely on particular environment details being available, such as disk I/O or some particular application state being persisted across requests. Good abstractions have to be used for these instead.

In other words, serverless encourages you to write code that depends on abstractions (see dependency inversion). It also encourages stateless applications.

Both of these make it easier to write comprehensive unit tests, as you can isolate those parts out.
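As a rough sketch of what that looks like in practice (TypeScript with a Jest-style test runner; the names here are hypothetical, not our actual code), a service can depend on a small interface and be unit tested with an in-memory stub:

// Hypothetical example: the service depends on an abstraction rather than on
// Firestore, HTTP, or any other environment detail.
interface SignInEventStore {
  save(event: { userId: string; at: Date }): Promise<void>;
}

class SignInService {
  constructor(private readonly events: SignInEventStore) {}

  async recordSignIn(userId: string): Promise<void> {
    // Business logic lives here, with no knowledge of where it runs.
    await this.events.save({ userId, at: new Date() });
  }
}

// Unit test: substitute an in-memory stub for the abstraction.
test('recordSignIn persists a sign-in event', async () => {
  const saved: Array<{ userId: string; at: Date }> = [];
  const service = new SignInService({ save: async (event) => { saved.push(event); } });

  await service.recordSignIn('user-123');

  expect(saved).toHaveLength(1);
  expect(saved[0].userId).toBe('user-123');
});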

As an example, requests from Freetrade’s apps to the platform are handled by HTTP handlers in Google Cloud Functions. There is little to no logic in these handlers; they merely pass the request on to services that don’t need to know about how the request came in or even about HTTP at all.

Those services and their components are then easy to unit test because they’re not tied to particular details of the environment.
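A corresponding handler might look something like the sketch below (again hypothetical, assuming a Node.js Cloud Function, which receives Express-style request and response objects). The handler translates the HTTP request and immediately delegates:

// Hypothetical sketch of a thin HTTP handler for a Cloud Function.
// The service and its wiring live in a separate (made-up) module; the handler
// only maps between HTTP and the service.
import type { Request, Response } from 'express';
import { signInService } from './signInService'; // hypothetical module

export const signIn = async (req: Request, res: Response): Promise<void> => {
  const { email, password } = req.body;

  // All real logic lives in the service, which knows nothing about HTTP.
  const result = await signInService.signIn(email, password);

  if (!result.ok) {
    res.status(401).json({ error: 'invalid_credentials' });
    return;
  }
  res.status(200).json({ token: result.token });
};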

Integration testing serverless applications in Google Cloud Functions

Integration testing could be described as the next level up from unit testing. Unit testing mocks out interactions between components to isolate logic, but what if you’ve got that interaction wrong?

Integration tests fill this gap in a testing strategy by confirming that components also play nicely with each other.

While serverless does not impact unit testing, it can present a couple of challenges for integration testing. You need to test the interactions between HTTP requests, databases, queues and other systems, so you need those to be available for the integration tests. With a traditional server infrastructure, you might run instances of all those things locally on a single machine and run the integration tests that way.

There are options to try to emulate serverless components locally for this purpose, but that somewhat defeats the point of integration testing: what if the emulation is wrong? You’re reliant on the emulation being accurate and up to date with the serverless platform, and that’s exactly the kind of maintenance work you want to avoid with serverless.

It is possible to spin up a fresh environment in Google Cloud for each integration test run, and that’s certainly an option. However, it takes a bit of time for each setup, and you want the integration tests to be applied frequently and rapidly, so this approach might not be ideal.

So far we’ve taken a middle-ground approach by having a dedicated CI (continuous integration) environment in Google Cloud. It isn’t spun up and torn down between test runs. Instead, our integration tests aim for the isolation a fresh environment would provide by creating the specific state to be tested within each test and avoiding dependence on any other state (relying on shared state would be bad practice anyway).

As an imaginary example:

// Given I have a user account;
(code to create a fresh user account)

// When I make a valid sign-in request;
(code to send a real sign-in request over the Internet to the test environment)

// Then I should get a valid auth token;
(code to check the received token works in the test environment)

// And a sign-in event should have been persisted.
(code to check a sign-in event was persisted)

We have various helper code to set up and assert on the expected state within a test, and a test client to mimic user app requests against the system. Individual tests or the entire test suite can be run as needed without needing to refresh the whole environment.
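Put together, a test along those lines might look something like this sketch (TypeScript with a Jest-style runner; createTestUser, testClient and fetchSignInEvents are hypothetical stand-ins for that helper code and test client, not our actual code):

// Hypothetical integration test run against the dedicated CI environment.
import { createTestUser, testClient, fetchSignInEvents } from './helpers';

test('signing in returns a working auth token and records a sign-in event', async () => {
  // Given I have a user account
  const user = await createTestUser();

  // When I make a valid sign-in request (a real HTTP call to the CI environment)
  const response = await testClient.signIn(user.email, user.password);

  // Then I should get a valid auth token
  expect(response.status).toBe(200);
  const profile = await testClient.getProfile(response.body.token);
  expect(profile.status).toBe(200);

  // And a sign-in event should have been persisted
  const events = await fetchSignInEvents(user.id);
  expect(events).toHaveLength(1);
});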

This is a pragmatic approach that is working well for us so far. Tests can be run continuously, and each test run completes in a reasonable time, with individual tests executed in parallel.

The serverless cloud functions are spun up and down as in production use, and interact realistically with other components.

Because serverless functions are spun up to meet demand, they can have cold starts. As the name suggests, this is when a function that had no “hot” instances running is spun up.

Inevitably this will happen during integration testing, which can add several seconds to the time taken by the test. However, we actually want this to occur, as it mimics behaviour in production. We want our integration tests to help us identify potential issues due to cold function starts just as we would with any other integration issue.
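In practice that mostly means budgeting for cold-start latency in the test runner rather than trying to hide it. For example, with a Jest-style runner (the figures below are illustrative, not our configuration):

// Allow for cold starts: a first request to a function with no warm instances
// can take several seconds longer than usual, so raise the default timeout
// in a shared test setup file (or at the top of a test file).
jest.setTimeout(30_000);

// A per-test timeout can also be set for endpoints known to be cold-start heavy.
test('first sign-in after a deploy tolerates a cold start', async () => {
  // ...real request against the CI environment...
}, 30_000);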

End-to-end testing serverless applications

The top tier of a testing strategy is end-to-end (E2E) testing. These tests drive user-level interactions through the user interface and verify, again through the user interface, that the application behaves correctly. They are the most time-consuming to run, but also provide the strongest confirmation that the entire stack works as expected for our users.

As with unit tests, the fact that serverless infrastructure is being used should not impact end-to-end testing. E2E tests only care about what they can interact with in the user interface; the platform behind that is a black box.

That said, serverless computing does offer some benefits to end-to-end testing. You don’t need to maintain dedicated infrastructure to handle E2E tests: the infrastructure spins up automatically to meet the demand. This can reduce costs as you don’t pay for idle infrastructure that might sit waiting for a test run.

Getting more specific, Google Cloud also makes it easier to manage E2E testing by grouping components within specific environments, each with consistent, predictable URLs.

Managing test suites and their environments is then straightforward.
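For example, HTTP-triggered Cloud Functions get predictable URLs per environment, so pointing a test suite at a given environment can be a simple config lookup (the project IDs and region below are made up for illustration):

// Hypothetical mapping from environment name to Cloud Functions base URL.
// HTTP-triggered functions (1st gen) get URLs of the form
// https://<region>-<project-id>.cloudfunctions.net/<function-name>.
const BASE_URLS: Record<string, string> = {
  ci: 'https://europe-west2-example-ci.cloudfunctions.net',
  staging: 'https://europe-west2-example-staging.cloudfunctions.net',
};

export function functionUrl(env: string, functionName: string): string {
  const base = BASE_URLS[env];
  if (!base) throw new Error(`Unknown environment: ${env}`);
  return `${base}/${functionName}`;
}

// e.g. functionUrl('ci', 'signIn') resolves the sign-in endpoint in the CI environment.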

To sum up

We’ve had a great experience applying different tiers of automated testing to our serverless infrastructure.

Having strong abstractions provided by the cloud platform lets us devote more of our time to building things for our users, and that carries through to our approach to testing.

This allows a lot of our test code to be highly readable and meaningful from a user perspective, keeping us on track to bring free investing to more and more people.

