Sunday, October 1, 2017

Fast tests for integration points

Ports-and-Adapters is a good design approach for separating business logic from external dependencies, aka Mine vs. Thine.

Like all good designs, Ports-and-Adapters makes things more testable. Everything is tested in a tight edit/build/test cycle except for the "real" adapters. The "real" adapters don't change very much, so we test them on a slower cadence.

"Don't change very much" isn't very reassuring though. I don't think about the real adapters much, but I at least want something to tell me that my real adapters aren't changing. If they need to change, grab my attention so I can run the focused integration tests.

Arlo Belshee suggested record/replay tests. Here's an example, in C# with HTTP:

Record-and-passthrough integration testing

Test->HttpClient: HTTP Request
HttpClient->Pass-through Recorder: HTTP Request
note right of Pass-through Recorder: Record the request
Pass-through Recorder->Some service: HTTP Request
Some service->Pass-through Recorder: HTTP Response
note right of Pass-through Recorder: Record the response
Pass-through Recorder->HttpClient: HTTP Response
HttpClient->Test: HTTP Response


While developing the adapter we run "focused integration tests", testing the adapter against the real dependency. For each test we record the HTTP requests and responses.

Since these tests are slow/flaky/expensive, we don't run them in the edit/build/test cycle, but only when actively working on the adapter.
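To make the recording side concrete, here's a minimal sketch, assuming we hook into HttpClient with a DelegatingHandler. The class name, file format, and storage path are illustrative, not the actual code behind this post.

// Sketch only: record each request/response pair while passing the
// request through to the real service.
using System.IO;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class RecordingHandler : DelegatingHandler
{
    private readonly string _recordingPath;

    public RecordingHandler(string recordingPath, HttpMessageHandler realHandler)
        : base(realHandler)
    {
        _recordingPath = recordingPath;
    }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Record the request.
        var requestBody = request.Content == null
            ? ""
            : await request.Content.ReadAsStringAsync();

        // Pass it through to the real service.
        var response = await base.SendAsync(request, cancellationToken);

        // Record the response alongside the request.
        var responseBody = await response.Content.ReadAsStringAsync();
        File.AppendAllText(_recordingPath,
            $"{request.Method} {request.RequestUri}\n{requestBody}\n" +
            $"{(int)response.StatusCode}\n{responseBody}\n---\n");

        return response;
    }
}

A focused integration test builds its client as new HttpClient(new RecordingHandler(path, new HttpClientHandler())), so the recording happens transparently while the adapter talks to the real service.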

Verify-and-replay isolated testing

Test->HttpClient: HTTP Request
HttpClient->Player/Verifier: HTTP Request
note right of Player/Verifier: Verify the request
note right of Player/Verifier: Return the recorded response
Player/Verifier->HttpClient: HTTP Response
HttpClient->Test: HTTP Response


While doing development on the rest of the system, or while refactoring inside the real adapter, we run the real adapter against the recorded messages. This test tells us that the adapter's behavior hasn't changed (in any way that we test for), without the slowness, flakiness, and cost of talking to the real service.
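The playback side can be another message handler: it never touches the network; it verifies that the adapter sent a request matching one we recorded, then hands back the recorded response. Again a sketch, assuming the recordings have been loaded into an in-memory dictionary; the names are illustrative.

// Sketch only: verify the request against the recordings and replay the
// recorded response; no network involved.
using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class PlaybackHandler : HttpMessageHandler
{
    // Keyed by "METHOD absolute-uri"; values are the recorded status and body.
    // (A fuller version would also compare headers and request bodies.)
    private readonly IReadOnlyDictionary<string, (HttpStatusCode Status, string Body)> _recordings;

    public PlaybackHandler(
        IReadOnlyDictionary<string, (HttpStatusCode Status, string Body)> recordings)
    {
        _recordings = recordings;
    }

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Verify the request: it must match something we recorded earlier.
        var key = $"{request.Method} {request.RequestUri}";
        if (!_recordings.TryGetValue(key, out var recorded))
            throw new InvalidOperationException($"No recording matches request: {key}");

        // Return the recorded response instead of calling the real service.
        var response = new HttpResponseMessage(recorded.Status)
        {
            Content = new StringContent(recorded.Body),
            RequestMessage = request,
        };
        return Task.FromResult(response);
    }
}

If the adapter starts sending a request we never recorded, the playback test fails loudly, which is exactly the "grab my attention" signal described above.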

#NoMocks

This is not a mock. How so? And why not use a mock?

A mock encodes what we know about the thing we're mocking. We write our understanding in code. If our understanding doesn't match the real service, our tests can pass but the system will fail in production.

New requirements mean extending the mock. As the mock grows, it needs good design to keep from becoming unmaintainable. This recorder is cheap to extend: write a new test, run it, save the results.

Code
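Here's a sketch of how the two handlers above might be wired into a test fixture. LoadRecordings and the RECORD environment variable are hypothetical stand-ins for however the real code loads recordings and switches modes.

// Sketch only: one test fixture, two modes. Set RECORD=1 to re-record
// against the real service; otherwise replay from the file on disk.
using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Http;

public static class AdapterTestSetup
{
    public static HttpClient CreateClient(string recordingPath)
    {
        var recording = Environment.GetEnvironmentVariable("RECORD") == "1";

        HttpMessageHandler handler;
        if (recording)
            handler = new RecordingHandler(recordingPath, new HttpClientHandler());
        else
            handler = new PlaybackHandler(LoadRecordings(recordingPath));

        return new HttpClient(handler);
    }

    // Hypothetical helper: parse the recording file back into the
    // dictionary that PlaybackHandler expects.
    private static IReadOnlyDictionary<string, (HttpStatusCode Status, string Body)>
        LoadRecordings(string path) => throw new NotImplementedException();
}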


