# general
d
Is there a way to ensure that when a consumer's behaviour is modified, the pact gets updated? We are introducing contract testing, but I'm worried about the case where a developer changes the consumer's real interactions with the provider but forgets to update the contract. So the functionality could be broken, but the contract test would still pass. I haven't seen this risk mentioned anywhere, so maybe I'm missing something
r
Pact Broker? Isn't that the purpose of the Pact Broker, to manage contracts?
👇 1
b
That’s why you need tests that verify the implementation of your consumer AND drive the generation of your contracts. It’s still not 100% fool-proof but at least you’d be consciously working on / thinking about the expectations you’ve got about provider behaviour.
➕ 1
d
@Bas Dijkstra I agree, but just having functional tests doesn't ensure that when a functional test is modified in a way that makes it incompatible with the provider, the contract test is also modified. They can fall out of sync if the developer isn't aware of, or forgets, the need to also modify the contract test. This can obviously have disastrous consequences, as the Pact Broker would release a version of the consumer that's incompatible with the provider. Right?
b
You’re right, that’s exactly why I said it wasn’t fool-proof :) Even with contract testing, there are no guarantees. We might get even closer when there would be some kind of mutation-testing-for-contract-tests kind of thing but that doesn’t exist as far as I know.. And even then that probably wouldn’t prevent all the failures
p
I think that’s why in functional tests / unit tests we should mock as little as possible and actually execute the function/method that calls the API during test execution. But again, it can’t be enforced and isn’t fool-proof.
m
We might get even closer when there would be some kind of mutation-testing-for-contract-tests kind of thing but that doesn’t exist as far as I know
Oh, I like the sound of that!
🙌 2
The other approach I’d like to explore is instrumentation, to discover discrepancies between tests (expected behaviour) and actual behaviour (which may be expected, or may be unexpected)
Another approach to ensure consistency, is to re-use the same test data fixtures in your contract tests and other layers of tests. This way, you can reduce potential drift
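A small illustration of that fixture-sharing idea, as a sketch only: `WidgetFixtures` and `Widget` are hypothetical names (not Pact APIs), and the point is simply that the contract test and the other test layers draw their canned data from one place so they cannot silently drift apart.

```java
// Sketch only: Widget and WidgetFixtures are hypothetical names, not Pact APIs.
// The contract test and the unit/functional tests consume the same canned data,
// so a change to one layer shows up as a change to the shared fixture in review.
import java.util.Map;

public final class WidgetFixtures {

    private WidgetFixtures() {}

    // Hypothetical consumer-side model.
    public record Widget(int id, String name) {}

    // Used by the Pact DSL to build the expected response body in the contract test.
    public static Map<String, Object> widget42Body() {
        return Map.of("id", 42, "name", "sprocket");
    }

    // Used wherever the service client is stubbed out in unit/functional tests.
    public static Widget widget42() {
        return new Widget(42, "sprocket");
    }
}
```

The contract test serialises `widget42Body()` into the interaction's response, while the Mockito-style stubs return `widget42()`; if either side changes without the other, the shared fixture forces the conversation.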
galaxy brain 1
the other rule of thumb of course, is to have a corresponding contract test wherever you stub/mock out your service layer in other forms of testing
💡 1
👍 1
d
Thanks Matt, very helpful. Another approach I had in mind was to try and combine the consumer functional tests with the consumer contract test. Tell me if this sounds crazy or not:
• Modify the functional test to replace the Mockito mock of the provider with the Pact mock server instead
• Annotate the functional test with @PactTestFor
Is there any merit to this approach, or is it infeasible or otherwise a bad idea to mix contract and functional testing in this way? I guess it's similar to your idea of sharing test fixtures between the two, but trying to go one step further to ensure they're in step. I'm still new to contract testing, so there may be obvious reasons this can't work that I'm not seeing yet
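A minimal sketch of what that combination could look like, assuming pact-jvm's JUnit 5 consumer support. `BarClient` is the client class mentioned later in the thread; `Widget`, the endpoint path, and the JSON body are illustrative assumptions, not anything prescribed by Pact.

```java
// Sketch only: assumes pact-jvm JUnit 5 (au.com.dius.pact.consumer.junit5).
// BarClient and Widget are the consumer's own (hypothetical) classes; the path
// and body are illustrative.
import au.com.dius.pact.consumer.MockServer;
import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
import au.com.dius.pact.consumer.junit5.PactConsumerTestExt;
import au.com.dius.pact.consumer.junit5.PactTestFor;
import au.com.dius.pact.core.model.RequestResponsePact;
import au.com.dius.pact.core.model.annotations.Pact;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

import java.util.Map;

import static org.junit.jupiter.api.Assertions.assertEquals;

@ExtendWith(PactConsumerTestExt.class)
@PactTestFor(providerName = "Bar")
class FooFunctionalPactTest {

    @Pact(provider = "Bar", consumer = "Foo")
    RequestResponsePact getWidget(PactDslWithProvider builder) {
        // The interaction the functional test is expected to trigger.
        return builder
            .given("a widget with id 42 exists")
            .uponReceiving("a request for widget 42")
                .path("/widgets/42")
                .method("GET")
            .willRespondWith()
                .status(200)
                .headers(Map.of("Content-Type", "application/json"))
                .body("{\"id\": 42, \"name\": \"sprocket\"}")
            .toPact();
    }

    @Test
    @PactTestFor(pactMethod = "getWidget")
    void functionalBehaviourDrivesTheContract(MockServer mockServer) {
        // Point the real client at the Pact mock server instead of a Mockito mock,
        // then exercise the consumer's actual behaviour.
        BarClient client = new BarClient(mockServer.getUrl());
        Widget widget = client.fetchWidget(42);

        // Functional assertion. If the client's real interaction with Bar changes,
        // the mock server's expectations fail, so a stale pact can't quietly pass.
        assertEquals("sprocket", widget.name());
    }
}
```

On success the pact file is written out as usual, so the one test both verifies Foo's behaviour and regenerates the contract.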
y
Is there a way to ensure that when a consumer's behaviour is modified, the pact gets updated?
We are introducing contract testing, but I'm worried about the case where a developer changes the consumer's real interactions with the provider but forgets to update the contract.
Automation can't replace good engineering practices, nor good review practices. It is possible to suffer from consumer drift, whereby the contract drifts in some way from the underlying code. Usually this isn't the case, as the contracts are an output of your unit tests, but it is all too easy to encode extra fields into the contract which aren't relevant for a particular consumer test. By avoiding abstraction layers, and testing closest to the collaborating code, you can make it easier to detect these discrepancies.

As Matt said, these contracts can then be reused in other areas of your testing, knowing that they are a good source of truth, although not infallible. Still better than no mocking (you can test in isolation) and better than full mocking (you share some idea of your expectations with others).

Depending on your ecosystem and team make-up, teams can share factories/fixtures: https://docs.pact.io/consumer#in-dynamic-languages-ensure-the-models-you-use-in-other-tests-could-actually-be-created-from-the-responses-you-expect. We used similar approaches in TypeScript, through shared types and a test data service.
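In a typed language, the advice behind that link roughly translates to asserting that the model you use in other tests can actually be built from the response body the pact expects. A hedged sketch, assuming Jackson and the same hypothetical `Widget` model as above:

```java
// Sketch only: Widget is a hypothetical consumer model; the JSON string is the
// same body used in the pact interaction. The test checks that models used in
// other layers could actually be created from the responses the contract expects.
import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

class WidgetModelContractCompatibilityTest {

    private static final String PACT_RESPONSE_BODY = "{\"id\": 42, \"name\": \"sprocket\"}";

    @Test
    void modelCanBeBuiltFromThePactResponseBody() throws Exception {
        Widget widget = new ObjectMapper().readValue(PACT_RESPONSE_BODY, Widget.class);
        assertEquals(42, widget.id());
    }
}
```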
but I'm worried about the case where a developer changes the consumer's real interactions with the provider but forgets to update the contract.
The contract is generated as part of a unit test run, so if the underlying behaviour of the consumer changes and the outward behaviour of the SUT changes, the developer's tests would have to change for them to pass, or they would need to update the data in the pact expectations in order to set up their mock provider in the correct state. If they can change the underlying behaviour of their code without changing the expectations in their test, they probably aren't testing their actual code.
We might get even closer when there would be some kind of mutation-testing-for-contract-tests kind of thing but that doesn’t exist as far as I know
I like the idea of being able to use a Pact mock provider on the consumer side to act as my HTTP mock and test cases that I wouldn't necessarily want to share with a provider (maybe behind a flag, "write to pact"). Or: here is my happy request, now send some garbage and see how my collaborator code behaves.

On the provider side, since a pact verification provides a good amount of test coverage of the SUT, it could be nice to create a contract to functionally test your provider (with a use case that could represent how your consumers could use the API), utilising Pact to act as the mock consumer. That could serve as documentation for potential consumers, and there could be analysis done to see the divergence between real consumers and the provider's own view of a consumer, which may highlight new emergent use cases that the providing team didn't expect (and now wants to cater for).
✅ 1
d
@Yousaf Nabi (pactflow.io) I really appreciate you engaging with my question in so much detail, I think you've helped me identify a fundamental misunderstanding I may have about where Pact sits in the test pyramid. My testing for my service `Foo` currently looks like:
1. Functional tests for `Foo`: verify that when my application runs, it takes an input and makes the expected API calls to two downstream services (one we control called `Bar`, and another that is a public cloud API)
2. Consumer test for `Foo`: invokes the `BarClient` of `Foo` directly to generate the contract using provided example requests
3. Provider test for `Bar`: verifies the contract from step 2
Step 2 isn't really a test in this setup, it's more of a scaffold for generating the contract so that it can be used for step 3. I think what you're saying is that 1 and 2 should be combined, right? I.e. Pact annotations should be added to the existing functional tests, rather than writing new tests purely for the sake of generating the pact as I've done.