# general
p
Hi team! Question about using Pact contracts to stub external API calls for UI integration tests running in the CI pipeline. I know, don't do it, we're gonna have a Bad Time™ - the docs make that very clear, as does this article. We (@Roma Abrosimov) followed the recommendation in that article and got the Pact stub server running, and it works like a charm, very impressive! And yet … I'm struggling to understand the difference between this approach and making everything a "contract test". For the stub approach to actually work, we still need a matching contract to exist supporting every UI test. I think the article is cautioning against using UI tests to somehow generate the contract that is being used, e.g. if we have 10 different tests that call the same API, we'd have 10 different versions of the contract for the same API (= bad). And the pact stub server is better because … it allows us to reuse a single contract across those 10 tests? But what if we had a way to reuse contracts across tests without using the pact stub server?

What we have now is:
1. A contract test for API call 1.
2. A contract test for API call 2.
3. An integration test for the UI. The code calls both API 1 and API 2; both calls hit the stub server, which downloads all the pacts, matches them to each request, and returns the appropriate JSON response.

However, we have also implemented an alternate approach that avoids the need for the stub server. We still have contract tests for API calls 1 and 2, but we extracted the pact contract declaration (`api_client.upon_receiving('api 1')...`) into a "shared context" (an RSpec thing) and include that shared context in the contract test. Then, in the integration test for the UI, we include the shared contexts for both API calls, something like:
```ruby
include_context 'api_1'
include_context 'api_2'
```
It seems to achieve the exact same result with a slightly different implementation. It won't automagically find all the correct matching contracts to use; on the other hand, it's more explicit, in that you can inspect exactly which contracts are being stubbed out by Pact for a given test. Any concerns with this latter approach? It seems that the key to maintainability with either approach, again, is to minimize the number of contracts for any single API.
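Roughly, the shared-context version looks something like this (a minimal sketch; `Api1Client`, `:api_1_client`, the port, the `/api/1/things` path and the payloads are all placeholders, assuming the pact-ruby consumer DSL with RSpec):

```ruby
require 'json'
require 'net/http'
require 'pact/consumer/rspec'

# Placeholder client; in the real suite this is the app's own API client.
class Api1Client
  def initialize(base_url:)
    @base_url = base_url
  end

  def things
    JSON.parse(Net::HTTP.get(URI("#{@base_url}/api/1/things"))).fetch('things')
  end
end

Pact.service_consumer 'UI App' do
  has_pact_with 'API 1' do
    mock_service :api_1_client do
      port 8001
    end
  end
end

# The interaction is declared exactly once, in a shared context...
RSpec.shared_context 'api_1' do
  before do
    api_1_client
      .upon_receiving('a request for api 1')
      .with(method: :get, path: '/api/1/things')
      .will_respond_with(
        status: 200,
        headers: { 'Content-Type' => 'application/json' },
        body: { 'things' => [] }
      )
  end
end

# ...the contract test includes it and exercises the real client...
RSpec.describe Api1Client, pact: true do
  include_context 'api_1'

  it 'fetches things' do
    client = Api1Client.new(base_url: 'http://localhost:8001')
    expect(client.things).to eq([])
  end
end

# ...and the UI integration test reuses the very same interaction, so there
# is still only one version of the contract for API 1.
RSpec.describe 'UI integration', type: :feature, pact: true do
  include_context 'api_1'
  # include_context 'api_2' would pull in the second API's interaction too.

  it 'renders the things page' do
    # Drive the UI here (e.g. with Capybara); any call the page makes to
    # API 1 hits the Pact mock service and gets the response declared above.
  end
end
```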
t
You're right that the key point is maintainability. The issue with driving tests through the UI is that you won't always have the context about what exactly the API call should be when you are modifying the UI. And, if you modify the API, you don't want to modify the UI tests either
if you have some way of solving that, then I suppose it's ok. It's up to you.
Personally, I:
• Stub the API during UI tests (at the code layer, at the entry point to your client code, not at the network layer)
• Ensure that the stubbed client does the same as the real client by using contract tests
👍 1
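To illustrate that split, a rough sketch (using the same placeholder `Api1Client` as the sketch above; assumes RSpec, Capybara and an app page at `/things`):

```ruby
require 'capybara/rspec'

# UI test: stub at the code layer, at the entry point to the client code.
# No network traffic, no stub server.
RSpec.describe 'Things page', type: :feature do
  before do
    allow_any_instance_of(Api1Client)
      .to receive(:things)
      .and_return([{ 'id' => 1, 'name' => 'example thing' }])
  end

  it 'lists things returned by API 1' do
    visit '/things'
    expect(page).to have_content('example thing')
  end
end

# Separately, the contract test (as above) exercises the *real* Api1Client
# against the Pact mock service, so the shape stubbed here can't silently
# drift away from what the provider actually returns.
```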
> An integration test for the UI. The code calls both API 1 and API 2; both calls hit the stub server, which downloads all the pacts, matches them to each request, and returns the appropriate JSON response.
I think this is probably just as much maintenance effort as the first-class contract test approach
> It seems to achieve the exact same result with a slightly different implementation.
Yes, I agree. I think you'll risk frustrating test maintenance with either approach.
m
> e.g. if we have 10 different tests that call the same API, we'd have 10 different versions of the contract for the same API (= bad)
Yes, this is one of the key problems (and thanks for reading the guidance in advance 🙏). If you can avoid that, half of the maintenance problems go away. In my experience, most of the pain with UI tests is pushed to the API provider, so anything you can do to improve that situation is likely to make it more workable.
👍 1
A lot of this comes down to specificity in the tests themselves. Ideally, each test is as close to a unit-test context as possible (to achieve that specificity), but that's not the only way. It sounds like you're thinking about it, which is probably half the battle - going in with open eyes and looking for the problems.
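To make the "10 different versions of the contract" problem concrete, here is a sketch of the anti-pattern being warned against (same placeholder names as the earlier sketches): each UI test declares its own ad hoc variant of the same call, and the provider ends up having to verify every one of them.

```ruby
# Anti-pattern sketch: every UI test invents its own variant of the same call.
RSpec.describe 'Things page (UI-driven interactions)', type: :feature, pact: true do
  it 'shows an empty state' do
    api_1_client
      .upon_receiving('api 1 with no things')
      .with(method: :get, path: '/api/1/things')
      .will_respond_with(status: 200, body: { 'things' => [] })
    # ...drive the UI...
  end

  it 'shows a populated list' do
    api_1_client
      .upon_receiving('api 1 with one thing')
      .with(method: :get, path: '/api/1/things')
      .will_respond_with(status: 200, body: { 'things' => [{ 'id' => 1 }] })
    # ...drive the UI...
  end
end
# Ten UI tests written this way mean ten near-duplicate interactions for the
# provider to verify; one shared interaction per endpoint (as sketched
# earlier) keeps that to a minimum.
```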
p
thanks all!