# general
s
Hi! 👋 Quick question about splitting the contracts with a larger number of endpoints/interactions.

Some context:
• We have a Consumer service and a Provider service.
• Provider has ~40 endpoints: 4 CRUD calls for each of 10 different resources. We would like to know the best/suggested approach for grouping and testing them. Ideally, the tests could be split by resource.
• Each consumer test would run contract tests for its resource and write them into a file/directory.
• Each provider test would run contract tests for a given resource.

The issue/question comes with the file(s):
• Should all interactions be put into one file on the consumer side? (This is what we started with in the beginning.)
  ◦ The consumer side works nicely, but on the provider side, how would I run only a subset of the interactions?
    ▪︎ Another option would be to have one test on the provider side, but I'm not sure if this is the suggested approach...
• Should interactions be put into different contract files on the consumer side?
  ◦ This would solve the issue on the provider side, as I can specify the contract file for a given resource.
  ◦ This raises a question about naming. Currently my Pact JS library names them `<consumer>-<provider>.json`. How would I name those then? How would they be treated/handled in the pact broker?

Thanks in advance! 🙇
m
In general, best practice is to name your applications along the lines of the deployable software unit. In your case, 40 endpoints isn’t that big of an API, so I’d not see any problems with that being a single contract. If the 10 different resources were deployed separately, then each of those could be its own “provider”
> Should all interactions be put into one file on the consumer side? (this is what we started with in the beginning)
yes, usually
> Consumer side works nicely, but on provider side, how would I run only subset of the interactions?
in most languages, there is a way to only run specific interactions. e.g. see the filter options for the CLI, most languages expose a version of this: https://docs.pact.io/implementation_guides/rust/pact_verifier_cli
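To make that concrete, a verification run scoped to one resource might look something like the sketch below. The pact file path, provider host/port, and description pattern are assumptions for illustration; flag support varies by language binding, so check the docs for yours.

```shell
# Verify only the interactions whose description matches 'resource-a',
# against a provider already running locally.
# File name and regex are hypothetical examples.
pact_verifier_cli \
  --file ./pacts/consumer-provider.json \
  --hostname localhost \
  --port 8080 \
  --filter-description 'resource-a'
```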
> Another option would be to have one test on provider side, but not sure if this is the suggested approach...

> Should interactions be put into different contract files on the consumer side?
the problem with that is that you now have multiple logical consumers, so you would need to run multiple `can-i-deploy` calls, `record-deployment` calls, etc. You’ll likely get tangled in knots
s
Thanks @Matt (pactflow.io / pact-js / pact-go)! I now have full clarity on the consumer side. Since our providers are microservices, they don't get too big, so we'll keep grouping the interactions per provider into the same contract file - makes perfect sense!

For the provider, I am still trying to understand the difference between two approaches:
1. Having one all-inclusive test on the provider side without filtering (the state-change URL controller could have some logical grouping)
2. Having separate tests verifying the same contract, but with filters

Which of those two is preferred / more common? If going with option (2), separate tests, how will the results publishing work? Does the pact broker mark it as partially validated until all interactions have been separately validated and results published?
m
> If going with option (2), separate tests, how will the results publishing work? Does pact broker mark it as partially validated until all interactions have been separately validated and results published?

No, I don’t think so; that’s the problem. I believe it needs to be one physical `@Test`-like annotation, but if you use the JUnit library, I think it reports each interaction separately as different tests.
I believe there are examples of how to break the verification test down so state handlers etc. can be spread across multiple files etc.
s
I think for now we'll go with option (1) and have a single test on the provider side as well. Splitting the tests would mainly be nice to make the tests run quicker. The filter options would require some kind of agreed structure between consumer and provider, e.g. a provider state indicating which endpoint an interaction belongs to, or some keyword in the description.
• One could not simply filter for, say, "resource-a" in the description, as some interactions might be: "a request for resource-b when resource-a does not exist" - that is logically a resource-b test, but it would be run during the resource-a tests.
• Putting an arbitrary prefix before the description seems hacky and creates noise.
• Putting an arbitrary provider state also seems hacky and is not an intended use.
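The first caveat can be demonstrated with a quick regex check over a couple of made-up interaction descriptions (both descriptions are invented for illustration):

```shell
# Two hypothetical interaction descriptions, as they might appear in a pact file
printf '%s\n' \
  'a request to create resource-a' \
  'a request for resource-b when resource-a does not exist' > descriptions.txt

# A naive description filter for resource-a also matches the second line,
# which is logically a resource-b test
grep -cE 'resource-a' descriptions.txt   # prints 2, not 1
```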
> I believe it needs to be one physical `@Test`-like annotation, but if you use the JUnit library, I think it reports each interaction separately as different tests.
For context, we are using the `pact_erlang` library to run tests in Elixir. I would have to implement the FFI function to test it out.
> I think it reports each interaction separately as different tests.
I'm just trying to understand what happens over on pact broker side and from the perspective of can-i-deploy when it receives verification results which used the filter options 🤔
👍 1
m
Currently, available filters include (see the CLI, but options for the FFI also exist):
```
Filtering interactions:
      --filter-description <filter-description>
          Only validate interactions whose descriptions match this filter (regex format) [env: PACT_DESCRIPTION=]
      --filter-state <filter-state>
          Only validate interactions whose provider states match this filter (regex format) [env: PACT_PROVIDER_STATE=]
      --filter-no-state
          Only validate interactions that have no defined provider state [env: PACT_PROVIDER_NO_STATE=]
  -c, --filter-consumer <filter-consumer>
          Consumer name to filter the pacts to be verified (can be repeated)
```
We could also extend that to labels (see https://github.com/pact-foundation/pact-specification/issues/75). That would be an obvious thing to do
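As the `[env: ...]` annotations above suggest, the same filters can also be supplied via environment variables rather than flags, which can be handy in CI. A sketch, with the pact file path and provider details again hypothetical:

```shell
# Equivalent to passing --filter-description on the command line
PACT_DESCRIPTION='resource-a' pact_verifier_cli \
  --file ./pacts/consumer-provider.json \
  --hostname localhost \
  --port 8080
```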
👍 1
> I’m just trying to understand what happens over on pact broker side and from the perspective of can-i-deploy when it receives verification results which used the filter options
I’m hoping it doesn’t publish if filter is set
👍 1
s
Great! Tags would be cool indeed. In short, we'll go with:
• Multiple tests creating one contract on the consumer side
• One test validating the contract on the provider side
• If we need further splitting for any reason, we'll take another look at what we can do with filters, and possibly check back on the progress with tags.