# general
c
Hi! In the following articles, "contract testing using JSON schemas" is explained from a theoretical point of view:

contract-testing-using-json-schemas-and-open-api-part-1
contract-testing-using-json-schemas-and-open-api-part-2
contract-testing-using-json-schemas-and-open-api-part-3

If I'm not mistaken, Bi-Directional contract testing is a kind of contract testing using JSON schemas. The articles mention that one of the advantages of this type of testing is the possibility to test the provider as a black box. For example, in the dotnet Bi-Directional contract testing workshop (https://github.com/pactflow/example-bi-directional-provider-dotnet), the Schemathesis tool is used to verify the provider API in a black-box manner as follows (here the provider is running on the Docker host on port 9000 and the tool accesses it from within a Docker container):

```
docker run --net="host" schemathesis/schemathesis:stable run --stateful=links --checks all http://host.docker.internal:9000/swagger/v1/swagger.json > report.txt
```

Being able to test a provider in this black-box manner clearly has advantages, but what if most of your microservices have external dependencies? Can you still use this approach? In the workshop you have to send two things to the Pactflow server: the swagger.json file (which is generated during the build phase) and the report.txt file, which contains evidence that the provider API was tested (by the black-box test above).

The problem is: what if you can't easily get a provider with lots of external dependencies running in the pipeline? In that case, generating that provider API test evidence gets quite difficult. You would need access to the code to make sure the external dependencies are mocked away during the black-box test. But being forced to have access to the code while performing black-box tests kind of defeats the advantage of black-box testing.

I have tested the Pactflow workflow to see whether the system could work without publishing the provider API verification evidence, publishing only the swagger.json. It turns out that the consumer's can-i-deploy is happy with the provider's swagger.json being published alone, and the consumer can be deployed without the provider having its API test evidence published. But the provider's can-i-deploy does not allow the provider to be deployed as long as its API test evidence is not published. Of course, this all makes perfect sense.

But what if you really want to take advantage of black-box testing and you can't get a provider with lots of external dependencies up and running in the pipeline? What if you are mostly interested in the compatibility between your consumers and providers and not so much in testing your providers themselves? Is there a way to work with swagger.json alone and leave out the provider API test evidence (represented by report.txt in the workshop example above)?
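For context, the workshop's publish step looks roughly like the sketch below, using the `pactflow publish-provider-contract` command from the pact-broker CLI tools. The provider name, version variables, and file paths here are my assumptions, not necessarily the workshop's exact values:

```shell
# Sketch of the provider-side publish step (names/paths are assumptions).
# Publishes the OAS as the provider contract, together with the
# Schemathesis report as self-verification evidence.
docker run --rm \
  -v "$PWD":/workspace -w /workspace \
  -e PACT_BROKER_BASE_URL -e PACT_BROKER_TOKEN \
  pactfoundation/pact-cli:latest \
  pactflow publish-provider-contract swagger.json \
    --provider example-bi-directional-provider-dotnet \
    --provider-app-version "$GIT_SHA" \
    --branch "$GIT_BRANCH" \
    --content-type application/json \
    --verification-exit-code 0 \
    --verification-results report.txt \
    --verification-results-content-type text/plain \
    --verifier schemathesis
```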
m
So the TL;DR is "yes", you can definitely omit the report (though I think the field may need something, even just text that says "we tested this, trust me because I'm a developer").
But.
> But what if you really want to take advantage of black-box testing and you can't get a provider with lots of external dependencies up and running in the pipeline? What if you are mostly interested in the compatibility between your consumers and providers and not so much in testing your providers themselves?
The question implies that the provider implements the spec, and that you already have some level of confidence that this is the case. If you do have that confidence, this should be fine. But the question then is: how are you getting that confidence? It sounds like testing it is already difficult - do you test it by hand? Auto-generate the OAS? Something else? That's the evidence you should upload to Pactflow.
If you don’t have confidence the provider is compatible with the OAS, then you’re leaving yourself open to production issues.
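To make that concrete, the verification results field doesn't have to be a Schemathesis report - a minimal sketch, assuming the same CLI as the workshop and a hypothetical hand-written evidence file:

```shell
# Minimal sketch: publish the OAS with self-attested evidence instead of
# a Schemathesis report (evidence.txt is a hypothetical hand-written note).
echo "Verified by hand against staging, all consumer-used endpoints checked" > evidence.txt
pactflow publish-provider-contract swagger.json \
  --provider my-provider \
  --provider-app-version "$GIT_SHA" \
  --content-type application/json \
  --verification-exit-code 0 \
  --verification-results evidence.txt \
  --verification-results-content-type text/plain \
  --verifier manual
```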
c
Well, in the workshop I mentioned above we have the following two steps:

2. Generate the dll for the project; this is the binary file that would be deployed to an environment. This step also generates the Swagger doc for the project, which will be uploaded to Pactflow. Run the following command in the terminal at the root of the project:

```
make publish_dll
```

3. Use Schemathesis to verify that the API endpoints match the generated Swagger doc by running the verify_swagger target. This will generate a Schemathesis report documenting the compatibility of the endpoints with the Swagger doc:

```
make verify_swagger
```

I assumed that step 2 automatically generates the swagger.json from the API. In that case, I imagined the confidence that the API matches the swagger.json would be quite strong. If that's indeed the case, is step 3 really necessary? Is it about having the HTTP communication really tested? Or am I missing something here?
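(For reference, my guess at what the publish_dll target does under the hood is something like the Swashbuckle CLI step below; the tool, assembly path, and doc name are assumptions on my part:)

```shell
# A guess at how swagger.json is generated from the built assembly
# (paths and the "v1" doc name are assumptions, not the workshop's values).
dotnet build -c Release
dotnet new tool-manifest --force
dotnet tool install Swashbuckle.AspNetCore.Cli
dotnet swagger tofile --output swagger.json \
  bin/Release/net6.0/Provider.dll v1
```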
m
There is always the chance (2) isn't a perfect representation of things, particularly with respect to the behaviour where discriminators such as `anyOf`, `oneOf` etc. are concerned. So (3) is usually a good thing to balance that out.
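e.g. a made-up response schema fragment like this is easy to get subtly wrong - the generator may emit the `oneOf`, but the code only ever returns one variant, or omits the discriminator property from the actual payload:

```json
{
  "oneOf": [
    { "$ref": "#/components/schemas/CreditCardPayment" },
    { "$ref": "#/components/schemas/BankTransferPayment" }
  ],
  "discriminator": { "propertyName": "paymentType" }
}
```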
There is of course also the possibility of bugs or e.g. an unimplemented controller that still generates a resource in an OAS - that could give you a false sense of confidence
But to answer your question: if Schemathesis gives you the confidence of correctness, and you generate your OAS from working code, that should probably be enough. You can upload whatever you want as the report - ultimately, it'll be future you (or a consumer) that reads that report if things break, so making it useful/readable is probably in your best interest anyway.
c
Thank you for your explanation and clarification, that makes a lot more sense now.
👍 1
m
Great question, thanks for taking the time to explain it in detail
👍 1