# documentation
c
Hi - I've been watching the Bi-directional videos and reading the examples - very nice by the way - but I can't see how we can handle state, which is a big thing when you're testing an API, obviously! Can we have an example of:
• How a consumer would test a success (200) and a failure (401) against a login endpoint
• How a provider would test those
👋 2
y
States are not used in BDCT, as you do not have fine-grained control over the provider in the same way as with regular Pact CDCT.
ā˜ļø 1
• How a consumer would test a success (200) and a failure (401) against a login endpoint
They would set up a provider mock to return 200 or 401 for a specific test case, and generate a Pact contract from a successful match.
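For illustration, the pact file generated from those two consumer test cases might contain a pair of interactions like this. This is a minimal sketch: the service names, endpoint path, and body fields are hypothetical, but the overall shape follows the Pact file structure.

```javascript
// Sketch of a pact file covering both the 200 and 401 login cases.
// Names, paths, and bodies are made-up; the structure is Pact-style.
const pact = {
  consumer: { name: "WebApp" },
  provider: { name: "AuthService" },
  interactions: [
    {
      description: "a login request with valid credentials",
      request: {
        method: "POST",
        path: "/login",
        body: { username: "user", password: "correct" },
      },
      response: { status: 200 },
    },
    {
      description: "a login request with invalid credentials",
      request: {
        method: "POST",
        path: "/login",
        body: { username: "user", password: "wrong" },
      },
      response: { status: 401 },
    },
  ],
  metadata: { pactSpecification: { version: "2.0.0" } },
};

console.log(pact.interactions.map((i) => i.response.status)); // → [ 200, 401 ]
```

The BDCT check then only has to confirm that each interaction here is satisfiable by the provider's OpenAPI document.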
• How a provider would test those
With any functional testing tool - we don't mandate the provider's choice here. The provider should test every endpoint they expose in their OpenAPI spec, to provide strong confidence that the implemented service matches the documented specification. This evidence is uploaded to Pactflow when publishing the provider spec, and forms the provider contract. The bi-directional check ensures that the pact contracts generated by the consumer form a subset of the provider's contract, but this check is static, so it does not exercise the real provider.
We should add a 401 case to the BDCT examples though, so good shout Chris!
šŸ‘ 1
c
Ok, so (whitewashing over a lot of the complexity), if my provider has tested a 200 and 401 state, and it's in the generated OpenAPI, and my consumer has mocked a 200 and 401, then the BDCT will marry those two up and give it a green light?
šŸ‘ 1
We (thankfully) already run all our integration tests against a Swagger file, to ensure that the input/output is validated, so we have that 'strong confidence' as you put it.
y
Even if your provider hasn't tested it, if it's in the spec and the generated pact files can be satisfied by the spec, then the BDCT can-i-deploy will give you the green light. So in essence it's a 'maybe' rather than the firm 'yes' offered by CDCT with Pact. Sounds like you are covered on the functional testing side of the provider, which is good. What tool are you using for the provider testing, out of interest?
c
it's a fairly typical node / nyc / mocha testing stack
šŸ‘ 1
y
How are you ensuring conformance to the Swagger spec? Is it generated as code output?
c
Ahh I see
I wrote about it a few years ago.. two secs..
y
Mint
Thanks my good man!
API validation on the request, API validation on the response(s) in the tests
little dusty now but the basics are still there
It doesn't check for things that are documented in the spec but not tested - I never worked out how to write a code-coverage part for that
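One hedged sketch of that missing "coverage" piece: record which (method, path, status) combinations the tests actually exercised, then diff them against what the spec documents. The `spec` object below is a simplified, hypothetical stand-in for a parsed OpenAPI/Swagger document, not any particular library's output.

```javascript
// Sketch: report documented responses that no test exercised.
// `spec` is a simplified stand-in for a parsed OpenAPI document.
function untestedResponses(spec, exercised) {
  const seen = new Set(
    exercised.map((e) => `${e.method.toUpperCase()} ${e.path} ${e.status}`)
  );
  const missing = [];
  for (const [path, ops] of Object.entries(spec.paths)) {
    for (const [method, op] of Object.entries(ops)) {
      for (const status of Object.keys(op.responses)) {
        const key = `${method.toUpperCase()} ${path} ${status}`;
        if (!seen.has(key)) missing.push(key);
      }
    }
  }
  return missing;
}

// Example: the spec documents 200 and 401 for POST /login,
// but the tests only ever hit the 200 case.
const spec = {
  paths: {
    "/login": { post: { responses: { 200: {}, 401: {} } } },
  },
};
const exercised = [{ method: "post", path: "/login", status: "200" }];

console.log(untestedResponses(spec, exercised)); // → [ 'POST /login 401' ]
```

Wiring the `exercised` list up to a real test run (e.g. from a supertest/mocha hook) is the part that varies per stack.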
y
Thanks for sharing mate, appreciate that, looking forward to reading with a coffee shortly
šŸ‘ 1
c
Another question, if I may, in the video demo a couple of weeks ago, it mentioned that the OpenAPI file is uploaded to Pactflow (which makes sense) and that the Postman test results are also uploaded - is there a generic format, or additional formats, that those test files could be in?
y
Ask away dude! Can be any content with a valid mime type - we have examples that just send a bool `true/false`
https://docs.pactflow.io/docs/bi-directional-contract-testing/contracts/oas#request-details
• `verificationResults.content` - The base64 encoded test results, which may be any output of your choosing (see base64 encoding below).
• `verificationResults.contentType` - The content type of the results. Must be a valid mime type.
c
ok so `verificationResults.content` can be literally anything - and it's just rendered in Pactflow - gotcha
šŸ‘ 1
y
We ask the user to send the name of the tool as well, so if a particular tool turns out to be really popular we could build a nice renderer for its results; currently they're just rendered as plain text.
šŸ‘ 1