What is the recommended approach to force a deploy...
# general
j
What is the recommended approach to force a deployment of a specific version (regardless of consumer or provider verification status) when `can-i-deploy` fails? I'm exploring the following workaround; is this a valid approach?

1. `create-version`
2. `publish-pact` / `verify-pact`
3. `can-i-deploy` (fails)
4. force deploy anyway (with an override flag or manual trigger)
5. `record-deployment`

Are there any risks or better alternatives to this flow?
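For concreteness, the flow I have in mind would look roughly like this with the Pact Broker CLI (the pacticipant name, version and environment below are placeholders, and the `|| true` is the "force anyway" part):

```bash
# 1. create-version: record the application version (and its branch) in the broker
pact-broker create-or-update-version \
  --pacticipant my-provider --version 1.1.0 --branch main

# 2. publish-pact / verify-pact happen in the consumer and provider builds
#    (pacts published to the broker, verification results published back)

# 3. can-i-deploy: check compatibility with everything in the target environment;
# 4. "|| true" swallows the failing exit code so the pipeline deploys anyway
pact-broker can-i-deploy \
  --pacticipant my-provider --version 1.1.0 --to-environment production \
  || true

# ... deployment step runs here ...

# 5. record-deployment: tell the broker this version is now live in production
pact-broker record-deployment \
  --pacticipant my-provider --version 1.1.0 --environment production
```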
y
first question is why would you want to do this? `can-i-deploy` has a dry-run mode that can be activated to ignore the result and suppress the failing exit code. and once you've got a broken version deployed, how would you expect to verify new/updated contracts against that environment? it would likely fail.
the recommended approach is to NEVER force a deployment of a specific version when `can-i-deploy` fails
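for reference, the dry run mode looks something like this (assuming the pact-broker client's `--dry-run` flag / `PACT_BROKER_CAN_I_DEPLOY_DRY_RUN` environment variable; name, version and environment are placeholders):

```bash
# dry run: the compatibility check still runs and the result is printed,
# but the command exits 0 even on failure, so the pipeline is not blocked
pact-broker can-i-deploy \
  --pacticipant my-provider --version 1.1.0 --to-environment production \
  --dry-run

# or switch it on via an environment variable, handy for temporarily
# enabling it across a pipeline without editing the can-i-deploy calls
export PACT_BROKER_CAN_I_DEPLOY_DRY_RUN=true
```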
j
This flow is critical in scenarios where the consumer is the sole blocker preventing the release of the provider API. To proceed with the deployment, the provider may need to force the release, after which the consumer will be responsible for updating their implementation to align with the new provider contract.
y
your consumer failures would never make it past their main branch and be able to be deployed, as they would fail `can-i-deploy` (unless someone circumvented it and deployed anyway). therefore you'll never block your provider from deploying into an environment, as it should still be compatible with all deployed and released consumers. you can enable pending pacts on the provider side, which stops the consumer's failing contract on main from failing the provider build
> after which the consumer will be responsible for updating their implementation to align with the new provider contract.

how do you know the consumer needs to update here? it may have been a genuine failure introduced by the providing team; they may have removed a field that only that consumer requires
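to enable pending pacts with the standalone verifier it would look something like this (a rough sketch; urls, names and versions are placeholders, and the language-specific verifiers expose an equivalent `enablePending` option):

```bash
# verify pacts fetched from the broker with pending pacts enabled:
# pacts that have never been successfully verified before are still verified
# and the results reported, but a failure does not fail the provider build
pact-provider-verifier \
  --provider my-provider \
  --provider-base-url http://localhost:8080 \
  --pact-broker-base-url https://my-broker.example.com \
  --enable-pending \
  --provider-app-version 1.1.0 \
  --publish-verification-results
```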
j
Apologies, let me clarify with a scenario: `consumer_v1` has always worked with `provider_v1.0` (e.g., `/api/v1/*`), along with many other consumers. The provider now needs to deprecate and eventually remove a specific API endpoint, but must continue using the `/api/v1/*` path structure (due to versioning constraints or compatibility policies). Most consumers have already migrated to alternative endpoints, except for `consumer_v1`. As a result, the provider needs to upgrade to `provider_v1.1`, which removes the deprecated API, and proceed with deployment despite `consumer_v1` not being compatible. What's the recommended way to handle this kind of situation?
y
you have to make a choice there: either a breaking change with downtime, or an alternative mechanism that allows a more graceful change. There are probably more options, but here are some I can think of straight away:

1. can you support both live versions of the provider, and remove the release of provider v1.0 once consumer_v1 stops consuming it?
2. can you deal with consumer_v1 having a period of downtime? if so, deploy v1.1 of the provider. the consumer will not be able to deploy their code until they have updated to consumer v1.1. the provider could choose to ignore a consumer explicitly in the `can-i-deploy` call to get a positive result (see the sketch below), or ignore the exit code as you proposed
3. migrate the old endpoint to the new v1.1 API temporarily to allow for a single provider deployment, and allow consumer v1.0 to gracefully deprecate that endpoint call

essentially the breaking change in the provider (removal of a field) will either need a big-bang deployment, or something like an expand-and-contract approach to support the v1.0 and v1.1 consumers for a short period of time
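a rough sketch of the ignore variant from option 2 (names, versions and environment are placeholders):

```bash
# exclude consumer_v1's (known, accepted) failing verification from the decision
pact-broker can-i-deploy \
  --pacticipant my-provider --version 1.1.0 --to-environment production \
  --ignore consumer_v1

# after deploying, record it so the broker still knows what is in production
pact-broker record-deployment \
  --pacticipant my-provider --version 1.1.0 --environment production
```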
j
Thanks for sharing your options. I'm leaning towards option 2, as it requires less effort from the provider and enables a faster deployment of the updated API. It also serves as a more effective way to prompt the consumer to update their implementation. While this approach may risk encouraging teams to bypass contract testing (since widespread verification failures could become the norm), I believe having a mechanism for manual intervention is necessary in certain cases — especially when deployment is critical. That said, it should be used sparingly and with caution to avoid undermining the overall integrity of the testing process.