First-class support for Kafka
Kafka is emerging as a standard for sharing business events across the enterprise. First-class connectors for Kafka, along with integration with (or at least consideration of) the Avro schema registry, would simplify adoption.
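Pact already supports non-HTTP interactions via message pacts, so a first-class Kafka connector could plausibly build on that. Below is a minimal sketch using pact-js's existing MessageConsumerPact API; the pacticipant names, event shape, handleOrderEvent handler, and the kafka_topic metadata key are all assumptions for illustration, not an agreed design.

```typescript
// Sketch only: assumes a jest-style test runner and pact-js's message pact API.
import path from "path";
import { MessageConsumerPact, synchronousBodyHandler, Matchers } from "@pact-foundation/pact";

const { like } = Matchers;

// Hypothetical consumer-side handler for an order event read from Kafka.
function handleOrderEvent(event: { orderId: string; status: string }): void {
  if (!event.orderId) {
    throw new Error("orderId is required");
  }
}

const messagePact = new MessageConsumerPact({
  consumer: "OrderDashboard",
  provider: "OrderService",
  dir: path.resolve(process.cwd(), "pacts"),
});

describe("order events topic", () => {
  it("handles an order-created event", () => {
    return messagePact
      .expectsToReceive("an order created event")
      .withContent({
        orderId: like("1234"),
        status: like("CREATED"),
      })
      .withMetadata({
        // Metadata is where Kafka specifics (topic, key, content type) could
        // live; "kafka_topic" is an assumed convention, not a standard.
        contentType: "application/json",
        kafka_topic: "orders",
      })
      .verify(synchronousBodyHandler(handleOrderEvent));
  });
});
```

Avro schema registry integration would presumably slot in at the content/metadata layer, where the content type and schema id are negotiated.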
Support "smart mode toggle" where the verification task automatically switches between verifying the pact at the specified URL (if supplied) or the pacts for the configured tags/branches/environments if no URL supplied
This would make it easier for people to get set up.
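The toggle can be hand-rolled today; the request is for the verification task to do it automatically. Here is a minimal sketch of the hand-rolled version with pact-js, assuming a PACT_URL environment variable set by a webhook-triggered build, and an illustrative broker URL and selectors:

```typescript
import { Verifier, VerifierOptions } from "@pact-foundation/pact";

const baseOpts: VerifierOptions = {
  provider: "OrderService",
  providerBaseUrl: "http://localhost:8080",
};

// If a pact URL is supplied (e.g. by a webhook-triggered build), verify just
// that pact; otherwise fall back to the broker with the configured selectors.
const pactUrl = process.env.PACT_URL;
const opts: VerifierOptions = pactUrl
  ? { ...baseOpts, pactUrls: [pactUrl] }
  : {
      ...baseOpts,
      pactBrokerUrl: "https://your-broker.example.com",
      consumerVersionSelectors: [
        { mainBranch: true },
        { deployedOrReleased: true },
      ],
    };

new Verifier(opts).verifyProvider();
```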
Allow pact-stub-server to respond with data that depends on the request
Let's make pact-stub-server smarter by allowing it to include request data as part of the response data, as in the attached picture below. Furthermore, it would be really nice if it could refer to values in the API path as well.
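As a purely hypothetical illustration of the requested behaviour (no such generator exists in Pact today), a response could declare that some of its values are derived from the incoming request, e.g. via an invented fromRequest generator:

```typescript
// Hypothetical "fromRequest" generator: invented for illustration only.
type FromRequest = { "pact:generator": "fromRequest"; expression: string };
const fromRequest = (expression: string): FromRequest => ({
  "pact:generator": "fromRequest",
  expression,
});

// The stub would evaluate the expressions against the actual incoming
// request when building the response.
const interaction = {
  description: "get a user by id",
  request: { method: "GET", path: "/users/42" },
  response: {
    status: 200,
    body: {
      // Echo the last path segment back as the id...
      id: fromRequest("$.request.path[-1]"),
      // ...and a header value into the body.
      requestedBy: fromRequest("$.request.headers['X-User']"),
    },
  },
};

console.log(JSON.stringify(interaction, null, 2));
```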
Allow can-i-deploy to return true when a verification is missing between a consumer version and a provider version, if there is another successful verification that covers all the interactions in the pact.
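A sketch of the proposed decision rule (not the Pact Broker's actual data model; the types and names here are illustrative):

```typescript
interface Verification {
  providerVersion: string;
  success: boolean;
  interactionIds: Set<string>; // interactions covered by this verification
}

// The pact between one consumer version and the provider.
interface Pact {
  interactionIds: Set<string>;
}

// Today: require a verification for this exact consumer/provider version pair.
// Proposed: if that is missing, accept any successful verification (e.g. of an
// identical pact published by another consumer version) that covers every
// interaction in this pact.
function canIDeploy(
  pact: Pact,
  directVerification: Verification | undefined,
  otherVerifications: Verification[]
): boolean {
  if (directVerification) {
    return directVerification.success;
  }
  return otherVerifications.some(
    (v) =>
      v.success &&
      Array.from(pact.interactionIds).every((id) => v.interactionIds.has(id))
  );
}
```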
Rename pacticipant to participant
Pacticipant is a nice pun (and I do love puns), but it's hard to type, easy to misread as participant, and occasionally results in PRs to change it to participant. Also, I think "participant" carries the appropriate meaning.
It can be really useful to be able to show teams a dashboard with aggregated information about their pact usage. For example: “Here are all the times we would have introduced bugs into production in the last 6 months if it weren’t for Pact”, or “Here are the pacts that are pending for providers to implement a change”, etc.
gRPC is a common microservice framework. Commonly used with Protobuf as the encoding and HTTP/2 as the transport, it is a highly efficient and type-safe architecture. It is often said that contract testing is not required for gRPC, due to the forwards/backwards compatibility encoded into the schemas.

Arguments for gRPC and Protobuf support in Pact (see https://developers.google.com/protocol-buffers/docs/overview):

* > "You can add new fields to your message formats without breaking backwards-compatibility; old binaries simply ignore the new field when parsing. So if you have a communications protocol that uses protocol buffers as its data format, you can extend your protocol without having to worry about breaking existing code."
  * Whilst this won't break the "contract", it may actually not be a plausible situation. There are no guarantees that the actual RPC service will still work as expected.
* The protocol definition itself doesn't guarantee it can handle all the situations the consumers expect to use:
  * Proto 3 removes "mandatory fields": "Making every field optional provides a clearer contract to clients. They are explicitly responsible for checking that every field has been populated with something valid."
  * This means specification by example (a la Pact) is very important to ensuring the functional contract behaviour.
  * These are similar issues to the challenges of "Optional" or "Any" schemas in SOAP SOA architectures.
* The backwards-compatibility guarantee doesn't tell you about _forwards_ compatibility, i.e. it doesn't help you coordinate a release.
* OneOf semantics (see https://developers.google.com/protocol-buffers/docs/proto3#backwards-compatibility-issues).
* The protocol buffer definition is separate from the HTTP endpoint serving it (see the value proposition above).
* It is absolutely possible to break a proto file by modifying numbered tags (field identifiers) or removing fields (see the sketch after this list).

Related Resources

* Buf ( https://buf.build/ ) - a useful tool for static protobuf linting, backwards compatibility checks and introspection
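To make that last point concrete, here is a small runnable sketch using the protobufjs npm package (the User message is invented for illustration) showing that renumbering field tags silently corrupts data for existing readers, with no error raised:

```typescript
import * as protobuf from "protobufjs";

const v1 = protobuf.parse(`
  syntax = "proto3";
  message User {
    string name  = 1;
    string email = 2;
  }
`).root.lookupType("User");

// A "refactor" that swaps the field numbers: still a valid proto file, still
// compiles, but no longer wire-compatible with v1.
const v2 = protobuf.parse(`
  syntax = "proto3";
  message User {
    string name  = 2;
    string email = 1;
  }
`).root.lookupType("User");

const bytes = v1.encode({ name: "Ada", email: "ada@example.com" }).finish();

// Decoding v1 bytes with the v2 schema swaps the values without any error:
// { name: 'ada@example.com', email: 'Ada' }
console.log(v2.decode(bytes).toJSON());
```

Static tooling like Buf catches this class of change; Pact-style specification by example would catch the behavioural gaps the bullets above describe.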