# general
s
Hi folks, we have an interesting case which I'm not super sure I'm approaching correctly; maybe the experts can help me wrap my head around this.
• We have services that communicate over Kafka, using `kafka.Message` structs.
• The difficulty here is that it's not only the message value that's part of the contract, but also the message key and headers.
• Value, key and headers are all just `[]byte` objects, so the actual payload could be JSON, protobuf or Avro; the particular example I'm looking at has the value and some of the headers encoded as protos.
• I don't think I can use the `pact-protobuf` plugin, since it's not a single proto message that I'm sending but a combination of a string key, a map of string to string/proto as headers, and a proto value.
• I don't think there's a good way to express such contracts in Pact. I was thinking of writing some sort of plugin that would accept configuration like the one below, but ideally, if it deals with e.g. protos, it should defer to the `pact-protobuf` plugin for data conversion, i.e. I don't want to rewrite half of that plugin inside this one.
```json
{
    "key": "abc",
    "keyType": "string",
    "keyProto": null,  // only necessary when the key is protobuf
    "value": {...},
    "valueType": "proto",
    "valueProto": "MyProtoMessage",
    "headers": [
        {
            "name": "whatever",
            "type": "string",
            "value": "some value",
            "proto": null
        }
    ]
}
```
• I could probably craft a separate proto representing just the Kafka message specific to this service, i.e. one where the value and headers are typed, and use the `pact-protobuf` plugin, but I'm not sure it'll deal well with headers of different types either; though that's probably a lesser issue, and maybe even worth a refactor on the product side.
• Maybe I'm tackling this from the wrong direction completely, wdyt?
• Thanks! 🙂
Forgot to mention the alternative approach I've tried:
• Added functions to convert from `kafka.Message` to raw JSON on the provider side and vice versa on the consumer side. It works reasonably OK, but it trips up a bit on converting primitive types: when decoding into `interface{}`, Go's `json.Unmarshal` turns every number into `float64`, so we lose some fidelity, since there are a few too many conversion steps between protos, strings and maps, using both the proto and JSON marshalers.
• This is basically an attempt to not write a plugin, but it seems to come with too many compromises.
• Also, this means the types are not really part of the contract, i.e. the contract itself is plain JSON, and the actual type is only implied.
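(As an aside, the `float64` loss is reproducible in isolation. A minimal sketch, using a hypothetical "offset" field, showing the default behaviour and the usual `json.Number` workaround:)

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

func main() {
	// 2^53 + 1: the first integer that float64 cannot represent exactly.
	in := []byte(`{"offset": 9007199254740993}`)

	// Decoding into interface{} turns every JSON number into float64.
	var lossy map[string]interface{}
	json.Unmarshal(in, &lossy)
	fmt.Printf("%T %v\n", lossy["offset"], lossy["offset"]) // float64 9.007199254740992e+15 (precision lost)

	// A json.Decoder with UseNumber keeps the original digits as a
	// json.Number, which converts to int64 without loss.
	dec := json.NewDecoder(bytes.NewReader(in))
	dec.UseNumber()
	var exact map[string]interface{}
	dec.Decode(&exact)
	n, _ := exact["offset"].(json.Number).Int64()
	fmt.Printf("%T %v\n", n, n) // int64 9007199254740993
}
```

(`json.Number` only helps where you control the decode step, though; once a value has round-tripped through `float64` anywhere in the chain, the digits are gone.)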
m
mmm I think you’re right there. What is the actual payload sent over the wire in the end? Is it JSON with encoded protobuf in it?
s
It's a Kafka message: the key is string -> bytes; the value is JSON or protobuf -> bytes; headers are either string -> bytes or JSON/proto -> bytes.
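(For reference, assuming the segmentio/kafka-go client, this is roughly the shape being described; `buildMessage` and its arguments are hypothetical, just to make the byte-level contract concrete:)

```go
package contract

import "github.com/segmentio/kafka-go"

// buildMessage shows the shape under discussion: all three parts of the
// contract are opaque bytes on the wire, and the encoding of each one
// (string, JSON, proto) is only implied by convention.
func buildMessage(valueProto, headerProto []byte) kafka.Message {
	return kafka.Message{
		Key:   []byte("abc"), // string -> bytes
		Value: valueProto,    // JSON or protobuf -> bytes
		Headers: []kafka.Header{
			{Key: "trace-id", Value: []byte("some value")}, // string -> bytes
			{Key: "ctx", Value: headerProto},               // proto -> bytes
		},
	}
}
```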
m
Yeah, that’s tricky. Currently the bodies in Pact are considered a single “content type”. It sounds like you need a Kafka-message-specific plugin that can delegate content-type matching for fields of different types.
s
can a plugin delegate to another plugin?
m
That’s a good question. @rholshausen?
r
Not yet
👍 2
s
> Currently the bodies in Pact are considered a single “content type”. It sounds like you need a Kafka-message-specific plugin that can delegate content-type matching for fields of different types.
I'm wondering if I can bypass this whole conundrum by putting the key and headers in metadata, and the Kafka value in the body of the message. That's basically how HTTP works today: the contract consists of the body, but also additional data like the URL, query params, headers, etc. If I put all the extra data in metadata in the async pact too, that might work.
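(A rough sketch of that split, again assuming segmentio/kafka-go; `PactMessage`, `toPactMessage` and the metadata key names are hypothetical illustrations, not pact-go API:)

```go
package contract

import (
	"encoding/base64"

	"github.com/segmentio/kafka-go"
)

// PactMessage mirrors the async Pact model: one body plus free-form
// metadata. (Hypothetical struct, for illustration only.)
type PactMessage struct {
	Metadata map[string]string
	Body     []byte
}

// toPactMessage maps a Kafka message onto that model the same way HTTP
// pacts do: the value becomes the body, and everything else (key,
// headers) rides along as metadata.
func toPactMessage(m kafka.Message) PactMessage {
	meta := map[string]string{
		"kafka.key": string(m.Key), // the key is a plain string in this service
	}
	for _, h := range m.Headers {
		// Proto-encoded header values are arbitrary bytes, so base64-encode
		// them to survive the trip through string-valued metadata.
		meta["kafka.header."+h.Key] = base64.StdEncoding.EncodeToString(h.Value)
	}
	return PactMessage{Metadata: meta, Body: m.Value}
}
```

(The open question is whether Pact's metadata matchers are expressive enough for the proto-encoded header values; base64 keeps the bytes intact but hides their structure from matching.)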
m
It’s worth a shot and would be more consistent with the rest of the Pact model/ecosystem I think. See how far you get!
👍 1