# getting-started
nutritious-bird-77396
We are hitting the Kafka broker message size limit when a huge schema is pushed in our MAE (the message comes in via the GMS API). Are there any plans to compress messages sent to / read from Kafka by GMS?
abundant-dinner-2901
@nutritious-bird-77396 It can be set in the broker settings; there's no need to enable this change in the MAE consumer/producer. If the topic already exists, you can apply it from the Kafka container using this script (change the topic name):
```
bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name test-topic-zip --alter --add-config compression.type=gzip
```
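If you'd rather check the topic-level setting programmatically, here is a minimal sketch using the confluent-kafka Python client; the bootstrap server and topic name are placeholders matching the command above. It reads the topic config back so you can confirm `compression.type` is now `gzip`:

```python
from confluent_kafka.admin import AdminClient, ConfigResource

# Placeholder broker address and topic name; adjust to your cluster.
admin = AdminClient({"bootstrap.servers": "localhost:9092"})
resource = ConfigResource(ConfigResource.Type.TOPIC, "test-topic-zip")

# describe_configs() returns {ConfigResource: future}; each future resolves to
# a dict of config name -> ConfigEntry for that topic.
for res, future in admin.describe_configs([resource]).items():
    configs = future.result()
    print(res, "compression.type =", configs["compression.type"].value)
```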
But this can be a problem for huge companies with thousands of entities. @mammoth-bear-12532, why are all entities described in a single Avro schema? It’s really hard to maintain: if an MCE event is broken, it’s nearly impossible to find which entity was corrupted.
mammoth-bear-12532
Hi @nutritious-bird-77396 @abundant-dinner-2901: we plan to move towards skinny schemas using the MCP topics. (https://datahubproject.io/docs/advanced/mcp-mcl/)
we'll roll out this change gradually over the next few releases
this should remove the dependency on the monolithic MCE schema
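To illustrate what the skinny-schema (MCP) model looks like from a client's perspective, here is a rough sketch using the acryl-datahub Python emitter; the GMS URL, dataset URN, and description below are placeholder assumptions. Each proposal carries exactly one aspect for one entity, so a malformed event only affects that aspect rather than an entire monolithic snapshot:

```python
from datahub.emitter.mcp import MetadataChangeProposalWrapper
from datahub.emitter.rest_emitter import DatahubRestEmitter
from datahub.metadata.schema_classes import ChangeTypeClass, DatasetPropertiesClass

# Placeholder GMS endpoint.
emitter = DatahubRestEmitter(gms_server="http://localhost:8080")

# One proposal = one entity + one aspect, instead of a snapshot of everything.
mcp = MetadataChangeProposalWrapper(
    entityType="dataset",
    changeType=ChangeTypeClass.UPSERT,
    entityUrn="urn:li:dataset:(urn:li:dataPlatform:hive,example_db.example_table,PROD)",
    aspectName="datasetProperties",
    aspect=DatasetPropertiesClass(description="Example single-aspect update"),
)

emitter.emit_mcp(mcp)
```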