Margaret Figura
10/22/2025, 4:00 PM
println() or other work per message). Again, CPU usage is under 10% for all components, but I see the same small drops.
I started debugging and found that the drops happen because the Netty connection's .isWritable() returns false, which causes Pulsar to drop the message immediately. isWritable() "Returns true if and only if the I/O thread will perform the requested write operation immediately", meaning there is room available in Netty's ChannelOutboundBuffer. I found that if I increase Netty's low/high water marks, the drops go away, but that isn't possible without a code change to the Pulsar broker.
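To illustrate the mechanism (a minimal Netty sketch, not the actual Pulsar broker code; the water-mark values are made up): isWritable() flips to false once the pending bytes in the ChannelOutboundBuffer exceed the high water mark, and only returns true again once they fall below the low water mark, so raising both marks gives more headroom before writes are refused.
```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelOption;
import io.netty.channel.WriteBufferWaterMark;

// Minimal sketch only -- not Pulsar broker code.
public class WaterMarkSketch {

    // Raising the low/high water marks gives the outbound buffer more
    // headroom before Channel.isWritable() starts returning false.
    static void configure(ServerBootstrap bootstrap) {
        bootstrap.childOption(ChannelOption.WRITE_BUFFER_WATER_MARK,
                new WriteBufferWaterMark(4 * 1024 * 1024, 8 * 1024 * 1024));
    }

    // Conceptually, the pattern I'm seeing: if the channel is not writable
    // (pending bytes above the high water mark), the message is dropped
    // instead of being buffered further.
    static void sendOrDrop(Channel channel, Object msg) {
        if (channel.isWritable()) {
            channel.writeAndFlush(msg);
        } else {
            // drop path: outbound buffer is above the high water mark
        }
    }
}
```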
I'm looking for any suggestions on different configurations I can try. Thanks!!

Vaibhav Swarnkar
10/25/2025, 7:26 PM

Kiryl Valkovich
10/26/2025, 7:42 PM

Andrew
10/29/2025, 5:11 AM

David K
10/29/2025, 12:47 PM

Jack Pham
10/29/2025, 5:41 PM
..<subscription>-<consumerName>-DLQ) be as well, since it uses the consumer name in the producer’s name? Will the consumer stop consuming messages if this happens?
We are using Pulsar client 4.0.0, where the producer name is constructed as:
.producerName(String.format("%s-%s-%s-DLQ", this.topicName, this.subscription, this.consumerName))
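To make the question concrete, here is a minimal sketch of a consumer on that subscription (service URL, topic, and subscription names are made up); the unique consumerName suffix is only an illustration of how the derived <topic>-<subscription>-<consumerName>-DLQ producer name could be kept distinct per instance, not something we currently do:
```java
import java.util.UUID;
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.DeadLetterPolicy;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.SubscriptionType;

public class DlqNameSketch {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")   // made-up URL
                .build();

        // A unique consumerName per instance keeps the derived
        // <topic>-<subscription>-<consumerName>-DLQ producer name distinct
        // across instances sharing the same subscription.
        Consumer<byte[]> consumer = client.newConsumer()
                .topic("persistent://my-tenant/my-ns/my-topic")
                .subscriptionName("my-sub")
                .subscriptionType(SubscriptionType.Shared)
                .consumerName("worker-" + UUID.randomUUID())
                .deadLetterPolicy(DeadLetterPolicy.builder()
                        .maxRedeliverCount(3)
                        .build())
                .subscribe();

        consumer.close();
        client.close();
    }
}
```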
Romain
10/29/2025, 7:23 PM
schemaValidationEnforced=true and isAllowAutoUpdateSchema=false (only under an approved process), so only admins can push schemas.
Here’s the issue: when a consumer is configured with a DeadLetterPolicy and a message fails too many times (or is negatively acknowledged repeatedly), the client will publish the message to a dead-letter topic (default name <topic>-<subscription>-DLQ) after the redelivery threshold.
That topic doesn’t necessarily exist ahead of time (unless created before), so when it’s first used it may trigger topic creation and/or schema registration. Because our namespace forbids auto schema updates and enforces schemas, this can fail - the consumer isn’t authorized to register the schema for the DLQ topic.
To work around this, we're creating a separate namespace (e.g., <namespace>-dlq) where:
• isAllowAutoUpdateSchema=true
• schemaValidationEnforced=false
so that consumers can safely publish DLQ messages without schema conflicts (a rough consumer-side sketch is below).
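Roughly what the consumer side looks like under this workaround (a sketch only, with made-up tenant/namespace/topic names): the DeadLetterPolicy points the dead-letter topic explicitly at the relaxed namespace instead of relying on the default <topic>-<subscription>-DLQ name.
```java
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.DeadLetterPolicy;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.SubscriptionType;

public class DlqNamespaceSketch {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")   // made-up URL
                .build();

        Consumer<byte[]> consumer = client.newConsumer()
                .topic("persistent://my-tenant/my-namespace/my-topic")
                .subscriptionName("my-sub")
                .subscriptionType(SubscriptionType.Shared)   // DLQ applies to Shared/Key_Shared
                // Send dead letters into the relaxed "-dlq" namespace, where
                // auto schema updates are allowed, so the consumer can create
                // the DLQ topic and register its schema there.
                .deadLetterPolicy(DeadLetterPolicy.builder()
                        .maxRedeliverCount(5)
                        .deadLetterTopic("persistent://my-tenant/my-namespace-dlq/my-topic-my-sub-DLQ")
                        .build())
                .subscribe();

        consumer.close();
        client.close();
    }
}
```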
Is this the recommended approach? Is there a cleaner way to allow DLQ schema creation while keeping production namespaces locked down?
Any official guidance or community best practices would be really appreciated 🙏
Thanks!

Francesco Animali
10/30/2025, 8:52 AM

Chaitanya Gudipati
11/05/2025, 3:48 PM

Jack Pham
11/05/2025, 11:11 PM

Tomek Zmijowski
11/06/2025, 9:46 PM
Publish ordering guarantee
Consumer ordering guarantee
Incoming replicator ordering guarantee
Outgoing replicator ordering guarantee with the topic unavailability tradeoff
Auto resource creation (tenant, namespace, partitioned-topic, subscriptions) in a green cluster
Auto topic deletion after migration successfully completed for a topic
Enable migration at cluster level or per namespace level
Stats to show topic's migration state
But the thing is that, due to missing configuration steps, it's hard to test this feature. Can someone explain how to get started with it?

Ujjain Bana
11/10/2025, 1:57 PM

bhasvij
11/11/2025, 2:29 PM

Nithin Subbaraj
11/12/2025, 10:47 AM

Lari Hotari
11/17/2025, 8:58 AM

Alexandre Burgoni
11/17/2025, 9:27 AM
504 Gateway Timeout from Pulsar clients in a production cluster? We are currently experiencing proxy timeouts from time to time on multiple clusters with an HTTP 504; the exception message is SSL BAD PACKET LENGTH. It looks like an issue with the proxy-broker connection pool, but we cannot prove it yet. We're running 4.1.0.
We have to reboot the proxies to fix the issue for now.

Alexander Brown
11/17/2025, 7:03 PM

David K
11/17/2025, 7:45 PM

Ben Hirschberg
11/17/2025, 11:13 PM
sensor_id, but I don’t want long-lived key-to-consumer stickiness. Instead, I’m trying to achieve this logic:
If no consumer is currently processing sensor_id = X, then the next message for that sensor should be assigned to the next available consumer (round-robin or least-loaded).
All while preserving ordering and ensuring no two consumers ever process the same key concurrently.
KeyShared ensures ordering and exclusivity, but it uses stable key-range hashing, so a key stays with one consumer until that consumer dies.
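For reference, this is roughly the Key_Shared setup I mean (service URL and topic names are placeholders): the producer keys each message by sensor_id, and the broker pins that key's hash range to a single consumer until it disconnects.
```java
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.SubscriptionType;

public class KeySharedSketch {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")   // placeholder URL
                .build();

        // Producer keys each message by sensor_id, so per-sensor order is kept.
        Producer<byte[]> producer = client.newProducer()
                .topic("persistent://my-tenant/my-ns/sensor-readings")
                .create();
        producer.newMessage()
                .key("sensor-42")                          // the sensor_id
                .value("reading".getBytes())
                .send();

        // Key_Shared subscription: the broker hashes the key into a range that
        // is owned by exactly one consumer, so that consumer keeps the key
        // until it disconnects -- the stickiness described above.
        Consumer<byte[]> consumer = client.newConsumer()
                .topic("persistent://my-tenant/my-ns/sensor-readings")
                .subscriptionName("sensor-processing")
                .subscriptionType(SubscriptionType.Key_Shared)
                .subscribe();

        consumer.close();
        producer.close();
        client.close();
    }
}
```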
Is there any Pulsar pattern, config, or upcoming feature that supports dynamic per-message key assignment instead of sticky key-range ownership?
Or is this fundamentally outside Pulsar’s delivery semantics?
Thanks! 🙏

Ben Hirschberg
11/18/2025, 6:12 AM
KeyShared option, since we do want messages to be processed in order per key (this is something our design requires)

Sahin Sarkar
11/20/2025, 6:23 AM

Sahin Sarkar
11/20/2025, 7:24 AM

Francesco Animali
11/20/2025, 2:28 PM

Lari Hotari
11/21/2025, 7:56 AM

DANIEL STRAUGHAN
11/21/2025, 7:14 PM
bin/pulsar-admin functions update --tenant <TENANT> --namespace <NS> --name example-test-function --update-auth-data to perform this functionality. Is there a way to do this with the functions REST API?

Jack Pham
11/22/2025, 1:15 AM
org.apache.pulsar.client.api.PulsarClientException$FeatureNotSupportedException: The feature of getting partitions without auto-creation is not supported by the broker. Please upgrade the broker to version that supports PIP-344 to resolve this issue.
Looking at the code, I see something like useFallbackForNonPIP344Brokers; it seems that from 4.0.7 the client no longer supports falling back? Which version between 4.0.0 and 4.0.7 has the DLQ producer name conflict fix but still supports the fallback for non-PIP-344 brokers?

Thomas MacKenzie
11/25/2025, 1:19 AM
shared subscription with 16 partitions.
We are using Pulsar 4.0.6 (via the Helm chart), and our applications use the Go client (commit ed7d4980034871e1db28770576151c4c05c7d0ea).
I've noticed this behavior a few times already. I know it happened once when the brokers were restarted (a rollout to move workload to other k8s nodes, no config change), but I'm not sure whether it also happens when the applications (Go apps) restart, and I'm still trying to work out whether this is a server or client problem.
Is there anything that could potentially trigger that behavior? I'm thinking maybe partition discovery in the Go client, or an issue related to the partition consumer, as I think there is an issue on GitHub: https://github.com/apache/pulsar-client-go/issues/1426
I'm not seeing anything obvious that would fix the issue in the release notes of 4.0.7 or 4.0.8, or similar issues open on GitHub.
Regardless, any feedback is welcome. Thanks for the help!

Florian Federighi
11/25/2025, 6:08 PM
2025-11-25T11:18:25,109+0000 [pulsar-web-41-5] ERROR org.apache.pulsar.functions.worker.rest.api.SourcesImpl - Failed process Source debezium/XXXXXX package:
org.apache.distributedlog.exceptions.BKTransmitException: Failed to transmit entry : -6
------
Caused by: org.apache.distributedlog.exceptions.BKTransmitException: Failed to write to bookkeeper; Error is (-6) Not enough non-faulty bookies available: -6
All my bookies are healthy.
Do you have any ideas?

Xianle Wang
11/25/2025, 11:07 PM

Stanislaw Makarawec
11/26/2025, 3:49 PM