# general
  • d

    David K

    10/29/2025, 12:47 PM
    Unfortunately, there isn’t really an easy way to do that. The main issue is that the data is stored in very different formats, so it can’t just be copied.
  • j

    Jack Pham

    10/29/2025, 5:41 PM
    Hi all, I have a question about this issue: https://github.com/apache/pulsar-client-go/issues/1297. We don’t specify consumer names, and I understand that Pulsar will create a unique name in that case. However, if the consumer name is unique, shouldn’t the producer name (
    ..<subscription>-<consumerName>-DLQ
    ) be unique as well, since it embeds the consumer name? Will the consumer stop consuming messages if this happens? We are using Pulsar client 4.0.0, where the producer name is constructed as:
    Copy code
    .producerName(String.format("%s-%s-%s-DLQ", this.topicName, this.subscription, this.consumerName))
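    The question hinges on the DLQ producer name being a pure function of topic, subscription, and consumer name, so a unique consumer name should indeed yield a unique producer name. A minimal sketch of that reasoning in Python (hypothetical helper, mirroring the quoted Java format string):

```python
def dlq_producer_name(topic: str, subscription: str, consumer_name: str) -> str:
    # Mirrors the Java format quoted above:
    # String.format("%s-%s-%s-DLQ", topicName, subscription, consumerName)
    return f"{topic}-{subscription}-{consumer_name}-DLQ"

# Two consumers with distinct (auto-generated) names yield distinct DLQ producer names.
a = dlq_producer_name("orders", "sub", "consumer-abc123")
b = dlq_producer_name("orders", "sub", "consumer-def456")
assert a != b
```

    A name collision can therefore only occur if two consumers somehow end up with the same consumer name, which is what the linked issue is about.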
  • r

    Romain

    10/29/2025, 7:23 PM
    Hi everyone! We’re using Pulsar with strict schema governance: each namespace has
    schemaValidationEnforced=true
    and
    isAllowAutoUpdateSchema=false
    (only under an approved process), so only admins can push schemas. Here’s the issue: when a consumer is configured with a
    DeadLetterPolicy
    and a message fails too many times (or is negatively acknowledged repeatedly), the client will publish the message to a dead-letter topic (default name
    <topic>-<subscription>-DLQ
    ) after the redelivery threshold. That topic doesn’t necessarily exist ahead of time (unless created before), so when it’s first used it may trigger topic creation and/or schema registration. Because our namespace forbids auto schema updates and enforces schemas, this can fail - the consumer isn’t authorized to register the schema for the DLQ topic. To work around this, we’re creating a separate namespace (e.g.,
    <namespace>-dlq
    ) where: •
    isAllowAutoUpdateSchema=true
    •
    schemaValidationEnforced=false
    so that consumers can safely publish DLQ messages without schema conflicts. Is this the recommended approach? Is there a cleaner way to allow DLQ schema creation while keeping production namespaces locked down? Any official guidance or community best practices would be really appreciated 🙏 Thanks!
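    The workaround described above (a dedicated `-dlq` namespace with relaxed schema policies) can be sketched with pulsar-admin; the tenant and namespace names below are placeholders:

```shell
# Hypothetical tenant/namespace names; adjust to your environment.
pulsar-admin namespaces create my-tenant/my-ns-dlq

# Relax schema policies only in the DLQ namespace:
pulsar-admin namespaces set-is-allow-auto-update-schema --enable my-tenant/my-ns-dlq
pulsar-admin namespaces set-schema-validation-enforce --disable my-tenant/my-ns-dlq
```

    The consumer's DeadLetterPolicy would then set an explicit deadLetterTopic inside that namespace, so the DLQ producer registers its schema there rather than in the locked-down production namespace.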
  • f

    Francesco Animali

    10/30/2025, 8:52 AM
    hello everybody, I have a reproducible test case demonstrating that Pulsar 2-way geo-replication doesn't support exclusive-access producers. If this is a limitation by design, I'm not sure what benefit it brings; if it's not by design, I believe it should be resolved to strengthen and improve the Pulsar geo-replication feature. I have opened issue 24914. I'd appreciate it if someone could take a look and suggest whether it makes sense or not.
  • c

    Chaitanya Gudipati

    11/05/2025, 3:48 PM
    Hi folks, I was exploring Apache Pulsar Functions. From the documentation, Apache BookKeeper seems to be used as the state storage backend. A couple of questions on state storage for Pulsar Functions: 1. Is there any upper limit on the storage allocated for Pulsar Function state? 2. Is there a tiered storage paradigm for function state storage, similar to the one for event stream storage? TIA.
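    For context, Pulsar Functions expose state through the function context (counters and key/value state, backed by BookKeeper). A minimal Python sketch; in a real deployment the class would extend `pulsar.Function` and receive the runtime's context, so the stub context here is purely hypothetical, for local illustration:

```python
class WordCountFunction:
    """Sketch of a stateful function; a real one would extend pulsar.Function."""
    def process(self, input, context):
        for word in input.split():
            # In the real runtime, incr_counter persists state to BookKeeper.
            context.incr_counter(word, 1)
        return None

class StubContext:
    """Hypothetical in-memory stand-in for the Functions runtime context."""
    def __init__(self):
        self.counters = {}
    def incr_counter(self, key, amount):
        self.counters[key] = self.counters.get(key, 0) + amount
    def get_counter(self, key):
        return self.counters.get(key, 0)

ctx = StubContext()
WordCountFunction().process("to be or not to be", ctx)
assert ctx.get_counter("to") == 2
```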
  • j

    Jack Pham

    11/05/2025, 11:11 PM
    I’m facing a problem where the consumer’s internal DLQ producer can’t connect due to a conflict with another producer of the same name. This issue was fixed in version 4.1.0, but that requires Java 17, which is not feasible for us at the moment (we’re still on Java 8). I want to implement a short-term workaround that detects this issue and recreates the consumer with a different name, which, in theory, should resolve the conflict. The Pulsar client implementation, however, seems to hide the entire DLQ handling, with exceptions not thrown or propagated back to external code. Is there a way to achieve what I need here?
  • t

    Tomek Zmijowski

    11/06/2025, 9:46 PM
    Hey! I'm evaluating migration options for moving our Pulsar stacks from EC2 to Kubernetes environments, where the requirement is to minimize service downtime. So far so good: I've learned a lot about geo-replication, which I could use, but I'm wondering what the story is behind PIP-188: https://github.com/apache/pulsar/issues/16551. AFAIK the work has been delivered, but I can't find instructions on how to leverage and try this solution. The list of features is impressive:
    Copy code
    Publish ordering guarantee
    Consumer ordering guarantee
    Incoming replicator ordering guarantee
    Outgoing replicator ordering guarantee with the topic unavailability tradeoff
    Auto resource creation (tenant, namespace, partitioned-topic, subscriptions) in a green cluster
    Auto topic deletion after migration successfully completed for a topic
    Enable migration at cluster level or per namespace level
    Stats to show topic's migration state
    But due to missing configuration steps, it's hard to test this feature. Can someone explain how to get started with it?
  • u

    Ujjain Bana

    11/10/2025, 1:57 PM
    <URGENT> Hi, there are many .log files under the /data/bookie1/ledgers/current directory, occupying a large amount of space. How can I clean them up? As a temporary fix, can I manually delete these log files?
  • b

    bhasvij

    11/11/2025, 2:29 PM
    On the Pulsar Flink Connector side, is there currently no support from the Pulsar side?
  • n

    Nithin Subbaraj

    11/12/2025, 10:47 AM
    Hi team, on our Pulsar BookKeeper servers, under /data/bookkeeper/ledgers/current there are old .log files, some more than 2 years old. We set the broker retention for acknowledged messages to 60 minutes, but the old log files are still not getting deleted, and the ledger directory is the biggest consumer of disk space. Checked https://apache-pulsar.slack.com/archives/C5Z4T36F7/p176278302931373
  • l

    Lari Hotari

    11/17/2025, 8:58 AM
    📣 [ANNOUNCE] Apache Pulsar 3.0.15, 4.0.8 and 4.1.2 released 📣 For Pulsar release details and downloads, visit: https://pulsar.apache.org/download Release Notes are at: • 3.0.15: https://pulsar.apache.org/release-notes/versioned/pulsar-3.0.15/ (previous LTS release, support until May 2026) • 4.0.8: https://pulsar.apache.org/release-notes/versioned/pulsar-4.0.8/ (Current LTS release) • 4.1.2: https://pulsar.apache.org/release-notes/versioned/pulsar-4.1.2/ (Latest release) Please check the release notes for more details.
    🔥 3
    🎉 3
  • a

    Alexandre Burgoni

    11/17/2025, 9:27 AM
    Hi everyone, has anyone experienced a
    504 Gateway Timeout
    from Pulsar clients in a production cluster? We are currently experiencing proxy timeouts from time to time on multiple clusters with an HTTP
    504
    ; the exception message is
    SSL BAD PACKET LENGTH
    . It looks like an issue in the proxy-broker connection pool, but we cannot prove it yet. We're running
    4.1.0
    We have to reboot the proxies to fix the issue for now
  • a

    Alexander Brown

    11/17/2025, 7:03 PM
    What's the technical reason for having journal/ledger on the same NVMe versus having two separate drives, one for the journal and one for the ledgers?
  • d

    David K

    11/17/2025, 7:45 PM
    There are several reasons why you should have the journal and ledger disks on separate physical volumes, including performance. But the primary reason is that they serve two different purposes. The journal disk is used for short-term storage of messages before they are indexed and written to the ledger disk. The journal disk provides data durability guarantees: in the event of a failure, the bookie can recover and load the messages from the journal disk. However, if the journal disk fails, Pulsar will continue to operate. So separating them also eliminates a single point of failure from the storage layer.
    ✅ 1
  • b

    Ben Hirschberg

    11/17/2025, 11:13 PM
    Hi all 👋 I have a question about per-key scheduling behavior in Pulsar. I need strict ordering and exclusivity per
    sensor_id
    , but I don’t want long-lived key-to-consumer stickiness. Instead, I’m trying to achieve this logic:
    If no consumer is currently processing
    sensor_id = X
    , then the next message for that sensor should be assigned to the next available consumer (round-robin or least-loaded).
    All while preserving ordering and ensuring no two consumers ever process the same key concurrently.
    KeyShared
    ensures ordering and exclusivity, but it uses stable key-range hashing, so a key stays with one consumer until that consumer dies. Is there any Pulsar pattern, config, or upcoming feature that supports dynamic per-message key assignment instead of sticky key-range ownership? Or is this fundamentally outside Pulsar’s delivery semantics? Thanks! 🙏
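    As far as I can tell there is no built-in mode for this; the desired semantics (dynamic key-to-worker assignment with per-key exclusivity and ordering) can be sketched application-side. The dispatcher below is purely illustrative and assumes a single dispatcher process; it does not carry Pulsar's broker-side guarantees:

```python
from collections import defaultdict, deque
from itertools import cycle

class KeyDispatcher:
    """Assigns a key to the next worker only when no worker currently owns it;
    buffers messages per key to preserve ordering and exclusivity."""
    def __init__(self, workers):
        self.workers = cycle(workers)      # round-robin over worker ids
        self.owner = {}                    # key -> worker currently processing it
        self.pending = defaultdict(deque)  # key -> buffered messages

    def submit(self, key, msg):
        if key in self.owner:              # key busy: buffer to keep ordering
            self.pending[key].append(msg)
            return None
        worker = next(self.workers)        # free key: hand to next worker
        self.owner[key] = worker
        return (worker, msg)

    def done(self, key):
        if self.pending[key]:              # next buffered message may go to a
            worker = next(self.workers)    # different worker than before
            self.owner[key] = worker
            return (worker, self.pending[key].popleft())
        del self.owner[key]                # key is idle again
        return None
```

    The point of the sketch: once a key's current message is done and its buffer is empty, the key has no owner, so the next message for it can land on any worker, which is exactly the non-sticky behavior described above.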
  • b

    Ben Hirschberg

    11/18/2025, 6:12 AM
    We already use a shared subscription with the
    KeyShared
    option, since we do want messages to be processed in order per key (this is something our design requires)
    👍 1
  • s

    Sahin Sarkar

    11/20/2025, 6:23 AM
    Hi guys, how's everyone doing?
  • s

    Sahin Sarkar

    11/20/2025, 7:24 AM
    I have a scenario: microservices A and B (both multi-pod deployments). A does some computation and updates its database, then it needs to let all the pods of B know about the updates, which are some basic configs. How can this be done so the system is scalable? I also don't want service B to use many resources, because it is already resource-constrained. I have checked the following approaches, which are allowed in my company: 1. Kafka: each pod of service B gets a different consumer group, and they all subscribe to the same topic into which A pushes its updates. 2. Pulsar: similar to Kafka; I've also heard that Pulsar is more suitable for these kinds of fan-out scenarios, though I only have a rough idea of how. 3. ZooKeeper: preferred by my seniors, who have experience using it; they claim the ZooKeeper approach would use the least resources and would suit service B. Which approach should I use given the constraints? And if Pulsar, how exactly is its subscription model better than Kafka's?
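    On the Pulsar point in option 2: fan-out falls out of the subscription model, because every subscription receives its own full copy of the topic, so each B pod simply subscribes under its own subscription name. A sketch with hypothetical names (the actual client calls are commented out since they need a running broker):

```python
import os
import socket

# Hypothetical topic that service A publishes config updates to.
TOPIC = "persistent://public/default/b-config-updates"

def pod_subscription_name(prefix: str = "b-pod") -> str:
    # Each B pod derives a unique subscription name, so each pod gets a full
    # copy of every update A publishes (fan-out). This differs from a Kafka
    # consumer group, where partitions are split among group members.
    return f"{prefix}-{socket.gethostname()}-{os.getpid()}"

# With the real client (requires a broker), each pod would do roughly:
# import pulsar
# client = pulsar.Client("pulsar://broker:6650")
# consumer = client.subscribe(TOPIC, pod_subscription_name())

name = pod_subscription_name()
assert name.startswith("b-pod-")
```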
  • f

    Francesco Animali

    11/20/2025, 2:28 PM
    hey pulsarers! is there any chance that a fix for this issue gets merged? https://github.com/apache/pulsar/issues/24914
  • l

    Lari Hotari

    11/21/2025, 7:56 AM
    We've just released Apache Pulsar Helm Chart 4.4.0 🎉 The official source release, as well as the binary Helm Chart release, are available at https://www.apache.org/dyn/closer.lua/pulsar/helm-chart/4.4.0/?action=download The helm chart index at https://pulsar.apache.org/charts/ has been updated and the release is also available directly via helm. The main highlights of this release are the upgrade of the default Pulsar version to 4.0.8 and the Helm chart's integration with Dekaf UI. Dekaf is a web-based UI for Apache Pulsar, licensed under Apache 2.0 (GitHub: https://github.com/visortelle/dekaf). Thanks to @Kiryl Valkovich for this great contribution to the Apache Pulsar community! Release Notes: https://github.com/apache/pulsar-helm-chart/releases/tag/pulsar-4.4.0 Docs: https://github.com/apache/pulsar-helm-chart#readme and https://pulsar.apache.org/docs/helm-overview ArtifactHub: https://artifacthub.io/packages/helm/apache/pulsar/4.4.0 Thanks to all the contributors who made this possible.
    🤩 2
    thankyou 3
    🎉 2
  • d

    DANIEL STRAUGHAN

    11/21/2025, 7:14 PM
    Hello, I am trying to update the bearer token that a function is using in the Kubernetes runtime via the REST API. I am able to use the CLI
    bin/pulsar-admin functions update --tenant <TENANT> --namespace <NS> --name example-test-function --update-auth-data
    to perform this functionality. Is there a way to do this with the functions REST API?
  • j

    Jack Pham

    11/22/2025, 1:15 AM
    After updating the client from 4.0.0 to 4.0.7 (to pick up the change that resolves the DLQ producer name conflict issue), we got this exception:
    Copy code
    org.apache.pulsar.client.api.PulsarClientException$FeatureNotSupportedException: The feature of getting partitions without auto-creation is not supported by the broker. Please upgrade the broker to version that supports PIP-344 to resolve this issue.
    Looking at the code, I see something like
    useFallbackForNonPIP344Brokers
    ; it seems that starting with 4.0.7, the fallback is no longer supported? Which version between 4.0.0 and 4.0.7 has the DLQ producer name conflict fix but still supports the fallback for non-PIP-344 brokers?
  • t

    Thomas MacKenzie

    11/25/2025, 1:19 AM
    Hello, we are experiencing an issue on some (persistent) topics in our production cluster where some messages are consumed with a big delay after they are produced (45h or more). When the issue happens, it seems to be located on specific partition(s) of a given topic. I think (maybe) the issue occurs on topics that don't have much load (we're talking about a few messages a day), because the heavily used topics don't seem to show that behavior, but I can't be too sure as of right now. We also have more consumers (pods/instances) for the "big" topics, and they are all using a
    shared
    subscription with
    16
    partitions. We are using Pulsar
    4.0.6
    (via Helm chart); our applications use the Go client (commit
    ed7d4980034871e1db28770576151c4c05c7d0ea
    ). I've noticed this behavior a few times already. I know it happened once when the brokers were restarted (a rollout used to move workload to other k8s nodes, with no config change), but I'm not sure whether it also happens when the applications (Go apps) restart, and I'm still trying to determine whether this is a server or client problem. Is there anything that could potentially trigger this behavior? I'm thinking maybe partition discovery in the Go client, or an issue related to the partition consumer, as I think there is a related issue on GitHub: https://github.com/apache/pulsar-client-go/issues/1426 I'm not seeing anything obvious that could fix the issue in the release notes of
    4.0.7
    or
    4.0.8
    , or similar open issues on GitHub. Regardless, any feedback is welcome. Thanks for the help
  • f

    Florian Federighi

    11/25/2025, 6:08 PM
    Hello, I am unable to deploy a Debezium source connector. I was able to do so without any issues on my staging cluster, but not on the production cluster, and everything appears to be identical. Here is the error I am encountering:
    Copy code
    2025-11-25T11:18:25,109+0000 [pulsar-web-41-5] ERROR org.apache.pulsar.functions.worker.rest.api.SourcesImpl - Failed process Source debezium/XXXXXX package: 
    org.apache.distributedlog.exceptions.BKTransmitException: Failed to transmit entry : -6
    ------
    Caused by: org.apache.distributedlog.exceptions.BKTransmitException: Failed to write to bookkeeper; Error is (-6) Not enough non-faulty bookies available: -6
    All my bookies are healthy. Do you have any ideas?
  • x

    Xianle Wang

    11/25/2025, 11:07 PM
    Hello! I’m new to Pulsar and I’d like to build a MapReduce-like pipeline using Pulsar Functions in Python: producer -> (submit raw messages without key) -> Topic A -> (Shared subscription) -> “Key Function” to assign a key to each record -> (submit messages with keys) -> Topic B -> (key_shared subscription) -> “Compute Function” to process records. The key requirements are: 1. Records with the same key should be processed in sequence and in micro-batches 2. Different keys should be processed in parallel 3. Low end-to-end latency: around 100ms 4. High throughput: 1 million messages/second. Questions: 1. Can the “Compute Function” process a (micro) batch of records at a time? It seems like Pulsar Functions don’t support batch input. 2. If batch input is not supported, is it a common/good practice to buffer the inputs inside the function instance? What’s the latency like to persist every record (1KB) to BookKeeper? 3. Is it possible to create multiple threads within a function instance to handle different keys, or is that an anti-pattern? Thanks!
    👀 1
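    On question 2: since a function's process() is invoked once per record, buffering inside the instance is the usual way to approximate micro-batches. A per-key buffer sketch (pure logic, no Pulsar dependency; the flush threshold is a made-up example, and a real function would also flush on a timer to bound latency):

```python
from collections import defaultdict

class MicroBatcher:
    """Buffers records per key and releases a batch once max_size is reached."""
    def __init__(self, max_size=3):
        self.max_size = max_size
        self.buf = defaultdict(list)

    def add(self, key, record):
        self.buf[key].append(record)
        if len(self.buf[key]) >= self.max_size:
            batch, self.buf[key] = self.buf[key], []
            return batch   # batch preserves per-key arrival order
        return None        # keep buffering

b = MicroBatcher(max_size=2)
assert b.add("k1", "a") is None
assert b.add("k2", "x") is None
assert b.add("k1", "b") == ["a", "b"]
```

    Ordering per key is preserved because each key's list is appended in arrival order and flushed as a whole.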
  • s

    Stanislaw Makarawec

    11/26/2025, 3:49 PM
    Hello. We are experiencing an issue with subscription replication. Pulsar version 4.0.7. We have two clusters: a primary production cluster and a secondary backup stand-by cluster. Subscription replication is enabled. At some point, a subscription stops replicating, and backlog begins to accumulate on the remote cluster (topic replication itself continues normally). Around a hundred topics are being replicated simultaneously, but the issue occurs only on one or a few of them. In some cases, the only workaround is to disable replication for the affected topic, clear the backlog on the remote cluster, and then re-enable replication. Sometimes a portion of updates still manages to replicate. There are no errors or warnings in the logs. What could be causing this issue? What additional information would be useful? In the topic stats for the subscription, the status shows replicated: true. Thanks!
  • g

    Glenn Glazer

    12/02/2025, 7:12 PM
    Copy code
    I have a simple script attached. Both the test host and the pulsar cluster are in the same Azure resource group.
    
    The producer can connect fine and send a message no problem. I can verify this with pulsar-admin:
    
    pulsar-broker-0:/pulsar$ pulsar-admin topics examine-messages -i earliest <persistent://public/default/test-topic-1764699674>
    Message ID: 8:0
    Publish time: 1764699674223
    Event time: 0
             +-------------------------------------------------+
             |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
    +--------+-------------------------------------------------+----------------+
    |00000000| 66 69 72 73 74 20 6d 65 73 73 61 67 65 20 66 6f |first message fo|
    |00000010| 72 20 74 6f 70 69 63 20 74 65 73 74 2d 74 6f 70 |r topic test-top|
    |00000020| 69 63 2d 31 37 36 34 36 39 39 36 37 34          |ic-1764699674   |
    +--------+-------------------------------------------------+----------------+
    
    Then the consumer can connect and create a subscription:
    
    pulsar-broker-0:/pulsar$ pulsar-admin topics subscriptions <persistent://public/default/test-topic-1764699674>
    test-subscription-1764699674
    
    But it can't read because it times out.
    
    Consumer log, note the error after the consumer is created:
    ============================================================
    Apache Pulsar Basic Functionality Test
    Started at: 2025-12-02T18:21:14.152135
    ============================================================
    Topic: public/default/test-topic-1764699674
    Subscription: test-subscription-1764699674
    [     0ms] Created topic and subscription names
    [    20ms] Created Pulsar client
    2025-12-02 18:21:14.172 INFO  [139864562563968] ClientConnection:193 | [<none> -> <pulsar://pulsar-broker-1.pulsar-broker.pulsar-poc.svc.cluster.local:6650>] Create ClientConnection, timeout=10000
    2025-12-02 18:21:14.172 INFO  [139864562563968] ConnectionPool:124 | Created connection for <pulsar://pulsar-broker-1.pulsar-broker.pulsar-poc.svc.cluster.local:6650>-<pulsar://pulsar-broker-1.pulsar-broker.pulsar-poc.svc.cluster.local:6650>-0
    2025-12-02 18:21:14.183 INFO  [139864529450688] ClientConnection:410 | [10.20.3.78:56274 -> 10.20.4.29:6650] Connected to broker
    2025-12-02 18:21:14.188 INFO  [139864529450688] HandlerBase:115 | [<persistent://public/default/test-topic-1764699674>, ] Getting connection from pool
    2025-12-02 18:21:14.188 INFO  [139864529450688] BinaryProtoLookupService:85 | Lookup response for <persistent://public/default/test-topic-1764699674>, lookup-broker-url <pulsar://pulsar-broker-1.pulsar-broker.pulsar-poc.svc.cluster.local:6650>, from [10.20.3.78:56274 -> 10.20.4.29:6650]
    2025-12-02 18:21:14.188 INFO  [139864529450688] ProducerImpl:147 | Creating producer for topic:<persistent://public/default/test-topic-1764699674>, producerName: on [10.20.3.78:56274 -> 10.20.4.29:6650]
    2025-12-02 18:21:14.222 INFO  [139864529450688] ProducerImpl:220 | [<persistent://public/default/test-topic-1764699674>, ] Created producer on broker [10.20.3.78:56274 -> 10.20.4.29:6650]
    2025-12-02 18:21:14.222 INFO  [139864529450688] HandlerBase:138 | Finished connecting to broker after 34 ms
    [    50ms] Created producer
    [    28ms] Sent message: 'first message for topic test-topic-1764699674'
    2025-12-02 18:21:14.250 INFO  [139864562563968] Client:86 | Subscribing on Topic :public/default/test-topic-1764699674
    2025-12-02 18:21:14.251 INFO  [139864529450688] HandlerBase:115 | [<persistent://public/default/test-topic-1764699674>, test-subscription-1764699674, 0] Getting connection from pool
    2025-12-02 18:21:14.252 INFO  [139864529450688] BinaryProtoLookupService:85 | Lookup response for <persistent://public/default/test-topic-1764699674>, lookup-broker-url <pulsar://pulsar-broker-1.pulsar-broker.pulsar-poc.svc.cluster.local:6650>, from [10.20.3.78:56274 -> 10.20.4.29:6650]
    2025-12-02 18:21:14.260 INFO  [139864529450688] ConsumerImpl:311 | [<persistent://public/default/test-topic-1764699674>, test-subscription-1764699674, 0] Created consumer on broker [10.20.3.78:56274 -> 10.20.4.29:6650]
    2025-12-02 18:21:14.260 INFO  [139864529450688] HandlerBase:138 | Finished connecting to broker after 8 ms
    [    10ms] Created consumer
    ERROR: Timeout while trying to receive message: Pulsar error: TimeOut 
    2025-12-02 18:21:19.260 INFO  [139864562563968] ConsumerImpl:1328 | [<persistent://public/default/test-topic-1764699674>, test-subscription-1764699674, 0] Closing consumer for topic <persistent://public/default/test-topic-1764699674>
    2025-12-02 18:21:19.261 INFO  [139864529450688] ConsumerImpl:1312 | [<persistent://public/default/test-topic-1764699674>, test-subscription-1764699674, 0] Closed consumer 0
    Consumer closed
    2025-12-02 18:21:19.261 INFO  [139864562563968] ProducerImpl:800 | [<persistent://public/default/test-topic-1764699674>, pulsar-1-7] Closing producer for topic <persistent://public/default/test-topic-1764699674>
    2025-12-02 18:21:19.261 INFO  [139864529450688] ProducerImpl:764 | [<persistent://public/default/test-topic-1764699674>, pulsar-1-7] Closed producer 0
    Producer closed
    2025-12-02 18:21:19.261 INFO  [139864562563968] ClientImpl:665 | Closing Pulsar client with 0 producers and 0 consumers
    2025-12-02 18:21:19.261 INFO  [139864504272576] ClientConnection:1336 | [10.20.3.78:56274 -> 10.20.4.29:6650] Connection disconnected (refCnt: 1)
    2025-12-02 18:21:19.261 INFO  [139864504272576] ClientConnection:282 | [10.20.3.78:56274 -> 10.20.4.29:6650] Destroyed connection to <pulsar://pulsar-broker-1.pulsar-broker.pulsar-poc.svc.cluster.local:6650>-0
    Client closed
    2025-12-02 18:21:19.261 INFO  [139864562563968] ProducerImpl:757 | Producer - [<persistent://public/default/test-topic-1764699674>, pulsar-1-7] , [batching  = off]
    
    Relevant broker logs:
    2025-12-02T18:21:14,186+0000 [pulsar-io-3-25] INFO  org.apache.pulsar.broker.service.ServerCnx - [/10.20.3.78:56274] connected with clientVersion=Pulsar-CPP-v3.7.2, clientProtocolVersion=20, proxyVersion=null
    2025-12-02T18:21:14,193+0000 [ForkJoinPool.commonPool-worker-1651] INFO  org.apache.bookkeeper.mledger.impl.ManagedLedgerImpl - Opening managed ledger public/default/persistent/test-topic-1764699674
    2025-12-02T18:21:14,197+0000 [bookkeeper-ml-scheduler-OrderedScheduler-11-0] INFO  org.apache.bookkeeper.mledger.impl.MetaStoreImpl - Creating '/managed-ledgers/public/default/persistent/test-topic-1764699674'
    2025-12-02T18:21:14,205+0000 [metadata-store-10-1] INFO  org.apache.bookkeeper.client.BookieWatcherImpl - New ensemble: [pulsar-bookie-1.pulsar-bookie.pulsar-poc.svc.cluster.local:3181, pulsar-bookie-0.pulsar-bookie.pulsar-poc.svc.cluster.local:3181] is not adhering to Placement Policy. quarantinedBookies: []
    2025-12-02T18:21:14,210+0000 [ForkJoinPool.commonPool-worker-1-EventThread] INFO  org.apache.bookkeeper.client.LedgerCreateOp - Ensemble: [pulsar-bookie-1.pulsar-bookie.pulsar-poc.svc.cluster.local:3181, pulsar-bookie-0.pulsar-bookie.pulsar-poc.svc.cluster.local:3181] for ledger: 8
    2025-12-02T18:21:14,210+0000 [BookKeeperClientWorker-OrderedExecutor-11-0] INFO  org.apache.bookkeeper.mledger.impl.ManagedLedgerImpl - [public/default/persistent/test-topic-1764699674] Created ledger 8 after closed null
    2025-12-02T18:21:14,217+0000 [bookkeeper-ml-scheduler-OrderedScheduler-11-0] INFO  org.apache.bookkeeper.mledger.impl.ManagedLedgerFactoryImpl - [public/default/persistent/test-topic-1764699674] Successfully initialize managed ledger
    2025-12-02T18:21:14,218+0000 [broker-topic-workers-OrderedExecutor-0-0] INFO  org.apache.pulsar.broker.service.BrokerService - Created topic <persistent://public/default/test-topic-1764699674> - dedup is disabled (latency: 28 ms)
    2025-12-02T18:21:14,223+0000 [broker-topic-workers-OrderedExecutor-4-0] INFO  org.apache.pulsar.broker.service.ServerCnx - [/10.20.3.78:56274] Created new producer: Producer{topic=PersistentTopic{topic=<persistent://public/default/test-topic-1764699674>}, client=[id: 0x244e0302, L:/10.20.4.29:6650 - R:/10.20.3.78:56274] [SR:10.20.3.78, state:Connected], producerName=pulsar-1-7, producerId=0}, role: null
    2025-12-02T18:21:14,253+0000 [pulsar-io-3-25] INFO  org.apache.pulsar.broker.service.ServerCnx - [[id: 0x244e0302, L:/10.20.4.29:6650 - R:/10.20.3.78:56274] [SR:10.20.3.78, state:Connected]] Subscribing on topic <persistent://public/default/test-topic-1764699674> / test-subscription-1764699674. consumerId: 0, role: null
    2025-12-02T18:21:14,253+0000 [ForkJoinPool.commonPool-worker-1651] INFO  org.apache.bookkeeper.mledger.impl.ManagedCursorImpl - [public/default/persistent/test-topic-1764699674] Cursor test-subscription-1764699674 recovered to position 8:0
    2025-12-02T18:21:14,260+0000 [bookkeeper-ml-scheduler-OrderedScheduler-11-0] INFO  org.apache.bookkeeper.mledger.impl.ManagedLedgerImpl - [public/default/persistent/test-topic-1764699674] Opened new cursor: ManagedCursorImpl{ledger=public/default/persistent/test-topic-1764699674, name=test-subscription-1764699674, ackPos=8:0, readPos=8:1}
    2025-12-02T18:21:14,260+0000 [bookkeeper-ml-scheduler-OrderedScheduler-11-0] INFO  org.apache.bookkeeper.mledger.impl.ManagedCursorImpl - [public/default/persistent/test-topic-1764699674-test-subscription-1764699674] Rewind from 8:1 to 8:1
    2025-12-02T18:21:14,260+0000 [bookkeeper-ml-scheduler-OrderedScheduler-11-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - backlog for <persistent://public/default/test-topic-1764699674> - 0
    2025-12-02T18:21:14,260+0000 [bookkeeper-ml-scheduler-OrderedScheduler-11-0] INFO  org.apache.pulsar.broker.service.ServerCnx - [/10.20.3.78:56274] Created subscription on topic <persistent://public/default/test-topic-1764699674> / test-subscription-1764699674
    2025-12-02T18:21:19,261+0000 [pulsar-io-3-25] INFO  org.apache.pulsar.broker.service.ServerCnx - [/10.20.3.78:56274] Closing consumer: consumerId=0
    2025-12-02T18:21:19,261+0000 [pulsar-io-3-25] INFO  org.apache.pulsar.broker.service.AbstractDispatcherSingleActiveConsumer - Removing consumer Consumer{subscription=PersistentSubscription{topic=<persistent://public/default/test-topic-1764699674>, name=test-subscription-1764699674}, consumerId=0, consumerName=9ba66a395e, address=[id: 0x244e0302, L:/10.20.4.29:6650 - R:/10.20.3.78:56274] [SR:10.20.3.78, state:Connected]}
    2025-12-02T18:21:19,261+0000 [pulsar-io-3-25] INFO  org.apache.pulsar.broker.service.ServerCnx - [/10.20.3.78:56274] Closed consumer, consumerId=0
    2025-12-02T18:21:19,262+0000 [pulsar-io-3-25] INFO  org.apache.pulsar.broker.service.ServerCnx - [PersistentTopic{topic=<persistent://public/default/test-topic-1764699674}][pulsar-1-7>] Closing producer on cnx /10.20.3.78:56274. producerId=0
    2025-12-02T18:21:19,262+0000 [pulsar-io-3-25] INFO  org.apache.pulsar.broker.service.ServerCnx - [PersistentTopic{topic=<persistent://public/default/test-topic-1764699674}][pulsar-1-7>] Closed producer on cnx /10.20.3.78:56274. producerId=0
    2025-12-02T18:21:19,262+0000 [pulsar-io-3-25] INFO  org.apache.pulsar.broker.service.ServerCnx - Closed connection from /10.20.3.78:56274
    2025-12-02T18:22:56,867+0000 [pulsar-io-3-27] INFO  org.apache.pulsar.broker.service.ServerCnx - [/10.20.3.78:59534] connected with clientVersion=Pulsar-CPP-v3.7.2, clientProtocolVersion=20, proxyVersion=null
    2025-12-02T18:23:02,569+0000 [pulsar-io-3-27] INFO  org.apache.pulsar.broker.service.ServerCnx - Closed connection from /10.20.3.78:59534
    
    Any assistance in diagnosing this would be appreciated. Otherwise, this is a full stop for us in terms of our evaluation of migrating from Kafka to Pulsar.
    
    Best,
    
    Glenn
    pulsar_basic_test.py
  • d

    DANIEL STRAUGHAN

    12/02/2025, 7:37 PM
    Are there plans to move from jclouds for the offloading solution since it is EOL?
  • g

    Glenn Glazer

    12/03/2025, 12:37 AM
    So, building on the previous message, I tried to stress test our setup a little. The basic idea was to create a thousand topics, have a producer for each, write a thousand messages to each topic, and read them back. The serial version ran for over two hours before the OOM killer killed it. So I tried parallelizing it, ten producer workers at a time, each writing to its own topic. It stalled on the writes for a while and then said:
    Copy code
    WARNING:pulsar:[10.20.3.78:34944 -> 10.20.2.147:6650] Received send error from server: org.apache.bookkeeper.mledger.ManagedLedgerException: Not enough non-faulty bookies available error code: -6
    WARNING:pulsar:[10.20.3.78:34960 -> 10.20.2.147:6650] Received send error from server: org.apache.bookkeeper.mledger.ManagedLedgerException: Not enough non-faulty bookies available error code: -6
    over and over until I interrupted it. There are four bookies in this cluster. Thoughts welcome.
    pulsar_load_test_serial_1000.py pulsar_load_test_parallel_1000.py
  • s

    sindhushree

    12/04/2025, 8:45 AM
    @Lari Hotari "[improve][broker] Recover subscription creation on the broken schema ledger topic by rdhabalia · Pull Request #22469 · apache/pulsar": is this PR not available in 3.0.x? We are having an issue: Failed to enable deduplication: org.apache.bookkeeper.mledger.ManagedLedgerException$LedgerNotExistException: No such ledger exists on Bookies. Also, is there any plan to fix the following issue: [Bug] Error while reading ledger - ledger=13 - operation=Failed to read entry · Issue #23493 · apache/pulsar?