# general
  • s

    sindhushree

    07/07/2025, 1:42 PM
    Hi All, in one of our deployments we have 3 partitioned topics and Java producer code publishing to them. When I check the producer stats I see the output below: for the same message volume, two partitions show much higher latency, which I think is why their pending-message counts keep growing.
    Copy code
    2025-07-04 124036.832 [pulsar-timer-6-1] INFO org.apache.pulsar.client.impl.ProducerStatsRecorderImpl - [persistent://public/default/-partition-1] [-175-224582] Pending messages: 15 --- Publish throughput: 66.55 msg/s --- 0.49 Mbit/s --- Latency: med: 3000.000 ms - 95pct: 4741.000 ms - 99pct: 5997.000 ms - 99.9pct: 6000.000 ms - max: 6404.000 ms --- BatchSize: med: 2.000 - 95pct: 65.000 - 99pct: 68.000 - 99.9pct: 69.000 - max: 69.000 --- MsgSize: med: 1930.000 bytes - 95pct: 64713.000 bytes - 99pct: 65368.000 bytes - 99.9pct: 65465.000 bytes - max: 65465.000 bytes --- Ack received rate: 69.33 ack/s --- Failed messages: 0 --- Pending messages: 15
    [persistent://public/default/-partition-0] [-175-224582] Pending messages: 88 --- Publish throughput: 103.66 msg/s --- 0.76 Mbit/s --- Latency: med: 2735.000 ms - 95pct: 5846.000 ms - 99pct: 5998.000 ms - 99.9pct: 6753.000 ms - max: 6753.000 ms --- BatchSize: med: 3.000 - 95pct: 59.000 - 99pct: 69.000 - 99.9pct: 69.000 - max: 69.000 --- MsgSize: med: 3066.000 bytes - 95pct: 57746.000 bytes - 99pct: 65327.000 bytes - 99.9pct: 65510.000 bytes - max: 65510.000 bytes --- Ack received rate: 110.80 ack/s --- Failed messages: 0 --- Pending messages: 88
    2025-07-04 124036.833 [pulsar-timer-6-1] INFO org.apache.pulsar.client.impl.ProducerStatsRecorderImpl - [persistent://public/default/-partition-2] [-175-224582] Pending messages: 0 --- Publish throughput: 105.25 msg/s --- 0.78 Mbit/s --- Latency: med: 4.000 ms - 95pct: 252.000 ms - 99pct: 261.000 ms - 99.9pct: 430.000 ms - max: 961.000 ms --- BatchSize: med: 3.000 - 95pct: 63.000 - 99pct: 69.000 - 99.9pct: 69.000 - max: 69.000 --- MsgSize: med: 2878.000 bytes - 95pct: 59276.000 bytes - 99pct: 65241.000 bytes - 99.9pct: 65469.000 bytes - max: 65528.000 bytes --- Ack received rate: 105.25 ack/s --- Failed messages: 0 --- Pending messages: 0
    We have 4 brokers running, but the topic is still not evenly distributed: partition-1 and partition-0 land on the same broker, and neither topic unload nor namespace unload redistributes the topics across brokers. We are using the threshold shedder and running Pulsar 2.10.x.
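    For readers hitting the same imbalance: the partition-to-broker assignment can be inspected, and a single bundle unloaded, with pulsar-admin. This is a sketch only; the topic name and bundle range below are placeholders to be replaced with real values from your cluster.

```shell
# Which broker owns each partition? (topic name is a placeholder)
pulsar-admin topics lookup persistent://public/default/my-topic-partition-0
pulsar-admin topics lookup persistent://public/default/my-topic-partition-1
pulsar-admin topics lookup persistent://public/default/my-topic-partition-2

# Find the bundle a hot partition belongs to...
pulsar-admin topics bundle-range persistent://public/default/my-topic-partition-1

# ...and unload just that bundle (paste the range printed above)
pulsar-admin namespaces unload public/default --bundle 0x40000000_0x80000000
```

    Unloading only the bundle that owns the hot partition is less disruptive than unloading the whole namespace, though where it lands afterwards still depends on the load manager's placement decision.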
  • k

    kailevy

    07/09/2025, 12:29 AM
    Hey, curious if anybody who is familiar with Entry Filters has ideas on why I’m observing higher bookie CPU while the filter is active, despite the fact that the filter runs on the broker: https://github.com/apache/pulsar/discussions/24493
  • j

    Jeroen van der Wal

    07/09/2025, 1:55 PM
    We want external partners to be able to consume topics. These partners are outside our corporate network boundaries, and our policies currently prevent using the Pulsar or Kafka protocol over the public internet. I found Pulsar Beam [1], which seems to do the job (but looks abandoned). Who is using this? Or has a different approach? [1] https://github.com/kafkaesque-io/pulsar-beam Thanks, Jeroen
  • n

    Nikolas Petrou

    07/13/2025, 6:28 AM
    Hello everyone. I created a JDBC sink for MySQL with this configuration
    Copy code
    tenant: public
    namespace: default
    name: jdbc_sink_pulsar_to_mysql_temp
    archive: connectors/pulsar-io-jdbc-sqlite-4.0.0.nar
    inputs:
      - <persistent://public/default/temp_schema>
    configs:
      jdbcUrl: "jdbc:<mysql://mysql:3306/mqtt_db>"
      userName: "user1"
      password: "1234567890"
      tableName: "pulsar_to_db_temp"
      insertMode: INSERT
      key: "message_id"  
      nonKey: "temperature,timestamp,pulsar_timestamp"
    and I mount the connector under the connectors folder like so
    - ./pulsar-mysql/pulsar-io-jdbc-sqlite-4.0.0.nar:/pulsar/connectors/pulsar-io-jdbc-sqlite-4.0.0.nar
    but i get this error
    Copy code
    ERROR org.apache.pulsar.functions.instance.JavaInstanceRunnable - Sink open produced uncaught exception:
    java.sql.SQLException: No suitable driver found for jdbc:<mysql://mysql:3306/mqtt_db>
            at java.sql.DriverManager.getConnection(Unknown Source) ~[java.sql:?]
            at java.sql.DriverManager.getConnection(Unknown Source) ~[java.sql:?]
            at org.apache.pulsar.io.jdbc.JdbcAbstractSink.open(JdbcAbstractSink.java:97) ~[pulsar-io-jdbc-core-4.0.0.jar:?]
            at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupOutput(JavaInstanceRunnable.java:1080) ~[?:?]
            at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setup(JavaInstanceRunnable.java:263) ~[?:?]
            at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:313) ~[?:?]
            at java.lang.Thread.run(Unknown Source) [?:?]
    Am I using the wrong connector? Or am I missing a configuration?
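    Two things stand out in the report above, offered as guesses only: the archive is the SQLite connector NAR, which bundles only the SQLite driver (hence "No suitable driver found" for a MySQL URL), and the jdbcUrl contains literal angle brackets (possibly Slack link artifacts, but fatal if they are really in the file). A sketch of a config that is more likely to work against MySQL, assuming the MariaDB JDBC connector NAR, whose driver can usually talk to MySQL servers:

```yaml
tenant: public
namespace: default
name: jdbc_sink_pulsar_to_mysql_temp
# The MariaDB connector NAR bundles a driver compatible with MySQL;
# the SQLite NAR bundles only the SQLite driver.
archive: connectors/pulsar-io-jdbc-mariadb-4.0.0.nar
inputs:
  - persistent://public/default/temp_schema
configs:
  # note: no angle brackets around the URL
  jdbcUrl: "jdbc:mariadb://mysql:3306/mqtt_db"
  userName: "user1"
  password: "1234567890"
  tableName: "pulsar_to_db_temp"
  insertMode: INSERT
  key: "message_id"
  nonKey: "temperature,timestamp,pulsar_timestamp"
```

    The NAR filename and URL scheme above are assumptions to verify against your Pulsar distribution's connectors directory.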
  • t

    Tyler Tafoya

    07/16/2025, 6:32 PM
    Good afternoon! I have Pulsar 4.0.4 deployed and am experiencing an issue where BookKeeper is leaving some orphaned ledgers around. At first this happened while offloading to S3 - my general configuration was to offload every 15 minutes, with a deletionLag of 0. (I've tried tweaking the offload settings many times, but my bookie PVCs kept filling because of these orphaned ledgers.) Thinking it might be an issue with offloading, I switched to disabling offloading, increasing ledger volumes, and just setting a shorter retention time. When I look in /pulsar/data/bookkeeper/ledger0/current I see .log files days older than my retention period. While researching this issue, I came across [PCK](https://docs.streamnative.io/private-cloud/v1/tools/pck/pck-overview), which seems to confirm the issue, as it aims to mitigate the problem. It does not look to be publicly available though - are there any alternative solutions? Thanks in advance!
  • s

    sagar

    07/17/2025, 10:59 AM
    Context: • ordering is required • we partition for throughput • we use the (non-reactive) Pulsar client • listener threads are shared across the internal queues in a blocking fashion • the workload is mostly I/O. Question: to get higher throughput I can increase the listener thread count so that the internal queues don't block each other, but partitions and topics will keep increasing, so there is no end to that. How can the Pulsar reactive client help with this? I want high throughput with a small number of threads. How does it overcome the blocking listener behaviour of the native Pulsar client? I am aware of pipelining, but it seems to conflict with partitions.
  • t

    tcolak

    07/26/2025, 2:33 PM
    Hello everyone, I have a Pulsar cluster with 5 brokers (each broker is on the same server as a bookie) and 3 ZooKeeper servers. Every day during the backup process, the bookie servers shut themselves down. What can I do to prevent this? Also, is it necessary to back up the bookies, or would backing up only the ZooKeepers be sufficient?
  • t

    Thomas MacKenzie

    07/29/2025, 4:00 AM
    How reliable is the dispatch-rate data (the values set, not the dispatch feature itself) when the brokers restart? On the application side, the consumers check for specific values and stop if they are set. I'm currently using the dispatch rate on the topic with a pre-defined value to "pause" the consumers (stop them from reading any new messages). The dispatch rate is set (with the pulsar-admin CLI) to 1 for msg-dispatch-rate and 315600000 for dispatch-rate-period, which lowers the message throughput as much as possible (I'm aware we can't actually pause message throughput with the dispatch rate). But I noticed that the dispatch-rate values returned by the admin client on the application side are empty while the brokers restart, as if they were not set. The consequence is that the consumers are unpaused, only to be re-paused once the brokers are done restarting. I just wanted to know if this is expected behavior? I was assuming that this type of data, like topic properties, was distributed (stored in ZooKeeper) and not impacted by broker restarts.
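    For context, the pause trick described above can be reproduced and checked with pulsar-admin; the topic name is a placeholder. get-dispatch-rate is a quick way to observe whether the policy is still visible during a broker restart:

```shell
# Throttle the topic to ~1 message per ten-year period,
# effectively pausing dispatch
pulsar-admin topics set-dispatch-rate persistent://public/default/my-topic \
  --msg-dispatch-rate 1 \
  --dispatch-rate-period 315600000

# Read the policy back; empty/default output here during a broker
# restart would match the behaviour described above
pulsar-admin topics get-dispatch-rate persistent://public/default/my-topic
```
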
  • k

    kazeem

    07/30/2025, 11:08 PM
    I am currently having an issue with the Pulsar configuration on our Kubernetes cluster (Exoscale SKS). After the installation, only the ZooKeeper and toolset pods are running; the others are stuck at Init:0/2 PodInitializing. I set all replicas to one and disabled monitoring, and the environment is set to use fewer than 3 nodes. All storage classes were assigned as well, and everything bound successfully.
  • l

    Lari Hotari

    07/31/2025, 3:03 PM
    📣 [ANNOUNCE] Apache Pulsar 3.0.13, 3.3.8 and 4.0.6 released 📣 For Pulsar release details and downloads, visit: https://pulsar.apache.org/download Release Notes are at: 3.0.13: https://pulsar.apache.org/release-notes/versioned/pulsar-3.0.13/ 3.3.8: https://pulsar.apache.org/release-notes/versioned/pulsar-3.3.8/ 4.0.6: https://pulsar.apache.org/release-notes/versioned/pulsar-4.0.6/ (Current LTS release) Although 3.3.8 was released, please note that support for 3.3.x has already ended and no new releases are planned.
  • s

    Shresht Jain

    08/02/2025, 6:43 PM
    Hi everyone, I’ve been working with Apache Pulsar recently and truly admire how far it has come. I’m exploring ways to build tools and services around Pulsar in 2025, focused on growing adoption in new regions and smaller teams. From your experience, how is the demand for Pulsar evolving in 2025 compared to previous years? Are you seeing more interest from companies moving away from Kafka or adopting Pulsar from scratch? I’d love to understand where the ecosystem is heading. Any trends, use cases, or growth signals would be super helpful to know as someone building on top of Pulsar. Thanks in advance
  • j

    Jeffrey Tan

    08/04/2025, 8:36 AM
    Hi, does anyone know how to extract data from BookKeeper? I need to query messages. With version 3 there was Trino, so I could use SQL, but in version 4 Trino has been removed.
  • a

    Alain Pigeon

    08/04/2025, 10:15 PM
    Hi. I'm running Pulsar 3.2.x in a Kubernetes cluster:
    Copy code
    [assure1@pg-apigeon4 ~]$ a1k -n messaging get pods
    NAME                       READY   STATUS      RESTARTS   AGE
    pulsar-bookie-0            1/1     Running     0          23h
    pulsar-broker-0            1/1     Running     0          23h
    pulsar-proxy-0             1/1     Running     0          23h
    pulsar-recovery-0          1/1     Running     0          23h
    pulsar-toolset-0           1/1     Running     0          23h
    pulsar-zookeeper-0         1/1     Running     0          23h
    If I restart the node (i.e. delete all the pods), then pulsar-recovery, pulsar-broker, pulsar-bookie and pulsar-proxy are all stuck in Init:
    Copy code
    [assure1@pg-apigeon4 ~]$ a1k -n messaging get pod
    NAME                 READY   STATUS     RESTARTS   AGE
    pulsar-bookie-0      0/1     Init:0/1   0          77s
    pulsar-broker-0      0/1     Init:0/2   0          77s
    pulsar-proxy-0       0/1     Init:0/3   0          77s
    pulsar-recovery-0    0/1     Init:0/1   0          77s
    pulsar-toolset-0     1/1     Running    0          77s
    pulsar-zookeeper-0   1/1     Running    0          77s
    If I look at the pulsar-bookkeeper-verify-clusterid container log from the pulsar-recovery-0 pod, I notice the error:
    Copy code
    ERROR org.apache.bookkeeper.discover.ZKRegistrationManager - BookKeeper metadata doesn't exist in zookeeper. Has the cluster been initialized? Try running bin/bookkeeper shell metaformat
    I can solve this issue by running the pulsar-bookie-init job. Then pulsar-recovery-0 starts, but the other pods (e.g. pulsar-broker-0) remain stuck at Init. The error in the wait-zookeeper-ready container of the pulsar-broker-0 pod:
    Copy code
    Node does not exist: /admin/clusters/pulsar
    2025-07-31T22:27:36,444+0000 [main] ERROR org.apache.zookeeper.util.ServiceUtils - Exiting JVM with code 1
    Similarly, if I run the pulsar-pulsar-init job, everything gets unstuck and all the pods start. Is there a way to automate this so I don't have to run the pulsar-pulsar-init and pulsar-bookie-init jobs manually? Running those jobs is a pain: I first need to get the YAML from Helm, delete the existing jobs, and apply the new jobs from the YAML I exported from Helm. Thanks.
  • t

    Thomas MacKenzie

    08/05/2025, 5:14 PM
    Is there a plan to bump the Pulsar Helm chart (from chart version 4.1.0 to 4.1.1, with application version 4.0.6) following the release of Pulsar 4.0.6?
  • l

    Lari Hotari

    08/06/2025, 8:23 AM
    We've just released Apache Pulsar Helm Chart 4.2.0 🎉 The official source release, as well as the binary Helm Chart release, are available at https://downloads.apache.org/pulsar/helm-chart/4.2.0/. The helm chart index at https://pulsar.apache.org/charts/ has been updated and the release is also available directly via helm. Release Notes: https://github.com/apache/pulsar-helm-chart/releases/tag/pulsar-4.2.0 Docs: https://github.com/apache/pulsar-helm-chart#readme and https://pulsar.apache.org/docs/helm-overview ArtifactHub: https://artifacthub.io/packages/helm/apache/pulsar/4.2.0 Thanks to all the contributors who made this possible.
  • f

    Filip

    08/08/2025, 3:53 PM
    I'm looking for advice on production-grade E2E encryption. I've attempted to use AWS KMS, but it requires a new network call per message to decrypt, which would drastically decrease performance, so I'm not in favour of it. The other option would be to store the key pair in AWS Secrets Manager, pull it into the service memory, and use it for decryption. Any other ideas?
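    One option worth noting here: Pulsar's built-in client-side E2E encryption uses envelope encryption (each payload is encrypted with a symmetric session key, and only that session key is wrapped with the asymmetric key pair), so the private key can sit in service memory, pulled once from Secrets Manager, with no per-message network call. A sketch with the Python client; the key paths, topic, subscription, and key name are placeholders, not a definitive setup:

```python
import pulsar

# Key pair fetched once at startup (e.g. from AWS Secrets Manager)
# and written to local paths; the paths here are placeholders.
key_reader = pulsar.CryptoKeyReader("/keys/pub.pem", "/keys/priv.pem")

client = pulsar.Client("pulsar://localhost:6650")

# Producer encrypts every payload under the key named "app-key"
producer = client.create_producer(
    "persistent://public/default/secure-topic",
    encryption_key="app-key",
    crypto_key_reader=key_reader,
)

# Consumer decrypts locally in-process; no KMS round trip per message
consumer = client.subscribe(
    "persistent://public/default/secure-topic",
    "secure-sub",
    crypto_key_reader=key_reader,
)
```

    This sketch assumes a reachable broker at localhost:6650 and key files on disk, so it is illustrative rather than runnable as-is.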
  • t

    Thomas MacKenzie

    08/08/2025, 11:47 PM
    We are running into issues with message duplicates. So far what I've witnessed is happening with the RLQ messages; not sure about first-time messages yet. Sometimes we have multiple messages for all the retry counts (2 ones, 2 twos...) or just for some. It's also not consistent; I do see some clean sequences with the first message, then retries 1 to 10 (RLQ), then the message sent to the DLQ. In case of a failing message, what we do is: • first use the client consumer ReconsumeLaterWithCustomProperties() to send to the RLQ (a different topic) • Nack() the retries • send the message to the DLQ, still using the built-in Nack() consumer method. The issue is very inconsistent (we don't have a lot of errors in the system), but I've noticed a few instances so far; one had 25 retries (instead of 10). Investigation: • I looked at the broker config; I was aware there is a limit on the producer count for dedup, but we are well under 10,000 per broker.
    Copy code
    pulsar-broker-3:/pulsar$ cat conf/broker.conf | grep brokerDedup
    brokerDeduplicationEnabled=true
    brokerDeduplicationMaxNumberOfProducers=10000
    brokerDeduplicationSnapshotFrequencyInSeconds=120
    # It will run simultaneously with `brokerDeduplicationEntriesInterval`
    brokerDeduplicationSnapshotIntervalSeconds=120
    brokerDeduplicationEntriesInterval=1000
    brokerDeduplicationProducerInactivityTimeoutMinutes=360
    So my question is: is there anything that could lead to duplicate RLQ messages in the system? I looked at our implementation and don't see anything suspicious so far, but I wanted to ask in case it's a known issue or there is something I should look into specifically. Pulsar 4.0.5 / Go client 0.16.0, but the issue was also present in prior versions. Thank you
  • z

    Zach Blocker

    08/13/2025, 2:33 PM
    I'm looking for an update on a new feature that had been discussed: redelivery on nack where order of messages is guaranteed (i.e. not redelivering failed messages out of order). We have a use case where order of messages is more important than speed of delivery. In conversations with StreamNative close to a year ago, I understood that that capability was actively being planned, but I don't see a Pulsar blog post about it (or anything after October 2024). Can someone provide an update?
  • k

    KP

    08/18/2025, 6:04 PM
    👋 Could we update the documentation to mention that Key_Shared subscriptions require key-based batching when batching is enabled? Batching is enabled by default, so this can break very confusingly when consumers see overlapping partition keys. I spent an entire weekend assuming it was my Pulsar system configuration / code that was wrong, rather than something missing on the producer side causing the key overlaps seen across multiple consumers. https://pulsar.apache.org/docs/next/concepts-messaging/#key_shared
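    A toy model (not Pulsar's actual code) of why this bites: a batch is dispatched as a unit to the consumer selected by the batch's first key, so mixed-key batches leak keys onto the "wrong" consumer, while key-based batching keeps every batch single-key. The hash and batcher below are simplified stand-ins for Pulsar's hash-range routing and BatcherBuilder behaviour:

```python
def owner(key, consumers):
    # Stand-in for Pulsar's hash-range assignment of keys to consumers
    return ord(key) % consumers

def default_batches(msgs, size):
    # Default batcher: group in arrival order, ignoring keys
    return [msgs[i:i + size] for i in range(0, len(msgs), size)]

def key_based_batches(msgs, size):
    # Key-based batcher: only messages sharing a key share a batch
    by_key = {}
    for key, payload in msgs:
        by_key.setdefault(key, []).append((key, payload))
    out = []
    for group in by_key.values():
        out.extend(group[i:i + size] for i in range(0, len(group), size))
    return out

def misrouted_keys(batches, consumers):
    # A batch is delivered whole to the consumer owning its FIRST key;
    # report keys that end up on a consumer that doesn't own them.
    bad = set()
    for batch in batches:
        dest = owner(batch[0][0], consumers)
        bad.update(k for k, _ in batch if owner(k, consumers) != dest)
    return bad

msgs = [("a", 1), ("b", 2), ("a", 3), ("b", 4), ("a", 5), ("b", 6)]

print(misrouted_keys(default_batches(msgs, 2), consumers=2))    # -> {'b'}
print(misrouted_keys(key_based_batches(msgs, 2), consumers=2))  # -> set()
```

    In the real client the fix is selecting the key-based batcher on the producer (or disabling batching); the model above only illustrates why the default batcher produces the overlapping-key symptom.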
  • s

    sijieg

    08/18/2025, 8:48 PM
    🚀 Join us at Data Streaming Summit SF 2025! We're bringing the global data streaming community together Sep 29-30 at the Grand Hyatt at SFO, with talks from OpenAI, LinkedIn, Netflix, Uber, Google, Databricks and deep-dive tracks on Pulsar, Kafka, Flink, Iceberg, AI + streaming, and more. 💡 Special for the Pulsar community: use code PULSAR50 for 50% off registration. 👉 Register: https://www.eventbrite.com/e/data-streaming-summit-san-francisco-2025-tickets-1432401484399?aff=oddtdtcreator 📅 Full schedule: https://datastreaming-summit.org/event/data-streaming-sf-2025/schedule What to expect: • Sep 29 - Training & Workshop Day: hands-on data streaming training + advanced Streaming Lakehouse workshop (with AWS). • Sep 30 - Main Summit: inspiring keynotes + 4 tracks: Deep Dives, Use Cases, AI + Stream Processing, Streaming Lakehouse. • Talks from top companies and community sessions featuring Pulsar, Flink, Iceberg and other data streaming technologies. Would love to see you at the Summit! 🎉
  • s

    samriddhi

    08/19/2025, 9:19 PM
    Question: best practices for schema-agnostic Pulsar Functions? I want to write a generic Pulsar Function that can work with any input/output schema without hardcoding schema types. Current approach: • ✅ Input: using AUTO_CONSUME works great for reading any schema • ❌ Output: I need an exact schema match, but AUTO_PRODUCE doesn't exist. The challenge: to avoid static schemas, I need to get schema info from the Pulsar Admin API at runtime, but Pulsar Functions don't have access to PulsarAdmin (I'm getting errors when trying). Questions: 1. What's the recommended pattern for schema-agnostic functions? 2. How do I discover the output topic schema at runtime without Admin access? 3. Are there alternatives to runtime schema discovery for generic functions? Goal: one function that works with multiple topic pairs having different schemas (Avro→Avro, JSON→JSON, Avro→JSON, etc.) without recompiling. Has anyone solved this or know the best practices?
  • s

    samriddhi

    08/19/2025, 9:19 PM
    Please note that we have schema enforcement enabled.
  • d

    Dan Rossi

    08/20/2025, 7:44 PM
    Question: If I'm doing event sourcing, without a snapshot, how might I be able to load events for a given aggregate Id, to build aggregate state? I see that reader has the ability to use HashKeys, but as the db size grows, hashes will collide with other keys and load those objects back as well, which will slow performance. Is there a better way to do this that anyone is doing? I also see there's an option to copy events to a secondary storage. However, this could cause race conditions, because the events might not be there right away. Also it forces me to store twice as much data. Has anyone figured out the best way to handle this just using pulsar?
  • s

    Samuel

    08/21/2025, 11:22 AM
    We're currently using a single topic to publish messages intended for multiple customers. Since these are push-based messages, we want to avoid overloading customer servers by implementing rate limiting. Our idea is to introduce a message router that directs messages into customer-specific topics. This way, we can apply dispatch rate limiting on each individual topic, effectively controlling the push rate for each customer. The consumption rate from Apache Pulsar would then directly map to our push rate, ensuring we stay within safe limits. We expect to create approximately 30 customer-specific topics and are considering using a multi-topic consumer to handle them. Here are our key questions: 1. What limitations should we be aware of when using a multi-topic consumer in Pulsar? 2. How scalable is this approach? Is a multi-topic consumer suitable for handling around 30 topics? 3. What happens if we scale to 100 topics or more: does the multi-topic consumer model still hold up, or are there recommended alternatives at that scale?
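    For reference, the multi-topic consumption described above can be set up either from an explicit topic list or from a regex that picks up new customer topics as they are created. A sketch with the Python client; the service URL, topic names, and subscription name are placeholders:

```python
import re
import pulsar

client = pulsar.Client("pulsar://localhost:6650")

# Option 1: explicit list of customer topics
consumer = client.subscribe(
    ["persistent://public/default/customer-a",
     "persistent://public/default/customer-b"],
    "push-dispatcher",
    consumer_type=pulsar.ConsumerType.Shared,
)

# Option 2: regex subscription, so newly created customer-* topics
# are picked up without redeploying the consumer
consumer = client.subscribe(
    re.compile("persistent://public/default/customer-.*"),
    "push-dispatcher",
    consumer_type=pulsar.ConsumerType.Shared,
)
```

    This is illustrative only (it needs a reachable broker); note that per-topic dispatch-rate limits apply on the broker side regardless of which subscription style the consumer uses.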
  • t

    Thomas MacKenzie

    08/26/2025, 10:26 PM
    Are there any configs that, when set on the brokers, do not show up in conf/broker.conf by any chance? I've been applying various changes to the broker configuration with no issues, but I'm trying to set managedLedgerForceRecovery and it does not seem to work. It's not present or applied (when the container starts, the application logs each config field being applied). https://github.com/apache/pulsar/blob/master/pulsar-broker-common/src/main/java/org/apache/pulsar/broker/ServiceConfiguration.java#L2355-L2360 Looking at the config with cat conf/broker.conf | grep managedLedgerForceRecovery, I have no results either. I understand it's a dynamic configuration field, but others are too, and I can see them in that file set with the right value, so I'm wondering if I'm missing something? Pulsar 4.0.6. Thanks for your help
  • j

    Jack LaPlante

    08/27/2025, 6:34 PM
    Does anyone know how I can cut a new release of terraform-provider-pulsar? I have merged a PR into it that I would like to use. I also have this PR to update the docs which needs a review. cc: @Lari Hotari @Rui Fu
  • t

    Thomas MacKenzie

    08/28/2025, 3:24 AM
    What would be the best (most graceful) option for handling managed ledger exceptions? We recently had an outage (in a non-production environment) and this error showed up
    Copy code
    server error: PersistenceError: org.apache.bookkeeper.mledger.ManagedLedgerException: Error while recovering ledger error code: -10
    This error was preventing the applications from publishing messages and from creating producers. For 2h I could see the ledger count a bit off at 0 (not sure what happened, but I believe the bookies were up during that time). Some context: we believe the bookies were restarted (we use AWS spot instances in this env, so possibly an ungraceful shutdown). I have 2 main questions: • What would be the best course of action when this happens? (Curious about manual intervention, although that's not reactive with a system running 24/7.) • I know there are 2 broker fields available, managedLedgerForceRecovery and autoSkipNonRecoverableData. ◦ Could one of them help? (Do they serve the same purpose?) It seems like autoSkipNonRecoverableData should be avoided since it is part of the legacy codebase. ◦ Are they both destructive (permanent data loss)? I opened a PR to add managedLedgerForceRecovery to the broker conf; thanks for the info about the risks it involves, Lari. ◦ Is one better than the other? Thank you for your help
  • g

    Gaurav Ashok

    09/04/2025, 11:12 AM
    Hi. I wanted to get some advice on taming the ZK outstanding requests / broker latencies that we are seeing every 15 minutes. It is happening due to ModularLoadManagerImpl::writeBundleDataOnZooKeeper(). Pulsar: 3.0.12. Configs: ZK: MaxOutstandingRequests = 1000. Pulsar: loadBalancerReportUpdateMaxIntervalMinutes=15 loadBalancerResourceQuotaUpdateIntervalMinutes=15 metadataStoreBatchingEnabled=true metadataStoreBatchingMaxDelayMillis=5 metadataStoreBatchingMaxOperations=1000 metadataStoreBatchingMaxSizeKb=128 The ZK outstanding requests touch 1000 for about a minute every 15 minutes due to the LoadResourceQuotaUpdaterTask job on the leader. During this period, message production sees latencies of up to 20 seconds. There are ~100 brokers and bookies and >20K bundles. We are trying to evaluate what we can do to fix this in the short term: 1. Evaluating whether increasing MaxOutstandingRequests further can have a favourable impact. 2. Evaluating throttling the ZK writes in writeBundleDataOnZooKeeper(). Currently it enqueues ZK writes for all bundles at once, likely causing this issue. We were thinking of doing the writes on a dedicated thread, slowly over a longer period (5-10m). The job runs every 15 minutes anyway, so if the write load can be spread across that duration, maybe the pressure on ZK can be avoided. Are there gotchas that we need to keep in mind when trying this approach? 3. If we remove the ZK writes in writeBundleDataOnZooKeeper() entirely, what are the repercussions? Who is using the /LB/bundle-data & /LB/broker-time-avg data? Later versions of Pulsar have a strategy of choosing only the topK bundles, so we will likely explore that later. But in the short term, what changes can we explore to fix this?
    -- Edit loadBalancerLoadSheddingStrategy=org.apache.pulsar.broker.loadbalance.impl.OverloadShedder loadBalancerLoadPlacementStrategy=org.apache.pulsar.broker.loadbalance.impl.LeastResourceUsageWithWeight loadBalancerReportUpdateThresholdPercentage=10 loadBalancerReportUpdateMinIntervalMillis=60000 loadBalancerReportUpdateMaxIntervalMinutes=15 loadBalancerHostUsageCheckIntervalMinutes=1
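    Item 2 above (spreading the bundle writes over the window instead of bursting them) can be sketched independently of Pulsar. This is a toy pacing loop, not the actual load-manager code; the op list and window are placeholders:

```python
import time

def paced_writes(write_ops, window_secs):
    """Run write_ops spread evenly over window_secs instead of all at
    once, keeping the number of in-flight metadata-store requests low."""
    if not write_ops:
        return []
    gap = window_secs / len(write_ops)
    results = []
    for op in write_ops:
        results.append(op())
        time.sleep(gap)  # simple pacing; a scheduled executor would also work
    return results

# Toy usage: 5 stand-in "bundle writes" spread over half a second
ops = [lambda i=i: f"bundle-{i}" for i in range(5)]
start = time.monotonic()
out = paced_writes(ops, 0.5)
elapsed = time.monotonic() - start
print(out)             # -> ['bundle-0', 'bundle-1', 'bundle-2', 'bundle-3', 'bundle-4']
print(elapsed >= 0.5)  # -> True
```

    The trade-off is staleness: with 20K bundles paced over 5-10 minutes, the last bundles written carry data that is several minutes old by the time they land, which matters only if something consumes /LB/bundle-data at finer granularity than the 15-minute cycle.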
  • c

    Cong Zhao

    09/08/2025, 11:56 AM
    📣 [ANNOUNCE] Apache Pulsar 4.1.0 released 📣 The Apache Pulsar team is proud to announce Apache Pulsar version 4.1.0. For Pulsar release details and downloads, visit: https://pulsar.apache.org/download Release Notes are at: https://pulsar.apache.org/release-notes/versioned/pulsar-4.1.0
  • j

    Jiji K

    09/09/2025, 6:41 AM
    Hello folks! Hope you are all doing great. One question: has anyone tried IBM's Semeru Java runtime on a Pulsar cluster?