# choria
  • i

    Ibrahim Sikandar

    10/01/2025, 8:14 PM
    it's running choria --version 0.29.4
  • b

    bastelfreak

    10/01/2025, 8:18 PM
    Which error did you get when you tried to install it with puppet?
  • i

    Ibrahim Sikandar

    10/01/2025, 8:23 PM
    this didn't work
  • s

    smortex

    10/01/2025, 10:01 PM
    the choria class has no "broker" parameter. The broker is managed with choria::broker. Here is the conf I use for end-to-end testing of choria when I update the FreeBSD ports: https://github.com/smortex/freebsd-puppet-test-infrastructure/tree/production/site-modules/profile/manifests/choria
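    For reference, a minimal sketch of that split, assuming the parameter names documented for the choria Puppet modules (verify against the module docs and the linked repo):
    node 'broker.example.net' {
      # broker settings live on choria::broker, not on the choria class
      class { 'choria':
        srvdomain => 'example.net',  # hypothetical value
      }
      class { 'choria::broker':
        network_broker => true,
      }
    }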
  • a

    Anderson Ferreira

    10/15/2025, 6:34 PM
    hello! i'd like to propose the following PR for your review: https://github.com/choria-io/go-choria/pull/2282 we've hit a situation with an overloaded choria broker during a "reconnection storm" and increasing the connection timeout on the choria servers - along with tls_timeout and auth_timeout on the broker - resolved the issue.
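    For context, the broker-side knobs mentioned above are set in the broker configuration; a sketch, assuming the plugin.choria.network.tls_timeout key from the go-choria configuration reference (the value is illustrative, and the matching auth timeout key should be checked against the same reference):
    # broker.conf (sketch)
    # give slow TLS handshakes more time during a reconnection storm
    plugin.choria.network.tls_timeout = 10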
  • r

    ripienaar

    10/15/2025, 7:03 PM
    Hmm, how many nodes are reconnecting? They should have random reconnect delays and expo backoff to help already. Do you have big RSA keys maybe? Like 4K or similar?
  • a

    Anderson Ferreira

    10/15/2025, 7:36 PM
    6K nodes. 2K keys. broker running on an aws ec2 instance (t3a.medium). i could see in the logs the backoff algorithm running, but in the environment i have, that wasn't enough. i went ahead, built a choria package with the connect timeout option, installed it on a choria server, and that server could connect to the broker. these are the sort of messages i see in the broker during the reconnection storm:
    {
      "component": "network_broker",
      "level": "error",
      "msg": "redact:41534 - cid:29307 - TLS handshake timeout",
      "time": "2025-10-15T06:24:23-05:00"
    }
    and this is what i see in the server:
    {"component":"server","connection":"redact","identity":"redact","level":"info","msg":"Sleeping 3.062s till the next reconnection attempt","time":"2025-10-14T11:53:50-05:00"}
    {"component":"server","connection":"redact","identity":"redact","level":"error","msg":"NATS client encountered an error: EOF","time":"2025-10-14T11:53:50-05:00"}
    {"component":"server","connection":"redact","identity":"redact","level":"info","msg":"Sleeping 8.839s till the next reconnection attempt","time":"2025-10-14T11:54:08-05:00"}
    {"component":"server","connection":"redact","identity":"redact","level":"error","msg":"NATS client encountered an error: EOF","time":"2025-10-14T11:54:08-05:00"}
    {"component":"server","connection":"redact","identity":"redact","level":"info","msg":"Sleeping 4.7s till the next reconnection attempt","time":"2025-10-14T11:54:24-05:00"}
    {"component":"server","connection":"redact","identity":"redact","level":"error","msg":"NATS client encountered an error: EOF","time":"2025-10-14T11:54:24-05:00"}
    {"component":"server","connection":"redact","identity":"redact","level":"info","msg":"Sleeping 10.622s till the next reconnection attempt","time":"2025-10-14T11:54:38-05:00"}
    {"component":"server","connection":"redact","identity":"redact","level":"error","msg":"NATS client encountered an error: read tcp redact:57536-\u003eredact:4222: read: connection reset by peer","time":"2025-10-14T11:54:38-05:00"}
  • r

    ripienaar

    10/16/2025, 3:13 AM
    Yeah, sounds like typical RSA certs that are very heavy. Not sure if Puppet can make different algo certs yet but that would really help. Is it one broker or a cluster? 6k nodes is really not that bad.
  • r

    ripienaar

    10/16/2025, 3:41 AM
    Oh those are really crappy instances. With 2k key size certs you should probably go for something bigger. More CPUs would help as the TLS will use that well.
  • r

    ripienaar

    10/16/2025, 3:41 AM
    Can add another option but obviously prefer to identify the issue cos that's not normal
  • b

    bastelfreak

    10/16/2025, 6:40 AM
    Puppet has an option for elliptic curves, but that's rarely used. Worth a try
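    For reference, a sketch of that option using Puppet's documented key_type and named_curve settings; it only affects newly issued certificates, so existing certs would need to be reissued:
    # puppet.conf (sketch)
    [main]
    key_type = ec
    named_curve = prime256v1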
  • a

    Anderson Ferreira

    10/16/2025, 3:39 PM
    it's one broker only. in practical terms, increasing the instance size might tackle the problem, but why give aws more money if a configuration option would also handle it? 😛 under usual load, the broker is performing fine; responses from the fleet are processed quickly enough. so increasing the instance for a situation that does not happen often does not seem ideal. and if i can go a little further, conceptually it seems we really need the connect timeout: since we are providing a way to tune the timers on the broker side (tls_timeout/auth_timeout), we'd also need a way to reflect that on the server side. i hope you agree 🙂
  • r

    ripienaar

    10/16/2025, 5:25 PM
    Yeah. Will review next week am ooo atm
  • a

    Anderson Ferreira

    10/16/2025, 5:27 PM
    oh, no problem. enjoy your time off. and thank you so much for being open to discuss/review this. really appreciate it.
  • i

    Ibrahim Sikandar

    10/16/2025, 6:42 PM
    Hey Guys, so I just built a custom choria module using rpm, it's v0.29. I configured the broker using a broker.conf file, and on another server I configured the server using server.conf and client.conf. It works: when I did
    choria ping
    it returned the node's hostname. Now I want to get facts like IP addresses and stuff but it returns
    null
    I also added this
    plugin.choria.facts_source = facter
    plugin.choria.facts_file = /etc/choria/facts.json
    in the server.conf file, but it's still not getting the facts, although it returns the IP address when I do this
    sudo jq -r '.ipaddress' /etc/choria/facts.json
    192.168.20.30
  • i

    Ibrahim Sikandar

    10/21/2025, 10:24 AM
    no one??
  • r

    ripienaar

    10/21/2025, 10:25 AM
    Puppet would have configured the correct items for you
  • r

    ripienaar

    10/21/2025, 10:25 AM
    how did you install it?
  • i

    Ibrahim Sikandar

    10/21/2025, 10:26 AM
    I am not using the puppet module, I installed choria using the yum rpm and am configuring it manually
  • r

    ripienaar

    10/21/2025, 10:26 AM
    support here is for puppet installation method only
  • i

    Ibrahim Sikandar

    10/21/2025, 12:27 PM
    so actually I was already using the mcollective agent package, puppet and service, and it's already configured. Now that it's EOL, I want to move to choria. So what will be the best approach to move without disturbing the old configs and infra? I did try to install choria using puppet but it also tried to replace my configured mcollective (which I don't want), that's why I moved to a custom module.
  • r

    ripienaar

    10/21/2025, 12:30 PM
    use the choria puppet modules
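    For reference, a minimal sketch of that route, assuming the Hiera keys shown in the choria.io deployment docs (verify against the docs for your module versions):
    # Hiera (sketch)
    mcollective::server: true
    mcollective::client: true
    # manifest (sketch)
    include mcollective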
  • r

    ripienaar

    10/21/2025, 12:30 PM
    you cannot migrate without downtime, it's a new thing
  • r

    ripienaar

    10/21/2025, 12:30 PM
    not protocol compatible, can't use the same server etc
  • r

    ripienaar

    10/21/2025, 12:31 PM
    hence the new name etc. compatibility is that old client code written in ruby will keep working, old agents will keep working, but apart from that it's a new thing: new configs, new protocol, new security, new everything
  • a

    Anderson Ferreira

    11/18/2025, 7:01 PM
    hello, guys. is there sort of a date for a new choria release? not trying to bother, just want to plan the deploy of the newest code, and want to understand if i should wait for the official release 🙂
  • r

    ripienaar

    11/18/2025, 7:12 PM
    i don't have immediate plans to do a release
    ✅ 1
    😞 1
  • r

    ripienaar

    11/23/2025, 4:43 PM
    This channel is migrating to the Vox Pupuli slack, please join using https://short.voxpupu.li/puppetcommunity_slack_signup Responses here will be limited
  • s

    smortex

    12/03/2025, 11:20 PM
    set the channel topic: https://choria.io — This channel is migrating to the Vox Pupuli slack, please join using https://short.voxpupu.li/puppetcommunity_slack_signup