rootshellz12/05/2023, 6:00 PM
…but then make a final decision on which environment to assign. If the agent-specified environment changes on the node, the ENC won't see the change until a subsequent run. Is there a better pattern for this?
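(For context, the pattern being described looks roughly like the sketch below: an ENC that consults PuppetDB for the node's agent_specified_environment fact, which by definition reflects the previous run. The PuppetDB address, the jq dependency, and the base class are assumptions for illustration, not part of the original question.)

#!/bin/sh
# Minimal ENC sketch. Puppet passes the certname as the first argument
# and expects a YAML node definition on stdout.
certname="$1"

# agent_specified_environment is the fact Puppet stores for the environment
# the agent asked for -- on the *previous* run, which is exactly the
# one-run lag described above. localhost:8080 is an assumed PuppetDB address.
env=$(curl -s "http://localhost:8080/pdb/query/v4/nodes/${certname}/facts/agent_specified_environment" \
      | jq -r '.[0].value // empty')

cat <<EOF
environment: ${env:-production}
classes:
  base: {}
EOF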
riccsnet12/05/2023, 9:43 PM
…resource that is failing after the upgrade. The following is the command being run and the error it produces when run at the command line.
During the upgrade the readline rpm was upgraded from version 6.2 to 7.0. Is there something I need to do to allow the puppet resource to use the new libreadline version? Also, everything on the puppet server seems to be working: we have about 150 clients that are having no issues connecting to and using the puppet server.
/opt/puppetlabs/server/bin/psql --tuples-only --quiet -p 5432 --dbname postgres
/opt/puppetlabs/server/bin/psql: error while loading shared libraries: libreadline.so.6: cannot open shared object file: No such file or directory
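(For reference, a first diagnostic step, plus a commonly cited but unsupported workaround; the library paths are assumptions for a typical EL layout and should be verified first, e.g. with ldconfig -p:)

# Show which libreadline the bundled psql binary is trying to load:
ldd /opt/puppetlabs/server/bin/psql | grep readline
ldconfig -p | grep libreadline

# Unsupported shim: point the old soname at the new library so psql can
# start again. ABI compatibility between readline 6 and 7 is not
# guaranteed, so treat this as a stopgap, not a fix.
ln -s /usr/lib64/libreadline.so.7 /usr/lib64/libreadline.so.6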
riccsnet12/05/2023, 9:45 PM
spp12/05/2023, 10:10 PM
bastelfreak12/05/2023, 10:16 PM
Neeloj12/06/2023, 10:23 AM
Guillem Liarte12/06/2023, 12:03 PM
With this in hiera:

baseline:
  os_name: Rocky Linux
  os_version: 8.6
  rpm_packages_present:
    - 'containerd.io 1.4.3'
    - 'container-selinux 2.179.1'
    - 'docker-ce 20.10.17'
    - 'sqlite 3.26.0'
    - 'rsync 3.1.3'
    - 'perl-YAML 1.24'
    - 'perl-JSON 2.97.001'
    - 'qperf 0.4.11'
    - 'epel-release 8'
(shortened on purpose)

$baseline = lookup( 'baseline', Hash, 'deep' )

fails with puppet-agent 7, while it is perfectly fine when called from puppet agent v6. The classification seems to work correctly. I have looked through the changes and release notes for v7 and I can't find anything specific to this. Instead, we get:

Error while evaluating a Resource Statement, Function lookup() did not find a value for the name 'baseline' on node xxxxx

----

The same happens when using puppet lookup from the command line on one of the puppet servers; the same node lookup will fail when that node is on v7, but behave as expected on v6. But here, there is a more interesting result:

Error: Could not run: No facts available for target node: xxxxxx

So, does it mean that the agent in v7 is not sending or finding the facts that v6 can find without issues? Does anyone have any idea of what may be going on?
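(Side note, not a fix for the underlying fact problem: lookup() accepts a fourth argument as a default value, which turns the hard compile failure into something the manifest can branch on; a minimal sketch:)

# Returns {} instead of aborting compilation when 'baseline' cannot be resolved:
$baseline = lookup('baseline', Hash, 'deep', {})
if $baseline.empty {
  notice('baseline hiera data missing for this node')
}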
Guillem Liarte12/06/2023, 12:05 PM
Guillem Liarte12/06/2023, 12:21 PM
2023-12-06T13:20:24.820+01:00 INFO [qtp698030402-52] [puppetserver] Puppet 'replace_facts' command for YYYYYYYYYYY submitted to PuppetDB with UUID 53d86d27-f4cc-466b-8a8d-b04cf0b3e4c6
2023-12-06T13:20:28.558+01:00 INFO [qtp698030402-52] [puppetserver] Puppet Compiled catalog for YYYYYYYYYYYYYYYYYYY in environment infra in 3.63 seconds
2023-12-06T13:20:28.561+01:00 INFO [qtp698030402-52] [puppetserver] Puppet Caching catalog for YYYYYYYYYYYYYYYYY
bastelfreak12/06/2023, 12:22 PM
…on the CLI and debug it. You need to run it on the puppetserver.
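(For reference, a typical server-side invocation; the node and environment names are taken from the log above and are placeholders:)

puppet lookup baseline --node xxxxx --environment infra --explain
# --explain prints every hierarchy level consulted, which usually shows
# whether the miss comes from missing facts or a wrong hierarchy path.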
Guillem Liarte12/06/2023, 12:41 PM
The same happens when using puppet lookup from the command line on one of the puppet servers; the same node lookup will fail when that node is on v7, but behave as expected on v6. But here, there is a more interesting result:
Error: Could not run: No facts available for target node: xxxxxx
So, does it mean that the agent in v7 is not sending or finding the facts that v6 can find without issues?
Guillem Liarte12/06/2023, 12:42 PM
190712/06/2023, 2:15 PM
…It appears that there isn't much information about the 'dnf module enable mariadb:10.5' provider in the documentation.
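(There is a dnfmodule provider for the package type in recent Puppet; a minimal sketch matching the stream in the question, with the profile attribute being an assumption, not part of the original message:)

# Enables/installs the mariadb module stream, equivalent in spirit to
# 'dnf module enable mariadb:10.5' followed by an install:
package { 'mariadb':
  ensure   => '10.5',       # module stream
  provider => 'dnfmodule',
  flavor   => 'server',     # module profile; optional, assumed here
}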
Bruno Josafa12/06/2023, 2:58 PM
Elliott12/06/2023, 4:31 PM
…? It seems both are deprecated, maybe? What should I be using to validate the type of a variable that is not a parameter and not in a declaration statement? Bard and ChatGPT are misleading here too.
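(The built-in assert_type() works on any value, not just class parameters; a minimal sketch, with the variable and message being placeholders:)

# Fails compilation with a clear message when $port does not match the type:
assert_type(Integer[1, 65535], $port)

# Non-failing variant using the type-match operator:
unless $port =~ Integer[1, 65535] {
  fail("expected a port number, got ${port}")
}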
Jean Michel Feltrin12/06/2023, 5:14 PM
…even with certs showing as OK:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Function Call, Failed to execute '/pdb/query/v4' on at least 1 of the following 'server_urls'
I'm also able to curl the same endpoint using the server certs.
openssl verify -CAfile ca.pem $(puppet master --configprint hostcert)
/etc/puppetlabs/puppet/ssl/certs/xxx.comain.pem: OK
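(For reference, a manual check against the same API using the agent certs; the PuppetDB hostname and cert paths are assumptions based on the defaults and the redacted name above:)

curl --cert   /etc/puppetlabs/puppet/ssl/certs/xxx.comain.pem \
     --key    /etc/puppetlabs/puppet/ssl/private_keys/xxx.comain.pem \
     --cacert /etc/puppetlabs/puppet/ssl/certs/ca.pem \
     'https://puppetdb.example.com:8081/pdb/query/v4/nodes'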
Jean Michel Feltrin12/06/2023, 5:15 PM
rismoney12/07/2023, 1:37 AM
vchepkov12/07/2023, 1:43 AM
Christian Michael Tan12/07/2023, 9:00 AM
rismoney12/07/2023, 2:28 PM
David Sandilands12/08/2023, 2:31 PM
matthew katzenstein12/08/2023, 6:47 PM
Kenneth Smith12/08/2023, 8:25 PM
It'll try 5 times and then give up with the final error and a very large stacktrace I can provide if needed. This can happen within minutes of restarting puppetdb, after a few catalogs/reports are successfully handled, though it sometimes takes an hour or two. It's hosted on the same physical box as the postgres server (9.2, which maybe can be upgraded if it would help). This is hardware, so no VM shenanigans at play at all.

I've enabled connect/disconnect logging in postgres, and it does indeed log connections being closed, but it does not log any problem with them, just normal disconnects. Lots of googling has turned up various posts online from 2010-2013ish, which seems about right time-wise, with few things specifically puppet-related and lots of things specifically JDBC-related. The most plausible theory is that some SQL exception is being thrown that implicitly closes a connection in the pool, but isn't being treated as such by the pool manager and/or application code. So the object stays in the pool, gets pulled for use later, and then raises a connection-closed exception. This translates to the retry loop above and, mostly, puppet agents timing out, because the master times out waiting for puppetdb.

Watching the connection log in postgres, I do see connections being made and then disconnecting during this retry loop, however it's not clear whether those are the retry loop re-establishing connections or just other normal operations happening. I've tried tuning
2023-12-08 13:11:02,648 DEBUG [c.p.jdbc] Caught org.postgresql.util.PSQLException: 'This connection has been closed.'. SQL Error code: '08003'. Attempt: 4 of 5.
2023-12-08 13:11:14,172 DEBUG [c.p.jdbc] Caught org.postgresql.util.PSQLException: 'This connection has been closed.'. SQL Error code: '08003'. Attempt: 5 of 5.
2023-12-08 13:11:26,219 WARN  [c.p.jdbc] Caught exception. Last attempt, throwing exception.
2023-12-08 13:11:26,220 DEBUG [c.p.p.command] [566d9d3a-4a77-4128-b2e9-1dce2aced0d8] [replace catalog] Retrying after attempt 2, due to: org.postgresql.util.PSQLException: This connection has been closed.
org.postgresql.util.PSQLException: This connection has been closed.
to make the pool management code check and rebuild pool connections more aggressively, but I can't tell if that's made any impact. The time between a puppetdb restart and the problem recurring seems about the same. If anyone has any knowledge about dusty versions of puppet to share, I'd be very happy to hear it, thanks!
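(For reference, the connection-pool knobs in PuppetDB of that era lived in the [database] section of database.ini; which ones were tuned above isn't stated, and the values here are illustrative, not recommendations:)

# /etc/puppetdb/conf.d/database.ini (path varies by packaging)
[database]
conn-max-age = 10      # minutes a pooled connection may sit idle before being culled
conn-keep-alive = 2    # minutes between keep-alive pings, so dead connections get noticed
conn-lifetime = 30     # hard cap in minutes on any connection's lifetime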