# opal
s
Hello @Jeevan, I'll check with the team members and get back to you
j
thanks Shuvy
o
Can you share more logs? And tell us a bit more about your setup: how many OPAL servers / clients do you have? Do you use Docker / K8s / something else?
j
Deployment: ECS
OPAL Server (latest image): currently 1 instance
OPAL Client (latest image): 2
a
Hey @Jeevan you are probably setting up some configuration var incorrectly, can you please share your config?
j
sure.
```
OPAL_BROADCAST_URI -> redis://<host_url>:<port>
OPAL_DATA_CONFIG_SOURCES -> a JSON
OPAL_POLICY_REPO_MAIN_BRANCH -> alpha
OPAL_POLICY_REPO_SSH_KEY -> secret
OPAL_POLICY_REPO_URL -> git@github.com:<org>/<repo-name>.git
OPAL_POLICY_REPO_WEBHOOK_SECRET -> secret
OPAL_STATISTICS_ENABLED -> true
```
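(For reference, OPAL's documented shape for that JSON is a `config` object holding an `entries` list; a minimal sketch with a placeholder URL and destination path, not Jeevan's actual values:)
```
# minimal sketch of OPAL_DATA_CONFIG_SOURCES; the url and dst_path below
# are placeholders, not the actual redacted values
OPAL_DATA_CONFIG_SOURCES='{"config":{"entries":[{"url":"http://example-api:8000/policy-data","topics":["policy_data"],"dst_path":"/"}]}}'
```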
a
hey @Jeevan i need the full logs around the error
i suspect OPAL_BROADCAST_URI is set up incorrectly
but until i see the logs, i'm less sure
j
sure @Asaf Cohen. Thanks for jumping in. getting that
```
2022-12-19T10:54:24.283925+0000 | asyncio_redis.protocol   | INFO | Redis connection made
2022-12-19T10:54:24.284287+0000 | asyncio_redis.connection | INFO | Connecting to redis
2022-12-19T10:54:24.285451+0000 | asyncio_redis.protocol   | INFO | Redis connection made
2022-12-19T10:54:24.286752+0000 | asyncio_redis.protocol   | INFO | Redis connection lost
2022-12-19T10:54:24.286996+0000 | asyncio_redis.protocol   | INFO | Redis connection lost
2022-12-19T10:54:24.287276+0000 | asyncio_redis.connection | INFO | Connecting to redis
2022-12-19T10:54:24.288547+0000 | asyncio_redis.protocol   | INFO | Redis connection made
2022-12-19T10:54:24.288813+0000 | asyncio_redis.connection | INFO | Connecting to redis
2022-12-19T10:54:24.291760+0000 | asyncio_redis.protocol   | INFO | Redis connection made
```
that was one log
it's the above log that gets flushed repeatedly
could it be that the opal-client subscribed to the wrong path, and the server fails whenever a policy update is made?
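(For context: an OPAL client subscribes to data topics, and the server publishes on the topics named in its data config; a mismatch would typically mean missed updates on the client rather than server failures. A hypothetical check, assuming the default topic name:)
```
# the client's subscribed topics (default "policy_data") should match the
# topics listed in the server's OPAL_DATA_CONFIG_SOURCES entries
OPAL_DATA_TOPICS=policy_data
```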
a
TBH i am not sure what's happening here. i need the full opal server and client configuration, and the full logs from both of them, to try and figure this one out (feel free to redact secrets and send them to me privately). My only intuition for now is: try to use postgres as the broadcast backbone instead of redis (all the weird logs are due to a bug in encode/broadcaster and redis). See if it solves the problem. If it does, at least we've narrowed it down and can figure out a solution.
@Jeevan
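(A sketch of what that switch could look like; the host, credentials, and database name below are placeholders for your own Postgres instance:)
```
# placeholder credentials/host; point this at your own Postgres instance
OPAL_BROADCAST_URI=postgres://postgres:postgres@broadcast-db:5432/postgres
```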
j
yep, trying that. hopefully will be able to resolve this
will ping here once done
this is what i see in the server logs
so i haven't changed to the postgres backbone yet
wanted to understand when the `event_notifier` gets called? Only when some kind of data or policy update is received, right?
i mean i don't see anything breaking as such in the system. it's just the logs that are making me anxious
a
the event notifier can be triggered for internal communication inside the server, and sometimes it does not result in an actual update to the client. the "Notifying other side" log line indicates that the server is sending an update to the client.
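(So a quick way to separate real server-to-client updates from internal chatter, assuming the server logs have been exported to a file, e.g. from CloudWatch:)
```
# count actual server->client update notifications in the exported log
grep -c "Notifying other side" opal_server.log
```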