# opal
s
Hi @Philip Claesson, great question! I will check this out with my team and get back to you.
p
Thanks @Shuvy Ankor 🙂
What actually brought me here is that I'm trying to set up Horizontal Pod Autoscaling, and I believe that's not possible without setting resource requests. Perhaps it could also be worth considering adding an optional HPA to the helm chart? 🤷
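For context, the optional HPA I have in mind would be something like the sketch below - names and thresholds are placeholders, and the CPU-utilization target is exactly the part that requires resource requests to be defined:
Copy code
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: opal-client            # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: opal-client          # placeholder: the client deployment created by the chart
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # % of the CPU *request*, hence the dependency on resource requests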
a
cc @Ro'e Katz who is now working on it
r
@Philip Claesson - a PR adding configurable resource requests would be great! I don’t have strong advice regarding what values to use for CPU/memory requests, not least because many different considerations come into play when choosing them. I can share that for
opal-server
in one of our environments, which is actually pretty busy, the avg CPU is ~200 mcores (with some spikes towards ~1000). As for memory usage, we see it capped at ~2.5GB - but we really push OPAL to the limit in that regard, so I believe much less would be enough in most cases (512MB?). Maybe more people can share their experience with resource consumption :)
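To make that concrete, requests along these lines would be a reasonable starting point - the server/resources keys are just a guess at how the new chart fields could be shaped, and the numbers mirror what we observe, so tune them for your own workload:
Copy code
server:
  resources:
    requests:
      cpu: 200m        # roughly the average CPU we see
      memory: 512Mi    # likely enough for most deployments
    limits:            # purely illustrative - leave headroom for the ~1000 mcore spikes
      cpu: "1"
      memory: 2Gi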
p
Thanks a lot @Ro'e Katz! That's really helpful. I guess we'll just have to give it a try in our env and see where we end up! I'll get back to you with a PR to the helm chart. 🙂
🎉 1
Hey @Ro'e Katz! I've added the fields to the helm chart and deployed it successfully on my end. Will submit the PR after lunch. However, I noticed that when bumping OPAL from 0.5.0 to 0.6.1, my client pods don't become ready.
k describe
gives me
Copy code
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  4m27s                  default-scheduler  Successfully assigned uas/uas-opal-client-764897d94d-28hwh to ip-10-11-14-144.eu-west-1.compute.internal
  Normal   Pulling    4m26s                  kubelet            Pulling image "docker.io/permitio/opal-client:0.6.1"
  Normal   Pulled     4m23s                  kubelet            Successfully pulled image "docker.io/permitio/opal-client:0.6.1" in 2.712545837s
  Warning  Unhealthy  114s (x5 over 3m57s)   kubelet            Liveness probe failed: HTTP probe failed with statuscode: 503
  Normal   Killing    114s                   kubelet            Container opal-client failed liveness probe, will be restarted
  Normal   Pulled     114s                   kubelet            Container image "docker.io/permitio/opal-client:0.6.1" already present on machine
  Normal   Created    113s (x2 over 4m23s)   kubelet            Created container opal-client
  Normal   Started    113s (x2 over 4m23s)   kubelet            Started container opal-client
  Warning  Unhealthy  113s                   kubelet            Readiness probe failed: Get "http://10.11.14.127:7000/healthcheck": dial tcp 10.11.14.127:7000: connect: connection refused
  Warning  Unhealthy  102s (x11 over 4m12s)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 503
Appears like the readiness probe returns 503 - any idea why that would be?
I'll reproduce again and snatch some logs after lunch.
r
Have you set ‘dataSourcesConfig’? If not - you’ll have to set OPAL_DATA_UPDATER_ENABLED: False on the client’s extraEnv. (We have a new PR for the chart that does that automatically, but haven’t merged it yet @Raz Co )
Although in that case it shouldn’t have come up healthy on 0.5.0 either, so logs might help :)
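For example, roughly like this in the client values (assuming extraEnv is a plain name/value map in your chart version):
Copy code
client:
  extraEnv:
    OPAL_DATA_UPDATER_ENABLED: "False"   # only needed when no dataSourcesConfig is configured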
p
I think I figured out why, though not how to fix it. In the error logs I could see a request failing:
Copy code
uas-opal-client-5d487bcfb4-85bp6 opal-client ValueError: OPA Client: unexpected status code: 401, error: {'code': 'unauthorized', 'message': 'unauthorized resource access'}
This error occurs because we use OPA request auth. We make sure a certain secret is passed with every request before granting access to OPA: https://www.openpolicyagent.org/docs/latest/security/#authentication-and-authorization
Copy code
opaStartupData:
  auth.rego: >
    package system.authz
    
    default allow := {
      "allowed": false,
      "reason": "unauthorized resource access"
    }

    allow := { "allowed": true } {   # Allow request if...
      # logic for allowing access in here
    }
If I disable this auth policy, the readiness probe works. However, that's not a solution since we don't want OPA access to be unrestricted.
The best solution, I guess, would be to be able to configure how the readiness probe authorizes itself. In our case that would be using an
Authorization: Bearer $KEY
header passed with the readiness probe. A second alternative would be to edit the auth policy to always allow requests to the endpoint the readiness probe is using.
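Kubernetes itself can at least express that first option, since HTTP probes accept custom headers - so if the chart exposed it, it could look roughly like this (path/port taken from the probe in the events above; the token is a placeholder):
Copy code
readinessProbe:
  httpGet:
    path: /healthcheck
    port: 7000
    httpHeaders:
      - name: Authorization
        value: "Bearer <opa-access-key>"   # placeholder for the shared secret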
Hmm, looking at the code and logs it seems like client.maybe_init_healthcheck_policy() is the first thing that fails. Seems odd that it would suddenly cause problems, since it hasn't been changed in a while. @Ro'e Katz, any suggestions?
Found it! This change alters the default behaviour when using POLICY_STORE_AUTH_TOKEN. With this change, I also need to pass POLICY_STORE_AUTH_TYPE=token. 🙂
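So in the chart values it boils down to one extra client env var next to the token - a sketch, using the OPAL_-prefixed env form (as with OPAL_DATA_UPDATER_ENABLED above); the exact extraEnv shape depends on the chart:
Copy code
client:
  extraEnv:
    OPAL_POLICY_STORE_AUTH_TYPE: "token"
    OPAL_POLICY_STORE_AUTH_TOKEN: "<the same secret your OPA authz policy checks>"   # placeholder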
👍 2
r
That’s amazing @Philip Claesson, thanks for the contribution! I’ll review it ASAP 😉
💚 1
p
Thanks @Raz Co! No stress, whenever you have time 🙂
r
@Philip Claesson Haha sorry for not being available yesterday. Glad you’ve been able to find the issue yourself 🙂 So everything works now?
p
No problem @Ro'e Katz! Yes, I was able to figure it out myself and everything works 🙂 Perhaps it would be nice to note somewhere in the release notes/docs that the default behaviour of the POLICY_STORE_AUTH_TOKEN variable changed from v0.5.0 -> v0.6.1 (I assume other people who, like me, bump OPAL without looking through the changes carefully and use token auth will face the same problem).
Hey @Raz Co, do you think you'll have time to take a look at it today? 🙂
r
Hey @Philip Claesson , I’ll do it in the next hour :)
💚 1
Merged and released 🙂 Thanks a lot for the contribution Philip, this is highly appreciated!
p
Thanks a lot @Raz Co! 🙂
💪 3