# opal
a
regarding (1) yes, this is normal, OPA stores data in memory at roughly 20x its size on disk: https://www.openpolicyagent.org/docs/latest/policy-performance/#resource-utilization regarding (2) we don't have many options on the OPAL side at the moment; most of the optimization would happen at the OPA level, in the Rego and data you load. @Raz Co can you help @Dor Alon with (3)?
r
Hey Dor, when you say you run OPAL as a "k8s sidecar", do you mean you run it as a second container in the same pod as your backend? Can you elaborate a bit more?
d
@Raz Co yes, a second container in the same pod as the service using it. What's the best-practice deployment option for the OPAL client?
OPA recommends deploying it either as a sidecar or as a host-level daemon (DaemonSet): https://www.openpolicyagent.org/docs/latest/integration/#integrating-with-the-rest-api
r
Running the client as part of your backend pod is a nice way to get the usual k8s fault-tolerance and availability features. When your backend sends a request to the OPAL client on localhost (same pod, same network interface), you can be sure it's served by the client on the same node, which is obviously great.
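For reference, a minimal sketch of that sidecar pattern (the image tag, ports, env var names and server address are assumptions, adjust to your setup):

```yaml
# Backend pod with an OPAL client sidecar (illustrative sketch, not a drop-in manifest)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-backend                             # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-backend
  template:
    metadata:
      labels:
        app: my-backend
    spec:
      containers:
        - name: backend
          image: my-backend:latest             # your service image
          env:
            - name: OPA_URL                    # hypothetical env var your service reads
              value: "http://localhost:8181"   # inline OPA in the sidecar, same pod network
        - name: opal-client
          image: permitio/opal-client:latest   # check the exact image/tag you use
          env:
            - name: OPAL_SERVER_URL
              value: "http://opal-server:7002" # assumed address/port of your OPAL server
          ports:
            - containerPort: 8181              # inline OPA
```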
I’m not a fan of DaemonSets in general, and specifically not for deploying an application or an agent. I usually use a DaemonSet for infra-related services, not application-related ones.
Deploying OPA as a DaemonSet would make sense if you were using OPA for infra policies.
d
a sidecar is a no-go because of the memory footprint, and a DaemonSet has no redundancy. Looks like a Deployment is the only way (something like the sketch below). Do you have any real-world data on the expected latency?
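For the sake of discussion, a rough sketch of that shared-client Deployment (the name, replica count and image are placeholders):

```yaml
# OPAL client as its own Deployment plus a ClusterIP Service (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opal-client
spec:
  replicas: 2                                  # scaled independently of the backend pods
  selector:
    matchLabels:
      app: opal-client
  template:
    metadata:
      labels:
        app: opal-client
    spec:
      containers:
        - name: opal-client
          image: permitio/opal-client:latest   # check the exact image/tag you use
          ports:
            - containerPort: 8181              # inline OPA
---
apiVersion: v1
kind: Service
metadata:
  name: opal-client
spec:
  selector:
    app: opal-client
  ports:
    - port: 8181
      targetPort: 8181
# backends would then query http://opal-client:8181 instead of localhost,
# which adds a network hop and is where the latency question comes in
```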
r
It really depends on your cloud provider, instance types, network setup, throughput and more. Why is the memory footprint an issue here? You can set memory limits per container, and if a container exceeds its limit, only that specific container gets OOM-killed.
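e.g. the relevant part of the pod spec would look something like this (the values are just an illustration):

```yaml
# Memory limit on the OPAL client container only (illustrative values)
containers:
  - name: opal-client
    image: permitio/opal-client:latest
    resources:
      requests:
        memory: "256Mi"
      limits:
        memory: "400Mi"   # if exceeded, only this container is OOM-killed, not the backend
```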
d
we have about 2000 pods in production and most of them will use OPAL; an extra ~360MB per pod adds up to roughly 700GB across the fleet, which is going to cost us
r
Yeah, that’s a large infra… A DaemonSet exposed on a node port could be an option to reduce latency, but with that many nodes the footprint ends up roughly the same, I guess.
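Roughly it would look like this (hostPort here is just one way to keep traffic node-local; the image and port are assumptions):

```yaml
# One OPAL client per node via a DaemonSet (sketch)
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: opal-client
spec:
  selector:
    matchLabels:
      app: opal-client
  template:
    metadata:
      labels:
        app: opal-client
    spec:
      containers:
        - name: opal-client
          image: permitio/opal-client:latest   # check the exact image/tag you use
          ports:
            - containerPort: 8181
              hostPort: 8181                   # lets backends reach the client on the node's own IP
# with N nodes you still pay the client's memory N times, just per node instead of per pod
```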