# helpdesk
m
cc @able-gigabyte-21598
d
how are you getting this error?
w
stress testing egress deployed on GKE
running multiple web egresses in parallel
d
this is a sign that Chrome was not launched correctly. It's possible you've exceeded some resource limits on the server side
w
yeah, seems like it. I also faced this issue in multiple recordings
```yaml
replicaCount: 1

# Suggested value for gracefully terminating the pod: 1 hour
terminationGracePeriodSeconds: 3600

egress:
  api_key: ""
  api_secret: ""
  ws_url: ""
  log_level: info
  health_port: 8080
  prometheus_port: 9090
  # template_base: "https://your-custom-template.com"
  redis:
    address: 1.2.3.4
    db: 0
    username: default
    password: qwe
    use_tls: false
  s3:
    access_key: ""
    secret: ""
    region: ""
    # endpoint:
    bucket: ""
  # cpu_cost:
    # room_composite_cpu_cost: 3
    # track_composite_cpu_cost: 2
    # track_cpu_cost: 1

# autoscaling requires resources to be defined
autoscaling:
  # set to true to enable autoscaling. when set, ignores replicaCount
  enabled: true
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 60
#  targetMemoryUtilizationPercentage: 60
  # for use with prometheus adapter - the egress service outputs a prometheus metric called livekit_egress_available
  # this can be used to ensure a certain number or percentage of instances are available
#  custom:
#    metricName: my_metric_name
#    targetAverageValue: 70

# if egress should run only on specific nodes
# this can be used to isolate designated nodes
nodeSelector: {}
# node.kubernetes.io/instance-type: c5.2xlarge

resources:
  requests:
    cpu: 3500m
    memory: 1024Mi
  limits:
    cpu: 8000m
    memory: 2048Mi

serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations:
  sidecar.istio.io/inject: "false"
  linkerd.io/inject: disabled

podSecurityContext: {}
# fsGroup: 2000

securityContext: {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

tolerations: []

affinity: {}
```
this is my helm config. Should I change the `resources` for better performance?
d
track composites are fairly expensive to run; you wouldn't be able to run more than a couple with `3500m` requested
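For context, here's a hedged sketch of how the commented `cpu_cost` section from the config above relates to that constraint. The values are just the chart's commented defaults, not tuned recommendations:

```yaml
egress:
  # Per-request CPU budgets (the commented defaults from the config above).
  # With cpu: 3500m requested and a cost of 2 per track composite, roughly
  # one track composite fits within the request before new ones are declined.
  cpu_cost:
    room_composite_cpu_cost: 3
    track_composite_cpu_cost: 2
    track_cpu_cost: 1
```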
w
got it. I'm using web egress btw; should I give it more resources and autoscale at around 50% CPU usage? @dry-elephant-14928
d
it's workload dependent, so you'd have to experiment with the scaling parameters to see what works best for your setup
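One concrete option hinted at in the config comments above is scaling on the `livekit_egress_available` metric via the prometheus adapter instead of raw CPU. A minimal sketch, reusing the chart's commented `custom` keys; the target value is the chart's commented example, not a recommendation:

```yaml
autoscaling:
  enabled: true
  minReplicas: 1
  maxReplicas: 5
  # Scale on the livekit_egress_available metric exported on prometheus_port
  # (requires the prometheus adapter). This keeps a buffer of available
  # workers instead of reacting to CPU usage after the fact.
  custom:
    metricName: livekit_egress_available
    targetAverageValue: 70
```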
a
@wonderful-nail-69227 web egress uses 4 CPUs in average cases, but it can use up to 6 for more intensive templates. An easy and effective way to autoscale is to use large machines (12 or 16 CPUs) and target around 70% CPU usage
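To make that concrete, here's a hedged sketch of the relevant values for a pool of 16-CPU nodes, following the numbers above. The memory figures and exact requests/limits are assumptions you'd validate against your own templates:

```yaml
# Illustrative sizing for 16-CPU nodes, per the advice above.
resources:
  requests:
    cpu: 6000m        # room for one intensive web egress (~6 CPU peak)
    memory: 2048Mi    # assumption; profile your templates
  limits:
    cpu: 12000m       # leave headroom for system pods on the node
    memory: 4096Mi    # assumption

autoscaling:
  enabled: true
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70   # target suggested above
```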