# ask-ai
j
does anyone have a `.env` file they can share that shows how to configure? i am getting issues with micronaut and im pretty sure its because i only have these two, but the docs are not the best:
```
WORKSPACE_ROOT=/tmp/workspace/
WORKSPACE_DOCKER_MOUNT=airbyte_workspace
```
k
A message from kapa.ai
Hmm, I don't know enough to give you a confident answer yet. However, you can refer to the Airbyte documentation for more information on configuration and troubleshooting: https://docs.airbyte.com
s
@Jake Kagan I use
```
WORKSPACE_ROOT: /workspace
WORKSPACE_DOCKER_MOUNT: airbyte_workspace
```
Not sure if it matters to you. This is what I set on all deployments where this is used.
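For context, here is a hedged sketch of how those two variables relate in a docker-compose setup (the service and volume names are illustrative, not taken from the actual Airbyte compose file — check your own `docker-compose.yaml`):

```yaml
# Sketch: WORKSPACE_DOCKER_MOUNT names the docker volume,
# WORKSPACE_ROOT is the path where it is mounted inside the containers.
services:
  worker:                               # illustrative service name
    volumes:
      - workspace:${WORKSPACE_ROOT}     # e.g. /workspace
volumes:
  workspace:
    name: ${WORKSPACE_DOCKER_MOUNT}     # e.g. airbyte_workspace
```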
Here's an example from my local deployment, which I also use for real environments:
```yaml
apiVersion: v1
data:
  AIRBYTE_VERSION: "0.44.4"

  ####
  # Airbyte Internal Database, see https://docs.airbyte.io/operator-guides/configuring-airbyte-db
  ####
  DATABASE_HOST: airbyte-db-svc.airbyte.svc.cluster.local
  DATABASE_PORT: "5432"
  DATABASE_DB: airbyte
  DATABASE_URL: jdbc:postgresql://airbyte-db-svc.airbyte.svc.cluster.local:5432/airbyte
  JOBS_DATABASE_MINIMUM_FLYWAY_MIGRATION_VERSION: "0.29.15.001"
  CONFIGS_DATABASE_MINIMUM_FLYWAY_MIGRATION_VERSION: "0.35.15.001"
  DATABASE_USER: admin
  DATABASE_PASSWORD: admin

  RUN_DATABASE_MIGRATION_ON_STARTUP: "true"
  ####
  # When using the airbyte-db via default docker image:
  ####
  CONFIG_ROOT: /configs
  DATA_DOCKER_MOUNT: airbyte_data
  DB_DOCKER_MOUNT: airbyte_db

  # Temporal.io worker configuration
  TEMPORAL_HOST: airbyte-temporal-svc.airbyte.svc.cluster.local:7233
  TEMPORAL_WORKER_PORTS: "9001,9002,9003,9004,9005,9006,9007,9008,9009,9010,9011,9012,9013,9014,9015,9016,9017,9018,9019,9020,9021,9022,9023,9024,9025,9026,9027,9028,9029,9030,9031,9032,9033,9034,9035,9036,9037,9038,9039,9040"

  ####
  # Workspace storage for running jobs (logs, etc)
  ####
  WORKSPACE_ROOT: /workspace
  WORKSPACE_DOCKER_MOUNT: airbyte_workspace
  LOCAL_ROOT: /tmp/airbyte_local
  WORKER_ENVIRONMENT: kubernetes
  METRIC_CLIENT: ""
  OTEL_COLLECTOR_ENDPOINT: ""
  ACTIVITY_MAX_ATTEMPT: ""
  ACTIVITY_INITIAL_DELAY_BETWEEN_ATTEMPTS_SECONDS: ""
  ACTIVITY_MAX_DELAY_BETWEEN_ATTEMPTS_SECONDS: ""
  WORKFLOW_FAILURE_RESTART_DELAY_SECONDS: ""
  USE_STREAM_CAPABLE_STATE: "true"
  AUTO_DETECT_SCHEMA: "true"
  # Launch a separate pod to orchestrate sync steps
  CONTAINER_ORCHESTRATOR_ENABLED: "true"
  CONTAINER_ORCHESTRATOR_IMAGE: ""
  WORKERS_MICRONAUT_ENVIRONMENTS: "control-plane"
  CRON_MICRONAUT_ENVIRONMENTS: "control-plane"
  WORKER_LOGS_STORAGE_TYPE: "MINIO"
  WORKER_STATE_STORAGE_TYPE: "MINIO"
  SHOULD_RUN_NOTIFY_WORKFLOWS: "true"
  MAX_NOTIFY_WORKERS: "5"

  ####
  # Maximum total simultaneous jobs across all worker nodes
  ####
  SUBMITTER_NUM_THREADS: "10"

  ####
  # Miscellaneous
  ####
  TRACKING_STRATEGY: logging
  WEBAPP_URL: http://airbyte-webapp-svc.airbyte.svc.cluster.local:80
  API_URL: /api/v1/
  CONNECTOR_BUILDER_API_URL: "/connector-builder-api"
  INTERNAL_API_HOST: airbyte-server-svc.airbyte.svc.cluster.local:8001
  FULLSTORY: disabled
  IS_DEMO: "false"
  LOG_LEVEL: INFO

  ####
  # AWS
  ####
  AWS_ACCESS_KEY_ID: minioadmin
  AWS_SECRET_ACCESS_KEY: minioadmin1234

  ####
  # Minio
  ####
  STATE_STORAGE_MINIO_ACCESS_KEY: minioadmin
  STATE_STORAGE_MINIO_SECRET_ACCESS_KEY: minioadmin1234

  ####
  # K8S Specific
  ####
  JOB_KUBE_TOLERATIONS: ""
  JOB_KUBE_ANNOTATIONS: ""
  JOB_KUBE_NODE_SELECTORS: ""
  JOB_KUBE_MAIN_CONTAINER_IMAGE_PULL_POLICY: ""

  ####
  # S3/Minio Log Configuration
  ####
  S3_LOG_BUCKET: airbyte-local-logs
  S3_LOG_BUCKET_REGION: ""
  S3_MINIO_ENDPOINT: http://airbyte-minio-svc.airbyte.svc.cluster.local:9000
  S3_PATH_STYLE_ACCESS: "true"

  ####
  # GCS Log Configuration (not used but set to empty)
  ####
  GCS_LOG_BUCKET: ""
  GOOGLE_APPLICATION_CREDENTIALS: ""

  ####
  # State Storage Configuration
  ####
  STATE_STORAGE_MINIO_BUCKET_NAME: airbyte-local-logs
  STATE_STORAGE_MINIO_ENDPOINT: http://airbyte-minio-svc.airbyte.svc.cluster.local:9000

  ####
  # Docker Resource Limits
  ####
  JOB_MAIN_CONTAINER_CPU_REQUEST: "0.1"
  JOB_MAIN_CONTAINER_CPU_LIMIT: "0.1"
  JOB_MAIN_CONTAINER_MEMORY_REQUEST: "200Mi"
  JOB_MAIN_CONTAINER_MEMORY_LIMIT: "200Mi"

  ####
  # CONNECTOR BUILDER
  ####
  CONNECTOR_BUILDER_API_HOST: airbyte-connector-builder-server-svc.airbyte.svc.cluster.local:80

kind: ConfigMap
metadata:
  namespace: airbyte
  labels:
    service: airbyte-env
  name: airbyte-env
```
You will need to make sure all values align with your deployment (Postgres hostnames, credentials, etc.).
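If it helps, here's a minimal sketch of how a pod would consume that ConfigMap via `envFrom` (the deployment name and image here are illustrative assumptions, not copied from the actual chart):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: airbyte-worker               # illustrative name
  namespace: airbyte
spec:
  selector:
    matchLabels:
      app: airbyte-worker
  template:
    metadata:
      labels:
        app: airbyte-worker
    spec:
      containers:
        - name: worker
          image: airbyte/worker:0.44.4   # assumed to match AIRBYTE_VERSION
          envFrom:
            - configMapRef:
                name: airbyte-env        # every data key becomes an env var
```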
You can also render the latest Helm chart as a single YAML file if you want to investigate (given you have the chart repo added):
```
helm template airbyte/airbyte > airbyte.yml
```
j
i actually am just using docker-compose, so it seems a bit different, but im going to try to implement some of these using minio, thank you!!!
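For the docker-compose route, the same ConfigMap keys become `KEY=value` lines in `.env`. A hedged sketch of the minio-related subset (values taken from the ConfigMap above; note the k8s `*.svc.cluster.local` endpoints would need to point at your local minio container instead):

```
WORKSPACE_ROOT=/tmp/workspace
WORKSPACE_DOCKER_MOUNT=airbyte_workspace
LOCAL_ROOT=/tmp/airbyte_local
WORKER_LOGS_STORAGE_TYPE=MINIO
WORKER_STATE_STORAGE_TYPE=MINIO
S3_LOG_BUCKET=airbyte-local-logs
S3_PATH_STYLE_ACCESS=true
AWS_ACCESS_KEY_ID=minioadmin
AWS_SECRET_ACCESS_KEY=minioadmin1234
```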
s
If you add the Helm chart and run the template command, you can reverse engineer the k8s deployment into docker-compose. I would probably advise you to try running with k8s locally, however; then you can just install with Helm and only provide a config file.
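That Helm route sketched as commands (the repo URL is from memory of the Airbyte docs, so double-check it before relying on it):

```
# add the chart repo (URL assumed; verify against the Airbyte docs)
helm repo add airbyte https://airbytehq.github.io/helm-charts
helm repo update

# install into its own namespace, overriding defaults with your config file
helm install airbyte airbyte/airbyte --namespace airbyte --create-namespace --values values.yaml
```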
j
welp i guess im gonna be learning kubernetes over the weekend