# releases
u
Release - v0.60.0
New release published by octavia-squidington-iii

8cc9106 fix: remove invalid steps from airbyte-ci test options (#38246)
100b3ac 🐛Source Zendesk Support: fix record filter for ticket_metrics stream (#38310)
b119353 [ISSUE #38154] temporarily remove wish_bid from fields (#38301)
6ea7957 🤖 Cut version 0.90.0 of source-declarative-manifest
b90ba62 🤖 minor bump Python CDK to version 0.90.0
f9fd130 refactor(source-tiktok-marketing): Replace AirbyteLogger with logging.Logger and upgrade to latest base image (#38250)
d0b8adc refactor(source-salesforce): Replace AirbyteLogger with logging.Logger (#38255)
8396fd2 airbyte-cdk: Improve Error Handling in Legacy CDK (#37576)
49fc60d source-postgres - Streams not in the CDC publication still have a cursor and PK (#38303)
5c492b5 remove empty file (#38307)
d9792e7 Update the documentation with the new configuration. (#38175)
1bc850a Add availability_sla_days and fixed_period_in_days to GET_VENDOR_TRAFFIC_REPORT stream (#38210)
73a44d0 🐛 Source Hubspot: add TypeTransformer to Tickets stream (#38286)
8dffad5 Source Jira: update migration guide (#38233)
01acc55 fix: use correct icon url path in metadata publish (#38247)
df9dfc0 Regression tests: wire through --use-local-cdk option to GHA (#38287)
b82e1ed Airbyte CDK: remove unreleased version (#38237)
e60d949 🏥 Source Recharge: Fix expected records (#38223)
ffc613e 🏥 Source Zendesk Support: Update expected records (#38220)
464a89c Update the destination template not to use AirbyteLogger (#38199)
31c95da [skip ci] Add connectorTestSuitesOptions to metadata (#38188)
68324db Source Salesforce: Use new delete method of HttpMocker for test_bulk_stream (#38205)
2661d75 Snowflake Cortex: Update icon name in metadata file. (#38231)
073b940 db-sources: disable counts for state messages for FULL_REFRESH streams (#38208)
e19e634 Snowflake Cortex destination: Bug fixes (#38206)
5ecaef0 more destination postgres warnings (#38219)
9c72d0e Source Delighted: Make Connector Compatible with Builder (#38142)
8f5fafc Source Open ExchangeRates Api: Make Connector Compatible with Builder (#38141)
bce549a Source Chartmogul: Make Connector Compatible with Builder (#38145)
8db8a96 Source Confluence: Make Connector Compatible with Builder (#38137)
ab0d60c Source Insightly: Make Connector Compatible with Builder (#38140)
0e76011 Source PokeApi: Make Connector Compatible with Builder (#38136)
1c3a6c4 Destination Pinecone: Add source_tag for attribution + unit tests (#38151)
bc83bee 🤖 Cut version 0.89.0 of source-declarative-manifest
ce408df 🤖 minor bump Python CDK to version 0.89.0
fb11ca2 low-code: Yield records from generators instead of keeping them in in-memory lists (#36406)
4b84c63 New: Snowflake Cortex Destination 🚀 (#36807)
100f4e0 source-mssql: bump jdbc driver to 12.6.1.jre11 (https://github.com…
j
d9792e7 Update the documentation with the new configuration. (#38175)
@Marcos Marx (Airbyte) FYI, it looks like the Helm deployment docs weren't updated to match the new external database config (they still list the config under `global.externalDatabase` instead of under `global.database`), which means that anyone who tries to upgrade with an existing config will get:

Error: UPGRADE FAILED: execution error at (airbyte/templates/env-configmap.yaml:346): You must set global.database.host when using an external database
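For anyone hitting the same error, here is a minimal sketch of the config move being described, with key names taken from this thread (the exact chart schema may differ):

```yaml
# Old layout that existing configs (and the outdated docs) use; assumed from this thread
global:
  externalDatabase:
    host: db.example.com
    port: 5432
---
# New layout the chart now expects, per the error above
global:
  database:
    host: db.example.com
    port: 5432
```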
m
Thanks for the heads up Justin. I’ll update them next week 😛 @Bryce Groff (Airbyte) fyi!
b
Thanks for the heads up. Those docs are very wrong in many places.
j
@Bryce Groff (Airbyte) Just a heads-up that I'm also seeing errors when trying to upgrade with the current version of the Helm charts, even after swapping the config:

couldn't find key DATABASE_URL in ConfigMap airbyte-ns/airbyte-airbyte-env: CreateContainerConfigError

It looks like it's trying to read this from `airbyte-env`, not from a config in `values.yaml` (or the internally generated `airbyte.database.url` in `__database.tpl`), but if I specify `DATABASE_URL` in `env_vars` (either under `global` or `airbyte-bootloader`), it says it's a duplicate and still throws the error.
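A quick way to confirm what the chart actually rendered into that ConfigMap, using standard kubectl and helm commands; the release and namespace names are taken from the error above, and the airbyte/airbyte chart reference is an assumption:

```bash
# Check whether DATABASE_URL (or any database key) made it into the generated env ConfigMap
kubectl get configmap airbyte-airbyte-env -n airbyte-ns -o yaml | grep -i database

# Render the chart locally to inspect env-configmap.yaml without applying anything
helm template airbyte airbyte/airbyte -n airbyte-ns -f values.yaml | grep -B2 -A2 DATABASE_URL
```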
b
Can you post your secrets and values.yaml file?
j
@Bryce Groff (Airbyte) Sure, here are redacted versions of what I'm currently using (you can probably ignore some of the funky GCP stuff, and there may be artifacts of my troubleshooting in there too):
```
# Secret Creation
kubectl create secret generic airbyte-config-secrets \
  --from-literal=database-host='0.0.0.0' \
  --from-literal=database-port='5432' \
  --from-literal=database-name='redacted-db' \
  --from-literal=database-user='redacted-user' \
  --from-literal=database-password='redacted-password' \
  --from-file=gcp.json=airbyte-gcp-creds.json \
  --namespace airbyte-ns

# Resulting Secret
Name:         airbyte-config-secrets
Namespace:    airbyte-ns
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
database-name:      10 bytes
database-password:  19 bytes
database-port:      4 bytes
database-user:      20 bytes
gcp.json:           2369 bytes
database-host:      12 bytes
```
```yaml
# values.yaml
postgresql:
  enabled: false

minio:
  enabled: false

global:

  airbyteUrl: "https://redacted.example.com"

  serviceAccountName: &service-account-name redacted-airbyte-sa

  database:
    secretName:  &secret-name airbyte-config-secrets
    hostSecretKey: database-host
    portSecretKey: database-port
    databaseSecretKey: database-name
    userSecretKey: database-user
    passwordSecretKey: database-password

  storage:
    type: GCS
    storageSecretName: *secret-name
    bucket:
      log: &storage-bucket redacted-bucket-name
      state: *storage-bucket
      workloadOutput: *storage-bucket
      activityPayload: *storage-bucket
    gcs:
      projectId: redacted-project-name
      bucket: *storage-bucket
      credentialsPath: &log-creds-path /secrets/airbyte-config-secrets/gcp.json
      credentialsJson: &sa-creds-json REDACTED

  state:
    storage:
      type: GCS

  logs:
    minio:
      enabled: false
    storage:
      type: GCS
    gcs:
      bucket: *storage-bucket
      credentials: *log-creds-path
      credentialsJson: *sa-creds-json

  secretsManager:
    type: googleSecretManager
    storageSecretName: *secret-name
    googleSecretManager:
      projectId: redacted-project-name
      credentialsSecretKey: gcp.json

  jobs:
    resources:
      requests:
        cpu: 500m
        memory: 1Gi
      limits:
        cpu: 2000m
        memory: 3Gi

serviceAccount:
  name: *service-account-name

webapp:
  service:
    annotations:
      cloud.google.com/backend-config: '{"default": "airbyte-prod-backend-config"}'
      cloud.google.com/neg: '{"ingress": true}'

  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: gce
      kubernetes.io/ingress.global-static-ip-name: redacted-airbyte-ingress-ip
      networking.gke.io/managed-certificates: redacted-cert
      networking.gke.io/v1beta1.FrontendConfig: airbyte-prod-frontend-config
    hosts:
      - host: redacted.example.com
        paths:
          - path: /*
            pathType: ImplementationSpecific

server:
  replicaCount: 2
  resources:
    requests:
       memory: 1Gi
       cpu: 500m
    limits:
       cpu: 2000m
       memory: 3Gi
  livenessProbe:
    initialDelaySeconds: 240
    periodSeconds: 30
    timeoutSeconds: 30
  readinessProbe:
    initialDelaySeconds: 60
    periodSeconds: 30
    timeoutSeconds: 30

worker:
  replicaCount: 5
  resources:
    requests:
      memory: 1Gi
      cpu: 500m
    limits:
      cpu: 2000m
      memory: 3Gi
  livenessProbe:
    initialDelaySeconds: 240
    periodSeconds: 30
    timeoutSeconds: 30
  readinessProbe:
    initialDelaySeconds: 60
    periodSeconds: 30
    timeoutSeconds: 30
  extraEnv:
    - name: STATE_STORAGE_GCS_BUCKET_NAME
      value: *storage-bucket
    - name: STATE_STORAGE_GCS_APPLICATION_CREDENTIALS
      value: *log-creds-path
    - name: CONTAINER_ORCHESTRATOR_SECRET_NAME
      value: airbyte-config-secrets
    - name: CONTAINER_ORCHESTRATOR_SECRET_MOUNT_PATH
      value: /secrets/airbyte-config-secrets

temporal:
  replicaCount: 2
  resources:
    requests:
      memory: 256Mi
      cpu: 250m
    limits:
      cpu: 1000m
      memory: 2Gi
```
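For reference, a sketch of the upgrade command such a values file would be used with; the release name, namespace, and repo URL are the commonly documented defaults rather than anything confirmed in this thread:

```bash
# Add or refresh the Airbyte chart repo, then upgrade the existing release with the values above
helm repo add airbyte https://airbytehq.github.io/helm-charts
helm repo update
helm upgrade --install airbyte airbyte/airbyte \
  --namespace airbyte-ns \
  --values values.yaml
```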
a
@Justin Beasley thanks for flagging this; we have a ticket to fix it, and the fix will be going out today. We'll be removing host/port/database as fields that reference a secret, so the solution is to use `host`, `port`, and `database` and not their `xxxSecretKey` versions.
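Based on that comment, a minimal sketch of what the updated global.database block would look like: plain values for host, port, and database, with credentials still read from the secret. Key names are assumed from this thread rather than taken from the released chart:

```yaml
global:
  database:
    secretName: airbyte-config-secrets
    host: "0.0.0.0"         # plain value instead of hostSecretKey
    port: "5432"            # plain value instead of portSecretKey
    database: redacted-db   # plain value instead of databaseSecretKey
    userSecretKey: database-user
    passwordSecretKey: database-password
```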
l
tried to upgrade to 0.390.0 and it errors with "You must set `global.database.password` when using an external database", but my GCP deployment just connects to the DB with cloud-sql-proxy and IAM authn (so, a user without a password). I got around it, but it would be nice if the password wasn't mandatory 🙂