# support
  • mammoth-answer-4269

    11/28/2022, 10:43 PM
    Hi! Just wanted to check in about the customer.io integration - does RudderStack automatically convert timestamps from ISO to UNIX so that they work in customer.io? It seems to be working for createdAt but not birthday for us.
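    For reference, a minimal sketch of a user transformation that would force the conversion, assuming the trait arrives as an ISO-8601 string under context.traits (the field locations are assumptions, not documented integration behavior):

    ```javascript
    // Hypothetical user transformation: convert an ISO-8601 birthday trait
    // to a UNIX timestamp (in seconds) before it reaches customer.io.
    export function transformEvent(event, metadata) {
      const traits = (event.context && event.context.traits) || event.traits;
      if (traits && typeof traits.birthday === "string") {
        const ms = Date.parse(traits.birthday); // NaN if the string is unparseable
        if (!Number.isNaN(ms)) {
          traits.birthday = Math.floor(ms / 1000); // milliseconds -> seconds
        }
      }
      return event;
    }
    ```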
  • rhythmic-ability-64998

    11/28/2022, 10:54 PM
    Hi, RS. I need to set up Google Sheets as a source for ingestion. I was able to successfully test the sync using my Google user account. Now I'm trying to switch to a Google service account (which I own), but I couldn't set up that connection, because a service account can't log in through a browser, according to the Google Cloud docs. What's the best practice for setting up a Google Sheets source? I don't want any ingestion to depend on an individual's Google user account.
  • ambitious-engineer-11304

    11/29/2022, 5:37 AM
    Hi, we recently connected our RudderStack account to our Mixpanel account. Now when we test events, they show up in RudderStack but do not arrive in Mixpanel. Please let me know if I'm doing something wrong here, or how to test events going to the Mixpanel destination.
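    A minimal smoke test for this kind of setup, assuming the rudder-sdk-node package and its v1-style constructor (the write key, data plane URL, and user ID below are placeholders):

    ```javascript
    // Send one test event through RudderStack, then compare the control plane's
    // live events tab with Mixpanel's live view.
    const Analytics = require("@rudderstack/rudder-sdk-node");

    const client = new Analytics("WRITE_KEY", "https://<DATA_PLANE_URL>/v1/batch");

    client.track({
      userId: "test-user-1",
      event: "Test Event",
      properties: { source: "smoke-test" },
    });

    client.flush(); // force delivery of the buffered event
    ```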
  • quiet-microphone-89528

    11/29/2022, 8:25 AM
    hey
  • colossal-motorcycle-19800

    11/29/2022, 6:01 PM
    Hello RudderStack friends 🙂 I was wondering if there are any plans to allow loading Adobe Launch on a webpage, similar to how GTM can be loaded... or if it's already possible, please let me know! I couldn't find anything in the docs.
  • big-carpet-17623

    11/30/2022, 7:36 AM
    Hello! I'm using GA4 device mode to track. I connected my website both to a standalone GA4 property and to RS with GA4 as a destination, because I want to compare the data. However, the RS GA4 sessions and users are 1/5 lower than the standalone GA4's. I don't know why.
  • salmon-plastic-31303

    11/30/2022, 10:10 AM
    Hi. I set up Google Analytics 4 (GA4) with RudderStack and am receiving hits/events, but no referrer or country information. Is there something I missed in the setup?
  • refined-processor-42729

    11/30/2022, 10:34 AM
    Hi folks! Would love your help. I've set up a Snowflake integration and followed all the requirements in the docs (granted roles/users and everything), but when trying to sync I get the error: "SF: snowflake alter session error : (390201 (08004): The requested database does not exist or not authorized.)" (it exists..) Would love your help please 🙏
  • fancy-answer-79663

    11/30/2022, 12:29 PM
    Hi. I updated the config-generator but there is no WebEngage destination. How should I add it?
  • worried-lighter-96658

    11/30/2022, 1:04 PM
    Hi Team, we have 10 pods for the rudder backend service, each paired with a Postgres pod. Two of those pods (backend service & Postgres) were using more than 100% memory and CPU utilization respectively.
    ◦ All backend service pods were restarted/reassigned to different nodes
    ◦ The rudder service was down for 10 minutes, which caused data loss
    ◦ Only 2 pods were receiving the maximum requests while the other 8 were idle
    Below are the errors on ALL Postgres pods:
    FATAL: terminating connection because protocol synchronization was lost
    {kubernetes: {namespace_name: prod-dataplatform, pod_name: rudderstack-service-rudderstack-postgresql-v5-0, host: ip-10-1-94-191.ap-south-1.compute.internal}, log: 2022-11-30 11:35:06.153 UTC [2973] FATAL: terminating connection due to administrator command}
    {kubernetes: {namespace_name: prod-dataplatform, pod_name: rudderstack-service-rudderstack-postgresql-v5-0, host: ip-10-1-94-191.ap-south-1.compute.internal}, log: 2022-11-30 11:41:18.459 UTC [4514] FATAL: connection to client lost}
    {kubernetes: {namespace_name: prod-dataplatform, pod_name: rudderstack-service-rudderstack-postgresql-v5-0, host: ip-10-1-94-191.ap-south-1.compute.internal}, log: 2022-11-30 11:38:38.899 UTC [3955] FATAL: terminating connection because protocol synchronization was lost}
    Here is our config:
    maxProcess: 12
    gwDBRetention: 0h
    routerDBRetention: 0h
    enableProcessor: true
    enableRouter: true
    enableStats: true
    statsTagsFormat: influxdb
    Http:
      ReadTimeout: 0s
      ReadHeaderTimeout: 0s
      WriteTimeout: 10s
      IdleTimeout: 720s
      MaxHeaderBytes: 524288
    RateLimit:
      eventLimit: 1000
      rateLimitWindow: 60m
      noOfBucketsInWindow: 12
    Gateway:
      webPort: 8080
      maxUserWebRequestWorkerProcess: 64
      maxDBWriterProcess: 256
      CustomVal: GW
      maxUserRequestBatchSize: 128
      maxDBBatchSize: 128
      userWebRequestBatchTimeout: 15ms
      dbBatchWriteTimeout: 5ms
      maxReqSizeInKB: 4000
      enableRateLimit: false
      enableSuppressUserFeature: true
      allowPartialWriteWithErrors: true
      allowReqsWithoutUserIDAndAnonymousID: false
      webhook:
        batchTimeout: 20ms
        maxBatchSize: 32
        maxTransformerProcess: 64
        maxRetry: 5
        maxRetryTime: 10s
        sourceListForParsingParams:
          - shopify
    EventSchemas:
      enableEventSchemasFeature: false
      syncInterval: 240s
      noOfWorkers: 128
    Debugger:
      maxBatchSize: 32
      maxESQueueSize: 1024
      maxRetry: 3
      batchTimeout: 2s
      retrySleep: 100ms
    SourceDebugger:
      disableEventUploads: false
    DestinationDebugger:
      disableEventDeliveryStatusUploads: false
    TransformationDebugger:
      disableTransformationStatusUploads: false
    Archiver:
      backupRowsBatchSize: 100
    JobsDB:
      jobDoneMigrateThres: 0.8
      jobStatusMigrateThres: 5
      maxDSSize: 100000
      maxMigrateOnce: 10
      maxMigrateDSProbe: 10
      maxTableSizeInMB: 300
      migrateDSLoopSleepDuration: 30s
      addNewDSLoopSleepDuration: 5s
      refreshDSListLoopSleepDuration: 5s
      backupCheckSleepDuration: 5s
      backupRowsBatchSize: 1000
      archivalTimeInDays: 10
      archiverTickerTime: 1440m
      backup:
        enabled: true
        gw:
          enabled: true
          pathPrefix: ""
        rt:
          enabled: true
          failedOnly: true
        batch_rt:
          enabled: false
          failedOnly: false
    Router:
      jobQueryBatchSize: 10000
      updateStatusBatchSize: 1000
      readSleep: 1000ms
      fixedLoopSleep: 0ms
      noOfJobsPerChannel: 1000
      noOfJobsToBatchInAWorker: 20
      jobsBatchTimeout: 5s
      maxSleep: 60s
      minSleep: 0s
      maxStatusUpdateWait: 5s
      useTestSink: false
      guaranteeUserEventOrder: true
      kafkaWriteTimeout: 2s
      kafkaDialTimeout: 10s
      minRetryBackoff: 10s
      maxRetryBackoff: 300s
      noOfWorkers: 64
      allowAbortedUserJobsCountForProcessing: 1
      maxFailedCountForJob: 3
      retryTimeWindow: 180m
      failedKeysEnabled: false
      saveDestinationResponseOverride: false
      responseTransform: false
      MARKETO:
        noOfWorkers: 4
      throttler:
        MARKETO:
          limit: 45
          timeWindow: 20s
      BRAZE:
        forceHTTP1: true
        httpTimeout: 120s
        httpMaxIdleConnsPerHost: 32
    BatchRouter:
      mainLoopSleep: 2s
      jobQueryBatchSize: 100000
      uploadFreq: 30s
      warehouseServiceMaxRetryTime: 3h
      noOfWorkers: 8
      maxFailedCountForJob: 128
      retryTimeWindow: 180m
    Warehouse:
      mode: embedded
      webPort: 8082
      uploadFreq: 1800s
      noOfWorkers: 8
      noOfSlaveWorkerRoutines: 4
      mainLoopSleep: 5s
      minRetryAttempts: 3
      retryTimeWindow: 180m
      minUploadBackoff: 60s
      maxUploadBackoff: 1800s
      warehouseSyncPreFetchCount: 10
      warehouseSyncFreqIgnore: false
      stagingFilesBatchSize: 960
      enableIDResolution: false
      populateHistoricIdentities: false
      redshift:
        maxParallelLoads: 3
        setVarCharMax: false
      snowflake:
        maxParallelLoads: 2
      bigquery:
        maxParallelLoads: 20
      postgres:
        maxParallelLoads: 3
      mssql:
        maxParallelLoads: 3
      azure_synapse:
        maxParallelLoads: 3
      clickhouse:
        maxParallelLoads: 3
        queryDebugLogs: false
        blockSize: 1000
        poolSize: 10
        disableNullable: false
        enableArraySupport: false
    Processor:
      webPort: 8086
      loopSleep: 10ms
      maxLoopSleep: 5000ms
      fixedLoopSleep: 0ms
      maxLoopProcessEvents: 7000
      transformBatchSize: 100
      userTransformBatchSize: 200
      maxConcurrency: 200
      maxHTTPConnections: 100
      maxHTTPIdleConnections: 50
      maxRetry: 30
      retrySleep: 100ms
      timeoutDuration: 30s
      errReadLoopSleep: 30s
      errDBReadBatchSize: 1000
      noOfErrStashWorkers: 2
      maxFailedCountForErrJob: 3
      Stats:
        captureEventName: false
    Dedup:
      enableDedup: false
      dedupWindow: 3600s
    BackendConfig:
      configFromFile: false
      configJSONPath: /etc/rudderstack/workspaceConfig.json
      pollInterval: 5s
      regulationsPollInterval: 300s
      maxRegulationsPerRequest: 1000
    recovery:
      enabled: true
      errorStorePath: /tmp/error_store.json
      storagePath: /tmp/recovery_data.json
      normal:
        crashThreshold: 5
        duration: 300s
    Logger:
      enableConsole: true
      enableFile: false
      consoleJsonFormat: false
      fileJsonFormat: false
      logFileLocation: /tmp/rudder_log.log
      logFileSize: 100
      enableTimestamp: true
      enableFileNameInLog: true
      enableStackTrace: false
    Diagnostics:
      enableDiagnostics: true
      gatewayTimePeriod: 60s
      routerTimePeriod: 60s
      batchRouterTimePeriod: 6s
      enableServerStartMetric: true
      enableConfigIdentifyMetric: true
      enableServerStartedMetric: true
      enableConfigProcessedMetric: true
      enableGatewayMetric: true
      enableRouterMetric: true
      enableBatchRouterMetric: true
      enableDestinationFailuresMetric: true
    RuntimeStats:
      enabled: true
      statsCollectionInterval: 10
      enableCPUStats: true
      enableMemStats: true
      enableGCStats: true
    PgNotifier:
      retriggerInterval: 2s
      retriggerCount: 500
      trackBatchInterval: 2s
      maxAttempt: 3
    Can someone please take a look at this?
  • clean-thailand-68527

    12/01/2022, 4:50 PM
    Hey folks, I'm trying to figure out how to set up a reverse ETL connection without an IAM user. Is it possible to do this with an IAM role and a trust relationship? We don't want to create more service users.
  • clean-thailand-68527

    12/01/2022, 4:50 PM
    https://www.rudderstack.com/docs/sources/reverse-etl/amazon-s3/
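    For anyone with the same question: role-based access generally hinges on a cross-account trust policy on the role. A hedged sketch of what such a policy looks like; the RudderStack principal account ID and external ID below are placeholders, so check the docs page above for the actual values:

    ```json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": { "AWS": "arn:aws:iam::<RUDDERSTACK_ACCOUNT_ID>:root" },
          "Action": "sts:AssumeRole",
          "Condition": {
            "StringEquals": { "sts:ExternalId": "<EXTERNAL_ID_FROM_RUDDERSTACK>" }
          }
        }
      ]
    }
    ```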
  • proud-businessperson-71460

    12/01/2022, 10:35 PM
    We are working on a POC for NetSuite to Redshift. Once we move the data the first time, how can we later identify the delta that comes in on a daily basis? Is there any document on this, or does RudderStack add a few columns to the destination tables, any of which would help us identify the delta?
  • steep-pilot-56431

    12/02/2022, 10:45 AM
    Hello Team 👋 We are self-hosting RS on K8s with 3 backend instances and a single Postgres pod with a separate database for each RS instance. We got a 'No space left on device' error, so we increased the Postgres persistence size from 10GB to 40GB. One of the databases is not clearing events and is slowly growing in size. There are 1000 batch_rt_jobs_ and batch_rt_job_status_ tables, and the tables are not empty. Would appreciate any help. Thanks!
  • limited-wall-6616

    12/02/2022, 1:30 PM
    Hello Rudder team 👋 Following https://www.rudderstack.com/docs/sources/event-streams/sdks/rudderstack-node-sdk/ we are trying to implement a persistent queue for events using your Node SDK. As we test the implementation, we see that the event keys never expire (TTL -1). Does that mean the event keys will grow infinitely in our Redis? This could quickly lead to unpredictable behavior from Redis. Or am I missing something?
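    A quick way to confirm that observation is to list the TTLs of the queue's Redis keys. A sketch using ioredis; the bull:* key pattern is an assumption (the docs above describe the persistent queue as Bull-backed), so adjust it to your queue name:

    ```javascript
    // Print each queue key and its TTL; -1 means no expiry is set on the key.
    const Redis = require("ioredis");

    async function main() {
      const redis = new Redis({ host: "localhost", port: 6379 });
      const keys = await redis.keys("bull:*"); // hypothetical key pattern
      for (const key of keys) {
        const ttl = await redis.ttl(key);
        console.log(key, "TTL:", ttl);
      }
      redis.disconnect();
    }

    main().catch(console.error);
    ```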
  • chilly-quill-31544

    12/02/2022, 4:01 PM
    Hi all, my team signed up for Blendo, which I was told "is RudderStack." I learned here through search that Blendo was acquired by RudderStack. We signed up because we needed the BambooHR connector to ingest data into Snowflake. In RudderStack, though, I'm not finding a connector for Bamboo, and I'm trying to determine what's going on with that. Is there still a way to use it? It was, and still is, advertised on Blendo's site. Thanks in advance.
  • mammoth-answer-4269

    12/05/2022, 2:39 AM
    Hi! We're setting up Facebook Pixel as a Device mode integration, but we keep getting this error:
    {
      "firstAttemptedAt": "2022-12-05T02:36:07.777Z",
      "response": "{\"error\":{\"message\":\"Unsupported post request. Object with ID 'PIXELID' does not exist, cannot be loaded due to missing permissions, or does not support this operation. Please read the Graph API documentation at https:\\/\\/developers.facebook.com\\/docs\\/graph-api\",\"type\":\"GraphMethodException\",\"code\":100,\"error_subcode\":33,\"fbtrace_id\":\"A2wVb4zxAirtJkxchRg_YHf\"}}",
      "content-type": "text/javascript; charset=UTF-8"
    }
  • little-horse-45610

    12/05/2022, 3:24 AM
    All my requests to "Google Adwords Enhanced Conversions" are failing.
  • mammoth-answer-4269

    12/05/2022, 3:24 AM
    In addition - has anyone had any luck setting up Google Optimize + GA4 (in cloud mode) together in RudderStack? Optimize says it's not seeing the GA4 tag on our website. Do we have to use GA4 in device mode for Google Optimize to work?
  • abundant-manchester-95584

    12/05/2022, 12:34 PM
    Is it possible to convert a RudderStack "Model" to an event and send it to a destination such as Intercom?
  • steep-caravan-35760

    12/05/2022, 5:16 PM
    I'm running into an interesting issue trying to use reverse ETL from Snowflake to Mixpanel, and would appreciate any help/guidance. There are shared properties between the events I'm loading from the reverse ETL and the events we send via JS. But the property keys in JS are lowercase while the property keys (column names) in the reverse ETL are uppercase, so they do not match. I tried to merge them with Mixpanel's Lexicon functionality, and it works somewhat, but because I'm using one property key as a group analytics key, they don't fully support merging. So my question is: how do I lowercase the names coming out of the Snowflake reverse ETL before sending them to Mixpanel? I tried being explicit in the model and it still didn't work, as can be seen in the attached screenshot.
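    One possible workaround, assuming a user transformation can be attached to the reverse-ETL pipeline (worth confirming for warehouse sources): lowercase the property keys before the events reach Mixpanel.

    ```javascript
    // Hypothetical user transformation: lowercase every property key so the
    // warehouse column names match the keys sent by the JS SDK.
    export function transformEvent(event, metadata) {
      if (event.properties) {
        const lowered = {};
        for (const key of Object.keys(event.properties)) {
          lowered[key.toLowerCase()] = event.properties[key];
        }
        event.properties = lowered;
      }
      return event;
    }
    ```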
  • able-tailor-26948

    12/05/2022, 8:18 PM
    I'm running reverse ETL from BigQuery to PostHog. Identify events work fine, but when I use track events, the distinct_id is set to just the string sources instead of either the user_id or anonymous_id; $anon_distinct_id is set. Also, if user_id is set, it fails with "Null values are present in the unique column". Any ideas?
  • fancy-answer-79663

    12/05/2022, 10:33 PM
    Hi, we use the open-source version of the control plane and our data plane is deployed on Kubernetes. I updated the Helm charts today and deployed them to the cluster, but since the update no events have been received and this error log continuously shows up:
    batchrouter/batchrouter.go:1171 BRT: Error uploading to object storage: BRT: Failed to route staging file URL to warehouse service@http://localhost:8082/v1/process, status: 400 Bad Request, body: invalid payload: workspaceId is required
  • straight-raincoat-91897

    12/06/2022, 8:01 AM
    This message was deleted.
  • average-spring-6104

    12/06/2022, 8:18 AM
    Hello, everyone! 👋 I'm facing a problem with Rudder for Flutter and need some help. I'm trying to implement rudder_integration_firebase_flutter for our app. Everything works well on Android, but not on iOS. When I run pod install --repo-update, I get this error 👇:
    [!] CocoaPods could not find compatible versions for pod "Firebase/Analytics":
      In Podfile:
        firebase_core (from `.symlinks/plugins/firebase_core/ios`) was resolved to 2.3.0, which depends on
          Firebase/CoreOnly (= 10.2.0)
    
        rudder_integration_firebase_flutter (from `.symlinks/plugins/rudder_integration_firebase_flutter/ios`) was resolved to 1.0.1, which depends on
          Rudder-Firebase (= 2.0.6) was resolved to 2.0.6, which depends on
            Firebase/Analytics (~> 8.15.0)
    I wanted to clarify that we use the following Firebase-related dependencies in our project:
    firebase_core: ^2.3.0
    firebase_dynamic_links: ^5.0.6
    firebase_messaging: ^14.1.1
    firebase_crashlytics: ^3.0.6
    Thank you!
  • dazzling-art-86976

    12/06/2022, 10:52 AM
    Hey. Has anyone hooked up Stripe source data into the warehouse using RudderStack? If so, do you have any documentation that explains the Stripe source data and its tables?
  • fancy-answer-79663

    12/06/2022, 12:37 PM
    Hi, we use the open-source version of the control plane and our data plane is deployed on Kubernetes. Because of large event traffic, I added a feature for sending events in batches to the RudderStack JS SDK. I tested this against the RudderStack free data plane and was able to see live events for the batches. But when I tested it on our local data plane, the rudder server returned status code 200 and there is no error, but there is no sign of the data either. I would appreciate your help on this. I'm using rudder-server image version 1-alpine.
  • bright-afternoon-44693

    12/06/2022, 2:02 PM
    Hello, we're using https://github.com/rudderlabs/dbt-id-stitching on Snowflake. Does anyone have a good mechanism for matching the rudder_id generated by the dbt package to the tracks table?
  • shy-terabyte-38386

    12/06/2022, 2:15 PM
    We are using the Shopify app to integrate with BigQuery. Since yesterday, sales events are not going through.
  • most-analyst-45295

    12/06/2022, 5:29 PM
    Hello, we have installed self-hosted RudderStack (dst=webhook). In PostgreSQL we observe an increasing count in the proc_error_jobs tables, event type custom_val=WEBHOOK, and we then got the error message 'no space left on device' in the pod. How can we resend these events?