# general
Kabir Gaire
Hi, is there a Lightdash for Cloud Run?
Lightdash support bot
👋 Thanks for your message - someone from the Lightdash support team will get back to you asap. Feel free to add any additional context to the thread here in the meantime (screenshots, app version if you're self-hosting etc.).
Giorgi Bagdavadze
Hello @Kabir Gaire, could you elaborate on what you mean by "cloud run"? GCP Cloud Run?
Kabir Gaire
@Giorgi Bagdavadze Thanks for replying. Yes, I want to deploy to Cloud Run and Cloud SQL postgres on GCP. The deployment apparently fails to start. I'm also surprised that MacBook Apple silicon is not supported.
Lari Haataja
Hi! I'm running Lightdash on GCP Cloud Run. There were many things that had to be configured correctly for everything to work. What do you mean by "deployment fails to start"? Do you see any errors in the logs? Obviously you need to have postgres on Cloud SQL first and set all the environment variables for Lightdash in the Cloud Run service. If the postgres instance doesn't have a public IP address (not having one is highly recommended for security reasons), you'll need to enable VPC peering. I think this guide has most of the details: https://cloud.google.com/sql/docs/postgres/connect-instance-cloud-run#expandable-2
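For reference, a minimal sketch of what such a deployment can look like. Every name here (service, region, VPC connector, private IP, secret names) is a placeholder assumption, not taken from the thread:
```
# Deploy the stock Lightdash image to Cloud Run, routing egress through a
# Serverless VPC Access connector so it can reach the Cloud SQL private IP.
gcloud run deploy lightdash \
  --image=lightdash/lightdash:latest \
  --region=europe-north1 \
  --port=8080 \
  --vpc-connector=lightdash-connector \
  --set-env-vars=PGHOST=10.0.0.3,PGPORT=5432,PGUSER=lightdash,PGDATABASE=lightdash,SITE_URL=https://lightdash.example.com \
  --set-secrets=PGPASSWORD=lightdash-pg-password:latest,LIGHTDASH_SECRET=lightdash-app-secret:latest
```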
For the Lightdash scheduler I actually ended up switching to Compute Engine VMs. They're a lot cheaper than having a long-running Cloud Run job. Basically we have an instance template with a startup script that loads the Lightdash image and runs it as a scheduler (docker). Then a managed instance group (MIG) keeps 1 instance of the template running all the time.
Also browserless is easier to run using a VM + MIG, as Cloud Run had some issues with the WebSocket protocol. It worked with Cloud Run but required an internal load balancer or something that was a lot more expensive than having a VM on a `g1-small` instance.
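A rough sketch of that VM + MIG setup, assuming a container-optimized instance template. Template name, MIG name, env file and zone are placeholders, and the scheduler-only start command is deliberately omitted; take it from the Lightdash self-hosting docs:
```
# Instance template that runs the Lightdash image as a container on boot
# (Container-Optimized OS). lightdash.env holds the same PG*/LIGHTDASH_*
# variables as the Cloud Run service.
gcloud compute instance-templates create-with-container lightdash-scheduler-tpl \
  --machine-type=g1-small \
  --container-image=lightdash/lightdash:latest \
  --container-env-file=lightdash.env

# Managed instance group that keeps exactly one scheduler VM alive.
gcloud compute instance-groups managed create lightdash-scheduler-mig \
  --template=lightdash-scheduler-tpl \
  --size=1 \
  --zone=europe-north1-a
```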
Kabir Gaire
@Giorgi Bagdavadze @Lari Haataja Thank you for replying. I am getting this error and cannot fix it at all; any advice would be greatly appreciated.
```
2025-06-10 05:14:52 [Lightdash] error: Unhandled Rejection at Promise connect ECONNREFUSED 127.0.0.1:5432
2025-06-10 05:14:52 [Lightdash] error: Error migrating graphile worker connect ECONNREFUSED 127.0.0.1:5432
 ELIFECYCLE  Command failed with exit code 1.
```
```
ERROR: (gcloud.run.deploy) Revision 'lightdash-00010-mz9' is not ready and cannot serve traffic. The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable within the allocated timeout. This can happen when the container port is misconfigured or if the timeout is too short. The health check timeout can be extended. Logs for this revision might contain more information.
```

Dockerfile:
```
FROM lightdash/lightdash:latest

# Copy dbt project files
COPY ./dbt_project.yml /usr/app/dbt/dbt_project.yml
COPY ./data /usr/app/dbt/data
COPY ./models /usr/app/dbt/models

# Copy dbt profiles config
COPY ./profiles/profiles.yml /root/.dbt/profiles.yml

# Copy custom entrypoint script
COPY ./lightdash-entrypoint.sh /usr/bin/lightdash-entrypoint.sh

# COPY ./credentials.json /root/.gcp/credentials.json
# ENV GOOGLE_APPLICATION_CREDENTIALS=/root/.gcp/credentials.json

# Install pnpm (Corepack already included)
RUN corepack enable && corepack prepare pnpm@latest --activate

# Install Python and pip before dbt
RUN apt-get update && apt-get install -y python3 python3-pip

# Install dbt-postgres adapter with override
RUN pip3 install --break-system-packages dbt-postgres

# Expose Lightdash port
EXPOSE 8080

# Set entrypoint
ENTRYPOINT ["/usr/bin/lightdash-entrypoint.sh"]
```

lightdash-entrypoint.sh:
```
#!/bin/bash
set -e

# Migrate Lightdash backend DB
yarn workspace backend migrate-production

# Run dbt (optional but common)
cd /usr/app/dbt
dbt seed --full-refresh --profiles-dir /root/.dbt || true
dbt run --profiles-dir /root/.dbt || true
cd /usr/app/packages/backend

# Start Lightdash
exec pnpm start
```

.env.example:
```
PGPASSWORD=
PGUSER=
# PGHOST=host.docker.internal
PGHOST=
PGPORT=
PGDATABASE=

GCP_PROJECT_ID=
GCP_DATASET_ID=
GCP_DATASET_LOCATION=
GOOGLE_APPLICATION_CREDENTIALS=

LIGHTDASH_SECRET=
SITE_URL=

LOG_LEVEL=
```

Irakli
hi @Kabir Gaire
> I'm also surprised that MacBook Apple silicon is not supported.
Could you share more on this one? I am running Lightdash locally on an M1 machine. Regarding this message over here:
```
2025-06-10 05:14:52 [Lightdash] error: Unhandled Rejection at Promise connect ECONNREFUSED 127.0.0.1:5432
2025-06-10 05:14:52 [Lightdash] error: Error migrating graphile worker connect ECONNREFUSED 127.0.0.1:5432
 ELIFECYCLE  Command failed with exit code 1.
```
It seems like Lightdash is unable to connect to the postgres instance. Could you double-check that you are setting the required environment variables listed here -> https://docs.lightdash.com/self-host/customize-deployment/environment-variables ?
Also, the `.env.example` file does not do anything by itself; it just showcases some of the environment variables required for local development. In production, it is advised to use system environment variables.
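A sketch of what "system environment variables" means on Cloud Run, with placeholder names (service, region, values and secret names are all assumptions):
```
# Set or change env vars on the running service rather than baking a .env
# file into the image.
gcloud run services update lightdash --region=europe-north1 \
  --update-env-vars=PGHOST=10.0.0.3,PGDATABASE=lightdash \
  --update-secrets=PGPASSWORD=lightdash-pg-password:latest

# Verify what the new revision actually received.
gcloud run services describe lightdash --region=europe-north1 --format=yaml
```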
Kabir Gaire
@Irakli Thank you for your kind response. I am injecting these variables when I deploy to Cloud Run.
Lari Haataja
`ECONNREFUSED 127.0.0.1:5432` indicates that Lightdash is trying to connect to postgres running on localhost (127.0.0.1), which is not the case here. So probably at least the PGHOST environment variable is not set correctly. It should be the IP address of the Cloud SQL instance.
Also, I'm not quite following why you add dbt in the Dockerfile and run dbt in the entrypoint. The Lightdash server is separate from the dbt project. Once you have the server running, you can use the Lightdash CLI to add / "upload" the dbt project to the Lightdash service. I'm not using any custom Dockerfile, but simply run the lightdash/lightdash:latest image as-is with Cloud Run.
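One way to check what PGHOST should be; the instance name `lightdash-pg` is a placeholder:
```
# Print the instance's IP addresses. Use the PRIVATE one as PGHOST when
# connecting through a VPC connector, the PRIMARY (public) one otherwise.
gcloud sql instances describe lightdash-pg --format="json(ipAddresses)"
```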
Great to hear!
I'm not really using the SQL runner much, but I can reproduce your error. It seems that the SQL runner doesn't work with "unknown columns", so you need to define a column name in the query:
```
select 1 as myColumn;
```

Kabir Gaire
@Lari Haataja Thanks for your very quick reply. I just tried `select 1 as myColumn;` and I am getting the error below:
```
Could not fetch SQL query results
Bigquery warehouse error: badRequest
```

Lari Haataja
Strange, that works for me. Maybe there's some issue with the BigQuery permissions. Do you see a more detailed error in the BigQuery job history?
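One way to pull the full error out of the job history from the CLI; the project id and job id below are placeholders:
```
# List recent jobs, then show the failing one in full.
bq ls -j --max_results=10 --project_id=my-project-name
bq show -j --project_id=my-project-name bquxjob_0123abcd_0123456789ab
```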
Kabir Gaire
@Lari Haataja @Irakli Sorry for my late reply. I double-checked all of my setup. Everything is running great except that results from BigQuery are not being shown in Lightdash. BigQuery seems to be getting the queries and the queries are valid. In the job history, though, I am getting this error:
• Job Status: Green (which is good)
• Owner: the correct service account I am using
• Type: QUERY
• Error:
```
Access Denied: Permission bigquery.tables.getData denied on table my-project-name:_9ce6547e035945d60f5bf0684e8266f7cdaf9567.anon1cc3c338b303b41f7a42570c1edbcc9fa06a74ad04f175ee1f7ed24c80de504b (or it may not exist).
```
(`my-project-name` is set to my BigQuery project name.) I have done everything I can; the service account has `BigQuery Data Owner` and many more permissions. I do not understand why I am still getting this error.
```
ROLE: roles/artifactregistry.writer
ROLE: roles/bigquery.admin
ROLE: roles/bigquery.connectionAdmin
ROLE: roles/bigquery.connectionUser
ROLE: roles/bigquery.dataEditor
ROLE: roles/bigquery.dataOwner
ROLE: roles/bigquery.dataViewer
ROLE: roles/bigquery.jobUser
ROLE: roles/bigquery.user
ROLE: roles/bigqueryconnection.serviceAgent
ROLE: roles/bigquerydatapolicy.viewer
ROLE: roles/cloudsql.client
ROLE: roles/cloudsql.editor
ROLE: roles/editor
ROLE: roles/logging.logWriter
ROLE: roles/secretmanager.secretAccessor
ROLE: roles/storage.admin
ROLE: roles/storage.objectAdmin
```

Irakli
Hi @Kabir Gaire I myself have not done a Lightdash deployment on Cloud Run, but based on the error you're seeing, it looks like BigQuery is denying access to the temporary anonymous tables created during query execution. This usually happens in the SQL Runner when the service account lacks permission to read from these temp tables. Try adding the `roles/bigquery.readSessionUser` role to the service account you're using with Lightdash. This should allow it to access the temporary results and fix the issue you're seeing in the SQL Runner. LMK if this helps
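A sketch of granting that role; the project id and service-account email are placeholders:
```
# Grant the read-session role at the project level.
gcloud projects add-iam-policy-binding my-project-name \
  --member="serviceAccount:lightdash-sa@my-project-name.iam.gserviceaccount.com" \
  --role="roles/bigquery.readSessionUser"
```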
Kabir Gaire
@Irakli Thanks for the reply. I just added `roles/bigquery.readSessionUser` and it didn't fix the issue.
Lari Haataja
Have you added these roles to the Cloud Run service account or to the service account used by Lightdash via dbt's profiles.yml? They might be different.
Kabir Gaire
@Irakli @Lari Haataja Update: I have realized that the error I was seeing in the job history comes from me not having enough permission to see other job owners' results. Now I am very suspicious of the VPC setup connecting Lightdash to the private-IP Cloud SQL postgres.
@Lari Haataja Thanks. I have given them both the same service account, with all the required permissions and a bit more, just to get the setup finished for now.
Lari Haataja
We've simply given the service account BigQuery Job User on the project level and Data Viewer on the tables / datasets.
I'd recommend double-checking the connection details in the Lightdash settings: https://docs.lightdash.com/get-started/setup-lightdash/connect-project
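A sketch of that minimal combination; all names are placeholders, and the dataset-level grant follows the documented bq show / bq update flow:
```
# Project level: permission to run query jobs.
gcloud projects add-iam-policy-binding my-project-name \
  --member="serviceAccount:lightdash-sa@my-project-name.iam.gserviceaccount.com" \
  --role="roles/bigquery.jobUser"

# Dataset level: read access to the data. Dump the dataset's access list,
# add a {"role": "READER", "userByEmail": "<service account>"} entry to the
# "access" array, then write it back.
bq show --format=prettyjson my-project-name:my_dataset > dataset.json
bq update --source dataset.json my-project-name:my_dataset
```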
Kabir Gaire
@Lari Haataja Thanks a lot for the URL and for the hint:
> BigQuery Job User on the project level and Data Viewer on the tables / datasets