rapid-artist-48509
10/28/2025, 1:48 AM
average-secretary-61436
10/28/2025, 2:25 PM
cool-nest-98527
10/28/2025, 4:39 PM
little-cricket-84530
10/28/2025, 11:10 PM
few-angle-62167
10/29/2025, 10:48 AM
bq_template = BigQueryTask(
    name="<name>",
    inputs={},
    query_template="SELECT * FROM <project_id>.<dataset_id>.<table>",
    output_structured_dataset_type=StructuredDataset,
    task_config=BigQueryConfig(ProjectID="<project_id>"),
)

@task(
    container_image=image_name,
)
def convert_bq_table_to_pandas_dataframe(ds: StructuredDataset) -> pd.DataFrame:
    return ds.open(pd.DataFrame).all()

@workflow
def full_bigquery_wf() -> pd.DataFrame:
    ds = bq_template()
    return convert_bq_table_to_pandas_dataframe(ds=ds)
So what happens is: when the BigQuery task queries data from BQ, it uses the flyteconnector service account, but afterwards, when the Python task tries to extract the pandas DataFrame, it fails with:
google.api_core.exceptions.PermissionDenied: 403 Access Denied: Dataset <project_id>:<job_id>: User does not have permission to access results of another user's job.
I have already deployed flyteconnector and enabled the plugin as the documentation mentions. Any help would be greatly appreciated :).
mysterious-painter-66441
10/30/2025, 3:37 PM
strong-soccer-41351
10/31/2025, 11:27 AM
Is it possible to do a flyte-core helm deployment using a custom MinIO S3 bucket, without IAM configuration? There are helm chart parameters to pass in accessKey and secretKey, but we want to avoid baking long-term credentials into our source code. I checked all the pages under https://docs-legacy.flyte.org/en/latest/deployment/deployment/index.html and https://www.union.ai/docs/v1/flyte/deployment/flyte-deployment/. I also checked the example flyte-core chart and read its README.md, but I haven't seen whether there are alternatives to, or other usages for, the accessKey/secretKey fields.
gentle-tomato-480
11/03/2025, 11:25 AM
gentle-tomato-480
11/03/2025, 12:03 PM
abundant-laptop-47033
11/04/2025, 9:33 PM
gentle-tomato-480
11/05/2025, 2:23 PM
Has v0.9.0 been removed/deprecated for the flytectl-setup-action? I was using it in my CI/CD and it was still working last week.
Today I'm getting:
Error: Unable to find flytectl version "v0.9.0" for platform "Linux" and architecture "x86_64".
in my GHA workflow when running this action.
high-autumn-89220
11/10/2025, 5:08 PM
ClientSecret) auth type? Does anyone know if it will work without custom auth servers on our plan? I've been struggling with this for a few weeks to no avail.
wonderful-continent-24967
11/12/2025, 12:01 AM
Failed to write output for this execution to cache. I looked into the datacatalog logs for the corresponding Flyte task; nothing unusual there. Datacatalog created, updated, and deleted reservations for that task just as it does for other tasks. We are using Flyte 1.15.3.
fancy-hamburger-89099
11/12/2025, 10:10 AM
400 Bad Request
Request Header Or Cookie Too Large
nginx
I tried different browsers, incognito mode, wiping cookies, everything.
This only happens on that one instance, and it works without any issues on the other 3.
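For reference, nginx's "Request Header Or Cookie Too Large" response is governed by its header buffer limits; a minimal sketch of an ingress-side override, assuming you control that instance's nginx config and that an oversized (auth) cookie is the trigger (the values here are illustrative, not a recommendation):

```nginx
# Allow larger request headers/cookies than the default (4 buffers of 8k).
http {
    large_client_header_buffers 4 32k;
}
```

Since the other 3 instances work, diffing their effective nginx config against this one (or finding what sets the oversized cookie here) may be more useful than raising the limit.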
Any ideas?
abundant-judge-84756
11/12/2025, 11:34 AM
ABORTING state and can't be fully terminated. The executions include a dynamic workflow step, and these dynamic workflows show 2 tasks as RUNNING; the task descriptions say they are initializing. I think these initializing dynamic tasks are somehow blocking the workflows from resolving the abort request. Any suggestions for ways we can trigger these workflows to transition to ABORTED? We're currently running Flyte 1.15.3.
cool-waitress-85601
11/12/2025, 3:40 PM
pyflyte run --remote?
fierce-monitor-77717
11/13/2025, 12:20 PM
cool-waitress-85601
11/17/2025, 5:41 PM
mysterious-painter-66441
11/17/2025, 9:57 PM
dataclass) are displayed as a single opaque field rather than expanding into individual attributes. This makes it unclear to users what values are expected for each field.
Could you advise if there’s a recommended approach to make structured inputs more user-friendly in the UI? For example, is there a way to automatically expand fields or provide schema hints for structured types?
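For concreteness, here is a minimal sketch of the kind of structured input being described (the dataclass name and fields are hypothetical, and the flytekit decorators are omitted so the snippet stands alone). The UI currently renders such a value as one opaque field; the per-attribute schema a user actually needs to fill in looks like this:

```python
from dataclasses import dataclass, fields

# Hypothetical structured input; in a real workflow this class would
# annotate a task/workflow parameter.
@dataclass
class TrainingConfig:
    learning_rate: float
    epochs: int
    model_name: str

# The per-attribute hints a friendlier UI could surface instead of one
# opaque JSON blob:
schema_hint = {f.name: f.type.__name__ for f in fields(TrainingConfig)}
# e.g. {"learning_rate": "float", "epochs": "int", "model_name": "str"}
```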
Thanks for your help!
brash-ram-89454
11/18/2025, 1:23 PM
cool-waitress-85601
11/18/2025, 8:49 PM
gray-ocean-43286
11/19/2025, 4:33 PM
cool-waitress-85601
11/19/2025, 4:45 PM
raw_output_data_config? For instance, when using fast registration, will the user's local code be uploaded to the global bucket or the project-scoped bucket? More generally, what data goes in the global bucket vs. the project-scoped bucket?
Thanks
early-addition-41415
11/20/2025, 10:21 PM
early-addition-41415
11/20/2025, 10:22 PM
early-addition-41415
11/20/2025, 10:23 PM
fancy-twilight-30247
11/21/2025, 10:12 AM
@task(
    task_config=task_config,
    cache=False,
    container_image=container_image,
    pod_template=pod_template,
    timeout=timeout,
    retries=max_retries,
)
def flyte_training_main_task():
    ...
with the task_config being (note that we don't really need the elastic part of things; we just need to launch a multi-node PyTorch task):
task_config = Elastic(
    nnodes=num_nodes,
    nproc_per_node=8,
)
Now imagine that a rank in the distributed training hits an error of some sort: is there a way to configure our task so that the whole task/workflow is terminated (including all the pods corresponding to it) as soon as a single rank errors? Currently it seems to require all the ranks to exit or error before the task/workflow is terminated, which we often don't want, because other ranks might be stuck until the NCCL timeout or stuck for other reasons. I've tried raising special exception types like SignalException or ChildFailedError, but it seems like it always waits until all the ranks exit. One hacky workaround I could think of is to manually terminate the workflow, but that also does not seem ideal.
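The fail-fast semantics being asked for can be illustrated outside Flyte with plain multiprocessing (this is not the Elastic/torchrun API; the worker and function names are made up for illustration): a watchdog terminates every surviving worker as soon as any one of them exits non-zero, instead of waiting for all ranks to finish on their own:

```python
import multiprocessing as mp
import time

def _worker(rank: int, fail_rank: int) -> None:
    # The failing rank errors immediately; the others simulate being stuck
    # (e.g. blocked in a collective until the NCCL timeout).
    if rank == fail_rank:
        raise RuntimeError(f"rank {rank} failed")
    time.sleep(60)

def run_fail_fast(nprocs: int = 3, fail_rank: int = 0) -> list:
    ctx = mp.get_context("fork")  # POSIX fork start method
    procs = [ctx.Process(target=_worker, args=(r, fail_rank))
             for r in range(nprocs)]
    for p in procs:
        p.start()
    # Watchdog: poll exit codes; on the first non-zero exit, tear down the
    # remaining workers rather than waiting for them to finish.
    while True:
        codes = [p.exitcode for p in procs]
        if any(c not in (None, 0) for c in codes):
            for p in procs:
                if p.is_alive():
                    p.terminate()
            for p in procs:
                p.join()
            return [p.exitcode for p in procs]
        if all(c == 0 for c in codes):
            return codes
        time.sleep(0.05)
```

A rank that raises exits with code 1, and the terminated peers report a negative exit code (the signal number), so the whole group resolves in well under the simulated 60-second "stuck" window.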
Thanks!!
numerous-hamburger-7178
11/25/2025, 11:45 PM
cool-waitress-85601
11/26/2025, 1:16 PM
aloof-magazine-44547
12/01/2025, 10:39 AM