Damar Adi (04/28/2023, 7:57 AM)

Mayur Choubey (04/28/2023, 8:11 AM):
04/28/2023, 8:11 AM./gradlew :airbyte-integrations:connectors:destination-iceberg:build
./gradlew :airbyte-integrations:connectors:destination-iceberg:airbyteDocker
docker run --rm airbyte/destination-iceberg:dev spec
In the output of the last command I can see the test changes I made in the UI (just a change of titles).
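If you want to compare the titles programmatically rather than by eye, `spec` prints Airbyte protocol messages as JSON, one object per line; a small sketch that pulls the property titles out of a SPEC message (the sample line below is fabricated for illustration, a real spec has many more fields):

```python
import json

def spec_titles(lines):
    """Collect each connection-spec property's title from Airbyte
    protocol messages, given one JSON object per line."""
    titles = {}
    for line in lines:
        msg = json.loads(line)
        if msg.get("type") == "SPEC":
            props = msg["spec"]["connectionSpecification"].get("properties", {})
            titles = {name: p.get("title") for name, p in props.items()}
    return titles

# Fabricated stand-in for one line of `docker run --rm ... spec` output:
sample = ['{"type": "SPEC", "spec": {"connectionSpecification": {"properties": {"catalog_type": {"title": "Iceberg catalog type"}}}}}']
print(spec_titles(sample))  # {'catalog_type': 'Iceberg catalog type'}
```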
Mayur Choubey (04/28/2023, 8:11 AM)
Mayur Choubey (04/28/2023, 8:11 AM)
Dhanji Mahto (04/28/2023, 12:57 PM)
Slackbot (04/28/2023, 6:44 PM)
aidan (04/28/2023, 7:49 PM)
Slackbot (04/29/2023, 11:39 AM)
MF (04/30/2023, 3:44 PM)
MF (04/30/2023, 4:09 PM)
MF (04/30/2023, 4:48 PM)

Rutger Weemhoff (05/01/2023, 12:38 PM):
{{ last_records[-1]['id'] }}. Now I would like to use the actual cursor value in a custom Request Body parameter with key "query".
The value of this request body parameter would be something like:
select id, ..., ... from table where id > {{ cursor_value }} order by id
I am not sure in which variable the actual cursor value would be stored or how I can find out. Can you please point me in the right direction?
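To see what such an interpolation produces once the right variable is known, here is a tiny stand-in for the Jinja rendering the low-code CDK performs. Both the renderer and the context variable names are illustrative assumptions for this sketch, not the CDK's actual API:

```python
import re

def render(template, context):
    """Replace each {{ expr }} with expr evaluated against the context
    (a crude stand-in for Jinja interpolation)."""
    return re.sub(
        r"\{\{\s*(.+?)\s*\}\}",
        lambda m: str(eval(m.group(1), {}, context)),
        template,
    )

# Assumed context: whatever variable the CDK exposes the cursor under.
query = render(
    "select id from table where id > {{ stream_slice['start_date'] }} order by id",
    {"stream_slice": {"start_date": 42}},
)
print(query)  # select id from table where id > 42 order by id
```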
For incremental syncs I am already using {{ stream_slice['start_date'] }} and {{ stream_slice['end_date'] }} successfully in the same way.

Adam Roderick (05/01/2023, 3:57 PM)
MF (05/01/2023, 5:28 PM)
Glauber Costa (05/01/2023, 11:14 PM)
Glauber Costa (05/02/2023, 1:07 AM)

Konstantin Shamko (05/02/2023, 6:03 AM):
json
{
......
"all_workflows": ["workflow_1", "workflow_2", ..., "workflow_N"]
}
2. Next, I need to fetch some workflow stats from a different endpoint, which is structured as follows: "https://circleci.com/api/v2/insights/time-series/{project-slug}/workflows/{workflow-name}/jobs". In this URL, the "project-slug" is defined as a variable, while the "workflow-name" should be taken from the previous step's response. To accomplish this, I created a separate stream with partitioning settings defined (refer to pic1.png). In these partitioning settings, I refer to the stream/field from the previous step and define some alias on that field.
3. When I test this stream, I do not receive any response, which is expected because the workflows for the request are concatenated with commas. Unfortunately, the endpoint does not support multiple workflow names in a single request, so I need to send N requests to fetch the required stats.
My questions are as follows:
1. Is it possible to iterate over fields from another stream to make multiple requests?
2. If not, what is a workaround that I can use with a low-code approach to implement this use case?
3. (optional) What is the correct Airbyte way to implement such an integration?
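For question 1, the fan-out being asked for is one request per workflow name taken from the parent stream's record. A minimal sketch of that iteration in plain Python (the function, the sample response, and the project slug are illustrative assumptions, not the connector builder's API):

```python
# One insights URL per entry in the parent record's all_workflows list.
BASE = "https://circleci.com/api/v2/insights/time-series/{slug}/workflows/{workflow}/jobs"

def workflow_urls(project_slug, parent_record):
    """Yield the per-workflow request URL for each workflow name
    found in the parent stream's response."""
    for name in parent_record["all_workflows"]:
        yield BASE.format(slug=project_slug, workflow=name)

# Fabricated parent response, shaped like the one shown above:
parent = {"all_workflows": ["workflow_1", "workflow_2"]}
urls = list(workflow_urls("gh/acme/repo", parent))
print(urls[0])  # https://circleci.com/api/v2/insights/time-series/gh/acme/repo/workflows/workflow_1/jobs
```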
Thank you in advance.

Matheus Barbosa (05/02/2023, 1:39 PM)

Dion Duran (05/02/2023, 7:09 PM):
[
{
"name": "Joel Miller"
},
{
"name": "Ellie Williams"
}
]
Quazi Hoque (05/02/2023, 9:09 PM):
Sorry. Something went wrong...
Looking at our airbyte-worker logs, I see a more specific error message:
Error while getting spec from image mitodl/destination-s3-glue:0.1.7-d
Slackbot (05/02/2023, 9:14 PM)
Randal Boyle (05/02/2023, 9:14 PM)

Randal Boyle (05/02/2023, 9:16 PM):
Non-breaking schema updates detected
When I click "review changes", there are no changes observed?

Balaji Seetharaman (05/03/2023, 5:58 AM)
Aazam Thakur (05/03/2023, 11:49 AM)
Slackbot (05/03/2023, 12:48 PM)
Nicolas Jullien (05/03/2023, 12:51 PM)
Slackbot (05/03/2023, 3:17 PM)
Vivek Jain (05/03/2023, 3:25 PM)
Y L (05/03/2023, 4:48 PM)