# ask-community-for-troubleshooting
v
Hi, I am developing a custom connector. Let’s say I have a surveys entity and a stream for it. For each entity I need to: • call a POST request to start the export of survey responses • call a POST request to download the csv/json file containing the responses • import those responses from the file. What is the best way to do this? Thanks!
s
@Vika Petrenko I would recommend extending the `Stream` class from the CDK (instead of `HttpStream`) and overriding the `read_records` method to perform what you need
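A minimal sketch of that advice, combined with the three steps from the question. Endpoint paths, field names, and the `FakeSession` class are illustrative assumptions, not the real Qualtrics API; in the real connector this logic would live in `read_records` of a CDK `Stream` subclass:

```python
# Sketch of the export -> download -> import flow inside read_records().
# URLs, response fields, and FakeSession are hypothetical placeholders.
import json


class SurveyResponsesReader:
    def __init__(self, session, api_url, survey_id):
        self.session = session  # e.g. a requests.Session in the real connector
        self.api_url = api_url
        self.survey_id = survey_id

    def read_records(self):
        # 1. POST to start the export of survey responses
        start = self.session.post(f"{self.api_url}/surveys/{self.survey_id}/export")
        file_id = start["fileId"]
        # 2. POST to download the exported json file
        payload = self.session.post(f"{self.api_url}/files/{file_id}/download")
        # 3. Import: yield one record per response found in the file
        for record in json.loads(payload)["responses"]:
            yield record


class FakeSession:
    """Offline stand-in for an HTTP session, so the flow can be exercised."""

    def post(self, url):
        if url.endswith("/export"):
            return {"fileId": "f1"}
        return json.dumps({"responses": [{"ResponseId": "R_1"}, {"ResponseId": "R_2"}]})
```

Keeping the three HTTP steps inside one `read_records` generator means the CDK handles batching and state while the method only worries about producing records.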
v
thanks! you are awesome support!
I have a survey item with its own id-related JSON schema returned by the API (fields are dynamic for each survey). I am trying to use the `get_json_schema` method, but it seems it isn’t called. • For a dynamic schema, should I skip creating a `stream_name.json` file? • What should I define in `properties` in `configured_catalog.json`? • Are there any examples of dynamic schemas? Thanks!
s
@Vika Petrenko which connector are you building out of curiosity?
v
@s A connector to Qualtrics and a destination to BigQuery; here is an endpoint to get the survey item JSON schema. So surveys are the same kind of item, but with a different structure for each of them, and behave like different entities. Is it the right way to override `get_json_schema` and return the right JSON schema based on `survey_id`? Also, as a workaround it would be ok to have one struct field in BigQuery and put all survey data there. Are there any recommendations on how to do it this way?
Is it possible to specify a field to be `RECORD` in BigQuery? I am configuring it like this:
```json
"fields": {
      "properties": {},
      "type": ["null", "object"],
      "additionalProperties": true
    }
```
but in the table it is still `STRING`. It is possible to query the string field using `JSON_EXTRACT_SCALAR(fields, "$.ResponseId")`, but I’m not sure it is efficient.
s
> Is it the right way to override `get_json_schema` and return right json schema based on survey_id?
Yes, that is the correct way. Is that something you can do on your end?
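A minimal sketch of such an override, assuming a per-survey schema endpoint and response shape (the URL, field format, and `FakeSession` are hypothetical; in the real connector the class would extend the CDK `Stream`):

```python
class SurveyResponsesStream:  # would extend airbyte_cdk's Stream in practice
    def __init__(self, session, api_url, survey_id):
        self.session = session
        self.api_url = api_url
        self.survey_id = survey_id

    def get_json_schema(self):
        # Fetch this survey's field definitions and wrap them in a
        # JSON-schema envelope, so each survey gets its own schema.
        fields = self.session.get(f"{self.api_url}/surveys/{self.survey_id}/response-schema")
        return {
            "$schema": "http://json-schema.org/draft-07/schema#",
            "type": "object",
            "additionalProperties": True,
            "properties": fields,
        }


class FakeSession:
    """Offline stand-in returning field definitions for one survey."""

    def get(self, url):
        return {"ResponseId": {"type": "string"}}
```

Because the schema is built at runtime, no static `stream_name.json` file is needed for this stream.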
> would be ok to have one struct field in BigQuery
Airbyte’s normalization currently doesn’t support writing STRUCTs to BigQuery natively. We have an issue open for this and expect to support it in 3-4 weeks: https://github.com/airbytehq/airbyte/issues/1927 For the time being the workaround is to use `JSON_EXTRACT_SCALAR` (which is not great, but we’re working on it 🙂)
v
thanks!