# connector-development
I am getting so close to finishing my first connector. 🥳 Thanks to the excellent support in this community! One of my streams has heavy limitations from the source system. The current dataset has >2 million rows, and the system only allows a maximum of 100 pages, with 1,000 records each, per query. Not a huge issue, as I have been able to segment the data, and by manipulating `next_page_token` I am able to build a list of different filter params that are well under the limits. But here is my issue:
• As soon as I run my `main.py read` script, it kicks off the request on the endpoint. (This query doesn't work, as it tries to return all 2 million rows.)
• I need this stream to wait for `next_page_token` to make a few secondary requests, to build a list of filters, before it starts pulling from the dataset.
What is the best way to do this?
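A minimal sketch of one way to do this with the Airbyte CDK: override `stream_slices` so the secondary requests run once, before the first read request, and yield one filter per slice; `request_params` then applies the slice's filter to every page of that slice. The endpoint path, the `filter`/`per_page` parameter names, and the `_build_filter_segments` helper are hypothetical placeholders, not the real source API.

```python
from typing import Any, Iterable, Mapping, MutableMapping, Optional

import requests
from airbyte_cdk.sources.streams.http import HttpStream


class SegmentedRecords(HttpStream):
    """One slice per filter segment, so no single query exceeds the page cap."""

    url_base = "https://api.example.com/v1/"  # placeholder
    primary_key = "id"

    def path(self, **kwargs) -> str:
        return "records"  # placeholder endpoint

    def stream_slices(
        self, sync_mode=None, cursor_field=None, stream_state=None
    ) -> Iterable[Optional[Mapping[str, Any]]]:
        # Runs before the first read request: do the secondary requests
        # here and yield one slice per filter segment.
        for segment in self._build_filter_segments():
            yield {"filter": segment}

    def _build_filter_segments(self) -> Iterable[str]:
        # Hypothetical helper: query a lightweight endpoint to learn how
        # to split the dataset into chunks under 100 pages x 1,000 rows.
        resp = requests.get(f"{self.url_base}segments")
        resp.raise_for_status()
        return [seg["id"] for seg in resp.json()]

    def request_params(
        self,
        stream_state: Mapping[str, Any],
        stream_slice: Mapping[str, Any] = None,
        next_page_token: Mapping[str, Any] = None,
    ) -> MutableMapping[str, Any]:
        params = {"per_page": 1000}  # placeholder parameter names
        if stream_slice:
            params["filter"] = stream_slice["filter"]
        if next_page_token:
            params.update(next_page_token)
        return params

    def next_page_token(self, response: requests.Response) -> Optional[Mapping[str, Any]]:
        # Placeholder pagination: stop when a page comes back short.
        body = response.json()
        if len(body.get("results", [])) == 1000:
            return {"page": body.get("page", 0) + 1}
        return None

    def parse_response(self, response: requests.Response, **kwargs) -> Iterable[Mapping]:
        yield from response.json().get("results", [])
```

Each slice is read to exhaustion (paged via `next_page_token`) before the next slice starts, so no query ever has to return the full 2 million rows.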
@Chris Conradi did you override the `request_params` function? For streams with large collections of data, the best way is to implement an incremental method (if possible).
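If the source exposes a "last updated" timestamp, a rough sketch of that incremental approach could look like the following; the `updated_at` cursor field and the `updated_since` query parameter are assumptions, not the real API:

```python
from typing import Any, Iterable, Mapping, MutableMapping, Optional

import requests
from airbyte_cdk.sources.streams.http import HttpStream


class IncrementalRecords(HttpStream):
    url_base = "https://api.example.com/v1/"  # placeholder
    primary_key = "id"
    cursor_field = "updated_at"  # assumed cursor column

    def path(self, **kwargs) -> str:
        return "records"  # placeholder endpoint

    def request_params(
        self,
        stream_state: Mapping[str, Any],
        stream_slice: Mapping[str, Any] = None,
        next_page_token: Mapping[str, Any] = None,
    ) -> MutableMapping[str, Any]:
        params = {"per_page": 1000}
        cursor = (stream_state or {}).get(self.cursor_field)
        if cursor:
            # Only fetch rows changed since the last sync, which keeps
            # each run well under the 100-page limit.
            params["updated_since"] = cursor  # assumed parameter name
        if next_page_token:
            params.update(next_page_token)
        return params

    def get_updated_state(
        self,
        current_stream_state: Mapping[str, Any],
        latest_record: Mapping[str, Any],
    ) -> Mapping[str, Any]:
        # Advance the cursor to the newest value seen so far.
        latest = latest_record.get(self.cursor_field, "")
        current = (current_stream_state or {}).get(self.cursor_field, "")
        return {self.cursor_field: max(latest, current)}

    def next_page_token(self, response: requests.Response) -> Optional[Mapping[str, Any]]:
        return None  # pagination elided for brevity

    def parse_response(self, response: requests.Response, **kwargs) -> Iterable[Mapping]:
        yield from response.json().get("results", [])
```

After the full first sync, each subsequent run only touches recently changed rows, so the page cap stops being a constraint.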