Hey, welcome to the community @Peter! :octavia-wave:
Yeah, the docs would be pretty misleading for this use case. Airbyte is built for moving data via API calls rather than doing the actual data collection and storage itself. Typically the web scraping would happen outside of Airbyte, the scraped data would be stored somewhere (a database, for example), and then an Airbyte source connector can request that data from an API and sync it into the destination.
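If it helps, here's a very rough sketch of what that middle piece could look like: a tiny Flask app that serves rows your scraper has already written into a local SQLite table, with simple offset pagination so a connector can page through everything. All the names here (the `pages` table, its columns, the `/records` path, port 8000) are made up for illustration, not anything Airbyte-specific:

```python
# Rough sketch only: expose scraped rows from a SQLite table as a paginated
# JSON API so an Airbyte source connector has something to pull from.
# Table name, columns, and endpoint are placeholders.
import sqlite3

from flask import Flask, jsonify, request

app = Flask(__name__)
DB_PATH = "scraped.db"  # wherever your scraper writes its results


@app.get("/records")
def records():
    # Simple limit/offset pagination so the connector can page through everything
    limit = int(request.args.get("limit", 100))
    offset = int(request.args.get("offset", 0))

    conn = sqlite3.connect(DB_PATH)
    conn.row_factory = sqlite3.Row
    rows = conn.execute(
        "SELECT id, url, scraped_at, payload FROM pages ORDER BY id LIMIT ? OFFSET ?",
        (limit, offset),
    ).fetchall()
    conn.close()

    return jsonify({
        "data": [dict(r) for r in rows],
        # Tell the caller where the next page starts, or null when we're done
        "next_offset": offset + limit if len(rows) == limit else None,
    })


if __name__ == "__main__":
    app.run(port=8000)
```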
So, once your web scraper has an API to pull from, you can build a source connector and sync that data into Snowflake 🙂
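On the Airbyte side, that source connector can stay pretty small. Below is a very rough sketch using the Python CDK's `HttpStream` against the hypothetical `/records` endpoint from above; the base URL, field names, and stream name are all placeholders, and the no-code Connector Builder is another perfectly good way to do the same thing:

```python
# Very rough sketch of a Python CDK source reading from the hypothetical
# scraper API above. Endpoint, field names, and defaults are placeholders.
from typing import Any, Iterable, List, Mapping, Optional, Tuple

import requests
from airbyte_cdk.sources import AbstractSource
from airbyte_cdk.sources.streams import Stream
from airbyte_cdk.sources.streams.http import HttpStream


class ScrapedPages(HttpStream):
    url_base = "http://localhost:8000/"  # wherever the scraper API lives
    primary_key = "id"

    def path(self, **kwargs) -> str:
        return "records"

    def next_page_token(self, response: requests.Response) -> Optional[Mapping[str, Any]]:
        # The sketch API returns next_offset=null on the last page
        next_offset = response.json().get("next_offset")
        return {"offset": next_offset} if next_offset is not None else None

    def request_params(
        self, stream_state, stream_slice=None, next_page_token=None
    ) -> Mapping[str, Any]:
        return {"limit": 100, **(next_page_token or {})}

    def parse_response(self, response: requests.Response, **kwargs) -> Iterable[Mapping]:
        yield from response.json()["data"]


class SourceScraper(AbstractSource):
    def check_connection(self, logger, config) -> Tuple[bool, Any]:
        # A real connector would ping the API here and report failures
        return True, None

    def streams(self, config: Mapping[str, Any]) -> List[Stream]:
        return [ScrapedPages()]
```

Then it's just a matter of pairing that source with the Snowflake destination in a connection and letting Airbyte handle the sync schedule.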