average-lock-95905
05/04/2023, 7:13 AM
from datahub.ingestion.run.pipeline import Pipeline
# The pipeline configuration is similar to the recipe YAML files provided to the CLI tool.
pipeline = Pipeline.create(
    {
        "source": {
            "type": "mysql",
            "config": {
                "username": "user",
                "password": "pass",
                "database": "db_name",
                "host_port": "localhost:3306",
            },
        },
        "sink": {
            "type": "datahub-rest",
            "config": {"server": "http://localhost:8080"},
        },
    }
)
# Run the pipeline and report the results.
pipeline.run()
pipeline.pretty_print_summary()
I'm using the above code to ingest data into DataHub. Is there a way to add column descriptions while ingesting the data?

lively-cat-88289
05/04/2023, 7:13 AM

average-lock-95905
05/04/2023, 7:15 AM

modern-artist-55754
05/04/2023, 7:49 AM

average-lock-95905
05/04/2023, 8:44 AM

modern-artist-55754
05/04/2023, 9:02 AM

average-lock-95905
05/04/2023, 9:17 AM

modern-artist-55754
05/04/2023, 1:02 PM

average-lock-95905
05/04/2023, 1:11 PM

modern-artist-55754
05/04/2023, 1:16 PM

average-lock-95905
05/04/2023, 1:30 PM

modern-artist-55754
05/04/2023, 1:59 PM

average-lock-95905
05/04/2023, 2:05 PM

modern-artist-55754
05/04/2023, 2:23 PM

average-lock-95905
05/04/2023, 2:27 PM
from datahub.ingestion.run.pipeline import Pipeline

pipeline = Pipeline.create(
    {
        "source": {
            "type": "csv-enricher",
            "config": {
                "filename": "file path",
                "should_overwrite": "false",
                "delimiter": ",",
                "array_delimiter": "|",
            },
        },
        "sink": {
            "type": "datahub-rest",
            "config": {"server": "http://localhost:8080"},
        },
    }
)
# Run the pipeline and report the results.
pipeline.run()
pipeline.pretty_print_summary()
Is this the approach you are talking about?

modern-artist-55754
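Editor's note: per DataHub's csv-enricher documentation, the source reads a CSV whose header includes `resource`, `subresource`, `glossary_terms`, `tags`, `owners`, `ownership_type`, `description`, and `domain`. A column description is attached by putting the dataset URN in `resource` and the column's field path in `subresource`. The URN and column below are placeholder assumptions, not values from this thread:

```csv
resource,subresource,glossary_terms,tags,owners,ownership_type,description,domain
"urn:li:dataset:(urn:li:dataPlatform:mysql,db_name.my_table,PROD)",my_column,,,,,"Describes what my_column holds",
```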
05/05/2023, 1:13 AM

average-lock-95905
05/05/2023, 5:50 AM

modern-artist-55754
05/05/2023, 5:52 AM

average-lock-95905
05/05/2023, 6:02 AM

modern-artist-55754
05/05/2023, 6:04 AM

modern-artist-55754
05/05/2023, 6:04 AM

average-lock-95905
05/05/2023, 6:14 AM

astonishing-answer-96712
05/05/2023, 5:38 PM

average-lock-95905
05/08/2023, 4:53 PM