brainy-tailor-93048
02/09/2023, 11:58 AM
Note that this field will soon be deprecated in favor of a more standardized concept of Environment
Is there anything that could be shared, an RFC or discussion, about the plans for this field, even if tentative? I will be very excited to take advantage of a more general Environment implementation when it arrives!
kind-kite-29761
02/09/2023, 12:56 PM
lemon-daybreak-58504
02/09/2023, 1:11 PM
lemon-daybreak-58504
02/09/2023, 1:17 PM
lemon-daybreak-58504
02/09/2023, 1:18 PM
alert-fall-82501
02/09/2023, 1:54 PM
tall-pizza-132
02/09/2023, 3:31 PM
Hello everyone, just wondering if anyone has tried to connect to Exasol using SQLAlchemy in the past? I'm working with the recipe and I'm getting this error:
["Tables error: 'pyodbc.Row' object has no attribute 'table_name'"]}
Any advice or help?
Thanks
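For reference, a minimal sketch (assuming the sqlalchemy_exasol dialect over pyodbc; the URI, user, and DSN are placeholders) to check whether the dialect can enumerate tables on its own, outside of DataHub:
# Hypothetical standalone check (not part of any recipe): use SQLAlchemy's
# inspector with the Exasol dialect to list schemas and tables directly.
from sqlalchemy import create_engine, inspect

# Placeholder credentials and ODBC DSN -- replace with your own connection details.
engine = create_engine("exa+pyodbc://user:password@exasol_dsn")
inspector = inspect(engine)

for schema in inspector.get_schema_names():
    # If this raises the same "'pyodbc.Row' object has no attribute ..." error,
    # the dialect/ODBC driver pairing is the problem rather than the DataHub source.
    print(schema, inspector.get_table_names(schema=schema))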
victorious-evening-88418
02/09/2023, 5:32 PM
salmon-spring-51500
02/09/2023, 6:53 PM
chilly-ability-77706
02/09/2023, 9:49 PM
bland-lighter-26751
02/10/2023, 12:02 AM
datahub delete --entity_type dashboard --platform metabase --hard
datahub delete --entity_type chart --platform metabase --hard
Then I reingested, watched the logs pick up everything, but the UI doesn't show Metabase assets anywhere. Help?
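A way to narrow it down (a sketch assuming the acryl-datahub Python SDK and GMS at http://localhost:8080; the dashboard URN below is a placeholder): pull one re-ingested dashboard straight from GMS and check its Status aspect. If it exists with removed=False but still doesn't show up, the search index is the likelier suspect:
# Hypothetical check, assuming the acryl-datahub Python SDK and GMS on
# http://localhost:8080: fetch one expected Metabase dashboard straight from GMS
# and look at its Status aspect.
from datahub.ingestion.graph.client import DataHubGraph, DatahubClientConfig
from datahub.metadata.schema_classes import StatusClass

graph = DataHubGraph(DatahubClientConfig(server="http://localhost:8080"))

# Placeholder URN -- substitute a dashboard id you expect the Metabase run to emit.
urn = "urn:li:dashboard:(metabase,1)"
status = graph.get_aspect(entity_urn=urn, aspect_type=StatusClass)

# None         -> the entity never reached GMS (ingestion problem)
# removed=True -> it is still soft-deleted and hidden from the UI
print(status)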
many-helicopter-71451
02/10/2023, 12:24 AM
salmon-spring-51500
02/10/2023, 1:08 AM
best-planet-6756
02/10/2023, 1:31 AM
Task exited with return code Negsignal.SIGSEGV
Searching the error, it suggests increasing resources on your cluster, but monitoring the resources on GKE I don't see anything out of the ordinary. Any advice?
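In case the task pod itself is memory-squeezed even when node-level GKE graphs look fine, here is a sketch (assuming Airflow's KubernetesExecutor; the sizes and task name are placeholders) of raising resources for just that task:
# Hypothetical sketch, assuming the task runs under Airflow's KubernetesExecutor
# on GKE: give just this task more memory via pod_override, which is the usual
# way to apply the "increase resources" advice per task.
from kubernetes.client import models as k8s

bigger_pod = k8s.V1Pod(
    spec=k8s.V1PodSpec(
        containers=[
            k8s.V1Container(
                name="base",  # Airflow's main task container is named "base"
                resources=k8s.V1ResourceRequirements(
                    requests={"memory": "4Gi", "cpu": "1"},
                    limits={"memory": "8Gi"},
                ),
            )
        ]
    )
)

# Then attach it to the failing task, e.g.:
# PythonOperator(task_id="ingest", python_callable=run_ingestion,
#                executor_config={"pod_override": bigger_pod})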
fierce-baker-1392
02/10/2023, 2:19 AM
great-notebook-53658
02/10/2023, 6:31 AM
billions-family-12217
02/10/2023, 7:08 AM
ripe-eye-60209
02/10/2023, 8:51 AM
square-football-37770
02/10/2023, 8:57 AM
timeout? If not, I can use filters to ingest databases from a server in batches, but by increasing the timeout I could ingest all the DBs on a server in one go. Thanks!
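If it comes to the filter approach, a minimal sketch (assuming a SQLAlchemy-based source, with mssql used only as an example, and the acryl-datahub SDK; all names and connection details are placeholders) of running the same recipe per batch with a different database_pattern allow-list:
# Hypothetical sketch of the "batches via filters" fallback, using the
# acryl-datahub SDK. mssql is only an example source; database names,
# connection details, and batch contents are placeholders.
from datahub.ingestion.run.pipeline import Pipeline

batches = [["db_finance", "db_sales"], ["db_ops", "db_hr"]]

for allow in batches:
    pipeline = Pipeline.create(
        {
            "source": {
                "type": "mssql",
                "config": {
                    "host_port": "myserver:1433",
                    "username": "user",
                    "password": "pass",
                    # Only ingest the databases in this batch.
                    "database_pattern": {"allow": [f"^{db}$" for db in allow]},
                },
            },
            "sink": {
                "type": "datahub-rest",
                "config": {"server": "http://localhost:8080"},
            },
        }
    )
    pipeline.run()
    pipeline.raise_from_status()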
agreeable-cricket-61480
02/10/2023, 10:04 AM
quick-megabyte-61846
02/10/2023, 11:44 AM
Authorization: "Basic ${DATAHUB_SYSTEM_CLIENT_ID:-__datahub_system}:${DATAHUB_SYSTEM_CLIENT_SECRET:-JohnSnowKnowsNothing}"
I found this in the source, and this in the acryl-datahub-action pod deployed by the helm chart:
DATAHUB_SYSTEM_CLIENT_ID
The internal system id that is used to communicate with DataHub GMS. Required if metadata_service_authentication is 'true'.
Where should I find this internal system id?
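Based on the defaults visible in that config, the system id falls back to __datahub_system unless it is overridden in the helm values. A hypothetical sanity check (assuming GMS at http://localhost:8080 with metadata service authentication enabled) that calls GMS with the id:secret pair in the same Basic form:
# Hypothetical check using the defaults from the frontend config above; override
# both values if your helm deployment sets its own system client id/secret.
import requests

client_id = "__datahub_system"          # default system client id
client_secret = "JohnSnowKnowsNothing"  # default secret -- change it in production

resp = requests.get(
    "http://localhost:8080/config",
    headers={"Authorization": f"Basic {client_id}:{client_secret}"},
)
print(resp.status_code, resp.text)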
bitter-evening-61050
02/10/2023, 12:36 PM
lemon-scooter-69730
02/10/2023, 4:18 PM
default backend - 404
quiet-beach-32846
02/10/2023, 6:56 PM
salmon-spring-51500
02/10/2023, 7:14 PM
white-horse-97256
02/10/2023, 10:12 PM
agreeable-cricket-61480
02/13/2023, 5:32 AM
incalculable-manchester-41314
02/13/2023, 10:05 AM
bitter-evening-61050
02/13/2023, 10:23 AM
clean-doctor-27061
02/13/2023, 3:31 PM
datahub.configuration.common.OperationalError: ('Unable to emit metadata to DataHub GMS', {'exceptionClass': 'com.linkedin.restli.server.RestLiServiceException', 'stackTrace': 'com.linkedin.restli.server.RestLiServiceException [HTTP Status:500]: org.apache.kafka.common.errors.SerializationException: Error registering Avro schema: {"type":"record","name":"MetadataChangeLog","namespace":"com.linkedin.pegasus2avro.mxe","doc":"Kafka event for capturing update made to an entity\'s metadata.","fields":[{"name":"auditHeader","type":["null",{"type":"record","name":"KafkaAuditHeader","namespace":"com.linkedin.events","doc":"This header records information about the context of an event as it is emitted into kafka and is intended to be used by the kafka audit application.