# troubleshoot
m
Hello again, I am trying out the new Teams integration but I am having some trouble. I have followed this guide. Theoretically it should work, as I am getting the messages indicating that the connection is running, but I get the following error:
[2022-12-05 17:09:18,100] INFO     {datahub_actions.cli.actions:119} - Action Pipeline with name 'datahub_teams_action' is now running
 DEBUG    {datahub_actions.pipeline.pipeline_manager:63} - Attempting to start pipeline with name datahub_teams_action...
 Exception in thread Thread-2 (run_pipeline):
 Traceback (most recent call last):
   File "/usr/local/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
     self.run()
   File "/usr/local/lib/python3.10/threading.py", line 953, in run
     self._target(*self._args, **self._kwargs)
   File "/usr/local/lib/python3.10/site-packages/datahub_actions/pipeline/pipeline_manager.py", line 42, in run_pipeline
     pipeline.run()
  File "/usr/local/lib/python3.10/site-packages/datahub_actions/pipeline/pipeline.py", line 166, in run
     for enveloped_event in enveloped_events:
   File "/usr/local/lib/python3.10/site-packages/datahub_actions/plugin/source/kafka/kafka_event_source.py", line 154, in events
     msg = self.consumer.poll(timeout=2.0)
   File "/usr/local/lib/python3.10/site-packages/confluent_kafka/deserializing_consumer.py", line 131, in poll
     raise ConsumeError(msg.error(), kafka_message=msg)
 confluent_kafka.error.ConsumeError: KafkaError{code=_TRANSPORT,val=-195,str="FindCoordinator response error: Local: Broker transport failure"}
 [2022-12-05 17:09:18,174] ERROR    {datahub_actions.entrypoints:122} - File "/usr/local/lib/python3.10/site-packages/datahub_actions/entrypoints.py", line 114, in main
     111  def main(**kwargs):
     112      # This wrapper prevents click from suppressing errors.
     113      try:
 --> 114          sys.exit(datahub_actions(standalone_mode=False, **kwargs))
     115      except click.exceptions.Abort:

 File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1130, in __call__
     1128  def __call__(self, *args: t.Any, **kwargs: t.Any) -> t.Any:
  (...)
--> 1130      return self.main(*args, **kwargs)

 File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1055, in main
     rv = self.invoke(ctx)
 File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1657, in invoke
     return _process_result(sub_ctx.command.invoke(sub_ctx))
 File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
 File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
     return ctx.invoke(self.callback, **ctx.params)
 File "/usr/local/lib/python3.10/site-packages/click/core.py", line 760, in invoke
     return __callback(*args, **kwargs)
 File "/usr/local/lib/python3.10/site-packages/click/decorators.py", line 26, in new_func
     return f(get_current_context(), *args, **kwargs)
 File "/usr/local/lib/python3.10/site-packages/datahub_actions/cli/actions.py", line 118, in run
     73   def run(ctx: Any, config: List[str], debug: bool) -> None:
  (...)
     114      logger.debug("Starting Actions Pipelines")
     115
     116      # Start each pipeline.
     117      for p in pipelines:
 --> 118          pipeline_manager.start_pipeline(p.name, p)
    119          logger.info(f"Action Pipeline with name '{p.name}' is now running.")

 File "/usr/local/lib/python3.10/site-packages/datahub_actions/pipeline/pipeline_manager.py", line 71, in start_pipeline
     62   def start_pipeline(self, name: str, pipeline: Pipeline) -> None:
  (...)
     67           spec = PipelineSpec(name, pipeline, thread)
     68           self.pipeline_registry[name] = spec
     69           logger.debug(f"Started pipeline with name {name}.")
     70       else:
 --> 71           raise Exception(f"Pipeline with name {name} is already running.")
 Exception: Pipeline with name datahub_teams_action is already running.
 [2022-12-05 17:09:18,174] INFO     {datahub_actions.entrypoints:131} - DataHub Actions version: 0.0.0.dev0 at /usr/local/lib/python3.10/site-packages/datahub_actions/__init__.py
 [2022-12-05 17:09:18,176] INFO     {datahub_actions.entrypoints:134} - Python version: 3.10.7 (main, Oct  5 2022, 14:33:54) [GCC 10.2.1 20210110] at /usr/local/bin/python on Linux-3.10.0-1160.76.1.el7.x86_64-x86_64-with-glibc2.3
I am also getting the initialization message in Teams, but that is it. I have added tags to a few datasets and columns and run some ingestions, but I haven't received any messages. I should have received one message per action I performed, right?
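For reference, the `_TRANSPORT` / "Broker transport failure" error usually means the actions container cannot reach the Kafka broker at the configured bootstrap address. A minimal sketch of the connection section of an actions config (hostnames, variable names, and defaults here are illustrative; check them against the guide you followed):

```yaml
# teams_action.yaml -- illustrative sketch, not the exact file from the guide
name: "datahub_teams_action"
source:
  type: "kafka"
  config:
    connection:
      # Must be reachable from inside the actions container;
      # 'localhost' here usually points at the wrong host when running in Docker.
      bootstrap: ${KAFKA_BOOTSTRAP_SERVER:-broker:9092}
      schema_registry_url: ${SCHEMA_REGISTRY_URL:-http://schema-registry:8081}
action:
  type: "teams"
  config:
    webhook_url: ${TEAMS_WEBHOOK_URL}
```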
a
Hi @microscopic-mechanic-13766, is this bug still affecting you?
m
Yes, haven't been able to solve it
It seems to be an error in the parser, which does not accept comments in the file mentioned above. As described in this GitHub issue, I removed the lines containing "#" and everything seems to be working perfectly.
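For context, in YAML a `#` that follows whitespace in an unquoted value starts a comment, while a quoted value keeps it literally; that distinction is a plausible reason why removing the `#` lines (or quoting such values) helps. A small demonstration using PyYAML:

```python
import yaml

# An unquoted '#' preceded by whitespace starts a comment,
# so everything after it is silently dropped.
doc = yaml.safe_load("webhook: value # trailing comment")
print(doc)  # {'webhook': 'value'}

# Quoting the scalar preserves the '#' literally.
doc = yaml.safe_load('webhook: "value # not a comment"')
print(doc)  # {'webhook': 'value # not a comment'}
```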
Now, with the Teams integration, I am finding this error during what I think is the communication from DataHub to Teams when an event occurs:
[2022-12-14 13:16:46,147] DEBUG    {datahub_actions.plugin.source.kafka.kafka_event_source:207} - Successfully committed offsets at message: topic: PlatformEvent_v1, partition: 0, offset: 2828
--- Logging error ---
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/datahub_actions/plugin/action/teams/teams.py", line 92, in act
    semantic_message = get_message_from_entity_change_event(
  File "/usr/local/lib/python3.10/site-packages/datahub_actions/utils/social_util.py", line 79, in get_message_from_entity_change_event
    actor_name = get_entity_name_from_urn(event.auditStamp.actor, datahub_graph)
  File "/usr/local/lib/python3.10/site-packages/datahub_actions/utils/name_resolver.py", line 268, in get_entity_name_from_urn
    return _name_resolver_registry.get_resolver(entity_urn).get_entity_name(
  File "/usr/local/lib/python3.10/site-packages/datahub_actions/utils/name_resolver.py", line 205, in get_entity_name
    ] = datahub_graph.get_aspect(str(entity_urn), CorpUserEditableInfoClass)
  File "/usr/local/lib/python3.10/site-packages/datahub/ingestion/graph/client.py", line 162, in get_aspect
    raise OperationalError(
datahub.configuration.common.OperationalError: Failed to find com.linkedin.identity.CorpUserEditableInfo in response {'version': 0, 'aspect': {'com.linkedin.identity.CorpUserEditableInfo': {}}}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/logging/__init__.py", line 1100, in emit
    msg = self.format(record)
  File "/usr/local/lib/python3.10/logging/__init__.py", line 943, in format
    return fmt.format(record)
  File "/usr/local/lib/python3.10/logging/__init__.py", line 678, in format
    record.message = record.getMessage()
  File "/usr/local/lib/python3.10/logging/__init__.py", line 368, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
  File "/usr/local/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/usr/local/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.10/site-packages/datahub_actions/pipeline/pipeline_manager.py", line 42, in run_pipeline
    pipeline.run()
  File "/usr/local/lib/python3.10/site-packages/datahub_actions/pipeline/pipeline.py", line 168, in run
    self._process_event(enveloped_event)
  File "/usr/local/lib/python3.10/site-packages/datahub_actions/pipeline/pipeline.py", line 200, in _process_event
    self._execute_action(transformed_event)
  File "/usr/local/lib/python3.10/site-packages/datahub_actions/pipeline/pipeline.py", line 257, in _execute_action
    self.action.act(enveloped_event)
  File "/usr/local/lib/python3.10/site-packages/datahub_actions/plugin/action/teams/teams.py", line 103, in act
    logger.debug("Failed to process event", e)
Message: 'Failed to process event'
Arguments: (OperationalError("Failed to find com.linkedin.identity.CorpUserEditableInfo in response {'version': 0, 'aspect': {'com.linkedin.identity.CorpUserEditableInfo': {}}}"),)
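The secondary "--- Logging error ---" is a separate, cosmetic bug in `teams.py`: `logger.debug("Failed to process event", e)` passes the exception as a %-formatting argument, but the message has no `%s` placeholder, so the logging machinery hits the `TypeError` seen above. A minimal reproduction of the formatting rule:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("demo")

err = RuntimeError("boom")

# Buggy pattern: an extra argument with no matching %s placeholder.
# logging swallows the TypeError and prints "--- Logging error ---" instead.
log.debug("Failed to process event", err)

# Correct patterns: interpolate the exception, or attach the traceback.
log.debug("Failed to process event: %s", err)
log.debug("Failed to process event", exc_info=err)
```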
m
Seems like a small bug in the graph client. One simple workaround, until this bug gets fixed, is to go to your user profile and edit your name there.
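Another stopgap (a hypothetical fallback helper, not part of DataHub) is to derive a readable name from the corpuser URN itself whenever the editable-info lookup fails:

```python
def display_name_from_corpuser_urn(urn: str) -> str:
    """Fallback: 'urn:li:corpuser:jane.doe' -> 'jane.doe'.

    Hypothetical helper for when the CorpUserEditableInfo aspect
    cannot be fetched from the graph.
    """
    prefix = "urn:li:corpuser:"
    if not urn.startswith(prefix):
        raise ValueError(f"not a corpuser URN: {urn!r}")
    return urn[len(prefix):]


# Sketch of how it could wrap the failing lookup in social_util.py:
# try:
#     actor_name = get_entity_name_from_urn(event.auditStamp.actor, graph)
# except OperationalError:
#     actor_name = display_name_from_corpuser_urn(event.auditStamp.actor)
print(display_name_from_corpuser_urn("urn:li:corpuser:jane.doe"))  # jane.doe
```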
m
That was it, thanks! As you pointed out, I think the source of the problem was the user's name: it contained a number, which is not allowed.