# troubleshoot
f
Hello, I’m having trouble getting lineage data pushed to DataHub from Spark running on EMR. My job runs on Spark 3.3.0, and I have also tried Spark 3.2.0. The pipelines get created in DataHub, but there are no datasets, no lineage, and no tasks (cf. attached screenshot). I use the following configuration in my spark-submit:
--packages io.acryl:datahub-spark-lineage:0.9.3-1
--conf spark.extraListeners=datahub.spark.DatahubSparkListener 
--conf spark.datahub.rest.server=http://10.5.0.37:8080
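For reference, the same settings can also be applied when building the SparkSession instead of on the spark-submit command line. This is just a minimal sketch, not my actual job; the package, listener class, and server address are the ones from the flags above:

```python
from pyspark.sql import SparkSession

# Equivalent to the spark-submit flags above: pull in the lineage jar,
# register the DataHub listener, and point it at the DataHub GMS endpoint.
# These confs must be set before the session (and its JVM) is created.
spark = (
    SparkSession.builder
    .appName("lineage-config-sketch")
    .config("spark.jars.packages", "io.acryl:datahub-spark-lineage:0.9.3-1")
    .config("spark.extraListeners", "datahub.spark.DatahubSparkListener")
    .config("spark.datahub.rest.server", "http://10.5.0.37:8080")
    .getOrCreate()
)
```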
My DataHub server is running on Docker (version 0.9.3-1), started with the command
datahub docker quickstart
During the execution of my Spark job I get the following error from DatahubSparkListener (I removed the detailed Spark plans from the log):
22/12/06 17:31:09 INFO McpEmitter: MetadataWriteResponse(success=true, responseContent={"value":"urn:li:dataFlow:(spark,ModelizeBankAccountEvaluations,yarn)"}, underlyingResponse=HTTP/1.1 200 OK [Date: Tue, 06 Dec 2022 17:31:09 GMT, Content-Type: application/json, X-RestLi-Protocol-Version: 2.0.0, Content-Length: 71, Server: Jetty(9.4.46.v20220331)] [Content-Length: 71,Chunked: false])
22/12/06 17:31:10 ERROR DatahubSparkListener: java.lang.NullPointerException
	at datahub.spark.DatasetExtractor.lambda$static$6(DatasetExtractor.java:147)
	at datahub.spark.DatasetExtractor.asDataset(DatasetExtractor.java:237)
	at datahub.spark.DatahubSparkListener$SqlStartTask.run(DatahubSparkListener.java:114)
	at datahub.spark.DatahubSparkListener.processExecution(DatahubSparkListener.java:350)
	at datahub.spark.DatahubSparkListener.onOtherEvent(DatahubSparkListener.java:262)
	at org.apache.spark.scheduler.SparkListenerBus.doPostEvent(SparkListenerBus.scala:100)
	at org.apache.spark.scheduler.SparkListenerBus.doPostEvent$(SparkListenerBus.scala:28)
	at org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:37)
	at org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:37)
	at org.apache.spark.util.ListenerBus.postToAll(ListenerBus.scala:117)
	at org.apache.spark.util.ListenerBus.postToAll$(ListenerBus.scala:101)
	at org.apache.spark.scheduler.AsyncEventQueue.super$postToAll(AsyncEventQueue.scala:105)
	at org.apache.spark.scheduler.AsyncEventQueue.$anonfun$dispatch$1(AsyncEventQueue.scala:105)
	at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.java:23)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
	at org.apache.spark.scheduler.AsyncEventQueue.org$apache$spark$scheduler$AsyncEventQueue$$dispatch(AsyncEventQueue.scala:100)
	at org.apache.spark.scheduler.AsyncEventQueue$$anon$2.$anonfun$run$1(AsyncEventQueue.scala:96)
	at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1447)
	at org.apache.spark.scheduler.AsyncEventQueue$$anon$2.run(AsyncEventQueue.scala:96)

22/12/06 17:31:10 INFO AsyncEventQueue: Process of event SparkListenerSQLExecutionStart(0,save at NativeMethodAccessorImpl.java:0,org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:239)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
py4j.Gateway.invoke(Gateway.java:282)
py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
py4j.commands.CallCommand.execute(CallCommand.java:79)
py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
py4j.ClientServerConnection.run(ClientServerConnection.java:106)
java.lang.Thread.run(Thread.java:750),== Parsed Logical Plan ==

== Analyzed Logical Plan ==
SaveIntoDataSourceCommand org.apache.hudi.Spark3DefaultSource@6b5aa9cf, ...
== Optimized Logical Plan ==
SaveIntoDataSourceCommand org.apache.hudi.Spark3DefaultSource@6b5aa9cf...
== Physical Plan ==
Execute SaveIntoDataSourceCommand
   +- SaveIntoDataSourceCommand org.apache.hudi.Spark3DefaultSource@6b5aa9cf...
by listener DatahubSparkListener took 2.295850812s.
I think it’s worth pointing out that I use the Apache Hudi format to write the data. Is there something I’m missing here? Thanks for your help!
m
Hello, is your Spark application correct? It seems that the error comes from obtaining a null URL (the if line below is where the exception is thrown):
PLAN_TO_DATASET.put(SaveIntoDataSourceCommand.class, (p, ctx, datahubConfig) -> {
      SaveIntoDataSourceCommand cmd = (SaveIntoDataSourceCommand) p;

      Map<String, String> options = JavaConversions.mapAsJavaMap(cmd.options());
      String url = options.get("url"); // e.g. jdbc:postgresql://localhost:5432/sparktestdb
      // DatasetExtractor.java:147: when the write options contain no "url" entry,
      // url is null here and url.contains("jdbc") throws the NullPointerException above
      if (!url.contains("jdbc")) {
        return Optional.empty();
      }
f
Yes, the Spark application is correct and the data is written to S3 as expected. If it helps, here is the full log of the line starting with
SaveIntoDataSourceCommand org.apache.hudi.Spark3DefaultSource@6b5aa9cf
(I have simply changed the S3 URL to a fake bucket name and path; note that the options map has a path entry but no url):
SaveIntoDataSourceCommand org.apache.hudi.Spark3DefaultSource@6b5aa9cf, Map(hoodie.datasource.hive_sync.database -> data_lake_gold_history, hoodie.combine.before.insert -> true, hoodie.datasource.hive_sync.mode -> hms, hoodie.schema.on.read.enable -> true, path -> s3://my-bucket/my/path/, hoodie.datasource.write.precombine.field -> updated_at, hoodie.datasource.write.operation -> bulk_insert, hoodie.datasource.hive_sync.enable -> true, hoodie.datasource.write.recordkey.field -> _history_id, hoodie.table.name -> bank_account_evaluations, hoodie.table.type -> COPY_ON_WRITE, hoodie.datasource.write.table.name -> bank_account_evaluations, hoodie.combine.before.upsert -> true), Overwrite
a
CC @dazzling-judge-80093 does our Spark lineage work on Spark 3?
d
It should work with Spark 3, and this is definitely an issue, as a NullPointerException is super bad.
f
Is there any more information I can provide to help investigate this issue?
a
@freezing-garage-69869 we aren’t actively working on Spark lineage right now, so we haven’t been able to prioritize this fix yet. Is it still affecting your workflow?
f
Hi @astonishing-answer-96712, it's not really affecting our workflow, as we are currently exploring our options for an automated Spark lineage tool. We are planning to pick a tool and set it up in the coming weeks, probably at the beginning of January. DataHub was a good candidate, as it offers some very nice features around lineage. Anyway, if Spark is not among your current priorities, we will go with another tool (probably Spline https://absaoss.github.io/spline/) and keep an eye on DataHub to follow the progress on Spark lineage.
a
Consulting internally with the team about our ability to support it; I’ll let you know!
c
Hey @freezing-garage-69869, I was searching for Hudi integration comments and bumped into this thread. I am not sure if it is still relevant for you, but Hudi has had an integration with DataHub since version 0.11.0 via its sync tool classes: https://hudi.apache.org/docs/syncing_datahub/ (a rough sketch of the write options is below).
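For illustration only, here is a minimal PySpark sketch of a Hudi write with the DataHub sync tool enabled. The sync-tool option keys (hoodie.meta.sync.client.tool.class, hoodie.meta.sync.datahub.emitter.server) are assumptions based on the linked Hudi docs and should be verified there for your Hudi version; the table and field names are taken from the log earlier in this thread.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hudi-datahub-sync-sketch").getOrCreate()

# Placeholder input; any DataFrame with the key/precombine fields below would do.
df = spark.read.parquet("s3://my-bucket/source/")

hudi_options = {
    "hoodie.table.name": "bank_account_evaluations",
    "hoodie.datasource.write.recordkey.field": "_history_id",
    "hoodie.datasource.write.precombine.field": "updated_at",
    # Assumed option keys: let Hudi's meta sync push table metadata to DataHub.
    "hoodie.meta.sync.client.tool.class": "org.apache.hudi.sync.datahub.DataHubSyncTool",
    "hoodie.meta.sync.datahub.emitter.server": "http://10.5.0.37:8080",
}

(df.write.format("hudi")
   .options(**hudi_options)
   .mode("overwrite")
   .save("s3://my-bucket/my/path/"))
```

The idea, as I understand it, is that the metadata is pushed from Hudi's own sync path rather than from a Spark listener, which sidesteps the listener issue discussed above.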
w
Hey @freezing-garage-69869, I'm currently trying to ingest lineage from Spark to get data lineage for a Delta table. I'm running spark-submit like below:
spark-submit  \
  --packages org.apache.hadoop:hadoop-aws:3.2.3,io.delta:delta-core_2.12:2.4.0,io.acryl:datahub-spark-lineage:0.10.5-2rc7 \
  --conf "spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension" \
  --conf "spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog" \
  --conf "spark.datahub.rest.server=<http://localhost:8080>" \
  --conf "spark.extraListeners=datahub.spark.DatahubSparkListener" \
  spark-code/delta_join.py
The file delta_join.py looks like this:
from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("Spark Test") \
    .master("local[*]") \
    .getOrCreate()

df1 = spark.read.format("delta").load("./delta-table/table1")
df2 = spark.read.format("delta").load("./delta-table/table2")
joinedDF = df1.join(df2, ["id"])
joinedDF.write.format("delta").save("delta-table/table_join_lineage")
But I am getting errors; the logs are below:
23/07/25 22:01:42 ERROR Utils: uncaught error in thread spark-listener-group-shared, stopping SparkContext
java.lang.NoSuchMethodError: org.apache.spark.util.JsonProtocol.sparkEventToJson(Lorg/apache/spark/scheduler/SparkListenerEvent;)Lorg/json4s/JsonAST$JValue;
        at datahub.spark.DatahubSparkListener$SqlStartTask.<init>(DatahubSparkListener.java:87)
        at datahub.spark.DatahubSparkListener.processExecution(DatahubSparkListener.java:350)
        at datahub.spark.DatahubSparkListener.onOtherEvent(DatahubSparkListener.java:262)
        at org.apache.spark.scheduler.SparkListenerBus.doPostEvent(SparkListenerBus.scala:100)
        at org.apache.spark.scheduler.SparkListenerBus.doPostEvent$(SparkListenerBus.scala:28)
        at org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:37)
        at org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:37)
        at org.apache.spark.util.ListenerBus.postToAll(ListenerBus.scala:117)
        at org.apache.spark.util.ListenerBus.postToAll$(ListenerBus.scala:101)
        at org.apache.spark.scheduler.AsyncEventQueue.super$postToAll(AsyncEventQueue.scala:105)
        at org.apache.spark.scheduler.AsyncEventQueue.$anonfun$dispatch$1(AsyncEventQueue.scala:105)
        at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.java:23)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
        at org.apache.spark.scheduler.AsyncEventQueue.org$apache$spark$scheduler$AsyncEventQueue$$dispatch(AsyncEventQueue.scala:100)
        at org.apache.spark.scheduler.AsyncEventQueue$$anon$2.$anonfun$run$1(AsyncEventQueue.scala:96)
        at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1471)
        at org.apache.spark.scheduler.AsyncEventQueue$$anon$2.run(AsyncEventQueue.scala:96)
23/07/25 22:01:42 ERROR Utils: throw uncaught fatal error in thread spark-listener-group-shared
java.lang.NoSuchMethodError: org.apache.spark.util.JsonProtocol.sparkEventToJson(Lorg/apache/spark/scheduler/SparkListenerEvent;)Lorg/json4s/JsonAST$JValue;
        at datahub.spark.DatahubSparkListener$SqlStartTask.<init>(DatahubSparkListener.java:87)
        at datahub.spark.DatahubSparkListener.processExecution(DatahubSparkListener.java:350)
        at datahub.spark.DatahubSparkListener.onOtherEvent(DatahubSparkListener.java:262)
        at org.apache.spark.scheduler.SparkListenerBus.doPostEvent(SparkListenerBus.scala:100)
        at org.apache.spark.scheduler.SparkListenerBus.doPostEvent$(SparkListenerBus.scala:28)
        at org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:37)
        at org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:37)
        at org.apache.spark.util.ListenerBus.postToAll(ListenerBus.scala:117)
        at org.apache.spark.util.ListenerBus.postToAll$(ListenerBus.scala:101)
        at org.apache.spark.scheduler.AsyncEventQueue.super$postToAll(AsyncEventQueue.scala:105)
        at org.apache.spark.scheduler.AsyncEventQueue.$anonfun$dispatch$1(AsyncEventQueue.scala:105)
        at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.java:23)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
        at org.apache.spark.scheduler.AsyncEventQueue.org$apache$spark$scheduler$AsyncEventQueue$$dispatch(AsyncEventQueue.scala:100)
        at org.apache.spark.scheduler.AsyncEventQueue$$anon$2.$anonfun$run$1(AsyncEventQueue.scala:96)
        at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1471)
        at org.apache.spark.scheduler.AsyncEventQueue$$anon$2.run(AsyncEventQueue.scala:96)
Exception in thread "spark-listener-group-shared" java.lang.NoSuchMethodError: org.apache.spark.util.JsonProtocol.sparkEventToJson(Lorg/apache/spark/scheduler/SparkListenerEvent;)Lorg/json4s/JsonAST$JValue;
        at datahub.spark.DatahubSparkListener$SqlStartTask.<init>(DatahubSparkListener.java:87)
        at datahub.spark.DatahubSparkListener.processExecution(DatahubSparkListener.java:350)
        at datahub.spark.DatahubSparkListener.onOtherEvent(DatahubSparkListener.java:262)
        at org.apache.spark.scheduler.SparkListenerBus.doPostEvent(SparkListenerBus.scala:100)
        at org.apache.spark.scheduler.SparkListenerBus.doPostEvent$(SparkListenerBus.scala:28)
        at org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:37)
        at org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:37)
        at org.apache.spark.util.ListenerBus.postToAll(ListenerBus.scala:117)
        at org.apache.spark.util.ListenerBus.postToAll$(ListenerBus.scala:101)
        at org.apache.spark.scheduler.AsyncEventQueue.super$postToAll(AsyncEventQueue.scala:105)
        at org.apache.spark.scheduler.AsyncEventQueue.$anonfun$dispatch$1(AsyncEventQueue.scala:105)
        at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.java:23)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
        at org.apache.spark.scheduler.AsyncEventQueue.org$apache$spark$scheduler$AsyncEventQueue$$dispatch(AsyncEventQueue.scala:100)
        at org.apache.spark.scheduler.AsyncEventQueue$$anon$2.$anonfun$run$1(AsyncEventQueue.scala:96)
        at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1471)
        at org.apache.spark.scheduler.AsyncEventQueue$$anon$2.run(AsyncEventQueue.scala:96)
23/07/25 22:01:42 INFO SparkContext: SparkContext is stopping with exitCode 0.
23/07/25 22:01:42 INFO SparkUI: Stopped Spark web UI at http://192.168.0.108:4040
23/07/25 22:01:42 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
23/07/25 22:01:42 INFO MemoryStore: MemoryStore cleared
23/07/25 22:01:42 INFO BlockManager: BlockManager stopped
23/07/25 22:01:42 INFO BlockManagerMaster: BlockManagerMaster stopped
23/07/25 22:01:42 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
23/07/25 22:01:42 INFO SparkContext: Successfully stopped SparkContext
Traceback (most recent call last):
  File "/home/leo/Projects/DataHub-LakeHouse/spark-code/delta_join.py", line 16, in <module>
    df1 = spark.read.format("delta").load("./delta-table/table1")
  File "/opt/spark-3.4.0-bin-hadoop3/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 300, in load
  File "/opt/spark-3.4.0-bin-hadoop3/python/lib/py4j-0.10.9.7-src.zip/py4j/java_gateway.py", line 1322, in __call__
  File "/opt/spark-3.4.0-bin-hadoop3/python/lib/pyspark.zip/pyspark/errors/exceptions/captured.py", line 169, in deco
  File "/opt/spark-3.4.0-bin-hadoop3/python/lib/py4j-0.10.9.7-src.zip/py4j/protocol.py", line 326, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o33.load.
: java.lang.IllegalStateException: SparkContext has been shutdown
How can I fix it? Thank you.
a
Hi @wonderful-coat-54946, did you find a solution here? I’m also seeing the same issue:
Uncaught exception in thread spark-listener-group-shared! java.lang.NoSuchMethodError:…
Full message below; it is similar, but we are using Spark through Databricks:
23/09/08 13:28:13 INFO AsyncEventQueue: Process of event SparkListenerSQLExecutionStart(executionId=0, ...) by listener DatahubSparkListener took 1.996030788s.
23/09/08 13:28:13 ERROR Utils: uncaught error in thread spark-listener-group-shared, stopping SparkContext
java.lang.NoSuchMethodError: org.apache.spark.util.JsonProtocol.sparkEventToJson(Lorg/apache/spark/scheduler/SparkListenerEvent;)Lorg/json4s/JsonAST$JValue;
	at datahub.spark.DatahubSparkListener$SqlStartTask.<init>(DatahubSparkListener.java:87)
	at datahub.spark.DatahubSparkListener.processExecution(DatahubSparkListener.java:350)
	at datahub.spark.DatahubSparkListener.onOtherEvent(DatahubSparkListener.java:262)
	at org.apache.spark.scheduler.SparkListenerBus.doPostEvent(SparkListenerBus.scala:102)
	at org.apache.spark.scheduler.SparkListenerBus.doPostEvent$(SparkListenerBus.scala:28)
	at org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:42)
	at org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:42)
	at org.apache.spark.util.ListenerBus.postToAll(ListenerBus.scala:118)
	at org.apache.spark.util.ListenerBus.postToAll$(ListenerBus.scala:102)
	at org.apache.spark.scheduler.AsyncEventQueue.super$postToAll(AsyncEventQueue.scala:114)
	at org.apache.spark.scheduler.AsyncEventQueue.$anonfun$dispatch$1(AsyncEventQueue.scala:114)
	at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.java:23)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
	at org.apache.spark.scheduler.AsyncEventQueue.org$apache$spark$scheduler$AsyncEventQueue$$dispatch(AsyncEventQueue.scala:109)
	at org.apache.spark.scheduler.AsyncEventQueue$$anon$2.$anonfun$run$1(AsyncEventQueue.scala:105)
	at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1655)
	at org.apache.spark.scheduler.AsyncEventQueue$$anon$2.run(AsyncEventQueue.scala:105)
23/09/08 13:28:13 ERROR Utils: throw uncaught fatal error in thread spark-listener-group-shared
java.lang.NoSuchMethodError: org.apache.spark.util.JsonProtocol.sparkEventToJson(Lorg/apache/spark/scheduler/SparkListenerEvent;)Lorg/json4s/JsonAST$JValue;
	at datahub.spark.DatahubSparkListener$SqlStartTask.<init>(DatahubSparkListener.java:87)
	at datahub.spark.DatahubSparkListener.processExecution(DatahubSparkListener.java:350)
	at datahub.spark.DatahubSparkListener.onOtherEvent(DatahubSparkListener.java:262)
	at org.apache.spark.scheduler.SparkListenerBus.doPostEvent(SparkListenerBus.scala:102)
	at org.apache.spark.scheduler.SparkListenerBus.doPostEvent$(SparkListenerBus.scala:28)
	at org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:42)
	at org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:42)
	at org.apache.spark.util.ListenerBus.postToAll(ListenerBus.scala:118)
	at org.apache.spark.util.ListenerBus.postToAll$(ListenerBus.scala:102)
	at org.apache.spark.scheduler.AsyncEventQueue.super$postToAll(AsyncEventQueue.scala:114)
	at org.apache.spark.scheduler.AsyncEventQueue.$anonfun$dispatch$1(AsyncEventQueue.scala:114)
	at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.java:23)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
	at org.apache.spark.scheduler.AsyncEventQueue.org$apache$spark$scheduler$AsyncEventQueue$$dispatch(AsyncEventQueue.scala:109)
	at org.apache.spark.scheduler.AsyncEventQueue$$anon$2.$anonfun$run$1(AsyncEventQueue.scala:105)
	at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1655)
	at org.apache.spark.scheduler.AsyncEventQueue$$anon$2.run(AsyncEventQueue.scala:105)
23/09/08 13:28:13 INFO HiveServer2: Shutting down HiveServer2
23/09/08 13:28:13 ERROR DatabricksMain$DBUncaughtExceptionHandler: Uncaught exception in thread spark-listener-group-shared!
java.lang.NoSuchMethodError: org.apache.spark.util.JsonProtocol.sparkEventToJson(Lorg/apache/spark/scheduler/SparkListenerEvent;)Lorg/json4s/JsonAST$JValue;
	at datahub.spark.DatahubSparkListener$SqlStartTask.<init>(DatahubSparkListener.java:87)
	at datahub.spark.DatahubSparkListener.processExecution(DatahubSparkListener.java:350)
	at datahub.spark.DatahubSparkListener.onOtherEvent(DatahubSparkListener.java:262)
	at org.apache.spark.scheduler.SparkListenerBus.doPostEvent(SparkListenerBus.scala:102)
	at org.apache.spark.scheduler.SparkListenerBus.doPostEvent$(SparkListenerBus.scala:28)
	at org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:42)
	at org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:42)
	at org.apache.spark.util.ListenerBus.postToAll(ListenerBus.scala:118)
	at org.apache.spark.util.ListenerBus.postToAll$(ListenerBus.scala:102)
	at org.apache.spark.scheduler.AsyncEventQueue.super$postToAll(AsyncEventQueue.scala:114)
	at org.apache.spark.scheduler.AsyncEventQueue.$anonfun$dispatch$1(AsyncEventQueue.scala:114)
	at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.java:23)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
	at org.apache.spark.scheduler.AsyncEventQueue.org$apache$spark$scheduler$AsyncEventQueue$$dispatch(AsyncEventQueue.scala:109)
	at org.apache.spark.scheduler.AsyncEventQueue$$anon$2.$anonfun$run$1(AsyncEventQueue.scala:105)
	at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1655)
	at org.apache.spark.scheduler.AsyncEventQueue$$anon$2.run(AsyncEventQueue.scala:105)
We’re on DBR 12.2, Spark 3.2, which should have JsonProtocol.sparkEventToJson available.
Looks like a Spark version incompatibility issue. Found a thread here: https://datahubspace.slack.com/archives/C029A3M079U/p1692781825178859