Wei Wang
11/08/2022, 7:40 PM
Wei Wang
11/08/2022, 7:40 PM
File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/bundle_processor.py", line 1596, in _create_pardo_operation
dofn_data = pickler.loads(serialized_fn)
File "/usr/local/lib/python3.7/site-packages/apache_beam/internal/pickler.py", line 52, in loads
encoded, enable_trace=enable_trace, use_zlib=use_zlib)
File "/usr/local/lib/python3.7/site-packages/apache_beam/internal/dill_pickler.py", line 289, in loads
return dill.loads(s)
File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 275, in loads
return load(file, ignore, **kwds)
File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 270, in load
return Unpickler(file, ignore=ignore, **kwds).load()
File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 472, in load
obj = StockUnpickler.load(self)
TypeError: code() takes at most 15 arguments (16 given)
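That last TypeError is the classic symptom of a Python version mismatch between the environment that pickled the DoFn and the worker unpickling it: CPython 3.8 added a parameter to the code() constructor, so pickles produced under 3.8+ fail on a 3.7 worker exactly like this. A minimal fail-fast guard, purely as a sketch (the function and its wording are not from the original thread):

```python
import sys

# Hypothetical guard: raise a readable error up front instead of failing deep
# inside dill with "code() takes at most 15 arguments (16 given)".
def check_worker_python(expected=(3, 7)):
    actual = sys.version_info[:2]
    if actual != expected:
        raise RuntimeError(
            "Worker runs Python %d.%d but the pipeline was built for %d.%d; "
            "align the SDK container image with the submission environment."
            % (actual + expected)
        )
```

The real fix is to make the submission environment and the worker/SDK container use the same Python minor version.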
Krish Narukulla
11/08/2022, 8:46 PM
scylladb. Should I go the route of JDBC driver support, or a custom connector like HBase?
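For the JDBC route, the table would be declared with a DDL along these lines. This is a minimal sketch only: the columns, URL, and table name are placeholders, and ScyllaDB natively speaks CQL, so a JDBC-compatible driver would have to be provided on the classpath:

```python
# Minimal sketch of a Flink SQL table backed by the JDBC connector.
# All specifics (columns, URL, table name) are placeholders, not a tested
# Scylla setup; in a real job this string is passed to
# TableEnvironment.execute_sql(ddl).
ddl = """
CREATE TABLE scylla_sink (
    id BIGINT,
    name STRING,
    PRIMARY KEY (id) NOT ENFORCED
) WITH (
    'connector' = 'jdbc',
    'url' = 'jdbc:driver-specific-url',
    'table-name' = 'my_table'
)
"""
```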
https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/jdbc/
https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/hbase/
Simi Ily
11/09/2022, 3:25 AM
Emmanuel Leroy
11/09/2022, 5:22 AM
Echo Lee
11/09/2022, 8:54 AM
ConradJam
11/09/2022, 10:23 AM
Victor Costa
11/09/2022, 12:03 PM
PulsarSource working, but I'm getting this error:
Caused by: java.lang.ClassNotFoundException: org.apache.pulsar.client.api.MessageId
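That ClassNotFoundException generally means the Pulsar client classes are not on the job's classpath. A sketch of the usual shape of the fix in PyFlink terms — the jar name, version, and path below are assumptions, not taken from the thread:

```python
# Sketch: a connector jar bundling org.apache.pulsar.client.api.MessageId has
# to be visible to the job. In PyFlink this is typically wired up through the
# pipeline.jars option (or env.add_jars); path and version are placeholders.
pipeline_config = {
    "pipeline.jars": "file:///opt/flink/lib/flink-sql-connector-pulsar-1.16.0.jar",
}
```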
More details in the 🧵
Sahak Maloyan
11/09/2022, 1:40 PM
Andy Mei
11/09/2022, 5:42 PM
RICHARD JOY
11/09/2022, 8:18 PM
Nick
11/09/2022, 9:26 PM
Nick
11/09/2022, 9:29 PM
Nick
11/09/2022, 9:34 PM
Dan Andreescu
11/09/2022, 9:47 PM
Emmanuel Leroy
11/10/2022, 12:45 AM
Normal JobStatusChanged 5m42s Job Job status changed from CREATED to RUNNING
Normal JobStatusChanged 2m33s Job Job status changed from RUNNING to FINISHED
Normal SpecChanged 77s (x6 over 2m38s) JobManagerDeployment SCALE change(s) detected (FlinkSessionJobSpec[job.parallelism=2] differs from FlinkSessionJobSpec[job.parallelism=4]), starting reconciliation.
Normal Suspended 77s (x6 over 2m38s) JobManagerDeployment Suspending existing deployment.
and the operator logs say:
org.apache.flink.kubernetes.operator.exception.ReconciliationException: java.util.concurrent.ExecutionException: org.apache.flink.runtime.messages.FlinkJobNotFoundException: Could not find Flink job (ffffffff9751e3e50000000000000001)
is this expected?
Mingliang Liu
11/10/2022, 1:31 AM
For Table / SQL users, the new module flink-table-planner-loader replaces flink-table-planner_2.12 and avoids the need for a Scala suffix. For backwards compatibility, users can still swap it with flink-table-planner_2.12 located in opt/. flink-table-uber has been split into flink-table-api-java-uber, flink-table-planner(-loader), and flink-table-runtime. Scala users need to explicitly add a dependency to flink-table-api-scala or flink-table-api-scala-bridge.
It makes me think I can add flink-table-api-scala and/or flink-table-api-scala-bridge to my application, which already pulls in Scala 2.13. However, I cannot get the dependency resolved, as the above two packages do not exist in the Maven Central repository. Did I miss anything? Anyone using Scala 2.13? Thanks
Tan Trinh
11/10/2022, 4:15 AM
Owen Lee
11/10/2022, 9:10 AM
void resumeTransaction(long producerId, short epoch) {
    Object topicPartitionBookkeeper =
        getField(transactionManager, "topicPartitionBookkeeper");
    ...
}
Sandeep Kongathi
11/10/2022, 3:12 PM
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.ValidationException: Could not find any factory for identifier 'kafka' that implements 'org.apache.flink.table.factories.DynamicTableFactory' in the classpath.
Available factory identifiers are:
blackhole
datagen
filesystem
print
python-input-format
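The ValidationException itself lists which table factories actually made it onto the classpath, so 'kafka' being absent points at a missing connector jar — for 1.16 that is normally a matching flink-sql-connector-kafka jar dropped into lib/ (exact jar name assumed, not from the thread). A small sketch of reading that list out of the error text:

```python
# Sketch: extract the factory identifiers listed after the marker line in the
# ValidationException, to confirm which connectors are on the classpath.
def available_factories(exception_text):
    """Return the factory identifiers listed after the marker line."""
    lines = exception_text.splitlines()
    marker = "Available factory identifiers are:"
    idx = next(i for i, line in enumerate(lines) if marker in line)
    return [line.strip() for line in lines[idx + 1:] if line.strip()]
```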
It was working fine in 1.15.0; is there any external jar I should include?
Dylan Fontana
11/10/2022, 4:45 PM
addGroup(key, value). Similarly, the same method is used for adding user scope. It's also mentioned that user variables affect the outcome of getScopeComponents.
Is it possible to add a user variable without affecting the scope components of a metric?
I'm finding (when trying to register counters for different categories) each counter reports as a different metric (due to the scope component including the variable) and gets tagged with the variable. I instead want them to be the same metric but different tags.
For example:
Map<Integer, Counter> counters = new HashMap<>();
for (int i = 0; i < 10; i++) {
    String category = String.format("cat-%d", i);
    counters.put(
        i,
        context.getMetricGroup()
            .addGroup("category", category)
            .counter("category_sums")
    );
}
// Results in 10 metrics named:
//   flink.operator.category.cat-{i}.category_sums
// each with tag {category: cat-i}
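To make the observed behavior concrete: each addGroup(key, value) call contributes both the key and the value to the scope components, and the reported identifier is just those components joined with the delimiter plus the metric name. A toy model (illustrative only, not Flink's actual MetricGroup code) of why every category value becomes a distinct metric name:

```python
# Toy model of metric identifier composition: addGroup("category", "cat-3")
# appends two scope components ("category" and "cat-3"), so the variable's
# value ends up inside the metric's fully-qualified name.
def metric_identifier(scope_components, name, delimiter="."):
    return delimiter.join(list(scope_components) + [name])

ident = metric_identifier(
    ["flink", "operator", "category", "cat-3"], "category_sums"
)
# → "flink.operator.category.cat-3.category_sums"
```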
Kevin Lam
11/10/2022, 7:03 PM
张思航
11/10/2022, 8:10 PM
Slackbot
11/10/2022, 11:58 PM
Victor Costa
11/11/2022, 12:32 AM
TypeError: Could not found the Java class 'org.apache.flink.avro.shaded.org.apache.avro.Schema.Parser'
More details in the 🧵
Jackwangcs
11/11/2022, 3:24 AM
INFO org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Join[292] -> Calc[293] -> ConstraintEnforcer[294] (13/48) (0e3956d4cb2a2fb37397f401d50eec99_084fe6556ee4c166eb77ae58879c63be_12_0) switched from RUNNING to FAILED on container_1667978922771_0006_01_000004 @ host-name (dataPort=33409).
java.lang.NullPointerException: null
at StreamExecCalc$18110.processElement_split890(Unknown Source) ~[?:?]
at StreamExecCalc$18110.processElement(Unknown Source) ~[?:?]
at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:82) ~[flink-dist-1.16.0.jar:1.16.0]
at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:57) ~[flink-dist-1.16.0.jar:1.16.0]
at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:29) ~[flink-dist-1.16.0.jar:1.16.0]
at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56) ~[flink-dist-1.16.0.jar:1.16.0]
at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29) ~[flink-dist-1.16.0.jar:1.16.0]
at org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51) ~[flink-dist-1.16.0.jar:1.16.0]
at org.apache.flink.table.runtime.operators.join.stream.StreamingJoinOperator.outputNullPadding(StreamingJoinOperator.java:334) ~[flink-table-runtime-1.16.0.jar:1.16.0]
at org.apache.flink.table.runtime.operators.join.stream.StreamingJoinOperator.processElement(StreamingJoinOperator.java:219) ~[flink-table-runtime-1.16.0.jar:1.16.0]
at org.apache.flink.table.runtime.operators.join.stream.StreamingJoinOperator.processElement1(StreamingJoinOperator.java:124) ~[flink-table-runtime-1.16.0.jar:1.16.0]
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:780) ~[flink-dist-1.16.0.jar:1.16.0]
at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:935) ~[flink-dist-1.16.0.jar:1.16.0]
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:914) ~[flink-dist-1.16.0.jar:1.16.0]
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:728) ~[flink-dist-1.16.0.jar:1.16.0]
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:550) ~[flink-dist-1.16.0.jar:1.16.0]
at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_342]
Do you guys have any suggestions about how to debug it? Any help is appreciated.
Gaurav Miglani
11/11/2022, 5:00 AM
taskmanager.numberOfTaskSlots: "16"
and parallelism: 80; each TM has 32 GB of memory and 12 CPUs (c5n.4xlarge). I have tried tuning the parallelism, but it is not working. I followed https://docs.immerok.cloud/docs/how-to-guides/development/read-from-apache-kafka-write-to-parquet-files-with-apache-flink/
Jim
11/11/2022, 6:44 AM
Jirawech Siwawut
11/11/2022, 12:09 PM
Jaume
11/11/2022, 2:15 PM
java.lang.RuntimeException: Can not retract a non-existent record. This should never happen
error in a Flink job using Table SQL. Our Flink version is 1.13.6.
❓ Does anyone know in which conditions this exception can happen?
🧵 More info about job & exception in thread ⤵️