Andrej Mitrovic
06/02/2023, 8:31 AM
statefun-rust's last commit: https://flink.apache.org/2021/04/15/stateful-functions-3.0.0-remote-functions-front-and-center/. But I don't see any mention of API breaking changes.
Mourad HARMIM
06/02/2023, 11:45 AM
Yuliya Bogdan
06/02/2023, 12:32 PM
I'm using the FileSystem connector to read avro files from a path which may contain non-avro files that have to be filtered out (no partitioning), which I assume is a common case. Is there an easy way to do it?
I was looking for a way to pass a custom file filter predicate to the default NonSplittingRecursiveEnumerator, or to specify a FileEnumerator provider on the FileSource, without the need to implement a custom connector, but it seems to be tricky because FileSource.FileSourceBuilder, which has a setter for the FileEnumerator, is instantiated in a private method in FileSystemTableSource.
Jirawech Siwawut
06/02/2023, 4:10 PM
Tarek Ajjour
06/02/2023, 6:15 PM
Vitor Leal
06/03/2023, 5:56 PM
Hygor Knust
06/03/2023, 8:58 PM
I have a job running with table.exec.resource.default-parallelism=16. It's performing well generally, but there's a join operation at the end of the job graph which seems to be causing a bottleneck.
Upon retrieving the job INFO, I noticed that it's using the GLOBAL ship strategy and it seems to be forcing the operator to maintain a parallelism of 1.
{
  "id": 96,
  "type": "Join[93]",
  "pact": "Operator",
  "contents": "...",
  "parallelism": 1,
  "predecessors": [
    {
      "id": 88,
      "ship_strategy": "GLOBAL",
      "side": "second"
    },
    {
      "id": 94,
      "ship_strategy": "GLOBAL",
      "side": "second"
    }
  ]
}
I’ve tried searching through the documentation for more information on this, but I haven’t found anything yet.
I would like to know why it's doing this, and if there is a way to configure it.
Thank you!
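For reference, one common cause of a GLOBAL exchange (possibly not this one): when a join has no equality condition, the planner cannot hash-partition the inputs, so it falls back to a single-subtask join fed by GLOBAL ship strategies, regardless of table.exec.resource.default-parallelism. A sketch of the difference, with illustrative table names:

// Non-equi join: nothing to hash on, so the planner may run it
// with parallelism 1 behind GLOBAL exchanges.
tableEnv.executeSql("SELECT * FROM a JOIN b ON a.ts > b.ts");

// Equi-join: inputs can be hash-partitioned (HASH ship strategy),
// and the configured default parallelism applies.
tableEnv.executeSql("SELECT * FROM a JOIN b ON = WHERE a.ts > b.ts");

If the query does have equi-join keys, checking whether one input has been reduced to a single row (e.g. by a global aggregation) is another angle.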
Hangyu Wang
06/05/2023, 2:53 AM
tableEnv.executeSql("CREATE TABLE metric (\n" +
        "  metric_name STRING,\n" +
        "  function_name STRING,\n" +
        "  line_number INT\n" +
        ") WITH (\n" +
        "  'connector' = 'filesystem',\n" +
        "  'path' = '" + parameter.get("input_file") + "',\n" +
        "  'format' = 'protobuf',\n" +
        "  'protobuf.message-class-name' = 'MetricTest',\n" +
        "  'protobuf.ignore-parse-errors' = 'true'\n" +
        ")");
Table t = tableEnv.from("metric");
t.execute().print();
Here is the result:
~/Downloads/flink-1.17.0/bin/flink run target/import-metric-event-1.0-SNAPSHOT.jar --input_file metric_test
Job has been submitted with JobID 9128383c4d60cccaa252145478f31ee5
Empty set
And here is the protobuf data (text format):
metric_name: "test_metric_name"
function_name: "test_function_name"
line_number: 111
And the .proto definition:
syntax = "proto3";
package metric;

message MetricTest {
  string metric_name = 1;
  string function_name = 2;
  uint32 line_number = 3;
}
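One thing stands out: the file shown is protobuf text format, while Flink's 'protobuf' format decodes the binary wire format, and 'protobuf.ignore-parse-errors' = 'true' silently drops rows it cannot parse, which would produce exactly an empty set. A sketch that writes a binary-encoded message instead, assuming protoc generated an outer class Metric from metric.proto (that name is an assumption); note too that raw binary messages are not self-delimiting, so how the filesystem format splits records is worth verifying separately:

import java.io.FileOutputStream;

public class WriteMetricTest {
    public static void main(String[] args) throws Exception {
        // Metric.MetricTest is the protoc-generated class (name assumed).
        Metric.MetricTest msg = Metric.MetricTest.newBuilder()
                .setMetricName("test_metric_name")
                .setFunctionName("test_function_name")
                .setLineNumber(111)
                .build();
        try (FileOutputStream out = new FileOutputStream("metric_test")) {
            msg.writeTo(out); // binary wire format, not text format
        }
    }
}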
Sumit Khaitan
06/05/2023, 7:44 AM
Hangyu Wang
06/05/2023, 10:02 AM
Caused by: java.lang.NoSuchMethodError: 'java.lang.Long org.apache.iceberg.util.PropertyUtil.propertyAsNullableLong(java.util.Map, java.lang.String)'
at org.apache.iceberg.aws.AwsProperties.<init>(AwsProperties.java:744) ~[iceberg-aws-1.1.0.jar:?]
at org.apache.iceberg.aws.s3.S3FileIO.initialize(S3FileIO.java:355) ~[iceberg-aws-1.1.0.jar:?]
at org.apache.iceberg.CatalogUtil.loadFileIO(CatalogUtil.java:295) ~[iceberg-flink-runtime-1.14-1.0.0.jar:?]
at org.apache.iceberg.jdbc.JdbcCatalog.initialize(JdbcCatalog.java:98) ~[iceberg-flink-runtime-1.14-1.0.0.jar:?]
at autox.sim.eval.datalake.CatalogUtil.initCatalog(CatalogUtil.java:26) ~[?:?]
at autox.sim.eval.datalake.ImportMetricEvent.main(ImportMetricEvent.java:60) ~[?:?]
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) ~[?:?]
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) ~[?:?]
at java.lang.reflect.Method.invoke(Unknown Source) ~[?:?]
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355) ~[flink-dist-1.17.0.jar:1.17.0]
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222) ~[flink-dist-1.17.0.jar:1.17.0]
at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:105) ~[flink-dist-1.17.0.jar:1.17.0]
at org.apache.flink.client.deployment.application.ApplicationDispatcherBootstrap.runApplicationEntryPoint(ApplicationDispatcherBootstrap.java:301) ~[flink-dist-1.17.0.jar:1.17.0]
... 13 more
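The jars in this trace look mismatched: AwsProperties comes from iceberg-aws-1.1.0.jar while CatalogUtil and JdbcCatalog come from iceberg-flink-runtime-1.14-1.0.0.jar, and a PropertyUtil method missing from the older release would produce exactly this NoSuchMethodError. Aligning the two Iceberg versions is a plausible fix; the coordinates below are illustrative (newer Iceberg releases also ship an iceberg-flink-runtime-1.17 matching the Flink 1.17 cluster in the log):

<!-- Keep all Iceberg artifacts on the same version. -->
<dependency>
    <groupId>org.apache.iceberg</groupId>
    <artifactId>iceberg-flink-runtime-1.14</artifactId>
    <version>1.1.0</version>
</dependency>
<dependency>
    <groupId>org.apache.iceberg</groupId>
    <artifactId>iceberg-aws</artifactId>
    <version>1.1.0</version>
</dependency>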
hueiyuan su
06/05/2023, 3:08 PM
Raghunadh Nittala
06/06/2023, 3:50 AM
Ammar Master
06/06/2023, 4:08 AM
Is there a way to set the parallelism when converting a DataStream to a Table? I have tried disabling chaining for the upstream operators and setting table.exec.resource.default-parallelism, but it doesn't respect it.
sziwei
06/06/2023, 5:28 AM
Hangyu Wang
06/06/2023, 9:31 AM
Caused by: java.lang.NoClassDefFoundError: org/apache/kafka/clients/consumer/ConsumerRecord
at java.base/java.lang.Class.getDeclaredMethods0(Native Method)
at java.base/java.lang.Class.privateGetDeclaredMethods(Class.java:3166)
at java.base/java.lang.Class.getDeclaredMethod(Class.java:2473)
at java.base/java.io.ObjectStreamClass.getPrivateMethod(ObjectStreamClass.java:1452)
at java.base/java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:381)
at java.base/java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:355)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:355)
at java.base/java.io.ObjectStreamClass$Caches$1.computeValue(ObjectStreamClass.java:98)
at java.base/java.io.ObjectStreamClass$Caches$1.computeValue(ObjectStreamClass.java:95)
at java.base/java.io.ClassCache$1.computeValue(ClassCache.java:73)
at java.base/java.io.ClassCache$1.computeValue(ClassCache.java:70)
at java.base/java.lang.ClassValue.getFromHashMap(ClassValue.java:228)
at java.base/java.lang.ClassValue.getFromBackup(ClassValue.java:210)
at java.base/java.lang.ClassValue.get(ClassValue.java:116)
at java.base/java.io.ClassCache.get(ClassCache.java:84)
at java.base/java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:336)
at java.base/java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:542)
at java.base/java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:2020)
at java.base/java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1870)
at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2201)
at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496)
at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2390)
at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496)
at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2390)
at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:489)
at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:447)
at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:534)
at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:522)
at org.apache.flink.util.InstantiationUtil.readObjectFromConfig(InstantiationUtil.java:476)
at org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:383)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:166)
at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.<init>(RegularOperatorChain.java:60)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:688)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:675)
at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:952)
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:921)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:745)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.ClassNotFoundException: org.apache.kafka.clients.consumer.ConsumerRecord
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
... 45 more
And yes, I have put the kafka-client.jar into flink/lib/.
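The failure happens while the operator is deserialized at restore time, so ConsumerRecord has to be visible to that classloader. Worth double-checking that the jar in lib/ is the official kafka-clients jar, that it actually contains the class, and that every TaskManager received it and was restarted; a quick check, with an illustrative path:

jar tf /path/to/flink/lib/kafka-clients-*.jar | grep ConsumerRecord
# expected output includes:
# org/apache/kafka/clients/consumer/ConsumerRecord.class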
Viktor Hrtanek
06/06/2023, 2:15 PM
I get AttributeError: 'Test' object has no attribute '_set_takes_row_as_input', and when I add it, the error says TypeError: 'Test' object is not callable
def method1(input):
    return "#".join([input, input])

class Test(MapFunction):
    def __init__(self, testValue):
        self.key = None
        self.testValue = testValue

    # def _set_takes_row_as_input(self):
    #     self._takes_row_as_input = True
    #     return self

    def map(self, data, testValue, output_type=DataTypes.ROW([DataTypes.FIELD("FIRST_NAME", DataTypes.STRING()),
                                                              DataTypes.FIELD("FIRST_NAM", DataTypes.STRING()),
                                                              DataTypes.FIELD("FIRST_NA", DataTypes.STRING())])):
        return Row(
            FIRST_NAME=data['FIRST_NAME'],
            FIRST_NAM=data['EMAIL'],
            FIRST_NA=method1(self.testValue)
        )
source_table = table_env.from_path("table")
source_table.map(Test(testValue="DummyValue")).execute().print()
Then I tried with a udf, but no success. The error says TypeError: __call__() got an unexpected keyword argument 'dummyVariable'
@udf(result_type=DataTypes.ROW([DataTypes.FIELD("FIELD_1", DataTypes.STRING()),
                                DataTypes.FIELD("FIELD_2", DataTypes.STRING()),
                                DataTypes.FIELD("FIELD_3", DataTypes.STRING())]))
def func1(data: Row, dummyVariable: str) -> Row:
    return Row(
        FIELD_1=data['FIRST_NAME'],
        FIELD_2=data['EMAIL'],
        FIELD_3=method1(dummyVariable)
    )
source_table = table_env.from_path("table")
source_table.map(func1(dummyVariable = "dummyValue")).execute().print()
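Table.map() expects a udf that is called with a single Row, which would explain both errors: a DataStream-style MapFunction is not accepted, and extra keyword arguments cannot be passed at call time. One workaround sketch, binding the extra value via a closure (field names taken from the snippets above):

from pyflink.common import Row
from pyflink.table import DataTypes
from pyflink.table.udf import udf

def make_mapper(test_value):
    # Bind test_value in a closure instead of passing it to map().
    @udf(result_type=DataTypes.ROW([DataTypes.FIELD("FIELD_1", DataTypes.STRING()),
                                    DataTypes.FIELD("FIELD_2", DataTypes.STRING()),
                                    DataTypes.FIELD("FIELD_3", DataTypes.STRING())]))
    def mapper(data: Row) -> Row:
        return Row(
            FIELD_1=data['FIRST_NAME'],
            FIELD_2=data['EMAIL'],
            FIELD_3=method1(test_value))
    return mapper

# source_table.map(make_mapper("dummyValue")).execute().print()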
Iqbal Singh
06/06/2023, 5:20 PM
Daniel Packard
06/06/2023, 5:34 PM
Amir Hossein Sharifzadeh
06/06/2023, 7:27 PM
Caused by: java.lang.Exception: Exception while creating StreamOperatorStateContext.
at org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:256)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:256)
at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:734)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:709)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:675)
at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:952)
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:921)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:745)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.flink.util.FlinkException: Could not restore keyed state backend for StreamMap_e4cb5bc1ac70d352198b4a1200f6b296_(4/12) from any of the 1 provided restore options.
at org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:160)
at org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.keyedStatedBackend(StreamTaskStateInitializerImpl.java:353)
at org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:165)
... 11 more
Caused by: java.io.IOException: Could not load the native RocksDB library
at org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend.ensureRocksDBIsLoaded(EmbeddedRocksDBStateBackend.java:977)
at org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend.createKeyedStateBackend(EmbeddedRocksDBStateBackend.java:448)
at org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend.createKeyedStateBackend(EmbeddedRocksDBStateBackend.java:99)
at org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.lambda$keyedStatedBackend$1(StreamTaskStateInitializerImpl.java:336)
at org.apache.flink.streaming.api.operators.BackendRestorerProcedure.attemptCreateAndRestore(BackendRestorerProcedure.java:168)
at org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:135)
... 13 more
Caused by: java.lang.UnsatisfiedLinkError: C:\Users\...\AppData\Local\Temp\minicluster_8800a67d6f12e85c690a30d4b08db590\tm_0\tmp\rocksdb-lib-072fe938ffd58de9d936923ddbb55280\librocksdbjni-win64.dll: Can't find dependent libraries
at java.base/java.lang.ClassLoader$NativeLibrary.load0(Native Method)
at java.base/java.lang.ClassLoader$NativeLibrary.load(ClassLoader.java:2432)
at java.base/java.lang.ClassLoader$NativeLibrary.loadLibrary(ClassLoader.java:2489)
at java.base/java.lang.ClassLoader.loadLibrary0(ClassLoader.java:2689)
at java.base/java.lang.ClassLoader.loadLibrary(ClassLoader.java:2619)
at java.base/java.lang.Runtime.load0(Runtime.java:765)
at java.base/java.lang.System.load(System.java:1835)
at org.rocksdb.NativeLibraryLoader.loadLibraryFromJar(NativeLibraryLoader.java:102)
at org.rocksdb.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:82)
at org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend.ensureRocksDBIsLoaded(EmbeddedRocksDBStateBackend.java:951)
This is also my Maven dependency:
<dependency>
    <groupId>com.ververica</groupId>
    <artifactId>frocksdbjni</artifactId>
    <version>6.20.3-ververica-2.0</version>
</dependency>
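Two things may be worth checking here. "Can't find dependent libraries" for librocksdbjni-win64.dll on Windows frequently means a missing system dependency, most often the Microsoft Visual C++ redistributable, rather than a problem with the jar itself. Also, depending on frocksdbjni directly is normally unnecessary and can clash with the copy Flink bundles; the usual dependency is the state backend itself (version shown for the Flink 1.17.0 in this thread):

<!-- The state backend bundles a matching frocksdbjni; a direct
     frocksdbjni dependency is usually not needed. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-statebackend-rocksdb</artifactId>
    <version>1.17.0</version>
    <scope>provided</scope>
</dependency>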
Tan Kim
06/07/2023, 1:20 AM
"json": {
  "eventName": "event-ABC",
  ...
}
The source is json format and sink is avro format with confluent-schema registry.
Here is my code.
tableEnv.executeSql("CREATE TABLE source_table (..) WITH (
'connector'='kafka',
'format'='json',
)")
tableEnv.executeSql("CREATE TABLE sink_table WITH (
'connector'='kafka',
'format'='avro-confluent',
..
) AS SELECT * FROM source_table")
If I run this code without ‘value.avro-confluent.subject’ configuration, the record is something like this.
{
  "json": {
    "org.apache.flink.avro.generated.record_json": {
      "eventName": {
        "string": "event-ABC"
      },
      ..
    }
  }
}
I don’t understand why flink-avro inserts “org.apache.flink.avro.generated.record_json” between json and eventName.
Also eventName is not just ‘event-ABC’ but string: event-ABC.
Is this a bug? Or something I missed?
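For what it's worth, this looks like standard Avro JSON encoding rather than a bug: Flink derives nullable fields as Avro unions, the JSON encoding of a union wraps the value in an object keyed by its type (hence "string": "event-ABC"), and nested nullable rows become generated records such as record_json, wrapped the same way. The derived schema is presumably shaped like this (illustrative):

{
  "name": "json",
  "type": ["null", {
    "type": "record",
    "name": "record_json",
    "namespace": "org.apache.flink.avro.generated",
    "fields": [
      {"name": "eventName", "type": ["null", "string"]}
    ]
  }]
}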
Hangyu Wang
06/07/2023, 3:33 AM
message B {
  repeated A a = 1;
}
How can we extract message A with Table API?
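Assuming the repeated field surfaces in the Table API as an ARRAY<ROW<...>> column, one option is CROSS JOIN UNNEST; the table, column, and field names below are hypothetical:

// `b` is a table backed by message B; its repeated field is assumed
// to appear as an ARRAY<ROW<...>> column named `a`, whose row fields
// are stood in for by field1 and field2.
tableEnv.executeSql(
    "SELECT t.* " +
    "FROM b " +
    "CROSS JOIN UNNEST(b.a) AS t (field1, field2)");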
Slackbot
06/07/2023, 7:00 AM
Nicolas P
06/07/2023, 8:53 AM
We would like the updateTime field to only have the latest value (this way we could handle late messages properly).
As far as I understood, the kafka connector to MySQL doesn't handle conditional upsert nor stored procedures.
It's not clear to me either whether the Flink kafka upsert table sink handles conditional upsert or not. Would you know if it does? Would you have any other ideas on how to proceed?
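One way to approximate conditional upsert inside Flink is the deduplication pattern: keep only the newest row per key ordered by updateTime, so the sink only ever receives the latest value. In streaming mode this needs updateTime to be usable as the ORDER BY time attribute; table and column names below are illustrative:

tableEnv.executeSql(
    "INSERT INTO mysql_sink " +
    "SELECT id, payload, updateTime FROM ( " +
    "  SELECT *, ROW_NUMBER() OVER ( " +
    "    PARTITION BY id ORDER BY updateTime DESC) AS rn " +
    "  FROM kafka_source " +
    ") WHERE rn = 1");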
Shubham Saxena
06/07/2023, 11:33 AM
Zhong Chen
06/07/2023, 4:39 PM
2023-06-07 16:34:45,026 WARN org.apache.flink.runtime.taskmanager.Task [] - Source: data enrichment counters source -> Flat Map -> Process -> (Sink: Writer -> Sink: Committer, Sink: Writer -> Sink: Committer) (1/1)#75129 (8a8c64ebda8a014a3032ac201948250e_cbc357ccb763df2852fee8c4fc7d55f2_0_75129) switched from INITIALIZING to FAILED with failure cause: java.lang.IllegalStateException: Failed to commit KafkaCommittable{producerId=238443, epoch=2, transactionalId=team-data-de-kong-counters-0-13868}
at org.apache.flink.streaming.runtime.operators.sink.committables.CommitRequestImpl.signalFailedWithUnknownReason(CommitRequestImpl.java:77)
at org.apache.flink.connector.kafka.sink.KafkaCommitter.commit(KafkaCommitter.java:119)
at org.apache.flink.streaming.runtime.operators.sink.committables.CheckpointCommittableManagerImpl.commit(CheckpointCommittableManagerImpl.java:126)
at org.apache.flink.streaming.runtime.operators.sink.CommitterOperator.commitAndEmit(CommitterOperator.java:176)
at org.apache.flink.streaming.runtime.operators.sink.CommitterOperator.commitAndEmitCheckpoints(CommitterOperator.java:160)
at org.apache.flink.streaming.runtime.operators.sink.CommitterOperator.initializeState(CommitterOperator.java:121)
at org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:122)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:283)
at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:726)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:702)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:669)
at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:935)
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:904)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:728)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:550)
at java.base/java.lang.Thread.run(Unknown Source)
Caused by: java.lang.RuntimeException: Incompatible KafkaProducer version
at org.apache.flink.connector.kafka.sink.FlinkKafkaInternalProducer.getField(FlinkKafkaInternalProducer.java:266)
at org.apache.flink.connector.kafka.sink.FlinkKafkaInternalProducer.getField(FlinkKafkaInternalProducer.java:253)
at org.apache.flink.connector.kafka.sink.FlinkKafkaInternalProducer.resumeTransaction(FlinkKafkaInternalProducer.java:292)
at org.apache.flink.connector.kafka.sink.KafkaCommitter.getRecoveryProducer(KafkaCommitter.java:143)
at org.apache.flink.connector.kafka.sink.KafkaCommitter.lambda$commit$0(KafkaCommitter.java:72)
at java.base/java.util.Optional.orElseGet(Unknown Source)
at org.apache.flink.connector.kafka.sink.KafkaCommitter.commit(KafkaCommitter.java:72)
... 16 more
Caused by: java.lang.NoSuchFieldException: topicPartitionBookkeeper
at java.base/java.lang.Class.getDeclaredField(Unknown Source)
at org.apache.flink.connector.kafka.sink.FlinkKafkaInternalProducer.getField(FlinkKafkaInternalProducer.java:262)
... 22 more
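FlinkKafkaInternalProducer.resumeTransaction reads KafkaProducer internals via reflection, so NoSuchFieldException: topicPartitionBookkeeper typically means the kafka-clients version on the job classpath is not the one this flink-connector-kafka release was built against (that internal field changed across kafka-clients versions). Checking what actually ends up on the classpath is a good first step, assuming a Maven build:

mvn dependency:tree -Dincludes=org.apache.kafka:kafka-clients
# align kafka-clients with the version declared by your
# flink-connector-kafka artifact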
Daniel Packard
06/07/2023, 5:35 PM
Carter Fendley
06/08/2023, 3:07 AM
Sylvia Lin
06/08/2023, 4:30 AM
We want to mount a volume into the flink-main-container; the volume is defined by some annotation injection.
Here is the podTemplate snippet, but it gives a volumeMount name not found error:
podTemplate:
  ....
  containers:
    - name: flink-main-container
      volumeMounts:
        - mountPath: /opt/flink/log
          name: flink-logs
        - mountPath: /opt/conf
          name: isc-confbundle
  volumes:
    - name: flink-logs
      emptyDir: { }
--------
Error: {"type":"org.apache.flink.kubernetes.operator.exception.ReconciliationException","message":"org.apache.flink.client.deployment.ClusterDeploymentException: Could not create Kubernetes cluster \"event-router\".","throwableList":[{"type":"org.apache.flink.client.deployment.ClusterDeploymentException","message":"Could not create Kubernetes cluster \"event-router\"."},{"type":"org.apache.flink.kubernetes.shaded.io.fabric8.kubernetes.client.KubernetesClientException","message":"Failure executing: POST at: <https://192.168.0.1/apis/apps/v1/namespaces/data-infra-event-router/deployments>. Message: Deployment.apps \"event-router\" is invalid: spec.template.spec.containers[0].volumeMounts[1].name: Not found: \"isc-confbundle\". Received status: Status(apiVersion=v1, code=422, details=StatusDetails(causes=[StatusCause(field=spec.template.spec.containers[0].volumeMounts[1].name, message=Not found: \"isc-confbundle\", reason=FieldValueNotFound, additionalProperties={})], group=apps, kind=Deployment, name=event-router, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=Deployment.apps \"event-router\" is invalid: spec.template.spec.containers[0].volumeMounts[1].name: Not found: \"isc-confbundle\", metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=Invalid, status=Failure, additionalProperties={})."}]}
So I tried manually adding this volume as below, but then the error turns out to be a duplicate volume name (so I guess the volume should be there?):
podTemplate:
  ....
  containers:
    - name: flink-main-container
      volumeMounts:
        - mountPath: /opt/flink/log
          name: flink-logs
        - mountPath: /opt/conf
          name: isc-confbundle
  volumes:
    - name: flink-logs
      emptyDir: { }
    - name: isc-confbundle
      emptyDir: { }
--------
Error:
[ERROR][data-infra-event-router/event-router] Flink Deployment failed
org.apache.flink.kubernetes.operator.exception.DeploymentFailedException: Pod "event-router-66b68d8b7b-sq6qr" is invalid: spec.volumes[6].name: Duplicate value: "isc-confbundle"
Any advice here?
Sumit Singh
06/08/2023, 7:22 AM
Caused by: org.apache.flink.kafka.shaded.org.apache.kafka.common.errors.TimeoutException: Topic flink not present in metadata after 60000 ms.
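That timeout usually means the topic does not exist (and broker auto-creation is disabled), or the client cannot reach or authenticate with the brokers within the timeout. Verifying from the client machine is a quick first check; the bootstrap address is illustrative:

kafka-topics.sh --bootstrap-server localhost:9092 --list
kafka-topics.sh --bootstrap-server localhost:9092 --create \
  --topic flink --partitions 1 --replication-factor 1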
Hangyu Wang
06/08/2023, 8:23 AM