Alex Brekken
04/18/2023, 7:11 PM
MiniClusterWithClientResource … env.fromElements()
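For context, a minimal sketch of how MiniClusterWithClientResource is typically wired into a JUnit test, with env.fromElements() feeding fixed test records; the class name and pipeline under test are illustrative, not from the original message:

import org.apache.flink.runtime.testutils.MiniClusterResourceConfiguration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.test.util.MiniClusterWithClientResource;
import org.junit.ClassRule;
import org.junit.Test;

public class PipelineTest {

    // Spins up a small embedded Flink cluster shared by all tests in this class.
    @ClassRule
    public static final MiniClusterWithClientResource FLINK =
            new MiniClusterWithClientResource(
                    new MiniClusterResourceConfiguration.Builder()
                            .setNumberTaskManagers(1)
                            .setNumberSlotsPerTaskManager(2)
                            .build());

    @Test
    public void pipelineRunsOnMiniCluster() throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // fromElements() is handy for feeding a fixed set of test records.
        env.fromElements(1, 2, 3)
                .map(x -> x * 2)
                .print();
        env.execute();
    }
}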

Z Mario
04/19/2023, 3:30 AM
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  namespace: default
  name: basic-example
spec:
  image: flink:1.15
  flinkVersion: v1_15
  flinkConfiguration:
    taskmanager.numberOfTaskSlots: "2"
  serviceAccount: flink
  jobManager:
    resource:
      memory: "2048m"
      cpu: 1
  taskManager:
    resource:
      memory: "2048m"
      cpu: 1
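(For anyone replicating this: with the Flink Kubernetes Operator installed, a manifest like the above is applied with kubectl, e.g. kubectl apply -f basic-example.yaml, where the file name is assumed.)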

Soumya Ghosh
04/19/2023, 6:54 AM
elasticsearch-7

Caused by: org.apache.flink.table.api.ValidationException: Unsupported options found for 'elasticsearch-7'.
Unsupported options:
connection.request-timeout
connection.timeout
socket.timeout
Supported options:
connection.path-prefix
connector
document-id.key-delimiter
failure-handler
format
hosts
index
json.encode.decimal-as-plain-number
json.fail-on-missing-field
json.ignore-parse-errors
json.map-null-key.literal
json.map-null-key.mode
json.timestamp-format.standard
password
property-version
sink.bulk-flush.backoff.delay
sink.bulk-flush.backoff.max-retries
sink.bulk-flush.backoff.strategy
sink.bulk-flush.interval
sink.bulk-flush.max-actions
sink.bulk-flush.max-size
sink.flush-on-checkpoint
username
	at org.apache.flink.table.factories.FactoryUtil.validateUnconsumedKeys(FactoryUtil.java:631)
	at org.apache.flink.table.factories.FactoryUtil$FactoryHelper.validate(FactoryUtil.java:923)
	at org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7DynamicSinkFactory.createDynamicTableSink(Elasticsearch7DynamicSinkFactory.java:93)
	at org.apache.flink.table.factories.FactoryUtil.createDynamicTableSink(FactoryUtil.java:266)
	... 18 more
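For reference, the rejected keys (connection.request-timeout, connection.timeout, socket.timeout) are simply absent from the supported list above. A minimal sketch of a sink definition that sticks to supported options; the table name, schema, and host are illustrative:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class Es7SinkExample {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // Only options from the connector's supported list are used here.
        tEnv.executeSql(
                "CREATE TABLE es_sink (\n"
                        + "  user_id STRING,\n"
                        + "  cnt BIGINT\n"
                        + ") WITH (\n"
                        + "  'connector' = 'elasticsearch-7',\n"
                        + "  'hosts' = 'http://localhost:9200',\n"
                        + "  'index' = 'users',\n"
                        + "  'sink.bulk-flush.interval' = '1000',\n"
                        + "  'sink.bulk-flush.max-actions' = '500'\n"
                        + ")");
    }
}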

Alokh P
04/19/2023, 8:47 AM
Caused by: org.apache.parquet.io.ParquetEncodingException: empty fields are illegal, the field should be ommited completely instead
	at org.apache.parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.endField(MessageColumnIO.java:329)
	at org.apache.flink.formats.parquet.row.ParquetRowDataWriter$ArrayWriter.write(ParquetRowDataWriter.java:420)
	at org.apache.flink.formats.parquet.row.ParquetRowDataWriter$RowWriter.write(ParquetRowDataWriter.java:456)

parquet.avro.write-old-list-structure
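On the parquet.avro.write-old-list-structure fragment above: that key is a Parquet-Avro writer option, so whether it reaches the code path in this trace (ParquetRowDataWriter) is uncertain. A hedged sketch, assuming the filesystem SQL connector, whose parquet format forwards 'parquet.'-prefixed options into the underlying Hadoop configuration; the path and schema are made up:

// Illustrative only; tEnv is a TableEnvironment as in the previous sketch.
tEnv.executeSql(
        "CREATE TABLE parquet_sink (\n"
                + "  id BIGINT,\n"
                + "  tags ARRAY<STRING>\n"
                + ") WITH (\n"
                + "  'connector' = 'filesystem',\n"
                + "  'path' = 'file:///tmp/out',\n"
                + "  'format' = 'parquet',\n"
                + "  'parquet.avro.write-old-list-structure' = 'false'\n"
                + ")");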

Felix Angell
04/19/2023, 10:33 AM
release-1.15.2-rc2 … release-1.15rc2 … release-1.15

Thijs van de Poll
04/19/2023, 10:57 AM
-D[1,83819211,545900928]
+I[-83819211,83819211,545900928]

-----------+----------------+--------------------
 -83819211 |       83819211 |          545900928
         1 |       83819211 |          545900928

Peter Esselius
04/19/2023, 1:31 PM
DataStream[RowData] … DataStream[Row] … tEnv.fromChangelogStream() … tEnv.fromDataStream() … DataStream[RowData] … DataStream[MyCaseClass] … ds.map() … CaseClassConverter … RowRowConverter … "DataStream API Integration" … RowData … Row
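Since RowRowConverter came up: a minimal Java sketch of bridging DataStream<RowData> to DataStream<Row> with Flink's internal RowRowConverter, along the lines of the "DataStream API Integration" page. The schema (here hard-coded) is an assumption; in practice it would be derived from the table's resolved schema:

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.conversion.RowRowConverter;
import org.apache.flink.table.types.DataType;
import org.apache.flink.types.Row;

public class RowDataToRow {
    public static DataStream<Row> toRows(DataStream<RowData> in) {
        // Assumed schema of the incoming RowData records.
        final DataType rowType = DataTypes.ROW(
                DataTypes.FIELD("id", DataTypes.BIGINT()),
                DataTypes.FIELD("name", DataTypes.STRING()));

        return in.map(new RichMapFunction<RowData, Row>() {
            private transient RowRowConverter converter;

            @Override
            public void open(Configuration parameters) {
                converter = RowRowConverter.create(rowType);
                converter.open(getRuntimeContext().getUserCodeClassLoader());
            }

            @Override
            public Row map(RowData value) {
                // Converts Flink's internal RowData into the external Row type.
                return converter.toExternal(value);
            }
        });
    }
}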

Felix Angell
04/19/2023, 2:56 PM
2023-04-19 15:49:21
java.lang.NoSuchMethodError: 'org.apache.flink.metrics.MetricGroup org.apache.flink.api.common.functions.RuntimeContext.getMetricGroup()'
	at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:765)

flink-streaming-java

Thijs van de Poll
04/19/2023, 3:40 PM
# This works
table.select(col("parties_id"), col("publication_id"), col("nested_obj"))
# This works
table.select(col("nested_obj").get("value_from_nested"))
# This does NOT work
table.select(col("parties_id"), col("publication_id"), col("nested_obj").get("value_from_nested"))py4j.protocol.Py4JJavaError: An error occurred while calling o8.toChangelogStream.
: java.lang.IndexOutOfBoundsException: Index 1 out of bounds for length 1
        at java.base/jdk.internal.util.Preconditions.outOfBounds(Unknown Source)
        at java.base/jdk.internal.util.Preconditions.outOfBoundsCheckIndex(Unknown Source)
        at java.base/jdk.internal.util.Preconditions.checkIndex(Unknown Source)
        at java.base/java.util.Objects.checkIndex(Unknown Source)
        at java.base/java.util.ArrayList.get(Unknown Source)
        at org.apache.calcite.rex.RexProgramBuilder$RegisterInputShuttle.visitLocalRef(RexProgramBuilder.java:975)
        at org.apache.calcite.rex.RexProgramBuilder$RegisterInputShuttle.visitLocalRef(RexProgramBuilder.java:924)
        at org.apache.calcite.rex.RexLocalRef.accept(RexLocalRef.java:75)
        at org.apache.calcite.rex.RexShuttle.visitFieldAccess(RexShuttle.java:198)
        at org.apache.calcite.rex.RexProgramBuilder$RegisterShuttle.visitFieldAccess(RexProgramBuilder.java:904)
        at org.apache.calcite.rex.RexProgramBuilder$RegisterShuttle.visitFieldAccess(RexProgramBuilder.java:887)
        at org.apache.calcite.rex.RexFieldAccess.accept(RexFieldAccess.java:92)
        at org.apache.calcite.rex.RexShuttle.visitList(RexShuttle.java:158)
        at org.apache.calcite.rex.RexShuttle.visitCall(RexShuttle.java:110)
        at org.apache.calcite.rex.RexProgramBuilder$RegisterShuttle.visitCall(RexProgramBuilder.java:889)
        at org.apache.calcite.rex.RexProgramBuilder$RegisterShuttle.visitCall(RexProgramBuilder.java:887)
        at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
        at org.apache.calcite.rex.RexShuttle.visitList(RexShuttle.java:158)
        at org.apache.calcite.rex.RexShuttle.visitCall(RexShuttle.java:110)
        at org.apache.calcite.rex.RexProgramBuilder$RegisterShuttle.visitCall(RexProgramBuilder.java:889)
        at org.apache.calcite.rex.RexProgramBuilder$RegisterShuttle.visitCall(RexProgramBuilder.java:887)
        at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
        at org.apache.calcite.rex.RexProgramBuilder.registerInput(RexProgramBuilder.java:295)
        at org.apache.calcite.rex.RexProgramBuilder.addProject(RexProgramBuilder.java:206)
        at org.apache.calcite.rex.RexProgram.create(RexProgram.java:224)
        at org.apache.calcite.rex.RexProgram.create(RexProgram.java:193)
        at org.apache.flink.table.planner.plan.rules.logical.PythonCalcSplitRuleBase.onMatch(PythonCalcSplitRule.scala:98)
        at org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
        at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
        at org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
        at org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
        at org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
        at org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
        at org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
        at org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:64)
        at org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:78)
        at org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram.$anonfun$optimize$2(FlinkGroupProgram.scala:59)
        at scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:156)
        at scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:156)
        at scala.collection.Iterator.foreach(Iterator.scala:937)
        at scala.collection.Iterator.foreach$(Iterator.scala:937)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
        at scala.collection.IterableLike.foreach(IterableLike.scala:70)
        at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
        at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:156)
        at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:154)
        at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
        at org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram.$anonfun$optimize$1(FlinkGroupProgram.scala:56)
        at org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram.$anonfun$optimize$1$adapted(FlinkGroupProgram.scala:51)
        at scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:156)
        at scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:156)
        at scala.collection.immutable.Range.foreach(Range.scala:155)
        at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:156)
        at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:154)
        at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
        at org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram.optimize(FlinkGroupProgram.scala:51)
        at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.$anonfun$optimize$1(FlinkChainedProgram.scala:59)
        at scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:156)
        at scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:156)
        at scala.collection.Iterator.foreach(Iterator.scala:937)
        at scala.collection.Iterator.foreach$(Iterator.scala:937)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
        at scala.collection.IterableLike.foreach(IterableLike.scala:70)
        at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
        at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:156)
        at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:154)
        at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
        at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:55)
        at org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.optimizeTree(StreamCommonSubGraphBasedOptimizer.scala:176)
        at org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.doOptimize(StreamCommonSubGraphBasedOptimizer.scala:83)
        at org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:87)
        at org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:315)
        at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:195)
        at org.apache.flink.table.api.bridge.internal.AbstractStreamTableEnvironmentImpl.toStreamInternal(AbstractStreamTableEnvironmentImpl.java:224)
        at org.apache.flink.table.api.bridge.internal.AbstractStreamTableEnvironmentImpl.toStreamInternal(AbstractStreamTableEnvironmentImpl.java:219)
        at org.apache.flink.table.api.bridge.java.internal.StreamTableEnvironmentImpl.toChangelogStream(StreamTableEnvironmentImpl.java:263)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.base/java.lang.reflect.Method.invoke(Unknown Source)
        at org.apache.flink.api.python.shaded.py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at org.apache.flink.api.python.shaded.py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at org.apache.flink.api.python.shaded.py4j.Gateway.invoke(Gateway.java:282)
        at org.apache.flink.api.python.shaded.py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at org.apache.flink.api.python.shaded.py4j.commands.CallCommand.execute(CallCommand.java:79)
        at org.apache.flink.api.python.shaded.py4j.GatewayConnection.run(GatewayConnection.java:238)
        at java.base/java.lang.Thread.run(Unknown Source)

Yaroslav Bezruchenko
04/19/2023, 4:42 PM
class AttributePretraining(ProcessFunction):
    def process_element(self, value, ctx: 'ProcessFunction.Context'):
        id = value['id']
        training_dto = TrainingDTO(id)
        # A PyFlink ProcessFunction emits results by yielding, not returning:
        yield training_dto

Kevin L
04/19/2023, 6:17 PM
t_row = t.select(row("key", "value").alias('my_row'))
t_row.get_schema()

root
 |-- my_row : ROW<`key` VARCHAR NOT NULL> NOT NULL

Nathanael England
04/19/2023, 6:27 PM
SimpleStringSchema
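For anyone searching later, a minimal sketch of where SimpleStringSchema usually slots in, here as the value-only deserializer of a KafkaSource; the broker and topic are placeholders:

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;

// SimpleStringSchema decodes each Kafka record value as a UTF-8 string.
KafkaSource<String> source =
        KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092") // placeholder broker
                .setTopics("input-topic")              // placeholder topic
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();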

Eric Xiao
04/19/2023, 9:05 PM
setParallelism … setMaxParallelism … DataStreamSink … setMaxParallelism

Elizaveta Batanina
04/20/2023, 11:36 AM
./bin/sql-client.sh

Max Dubinin
04/20/2023, 3:57 PM

Վահե Քարամյան
04/20/2023, 4:04 PM

Max Dubinin
04/20/2023, 4:53 PM

Ali Zia
04/20/2023, 5:12 PM
akka.ask.timeout
akka.lookup.timeout
akka.tcp.timeout
heartbeat.interval
heartbeat.rpc-failure-threshold
heartbeat.timeout
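A hedged sketch of raising these timeouts programmatically for a local run (the values are arbitrary examples; in a real deployment they would normally go into flink-conf.yaml instead):

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

Configuration conf = new Configuration();
conf.setString("akka.ask.timeout", "30 s");     // example value
conf.setString("heartbeat.interval", "10000");  // example value, ms
conf.setString("heartbeat.timeout", "60000");   // example value, ms
StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment(conf);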

Zachary Schmerber
04/20/2023, 5:38 PM
static void runJob() throws Exception {
        KafkaSource<Message> source =
                KafkaSource.<Message>builder()
                        .setBootstrapServers(BROKER)
                        .setTopics(TOPIC)
                        .setGroupId("KafkaExactlyOnceRecipeApplication")
                        .setStartingOffsets(
                                OffsetsInitializer.committedOffsets(OffsetResetStrategy.EARLIEST))
                        .setValueOnlyDeserializer(new JsonDeserializationSchema<>(Message.class))
                        .build();
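Since the group id suggests an exactly-once recipe: committed offsets only advance once checkpointing is enabled. A minimal sketch of the wiring that might follow the builder above (Message is the class from the snippet; real transformations and an exactly-once sink are omitted):

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

// Continuation of runJob(), after the KafkaSource is built:
static void runRest(KafkaSource<Message> source) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.enableCheckpointing(30_000, CheckpointingMode.EXACTLY_ONCE); // offsets commit on checkpoints
    env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source")
            .print(); // stand-in for real transformations and sink
    env.execute("KafkaExactlyOnceRecipeApplication");
}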

Rion Williams
04/20/2023, 5:42 PM
ENV GCS_CONNECTOR_VERSION="hadoop2-2.1.1"
ENV FLINK_HADOOP_VERSION="2.8.3-10.0"
ENV GCS_CONNECTOR_NAME="gcs-connector-${GCS_CONNECTOR_VERSION}.jar"
ENV GCS_CONNECTOR_URI="$artifactStorage/${GCS_CONNECTOR_NAME}"
ENV FLINK_HADOOP_JAR_NAME="flink-shaded-hadoop-2-uber-${FLINK_HADOOP_VERSION}.jar"
ENV FLINK_HADOOP_JAR_URI="$artifactStorage/flink-shaded-hadoop-2-uber/${FLINK_HADOOP_VERSION}/${FLINK_HADOOP_JAR_NAME}"

Zachary Piazza
04/20/2023, 8:46 PM
flinkConfiguration:
  collect-sink.batch-size.max: "8388608"
  taskmanager.numberOfTaskSlots: "2"
  metrics.reporters: prom
  metrics.reporter.prom.factory.class: org.apache.flink.metrics.prometheus.PrometheusReporter
  metrics.reporter.prom.port: "9999"

Amenreet Singh Sodhi
04/21/2023, 5:38 AM

苏超腾
04/21/2023, 7:36 AM

张奇文
04/21/2023, 7:44 AM

Maryam
04/21/2023, 3:41 PM

Urs Schoenenberger
04/21/2023, 3:57 PM
CountTumblingWindowAssigner … CountTrigger … .trigger() … CountTumblingWindowAssigner
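On count windows generally, a Java sketch of the standard DataStream pattern (input is an assumed DataStream<Tuple2<String, Long>>): a tumbling count window is a global window plus a purging CountTrigger, which is also what keyedStream.countWindow(n) assembles internally:

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.windowing.assigners.GlobalWindows;
import org.apache.flink.streaming.api.windowing.triggers.CountTrigger;
import org.apache.flink.streaming.api.windowing.triggers.PurgingTrigger;

// Fires once 100 elements have arrived for a key, then purges the window state.
DataStream<Tuple2<String, Long>> counted =
        input.keyBy(t -> t.f0)
             .window(GlobalWindows.create())
             .trigger(PurgingTrigger.of(CountTrigger.of(100)))
             .sum(1);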

Daniel Craig
04/21/2023, 4:05 PM
Kryo serializer scala extensions are not available

Felix Angell
04/21/2023, 4:39 PM
"It does not work for legacy or if applied after the source via DataStream#assignTimestampsAndWatermarks." is another thing noted. Does this mean that if I specify a WatermarkStrategy via assign_timestamps_and_watermarks(watermark_strategy), it won't take effect?
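The quoted sentence appears to be from the watermark alignment section of the docs, where alignment only takes effect when the strategy is handed to the source itself. A hedged Java sketch of the distinction (env, kafkaSource, the group name, and the durations are illustrative):

import java.time.Duration;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;

WatermarkStrategy<String> aligned =
        WatermarkStrategy.<String>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                .withWatermarkAlignment("group-1", Duration.ofSeconds(20));

// Alignment applies here, where the strategy is given to the source:
env.fromSource(kafkaSource, aligned, "kafka-source");

// ...but not when applied downstream, after the source:
// stream.assignTimestampsAndWatermarks(aligned);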

Zachary Piazza
04/22/2023, 3:46 PM

RICHARD JOY
04/23/2023, 2:38 AM