Artun Duman
01/20/2023, 4:55 PM

Guruguha Marur Sreenivasa
01/20/2023, 4:59 PM
KafkaSource vs FlinkKafkaConsumer.
My streaming application runs just fine using FlinkKafkaConsumer, but when I use the KafkaSource API it errors out saying "unable to fetch topic metadata"! Has anyone faced this issue before? The configuration for both clients is exactly the same.

Leon Xu
01/20/2023, 5:11 PM
We're using ParquetPojoInputFormat in 1.12, and it looks like this class was removed starting from 1.14 (as part of this PR). Is there any documentation on a substitute class or migration path we can reference? Thanks

Dheeraj Balakavi
01/20/2023, 6:42 PM
I'm using 3.4-SNAPSHOT for some of the work that I'm doing: https://issues.apache.org/jira/browse/FLINK-29814.

Mustafa Akın
01/20/2023, 8:26 PM
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Build settings for batch execution mode and create a TableEnvironment.
EnvironmentSettings settings = EnvironmentSettings
    .newInstance()
    .inBatchMode()
    .build();
TableEnvironment tEnv = TableEnvironment.create(settings);
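A batch-mode TableEnvironment like the one above runs bounded queries to completion rather than producing a continuously updating stream. A sketch of the kind of query it would execute (the table and column names here are hypothetical, not from the original message):

```sql
-- In batch mode this aggregation produces one final result per group.
SELECT symbol, AVG(price) AS avg_price
FROM trades
GROUP BY symbol;
```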
Eric Liu
01/20/2023, 9:15 PM
2023-01-19 16:43:37,448 ERROR org.apache.flink.runtime.blob.BlobServerConnection [] - Error while executing BLOB connection.
java.io.IOException: Unknown operation 71
	at org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:116) [flink-dist-1.15.3.jar:1.15.3]
DJ
01/21/2023, 1:41 AM

Nipuna Shantha
01/21/2023, 8:34 PM
What happens to the taskmanager if the jobmanager is terminated in the middle of the process, until I bring the jobmanager back to a running state again?
(Also, I do not use cluster mode; I need a solution other than cluster mode.)

Kyle Ahn
01/21/2023, 9:31 PM
EXACTLY_ONCE checkpointing mode (k8s Flink operator).
When I change the job's state to suspended manually and then change it back to running, the raw data sink, which does not involve TumblingEventTimeWindows + ProcessWindowFunction, does not write duplicate records, but the processed data seems to either miss some records or write duplicates. The two sinks share the same source. How is this happening?

chunilal kukreja
01/23/2023, 5:08 AM

Eric Liu
01/23/2023, 7:05 AM
Has anyone seen the RocksDBException: file is too short (xxxx bytes) to be an sstable error?

Bastien DINE
01/23/2023, 8:01 AM

Aviv Dozorets
01/23/2023, 4:07 PM
FlinkKinesisConsumer: since it doesn't have committed offsets like Kafka, after a restart (or a new deploy) I'd love to see Flink continue reading data from the point it left off, not from latest or timestamp. Can anyone share how they approached this issue? I'm sure I'm not the only one with this use case.
I don't see Flink starting from the checkpoint/savepoint.

Felix Angell
01/23/2023, 4:08 PM
Why is it that in Java you use a RichMapFunction, whereas in Python (PyFlink) it is just a normal MapFunction?
How does the open method lifecycle work in the context of PyFlink in this case?

Marcelo Miranda
01/23/2023, 4:50 PM

Maryam
01/23/2023, 5:19 PM
SELECT *
FROM (
  SELECT *,
         ROW_NUMBER() OVER (PARTITION BY id ORDER BY processedCount DESC) AS row_num
  FROM event)
WHERE row_num = 1
Note the ORDER BY processedCount. I know it does not translate to the optimized Deduplicate plan because I am not ordering by a time attribute, as the Flink docs suggest. My question is: does the result preserve the watermark?

Jason Politis
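For contrast with the query above: the variant the Flink documentation describes as translating to the optimized Deduplicate plan orders by a time attribute. A sketch, assuming event_time is the table's rowtime attribute (ordering by processedCount instead makes it a general Rank/TopN query):

```sql
SELECT *
FROM (
  SELECT *,
         ROW_NUMBER() OVER (PARTITION BY id ORDER BY event_time DESC) AS row_num
  FROM event)
WHERE row_num = 1
```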
01/23/2023, 6:10 PM

Emmanuel Leroy
01/23/2023, 7:02 PM
I could do reduce(next, agg) and return next, but if there are no guarantees that doesn't work.

Artun Duman
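The concern above can be sketched in plain Java (using java.util.stream rather than Flink's ReduceFunction; the data is hypothetical): a reducer that just returns its second argument only yields "the latest element" when the framework guarantees the reducer is applied in encounter order.

```java
import java.util.List;

public class ReduceOrder {
    public static void main(String[] args) {
        List<Integer> events = List.of(1, 2, 3, 4);

        // "Return next": a reducer that keeps its second argument.
        // This yields the last element only because a sequential stream
        // applies the reducer in encounter order.
        int last = events.stream().reduce((agg, next) -> next).orElseThrow();
        System.out.println(last); // prints 4

        // Without that ordering guarantee (e.g. a parallel reduction),
        // a non-associative reducer like (agg, next) -> next has an
        // unspecified result, which is exactly the problem raised above.
    }
}
```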
01/23/2023, 9:42 PM

Amir Halatzi
01/24/2023, 8:09 AM

PRASHANT GUPTA
01/24/2023, 8:35 AM

kingsathurthi
01/24/2023, 10:28 AM

Giannis Polyzos
01/24/2023, 12:48 PM
I see RocksDB column families named right-records and left-records.
1. Is there any way to specify the names for the column families? How are these names assigned?
2. Do we always keep both tables in state? For example, if the first table is an append-only stream, how does state get expired in SQL? If I have billions of events, how do I make sure they don't stay around and fill up the disk?

Slackbot
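On the state-expiry part of the question above, one relevant knob is Flink SQL's idle-state retention setting, table.exec.state.ttl, which lets the runtime drop state that has not been accessed for the configured duration. A sketch in SQL-client syntax (the 12-hour value is an arbitrary example, not from the discussion):

```sql
-- Expire operator state not accessed for 12 hours.
-- Note: expiring join state can change results for late-arriving rows.
SET 'table.exec.state.ttl' = '12 h';
```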
01/24/2023, 1:07 PM

Pradeep Ramachandra
01/24/2023, 1:28 PM

Amyth
01/24/2023, 3:26 PM

Sumit Nekar
01/24/2023, 5:46 PM
2023-01-24 17:43:25,532 INFO org.apache.flink.client.deployment.application.executors.EmbeddedExecutor [] - Job 7eef2349fa26a94509d5126440972a25 is submitted.
2023-01-24 17:43:25,532 INFO org.apache.flink.client.deployment.application.executors.EmbeddedExecutor [] - Submitting Job with JobId=7eef2349fa26a94509d5126440972a25.
2023-01-24 17:43:27,040 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - RECEIVED SIGNAL 15: SIGTERM. Shutting down as requested.
2023-01-24 17:43:27,042 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Shutting KubernetesApplicationClusterEntrypoint down with application status UNKNOWN. Diagnostics Cluster entrypoint has been closed externally..
2023-01-24 17:43:27,044 INFO org.apache.flink.runtime.jobmaster.MiniDispatcherRestEndpoint [] - Shutting down rest endpoint.
2023-01-24 17:43:27,299 INFO akka.remote.RemoteActorRefProvider$RemotingTerminator [] - Shutting down remote daemon.
2023-01-24 17:43:27,299 INFO akka.remote.RemoteActorRefProvider$RemotingTerminator [] - Shutting down remote daemon.
2023-01-24 17:43:27,301 INFO akka.remote.RemoteActorRefProvider$RemotingTerminator [] - Remote daemon shut down; proceeding with flushing remote transports.
2023-01-24 17:43:27,301 INFO akka.remote.RemoteActorRefProvider$RemotingTerminator [] - Remote daemon shut down; proceeding with flushing remote transports.
2023-01-24 17:43:27,313 INFO akka.remote.RemoteActorRefProvider$RemotingTerminator [] - Remoting shut down.
2023-01-24 17:43:27,313 INFO akka.remote.RemoteActorRefProvider$RemotingTerminator [] - Remoting shut down.
Guruguha Marur Sreenivasa
01/24/2023, 6:22 PM

Sergii Mikhtoniuk
01/24/2023, 7:17 PM
My table has the following schema:
(
  `offset` BIGINT NOT NULL,
  `system_time` TIMESTAMP(3) NOT NULL,
  `event_time` TIMESTAMP(3) NOT NULL *ROWTIME*,
  `symbol` STRING NOT NULL,
  `price` INT NOT NULL
)
When I call table.createTemporalTableFunction($"event_time", $"symbol") I get the following error:
TableException: Unsupported conversion from data type 'TIMESTAMP(3) NOT NULL' (conversion class: org.apache.flink.table.data.TimestampData) to type information. Only data types that originated from type information fully support a reverse conversion.
Is anyone familiar with migration to the new type system? I'm at a loss.

Rommel
01/24/2023, 7:25 PM