Maryam
01/24/2023, 7:48 PM
Are PARTITION BY in the Table API and keyBy in the DataStream API the same in terms of how they process data in streaming mode, specifically in terms of whether data is processed together in the same task slot? Additionally, I am asking because my Top-N query seems to be producing non-deterministic results when using parallelism greater than 1.
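A minimal sketch of the two partitioning styles side by side (the class name, sample data, and view name are illustrative assumptions, not from this thread). In both APIs, records with the same key are hashed to the same parallel subtask:

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class KeyByVsPartitionBy {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        DataStream<Tuple2<String, Integer>> scores =
                env.fromElements(Tuple2.of("a", 1), Tuple2.of("a", 3), Tuple2.of("b", 2));

        // DataStream API: keyBy hash-partitions on the key, so all records
        // with the same key are handled by the same parallel subtask.
        scores.keyBy(t -> t.f0).sum(1).print();

        // Table API / SQL: PARTITION BY in a Top-N query hash-distributes on
        // the partition key in the same way before the rank operator runs.
        tEnv.createTemporaryView("scores", tEnv.fromDataStream(scores).as("k", "v"));
        tEnv.executeSql(
                "SELECT * FROM ("
              + "  SELECT k, v, ROW_NUMBER() OVER (PARTITION BY k ORDER BY v DESC) AS rn"
              + "  FROM scores"
              + ") WHERE rn <= 1").print();

        env.execute("keyBy-vs-partition-by");
    }
}

One common source of non-deterministic Top-N output worth ruling out: rows that tie on the ORDER BY column, since their relative rank is not guaranteed across runs.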
Adrian Chang
01/24/2023, 8:01 PM

Andre Cloutier
01/24/2023, 8:17 PM

Eric Xiao
01/24/2023, 10:02 PM
{
"instant": {
"epochSecond": 1674595600,
"nanoOfSecond": 114000000
},
"thread": "flink-akka.actor.default-dispatcher-17",
"level": "INFO",
"loggerName": "org.apache.flink.runtime.jobmaster.JobMaster",
"message": "Disconnect TaskExecutor .... because: The TaskExecutor is shutting down.",
"endOfBatch": false,
"loggerFqcn": "org.apache.logging.slf4j.Log4jLogger",
"contextMap": {},
"threadId": 103,
"threadPriority": 5
}
Thomas Zhang
01/24/2023, 10:15 PM
stmt_set \
.add_insert_sql("INSERT INTO OFFER_ELIGIBILITY_CURRENT_STATE SELECT * FROM offer_elig_j_1")
stmt_set \
.add_insert_sql("INSERT INTO OFFER_ELIGIBILITY_CURRENT_STATE SELECT * FROM offer_elig_j_2")
table_result = stmt_set.execute()
but I'm running into this error
Caused by: java.lang.IllegalArgumentException: Hash collision on user-specified ID "uid_hoodie_stream_writeOFFER_ELIGIBILITY_CURRENT_STATE". Most likely cause is a non-unique ID. Please check that all IDs specified via `uid(String)` are unique.
Is the way I'm using StatementSets to write into one destination from multiple sources supported?
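The uid_hoodie_stream_write... prefix suggests the sink derives its operator uid from the target table name, so two INSERTs into the same table collide. A hedged workaround, assuming the two source views are union-compatible, is to feed the sink from a single INSERT so it is instantiated only once:

-- One INSERT, so the sink operator (and its uid) is created only once
INSERT INTO OFFER_ELIGIBILITY_CURRENT_STATE
SELECT * FROM offer_elig_j_1
UNION ALL
SELECT * FROM offer_elig_j_2

passed as a single add_insert_sql call instead of two.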
Colin Williams
01/24/2023, 10:25 PM

Suparn Lele
01/25/2023, 6:49 AM

Samrat Deb
01/25/2023, 9:37 AM
What are CatalogPartitionSpec and CatalogPartition with respect to the Flink codebase?
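For what it's worth, a hedged sketch of how the two relate (the example values are made up): CatalogPartitionSpec says which partition you mean, while CatalogPartition holds what the catalog stores about it.

import org.apache.flink.table.catalog.CatalogPartition;
import org.apache.flink.table.catalog.CatalogPartitionImpl;
import org.apache.flink.table.catalog.CatalogPartitionSpec;

import java.util.Collections;
import java.util.Map;

public class PartitionExample {
    public static void main(String[] args) {
        // Identifies one partition of a partitioned table:
        // partition column -> value.
        CatalogPartitionSpec spec =
                new CatalogPartitionSpec(Map.of("dt", "2023-01-25"));

        // The partition's own metadata: free-form properties plus a comment.
        CatalogPartition partition =
                new CatalogPartitionImpl(Collections.emptyMap(), "daily partition");

        System.out.println(spec.getPartitionSpec() + " / " + partition.getComment());
    }
}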
kingsathurthi
01/25/2023, 12:16 PM
Error: container has runAsNonRoot and image has non-numeric user (flink), cannot verify user is non-root (pod: "podname-66bfdf99f6-s6pzr_welk-tx02-vzw-anpd-y-nk-inf-020(3a796c2e-f11b-4de2-8a09-eab801cde6d4)", container: conainer_name)
The Flink Kubernetes operator has the below security context enabled:
podSecurityContext:
runAsUser: 9999
runAsGroup: 9999
runAsNonRoot: true
fsGroup: 9999
Sharing a pointer would be helpful.
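A hedged sketch of one usual fix, assuming the failing pod belongs to a FlinkDeployment rather than to the operator itself: the kubelet cannot verify runAsNonRoot when the image only names its user (flink), so give the job's pods a numeric UID through the deployment's pod template:

# Sketch only: a numeric runAsUser lets the kubelet verify runAsNonRoot
spec:
  podTemplate:
    spec:
      securityContext:
        runAsUser: 9999
        runAsGroup: 9999
        runAsNonRoot: true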
Ari Huttunen
01/25/2023, 1:44 PM

Ari Huttunen
01/25/2023, 3:36 PM

Thomas Abraham
01/25/2023, 6:34 PM
I want to:
• take the header field __event_headers.header_key1
• cast it as String
• save it under a nested field: __event_headers_parsed.header_key1
table.addOrReplaceColumns($("__event_headers").get("header_key1").cast(DataTypes.STRING()).as("__event_headers_parsed.header_key1"))
But instead it creates a new top-level field whose name literally contains the dot, as follows.
Output I am getting:
{
"id": 3,
"name": {
"firstName": "Jon",
"lastName": "Doe"
},
"__event_headers": {
"header_key1": "aGVhZGVyX3ZhbHVlMQ==",
"changeAction": "aGVhZGVyX3ZhbHVl"
},
"__event_headers_parsed": null,
"__event_headers_parsed.header_key1": "header_value"
}
Expected output:
{
"id": 3,
"name": {
"firstName": "Jon",
"lastName": "Doe"
},
"__event_headers": {
"header_key1": "aGVhZGVyX3ZhbHVlMQ==",
"changeAction": "aGVhZGVyX3ZhbHVl"
},
"__event_headers_parsed": {
"header_key1": "header_value"
}
}
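The dot inside the as() alias is treated as part of a single column name, which matches the output above. One hedged way to get a genuinely nested column (a sketch; wrapping the value in row() and casting to a named ROW type is my assumption about how to name the inner field, not confirmed behavior):

import static org.apache.flink.table.api.Expressions.$;
import static org.apache.flink.table.api.Expressions.row;

import org.apache.flink.table.api.DataTypes;

// Build a one-field row, cast it so the inner field is named header_key1,
// and alias the whole row; the alias contains no dot, so no flat
// "__event_headers_parsed.header_key1" column appears at the root.
table = table.addOrReplaceColumns(
        row($("__event_headers").get("header_key1").cast(DataTypes.STRING()))
            .cast(DataTypes.ROW(DataTypes.FIELD("header_key1", DataTypes.STRING())))
            .as("__event_headers_parsed"));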
Sami Badawi
01/25/2023, 7:04 PM

Yaroslav Bezruchenko
01/25/2023, 8:14 PM

Jeremy DeGroot
01/25/2023, 8:18 PM

Jason Politis
01/25/2023, 11:39 PM

DJ
01/26/2023, 12:21 AM

Ruslan Danilin
01/26/2023, 4:12 AM

Mikhail Spirin
01/26/2023, 7:39 AM

Almog Golbar
01/26/2023, 9:27 AM
security.protocol = SSL
ssl.truststore.type = JKS
ssl.truststore.location = /store/truststore.jks
ssl.keystore.type = JKS
ssl.keystore.location = /store/my-keystore.jks
ssl.keystore.password=keystore_password
ssl.truststore.password=trustore_password
I'm getting an SSL handshake error on the Kafka broker side, and this error in the Flink job manager:
java.io.IOException: Connection to ******:9093 (id: -1 rack: null) failed.
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) ~[validator_2.0.0-RC2-48-d6f38ff4.jar:2.0.0-RC2-48-d6f38ff4]
at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:526) ~[validator_2.0.0-RC2-48-d6f38ff4.jar:2.0.0-RC2-48-d6f38ff4]
at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:447) [validator_2.0.0-RC2-48-d6f38ff4.jar:2.0.0-RC2-48-d6f38ff4]
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) [validator_2.0.0-RC2-48-d6f38ff4.jar:2.0.0-RC2-48-d6f38ff4]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) [validator_2.0.0-RC2-48-d6f38ff4.jar:2.0.0-RC2-48-d6f38ff4]
java.io.EOFException: null
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:120) ~[validator_2.0.0-RC2-48-d6f38ff4.jar:2.0.0-RC2-48-d6f38ff4]
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) ~[validator_2.0.0-RC2-48-d6f38ff4.jar:2.0.0-RC2-48-d6f38ff4]
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) ~[validator_2.0.0-RC2-48-d6f38ff4.jar:2.0.0-RC2-48-d6f38ff4]
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) ~[validator_2.0.0-RC2-48-d6f38ff4.jar:2.0.0-RC2-48-d6f38ff4]
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) [validator_2.0.0-RC2-48-d6f38ff4.jar:2.0.0-RC2-48-d6f38ff4]
at org.apache.kafka.common.network.Selector.poll(Selector.java:481) [validator_2.0.0-RC2-48-d6f38ff4.jar:2.0.0-RC2-48-d6f38ff4]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) [validator_2.0.0-RC2-48-d6f38ff4.jar:2.0.0-RC2-48-d6f38ff4]
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:73) [validator_2.0.0-RC2-48-d6f38ff4.jar:2.0.0-RC2-48-d6f38ff4]
at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:526) [validator_2.0.0-RC2-48-d6f38ff4.jar:2.0.0-RC2-48-d6f38ff4]
at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:447) [validator_2.0.0-RC2-48-d6f38ff4.jar:2.0.0-RC2-48-d6f38ff4]
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) [validator_2.0.0-RC2-48-d6f38ff4.jar:2.0.0-RC2-48-d6f38ff4]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) [validator_2.0.0-RC2-48-d6f38ff4.jar:2.0.0-RC2-48-d6f38ff4]
Important: I have another microservice that works fine with Kafka.
Any ideas?
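A minimal sketch of wiring those properties into a Flink KafkaSink (the stack trace is in the producer path; broker, topic, and serialization below are placeholders). Properties set via setProperty are forwarded straight through to the Kafka client:

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;

KafkaSink<String> sink = KafkaSink.<String>builder()
        .setBootstrapServers("broker:9093")            // placeholder
        .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                .setTopic("my-topic")                  // placeholder
                .setValueSerializationSchema(new SimpleStringSchema())
                .build())
        // Forwarded verbatim to the underlying Kafka producer:
        .setProperty("security.protocol", "SSL")
        .setProperty("ssl.truststore.type", "JKS")
        .setProperty("ssl.truststore.location", "/store/truststore.jks")
        .setProperty("ssl.truststore.password", "trustore_password")
        .setProperty("ssl.keystore.type", "JKS")
        .setProperty("ssl.keystore.location", "/store/my-keystore.jks")
        .setProperty("ssl.keystore.password", "keystore_password")
        .build();

Since the broker logs a handshake error, the certificate and truststore contents are the first thing worth comparing against the microservice that does work.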
Sami Badawi
01/26/2023, 10:17 AM
org.apache.flink.api.connector.source.lib.util.IteratorSourceReaderBase
org.apache.flink.api.connector.source.util.ratelimit.RateLimitedSourceReader
I can see them in the main Flink repo on GitHub, but I cannot find the right dependencies for my project.
My Gradle dependencies:
flinkVersion = '1.16.0'
implementation "org.apache.flink:flink-streaming-java:${flinkVersion}"
implementation "org.apache.flink:flink-clients:${flinkVersion}"
implementation "org.apache.flink:flink-table-common:${flinkVersion}"
implementation "org.apache.flink:flink-table-api-java:${flinkVersion}"
implementation "org.apache.flink:flink-table-api-java-bridge:${flinkVersion}"
implementation "org.apache.flink:flink-table-planner_2.12:${flinkVersion}"
implementation "org.apache.flink:flink-connector-base:${flinkVersion}"
implementation "org.apache.flink:flink-connectors:${flinkVersion}"
implementation "org.apache.flink:flink-connector-test-utils:${flinkVersion}"
implementation "org.apache.flink:flink-core:${flinkVersion}"
Felix Angell
01/26/2023, 2:10 PM

Aviv Dozorets
01/26/2023, 4:07 PM
-s s3://bucket-name/checkpoints/000000/chk-20/
or the location of the manually created savepoint, I'm getting:
Exception in thread "main" java.lang.NoSuchMethodError: 'boolean org.apache.commons.cli.CommandLine.hasOption(org.apache.commons.cli.Option)'
at org.apache.flink.client.cli.CliFrontendParser.createSavepointRestoreSettings(CliFrontendParser.java:631)
at org.apache.flink.container.entrypoint.StandaloneApplicationClusterConfigurationParserFactory.createResult(StandaloneApplicationClusterConfigurationParserFactory.java:90)
at org.apache.flink.container.entrypoint.StandaloneApplicationClusterConfigurationParserFactory.createResult(StandaloneApplicationClusterConfigurationParserFactory.java:45)
at org.apache.flink.runtime.entrypoint.parser.CommandLineParser.parse(CommandLineParser.java:51)
at org.apache.flink.runtime.entrypoint.ClusterEntrypointUtils.parseParametersOrExit(ClusterEntrypointUtils.java:70)
at org.apache.flink.container.entrypoint.StandaloneApplicationClusterEntryPoint.main(StandaloneApplicationClusterEntryPoint.java:58)
I've been all over the documentation and feel like I'm missing something. Any help would be appreciated.
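One hedged guess: CommandLine.hasOption(org.apache.commons.cli.Option) only exists in newer commons-cli releases, so an older commons-cli somewhere on the container's classpath (for example, bundled into the application fat jar) may be shadowing the one Flink's CLI parsing needs. A sketch of pinning it in a Gradle build (the 1.5.0 version is an assumption):

// Force a commons-cli new enough to have CommandLine.hasOption(Option)
configurations.all {
    resolutionStrategy {
        force 'commons-cli:commons-cli:1.5.0'
    }
}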
Amir Halatzi
01/26/2023, 4:28 PM

Vivek
01/26/2023, 6:11 PM

Jin S
01/26/2023, 7:45 PM

Sandeep Kathula
01/26/2023, 9:43 PM
flink run -s <savepoint path>
But I would like to run from IntelliJ starting from a savepoint. Is there a way to tell Flink to start from a savepoint programmatically instead of via the command line? I am trying to debug something and would like to set breakpoints in IntelliJ, so I want to start my Flink application from IntelliJ from a savepoint.
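A minimal sketch of doing that through configuration (the savepoint location is a placeholder):

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

Configuration conf = new Configuration();
// Same effect as `flink run -s <savepoint path>`, but set in code so the
// job can be launched and debugged straight from the IDE.
conf.setString("execution.savepoint.path", "file:///tmp/savepoints/savepoint-xxxx");
StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment(conf);
// ... build the job as usual, then env.execute();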
Sergio Sainz
01/27/2023, 2:52 AM
{
"day": 20230126,
"stations": [
{
"name": "station1",
"avail_spots": 1
},
{
"name": "station2",
"avail_spots": 2
}
]
}
I am investigating how we can select the list of station names by day:
SELECT A.day, B.name FROM TOPIC1 as A JOIN INNER_ARRAY as B
I wonder if something like the above is supported?
Thanks 🙂!
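For what it's worth, this shape of query is usually written with UNNEST over the array column; a sketch, assuming stations is declared on TOPIC1 as ARRAY<ROW<name STRING, avail_spots INT>>:

-- `day` is backticked because DAY is a reserved keyword
SELECT t.`day`, s.name
FROM TOPIC1 AS t
CROSS JOIN UNNEST(t.stations) AS s (name, avail_spots);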
Artun Duman
01/27/2023, 7:16 AM

Sajid Samsad
01/27/2023, 9:00 AM