Rafael Jeon
01/27/2023, 9:46 AM
flink-sql on native k8s?
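(For context, a hedged sketch of one way to run Flink SQL on native Kubernetes with a stock distribution: start a native session cluster, then submit statements through the SQL client. The cluster id is a placeholder, not from the thread:)

# Hedged sketch: native-Kubernetes session cluster plus the SQL client.
./bin/kubernetes-session.sh -Dkubernetes.cluster-id=flink-sql-session
# then, with execution.target: kubernetes-session and the same
# kubernetes.cluster-id set in flink-conf.yaml, run statements via:
./bin/sql-client.sh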
Kosta Sovaridis
01/27/2023, 10:07 AM
Leon Xu
01/27/2023, 3:38 PM
org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't deploy Yarn Application Cluster
at org.apache.flink.yarn.YarnClusterDescriptor.deployApplicationCluster(YarnClusterDescriptor.java:478) ~[flink-yarn-1.16.0.jar!/:1.16.0]
......
Caused by: org.apache.hadoop.fs.PathIOException: `Cannot get relative path for URI:file:///tmp/application_1674531932229_0030-flink-conf.yaml587547081521530798.tmp': Input/output error
at org.apache.hadoop.fs.s3a.impl.CopyFromLocalOperation.getFinalPath(CopyFromLocalOperation.java:360) ~[flink-s3-fs-hadoop-1.16.0.jar!/:1.16.0]
at org.apache.hadoop.fs.s3a.impl.CopyFromLocalOperation.uploadSourceFromFS(CopyFromLocalOperation.java:222) ~[flink-s3-fs-hadoop-1.16.0.jar!/:1.16.0]
at org.apache.hadoop.fs.s3a.impl.CopyFromLocalOperation.execute(CopyFromLocalOperation.java:169) ~[flink-s3-fs-hadoop-1.16.0.jar!/:1.16.0]
at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$copyFromLocalFile$25(S3AFileSystem.java:3920) ~[flink-s3-fs-hadoop-1.16.0.jar!/:1.16.0]
at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:499) ~[hadoop-common-3.3.3.jar!/:?]
at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:444) ~[hadoop-common-3.3.3.jar!/:?]
at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2337) ~[flink-s3-fs-hadoop-1.16.0.jar!/:1.16.0]
at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2356) ~[flink-s3-fs-hadoop-1.16.0.jar!/:1.16.0]
at org.apache.hadoop.fs.s3a.S3AFileSystem.copyFromLocalFile(S3AFileSystem.java:3913) ~[flink-s3-fs-hadoop-1.16.0.jar!/:1.16.0]
at org.apache.flink.yarn.YarnApplicationFileUploader.copyToRemoteApplicationDir(YarnApplicationFileUploader.java:397) ~[flink-yarn-1.16.0.jar!/:1.16.0]
at org.apache.flink.yarn.YarnApplicationFileUploader.uploadLocalFileToRemote(YarnApplicationFileUploader.java:202) ~[flink-yarn-1.16.0.jar!/:1.16.0]
at org.apache.flink.yarn.YarnApplicationFileUploader.registerSingleLocalResource(YarnApplicationFileUploader.java:181) ~[flink-yarn-1.16.0.jar!/:1.16.0]
at org.apache.flink.yarn.YarnClusterDescriptor.startAppMaster(YarnClusterDescriptor.java:1047) ~[flink-yarn-1.16.0.jar!/:1.16.0]
at org.apache.flink.yarn.YarnClusterDescriptor.deployInternal(YarnClusterDescriptor.java:623) ~[flink-yarn-1.16.0.jar!/:1.16.0]
at org.apache.flink.yarn.YarnClusterDescriptor.deployApplicationCluster(YarnClusterDescriptor.java:471) ~[flink-yarn-1.16.0.jar!/:1.16.0]
... 35 more
Darin Amos
01/27/2023, 3:39 PM
ContinuousFileReaderOperator is discouraged in favour of the new FileSource. I'm wondering if there is any risk of the ContinuousFileReaderOperator being deprecated in future versions. This is a critical operator for my team. We use this as a source in the middle of our pipeline because the true data source depends on other upstream events (user actions).
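(For reference, a minimal sketch of the newer FileSource with continuous monitoring, which covers the old operator's behaviour for the plain read-a-directory case; the path and interval are placeholders, not from the thread:)

// Hedged sketch: continuously monitored FileSource (Flink 1.15+ API).
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
FileSource<String> source = FileSource
    .forRecordStreamFormat(new TextLineInputFormat(), new Path("/data/in"))
    .monitorContinuously(Duration.ofSeconds(10))  // keep watching for new files
    .build();
DataStream<String> lines = env.fromSource(source, WatermarkStrategy.noWatermarks(), "file-source");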
01/27/2023, 3:39 PMProducerId set to 969204 with epoch 8
Invoking InitProducerId for the first time in order
Discovered transaction coordinator
ProducerId set to 970010 with epoch 8
Invoking InitProducerId for the first time in order
Why are so many producers being created, and what determines the number of producers? I would expect the number of producers to equal the parallelism (8 in this case), but that doesn't seem to be the case?
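(For context, a hedged sketch of the exactly-once Kafka sink settings involved; the broker, topic, and prefix below are placeholders. Under EXACTLY_ONCE the sink uses Kafka transactions and opens a fresh transactional producer per subtask per checkpoint, which would explain seeing more producer ids than the parallelism:)

// Hedged sketch: exactly-once KafkaSink; names below are placeholders.
KafkaSink<String> sink = KafkaSink.<String>builder()
    .setBootstrapServers("localhost:9092")
    .setRecordSerializer(KafkaRecordSerializationSchema.builder()
        .setTopic("out-topic")
        .setValueSerializationSchema(new SimpleStringSchema())
        .build())
    // Each checkpoint starts a new transaction (and producer) per subtask.
    .setDeliveryGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
    .setTransactionalIdPrefix("my-sink")  // required for EXACTLY_ONCE
    .build();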
Arran
01/27/2023, 3:53 PM
Ans Fida
01/27/2023, 8:02 PM
Adesh Dsilva
01/27/2023, 9:31 PM
Sumit Nekar
01/28/2023, 10:22 AM
kubernetes.jobmanager.cpu.limit-factor was added in Flink >= 1.15.x. I am using 1.13.6 currently and tried the following options, but they didn't work.
taskManager:
  resource:
    memory: "2000m"
    cpu: "1"
  podTemplate:
    apiVersion: v1
    kind: Pod
    metadata:
      name: task-manager-pod-template
    spec:
      containers:
        - name: flink-main-container
          resources:
            requests:
              memory: "2000m"
              cpu: "1"
            limits:
              memory: "3000m"
              cpu: "2"
The resource section overrides the resources section inside the containers section. I tried removing resource altogether, but that set the CPU requests and limits equal to the number of task slots. It completely ignores the resources section inside the containers section:
taskManager:
  podTemplate:
    apiVersion: v1
    kind: Pod
    metadata:
      name: task-manager-pod-template
    spec:
      containers:
        - name: flink-main-container
          resources:
            requests:
              memory: "2000m"
              cpu: "1"
            limits:
              memory: "3000m"
              cpu: "2"
Are there any options in Flink 1.13.6, or does the flink-kubernetes-operator provide any such options? (edited)
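(For reference, a hedged sketch of how the limit-factor options mentioned above would look in a FlinkDeployment's flinkConfiguration; they require Flink >= 1.15, so they would not help on 1.13.6:)

# Hedged sketch (Flink >= 1.15 only): the limit becomes request * factor.
flinkConfiguration:
  kubernetes.jobmanager.cpu.limit-factor: "2.0"
  kubernetes.taskmanager.cpu.limit-factor: "2.0"
  kubernetes.taskmanager.memory.limit-factor: "1.5"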
Slackbot
01/29/2023, 5:07 AM
P Singh
01/29/2023, 12:30 PM
Kosta Sovaridis
01/29/2023, 12:57 PM
Wai Chee Yau
01/29/2023, 1:12 PM
Srivatsav Gorti
01/29/2023, 1:18 PM
public void run() throws Exception {
    StreamExecutionEnvironment environment = StreamExecutionEnvironment.getExecutionEnvironment();
    environment.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
    environment.setStateBackend(new FsStateBackend("s3://dev/checkpoint"));
    environment.enableCheckpointing(50000L);
    environment.getConfig().setGlobalJobParameters(parameters);

    Properties properties = new Properties();
    properties.setProperty("bootstrap.servers", "localhost:9092");
    properties.setProperty("group.id", "temp");

    DataStream<ObjectNode> source = environment
            .addSource(new FlinkKafkaConsumer<>("test", new JSONKeyValueDeserializationSchema(false), properties))
            .rebalance();

    SingleOutputStreamOperator<Tuple4<String, String, Integer, Long>> workflow = source
            .map(new RecordMapper())
            .keyBy(0, 1)
            .timeWindow(Time.minutes(2))
            .aggregate(new TimeStampAggregateFunction());

    // S3 sink
    StreamingFileSink<Tuple4<String, String, Integer, Long>> sink = StreamingFileSink
            .forRowFormat(new Path("s3://dev/sink-test"),
                    (Encoder<Tuple4<String, String, Integer, Long>>) (element, stream) -> {
                        PrintStream out = new PrintStream(stream);
                        out.println(element.f1);
                    })
            .withBucketAssigner(new BucketAssigner())
            .build();

    workflow.addSink(sink);
    environment.execute();
}
Sumit Aich
01/30/2023, 12:14 PM
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: ServiceAccount "flink" in namespace "flink-operator" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "flink-operator"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "flink-operator"
Can you guys help check this? Or is this happening because the Operator Helm chart is not backward compatible?
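(A commonly suggested workaround for this class of Helm error, sketched here and untested against this setup: add the ownership metadata named in the error to the existing resource so the new release can adopt it.)

# Hedged sketch: let Helm adopt the pre-existing ServiceAccount.
kubectl -n flink-operator label serviceaccount flink app.kubernetes.io/managed-by=Helm
kubectl -n flink-operator annotate serviceaccount flink \
  meta.helm.sh/release-name=flink-operator \
  meta.helm.sh/release-namespace=flink-operator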
Mohit Aggarwal
01/30/2023, 12:30 PM
kubectl create -f https://github.com/jetstack/cert-manager/releases/download/v1.8.2/cert-manager.yaml
I checked the logs for the pods. I can see this error in the pod logs:
exec /app/cmd/cainjector/cainjector: exec format error
Has anyone faced a similar issue?
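(In general, an "exec format error" on container start tends to mean the image was built for a different CPU architecture than the node, e.g. an amd64 image on arm64 nodes. A hedged sketch of one way to check; the image name is cert-manager's published cainjector image:)

# Hedged sketch: compare the node architecture with the image's platforms.
kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.architecture}'
docker manifest inspect quay.io/jetstack/cert-manager-cainjector:v1.8.2 | grep -i architecture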
Sumit Aich
01/30/2023, 1:24 PM
chunilal kukreja
01/30/2023, 1:55 PM
Guruguha Marur Sreenivasa
01/30/2023, 6:28 PM
Eric Xiao
01/30/2023, 10:28 PM
kubernetes.ephemeral_storage.usage
and when the pipeline has been running for over a day, every TM has about 20 GB of storage used.
I'm trying to understand if:
• this is "normal" and the pipeline just has a large amount of state,
• one of our operators is keeping state indefinitely, or
• our RocksDB is not clearing out old state properly.
I have been noticing that the flatmap operator's state has only been increasing; we only have 1 hour of data, as we just enabled the RocksDB metric (state.backend.rocksdb.metrics.estimate-num-keys: true). I saw in this guide to turn on some other RocksDB metrics, so we are waiting for that as well.
I was wondering if anyone else has encountered this situation before?
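(For reference, a hedged sketch of the option quoted above together with two related RocksDB size metrics that can help distinguish growing live state from not-yet-compacted garbage; all three are standard flink-conf.yaml keys, though which ones the guide recommends is not shown here:)

# Hedged sketch: RocksDB state-size metrics in flink-conf.yaml.
state.backend.rocksdb.metrics.estimate-num-keys: true        # quoted above
state.backend.rocksdb.metrics.estimate-live-data-size: true  # live data only
state.backend.rocksdb.metrics.total-sst-files-size: true     # includes stale SSTs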
Sumit Nekar
01/31/2023, 3:26 AM
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method) ~[?:?]
at java.net.SocketInputStream.socketRead(Unknown Source) ~[?:?]
at java.net.SocketInputStream.read(Unknown Source) ~[?:?]
at java.net.SocketInputStream.read(Unknown Source) ~[?:?]
at sun.security.ssl.SSLSocketInputRecord.read(Unknown Source) ~[?:?]
at sun.security.ssl.SSLSocketInputRecord.readHeader(Unknown Source) ~[?:?]
at sun.security.ssl.SSLSocketInputRecord.decode(Unknown Source) ~[?:?]
at sun.security.ssl.SSLTransport.decode(Unknown Source) ~[?:?]
at sun.security.ssl.SSLSocketImpl.decode(Unknown Source) ~[?:?]
at sun.security.ssl.SSLSocketImpl.readHandshakeRecord(Unknown Source) ~[?:?]
at sun.security.ssl.SSLSocketImpl.startHandshake(Unknown Source) ~[?:?]
at sun.security.ssl.SSLSocketImpl.startHandshake(Unknown Source) ~[?:?]
at org.apache.flink.kubernetes.shaded.okhttp3.internal.connection.RealConnection.connectTls(RealConnection.java:319) ~[flink-dist_2.12-1.13.6.jar:1.13.6]
at org.apache.flink.kubernetes.shaded.okhttp3.internal.connection.RealConnection.establishProtocol(RealConnection.java:283) ~[flink-dist_2.12-1.13.6.jar:1.13.6]
at org.apache.flink.kubernetes.shaded.okhttp3.internal.connection.RealConnection.connect(RealConnection.java:168) ~[flink-dist_2.12-1.13.6.jar:1.13.6]
at org.apache.flink.kubernetes.shaded.okhttp3.internal.connection.StreamAllocation.findConnection(StreamAllocation.java:257) ~[flink-dist_2.12-1.13.6.jar:1.13.6]
at org.apache.flink.kubernetes.shaded.okhttp3.internal.connection.StreamAllocation.findHealthyConnection(StreamAllocation.java:135) ~[flink-dist_2.12-1.13.6.jar:1.13.6]
at org.apache.flink.kubernetes.shaded.okhttp3.internal.connection.StreamAllocation.newStream(StreamAllocation.java:114) ~[flink-dist_2.12-1.13.6.jar:1.13.6]
at org.apache.flink.kubernetes.shaded.okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:42) ~[flink-dist_2.12-1.13.6.jar:1.13.6]
at org.apache.flink.kubernetes.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) ~[flink-dist_2.12-1.13.6.jar:1.13.6]
at org.apache.flink.kubernetes.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) ~[flink-dist_2.12-1.13.6.jar:1.13.6]
at org.apache.flink.kubernetes.shaded.okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93) ~[flink-dist_2.12-1.13.6.jar:1.13.6]
at org.apache.flink.kubernetes.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) ~[flink-dist_2.12-1.13.6.jar:1.13.6]
at org.apache.flink.kubernetes.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) ~[flink-dist_2.12-1.13.6.jar:1.13.6]
at org.apache.flink.kubernetes.shaded.okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93) ~[flink-dist_2.12-1.13.6.jar:1.13.6]
at org.apache.flink.kubernetes.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) ~[flink-dist_2.12-1.13.6.jar:1.13.6]
at org.apache.flink.kubernetes.shaded.okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:126) ~[flink-dist_2.12-1.13.6.jar:1.13.6]
at org.apache.flink.kubernetes.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) ~[flink-dist_2.12-1.13.6.jar:1.13.6]
at org.apache.flink.kubernetes.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) ~[flink-dist_2.12-1.13.6.jar:1.13.6]
at io.fabric8.kubernetes.client.utils.BackwardsCompatibilityInterceptor.intercept(BackwardsCompatibilityInterceptor.java:134) ~[flink-dist_2.12-1.13.6.jar:1.13.6]
at org.apache.flink.kubernetes.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) ~[flink-dist_2.12-1.13.6.jar:1.13.6]
at org.apache.flink.kubernetes.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) ~[flink-dist_2.12-1.13.6.jar:1.13.6]
at io.fabric8.kubernetes.client.utils.ImpersonatorInterceptor.intercept(ImpersonatorInterceptor.java:68) ~[flink-dist_2.12-1.13.6.jar:1.13.6]
at org.apache.flink.kubernetes.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) ~[flink-dist_2.12-1.13.6.jar:1.13.6]
at org.apache.flink.kubernetes.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) ~[flink-dist_2.12-1.13.6.jar:1.13.6]
at io.fabric8.kubernetes.client.utils.HttpClientUtils.lambda$createHttpClient$3(HttpClientUtils.java:109) ~[flink-dist_2.12-1.13.6.jar:1.13.6]
at org.apache.flink.kubernetes.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) ~[flink-dist_2.12-1.13.6.jar:1.13.6]
at org.apache.flink.kubernetes.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) ~[flink-dist_2.12-1.13.6.jar:1.13.6]
at org.apache.flink.kubernetes.shaded.okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:254) ~[flink-dist_2.12-1.13.6.jar:1.13.6]
at org.apache.flink.kubernetes.shaded.okhttp3.RealCall$AsyncCall.execute(RealCall.java:200) [flink-dist_2.12-1.13.6.jar:1.13.6]
at org.apache.flink.kubernetes.shaded.okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) [flink-dist_2.12-1.13.6.jar:1.13.6]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:?]
at java.lang.Thread.
Dheeraj Panangat
01/31/2023, 4:18 AM
Abhinav sharma
01/31/2023, 9:17 AM
soudipta das
01/31/2023, 9:19 AM
GroupWindowedTable groupWindowedTable = baseTable
    .window(Session.withGap(lit(10).minutes())
        .on($("received_timestamp")).as("window"));
WindowGroupedTable windowGroupedTable = groupWindowedTable.groupBy($("window"), $("account_id"));
TablePipeline tablePipeline = windowGroupedTable.select(
    $("account_id"), $("play_device_upload_time").lastValue().over($("account_id")));
From what the log says, it seems like it's not possible? But I wanted to check if there is something wrong with the way I have chained the calls.
Sample log:
Caused by: org.apache.flink.table.api.ValidationException: Could not resolve over call.
at org.apache.flink.table.expressions.resolver.rules.OverWindowResolverRule$ExpressionResolverVisitor.lambda$visit$0(OverWindowResolverRule.java:75) ~[flink-table-api-java-uber-1.16.0.jar:1.16.0]
at java.util.Optional.orElseThrow(Unknown Source) ~[?:?]
at org.apache.flink.table.expressions.resolver.rules.OverWindowResolverRule$ExpressionResolverVisitor.visit(OverWindowResolverRule.java:73) ~[flink-table-api-java-uber-1.16.0.jar:1.16.0]
at org.apache.flink.table.expressions.resolver.rules.OverWindowResolverRule$ExpressionResolverVisitor.visit(OverWindowResolverRule.java:57) ~[flink-table-api-java-uber-1.16.0.jar:1.16.0]
at org.apache.flink.table.expressions.ApiExpressionVisitor.visit(ApiExpressionVisitor.java:37) ~[flink-table-api-java-uber-1.16.0.jar:1.16.0]
at org.apache.flink.table.expressions.UnresolvedCallExpression.accept(UnresolvedCallExpression.java:97) ~[flink-table-api-java-uber-1.16.0.jar:1.16.0]
at org.apache.flink.table.expressions.resolver.rules.OverWindowResolverRule.lambda$apply$0(OverWindowResolverRule.java:53) ~[flink-table-api-java-uber-1.16.0.jar:1.16.0]
at java.util.stream.ReferencePipeline$3$1.accept(Unknown Source) ~[?:?]
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(Unknown Source) ~[?:?]
at java.util.stream.AbstractPipeline.copyInto(Unknown Source) ~[?:?]
at java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source) ~[?:?]
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown Source) ~[?:?]
at java.util.stream.AbstractPipeline.evaluate(Unknown Source) ~[?:?]
at java.util.stream.ReferencePipeline.collect(Unknown Source) ~[?:?]
at org.apache.flink.table.expressions.resolver.rules.OverWindowResolverRule.apply(OverWindowResolverRule.java:54) ~[flink-table-api-java-uber-1.16.0.jar:1.16.0]
at org.apache.flink.table.expressions.resolver.ExpressionResolver.lambda$null$2(ExpressionResolver.java:247) ~[flink-table-api-java-uber-1.16.0.jar:1.16.0]
at java.util.function.Function.lambda$andThen$1(Unknown Source) ~[?:?]
at java.util.function.Function.lambda$andThen$1(Unknown Source) ~[?:?]
at java.util.function.Function.lambda$andThen$1(Unknown Source) ~[?:?]
at java.util.function.Function.lambda$andThen$1(Unknown Source) ~[?:?]
at java.util.function.Function.lambda$andThen$1(Unknown Source) ~[?:?]
at org.apache.flink.table.expressions.resolver.ExpressionResolver.resolve(ExpressionResolver.java:210) ~[flink-table-api-java-uber-1.16.0.jar:1.16.0]
at org.apache.flink.table.operations.utils.OperationTreeBuilder.projectInternal(OperationTreeBuilder.java:199) ~[flink-table-api-java-uber-1.16.0.jar:1.16.0]
at org.apache.flink.table.operations.utils.OperationTreeBuilder.project(OperationTreeBuilder.java:174) ~[flink-table-api-java-uber-1.16.0.jar:1.16.0]
at org.apache.flink.table.api.internal.TableImpl$WindowGroupedTableImpl.select(TableImpl.java:638) ~[flink-table-api-java-uber-1.16.0.jar:1.16.0]
at *******************.*****.execute(Test.java:195) ~[?:?]
at ****************************.******.main(Test.java:23) ~[?:?]
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) ~[?:?]
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) ~[?:?]
at java.lang.reflect.Method.invoke(Unknown Source) ~[?:?]
Any pointers or suggestions much appreciated, thanks in advance.
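(A hedged reading of the error: .over() introduces an over-window, which the resolver cannot mix into a group-window select. A sketch of the alternative, using lastValue() as a plain aggregate inside the session-window select; untested against this schema, and the alias is made up:)

// Hedged sketch: lastValue() as a group-window aggregate, without .over().
TablePipeline tablePipeline = windowGroupedTable.select(
    $("account_id"),
    $("play_device_upload_time").lastValue().as("last_upload_time"));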
Yang LI
01/31/2023, 9:30 AM
Reme Ajayi
01/31/2023, 11:39 AM
Caused by: org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: Could not forward element to next operator
at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:99)
at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:57)
at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:29)
at org.apache.flink.streaming.runtime.tasks.SourceOperatorStreamTask$AsyncDataOutputToOutput.emitRecord(SourceOperatorStreamTask.java:313)
at org.apache.flink.streaming.api.operators.source.SourceOutputWithWatermarks.collect(SourceOutputWithWatermarks.java:110)
at org.apache.flink.connector.kafka.source.reader.KafkaRecordEmitter$SourceOutputWrapper.collect(KafkaRecordEmitter.java:67)
at org.apache.flink.api.common.serialization.DeserializationSchema.deserialize(DeserializationSchema.java:84)
at org.apache.flink.connector.kafka.source.reader.deserializer.KafkaValueOnlyDeserializationSchemaWrapper.deserialize(KafkaValueOnlyDeserializationSchemaWrapper.java:51)
at org.apache.flink.connector.kafka.source.reader.KafkaRecordEmitter.emitRecord(KafkaRecordEmitter.java:53)
... 14 more
Caused by: org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: Could not forward element to next operator
at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:99)
at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:57)
at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:29)
at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)
at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)
at org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:38)
at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:82)
... 22 more
My transformation code:
DataStream<GenericRecord> mappedStream = historyLedgerStream.map(new MapFunction<GenericRecord, GenericRecord>() {
    @Override
    public GenericRecord map(GenericRecord value) throws Exception {
        // Compare with equals() rather than == (Avro may also return a Utf8 here).
        if ("cad".equals(String.valueOf(value.get("currency")))) {
            value.put("currency", "Canadian Dollar");
        }
        return value;
    }
});
Nitin Agrawal
01/31/2023, 2:13 PM
Nitin Agrawal
01/31/2023, 2:14 PM
WITH (
  'auto-compaction' = 'true',
  'connector' = 'filesystem',
  'format' = 'json',
  'path' = '/flink-data-service/',
  'source.monitor-interval' = '30s'
)
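(For context, a hedged sketch of how a WITH clause like this sits in a complete CREATE TABLE; the columns are placeholders, and note that 'auto-compaction' applies when writing with the filesystem connector while 'source.monitor-interval' applies when reading:)

-- Hedged sketch: the WITH clause above inside a full DDL statement.
CREATE TABLE fs_table (
  id      STRING,
  payload STRING
) WITH (
  'auto-compaction' = 'true',
  'connector' = 'filesystem',
  'format' = 'json',
  'path' = '/flink-data-service/',
  'source.monitor-interval' = '30s'
);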
PRASHANT GUPTA
01/31/2023, 3:00 PM
Piotr Pawlaczek
01/31/2023, 3:30 PM
Is there an unnest function counterpart in Flink SQL?
For example, let's say I have the following CSV file:
Date,USD,GBP
2022-08-01,1.01,1.30
In Postgres I can run:
SELECT date,
       unnest(array['USD','GBP']) as currency,
       unnest(array[USD,GBP]) as rate
so I get:
date      |currency|rate
----------+--------+----
2022-08-01|USD     |1.01
2022-08-01|GBP     |1.30
In other words: is there a way to convert columns into rows?
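(Flink SQL does support UNNEST via a cross join; a hedged sketch of one way to pivot those columns into rows, with a hypothetical table name:)

-- Hedged sketch: columns to rows with CROSS JOIN UNNEST; "rates" is a
-- placeholder table holding the CSV columns above.
SELECT r.`date`, t.currency, t.rate
FROM rates AS r
CROSS JOIN UNNEST(
  ARRAY[ROW('USD', r.USD), ROW('GBP', r.GBP)]
) AS t (currency, rate);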