# troubleshooting
c
Hello, I'm looking around to see if anyone has gone through a Flink SQL Client setup on Kubernetes, along with a Hive metastore. I'm currently attempting to read CSV from S3 via a Hive metastore. I have this working with a Trino cluster, but since I already have a Flink ecosystem running, I would like to delegate it to Flink if possible and not add another component to the mix. Any suggestions would be greatly appreciated.
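For context, the end state I'm after looks roughly like this in the SQL client (the table name is made up for illustration; myhive is the catalog I configure below):

-- what I want to end up running in the Flink SQL client
USE CATALOG myhive;
SHOW TABLES;
-- the CSV files on S3 would be exposed as a Hive table
SELECT * FROM my_csv_table LIMIT 10;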
My jobmanager and taskmanager pod spec:
volumeMounts:
        - name: flink-config-volume
          mountPath: /opt/flink/conf
        - name: hive-conf-vol
          mountPath: /opt/flink/hive-conf/core-site.xml
          subPath: core-site.xml
        - name: hive-conf-vol
          mountPath: /opt/flink/hive-conf/metastore-site.xml
          subPath: metastore-site.xml
        - name: hive-conf-vol
          mountPath: /opt/flink/hive-conf/hive-site.xml
          subPath: hive-site.xml
----
      volumes:
      - name: flink-config-volume
        configMap:
          name: flink-config
          items:
          - key: flink-conf.yaml
            path: flink-conf.yaml
          - key: log4j-console.properties
            path: log4j-console.properties
      - name: hive-conf-vol
        configMap:
          name: flink-hive-configmap
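A quick sanity check that the mounts actually landed in the pod (the deployment name here is an assumption, adjust to yours):

# all three XML files should show up under the mount path
kubectl exec deploy/flink-jobmanager -- ls -l /opt/flink/hive-conf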
I added the following to flink-conf.yaml (shipped via the flink-config ConfigMap):
security.kerberos.fetch.delegation-token: false
env.java.opts: "-Dflink.catalog.configuration-file=/opt/flink/hive-conf/flink-hive-conf.yaml"
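And to confirm the JVM actually received that option (the pod name is the one from my cluster):

# the java command line should contain -Dflink.catalog.configuration-file=...
kubectl exec flink-jobmanager-f858dfc46-hdbsc -- ps -ef | grep catalog.configuration-file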
My Hive-related catalog config:
cat flink-hive-config.yaml
catalogs:
  - name: myhive
    type: hive
    hive-version: 3.0.0
    hive-conf-dir: /opt/flink/hive-conf
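For what it's worth, the same catalog can also be registered straight from the SQL client with DDL, which takes the YAML file out of the equation; a minimal sketch using the values above:

CREATE CATALOG myhive WITH (
  'type' = 'hive',
  'hive-version' = '3.0.0',
  'hive-conf-dir' = '/opt/flink/hive-conf'
);
USE CATALOG myhive;
SHOW DATABASES;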
The hive-site.xml that I have added to the Flink cluster:
<configuration>
  <property>
    <name>hive.metastore.local</name>
    <value>false</value>
  </property>

  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://metastore.flink-hive-test.svc.cluster.local:9083</value>
  </property>

</configuration>
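I haven't pasted core-site.xml, but for the S3 read it carries the usual s3a settings, roughly along these lines (every value here is a placeholder, not my real config):

<configuration>
  <property>
    <name>fs.s3a.endpoint</name>
    <value>s3.amazonaws.com</value>
  </property>
  <property>
    <name>fs.s3a.access.key</name>
    <value>PLACEHOLDER</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>PLACEHOLDER</value>
  </property>
</configuration>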
The Dockerfile that I'm using for the Flink standalone cluster:
FROM flink:1.16.2@sha256:d836ec8670b74308df5300e48da1905d09c075cbdcf64ef77af3cc28593fceb9

WORKDIR /opt/flink

# Create the hive-conf directory
RUN mkdir -p /opt/flink/hive-conf
COPY flink-hive-config.yaml /opt/flink/hive-conf
RUN ls conf/

# Move flink-table-planner JAR to lib directory
RUN mv $FLINK_HOME/opt/flink-table-planner_2.12-1.16.2.jar $FLINK_HOME/lib/flink-table-planner_2.12-1.16.2.jar

# Move flink-table-planner-loader JAR out of lib directory
RUN mv $FLINK_HOME/lib/flink-table-planner-loader-1.16.2.jar $FLINK_HOME/opt/flink-table-planner-loader-1.16.2.jar

# Download and include Hive dependencies
RUN wget -O $FLINK_HOME/lib/flink-connector-hive_2.12-1.16.2.jar https://repo1.maven.org/maven2/org/apache/flink/flink-connector-hive_2.12/1.16.2/flink-connector-hive_2.12-1.16.2.jar && \
    wget -O $FLINK_HOME/lib/hive-exec-3.1.0.jar https://repo1.maven.org/maven2/org/apache/hive/hive-exec/3.1.0/hive-exec-3.1.0.jar && \
    wget -O $FLINK_HOME/lib/libfb303-0.9.3.jar https://repo1.maven.org/maven2/org/apache/thrift/libfb303/0.9.3/libfb303-0.9.3.jar && \
    wget -O $FLINK_HOME/lib/antlr-runtime-3.5.2.jar https://repo.maven.apache.org/maven2/org/antlr/antlr-runtime/3.5.2/antlr-runtime-3.5.2.jar && \
    wget -O $FLINK_HOME/lib/flink-sql-connector-hive-3.1.2_2.12-1.16.2.jar https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-hive-3.1.2_2.12/1.16.2/flink-sql-connector-hive-3.1.2_2.12-1.16.2.jar


RUN wget -O $FLINK_HOME/lib/hadoop-common-3.2.0.jar https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/3.2.0/hadoop-common-3.2.0.jar
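To sanity-check the resulting image, listing what actually ends up on the classpath (image tag is a placeholder):

# expect flink-connector-hive_2.12-1.16.2.jar, hive-exec-3.1.0.jar and
# flink-table-planner_2.12-1.16.2.jar, with no flink-table-planner-loader jar
docker run --rm my-flink-hive:latest ls /opt/flink/lib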
The Dockerfile for my running Hive metastore:
FROM openjdk:8-slim

ARG HADOOP_VERSION=3.2.0

RUN apt-get update && apt-get install -y curl --no-install-recommends \
	&& rm -rf /var/lib/apt/lists/*

# Download and extract the Hadoop binary package.
RUN curl https://archive.apache.org/dist/hadoop/core/hadoop-$HADOOP_VERSION/hadoop-$HADOOP_VERSION.tar.gz \
	| tar xvz -C /opt/  \
	&& ln -s /opt/hadoop-$HADOOP_VERSION /opt/hadoop \
	&& rm -r /opt/hadoop/share/doc

# Add S3a jars to the classpath using this hack.
RUN ln -s /opt/hadoop/share/hadoop/tools/lib/hadoop-aws* /opt/hadoop/share/hadoop/common/lib/ && \
    ln -s /opt/hadoop/share/hadoop/tools/lib/aws-java-sdk* /opt/hadoop/share/hadoop/common/lib/

# Set necessary environment variables.
ENV HADOOP_HOME="/opt/hadoop"
ENV PATH="/opt/spark/bin:/opt/hadoop/bin:${PATH}"

# Download and install the standalone metastore binary.
RUN curl http://apache.uvigo.es/hive/hive-standalone-metastore-3.0.0/hive-standalone-metastore-3.0.0-bin.tar.gz \
	| tar xvz -C /opt/ \
	&& ln -s /opt/apache-hive-metastore-3.0.0-bin /opt/hive-metastore

# Download and install the mysql connector.
RUN curl -L https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.47.tar.gz \
	| tar xvz -C /opt/ \
	&& ln -s /opt/mysql-connector-java-5.1.47/mysql-connector-java-5.1.47.jar /opt/hadoop/share/hadoop/common/lib/ \
	&& ln -s /opt/mysql-connector-java-5.1.47/mysql-connector-java-5.1.47.jar /opt/hive-metastore/lib/
This metastore is backed by MariaDB.
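For reference, schema init and startup inside that image go roughly like this (paraphrased, not my exact entrypoint; MariaDB uses the mysql dbType):

# one-time schema creation in the MariaDB backend
/opt/hive-metastore/bin/schematool -initSchema -dbType mysql
# start the thrift service on the default port 9083
/opt/hive-metastore/bin/start-metastore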
I'm a bit lost here, since Flink doesn't seem to be able to identify the Hive metastore itself.
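If it helps to diagnose, grepping the jobmanager logs for catalog/metastore lines is a quick first check (deployment name is an assumption):

kubectl logs deploy/flink-jobmanager | grep -iE 'hive|catalog|metastore'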
o
You can have a look at this project and see if it is of any help.
c
Well, my test was based on that repo as a reference. However, the key difference is that they are using Hive 2.3.6 and I'm on 3.0.0. The rest of the files are the same ones that worked with Trino, so that is what stumps me as to what I'm doing wrong here.
And the confusing bit: from inside the jobmanager, if I run

flink@flink-jobmanager-f858dfc46-hdbsc:~$ curl metastore:9083
curl: (52) Empty reply from server
flink@flink-jobmanager-f858dfc46-hdbsc:~$ echo $?
52

it gives the exit code required, which acts as the test that confirms the metastore exists: https://github.com/fhueske/flink-sql-demo/blob/master/client-image/docker-entrypoint.sh
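My reading of the check in that script, as a sketch: exit code 52 ("Empty reply from server") means the TCP connection to 9083 succeeded and thrift simply isn't speaking HTTP, so the port counts as up:

# 52 = connected but got an empty reply; 7 would mean connection refused
curl -s metastore:9083
if [ $? -eq 52 ]; then
  echo "metastore thrift port is reachable"
fi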