# troubleshooting
g
ok... reaching out... my brain is turning to mush... trying to get flink to use a hive catalog, stored in postgresql, with data going as iceberg to minio. i've got hive talking to postgresql and created the catalog under the given credentials, and i've got hive talking to minio - confirmed because if i don't pre-create the bucket then hive crashes. here is the sql being issued in my sql-client, where i try to define/create a catalog, and the error.
Flink SQL>
> CREATE CATALOG c_iceberg_hive WITH (
>    'type'                  = 'iceberg',
>    'io-impl'               = 'org.apache.iceberg.aws.s3.S3FileIO',
>    'warehouse'             = 's3://warehouse',
>    's3.endpoint'           = 'http://minio:9000',
>    's3.path-style-access'  = 'true',
>    'catalog-type'          = 'hive'
> );
[ERROR] Could not execute SQL statement. Reason:
java.lang.NoSuchMethodError: 'void com.google.common.base.Preconditions.checkArgument(boolean, java.lang.String, java.lang.Object)'
thinking it will take someone who's done this like 5 min to spot my gap/miss... please.
d
Do you know what version of Guava is in your dependency?
d
make sure the compile-time and runtime versions are the same
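A minimal way to verify that on the box itself - a diagnostic sketch, assuming plain guava is on the classpath and that you run it with the same classpath as the sql-client (e.g. everything under flink/lib); the class name GuavaCheck is made up here. A NoSuchMethodError on that exact checkArgument overload usually means an older guava copy is resolving first, since the overload only exists in newer guava releases.

import com.google.common.base.Preconditions;

public class GuavaCheck {
    public static void main(String[] args) throws Exception {
        // Which jar did the JVM actually load Preconditions from?
        System.out.println(Preconditions.class
                .getProtectionDomain().getCodeSource().getLocation());

        // Does the overload from the stack trace exist in that copy?
        // Throws NoSuchMethodException on old guava versions.
        Preconditions.class.getMethod(
                "checkArgument", boolean.class, String.class, Object.class);
        System.out.println("checkArgument(boolean, String, Object) is present");
    }
}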
g
from the flink dockerfile
d
looks like Hadoop is shading it…
maybe you can try upgrading Hadoop?
it's more complex and error-prone, but you could try excluding the shade and managing it yourself
g
huh
note i'm an idiot as far as java is concerned...
let me ask a simple question... for me that type needs to be hive, since i'm creating a catalog of type hive stored via my hive-metastore inside postgresql. inside this catalog i then create a database where i create tables whose connector is iceberg with 'write.format.default' = 'parquet'
my thinking says: i create a hive catalog, pointing it at the ./conf dir where hive-site.xml tells it where the postgresql database is, and also telling it where my minio server is and which data warehouse (aka bucket) to use.
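For reference, a sketch of that whole flow, written against flink's java Table API so it's self-contained; the same DDL strings can be pasted straight into the sql-client. The thrift endpoint hive-metastore:9083, the ./conf path, and the db01/t_demo names are assumptions for this setup; 'uri' and 'hive-conf-dir' are the iceberg catalog options that point flink at the metastore and at hive-site.xml.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class IcebergHiveCatalogSketch {
    public static void main(String[] args) {
        TableEnvironment env = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Same catalog as above, plus the metastore uri and the dir
        // holding hive-site.xml (both values assumed for this setup).
        env.executeSql(
            "CREATE CATALOG c_iceberg_hive WITH (\n"
          + "  'type'                 = 'iceberg',\n"
          + "  'catalog-type'         = 'hive',\n"
          + "  'uri'                  = 'thrift://hive-metastore:9083',\n"
          + "  'hive-conf-dir'        = './conf',\n"
          + "  'io-impl'              = 'org.apache.iceberg.aws.s3.S3FileIO',\n"
          + "  'warehouse'            = 's3://warehouse',\n"
          + "  's3.endpoint'          = 'http://minio:9000',\n"
          + "  's3.path-style-access' = 'true'\n"
          + ")");

        // Tables created inside an iceberg catalog are iceberg tables
        // already, so no 'connector' option is needed on the table.
        env.executeSql("CREATE DATABASE c_iceberg_hive.db01");
        env.executeSql(
            "CREATE TABLE c_iceberg_hive.db01.t_demo (\n"
          + "  id BIGINT,\n"
          + "  note STRING\n"
          + ") WITH ('write.format.default' = 'parquet')");
    }
}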
d
makes sense
not too familiar with shaded jars, but i think it's a custom jar used by hadoop with some classes pulled out of it
this is not guava but a special version of it, with maybe some missing or modified classes. So the idea was to either upgrade hadoop so it uses a different shaded guava jar that might work better, OR
remove this jar and try to add your own guava jar. All of this is a bit risky of course
upgrading hadoop to a newer version, or just installing a newer one, might be the easier course, if that's possible, to try to get a different result.
Anyway, just wanted to clarify the term shaded. I had not heard it before either, and I've been working with Java since back in the day
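To make "shaded" concrete: the shade step copies a dependency's classes into the consumer's own jar under a rewritten package name, so they can't clash with another copy on the classpath. A small sketch, assuming both plain guava and hadoop 3.x's hadoop-shaded-guava are on the classpath; to the JVM these are two unrelated classes, which is why hadoop's relocated copy can neither clash with nor satisfy iceberg's call to com.google.common.base.Preconditions.

public class ShadingDemo {
    public static void main(String[] args) {
        // Plain guava: the class name iceberg was compiled against.
        System.out.println(
                com.google.common.base.Preconditions.class.getName());

        // Hadoop 3.x's relocated copy: same source, different package.
        System.out.println(
                org.apache.hadoop.thirdparty.com.google.common.base.Preconditions
                        .class.getName());
    }
}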
πŸ‘ 1