# ui
s
Hi guys, what if I get an error while opening the FE in the browser? I'm using the quickstart. The error:
datahub-frontend-react  | Caused by: com.linkedin.r2.message.rest.RestException: Received error 500 from server for URI http://datahub-gms:8080/corpUsers
and a few rows later:
datahub-mae-consumer   | Caused by: com.linkedin.r2.message.rest.RestException: Received error 500 from server for URI http://datahub-gms:8080/corpUsers
I already tried to nuke and retry. Any suggestion?
g
Can you look at GMS's logs? Do they have any informative errors?
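For reference, with the default quickstart container names, something like this should surface the relevant stack traces (``datahub-gms`` is the usual container name; adjust if yours differs):
```
# follow the GMS container logs and show errors with a bit of context
docker logs -f datahub-gms 2>&1 | grep -iA 5 error
```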
s
Not sure if this is what you asked for, but from the docker logs I can see that it successfully reaches (``Received 200 from``) elasticsearch, mysql, neo4j, and broker. Then there is a ton of warnings like:
2021-05-27 15:23:44.026:WARN:oeja.AnnotationParser:main: Unrecognized ASM version, assuming ASM7
2021-05-27 15:23:44.352:WARN:oeja.AnnotationParser:qtp544724190-10: org.antlr.runtime.ANTLRFileStream scanned from multiple locations: jar:file:///tmp/jetty-0_0_0_0-8080-war_war-_-any-6428066200743817198.dir/webapp/WEB-INF/lib/antlr-runtime-3.5.2.jar!/org/antlr/runtime/ANTLRFileStream.class, jar:file:///tmp/jetty-0_0_0_0-8080-war_war-_-any-6428066200743817198.dir/webapp/WEB-INF/lib/antlr4-4.5.jar!/org/antlr/runtime/ANTLRFileStream.class
2021-05-27 15:23:44.352:WARN:oeja.AnnotationParser:qtp544724190-10: org.antlr.runtime.ANTLRInputStream scanned from multiple locations: jar:file:///tmp/jetty-0_0_0_0-8080-war_war-_-any-6428066200743817198.dir/webapp/WEB-INF/lib/antlr-runtime-3.5.2.jar!/org/antlr/runtime/ANTLRInputStream.class, jar:file:///tmp/jetty-0_0_0_0-8080-war_war-_-any-6428066200743817198.dir/webapp/WEB-INF/lib/antlr4-4.5.jar!/org/antlr/runtime/ANTLRInputStream.class
2021-05-27 15:23:44.352:WARN:oeja.AnnotationParser:qtp544724190-10: org.antlr.runtime.ANTLRReaderStream scanned from multiple locations: jar:file:///tmp/jetty-0_0_0_0-8080-war_war-_-any-6428066200743817198.dir/webapp/WEB-INF/lib/antlr-runtime-
then a (maybe spurious) error:
ERROR StatusLogger Log4j2 could not find a logging implementation. Please add log4j-core to the classpath. Using SimpleLogger to log to the console...
A bunch of INFO lines, and that's all.
l
what does ``datahub check`` say?
s
I can't run that command, I'm using the quickstart
l
datahub check local-docker
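it's a separate pip package, so you can run it from the host even with quickstart; roughly:
```
# the datahub CLI installs from PyPI, independent of the quickstart containers
python3 -m pip install --upgrade acryl-datahub
datahub check local-docker
```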
s
Maybe I found something:
datahub-mae-consumer   | 15:54:48.471 [datahub-usage-event-consumer-job-client-0-C-1] ERROR o.s.k.listener.LoggingErrorHandler - Error while processing: ConsumerRecord(topic = DataHubUsageEvent_v1, partition = 0, leaderEpoch = 0, offset = 3, CreateTime = 1622130888436, serialized key size = 21, serialized value size = 442, headers = RecordHeaders(headers = [], isReadOnly = false), key = urn:li:corpuser:prova, value = {"title":"DataHub","url":"<http://hidden:9002/>","path":"/","hash":"","search":"","width":1773,"height":951,"prevPathname":"","type":"PageViewEvent","actorUrn":"urn:li:corpuser:prova","timestamp":1622130887086,"date":"Thu May 27 2021 17:54:47 GMT+0200 (Central European Summer Time)","userAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Firefox/78.0","browserId":"e4396f69-110f-4c95-9089-9172ee795ff2"})
The part ``"actorUrn":"urn:li:corpuser:prova"`` is not expected. "prova" was a temporary user I tried to create, but the nuke was supposed to delete everything.
g
Did you use the nuke.sh script to nuke?
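(for reference, assuming the script's usual location in a repo checkout, that's roughly:)
```
# stops the quickstart containers and removes their volumes
./docker/nuke.sh
```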
s
Yes, I used that one. The issue with the "prova" user was due to browser cookies. Now I can see the login page, but when entering a random user and password I get a similar error about that random username. I tried to add a user via ingestion, but I get:
ingestion  | {'failures': [{'error': 'Unable to emit metadata to DataHub GMS',
More details: I'm running Docker in a VM with Photon Linux
g
does ``datahub check local-docker`` report any issues?
s
✔️ No issues detected, but while trying to ingest I still get:
{'error': 'Unable to emit metadata to DataHub GMS',
g
Are you running ingestion via the datahub cli tool or using the docker image?
Seems like it’s within the docker image based on the logs but wanted to check
Could you also paste the recipe config - particularly the sink?
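If it helps, you can also try the same recipe from the host with the CLI; a rough sketch, where the recipe filename is just illustrative and the sink would point at the published port rather than the compose hostname:
```
# from the host, the compose hostname datahub-gms is not resolvable,
# so the recipe's sink server would be http://localhost:8080 instead
datahub ingest -c ./recipe.yml
```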
s
Yes, via ingestion.sh, so the docker image. Here's the recipe:
source:
  type: "file"
  config:
    filename: "./user_lv.json"
sink:
  type: "datahub-rest"
  config:
    server: 'http://datahub-gms:8080'
The json contains just a user. FYI, same error if I use the demo data for ingestion.
I finally found the issue. At some point I spotted ``Table 'datahub.metadata_aspect' doesn't exist``, so I followed the troubleshooting guide: https://datahubproject.io/docs/debugging
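For anyone hitting the same thing, a quick way to see which tables the db actually has (assuming the quickstart's default mysql container and credentials) is roughly:
```
# list the tables in the datahub schema inside the quickstart mysql container
docker exec -i mysql mysql -udatahub -pdatahub -e 'SHOW TABLES IN datahub;'
```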
g
Huh that’s really odd - haven’t seen that issue come up in a long while and I thought it was solved. Do you recall the sequence of steps you used to reproduce that issue?
s
Not really, but tomorrow I will try from a clean VM
b
I encountered this when I ran the containers in a RHEL VM on my company's network. Apparently the init.sql never ran successfully or something. But I had no issues running it in an EC2 RHEL VM, so I assumed it was just some security policy at work.
s
I can confirm: with Photon Linux I need to run
docker exec -i mysql sh -c 'exec mysql datahub -udatahub -pdatahub' < docker/mysql/init.sql
g
cc @early-lamp-41924
e
very interesting. so we have a mysql-setup-job that we created for kubernetes
we may want to add it to docker-compose as well. it will be a no-op if the init sql already ran, but if it didn't, it will explicitly run the setup sql
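a minimal sketch of that no-op idea, assuming the quickstart's default mysql container and credentials:
```
# run the init script only if the aspect table is missing; a no-op otherwise
if ! docker exec -i mysql mysql -udatahub -pdatahub -N \
    -e "SELECT 1 FROM information_schema.tables WHERE table_schema='datahub' AND table_name='metadata_aspect';" \
    | grep -q 1; then
  docker exec -i mysql sh -c 'exec mysql datahub -udatahub -pdatahub' < docker/mysql/init.sql
fi
```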
f
Hi, I am getting the same error with the latest version. When I initialize, this table is not created. The init.sql only contains metadata_aspect_v2, no metadata_aspect: https://github.com/linkedin/datahub/blob/master/docker/mysql/init.sql. I've created the table manually in the MySQL db and can get past the error.
e
Hey @faint-hair-91313! How are you deploying datahub? If you are using quickstart.sh to deploy datahub, we recently made a change to make it easier to extend the metadata models. Because it was backward incompatible, we changed the name of the storage system. If you have already ingested, can you follow this guide https://datahubproject.io/docs/advanced/no-code-upgrade to migrate? Otherwise, try nuking and restarting with the latest quickstart.sh. Once the datahub containers have been updated, it should be reading from metadata_aspect_v2 table.
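i.e. roughly, assuming the scripts' usual locations in a repo checkout:
```
# full reset: wipe containers and volumes, then start the latest quickstart images
./docker/nuke.sh
./docker/quickstart.sh
```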
f
Hi, I did the latter. I nuked, pulled the new code, and restarted with quickstart ...