# troubleshoot
b
Hi, I discovered a really strange issue lately: after ingesting some demo data into DataHub, the server stops responding after some period of time, returning error 500. I created an issue on GitHub for it: https://github.com/datahub-project/datahub/issues/4619 Is this a known problem?
d
@breezy-portugal-43538 It seems like your Elasticsearch container ran out of memory. I can see this in your logs:
```
elasticsearch             | {"type": "server", "timestamp": "2022-04-08T08:18:10,060Z", "level": "ERROR", "component": "o.e.b.ElasticsearchUncaughtExceptionHandler", "cluster.name": "docker-cluster", "node.name": "elasticsearch", "message": "fatal error in thread [Thread-8], exiting", "cluster.uuid": "UX0eKJnwTJ-O2lUjTy91kw", "node.id": "7EpuXZTQR6GwQzXeRKC9IQ" ,
elasticsearch             | "stacktrace": ["java.lang.OutOfMemoryError: Cannot reserve 1048576 bytes of direct buffer memory (allocated: 133410575, limit: 134217728)",
```
Please, can you give a bit more memory to your Docker environment?
How much memory does your Docker container have?
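For reference, `docker stats` is a quick way to check this; a minimal sketch, assuming the container is named `elasticsearch` as in the log prefix above:

```sh
# One-off snapshot of memory usage and the configured memory limit
# for the Elasticsearch container (container name taken from the log
# prefix above; adjust it if yours is named differently).
docker stats --no-stream elasticsearch
```

The MEM USAGE / LIMIT column shows how close the container is to its cap.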
b
Hi @dazzling-judge-80093, thank you a lot for all your help and input 😄 So, I have around 1 TB of memory. What is the correct way to extend the memory for a DataHub Docker container? Also: if I am running this with the bare minimum number of datasets (7) and it goes over after some period of time, without any additional ingestion, doesn't that point to some memory leak?
d
@early-lamp-41924 have you seen out-of-memory errors with ES on Docker?
e
From the image you posted, it seems like it is only using a small amount of memory?
b
I think this is a problem where you may not have granted either a) Docker or b) the specific container enough memory. If you have Docker Desktop installed (the desktop app), make sure that you've given it 8GB of memory - this is the bare minimum required for Elasticsearch 🙂
Here's my setup - I'm going over the recommended 8GB, up to 12GB
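If it helps, one rough way to verify how much memory the Docker engine (or the Docker Desktop VM) has actually been given - a minimal sketch, assuming a standard Docker install:

```sh
# Total memory available to the Docker engine, reported in bytes;
# for Elasticsearch this should work out to at least 8GB.
docker info --format '{{.MemTotal}}'
```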
b
I changed the parameters in yml files:
ES_JAVA_OPTS=-Xms8g -Xmx8g -Dlog4j2.formatMsgNoLookups=true
and
mem_limit: 10g
So now my Elasticsearch has around 10GB, but it still allocates a lot of memory (see screenshot for reference), around 96%. I also uploaded a screenshot from DataHub with the ingested data for reference, and a screenshot of the Elasticsearch container. I left DataHub running over the weekend and it did not crash from out of memory, but this huge memory consumption still bugs me a little. Would it be possible to change the DataHub yml files to increase the memory limit for Elasticsearch? Additionally, is there any way to flush the memory used by Elasticsearch?
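For what it's worth, a quick way to see how much of that allocation Elasticsearch is actually using (JVM heap vs. overall RAM) is the `_cat/nodes` API; a minimal sketch, assuming ES is exposed on localhost:9200 as in the default quickstart setup:

```sh
# Per-node heap and RAM usage. A high ram.percent on its own is usually just
# the OS page cache; it's heap.percent approaching 100% (or a tiny heap, as
# suggested by the direct-buffer limit in the original error) that tends to
# end in OutOfMemoryError.
curl -s 'http://localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.max,ram.percent'
```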