# contribute-code
w
Hi everyone, I have just added a new ingestion source: SAP HANA. 🥳 It is my first bigger contribution, so please be gentle. Anyway, happy to hear your feedback. Thanks in advance! https://github.com/linkedin/datahub/pull/4376
🎉 2
🙌 1
thank you 2
teamwork 1
s
@green-football-43791 looks like some wrong commits in master are breaking PRs. Can you take a look?
w
Hi @square-activity-64562 I thought these pains might be caused by the Gradle and Java update in v.8.2.9, so I have updated my fork to the current code base. Now the tests fail at other points. The test pipelines seem extremely unstable; locally the tests are still fine. Most of the failures seem to be infrastructure related, e.g. an image that could not be pulled because of "too many requests", or a Docker image that could not boot in time for the test to complete. Do you have any ideas how I should continue here?
s
The problem with the smoke test was fixed recently.
The too many requests pull error is something that is not fixed yet.
But this test https://github.com/linkedin/datahub/runs/5550063839?check_suite_focus=true seems to be related to your changes unless I am missing something here
w
Yeah, this one is related, but it is a timeout because the Docker container could not boot in time. Locally this one runs just fine unless I put my machine under severe load (with something else) so that the container does not boot in time (HANA takes at least 3-4 mins to spin up on my machine). I could increase the timeout to 10+ mins if that helps. However, that means the slow integration test could be running for a very long time.
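As a rough sketch of what bumping the timeout could look like, assuming the test waits for the container via pytest-docker's `docker_services` fixture (which may not match the actual setup in the PR; the service name, port, and credentials below are purely illustrative):
```python
import pytest
import sqlalchemy


def hana_is_ready(url: str) -> bool:
    """Return True once the HANA SQL port accepts connections and answers a query."""
    try:
        engine = sqlalchemy.create_engine(url)
        with engine.connect() as conn:
            conn.execute(sqlalchemy.text("SELECT 1 FROM DUMMY"))
        return True
    except Exception:
        return False


@pytest.fixture(scope="module")
def hana_url(docker_ip, docker_services):
    # "testhana", port 39041 and the credentials are illustrative placeholders.
    port = docker_services.port_for("testhana", 39041)
    url = f"hana+hdbcli://SYSTEM:SomePassword1@{docker_ip}:{port}"
    # Raised to ~10 minutes, since HANA can take several minutes to boot,
    # especially on a loaded machine or a small CI runner.
    docker_services.wait_until_responsive(
        timeout=600.0, pause=10.0, check=lambda: hana_is_ready(url)
    )
    return url
```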
s
Two options come to mind:
• Increase the timeout to 10 mins to see if that helps. If it does not help, that means we do have a memory issue in GitHub Actions with the new test. How much memory does HANA take? Maybe it is too large for GitHub Actions' current settings.
• Is the `slow_hana` marker not used anywhere? Maybe we can use that to run this as a separate step (see the sketch below).
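As a minimal sketch of that idea, assuming a plain pytest setup (the file split and names are illustrative, not the repo's actual layout):
```python
# conftest.py (illustrative): register the marker so pytest does not warn
# about an unknown mark.
def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "slow_hana: slow integration tests that need the SAP HANA container",
    )


# In the test module the HANA test would be tagged with the marker:
#
#   import pytest
#
#   @pytest.mark.slow_hana
#   def test_hana_ingestion():
#       ...
#
# A dedicated CI step could then run `pytest -m slow_hana`, while the regular
# integration step excludes it with `pytest -m "not slow_hana"`.
```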
w
I added the slow_hana marker as a fallback already, when I ran into the problem on my local machine 🙂 Anyway, I will do two things: I'll measure the container's true memory consumption just in case (the recommended setting is 16 GB RAM), and I'll bump the timeout settings. Let's keep our fingers crossed 🤞.
šŸ‘ 1
The memory consumption measurement via docker stats shows a maximum of 5.7 GiB of RAM needed out of 12.5 GiB allocated. Let me know if that might be a problem. I'll push the new timeout and then we will see if that helps. Locally everything ran fine again. I could also move the test from `integration` to `slow_integration` (currently only NiFi is tested there).
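For reference, a minimal sketch of how such a peak can be sampled, since `docker stats` only reports the current value (the container name and polling interval are made up, and this is not necessarily how the number above was measured):
```python
import re
import subprocess
import time

UNIT = {"B": 1.0, "KiB": 2**10, "MiB": 2**20, "GiB": 2**30}


def mem_usage_bytes(container: str) -> float:
    """Return the container's current memory usage in bytes via `docker stats`."""
    out = subprocess.run(
        ["docker", "stats", "--no-stream", "--format", "{{.MemUsage}}", container],
        capture_output=True,
        text=True,
        check=True,
    ).stdout.strip()  # e.g. "5.7GiB / 12.5GiB"
    value, unit = re.match(r"([\d.]+)\s*([A-Za-z]+)", out).groups()
    return float(value) * UNIT.get(unit, 1.0)


if __name__ == "__main__":
    peak = 0.0
    for _ in range(60):  # roughly 10 minutes at one sample every 10 seconds
        peak = max(peak, mem_usage_bytes("testhana"))  # container name is illustrative
        print(f"peak so far: {peak / 2**30:.2f} GiB")
        time.sleep(10)
```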
@square-activity-64562 I am at my wit's end for today. The test passes in the slow version. It passes in the ingestion-by-version check with Python 3.9, but it fails with 3.6, and the error is clearly a timeout and not a version conflict. I am out of ideas. 🤷‍♂️ 🤬 Any ideas?
s
Let me find a solution
h
Hi @witty-dream-29576, I'll take a stab at it and get back to you! 🙂
w
Thanks Ravindra!
@helpful-optician-78938 Hi Ravindra. Did you find something?
@helpful-optician-78938 Hi Ravindra, sorry for bothering you again. Did you have time to take a look yet?
h
Hi @witty-dream-29576, I'll definitely get to reviewing it this week. Thanks!
w
Thanks, Ravindra
Hi @helpful-optician-78938 I had to update the pull request due to a merge conflict, because the project changed some parts around SQLAlchemy. I have updated the code to resolve the new conflict.
@helpful-optician-78938 Hi Ravindra, the code is updated and seems to work except for a timeout issue with the 3.6 ingestion. Apparently the runner crashed after a certain time. Maybe you can still have a look at it? Or maybe restarting the runner would be enough.