# performance
e
lowering `--max-workers` could help, but Gradle should already try to avoid spawning workers when memory is too tight, so possibly something is wrong with that estimation
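For reference, that cap can be lowered either per invocation or persistently; both forms below are standard Gradle options (the value 4 is just an example):
```
# one-off, on the command line
./gradlew build --max-workers=4

# or persistently, in gradle.properties
org.gradle.workers.max=4
```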
🤔 1
c
we are using 8.6 and the test tasks have a high Xmx (`maxHeapSize`) set
j
I can only share that I am facing similar issues when going fully parallel with the config cache. The tricky thing is that you want to limit the tests, but not other tasks, so `--max-workers` is not helpful IMO. If you already parallelize the tests through JUnit, using an empty build service with `maxParallelUsages = 1` to limit how many test tasks run in parallel sounds like a good solution (we do that; see the sketch below). And also keep `maxParallelForks` at 1 so that there is only one worker per test task.
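A minimal sketch of that approach, assuming a `build.gradle.kts` (the `TestLimiter` name is made up for illustration; `BuildService`, `maxParallelUsages`, `usesService` and `maxParallelForks` are standard Gradle API):
```kotlin
import org.gradle.api.services.BuildService
import org.gradle.api.services.BuildServiceParameters
import org.gradle.api.tasks.testing.Test

// An intentionally empty build service: it does nothing, it only exists to be shared.
abstract class TestLimiter : BuildService<BuildServiceParameters.None>

// At most one task may use the service at a time, so Test tasks are serialized
// while all other tasks keep running in parallel.
val testLimiter = gradle.sharedServices.registerIfAbsent("testLimiter", TestLimiter::class.java) {
    maxParallelUsages.set(1)
}

tasks.withType<Test>().configureEach {
    usesService(testLimiter) // Gradle won't schedule two tasks holding this service concurrently
    maxParallelForks = 1     // one test worker JVM per task; in-JVM parallelism stays with JUnit
}
```
The service body is empty on purpose: `maxParallelUsages` does all the throttling, effectively a semaphore with a single permit.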
c
I used the fake service approach to limit it. I talked with some Gradle engineers yesterday; it's on their radar but not started yet. The 8.1 memory management improvements are only about not keeping idle workers around when there is not enough memory.
I believe it will become very painful when Gradle enables CC by default, because people will run into this kind of issue.
This also makes me wonder what's faster...
• setting `maxParallelForks=8` on each test task, but using a single worker for tests
• using 8 workers, but keeping `maxParallelForks=1`
(I could try, but I believe it's about the same)
e
probably depends on project structure in practice (e.g. with 2 modules, test forking is definitely more parallelizable)
j
I believe it will become very painful when Gradle enables CC by default
I think that will mostly happen to folks who had not activated "parallel" before either. But for those there might be other surprises as well.
probably depends on project structure in practice
Yes. Having seen a bunch of different projects in the past months, this is my experience. It depends a lot on how you are testing and what the tests are doing. I think this is to a large degree also a documentation and usability issue. I think like 90% of users do not understand the different layers of parallelization:
• multiple tasks
• one task with multiple worker VMs
• multiple threads in a VM via JUnit 5 parallelization
• tests creating multiple threads themselves
• tests starting additional processes themselves
And this is also too complex to "just work". You can maybe improve some things in Gradle, but it will never be able to automatically figure out the best parallelization for your setup. Plus there is the issue of different contexts: sometimes you want to use as much machine power as possible (CI agents), and sometimes you want to run in the background and only use half of your machine (run all the tests in the background without making my laptop feel slow while I am in a Zoom call).
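For concreteness, a rough sketch of the middle two layers, assuming a `build.gradle.kts` with JUnit 5 on the test classpath (the values are arbitrary examples; the `junit.jupiter.execution.parallel.*` keys are JUnit 5's documented configuration parameters):
```kotlin
import org.gradle.api.tasks.testing.Test

tasks.withType<Test>().configureEach {
    useJUnitPlatform()

    // Layer: one task with multiple worker JVMs
    maxParallelForks = 2

    // Layer: multiple threads inside each worker JVM, via JUnit 5's own parallelism
    systemProperty("junit.jupiter.execution.parallel.enabled", "true")
    systemProperty("junit.jupiter.execution.parallel.mode.default", "concurrent")
}
```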
c
This build had `--no-parallel` precisely to handle the memory issue, but you are right. Multiple layers of parallelization + not well documented (memory usage tracking is not great on Gradle's side either). I agree it's complex; the question is how to help reduce the friction.
In this case there are more than 8 test tasks, so I don't think it would make a big difference. If someday I make the tests cacheable, test forking wins.
m
Precisely for this reason I implemented a semaphore for test executions
Gradle is missing a concept like "these things are not safe to run in parallel", e.g. when they share resources
c
Isn't the build service exactly for shared resources?
however, I am facing this bug so often that I have lost all trust in build services: https://github.com/gradle/gradle/issues/17559
😱 1
I am not surprised this hasn't been fixed for more than 2 years, it's a deep design problem
however it makes build services nearly impossible to use unless you force users into rewriting their builds
but that's a different problem than yours, let's not get sidetracked
c
I'm happy that my case is just about my build, not a plugin