# community-support
v
If build scans do not help, maybe the Gradle Profiler (https://github.com/gradle/gradle-profiler) can be useful
r
So I’ve spent quite a bit of time combing through build scans on this. Gradle Profiler isn’t an option here because the issue only manifests itself in Jenkins, not locally. If I need to install Gradle Profiler on Jenkins, that will take significantly more work, but we’ll see. There are two traits that I’ve been able to correlate so far:
• Number of tasks created during task graph calculation. The Gradle docs say this is a best practice, but I’m still keeping an eye on it (sketch below).
• Time spent fingerprinting task inputs. In builds with a long task graph calculation time, I’m seeing over an hour spent on this (I assume that’s serial time, not wall-clock time). In builds with a shorter task graph calculation, I’m seeing anywhere from 30 seconds to 3 minutes.
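The task-creation point above presumably refers to Gradle’s task configuration avoidance, i.e. registering tasks lazily so fewer of them are realized while the task graph is calculated. A minimal Kotlin DSL sketch of the difference; the task names are hypothetical and used only for illustration:

```kotlin
// build.gradle.kts — minimal sketch of task configuration avoidance.
// "slowReport" / "slowReportEager" are hypothetical task names.

// Eager: created and configured during configuration time, even if it never runs,
// which inflates the number of tasks created while the task graph is calculated.
tasks.create("slowReportEager") {
    doLast { println("generating report (eager)") }
}

// Lazy: only created and configured when something actually requires it.
tasks.register("slowReport") {
    doLast { println("generating report (lazy)") }
}
```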
c
perhaps the fingerprinting is including things it shouldn’t from the CI server? Temp files accumulating?
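If stray files on the agent are a suspect, one way to rule them out is Gradle’s input normalization, which excludes matching files from fingerprinting of runtime classpath inputs (it does not cover arbitrary file inputs). A minimal Kotlin DSL sketch; the ignore pattern is hypothetical:

```kotlin
// build.gradle.kts — hedged sketch: keep files that should not matter out of
// runtime classpath fingerprints. The pattern below is purely illustrative.
normalization {
    runtimeClasspath {
        ignore("**/*.tmp")
    }
}
```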
r
Oh good thinking. The number of task inputs being fingerprinted doesn’t seem to correlate with the serial execution time of the fingerprinting itself, though. I’ve also started tagging my build scans with the physical hostname to see if there’s any correlation there, and there isn’t one so far.
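For reference, a minimal sketch of that kind of tagging, assuming the Gradle Enterprise settings plugin (com.gradle.enterprise) is what produces the build scans; the plugin version and the JENKINS_URL check are illustrative:

```kotlin
// settings.gradle.kts — hedged sketch, assuming the com.gradle.enterprise plugin
// already produces the build scans; server / terms-of-service config omitted.
plugins {
    id("com.gradle.enterprise") version "3.16.2" // illustrative version
}

gradleEnterprise {
    buildScan {
        // Tag each scan with the agent's hostname so slow builds can be
        // correlated (or ruled out) against specific Jenkins nodes.
        tag(java.net.InetAddress.getLocalHost().hostName)

        // Optionally mark CI builds; JENKINS_URL is commonly set on Jenkins agents.
        if (System.getenv("JENKINS_URL") != null) {
            tag("CI")
        }
    }
}
```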
r
Besides the specific Gradle configuration problem, if you only face this on Jenkins, have you isolated what is different about those machines compared to the local ones? Examples:
- Does the agent have less hardware available (CPU, number of cores, memory...)?
- Do you use ephemeral containers for each build, or do you keep a long-living one? If the latter: might your daemon be exhausted after running for a long time, e.g. memory leaks forcing it to do too much garbage collection during the build (see the sketch below)?
- Any other agent-specific configuration that might have an impact, like a different OS.
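On the long-lived daemon idea, a rough diagnostic sketch in the Kotlin DSL that prints the daemon’s heap usage and cumulative GC time at the end of a build, so a long-running Jenkins daemon can be compared against a fresh local one. gradle.buildFinished is deprecated in recent Gradle versions (and incompatible with the configuration cache), but it is serviceable for a one-off check:

```kotlin
// build.gradle.kts — rough diagnostic sketch: report daemon heap and cumulative
// GC time when the build finishes, to spot a daemon that is thrashing the GC.
import java.lang.management.ManagementFactory

gradle.buildFinished {
    val heap = ManagementFactory.getMemoryMXBean().heapMemoryUsage
    val gcMillis = ManagementFactory.getGarbageCollectorMXBeans()
        .sumOf { it.collectionTime }

    println("Daemon heap: ${heap.used / (1024 * 1024)} MiB used of ${heap.max / (1024 * 1024)} MiB max")
    println("Cumulative GC time in this daemon process: $gcMillis ms")
}
```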