# testing
a
Hey if y'all have a moment, could you let me know how many unit tests you have for the app yer working on, and how long it takes TestBox to run them? NB: specifically unit tests. Not tests that hit external services / DBs etc. Just tests that are exercising algorithmic / business logic, not network connectivity. And specifically TestBox. Cheers.
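(To be clear about what I'm counting: something like the sketch below — pure in-memory logic, no DB or HTTP anywhere. The component and figures are made up purely for illustration.)

```cfml
// Illustrative only: a "unit test" in the sense meant above — a TestBox BDD
// spec that exercises business logic in memory, with no external services.
// models.DiscountCalculator and its numbers are invented for this example.
component extends="testbox.system.BaseSpec" {

	function run() {
		describe( "DiscountCalculator (hypothetical)", function() {

			it( "applies a 10% discount to orders over 100", function() {
				var calc = new models.DiscountCalculator();
				expect( calc.priceFor( 150 ) ).toBe( 135 );
			} );

			it( "leaves small orders untouched", function() {
				var calc = new models.DiscountCalculator();
				expect( calc.priceFor( 50 ) ).toBe( 50 );
			} );

		} );
	}

}
```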
d
Simple answer is: not nearly enough! (When I started we had none, so it's slow progress, but nowhere close to decent coverage!) With a freshly started server on ACF it is very slow (seems to be related to CF being slow at checking CFC metadata). The second run is 2,217 specs and runs in 28,638 ms.
a
That's a good number of specs! Don't sell yerself short.
r
For my business apps, virtually zero "unit tests". No mocks; everything hits a DB or API except a few value objects. Very state and database oriented. Looking at one package of 47 tests that are purely algorithmic, it takes 700-900 ms on the second run. Even in the unit tests I test very 'outside in' and don't mock anything unless I have to.
d
First run of the 2,217 specs is 900,000 ms, which is a killer for our CI/CD pipeline - we have had loooong discussions with Adobe about the initial CF compiler speeds, especially with TestBox. The 2nd run is more like it though (one day I hope they sort out whatever it is so the first run is as quick).
c
Not an app, but https://github.com/cfsimplicity/spreadsheet-cfml is approaching 500 specs, which generally take around 10 seconds on a second run.
d
Per spec: anywhere between "0" and 7 ms on a well-specced laptop running Linux.
a
Is that Lucee @domwatson? Guessing it is 🙂
d
yes
a
@domwatson yeah, an individual test is gonna take about as long as the code it exercises takes to run: I think the per-spec TestBox overhead there is negligible. I largely take that as a given for unit tests, hence my specific stipulation not to include integration/etc tests. I bet when you run one spec the total execution time is not just 0-7 ms... My suspicion is that the overhead of doing a full test run for a large wodge of tests (to use the correct collective noun ;-)) is not insignificant. We have around 500 cases (don't start... it's a lot better than it was... ;-)), and the overhead seems to be 10-20 sec.
The reason I'm asking is that our tests... leave a lot of room for improvement when it comes to execution time, but I fear most of that is because the tests themselves are slow (I mean: I know they are), rather than anything TestBox is doing. Just having a look at other ppl's experiences overall before I decide how much attn to give it.
d
Would be interesting to see a FusionReactor evaluation of the single request that runs the tests. GC could be a significant slowing factor in test runs, especially as more and more tests are run (rather than any overhead from TestBox itself).
a
If I run the tests in our test dir with every single one `xit`-ed out, it has about a 10-20 sec overhead. Just TestBox doing TestBox stuff.
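So effectively a run like this, where every spec is still discovered and registered but nothing inside them actually executes (names invented for illustration):

```cfml
// Rough sketch of the "everything skipped" run used to gauge the
// framework-only overhead. models.Order is a made-up example component.
component extends="testbox.system.BaseSpec" {

	function run() {
		describe( "Order totals", function() {

			// xit() registers the spec but skips it, so a full-suite run
			// containing only xit()s measures TestBox's own bookkeeping
			// (discovery, lifecycle, reporting) rather than any test code.
			xit( "sums line items", function() {
				expect( new models.Order().total( [ 10, 20 ] ) ).toBe( 30 );
			} );

			xit( "handles an empty order", function() {
				expect( new models.Order().total( [] ) ).toBe( 0 );
			} );

		} );
	}

}
```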
r
How many levels of nesting do your BDD tests have? We have seen performance issues when we add too many levels of nesting. Flattening out the structure resulted in very significant performance gains. Do you use createStub() for external models/objects or createMock()/prepareMock()? Are you using the ANTJunit reporter? We use Jenkins to run our TestBox builds and have seen memory issues with TestBox building the XML to return to Jenkins.
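Roughly what I mean by flattening and stubbing — names invented, just a sketch:

```cfml
// Sketch of a flat structure with createStub() for collaborators.
// models.ShippingCalculator and getRate() are illustrative only.
component extends="testbox.system.BaseSpec" {

	function run() {
		// One describe() per thing under test, specs directly inside it,
		// rather than describe() blocks nested several levels deep.
		describe( "ShippingCalculator", function() {

			it( "charges weight multiplied by the stubbed rate", function() {
				// createStub() builds a throwaway object with no real CFC
				// behind it; $() fakes just the one method the test needs.
				var rateService = createStub().$( "getRate", 0.10 );
				var calc        = new models.ShippingCalculator( rateService );

				expect( calc.costFor( weight = 20 ) ).toBe( 2 );
			} );

		} );
	}

}
```

The point being that collaborators come in via createStub() rather than spinning up the real CFC graph, and the describe() tree stays one level deep.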
b
Last run in our CI pipeline was 741 tests taking 60s on a custom Docker image. It usually doesn't take that long, but I don't have specific numbers at the moment. No external dependencies at all in these tests.
If I run the same tests from CommandBox on my PC, the first run is about 17s (due to server startup) and subsequent runs take around 1.5s.