Comment by david_draco 6 days ago
Step 10: add the bug as a test to the CI to prevent regressions? Make sure the CI fails before the fix and passes after it.
I don't think this is always worth it. Some tests can be time-consuming or complex to write, they have to be maintained, and we accept that a test suite won't cover all edge cases anyway. A bug that made it to production might mean that particular bug could happen again, but it could also be a one-off silly mistake, no more likely to recur than hundreds of other potential silly mistakes. It depends, and writing tests isn't free.
Writing tests isn't free, but regression tests for bugs that were actually fixed are among the best tests to consider writing, and to write right away, before the bug is fixed. You'll be reproducing the bug anyway, so you've already worked out how to trigger it, and you'll have the most information about it, with a fresh mental model of the bug, to make sure the test is well written.
Writing tests isn't free, I agree, but in this case a good chunk of the cost of writing them will have already been paid in a way.
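To make that concrete, here is roughly what "the cost is already paid" looks like in practice. This is only an illustrative pytest sketch; `parse_price`, the `pricing` module, and the bug itself are hypothetical names, not anything from the thread:

```python
# test_regressions.py -- a hypothetical regression test, written while
# reproducing the bug and committed alongside the fix.
from decimal import Decimal

from pricing import parse_price  # hypothetical module under test


def test_parse_price_accepts_trailing_whitespace():
    # Reported bug: "$10.00 " (trailing space) crashed the importer.
    # This test fails on the pre-fix code and passes once the fix lands.
    assert parse_price("$10.00 ") == Decimal("10.00")
```

The reproduction step you were going to do anyway is the bulk of the test; the assert is the only extra work.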
For people who aren't getting the value of unit tests, this is my intro to the idea. You had to do some sort of testing on your code. At its core, the concept of unit testing is just: what if, instead of throwing away that code, you kept it?
To the extent that other concerns get in the way of the concept, like the general difficulty of testing that GUIs do what they are supposed to do, I don't blame the concept of unit testing; I blame the techs that make the testing hard.
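As a tiny illustration of "keep the code you used to check your work" (all names here are made up):

```python
# The ad-hoc check most people run once in a REPL and then discard:
#   >>> slugify("Hello, World!")
#   'hello-world'
#
# The same check, kept as a unit test (slugify / the slugs module are hypothetical):
from slugs import slugify


def test_slugify_lowercases_and_strips_punctuation():
    assert slugify("Hello, World!") == "hello-world"
```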
I also think that this is a great way to emphasize their value.
If anything, when tests are hard to write or people push back against writing them, those are the only ones I'd keep (and I myself don't like some tests, e.g. when the goal is just to push up the coverage metric without actually testing much, which only adds test code to maintain with no real testing value...).
Like any topic, there's no universal truth and there are lots of ways to waste time and effort, but this specifically is extremely practical and useful in a very explicit way: fix it once and catch it the next time before production. It massively reduces the chance that the same thing has to be fixed twice or more.
> Writing tests isn't free, I agree, but in this case a good chunk of the cost of writing them will have already been paid in a way.
Some examples that come to mind are bugs to do with UI interactions, visuals/styling, external online APIs/services, gaming/simulation stuff, and asynchronous/threaded code, which can take substantial effort to write tests for, versus a fix that might just be a typo. That's really different from testing pure functions that only need a few inputs.
It depends on what domain you're working in, but I find people very rarely mention how much work certain kinds of tests can be to write, especially if there aren't similar tests written already and you have to do a ton of setup like mocking, factories, and UI automation.
I definitely agree that there are tests which are complicated to write and will take effort.
But all things considered, I think my impression still holds, and maybe I should rather say they're easier to write, though not necessarily easy.
Is there really no situation where you wouldn't write a test? Have you not had situations where writing the test would take a lot of effort relative to the risk/impact of the bug it's checking for? Your test suite isn't going to be exhaustive anyway, so there's always a balance in weighing up what's worth testing. Going overkill with tests can also get in the way of refactoring, when a change to the UI or API isn't made because it would require updating too many tests.
I have, and in 90% of cases that's because my code is seriously messed up, so that (1) it is not easy to write a unit test, and (2) not enough effort has gone into testing, so mocks aren't available when I need them.
Adding tests is not easy, and you can always find excuses not to do it and instead "promise" to do it later, which almost never happens. I have seen enough to know this. Which is why I myself have put more work into writing unit tests, refactoring code, and creating test mocks than anyone else on my team.
And I can't tell you how much I appreciate it when I find that this has benefited me personally: when I need to write a new test, I often find that I can reuse 80% if not more of the test setup and focus on the test body.
After adding the first test, it becomes much easier to add the second and third test. If you don't add the first test or put the effort into making your code actually testable, it's never going to be easy to test anything.
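A sketch of what that reuse tends to look like with pytest fixtures; `Checkout`, the `shop` module, and the payment gateway are invented names, and the point is only that the setup written for the first test is what every later test borrows:

```python
# conftest.py -- shared test setup; every name here is hypothetical.
from unittest.mock import MagicMock

import pytest


@pytest.fixture
def payment_gateway():
    # Fake of the external payment service, so tests stay fast and offline.
    gateway = MagicMock()
    gateway.charge.return_value = {"status": "ok", "id": "txn-1"}
    return gateway


@pytest.fixture
def checkout(payment_gateway):
    # System under test, wired to the fake instead of the real service.
    from shop.checkout import Checkout  # hypothetical import
    return Checkout(gateway=payment_gateway)
```

Once that exists, the second and third tests really are mostly body: a new test just declares checkout or payment_gateway as a parameter and asserts on the behaviour it cares about.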
Yes, and more generally, document it.
I've lost count of how many things I've fixed only to see:
1) It recurs because a deeper "cause" of the bug reactivated it.
2) Nobody knew I fixed something so everyone continued to operate workarounds as if the bug was still there.
I realise these are related and arguably already fall under "You didn't fix it". That said, a bit of writing-up and root-cause analysis after getting to "It's fixed!" seems helpful to others.
I'm not particularly passionate about arguing the exact details of "unit" versus "integration" testing, let alone breaking down the granularity beyond that as some do, but I am passionate that they need to be fast, and here's why. By that I mean it is a perfectly viable use of engineering time to make changes whose sole purpose is to make the tests run faster.
A lot of slow tests are slow because nobody has even tried to speed them up. They just wrote something that worked, probably literally years ago, that does something horrible like fully building a Docker container, fully initializing a complicated database, fully installing the system, starting processes for everything, and liberally using "sleep"-based concurrency control, and so on and so forth. That was fine when you were running it 5 times, but it becomes a problem when you're trying to run it hundreds of times, because we really ought to be running it hundreds of thousands or millions of times.
I would love to work on a project where we had so many well-optimized automated tests that, despite their speed, they were still a bottleneck for the build. I'm sure there are a few out there, but I doubt it's many.
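One of the cheapest wins in that list is the sleep-based waiting. A rough sketch of the usual replacement, assuming some hypothetical `worker` object under test:

```python
import time


def wait_until(predicate, timeout=5.0, interval=0.01):
    # Poll for a condition instead of sleeping for the worst case: this
    # returns as soon as the condition holds and only burns the full
    # timeout when something is actually wrong.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False


# Before: worker.start(); time.sleep(5); assert worker.done
# After:  worker.start(); assert wait_until(lambda: worker.done)
```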
I would say yes, your CI should accumulate all of those regression tests. Where I work we now have many, many thousands of regression test cases. There's a subset to be run prior to merge which runs in reasonable time, but the full CI just cycles through.
For this to work all the regression tests must be fast, and 100% reliable. It's worth it though. If the mistake was made once, unless there's a regression test to catch it, it'll be made again at some point.
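One common way to get that split, sketched with pytest markers (the marker name and the test are made up; any runner's tagging mechanism works the same way):

```python
import pytest


# Anything too slow for the pre-merge gate gets tagged explicitly
# (the "slow" marker would also be registered under markers in pytest.ini).
@pytest.mark.slow
def test_full_catalog_reimport_regression():
    ...


# Pre-merge CI subset:   pytest -m "not slow"
# Full cycling run:      pytest
```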
> For this to work all the regression tests must be fast,
It doesn't matter how fast each test is: if you're continually adding tests for every single line of code introduced, eventually the suite will get so slow that you'll want to prune away old tests.
I think for some types of bugs a CI test would be valuable, if the developer believes regressions may occur; for other bugs it would be useless.
The largest purely JavaScript repo I ever worked on (150k LoC) had this rule and it was a life saver, particularly because the project had commits dating back more than five years, and since it was a component/library, it had quite a few strange hacks for IE.