imiric 10 hours ago

There are some good tips here.

One comment about this:

> But what if you can’t add a test for something because the function is untestable? You could refactor it to be testable, but that’s not safe, because you don’t have a test. It’s Catch-22.

> Can we do better? Here’s one emergency rescue procedure for untestable functions that I’ve used successfully. Ignore the existing function and write a test for the function you’d like to have. Then make that test pass by writing a new function from scratch. Finally, delete the old function.

But how do you ensure that the new function replicates the behavior of the old one? What's suggested is essentially a rewrite, and that's as risky as introducing any other change to the untested code.

One alternative could be to test at a higher level by adding an integration or E2E test. This way you encode how the system should behave functionally, which at least gives you some safety net to do lower-level changes. Then you could refactor the function to make it testable, or use the rewrite approach to remove it altogether.
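To illustrate that safety-net idea, here's a minimal sketch of a behavior-pinning test at the use-case level. All names here (OrderService, place_order) are invented for illustration, not taken from the article:

```python
# Hypothetical sketch: before refactoring or rewriting an untestable
# function, pin down the system's observable behavior at a higher level.
# All names (OrderService, place_order) are made up for illustration.

class OrderService:
    """Stand-in for a system whose internals are hard to unit-test."""

    def place_order(self, items):
        # Imagine the untestable function is buried somewhere in here.
        total = sum(price for _name, price in items)
        return {"status": "accepted", "total": total}


def test_place_order_behavior():
    """Encodes what the system should do, not how it does it.

    While this passes, the internals can be refactored or rewritten
    with some confidence that user-visible behavior is preserved.
    """
    service = OrderService()
    result = service.place_order([("book", 12.0), ("pen", 3.0)])
    assert result["status"] == "accepted"
    assert result["total"] == 15.0


test_place_order_behavior()
```

The point is that the test only touches the public entry point, so it survives any internal restructuring.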

  • quonn 10 hours ago

    Rewriting is probably even riskier. In my experience, the kind of change needed to make a function testable rarely introduces bugs. Rewriting the function will likely introduce bugs, especially if bugfixes have been applied to the original function over time.

    • imiric 9 hours ago

      I agree that rewriting is riskier, but refactoring the function to make it testable can indeed introduce bugs. The thing is that you'd never be sure of it, since you don't have tests to prove how it should've worked initially. I mean, if the function is trivial, it's easier to change it without affecting its behavior, but sometimes this is not the case. So having _some_ tests, even if they're not unit tests, makes this a bit safer.

joeyagreco 2 hours ago

> See if a small block of lines around where you’re working can be extracted into its own little function [...] add a little test for the little function, and make your change, leaving the rest of the code alone.

To me this is the best compromise.

You can:

- change code without worrying about breakage

- ensure your new code is tested

- not have to worry about the large LOE (level of effort) that comes with trying to write tests for the entire codebase
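As a concrete sketch of that "extract a little function" move, with the legacy function and all names invented for illustration:

```python
# Hypothetical sketch of extracting a small block from a big untested
# function, then testing only the extracted piece. Names are invented.

def normalize_discount(raw):
    """The small block extracted from the legacy function.

    Clamps a discount percentage to the 0-100 range; anything
    unparseable falls back to no discount.
    """
    try:
        value = float(raw)
    except (TypeError, ValueError):
        return 0.0
    return min(max(value, 0.0), 100.0)


def apply_discount(price, raw_discount):
    """The (still untested) legacy function now delegates to the
    extracted helper; the rest of its logic is left alone."""
    discount = normalize_discount(raw_discount)
    return price * (1 - discount / 100.0)


# A little test for the little function:
assert normalize_discount("150") == 100.0
assert normalize_discount(-5) == 0.0
assert normalize_discount("abc") == 0.0
```

Only the extracted helper gets a test; the surrounding legacy code is untouched, which keeps the blast radius of the change small.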

rightbyte 10 hours ago

"Eventually, though, the cost of adding anything to an untested codebase becomes unmanageable. Indeed, the cost of touching the code at all becomes so high that everyone starts finding excuses not to do it."

Dunno about this. In my experience, tests make it harder to add code, since usually way too many tests test implementation details.

Pair this with some process-heavy, written-in-stone agile methodology and an arcane CI environment (which shows test failures on a TV in the break room). And you can't easily change the tests, since that code is not your team's.

Like, if the author can smell it, so can I.

The problems he writes technical solutions for are social, not technical.

Even using the concept of 'code smell' makes my alarm go off. We would have been better off without all these misused rules of thumb and aphorisms.

  • wesselbindt 9 hours ago

    > In my experience, tests make it harder to add code, since usually way too many tests test implementation details.

    This is actually one of the reasons why I adopt a test-first methodology. It's very hard to couple your test to implementation details if no such details exist yet. Combined with writing automated tests on a use case level rather than a lower level, I end up with pretty robust test suites. So instead of

      addToBasket(basketId=1234, productId=9)
      basket = fakeDb.getBasket(1234)
      assert 9 in basket
    
      fakeDb.insertBasket(<<basket with product id 9>>)
      basketView = viewBasket(1234)
      assert 9 in basketView
    
    I'm more likely to have a single test mimicking actual user behavior, like

      addToBasket(basketId=1234, productId=9)
      basketView = viewBasket(1234)
      assert 9 in basketView
    
    This tends to be more robust because use-cases for existing functionality tend not to change too much. And when they do, it makes sense for your test suite to change accordingly. Writing tests on an even lower scope makes the suite even more brittle. If the addToBasket endpoint is composed of three subroutines, and you write tests for each subroutine individually, then as soon as you refactor to some other number of subroutines, your test suite breaks even though the endpoint still works. One reason why writing your tests first is effective in producing tests decoupled from implementation details is that if you write the test first, it's much more natural to write a higher scope test.
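    To make that refactoring point concrete, here's a runnable version of the use-case-level test, with an invented in-memory store standing in for the database:

```python
# Hypothetical sketch of the use-case-level test style described above.
# All names (add_to_basket, view_basket, _baskets) are invented.

_baskets = {}  # in-memory stand-in for a database


def add_to_basket(basket_id, product_id):
    _baskets.setdefault(basket_id, []).append(product_id)


def view_basket(basket_id):
    return list(_baskets.get(basket_id, []))


# The test exercises the use case end to end; it never peeks at
# _baskets directly, so the internal subroutines and storage layer
# can be split, merged, or rewritten without breaking it.
add_to_basket(1234, 9)
assert 9 in view_basket(1234)
```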

    > The problems he write technical solutions to are social not technical.

    Pretty much all of the software we use today is a technical solution to a social problem. Why is this problematic to you?

  • hennell 9 hours ago

    What's the social solution to his problem then? Your issues can be solved by the social solutions of teaching against implementation-detail tests and giving ownership of tests to the appropriate team.

    • rightbyte 6 hours ago

      Decrease transparency towards non-programmers. (I am not joking)

      I blame many of the problems with modern dev practices on unilateral transparency that enables micromanagement, and on non-programmers telling programmers what to do in too great detail.

  • imiric 9 hours ago

    > Dunno about this. In my experience, tests make it harder to add code, since usually way too many tests test implementation details.

    I'm puzzled by opinions like this. So you see tests as a nuisance that hinders your productivity, rather than as a safety net that ensures the system behaves the way it's supposed to, giving you the freedom to safely make changes and peace of mind that they will make the product better for your users?

    > Pair this with some process-heavy, written-in-stone agile methodology and an arcane CI environment (which shows test failures on a TV in the break room). And you can't easily change the tests, since that code is not your team's.

    This sounds like you've had a bad experience in poorly managed teams, which has nothing to do with testing.

    > Even using the concept of 'code smell' makes my alarm go off. We would have been better off without all these misused rules of thumb and aphorisms.

    Eh, as with anything, these things can be abused. But when you've worked in any industry for some time, you accumulate rules of thumb and wisdom from experience, and experienced developers can indeed point out "code smells" that are worth discussing at the very least. Best practices exist for a reason, and while there are always reasons not to follow them, deciding to do so requires as much experience as it took to discover and understand them in the first place.

    Blindly following any methodology is not good, but the benefits of testing are pretty clear. There are details and tradeoffs worth considering, but I just don't understand the argument that tests make your job more difficult. Testing is a part of your job as much as writing production code is.

    • rightbyte 6 hours ago

      > I'm puzzled by opinions like this

      I'm for testing. In practice I think there are many drawbacks though.

      Specifically, enshrining implementation details in tests, which makes changing code way harder.

      In the commercial projects where I've worked, integration testing (by non-programmers) is the kind that I think added value.

      I myself spam unit tests when I code the way I want to. That's not the issue.

      > experienced developers can indeed point out "code smells" that are worth discussing at the very least

      Yeah, I agree. I am not really sure what I am getting at. I think I need some sort of 'dogmatic wisdom is unwise' aphorism.

      Many newbies seem to take aphorisms way too literally. But maybe we would be even worse off without them?