vinipolicena's comments | Hacker News

> Software engineering has got - in my opinion, often needlessly - complicated, with people rushing to very labour intensive patterns such as TDD, microservices, super complex React frontends and Kubernetes.

TDD as defined by Kent Beck (https://tidyfirst.substack.com/p/canon-tdd) doesn't belong in that list. Beck's TDD is a way to order work you'd do anyway: slice the requirement, automate checks to confirm behavior and catch regressions, and refactor to keep the code healthy. It's not a bloated workflow, and it generalizes well to practices like property-based testing and design-by-contract.
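
To make the loop concrete, here's a minimal sketch of one cycle in Python with pytest (slugify and its requirement are made up for illustration, not anything Beck prescribes):

    # Red: slice off one requirement and write the check first - it fails,
    # because slugify doesn't exist yet.
    def test_slugify_joins_words_with_hyphens():
        assert slugify("Hello World") == "hello-world"

    # Green: write the simplest code that makes the check pass.
    def slugify(text):
        return text.strip().lower().replace(" ", "-")

    # Refactor: clean up with the test as a safety net, then take the next slice.

The point is the ordering, not the tooling: each slice gets a check before it gets an implementation.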


Parts of the talk remind me of https://www.amundsens-maxim.com/


ha, I wish I'd seen that while working on the talk! adding it to the resources!


Reminds me of this passage from Nassim Taleb's Skin in the Game:

> The great Karl Popper often started a discussion with an unerring representation of his opponent’s positions, often exhaustive, as if he were marketing them as his own ideas, before proceeding to systematically dismantle them. Also, take Hayek’s diatribes Contra Keynes and Cambridge: it was a “contra,” but not a single line misrepresents Keynes or makes an overt attempt at sensationalizing.


Post in which simonw explains the "snapshot testing" approach: https://til.simonwillison.net/pytest/syrupy
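
For concreteness, the core of the approach looks roughly like this with syrupy (generate_sql and its module are hypothetical stand-ins for the code under test):

    # test_queries.py - needs pytest and syrupy (pip install syrupy)
    from myapp.queries import generate_sql  # hypothetical code under test

    def test_users_by_country(snapshot):
        # syrupy's snapshot fixture records the value when you run
        # pytest --snapshot-update, and diffs against the stored copy
        # on every later run.
        assert generate_sql(table="users", where={"country": "BR"}) == snapshot

You record the snapshot once with pytest --snapshot-update, review and commit it, and from then on any change to the generated output fails the test with a diff.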


I've found this approach super useful for anything that has a potentially complex implementation with relatively simple output (e.g. compilers, query generators, format converters). You can comprehensively cover your code with simple snapshot tests and little else.

It's critically important to review the snapshots very carefully before committing them for the first time, though - it's far too easy to run the test, look at the output, think "yup, that looks like some sql/assembly/whatever that does the thing I'm trying to do", and carry on, only to realize days/weeks/months later that there are a bunch of bugs you never caught because your supposed "correct" output was never actually correct.


100% agree - the risk of snapshot tests is that you can get lazy, at which point you're losing out on the benefit of using tests to help protect against bugs!


This is exactly why, most of the time, I strongly dislike snapshot tests in big repositories with lots of collaborators. The snapshot doesn’t encode any logic; it just asks “are you sure you wanted to change that?”, whereas a good unit test tells you exactly what’s broken.

It’s just too easy to update the snapshot, and when you glance at changes to a large snapshot, it’s impossible to tell what the test is actually trying to check.
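
To illustrate the contrast, a targeted unit test encodes the actual rule, so a failure names the behavior that broke (build_query and its contract are hypothetical):

    from myapp.queries import build_query  # hypothetical code under test

    def test_where_values_are_bound_as_parameters():
        sql, params = build_query(table="users", where={"country": "BR"})
        # The assertions state the rule itself: filter values must be
        # passed as bind parameters, never interpolated into the SQL.
        assert "country = ?" in sql
        assert params == ["BR"]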


This reminds me of expect tests in OCaml[0]. You create a test function that prints some state, and the test framework automatically handles diffing and injecting the snapshot back into the test location. It helps keep your code modular because you need to create some visual representation of it. And it's usually obvious from the diff when something's wrong.

[0] https://github.com/janestreet/ppx_expect
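
The closest built-in analogue in Python is doctest, where the expected output lives inline next to the code - though unlike ppx_expect it won't promote new output back into the source for you, so you update the expectation by hand after reading the diff:

    def normalize(name):
        """Collapse whitespace and lowercase a display name.

        >>> normalize("  Ada   Lovelace ")
        'ada lovelace'
        """
        return " ".join(name.split()).lower()

    if __name__ == "__main__":
        import doctest
        doctest.testmod()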


Can you elaborate on what you mean by 'becoming one with the computer'? My understanding is that you mean we should put some assumptions aside and try to look at things from the CPU's point of view, which can be helpful for debugging, testing, design, etc. Am I right? edit: spelling mistake


"Design, Composition and Performance" by Rich Hickey. His view of what improvising means is brilliant.

https://www.infoq.com/presentations/Design-Composition-Perfo...

