
Just an anecdotal experience; it doesn't necessarily go without saying.

The most memorable discussion I had around PBT was with a colleague (a skip report) who saw "true" randomness as a net benefit and felt that reproducibility was not a critical characteristic of the test suite (I guess the reasoning was that it could then catch things at a later date?). To be honest, it scared the hell out of me, and I pushed back pretty hard on them and the broader team.

I have no issue with a pseudo-random set of test cases that are declaratively generated. If that is what is meant by PBT, it makes sense, since it is just a more efficient way of testing (and you would assume it lets you cast a wider net).
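To illustrate what I mean by "declaratively generated", here's a minimal sketch using Hypothesis in Python (sort_numbers is a hypothetical function under test): you describe the input space once, and the library draws pseudo-random cases from it on every run.

    from hypothesis import given, strategies as st

    # Hypothetical function under test.
    def sort_numbers(xs):
        return sorted(xs)

    # Declaratively describe the input space; Hypothesis draws
    # pseudo-random cases from it each time the test runs.
    @given(st.lists(st.integers()))
    def test_sort_is_idempotent(xs):
        assert sort_numbers(sort_numbers(xs)) == sort_numbers(xs)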



What's the issue you have?

The idea is that you run randomized testing, and any failures it finds are added as explicit tests that then always get run.

Is that so different from someone else testing?

The main downside is that you can stumble across a new issue on an unrelated branch, but that's not wildly different from hitting one while using your application.
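For concreteness, here's roughly what that workflow looks like in Hypothesis (encode/decode are hypothetical stand-ins for whatever is under test): once random generation finds a failing input, you pin it with @example so it runs on every build.

    from hypothesis import given, example, strategies as st

    # Hypothetical round-trip pair standing in for the code under test.
    def encode(s):
        return s.encode("utf-8")

    def decode(b):
        return b.decode("utf-8")

    @given(st.text())
    @example("\x00")  # an input random testing once failed on, pinned so it always runs
    def test_roundtrip(s):
        assert decode(encode(s)) == s

Hypothesis also replays past failures automatically via its local example database, but pinning with @example keeps the case visible in the source.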


This dilemma is of course trivially solvable by persisting a (presumably randomly generated) RNG seed with each test run. You just have to ensure that your RNG is configured once with the seed at the beginning of each test run.
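A minimal sketch of that, assuming a TEST_SEED environment variable as the persistence/replay mechanism (the variable name is my invention):

    import os
    import random

    # Use the supplied seed when replaying a failure; otherwise generate one.
    seed = int(os.environ.get("TEST_SEED", random.randrange(2**32)))
    print(f"TEST_SEED={seed}")  # record the seed so this exact run can be reproduced
    random.seed(seed)  # configure the RNG once, at the start of the test run

Rerunning with TEST_SEED set to the printed value replays the exact same pseudo-random cases.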



