Just an anecdotal experience; it doesn't necessarily go without saying.
The most memorable discussion I had around PBT was with a colleague (a skip report) who saw "true" randomness as a net benefit and felt that reproducibility was not a critical characteristic of the test suite (I guess the reasoning was that it could then catch things at a later date?). To be honest, it scared the hell out of me, and I pushed back pretty hard on them and the broader team.
I have no issue with a pseudo-random set of test cases that are declaratively generated. That makes sense, if that is what is meant by PBT, since it is just a more efficient way of testing (and you would assume it lets you cast a wider net).
This dilemma is of course trivially solvable by persisting a (presumably randomly generated) RNG seed with each test run. You just have to ensure that your RNG is seeded exactly once, at the beginning of each run, so that every "random" failure can be replayed deterministically.
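A minimal sketch of that idea, using only the standard library (the `TEST_SEED` variable name and the toy property are made up for illustration): pick a fresh seed unless one was persisted, log it, and drive all generation from a single seeded RNG so any failing run can be replayed.

```python
# Sketch: reproducible randomized testing via a persisted seed.
# TEST_SEED is a hypothetical env var name; any persistence mechanism works.
import os
import random

def seeded_rng() -> random.Random:
    """Return an RNG seeded from TEST_SEED if set (replay), else fresh entropy.
    The seed is printed so a failing run can be reproduced exactly."""
    seed_env = os.environ.get("TEST_SEED")
    seed = int(seed_env) if seed_env else random.SystemRandom().randrange(2**32)
    print(f"test run seed: {seed} (set TEST_SEED={seed} to reproduce)")
    return random.Random(seed)

def prop_sort_is_idempotent(rng: random.Random) -> None:
    """A toy property: sorting an already-sorted list changes nothing."""
    for _ in range(100):
        xs = [rng.randint(-1000, 1000) for _ in range(rng.randint(0, 50))]
        assert sorted(sorted(xs)) == sorted(xs)

rng = seeded_rng()
prop_sort_is_idempotent(rng)
```

The key design point is that all randomness flows through the one `random.Random(seed)` instance; global or per-test reseeding would silently break replayability.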