> The machine I'm using to type this comment is also a finite state machine, as due to having non-infinite memory, it also cannot count arbitrarily high.
Right. And if you really want to push that argument further, there are numbers for which your machine can't decide whether they are prime or not. Most of them, in fact: for all but finitely many numbers, your machine can't tell.
For the theory of computability, this is not a useful model. Nor for complexity theory. A statement like "my machine can sort in constant time" is arguably true but not useful either: it only holds up to a certain collection size, which in a sense makes it both nonsensical and useless. Even concepts like the Chomsky hierarchy, with context-free languages as one of its prominent levels, make little sense as long as your computational model is finite-state.
Instead, we model machines as unbounded, so that we can differentiate between classes of computational problems and reason about complexity. The Turing machine is one simple and elegant model; there are others for other purposes.
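To make the point concrete: an unbounded-tape machine can recognize languages that no finite automaton can. Here's a minimal sketch in Python (the simulator, state names, and transition table are my own illustrative choices, not any standard implementation) of a Turing machine accepting { 0^n 1^n : n >= 0 }, a context-free language that is famously not regular:

```python
def run_tm(transitions, accept, tape_input, max_steps=10_000):
    # Tape is a dict, so it is unbounded in both directions; "_" is blank.
    tape = {i: c for i, c in enumerate(tape_input)}
    state, head = "q0", 0
    for _ in range(max_steps):
        if state == accept:
            return True
        key = (state, tape.get(head, "_"))
        if key not in transitions:
            return False                      # no applicable rule: reject
        write, move, state = transitions[key]
        tape[head] = write
        head += 1 if move == "R" else -1
    return False                              # step budget exhausted

# Classic recognizer for { 0^n 1^n }: repeatedly mark the leftmost 0 as X,
# then the leftmost unmarked 1 as Y, until everything is matched.
T = {
    ("q0", "0"): ("X", "R", "q1"),
    ("q0", "Y"): ("Y", "R", "q3"),
    ("q0", "_"): ("_", "R", "acc"),
    ("q1", "0"): ("0", "R", "q1"),
    ("q1", "Y"): ("Y", "R", "q1"),
    ("q1", "1"): ("Y", "L", "q2"),
    ("q2", "0"): ("0", "L", "q2"),
    ("q2", "Y"): ("Y", "L", "q2"),
    ("q2", "X"): ("X", "R", "q0"),
    ("q3", "Y"): ("Y", "R", "q3"),
    ("q3", "_"): ("_", "R", "acc"),
}

print(run_tm(T, "acc", "000111"))  # True
print(run_tm(T, "acc", "0011"))    # True
print(run_tm(T, "acc", "001"))     # False
```

The only thing that makes this more powerful than a DFA is the unbounded, writable tape; cap the tape at a fixed size and you're back to a finite-state machine that fails on sufficiently long inputs.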
I wish basic education in the theory of computation stressed more why the theory treats things this way, to avoid this kind of confusion. (And then people would need to take those classes as well, and not just claim they can write React apps just fine without a degree.)
> And then people would need to take those classes as well, and not just claim they can write React apps just fine without a degree.
Weird tangent. They can write React apps without a degree, can't they? What's wrong with that?
The amount of complexity reasoning you need for something like that is around "this part seems slow, maybe there's a better algorithm somewhere" or "that's a lot of nested loops, maybe I can make do with fewer". Knowing more might help in some pretty rare cases but doesn't seem like a requirement. On the other hand, knowing the complexity of all algorithms and blindly assuming "good complexity = good performance" while never checking is a recipe for disaster.
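The "good complexity ≠ good performance" point is easy to demonstrate. A hedged sketch in Python (the functions and input sizes are illustrative, not a rigorous benchmark): binary search is O(log n) versus linear search's O(n), yet on tiny inputs the asymptotically worse algorithm can win on constant factors, which is exactly why you measure instead of assuming:

```python
import timeit
from bisect import bisect_left

def linear_search(xs, target):
    # O(n): scan every element in order
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

def binary_search(xs, target):
    # O(log n): requires xs to be sorted
    i = bisect_left(xs, target)
    return i if i < len(xs) and xs[i] == target else -1

small = list(range(16))           # tiny sorted input
big = list(range(1_000_000))      # large sorted input

for name, xs in [("small", small), ("big", big)]:
    t_lin = timeit.timeit(lambda: linear_search(xs, xs[-1]), number=1000)
    t_bin = timeit.timeit(lambda: binary_search(xs, xs[-1]), number=1000)
    print(f"{name}: linear={t_lin:.4f}s binary={t_bin:.4f}s")
```

On the large input the asymptotic difference dominates; on the small one the result depends on constant factors, which is the part Big-O deliberately throws away.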
> On the other hand, knowing the complexity of all algorithms and blindly assuming "good complexity = good performance" while never checking is a recipe for disaster.
Exactly. That's the difference between an actual education and having looked up some Big-O notation on Wikipedia.