Not the same kind of experiment. An experiment in the scientific sense tweaks the process that generates the data, not the interpretation of the data. There is an inspiration / hypothesis creation step between old data and new experiment.
Main differences: A hypothesis is sorta kinda like your model's coefficients, but more generally applicable. And you have no feedback loop between model coefficients and input data.
So yeah, you are doing very sophisticated curve fitting. It's useful, alright; it's just not very much like science.
What Chomsky is saying is that the control variables don't exist until you create them because the most telling things don't happen until you have a specific hypothesis and make them happen to test the hypothesis.
I disagree. What he is saying is that there is a special rule in language that he doesn't think you would arrive at without an enormous amount of data. So a passive learning algorithm wouldn't uncover this structure in a reasonable amount of time or data (I guess it's poor sample efficiency he is worried about). A learning algorithm that maintains a distribution over its own internal model of language would be able to ask the questions that most reduce the model's uncertainty.
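To make that last point concrete, here is a toy sketch of what "asking questions that minimize the model's uncertainty" looks like. Everything here is illustrative (the hypothesis family, the oracle, the names); the point is only the active-learning loop: keep a belief over hypotheses, query the input whose answer you are most unsure about, and discard hypotheses the answer falsifies. This gets you far better sample efficiency than passively consuming data.

```python
import math

# Candidate hypotheses: "x is grammatical iff x % k == r".
# For each k there is exactly one matching r, so beliefs start maximally spread.
hypotheses = [(k, r) for k in range(1, 6) for r in range(k)]
weights = {h: 1.0 for h in hypotheses}  # uniform prior over hypotheses

def predict(h, x):
    k, r = h
    return x % k == r

def answer_entropy(x):
    """Entropy of the predicted answer to query x under current beliefs."""
    total = sum(weights.values())
    p_yes = sum(w for h, w in weights.items() if predict(h, x)) / total
    if p_yes in (0.0, 1.0):
        return 0.0
    return -(p_yes * math.log2(p_yes) + (1 - p_yes) * math.log2(1 - p_yes))

def oracle(x):
    """Ground truth the learner is trying to discover: k=3, r=1."""
    return x % 3 == 1

queries = list(range(30))
for _ in range(6):
    # Active step: ask the question the current beliefs are most uncertain about...
    x = max(queries, key=answer_entropy)
    y = oracle(x)
    # ...and zero out every hypothesis the answer falsifies.
    for h in weights:
        if predict(h, x) != y:
            weights[h] = 0.0

survivors = [h for h, w in weights.items() if w > 0]
print(survivors)  # → [(3, 1)]
```

Four adaptively chosen queries suffice to isolate the true rule among 15 candidates here, whereas a passive learner sampling inputs at random would typically need many more examples to rule out every competitor.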