You can have a reasonable expectation that secure military pagers will only be used by soldiers. Given how few collateral deaths there were, this was a reasonable assumption.
Sure, if you use unique passwords, then rotating passwords isn't as useful. But we shouldn't judge a security policy based on the existence (or not) of other policies.
What you are judging then is a whole set of policies, which is a bit too controlling: you will most often not have absolute control over a user's policy set. All you can do is suggest policies, which may or may not be adopted; you can't rely on their strict adoption.
A similar case is the empirical efficacy of birth control. In practice, abstinence-based methods are less effective than condoms. Theoretically, abstinence-based birth control would be better, but who cares what the rates are in theory? The actual success rates are what matter.
A teacher in an environment with 100+ students, where a lot of assignments are graded by a random grad student, is useless anyway and might as well not exist. If AI could move us away from this cargo cult, that's great.
I beg to differ. Tactical use of a scientific or graphing calculator can absolutely replace large parts of the thinking process. If you're testing for the ability to solve differential equations, a powerful enough calculator can trivialize it, so they aren't allowed in calculus exams. A 10-digit calculator cannot trivialize calculus, so they are allowed. That's the distinction. LLMs operate at the maximum level of "helpfulness" and there's no good way to dial them back.
> Maybe it would be possible to design labs with LLMs in such a way that you teach them how to evaluate the LLM's answer? This would require them to have knowledge of the underlying topic. That's probably possible with specialized tools or LLM prompts, but it's not going to help against them using a generic LLM like ChatGPT, or a cheating tool that feeds into a generic model.
What you are describing is that they should only use the LLM after they already know the topic. A dilemma.
Yeah, I kinda like the method siscia suggests downthread [0], where the teacher grades based on the questions the student asks the LLMs during the test.
I think you should be able to use the LLM at home to help you better understand the topic (they have endless patience, and you can usually keep asking until you actually grok it), but during the test I think it's fair to expect that basic understanding to be there.
> Personally my biggest complaint with Rust is that I wish it was more readable. I've seen function signatures that seemed straight out of C++.
There is always a trade-off. You really cannot give the compiler enough information without signatures like the current ones. Past a certain point, you cannot compress the information further without losing features.
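As a sketch of that trade-off (a hypothetical function, not from any of the posts above): every piece of a dense-looking generic signature is load-bearing, and removing any of it loses information the compiler genuinely needs.

```rust
// The lifetime 'a ties the returned reference to the input slice, and the
// trait bounds spell out exactly which operations are allowed on T and K.
// Dropping any of these makes the function either uncompilable or less general.
fn longest_by_key<'a, T, K, F>(items: &'a [T], key: F) -> Option<&'a T>
where
    K: PartialOrd,
    F: Fn(&T) -> K,
{
    items.iter().fold(None, |best, item| match best {
        Some(b) if key(b) >= key(item) => Some(b),
        _ => Some(item),
    })
}

fn main() {
    let words = ["hi", "hello", "hey"];
    let longest = longest_by_key(&words, |w| w.len());
    println!("{:?}", longest); // Some("hello")
}
```

The equivalent C++ template would hide the same constraints inside the body and surface them only as instantiation errors; Rust's verbosity is that information moved into the signature.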