These models have shown they can develop remarkable abilities through pattern matching on massive text data, so I wouldn’t be too quick to assume hard limits on what they could do.
Having them use specialized tools would probably be more effective (e.g. have the reasoning LLM use the DNA LLM), but in the long term with scale… who knows? The bitter lesson keeps biting us every time we think we know better.
> amount of data you'd need to learn in order to give decent law advice on the spot?
amount of data you'd need to learn to generate and cite fake court cases, and to give advice that may or may not be correct with equal apparent confidence in both cases
> As for the confidence of the advice, how different are the rates of mistakes between human lawyers and the latest GPT?
Notice I am not talking about "rates of mistakes" (i.e. accuracy). I am talking about how confident they sound depending on whether they actually know something.
It's a fair point that unfortunately many humans sound just as confident regardless of their knowledge, but "good" experts (lawyers or otherwise) are capable of saying "I don't know (let me check)", a feature LLMs still struggle with.
> Should we not mimic our biology as closely as possible rather than trying to model how we __think__ it works (i.e. chain of thought, etc.)?
Should we not mimic migrating birds’ biology as closely as possible instead of trying to engineer airplanes for transatlantic flight that are only very loosely inspired by the animals that actually fly?
Exactly this! If we wanted to make something bird-like in capability, we aren't even close! Planes do things birds can't do, but birds also do things planes can't do! Likewise, ML is great at things humans aren't very good at, but still terrible at things humans do well (e.g. the brain's energy efficiency).
> My guess would be that a lottery system is actually better for most people currently in the H1-B process because of my personal experience
Regardless of your personal experience, if H1-B visas are currently allocated randomly to fewer than 50% of the applicants, then this is mathematically true: under any deterministic ranking, the majority of applicants who fall below the cutoff would have zero chance, whereas a lottery gives every applicant the same positive chance.
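To make the arithmetic concrete, here is a minimal sketch with made-up numbers (200k applicants and 85k visas are assumptions for illustration, not figures from the thread):

```python
# Illustrative sketch (hypothetical numbers): compare each applicant's chance of
# getting a visa under a pure lottery vs. a deterministic ranking (e.g. by salary).
applicants = 200_000
visas = 85_000

# Lottery: every applicant has the same chance.
lottery_chance = visas / applicants  # 0.425

# Deterministic ranking: the top `visas` applicants are certain to get one,
# everyone below the cutoff has zero chance.
ranked_chances = [1.0] * visas + [0.0] * (applicants - visas)

# Count how many applicants have a better chance under the lottery.
better_off = sum(1 for p in ranked_chances if p < lottery_chance)
print(f"Lottery chance per applicant: {lottery_chance:.1%}")
print(f"Better off under the lottery: {better_off} of {applicants} "
      f"({better_off / applicants:.1%})")
```

With these numbers, the 115,000 applicants below the ranking cutoff (57.5% of the pool) go from a 0% chance to a 42.5% chance, so as long as fewer than half the applicants get visas, the lottery is better for the majority.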
We actually start what we call "diversification" (=eating solid food) even earlier in France: we were advised by our paediatrician to do it when the baby was 4 months old.
Apparently, if you start early and have him try a lot of different foods (especially potentially allergenic ones like nuts), it leads to fewer allergy problems later on.
It's worked very well for us: the kid loves it, and feeding him is quite fun (although certainly messy!)
It's interesting to see how different it is depending on the country though!
> In my opinion, this was a clear case of a programmer resisting the system approach because he wanted to spend time fixing the same problems over and over and pretend to be working hard
Which is funny when the first bullet point under "skills" in the author's resume [0] is:
The “prove you’re special” motivation is definitely a strong third reason that does not align with the nepotism baby or monk archetypes.