You seem to be making an implicit claim that LLMs can create an effective cancer drug "10% of the time".
Smells like complete and total bullshit to me.
Edit: @eucyclos: I don't assume that ChatGPT and LLM tools have saved cancer researchers any time at all.
On the contrary, I assume that these tools have only made these critical researchers less productive, and made their internal communications more verbose and less effective.
No, that's not the claim. The claim is that we will create a hypothetical LLM that, when tasked with a problem at the scientific frontier of molecular biology, will correctly reason about the existing literature about 10% of the time, reaching conclusions that experts in the field would consider valid or plausible.
Let's say you run that LLM one million times and get 100,000 valid reasoning chains. Let's say among them are variations on 1,000 fundamentally new approaches and ideas; out of those, you can actually synthesize 200 new candidate compounds in the laboratory; out of those, 10 substances show a strong in-vitro response; and then one of those completely cures some cancerous mice.
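The funnel arithmetic, as a toy sketch (every rate here is the hypothetical number from the paragraph above, not measured data):

    # Back-of-the-envelope drug-discovery funnel.
    # All rates are the hypothetical assumptions above.
    llm_runs = 1_000_000
    valid_rate = 0.10                        # runs with expert-plausible reasoning

    valid_chains = int(llm_runs * valid_rate)    # 100,000 valid chains
    novel_ideas = valid_chains // 100            # ~1,000 fundamentally new approaches
    synthesizable = novel_ideas // 5             # ~200 compounds you can actually make
    in_vitro_hits = synthesizable // 20          # ~10 strong in-vitro responses
    mouse_cures = in_vitro_hits // 10            # ~1 compound that cures cancerous mice

    print(valid_chains, novel_ideas, synthesizable, in_vitro_hits, mouse_cures)
    # 100000 1000 200 10 1

The point is that even a 90% failure rate at the reasoning step still yields a candidate at the end, because the expensive filtering happens downstream in the lab.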
There you go: you have substantially automated the intellectual work of cancer research, and you have one very promising compound to take into Phase 1 trials that you didn't have before AI, all without any AGI.