Among the fatal flaws I see, some are ethical / philosophical regardless of how the thing actually performs. I care a lot about this. It's actually my main motivation for not even trying. I don't want to use a tool that has "blood" on it, and I don't need experience using the tool to assess this (I don't need to kill someone to assess that it's bad to kill someone).
On the technical side, I do believe LLMs are fundamentally limited by their design and are going to plateau, but this we shall see. I can imagine they can already be useful in certain cases despite their limitations. I'm willing to accept that my lack of experience makes my opinion less relevant here.
> My suggestion is to be an objective scientist
Sure, but I also want to be a reasonable Earth citizen.
> -- use the best model released (regardless of origins) with minor research into 'best practices' to see what is possible
Yeah… but no, I won't. I don't think it would have much practical impact. I don't feel like I need this anecdotal experience; I wouldn't use it either way. Reading studies will be far more relevant anyway.
> and then ask yourself if the 'origins' issue were addressed by a responsible open-source player in 6-12 months, whether it would change anything about your views on the likely impact of this technology
I doubt it, but I'm open to changing my mind on this.
> and your willingness to adopt it.
Yeah, if the thing is actually responsible (I very much doubt that's possible), then indeed, I won't limit myself. I'd try it and might use it for some stuff. Note: I'll still avoid any dependency on any cloud for programming - this is not debatable - and in 6-12 months, I won't have the hardware to run a model like this locally unless something incredible happens (including not having to depend on proprietary nvidia drivers).
What's more, an objective scientist doesn't rely on anecdotal data points like their own personal experience; they run well-designed studies. I will not conduct such studies. I'll read them.
> I think that it also seems like we disagree on the foundations/premise of the technology.
Yeah, we have widely different perspectives on this stuff. It's been an enriching discussion. I believe we've just about said all that could be said.