Why is giving precise instructions bad? I would expect LLMs to be pretty good at following instructions after three years of training them that way. Plus, if the instructions are precise enough and therefore each step is simple enough, I would expect everything it needs to do to be 'in-distribution'.
You could include those as part of the tooling. I have been experimenting with including mise as part of the image and then layering the extra tools on top of it. Put all of those steps into the build so it is automatic; roughly like the sketch below.
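For concreteness, here's a minimal sketch of what I mean; the base image, tool versions, and paths are placeholders, not a recommendation:

```dockerfile
# Illustrative only: base image and tool versions are placeholders.
FROM debian:bookworm-slim

RUN apt-get update && apt-get install -y curl ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# Install mise via its official install script.
RUN curl https://mise.run | sh
ENV PATH="/root/.local/bin:/root/.local/share/mise/shims:$PATH"

# Layer the extra tools on top; pick whatever your project needs.
RUN mise use -g node@22 python@3.12
```

Since it all happens in the build, every container starts with the tools already in place.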
I think the other poster is confused. Both Gemma 3 and Gemma 3n are open-weight models.
Google's proprietary model line is called Gemini. There is a variant that can be run offline, called Gemini Nano, but I don't think it can be freely distributed; it is only available as part of Android.
As for what's new, Gemma 3n seems to include optimizations that make it better than the 'small' Gemma 3 models (such as the 4B) at a similar speed or memory footprint.
What do you mean by Anthropic shifting their ordering? It still seems to be consistently Opus > Sonnet > Haiku. They didn't release a 4 Haiku, but they also didn't release a 3.5 Opus, and pricing-wise Sonnet 4 lines up with earlier Sonnets.
As for this Gemma release, I don't think Gemma 4 would be an appropriate name. 3n is limited to very small versions (like 8B total parameters) and is therefore likely less powerful than Gemma 3.
My impression is that this is more of a "Gemma 3 Lite" that provides a better speed/quality tradeoff than the smaller Gemma 3 models.
I believe PyTorch already uses Triton; I recently tried torch.compile on a Windows machine and it did not work, because the default inductor backend relies on Triton.
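To illustrate, a minimal sketch of what I ran into; the function is made up and the failure mode is paraphrased from memory:

```python
# torch.compile defaults to the inductor backend, which relies on
# Triton for GPU codegen (and wasn't supported on Windows when I tried).
import torch

def f(x):
    return torch.sin(x) ** 2 + torch.cos(x) ** 2

compiled = torch.compile(f)  # backend="inductor" is the default

x = torch.randn(1024)
try:
    print(compiled(x).sum())  # compilation is triggered on first call
except RuntimeError as err:
    # On my Windows machine this raised instead of compiling.
    print(f"torch.compile failed: {err}")
    print(f(x).sum())  # plain eager mode still works fine
```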
Made that up. Say functions can be composed; that's the core algebra: f, g, h... and a compose operator. But often you need more involved logic and types that can't be encoded in a simple domain like Int and types like Int -> Int[0]. You need the DB, logging, transactions, whatever lower-level systems are in use. In OO you use inheritance to integrate all of this through layered method calls; I'd loosely describe that as a protocol. The problem is that OO is loose on types and mutability, so I'd think there's a gap to fill between function algebras and these 'protocols': a way to describe typed "function graphs" that can be intercepted / adjusted without passing tons of functions as parameters (rough sketch after the footnote).
Again, that's just an idle thought; maybe people already do this with category theory in a Haskell library, or with OCaml modules, and I'm just not aware of it.
[0] There are also monadic types for embedding a secondary type, but that still seems too restrictive.
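To make that less abstract, here's a rough Python sketch of the idea (all names are mine, not an established library): pure steps compose as the algebra, while the DB/logging/transaction "protocol" lives in a typed environment that wrappers can intercept, instead of being threaded through as extra function parameters.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Env:
    # Stand-in for the lower-level systems: DB handle, logger, etc.
    log: Callable[[str], None]

# An "effectful step": it sees the environment plus the value.
Step = Callable[[Env, int], int]

def compose(f: Step, g: Step) -> Step:
    """g after f, both sharing the same environment."""
    return lambda env, x: g(env, f(env, x))

def traced(name: str, f: Step) -> Step:
    """Intercept a step (here: logging) without changing its type."""
    def wrapper(env: Env, x: int) -> int:
        env.log(f"{name}({x})")
        return f(env, x)
    return wrapper

double: Step = lambda env, x: x * 2
inc: Step = lambda env, x: x + 1

pipeline = compose(traced("double", double), traced("inc", inc))
print(pipeline(Env(log=print), 20))  # logs double(20), inc(40); prints 41
```

In Haskell terms this is roughly the Reader pattern; the point is that interception (tracing, transactions, etc.) composes at the type level rather than through inheritance.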
Unsloth also works very diligently to find and fix tokenizer issues and many other problems as soon as they can. I have comparatively little trust in ollama following up and updating everything in a timely manner. Last I checked, there was little information on when the GGUFs etc. on ollama were updated, or which llama.cpp version / git commit they were built with. As such, I believe quality can vary and be significantly lower with the ollama versions of new models.
This article really rubs me the wrong way. A company selling an AI product saying the following about AI sceptics:
> All you crazy MFs are completely overlooking the fact that software engineering exists as a discipline because you cannot EVER under any circumstances TRUST CODE.
is straight up insulting to me, because it effectively comes down to "use my product or you're a looney".
Also, two years after this post (which should be labeled 2023), I've still barely tried to offload coding entirely to an LLM, and the few times I did try, the results were pretty crap. I also really, really don't want to 'chat' with my codebase or editor. 'Chatting' feels about as slow as writing the code myself, and I don't get a good mental model 'for free'.
I am a moderately happy user of AI autocomplete (specifically Supermaven), but I only ever accept suggestions that are trivially correct to me. If a suggestion is not trivial, it might be useful as a pointer to where to look in the actual documentation, but just accepting it will lead me down a wrong path more often than not.
Yiannopoulos is an... interesting case in general. Apparently[1] he declared himself to be "ex-gay", 'demoted' his husband to housemate, and is treating his homosexuality 'like an addiction'. His future plans include 'rehabilitating conversion therapy'.
Seeing all of that, I'm really not sure his boat has been rising with the tide, so to speak. I personally don't believe anyone considers conversion therapy good for themselves unless they are deeply troubled.