I do think that it was weird to focus on the AI aspect so much. AI is going to pollute everything going forward whether you like it or not. And honestly, who cares: either it is a good resource for learning or it's not. You have to decide that for yourself, not based on whether AI helped write it.
However I think some of the critique was because he stole the code for the interactive editor and claimed he made it himself, which of course you shouldn’t do.
You can correct me if I'm wrong, but I believe the actual claim was that Zigbook had not complied with the MIT license's attribution clause for code someone believed was copied. MIT only requires attribution for copies of "substantial portions" of code, and the code copied was 22 lines.
Does that count as substantial? I'm not sure because I'm not a lawyer, but this was really a dispute about the definitions in an attribution clause, over less code than people regularly copy from Stack Overflow without a second thought. By the time this accusation was made, the Zigbook author was already under attack from the community, which put them in a defensive posture.
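For reference, the clause everyone is arguing about is the standard MIT license's attribution requirement, which reads:

```text
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
```

Note that the license itself never defines "substantial portions", which is exactly why reasonable people can disagree about whether 22 lines qualifies.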
Now, just to be clear, I think the book author behaved poorly in response. But the internet is full of young software engineers who would behave poorly if they wrote a book for a community and the community turned around and vilified them for it. I try not to judge individuals by the way they behave on their worst days. But I do think something like a community has a behavior and culture of its own and that does need to be guided with intention.
> You can correct me if I'm wrong, but I believe the actual claim was that Zigbook had not complied with the MIT license's attribution clause for code someone believed was copied. MIT only requires attribution for copies of "substantial portions" of code, and the code copied was 22 lines.
Without including proper credit, it is classic infringement. I wouldn't personally call copyright infringement "theft", though.
Imagine, for a moment, the generosity of the MIT license: "you can pretty much do anything you want with this code, I gift it to the world, all you have to do is give proper credit". And so you read that, and take and take and take, and can't even give credit.
> Now, just to be clear, I think the book author behaved poorly in response
Precisely: maybe it was just a mistake? So, the author politely and professionally asks, not for the infringer to stop using the author's code, but just to give proper credit. And hey, here's a PR, so doing the right thing just requires an approval!
The infringer's response to the offer of help seemed to confirm that this was not a mistake, but rather someone acting in bad faith. IMO, people should learn early on in their life to say "I was wrong, I'm sorry, I'll make it right, it won't happen again". Say that when you're wrong, and the respect floods in.
> By the time this accusation was made, the Zigbook author was already under attack
This is not quite accurate, from my recollection of events (which could be mistaken!): the community didn't even know about it until after the author respectfully, directly contacted the infringer with an offer to help, and the infringer responded with hostility and what looked like a case of Oppositional Defiant Disorder.
> I do think that it was weird to focus on the AI aspect so much. AI is going to pollute everything going forward whether you like it or not.
The bigger issue is that they claimed no AI was used. That's an outright lie, which makes you wonder whether you can trust anything else about it.
> And honestly, who cares: either it is a good resource for learning or it's not. You have to decide that for yourself, not based on whether AI helped write it.
You have no way of knowing if something is a good resource for learning until you invest your time into it. If it turns out it’s not a good resource, your time was wasted. Worse, you may have learned wrong ideas you now have to unlearn. If something was generated with an LLM, you have zero idea which parts are wrong or right.
I agree with you. It is shitty behavior to say it is not AI written when it clearly is.
But I also think we at this point should just assume that everything is partially written using AI.
For your last point, I think this was also a problem before LLMs. It has of course become easier to fake some kind of ethos in your writing, but it is also becoming easier to spot AI slop when you know what to look for, right?
> I agree with you. It is shitty behavior to say it is not AI written when it clearly is.
> But I also think we at this point should just assume that everything is partially written using AI.
Using "but" here implies your 2nd line is a partial refutation to the first. No one would have been angry if he'd posted it without clearly lying. Using AI isn't what pissed anyone off, being directly lied to (presumably to get around the strict "made by humans" rules across all the various Zig communities). Then there was the abusive PR edits attacking someone that seems to have gotten him banned. And his history of typosquatting, both various crypto surfaces, and cursor, and the typosquatting account for zigglang. People are mad because the guy is a selfish asshole, not because he dared to use AI.
Nothing I've written has been assisted by AI in any way, and I know a number of people who do and demand the same. I don't think it's a reasonable default assumption.