Hacker News

Given that Python tends to produce fewer hallucinations when generated by LLMs, I wonder if former Django developers using AI tools are secretly having a blast right now.




I think another ace up Django's sleeve is that it has had a remarkably stable API for a long time with very few breaking changes, so almost all blog posts about Django that the LLM has gobbled up will still be mostly correct, whether they are a year or a decade old.

I get remarkably good and correct LLM output for Django projects compared to what I get in projects built on faster-moving frameworks that break their APIs more often.


The "one way" / "batteries included" aspect of Django may also make it easier for LLMs.
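To illustrate the "one way" point: Django conventions mean most projects define models, views, and URLs in the same shape, so an LLM trained on a decade of tutorials keeps emitting code that still works. A minimal sketch (the `Article` model and file names are hypothetical, and this is a fragment that assumes an existing Django project, not a standalone script):

```python
# models.py -- this ORM declaration style has been stable since early Django
from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=200)
    published = models.DateTimeField(auto_now_add=True)

# views.py -- the classic function-based view pattern
from django.shortcuts import render, get_object_or_404

def article_detail(request, pk):
    article = get_object_or_404(Article, pk=pk)
    return render(request, "article_detail.html", {"article": article})

# urls.py -- path() has been the canonical routing API since Django 2.0
from django.urls import path

urlpatterns = [
    path("articles/<int:pk>/", article_detail, name="article-detail"),
]
```

Compare this with front-end frameworks where the idiomatic way to fetch data or define a route has changed several times in the same span.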

Whenever I saw people complain about LLMs writing code, I never really understood why they were so adamant that it just didn't work at all for them. The moment I tried to use LLMs outside of Django, it became clear that some frameworks are much easier for LLMs to work with than others, and I immediately understood their frustration.

What a lot of people don't know is that SWE-bench is over 50% Django code, so all of the top labs hyper-optimize to perform well on it.

I know Python is more prevalent in SWE-bench than any other language, but more than 50% Django sounds like a big stretch. Citation?

Edit: it's about 37%, and SWE-bench is Python-only. https://arxiv.org/pdf/2310.06770v3


If Python produces fewer hallucinations, it's not because of the syntax; it's because there's so much training data.



