GPT tools give piss-poor suggestions when working with the Godot game engine.
The Godot engine is an open-source project that has matured significantly since GPT tools hit the market.
The GPTs don't know about the newer Godot features, and there is a training gap that I'm not sure OpenAI and their competitors will ever be able to overcome.
Do you work on web applications? I've found that GPT tools get pretty stupid once you stop working with HTML/JS.
Thanks for the Godot example. I experimented with it through Claude Code (I have no prior experience with Godot). I got a Vampire Survivors-esque game working from scratch in about 70 minutes, on and off, with plain shapes representing game elements like the player and enemies. It included five different weapons, enemies that move toward the player, player movement, little experience orbs dropped when enemies expire, a projectile and area-of-effect damage system, and a levelling-up and upgrade system with a UI that influenced weapon behaviours.
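To give a sense of the pieces involved: most of the individual behaviours are tiny. The enemy "walk toward the player" logic, for example, is only a handful of lines of GDScript. The sketch below is illustrative rather than the actual generated code; the "player" group name and the speed value are placeholders:

    # enemy.gd - illustrative sketch, not the generated code.
    extends CharacterBody2D

    @export var speed: float = 80.0  # pixels per second, placeholder value

    # Assumes the player node was added to a "player" group somewhere.
    @onready var player: Node2D = get_tree().get_first_node_in_group("player") as Node2D

    func _physics_process(_delta: float) -> void:
        if player == null:
            return
        # Head straight for the player's current position.
        velocity = global_position.direction_to(player.global_position) * speed
        move_and_slide()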
Godot with AI was definitely a worse experience than usual for me. I did not use the Godot editor, even though the development flow for Godot seems to be built around it. Scenes were generated through a Python script, which was of course also written by Claude Code. I did not personally review a single line of code during the process.
My findings afterwards are:
1) Code quality was not good. I have a year of experience working with Unity, and the code examples you find online for game development tend to be of incredibly poor quality. My guess is that an AI trained on the online corpus of game-development forums is bound to produce equally poor output; game development in particular suffers from this tainted training data. And indeed, the generated code did not follow modern practices, even after I hooked up a context MCP server that provides code examples.
2) It was able to refactor the codebase to modern practices when instructed to. I told it to figure out what modern practices were and to apply them, and it started making modifications like adding type hints (see the before/after sketch after this list). Commonly you would use predefined rules for this with an LLM tool; I did not use any for my experiment. Setting those rules up is a one-time task, after which the AI will prefer your way of working. An example for Godot can be found here: https://github.com/sanjeed5/awesome-cursor-rules-mdc/blob/ma...
3) It was very difficult for Claude Code to debug. Godot seems to require working with its dedicated editor, and the debugging flow goes either through that editor or through launching the game and interacting with it. Out of the box, that flow is not suitable for Claude Code or similar tools, which need some way to independently verify that functions and features work as expected (a rough headless-test sketch follows below).
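To make point 2 concrete, the refactor was mostly mechanical changes of this kind - a made-up before/after in Godot 4 GDScript, not the actual diff:

    # player_stats.gd - hypothetical example of the type-hint refactor.
    #
    # Before (untyped, roughly the style it generated at first):
    #
    #     extends Node
    #     var health = 100
    #     var max_health = 100
    #     func heal(amount):
    #         health = min(health + amount, max_health)
    #
    # After the "modern practices" pass (typed GDScript):
    extends Node

    var health: int = 100
    var max_health: int = 100

    func heal(amount: int) -> void:
        health = mini(health + amount, max_health)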
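On point 3, the closest thing to independent verification I can think of is running Godot headless from the command line with a small test script, which keeps the editor out of the loop. A minimal sketch, assuming Godot 4's --headless and -s flags and the hypothetical player_stats.gd from the previous snippet:

    # test_heal.gd - run with: godot --headless -s test_heal.gd
    # Scripts run with -s have to extend SceneTree (or MainLoop).
    extends SceneTree

    func _init() -> void:
        var stats = load("res://player_stats.gd").new()
        stats.health = 50
        stats.heal(100)
        # Healing must never push health past max_health.
        assert(stats.health == stats.max_health)
        print("heal() clamps to max_health: OK")
        stats.free()
        quit()

I have not wired anything like this into Claude Code's own verification loop, so treat it as an idea rather than a working setup.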
> Do you work on web applications? I've found that GPT tools get pretty stupid once you stop working with HTML/JS.
Not really - I work on developer experience and internal developer platforms. That is 80~90% Python, Go, Bash and Terraform, and maybe 10~20% TypeScript with React, depending on the project.