I would recommend using it, yeah. You have limited context and it will be compacted/summarized occasionally. The compaction/summary loses some information, and it's easy for the model to forget certain instructions you gave it. AFAIK CLAUDE.md is reloaded into the context after every compaction, which lets you use it for instructions that should always be present in the context.
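For example, a hypothetical CLAUDE.md might hold only the rules that need to survive compaction (the specifics here are made up):

```markdown
# CLAUDE.md
- Run `make test` before declaring any task done.
- Never commit directly to `main`; use one branch per feature.
- Ask before adding a new dependency.
```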
> Why is that good advice? If that thing is eventually supposed to do the most tricky coding tasks, and already a year ago could have won a medal at the informatics olympiad, then why wouldn't it eventually be able to tell whether I'm using 2 or 4 spaces and format my code accordingly? Either it's going to change the world, in which case this is a trivial task, or it's all vaporware, in which case what are we even discussing?
This is the exact reason for the advice: the LLM is already able to follow coding conventions just by looking at the surrounding code, which is already in the context. So by adding your coding conventions to CLAUDE.md, you're just spending extra context for no gain.
And another reason not to use an agent for linting/formatting (i.e. prompting "format this code for me") is that dedicated linters/formatters are faster, deterministic, and take maybe a single cent of electricity to run, whereas using an LLM for the same job will cost multiple dollars, if not more.
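For comparison, the conventional route (assuming a JS/TS project that uses Prettier; every ecosystem has an equivalent):

```
# reformats the whole source tree in place, in seconds, for ~nothing
npx prettier --write src/
```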
Sorry, but no. Those functionalities fall under "functional cookies" and as such do not require consent. Also, dark mode needs no tracking at all, and "logging in" does not mean "tracking".
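To illustrate the dark-mode point: the preference can live entirely client-side, with no cookie and nothing ever sent to a server. A minimal TypeScript sketch:

```typescript
// Persist the theme in localStorage: it never leaves the browser and is
// never attached to any request, so there is no server-side state at all.
function applyTheme(theme: "light" | "dark"): void {
  document.documentElement.dataset.theme = theme;
  localStorage.setItem("theme", theme);
}

// On page load, use the stored choice, falling back to the OS preference.
const stored = localStorage.getItem("theme");
const prefersDark = window.matchMedia("(prefers-color-scheme: dark)").matches;
applyTheme(stored === "dark" || stored === "light"
  ? stored
  : prefersDark ? "dark" : "light");
```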
Strictly necessary cookies, session tokens and such, are exempt. But there's no general exemption for cookies that provide functionality a user might like. If your site will function without remembering who I am when I come back tomorrow, you have to disclose that you're going to try to remember me and give me a chance to say I don't want you to. It doesn't matter how benign your plans for that information are: the whole point is that the user stays in control and gets to make that decision.
No. You can still install APKs through ADB, which is how you would do it during development. But you won't be able to distribute it without signing it through Google.
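Sideloading a development build is a one-liner (the path is a placeholder for wherever your build lands):

```
adb install path/to/app-debug.apk
```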
I think part of the issue is that the LLM does not have enough context. The difference between a bug in the test and a bug in the implementation comes down purely to the requirements, which are often not in the source code but stored somewhere else (ticket system, documentation platform).
Without providing the actual feature requirements to the LLM (or the developer, for that matter), it is impossible to determine which one is wrong.
Which is why I also think it's sort of stupid to have the LLM generate tests by just giving it access to the implementation. At best that tests the implementation as it currently is, but tests should be derived from the requirements.
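To make the distinction concrete, here's a hypothetical requirement and a Jest-style test in TypeScript (module and function names are made up):

```typescript
// Requirement (from the ticket): "Orders of 100 EUR or more ship for free."
// A requirements-based test encodes that sentence, not the current code path.
import { shippingCost } from "./shipping"; // hypothetical module under test

test("orders of 100 EUR or more ship for free", () => {
  expect(shippingCost(100)).toBe(0);
  expect(shippingCost(99.99)).toBeGreaterThan(0);
});

// A test generated from the implementation alone would instead assert whatever
// shippingCost(100) happens to return today, off-by-one bugs included.
```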
Oh, absolutely, context matters a lot. But the thing is, they still fail even with solid context.
Before I let an agent touch code, I spell out the issue/feature and have it write two markdown files, strategy.md and progress.md (with the execution order of changes), inside a feat_{id} directory. Once I'm happy with those, I wipe the context and start fresh: feed it the original feature definition plus the docs, then tell it to implement by pulling in the right source code context. So by the time any code gets touched, there's already ~80k tokens in play. And yet, the same confusion frequently happens.
Even if I flat out say "the issue is in the test/logic", even if I point out _exactly_ what the issue is, it just apologizes and loops.
At that point I stop it, make it record the failure in the markdown doc, reset context, and let it reload the feature plus the previous agent’s failure. Occasionally that works, but usually once it’s in that state, I have to step in and do it myself.
It's a very easy way to write some markdown that renders as a website. If the URL didn't have Obsidian in it, I wouldn't have guessed it was involved.