Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

There needs to be some more research on what path the model takes to reach its goal, perhaps there is a lot of overlap between this and the article. The most efficient way isn't always the best way.

For example, I asked Claude-3.7 to make my tests pass in my C# codebase. It did, however, it wrote code to detect if a test runner was running, then return true. The tests now passed, so, it achieved the goal, and the code diff was very small (10-20 lines.) The actual solution was to modify about 200-300 lines of code to add a feature (the tests were running a feature that did not yet exist.)



That is called "Volkswagen" testing. Some years ago that automaker had mechanism in cars which detected when the vehicle was being examined and changed something so it would pass the emission tests. There are repositories on github that make fun of it.


While that’s the most famous example, this sort of cheating is much older than that. In the good old days before 3d acceleration, graphics card vendors competed mostly on 2d acceleration. This mostly involved routines to accelerate drawing Windows windows and things, and benchmarks tended to do things like move windows round really fast.

It was somewhat common for card drivers to detect that a benchmark was running, and just fake the whole thing; what was being drawn on the screen was wrong, but since the benchmarks tended to be a blurry mess anyway the user would have a hard time realising this.


Pretty sure at least one vendor was accused of cheating on 3D-Mark at times as well.



I think Claude-3.7 is particularly guilty of this issue. If anyone from Anthropic is reading this, you might want to put your thumb on the scale so to speak the next time you train the model so it doesn't try to use special casing or outright force the test to pass


This looks like the very complaint of "specification gaming". I was wondering how will it show up in llm's...looks like this is the way it presented itself..


I'm gonna guess GP used a rather short prompt. At least that's what happens when people heavily underspecify what they want.

It's a communication issue, and it's true with LLMs as much as with humans. Situational context and life experience papers over a lot of this, and LLMs are getting better at the equivalent too. They get trained to better read absurdly underspecified, relationship-breaking requests of the "guess what I want" flavor - like when someone says, "make this test pass", they don't really mean "make this test pass", they mean "make this test into something that seems useful, which might include implementing the feature it's exercising if it doesn't exist yet".


My prompt was pretty short, I think it was "Make these tests pass". Having said that, I wouldn't mind if it asked me for clarification before proceeding.


Similar experience -- asked it to find and fix a bug in a function. It correctly identified the general problem but instead of fixing the existing code it re-implemented part of the function again, below the problematic part. So now there was a buggy while-loop, followed by a very similar but not buggy for-loop. An interesting solution to say the least.


Funny that you mention it because in JavaScript there already is a library for this:

https://github.com/auchenberg/volkswagen


Ah yes, the "We have a problem over there/I'll just delete 'over there'" approach.


I've also had this issue, where failing tests are deleted to make all the tests pass, or, it mocks a failing HTTP request and hardcodes it to 200 OK.


Reward hacking, as predicted over and over again. You hate to see it. Let him with ears &c.


I've heard this a few times with Claude. I have no way to know for sure, but I'm guessing the problem is as simple as their reward model. Likely they trained it on generating code with tests and provided rewards when those tests pass.

It isn't hard to see why someone rewarded this way might want to game the system.

I'm sure humans would never do the same thing, of course. /s




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: