Hacker News | onionisafruit's comments

I've been using it through this and it occasionally stops with an error message saying something like "repeated 529 responses". Kind of annoying but it's fine.

The blue jay too. The plumage on the top and bottom looks like it comes from different birds.

I think I would like a “stackvar” declaration that works the same as “var” except my code won’t compile if escape analysis shows it would wind up on the heap. I say that knowing I’m not a language designer and have never written a compiler. This may be an obviously bad idea to somebody experienced in either of those.

I commented elsewhere on this post that I rarely have to think about stacks and heaps when writing Go, so maybe this isn’t my issue to care about either.


This could probably be implemented as an expensive comment-driven lint during compilation.

I don’t think it can be a true linter because it depends on the compiler. But it’s not a bad idea anyway
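
The closest thing I've found in practice is pinning the allocation count in an ordinary test with testing.AllocsPerRun, so a change that starts escaping to the heap fails CI. A rough sketch (hotPath is a made-up stand-in for whatever you want to keep off the heap):

    func TestHotPathDoesNotAllocate(t *testing.T) {
        avg := testing.AllocsPerRun(100, func() {
            hotPath() // hypothetical function under test
        })
        if avg != 0 {
            t.Fatalf("expected 0 allocations per run, got %v", avg)
        }
    }

It won't tell you which variable escaped, but it does catch the regression.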

Escape analysis already decides which allocations stay on the stack and which get sent to the heap. The information is there.
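
E.g. building with go build -gcflags=-m prints the decisions. A toy example (package and names made up; the exact compiler wording varies by Go version):

    package widget

    type Widget struct{ a, b, c int64 }

    // Returning a pointer to a local makes w outlive the frame, so the
    // compiler typically reports something like "moved to heap: w" here.
    func NewWidget() *Widget {
        w := Widget{1, 2, 3}
        return &w
    }

    // total never leaves the frame, so it can live on the stack and is
    // reclaimed for free when Sum returns.
    func Sum(ws []Widget) int64 {
        var total int64
        for _, w := range ws {
            total += w.a + w.b + w.c
        }
        return total
    }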

Go has been my primary language for a few years now, and I’ve had to do extra work to make sure I’m avoiding the heap maybe five times. Stack and heap aren’t on my mind most of the time when designing and writing Go, even though I have a pretty good understanding of how it works. The same applies to the garbage collector. It just doesn’t matter most of the time.

That said, when it matters it matters a lot. In those times I wish it was more visible in Go code, but I would want it to not get in the way the rest of the time. But I’m ok with the status quo of hunting down my notes on escape analysis every few months and taking a few minutes to get reacquainted.

Side note: I love how you used “from above” and “from below”. It makes me feel angelic as somebody who came from above, even if Java and Ruby hardly seemed like heaven.


Why have you had to avoid the heap? Performance concerns?

For me, avoiding the heap, or rather avoiding GC, came up when I was working (at work) on a backend web server in Java, and there was a default rule for our code that if GC took more than 1% of the time (I don't remember the exact value) the server got restarted.

Coming (back then) from C/C++ gamedev, I was puzzled, but then I understood the mantra: it's better for the process to die fast than to be pegged by GC and stop answering clients.

Then we started looking at what made it use the GC so much.

I guess it might be similar in Go. In the past I've seen some projects use a "balloon" to circumvent Go's GC heuristics: if you blow up a dummy balloon that takes half of your memory, the GC might not kick in so much. Something like that... though it's obviously a bad solution long term.
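
For the curious, the trick (often called a heap "ballast") is roughly the sketch below. Not a recommendation; since Go 1.19 setting GOMEMLIMIT is usually the saner knob:

    package main

    import "runtime"

    func main() {
        // The GC paces itself off the live heap, so a large, never-touched
        // allocation raises the next collection's target and makes GC rarer.
        ballast := make([]byte, 1<<30) // ~1 GiB, never read or written

        // ... run the real server here ...

        runtime.KeepAlive(ballast) // keep the ballast live for the process lifetime
    }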


Garbage Collection.

The size and layout of a stack frame are (always?) known at compile time, and the frame is thrown away wholesale when the function returns, which makes allocations on the stack relatively cheap. These FOSDEM talks by Bryan Boreham & Sümer Cip touch on it a bit:

- Optimising performance through reducing memory allocations (2018), https://archive.fosdem.org/2018/schedule/event/faster/

- Writing GC-Friendly [Go] code (2025), https://archive.fosdem.org/2025/schedule/event/fosdem-2025-5...

Speaking of GC, Go 1.26 will default to a newer one viz. Green Tea: https://go.dev/blog/greenteagc
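
If I remember the post right, it can already be tried on Go 1.25 behind a build-time experiment, along the lines of:

    GOEXPERIMENT=greenteagc go build ./...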


Ha! I had not intended to imply that one is better than the other, but I am glad that it made you feel good :).

I also came "from above".


What is especially bad about this ad? To me it seems no worse than the infernal Peyton Manning ad from last year or the State Farm Meghan Trainor ad this year. If this was on rotation in NFL games it wouldn’t make me scramble for the mute button any faster than other ads.

I thought similarly to you, until I saw it: https://www.youtube.com/watch?v=abRie4vAvJ4

It almost seems intentionally AI? If my job at Maccas…ahem, McDonald's (sorry, spot the Aussie) were in marketing, I’d expect to be promptly fired unless this was meant to pass as satire.

Do you have children’s names for other restaurants?

Yeah, seeing it just once it looks like an ad with a terrible premise.

It certainly has an AI feel to it though, and I'm sure the more times you see it the more it falls apart.

On 1st watch the part that sticks out is the couple sitting by a window, who seem to be somehow sitting both inside AND outside at the same time.


I still cannot see the bad part

I am legitimately sorry

You have to turn your monitor on, silly!

Have you ever ridden a bike over a canal? The ad was pushed in front of a lot of people who have. I thought it was creepy throughout, but I can't believe they used that clip up front.

“a lot of people” - citation needed.

You realize it was a Dutch ad, correct?

Well, for a start it's a bad concept, but also the actual images are kinda nightmarish. The living teddy bear is particularly off-putting. And it's very obviously AI slop; physics is at best a mild suggestion.

I have taken it as a tongue in cheek reference to the current AI slop discussions, so like purposefully made sloppy. Appropriate joke? Apparently not, according to the masses. Well, just a matter of taste.

I mean, it's _possible_ they were aiming for "so bad it's good", missed, and ended up at just "really, really bad", I suppose. In practice, conscious attempts at "so bad it's good" virtually never work out; it is a thing which happens, not which is deliberately done.

The difference is that those ads were annoying on purpose

Fixtures are great for integration tests. But I agree that unit tests needing fixtures indicates a design issue.

Still, most of us work on code bases with design issues either of our own making or somebody else’s.


Yup, so I'm not against fixtures per se; they have their uses and can be a pragmatic choice. I just often don't like it when I have to use them, as it's often to patch over something else. But things are never perfect.

I use the pattern you describe, but not in Ruby. I use code to build fixtures through SQL inserts. The code creates a new db whose name includes a hash of the test data (actually a hash of the source files that build the fixtures).

Read-only tests only need to run the bootstrap code if their particular fixture hasn’t been created on that machine before. Same with some tests that write data but can be encapsulated in a transaction that gets rolled back at the end.

Some more complex tests need an isolated db because their changes can’t be contained in a db transaction (usually because the code under test commits a db transaction). These need to run the fixture bootstrap every time. We don’t have many of these so it’s not a big deal that they take a second or two. If we had more we would probably use separate, smaller fixtures for these.
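
In Go the shape of it is roughly the sketch below (connectToAdminDB, databaseExists, createAndLoad, and connectTo are made-up names; the real helpers depend on your driver and server setup):

    // fixtureDB returns a connection to a database named after a hash of the
    // fixture SQL files, creating and seeding it only if it doesn't exist yet.
    func fixtureDB(t *testing.T, fixtureDir string) *sql.DB {
        t.Helper()

        files, err := filepath.Glob(filepath.Join(fixtureDir, "*.sql"))
        if err != nil {
            t.Fatal(err)
        }
        sort.Strings(files)

        // Same fixture sources -> same hash -> same database name, so repeat
        // runs on one machine can skip the bootstrap entirely.
        h := sha256.New()
        for _, f := range files {
            b, err := os.ReadFile(f)
            if err != nil {
                t.Fatal(err)
            }
            h.Write(b)
        }
        name := fmt.Sprintf("fixture_%x", h.Sum(nil)[:6])

        admin := connectToAdminDB(t)             // hypothetical: *sql.DB for the test db server
        if !databaseExists(t, admin, name) {     // hypothetical
            createAndLoad(t, admin, name, files) // hypothetical: CREATE DATABASE + replay the .sql files
        }
        return connectTo(t, name) // hypothetical: *sql.DB for the fixture database
    }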


I’ve found that golden master tests (aka snapshot testing) pair very well with fixtures. If I need to add to the fixtures for a new test, I regenerate the golden files for all the known-good tests. I barely need to glance at these changes because, as I said, they are known good. Still, I usually give them a brief once-over to make sure I didn’t do something like add too many records to a response that’s supposed to be a partial page. Then I go about writing the new test and implementing the change I’m testing. After implementing the change, only the new test’s golden files should change.

They are also nice because I don’t have to think so much about assertions. They automatically assert the response is exactly the same as before.


I'm familiar with snapshot testing for UI, and I agree with you: they can work really well for this because they're usually quick to verify. And especially if you can build some smart tolerance into the comparison logic, they can be really easy to maintain.

But how would you do snapshot testing for behaviour? I'm approaching the problem primarily from the backend side and there most tests are about behaviour.


I'm also primarily on the back end. Like most backenders, I spend my workdays on HTTP endpoints that return JSON. When I test these, the "snapshot" is a JSON file with a pretty-printed version of the endpoint's response body. Tests fail when the generated file isn't the same as the existing file.
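
A stripped-down version of one of those tests, for the curious (newHandler is a stand-in for whatever builds the http.Handler under test, and the -update flag is just a convention, not a library):

    package api

    import (
        "bytes"
        "encoding/json"
        "flag"
        "net/http/httptest"
        "os"
        "path/filepath"
        "testing"
    )

    var update = flag.Bool("update", false, "rewrite golden files")

    func TestListWidgets(t *testing.T) {
        rec := httptest.NewRecorder()
        newHandler().ServeHTTP(rec, httptest.NewRequest("GET", "/widgets", nil)) // newHandler is hypothetical

        // Pretty-print the body so golden-file diffs stay readable.
        var pretty bytes.Buffer
        if err := json.Indent(&pretty, rec.Body.Bytes(), "", "  "); err != nil {
            t.Fatal(err)
        }

        golden := filepath.Join("testdata", "list_widgets.json")
        if *update {
            if err := os.WriteFile(golden, pretty.Bytes(), 0o644); err != nil {
                t.Fatal(err)
            }
        }

        want, err := os.ReadFile(golden)
        if err != nil {
            t.Fatal(err)
        }
        if !bytes.Equal(want, pretty.Bytes()) {
            t.Errorf("response differs from %s; rerun with -update if the change is intended", golden)
        }
    }

Regenerating is just rerunning the tests with -update and reviewing the diff.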

Ah, Ok, yes, for API endpoints it makes a lot of sense. Especially if it's a public API, you need to inspect the output anyway, to ensure that the public contract is not broken.

But, I spend very little or no time on API endpoints since I don't work on projects where the frontend is an SPA. :)


Just wait until the jepsen report on /dev/null. It's going to be brutal.

/dev/null works according to spec, can't accuse it of not doing something it has never promised

On the other hand, this issue has been known to GitHub since shortly after Actions’ release[0]. They added some CYA verbiage to their docs, but they never followed up by making version pinning meaningful.

Sure, you can implement it yourself for direct dependencies and decide to only use direct dependencies that also pin to commit SHAs, but most users don’t even realize it’s a problem to begin with. The users who do know often don’t bother to use SHAs anyway.

Or GitHub could spend a little engineer time on a feasible lock file solution.

I say this as somebody who actually likes GitHub Actions and maintains a couple of somewhat well-used actions in my free time. I use SHA pinning in my composite actions and encourage users to do the same when using them, but when I look at public repos using my actions it’s probably 90% using @v1, 9% @v1.2, and 1% using commit SHAs.

[0] Actions was the first Microsoft-led project at GitHub — from before the acquisition was even announced. It was a sign of things to come that something as basic as this was either not understood or swept under the rug to hit a deadline.
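
For anyone wondering what the difference looks like in a workflow file, it's roughly this (the action name and SHA are made-up placeholders):

    steps:
      # Mutable: whoever can move the v1 tag controls what runs in your CI.
      - uses: some-org/some-action@v1
      # Pinned: resolves to exactly one commit. The trailing comment is just a
      # human-readable version marker that tools like Dependabot keep updated.
      - uses: some-org/some-action@0123456789abcdef0123456789abcdef01234567 # v1.2.3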

