
I think it's important to note a few things about this.

First, Casey offers refunds on the Handmade website for anyone who purchased the pre-order. Second, the pre-orders were primarily purchased by people who wanted to get the in-progress source code of the project, not people who just wanted the finished game. I'm not aware of anyone who purchased the pre-order solely to get the finished game itself. (Though it's certainly possible that there were some.) Whether that makes a difference is up to the reader, I suppose, since the original versions of the site didn't say anything about how likely the project was to finish and did state that the pre-order was for both the source code and the finished game.

Third, the ten-year timeline (I believe the live streams only spanned 8 years) should be taken with the note that this was live streaming for just one hour per day on weekdays, or two hours two or three times a week later in the project. There's roughly 1,000 hours of video content, not including the Q&As at the end of every video. Those 1,000 hours include instructional content and whiteboard explanations in addition to the actual coding, which was done while explaining the code itself as it was written. (Also, he wrote literally everything from scratch, something he stated multiple times probably doesn't make sense in a real project.)

Taking into account the non-coding content, and the slower rate of coding while explaining what is being written, I'd estimate somewhere between 2-4 months of actual (40hr/week) work was completed, which includes both a software and a hardware renderer. No idea how accurate that estimate is, but it's definitely far less than 10 years and doesn't seem very indicative that the coding style he was trying to teach is untenable for game projects. (To be clear, it might be untenable. I don't know. I just don't see how the results of the Handmade Hero project specifically are indicative either way.)


Look, for whatever reason, he's not good at finishing what he starts: https://www.destructoid.com/he-worked-on-it-for-three-years-...

How much of that is due to the programming practices he espouses, I'm not sure. Ironically, if he went all-in on OOP with Smalltalk, I could see the super productivity that environment provides actually making it harder for him to finish anything, given how much it facilitates prototyping and wheel-reinvention. You see this with Pharo, where they rewrite the class browser (and other tools) every 2-3 years.

But his track record doesn't support the reputation he's built for himself.

> for game projects

That's the problem. Casey holds up a small problem domain, like AAA games, where OOP's overhead (even C++'s OOP) may genuinely pose a real performance problem, and suggests that it's representative of software as a whole, as if vtables are the reason Visual Studio takes minutes to load today vs. seconds 20 years ago.


The article you linked indicates the reason for him not finishing is specifically that he didn't like his game design, which seems orthogonal to coding practices.

He appears to have shipped middleware projects for RAD, and other contract work where he was not in charge of game design.


RAD was what, 15, 20 years ago? What has he released, in terms of proprietary or open source products, since then? Not just games, I mean ANYTHING. Refterm, and... what else? It's not like he was busy with his MSFT or RAD dayjob during this period.


He created Meow Hash somewhat recently and open sourced that. It's not a huge project but it's very useful. A lot of his time goes toward education, his personal projects and contract programming. Not every programmer is dedicated to releasing their own open source or commercial software. I'd bet most programmers don't. Using this as a metric to claim that he has a bad coding approach is ridiculous and laughable. Especially using Handmade Hero as an example... It really reveals your ignorance.

Also, since you care so much, let's see what you've released, smart guy. Preferably code so that we can see how talented you are.


> Also, since you care so much, let's see what you've released, smart guy. Preferably code so that we can see how talented you are.

I'm not the one telling everyone they're doing everything wrong, and did it not occur to you that my perception of what his output ought to have been over that timeframe (especially for someone who rates his own abilities as highly as he does) is informed by my own?


I think the Microsoft terminal saga shows a pretty clear track record.


Could you expand a bit on 'immediate mode design purity'? I've done immediate mode stuff from scratch, and my experience has been "just write whatever code you need to get it to work"; there hasn't really been an overall architecture guiding it. I've never worked with an IM library before.


Looking at the Nuklear example code again, it turns out that I was remembering wrong. I thought Nuklear required the API user to store persistent state and pass it into the Nuklear functions, but that was actually microui 1.x (and it has since been fixed in microui 2.x).

Sorry about the confusion.

E.g. in microui 1.x defining a window worked like this:

    static mu_Container window;
    if (mu_begin_window(ctx, &window, "Style Editor")) { ... }
...in microui 2.x it's no longer necessary to store the window state on the user side:

    if (mu_begin_window(ctx, "Style Editor", mu_rect(350, 250, 300, 240))) { ... }


This is not actually that difficult if you just want the basics. I have a project that implements just the X11 key & mouse events and the shared memory extension in 400 lines of Rust. It was worth it for me since it eliminated the dependency on libx11 and libc, which removed a dependency on something like 800,000 lines of code across those two libraries. (Determined by a basic cloc on each library's source. Actually compiled code for a specific architecture would probably be less than that, but still orders of magnitude more than 400.)


I probably wouldn't bother with implementing X11 from scratch for a game as even simply fullscreening it would likely require some diverging code paths to actually work everywhere, but Wayland should be a breeze. Having worked on SDL's Wayland backend I'd say that most of the difficulty came from impedance mismatches with preexisting SDL APIs, so if you design your thing from scratch all you really need to deal with there are the protocol bits - which you could just mostly hardcode and automatically get rid of most of libwayland's complexity that deals with protocol extensions and their XML descriptions.


Let's say I'm storing some information about an event occurring in Florida, USA next year. It will be on July 4th at 3:00pm. I store this as an ISO string with a fixed offset: "2025-07-04T15:00:00-04:00".

Florida (and the USA as a whole) has been discussing getting rid of the daylight saving time change on and off for several years.

If such a law goes through in December of this year, the date I stored for my event could be off by an hour, because come July 2025 Florida might be on UTC offset -05:00, not -04:00.

On the other hand, if I store the date with a fully qualified tz database timezone like this: "2025-07-04 15:00:00 America/New_York", the time will continue to be displayed correctly, because the tz database will be updated with the new offset information.
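As a minimal sketch of that approach (Python 3.9+, using the stdlib zoneinfo module; the variable names are mine), attaching the named zone instead of a fixed offset lets the runtime resolve the offset from the tz database at display time:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Store the wall-clock time with a named tz database zone, not a fixed offset.
event = datetime(2025, 7, 4, 15, 0, tzinfo=ZoneInfo("America/New_York"))

# The offset is looked up in the tz database when formatting, so a future
# rule change would be picked up automatically after a tzdata update.
print(event.isoformat())  # → 2025-07-04T15:00:00-04:00 (under current rules)
```

The key point is that the stored value is "wall-clock time plus zone identity"; the UTC offset is derived, not persisted.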


If the calendar application wants to be really accurate in a couple of years, it's probably best to ask the user for coordinates for the event. You never know if that spot in Florida will be part of America/New_York next year.

(Of course a tz identifier will be better than an integer offset, but I'm only half joking: https://en.wikipedia.org/wiki/Time_in_Indiana#tz_database)


> You never know if that spot in Florida will be part of America/New_York next year.

The example really threw me; in the case where you assume that Florida stops observing DST, Florida is definitely not going to be part of America/New_York, so that example is guaranteed to have the same problem as the UTC timestamp.


You're highlighting an important edge case here: The TZ database only splits regions reactively, not proactively.

But an actual lat/long pair is often neither available, nor desirable to be stored for various reasons.

Now that you mention it, I think I've seen some web applications that had a list of cities much longer than what's present in the TZ database for calendar scheduling purposes, probably to accommodate just that future edge case.


I appreciate those longer lists, though they do have some bizarre omissions.


Gilead claims that is false and that they spent 1.1 billion on developing Truvada. https://www.gilead.com/news-and-press/company-statements/gil...


> Gilead claims that is false and that they spent 1.1 billion on developing Truvada. https://www.gilead.com/news-and-press/company-statements/gil...

You are quoting a corporate press release that was written in response to an editorial criticizing Gilead, which was based on my colleagues' work.

This is a great example of how easy it is to fall for propaganda, because not a single thing in your link refutes what I said! They spent money developing Truvada as a treatment for HIV, then made that money back in record profits for nearly a decade. Only then did clinical trials for PrEP begin, and for those, Gilead donated only the production costs of Truvada (which are minimal). They did not spend any money actually conducting the trials - which, as pharmaceutical companies are generally very quick to point out, is where most of the costs of bringing a drug to market are.

Gilead is claiming that, when it spent half a billion dollars to acquire a biotech company that went bankrupt, 100% of the money in that transaction should count as "R&D related to Truvada". This is preposterous. Neither the SEC nor the IRS would endorse that accounting, which is why you're seeing it in a press release and not their 10-K.

That's a ridiculous claim even when you're talking about the development of Truvada, but that's not even the question at hand. The actual topic is how much was paid for the development of PrEP, which came nearly a decade later, and for which Gilead paid nothing but the per-unit costs of production.


The $1.1 billion figure is for Truvada total, not for PrEP specifically. It’s perhaps notable that Gilead chose not to break that down, given that the original claim they were responding to was about PrEP specifically.


> The $1.1 billion figure is for Truvada total, not for PrEP specifically. It’s perhaps notable that Gilead chose not to break that down, given that the original claim they were responding to was about PrEP specifically.

And even then it's a dishonest claim. Half of that $1.1 billion is the amount of money they paid to acquire another biotech company in a firesale. It's beyond disingenuous for them to claim all of that towards the amount they spent developing Truvada, since they received way more assets in that sale than just the patent for one drug.


Not sure if it could be extended here, but I've seen a lock-free hash map that supported lock-free reallocation by allocating new space and moving each entry one by one, either when the entry is accessed or concurrently in a separate thread. Accessing an entry during the reallocation would check the new region first, and if not found, check the old region. Entries in the old region would be marked as moved, and once all entries were moved the old allocation could be freed.
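A single-threaded Python sketch of that lookup/migration order (class and method names are mine; the atomics and CAS loops that make the real thing lock-free are omitted):

```python
# Sketch of incremental migration between two hash table regions.
# Lookups check the new region first, then the old one, and entries
# migrate as they are touched.
class MigratingMap:
    MOVED = object()  # sentinel marking an entry already moved out of `old`

    def __init__(self, old_entries):
        self.old = dict(old_entries)  # region being drained
        self.new = {}                 # freshly allocated region

    def get(self, key):
        if key in self.new:           # new region wins
            return self.new[key]
        val = self.old.get(key)
        if val is None or val is MigratingMap.MOVED:
            return None
        self.new[key] = val           # migrate on access
        self.old[key] = MigratingMap.MOVED
        return val

    def migration_done(self):
        # The old region can be freed once every entry is marked as moved.
        return all(v is MigratingMap.MOVED for v in self.old.values())
```

In the real lock-free version, each of those dictionary updates would be an atomic compare-and-swap, and a background thread could drain the old region concurrently with readers.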


For the (un)bounded logs, the whole concept rests on the fact that the log isn't going to move once allocated, and that references to an item will never be invalidated until the end of the program.


I read this as mut1 and mut2 both being downgraded to shared references because they aren't used to mutate anymore. I'd imagine that's not what's actually happening though?


No - the fact that they are both &mut _ indicates that a mutable reference is being acquired regardless of whether or not they're used for any mutation. Possibly the compiler could automatically lower to a shared reference if it detects no mutating access locally, but there may be design reasons why that's impossible (plus you can't have a mutable and a shared reference simultaneously anyway, so downgrading to shared would still be disallowed).


> anyone know of an IDE/tool that can spell check UI strings without tripping up on variable names

Sublime Text works pretty well for me. I think you need to turn on spell check in the settings (off by default?). Then you can choose which syntax highlighting scopes you want to spell check. I have mine configured to spell check comments and string literals.
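If memory serves, the relevant settings look roughly like this (setting names as I recall them from Sublime Text's default preferences; verify against your version's defaults):

```json
{
    // Enable the spell checker (off by default).
    "spell_check": true,
    // Scopes to check: string literals and comments only,
    // so variable names and other source code are skipped.
    "spelling_selector": "source string.quoted -string.regexp, comment"
}
```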


What does a broken repo look like? What does broken mean here?


I use a similar technique for typing all the symbols. If I press f I get f, and r gives me r, but if I press f and r at the same time with my left index finger positioned between the two keys, I get $. w and s give {, e and d give }.

All the other symbols, parens, etc. are mapped to similar one-finger-two-key shortcuts.

This significantly cut down on finger stretching, which used to cause me mild pain.
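The core of the mapping can be sketched as a tiny chord table (Python, names hypothetical; a real implementation lives in keyboard firmware or a remapper and also needs a timing window to distinguish a chord from fast sequential presses):

```python
# Hypothetical chord table: a set of keys pressed together maps to a symbol.
CHORDS = {
    frozenset("fr"): "$",
    frozenset("ws"): "{",
    frozenset("ed"): "}",
}

def resolve(pressed):
    """Return the chord's symbol if `pressed` forms a chord, else the lone key."""
    keys = frozenset(pressed)
    if keys in CHORDS:
        return CHORDS[keys]
    return pressed if len(keys) == 1 else None
```

Using frozenset makes the lookup order-independent, which matches the physical situation: "f and r at the same time" has no defined key order.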

