
This is probably one of those events where everyone on the inside has their own story that won't fit into a neat overarching narrative of how the files are handled, because each of them only gets to feel a part of the elephant.


Most gamers don't have the faintest clue regarding how much work and effort a game requires these days to meet even the minimum expectations they have.


That's bullshit. I don't care about graphics, I play lots of indie games, some of them made by a single person. There are free game engines, so basically all one needs for a successful game is a good idea.

And a friend of mine still mostly plays the goddamn Ultima Online, the game that was released 28 years ago.


And if a new game came out today that looked and played the same as Ultima Online… what would you (and the rest of gamers) think about it?

Your expectations of that game are set appropriately. Same with a lot of indie games: the expectation can be that it's in early access for a decade+. You would never accept that from, say, Ubisoft.


Depends on what that game brings; I might like it a lot. Again, me and all my friends love indie games, most of them with pixel graphics or just low-poly. The market for such games is big enough. Just look up the sales estimates for some popular indie games.


You are a minor share of the overall market, and the sad truth is that most indie games sell a pitiful handful of copies and can't sustain their creators financially. And even indie games have to meet certain standards, and given that they are developed mostly by single devs, meeting even those "minimal" standards takes years for many of them.


But then you have a money trail connecting the company unambiguously to copyright violations on a scale that is arguably larger than Napster.


I mean Facebook and Anthropic both torrented LibGen in its entirety.


I believe they're largely targeting foreign companies who don't care much about US copyright law.


Yeah, how devastating it would be for Anna's Archive to be found skirting copyright laws. Their reputation may never recover.

\s


Ah, yikes, just ignore this comment, my literacy skills failed me here.

He meant the AI companies


I mean, the same comment applies mutatis mutandis.


Nanite is just working around an inefficiency that occurs on small triangles that require screen space derivatives, which the hardware approximates using finite differences between neighbors, e.g. for the texture footprint estimation in mipmapping. The rasterizer invokes additional shader instances around triangle borders to get the source values for these operations. That gets excessive when triangles are tiny. This is an edge case, but it becomes important when there is lots of tiny geometric detail on screen.
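
To illustrate (a rough sketch, not how any particular GPU implements it, and all names here are hypothetical): the footprint estimate needs UV values from neighboring pixels, which is why the rasterizer shades whole 2x2 quads and spawns helper invocations along triangle edges just to supply them.

    import static java.lang.Math.*;

    // Rough sketch of mip level selection from screen-space derivatives.
    // The derivatives are approximated by finite differences of the
    // interpolated UVs across a 2x2 pixel quad, so the value for one pixel
    // depends on its neighbors.
    final class MipSelect {
        // uv00 = this pixel, uv10 = right neighbor, uv01 = pixel below
        // (hypothetical names; uv[0] = u, uv[1] = v)
        static double mipLevel(double[] uv00, double[] uv10, double[] uv01,
                               int texSize) {
            double dudx = (uv10[0] - uv00[0]) * texSize;
            double dvdx = (uv10[1] - uv00[1]) * texSize;
            double dudy = (uv01[0] - uv00[0]) * texSize;
            double dvdy = (uv01[1] - uv00[1]) * texSize;
            // footprint = largest rate of change of the texel coordinate
            double rho = max(sqrt(dudx * dudx + dvdx * dvdx),
                             sqrt(dudy * dudy + dvdy * dvdy));
            return max(0.0, log(rho) / log(2.0)); // log2 of the footprint
        }
    }

When a triangle covers only one or two pixels of a quad, most invocations exist purely to feed these differences, and the effective cost per visible pixel multiplies; that is the overhead a software rasterizer like Nanite's sidesteps.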


While it is conceivably possible to write perfect software that will run flawlessly on a perfect computer forever, the reality is that the computer it runs on and the devices it controls will eventually fail - it's just a question of when and how, never if. A device that hasn't failed during its lifespan was simply not used long enough to fail.

In light of this, even software development has to focus on failures when you apply this standard. And that does include considerations like failures occurring within the computer itself (faulty RAM or a faulty CPU core).


The problem with focusing on failures is that such analysis misses all the losses that occur even when everything works as designed. Analysis has to focus on all losses -- both failures (often the trivial case) and non-failures (design errors, often trickier to find).


Your post reads like an admission to me that the system is broken. Real persons need real recourse, especially if an adverse action has major impact on their lives.

Could it be that fully automated payment processes are just so fundamentally vulnerable that their very existence needs to be questioned because of how overwhelmed they get with fraud attempts? I'm deliberately being controversial here for the sake of discussion.


That is an accurate reading of my comment, and I have asked myself the same question.


There are tons of places within the GPU where dedicated fixed function hardware provides massive speedups within the relevant pipelines (rasterization, raytracing). The different shader types are designed to fit inbetween those stages. Abandoning this hardware would lead to a massive performance regression.


Just consider the sheer number of computations offloaded to TMUs. Shaders would already do nothing but interpolate texels if you removed them.
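
To make that concrete, here is a minimal sketch (single channel, plain Java, hypothetical names) of just the filtering arithmetic behind one bilinear texture sample; a TMU does this in fixed function for every sample, on top of address calculation, wrapping, format decoding and mip blending:

    // One bilinear sample from a single-channel texture: 4 fetches and 3 lerps.
    // A TMU performs this per sample (and per channel) in fixed function.
    final class Bilinear {
        static double sample(double[][] tex, double u, double v) {
            int w = tex[0].length, h = tex.length;
            double x = u * w - 0.5, y = v * h - 0.5;      // texel-space position
            int x0 = (int) Math.floor(x), y0 = (int) Math.floor(y);
            double fx = x - x0, fy = y - y0;              // interpolation weights
            int x1 = Math.min(x0 + 1, w - 1), y1 = Math.min(y0 + 1, h - 1);
            x0 = Math.max(x0, 0);
            y0 = Math.max(y0, 0);                         // clamp-to-edge addressing
            double top    = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx;
            double bottom = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx;
            return top * (1 - fy) + bottom * fy;
        }
    }

Multiply that by trilinear or anisotropic filtering and by several texture reads per pixel, and doing it in shader ALUs instead would eat most of the instruction budget.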


Offtop, but sorry, I can't resist. "Inbetween" is not a word. I started seeing many people having trouble with prepositions lately, for some unknown reason.

> “Inbetween” is never written as one word. If you have seen it written in this way before, it is a simple typo or misspelling. You should not use it in this way because it is not grammatically correct as the noun phrase or the adjective form. https://grammarhow.com/in-between-in-between-or-inbetween/


"Offtop" is not a word. It's not in any English dictionary I could find and doesn't appear in any published literature.

Matthew 7:3 "And why beholdest thou the mote that is in thy brother's eye, but considerest not the beam that is in thine own eye?"


Oh, it's a transliteration of Russian "офтоп", which itself started as a borrowing of "off-topic" from English (but as a noun instead of an adjective/stative) and then underwent some natural linguistic developments, namely loss of the hyphen and degemination, surface analysis of the trailing "-ic" as the Russian suffix "-ик" [0], and its subsequent removal to obtain the supposed "original, non-derived" form.

[0] https://en.wiktionary.org/wiki/-%D0%B8%D0%BA#Russian


>subsequent removal to obtain the supposed "original, non-derived" form

Also called a "back-formation". FWIW, I don't think the existence of corrupted words automatically justifies more corruptions, nor does the fact that it is a corruption automatically invalidate it. When the language of a group evolves, everyone speaking that language is affected, which is why written language reads pretty differently looking back every 50 years or so, in both formal and informal writing. Therefore language changes should have buy-in from all users.


Language evolves in mysterious ways. FWIW I find offtop to have high cromulency.


If enough people use it, it will become correct. This is how language evolves. BTW, there is no "official English language specification".

And linguists think it would be a bad idea to have one:

https://archive.nytimes.com/opinionator.blogs.nytimes.com/20...


Surely you mean "I've started seeing..." rather than "I started seeing..."?


Either the present perfect that you suggest or the simple past originally presented is correct, and the denotation is basically identical. The connotation is slightly different, as the simple past puts more emphasis on the "started...lately" and the emergent nature of the phenomenon, and the present perfect on the ongoing state of what was started, but there's no giant difference.


Your entire post does not once mention the form you call correct.

If you intend for people to click the link, then you might just as well delete all the prose before it.


It's not so simple. The knowledge hasn't been transferred to future operators, but to process engineers who are now in charge of making the processes work reliably through even more advanced automation that requires more complex skills and technology to develop and produce.


No doubt, there are people that still have knowledge of how the system works.

But operator inexperience didn't turn out to be a substantial barrier to automation, and they were still able to achieve the end goal of producing more things at lower cost.


I would recommend the two episodes "Three Robots" and "Three Robots: Exit Strategies" from the anthology series Love, Death and Robots if you like this kind of humor.


But then you're outsourcing the same shared code problem to a third party shared library. It fundamentally doesn't go away.


The third party shared library doesn't know your company exists. This means the third party dependency doesn't contain any business or application specific code and is applicable to any software project. This in turn means it has to solve the majority of business use cases ahead of time and be thoroughly tested to not break any consumers.

The problem has fundamentally gone away and reduced itself to a simple update problem, which itself is simpler because the update schedule is less frequent.

I use Tomcat for all web applications. When Tomcat updates, I just need to bump the version number on one application and move on to the next. Tomcat does not involve itself in the data that is being transferred in a non-generic way, so I can update whenever I want.

Since nothing blocks updates, the updates happen frequently, which means no application is running on an ancient Tomcat version.


That 3rd party library rarely gets updated, whereas Jon's commit adds a field and now everyone has to update or the marshaling doesn't work.

Yes, there are scenarios where you have to deploy everything, but when dealing with microservices, you should only be deploying the service you are changing. If updating a field in a domain affects everyone else, you have a distributed monolith and your architecture is questionable at best.

The whole point is I can deploy my services without relying on yours, or touching yours, because it sounds like you might not know what you're doing. That's the beautiful effect of a good microservice architecture.
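
One common way to get that independence is to make consumers tolerant readers, so a producer adding a field doesn't force anyone else to redeploy. A minimal sketch, assuming Jackson for JSON and a hypothetical Order payload:

    import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
    import com.fasterxml.jackson.databind.ObjectMapper;

    // Consumer-side model: unknown fields are ignored, so the producer can
    // add a field (Jon's new field) without breaking this service.
    @JsonIgnoreProperties(ignoreUnknown = true)
    class Order {
        public String id;
        public long amountCents;
    }

    class OrderReader {
        private static final ObjectMapper MAPPER = new ObjectMapper();

        static Order parse(String json) throws Exception {
            return MAPPER.readValue(json, Order.class);
        }
    }

Removing or renaming a field is still a breaking change, of course; that's where versioning and schema evolution rules come in.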


I was trying to think of better terminology. Perhaps this works:

Two services can have a common dependency, which still leaves them uncoupled. An example would be a JSON schema validation and serialization/deserialization library. One service can in general bump its dependency version without the other caring, because it'll still send and consume valid JSON.

Two services can have a shared dependency, which couples them. If one service needs to bump its version the other must also bump its version, and in general deployment must ensure they are deployed together so only one version of the shared dependency is live, so to speak. An example could be a library containing business logic.

If you had two independent microservices and added a shared library as per my definition above, you've turned them into a distributed monolith.

Sometimes a common dependency might force a shared deployment, for example a security bug in the JSON library. But that is the exception, unlike with the business logic library: in the shared dependency case, the exception is that one could be bumped without the other caring.
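
A rough sketch of the difference, with hypothetical names: with a common dependency the only contract between the services is the wire format, whereas a shared business-logic library puts the same compiled rules on both sides, so a change has to roll out everywhere at once.

    // Common dependency: both services depend on a JSON library, possibly at
    // different versions; the contract between them is only the JSON itself.
    //   service-a -> jackson 2.15,  service-b -> jackson 2.17   (fine)
    //
    // Shared dependency: both services compile against the same hypothetical
    // business-logic artifact, e.g. acme-pricing.
    //   service-a -> acme-pricing 3.0,  service-b -> acme-pricing 2.4
    final class PricingRules {       // lives inside acme-pricing
        static long discountedCents(long amountCents, int customerTier) {
            // If this rule changes in 3.0, a service still on 2.4 computes a
            // different price for the same order, so both must deploy together:
            // a distributed monolith in effect.
            return customerTier >= 2 ? amountCents * 90 / 100 : amountCents;
        }
    }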

