Hacker News | new | past | comments | ask | show | jobs | submit | jdashg's comments | login

We literally have to be willing to get taken advantage of sometimes, and we have to come down hard on the "don't hate the player, hate the game" f-you-got-mine assholes.

It is not weakness, but strength, to make yourself (reasonably!) vulnerable to being taken advantage of. It is not strength, but weakness, to let bad behavior happen around you. You don't have to do everything, but you have to do something, or nothing changes.

We gotta spend less time explaining away (and tacitly excusing) bad behavior as unfortunate game theory, and more time coming down hard on people who violate trust.

Ante trust gladly, but come down hard on defectors.


Consider this situation: security review before a project go-live.

I have never seen this team before and I'll "never" see this team after the fact. They might be contracted externally, they might leave before the second review.

Let's say I can suss out people doing this. I don't have the option of giving them the benefit of the doubt, and they have the motivation to trick me.

I guess I've answered my own question a bit, such an environment isn't built to foster trust at all.


Upvoted because this is true, but we need to establish coping mechanisms for this.

For example:

"Sorry, yes, I know the report is due tomorrow, but I don't have time to review it again because I wasted 2 hours on the first version."

or

"I found these three problems on the first page and stopped reading."

What else?


And the GPU API cycle of life and death continues!

I was an only-half-joking champion of ditching vertex attrib bindings when we were drafting WebGPU and WGSL, because it's a really nice simplification, but it was felt that it would be too much of a departure from existing APIs. (We'd be spending too many of our "Innovation Tokens" on something that would cause dev friction in the beginning.)

In WGSL we tried (for a while?) to build language features as "sugar" when we could. You don't have to guess what order or scope a `for` loop uses when we just spec how it desugars into a simpler, more explicit (but more verbose) core form/dialect of the language.
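Concretely, the spec treats `for` as sugar over the core `loop` statement. A rough sketch of the rewrite (not the spec's exact wording, and the second loop variable is renamed here just for side-by-side clarity):

```wgsl
fn sum_to_four() -> i32 {
    var sum: i32 = 0;

    // Sugared form:
    for (var i: i32 = 0; i < 4; i++) {
        sum += i;
    }

    // Roughly what it desugars into, in the explicit core form:
    {
        var j: i32 = 0;
        loop {
            if !(j < 4) { break; }
            sum += j;
            continuing {
                j++;
            }
        }
    }

    return sum;
}
```

The `continuing` block pins down exactly when the increment runs and what scope it sees, so there's nothing left to guess about.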

That said, this powerpoint-driven-development flex knocks this back a whole seriousness-and-earnestness tier and a half:

> My prototype API fits in one screen: 150 lines of code. The blog post is titled “No Graphics API”. That’s obviously an impossible goal today, but we got close enough. WebGPU has a smaller feature set and features a ~2700 line API (Emscripten C header).

Try to zoom out on the API and fit those *160* lines on one screen! My browser gives up at 30%, and I am still only seeing 127. This is just dishonesty, and we do not need more of this kind of puffery in the world.

And yeah, it's shorter because it is a toy PoC, even if it's one I enjoyed seeing someone else's take on. Among other things, the author pretty dishonestly elides the number of lines the enums would take up. (A texture/data format enum on one line? That's one whole additional Pinocchio right there!)

I took WebGPU.webidl and did a quick pass through removing some of the biggest misses of this API (queries, timers, device loss, errors in general, shader introspection, feature detection) and some of the irrelevant parts (anything touching canvas, external textures), and immediately got it down to 241 declarations.

This kind of dishonest puffery holds back an otherwise interesting article.


Man, how I wish WebGPU hadn't gone all-in on the legacy Vulkan API model, and had instead found a leaner approach to do the same thing. Even Vulkan stopped doing pointless boilerplate like bindings and pipelines. Ditching vertex attrib bindings and going for programmable vertex fetching would have been nice.

WebGPU could also have introduced CUDA's simple launch model to graphics APIs. Instead of all that insane binding boilerplate, just provide the bindings as launch args to the draw call, like draw(numTriangles, args), with args being something like {uniformBuffer, positions, uvs, samplers}, depending on whatever the shaders expect.


>Man, how I wish WebGPU didn't go all-in on legacy Vulkan API model

WebGPU doesn't talk to the GPU directly. It requires Vulkan/D3D/Metal underneath to actually implement itself.

>Even Vulkan stopped doing pointless boilerplate like bindings and pipelines.

Vulkan did no such thing. VK_KHR_dynamic_rendering was promoted to core in Vulkan 1.3, and VK_EXT_shader_object remains an optional extension that must be queried for before use. The former gets rid of render pass objects and framebuffer objects in favor of vkCmdBeginRendering(), and WebGPU already abstracts those two away so you don't see or deal with them. The latter gets rid of monolithic pipeline objects.

Many mobile GPUs still do not support VK_KHR_dynamic_rendering or VK_EXT_shader_object. Even my very own Samsung Galaxy S24 Ultra[1] doesn't support shaderObject.

Vulkan did not get rid of pipeline objects, they added extensions for modern desktop GPUs that didn't need them. Even modern mobile GPUs still need them, and WebGPU isn't going to fragment their API to wall off mobile users.

[1] https://vulkan.gpuinfo.org/displayreport.php?id=44583


> WebGPU doesn't talk to the GPU directly. It requires Vulkan/D3D/Metal underneath to actually implement itself.

So does WebGL, and it's doing perfectly fine without pipelines. They were never necessary. Since WebGL can do without pipelines, WebGPU can too. Backends can implement them via pipelines, or they can go the modern route and ignore them.

They are an artificial problem that Vulkan created and WebGPU mistakenly adopted, and which are now being phased out. Some devices may refuse to implement pipeline-free drivers, which is okay. I will happily ignore them. Let's move on into the 21st century without that design mistake, and let legacy devices and companies that refuse to adapt die with dignity. But let's not let them hold back everyone else.


My biggest issues with WebGPU are yet another shading language, and that, after 15 years, browser developers still don't care one bit about debugging tools.

It is either pixel debugging, or trying to replicate the code natively so you can use proper tooling.


Ironically, WebGPU was way more powerful about 5 years ago, before WGSL was made mandatory. Back then you could just use any SPIR-V with all sorts of extensions, including stuff like 64-bit types and atomics.

Then WGSL came and crippled WebGPU.


My understanding is that pipelines in Vulkan still matter if you target certain GPUs though.

At some point, we need to let legacy hardware go. Also, WebGL did just fine without pipelines, despite being mapped to Vulkan and DirectX code under the hood, meaning WebGPU could also have worked just fine without pipelines. The backends can then map to whatever they want, using modern code paths for modern GPUs.

Quoting things I only heard about, because I don't do enough development in this area, but I recall reading that going without them impacted performance on pretty much every mobile chip (discounting Apple's, because there you go through a completely different API, and they got to design the hardware together with the API).

Among other things, that covers everything running on non-Apple, non-Nvidia ARM devices, including freshly bought ones.


After going through a bunch of docs and making sure I had the right reference:

The "legacy" part of Vulkan that everyone on desktop is itching to drop (including popular tutorials) is render passes... which remain critical for performance on tiled GPUs, where utilization of subpasses makes for major performance differences. (Also, major mobile GPUs have considerable differences in command submission, which impact that as well.)


Also pipelines and bindings. BDA, shader objects and dynamic rendering are just way better than the legacy Vulkan without these features.

> Also, WebGL did just fine without pipelines, despite being mapped to Vulkan and DirectX code under the hood.

...at the cost of creating PSOs at random times, which is an expensive operation :/


No longer an issue with dynamic rendering and shader objects. And never was an issue with OpenGL. Static pipelines are an artificial problem that Vulkan imposed for no good reason, and which they reverted in recent years.

That's not at all what dynamic rendering is for. Dynamic rendering avoids creating render pass objects, and does nothing to solve problems with PSOs. We should be glad for the demise of render pass objects, they were truly a failed experiment and weren't even particularly effective at their original goal.

Trying to say pipelines weren't a problem with OpenGL is monumental levels of revisionism. Vulkan (and D3D12, and Metal) didn't invent them for no reason. OpenGL and DirectX drivers spent a substantial amount of effort to hide PSO compilation stutter, because they still had to compile shader bytecode to ISA all the same. They were often not successful and developers had very limited tools to work around the stutter problems.

Often older games would issue dummy draw calls to an off-screen render target to force the driver to compile the shader in a loading screen instead of in the middle of your frame. The problem was always hard; you could just ignore it in the older APIs. Pipelines exist to make this explicit.

The mistake Vulkan made was putting too much state in the pipeline, as much of that state is dynamic in modern hardware now. As long as we need to compile shader bytecode to ISA we need some kind of state object to represent the compiled code and APIs to control when that is compiled.


Going entirely back to the granular GL-style state soup would have significant 'usability problems'. It's too easy to accidentally leak incorrect state from a previous draw call.

IMHO a small number of immutable state objects is the best middle ground (similar to D3D11 or Metal, but reshuffled as described in Seb's post).


Not using static pipelines does not imply having to use a global state machine like OpenGL. You could also make an API that uses a struct for rasterizer configs and pass it as an argument to a multi draw call. I would have actually preferred that over all the individual setters in Vulkan's dynamic rendering approach.

Who cares about dev friction in the beginning? That was a bad choice.

I did indeed play in the era LanceH is talking about, and I agree with them! We had many thriving communities with no serious cheating problems because of community moderation.

Yes, there were poorly moderated servers, but you could simply leave and try a different community until you found one that clicked for you. When you require equal moderation everywhere, you throw the baby out with the bath water.


How much time did you waste server hopping?


Initially, until you found the right community-run ones? I don't see the issue. Today is worse, especially when there is no server browser, just a black box that drops you into a random match.


Well, unfortunately, the device-pixel size of something actually depends not just on the CSS size and DPR scaling, but also on position and, IIRC, even potentially on what else is on the page, due to things like border and thin-line dilation. That's just how CSS layout works in practice, due to pixel snapping and ambiguities in how to render thin or generally non-integer-sized elements.

There are two approaches these days though: https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API/W...


> There are two approaches these days

I'm aware of those; it's just sad nothing is nailed down as a standard. Those are two workarounds, unfortunately: a browser API not supported in Apple's Safari, and a pixel presnap with JS doing the CSS px calculation backwards plus snapping, where even that code needs an extra branch to fix a macOS+Chrome bug. The whole situation smells of duct tape.


A bunch of my friends in VRChat dropped $1500 or so on Bigscreen Beyond 2e orders earlier this year, and are ecstatic to finally be receiving their kits. I'm eagerly awaiting my own ship date email in the next few weeks.

The people who are hooked are hooked, but it's in too slow of a growth curve to keep the attention of the hypergrowth omninationals. Inshallah the megacorps remain minor players.

The metaverse is here, for those with headsets to see. But all Meta will be remembered for in VR is sad tech "demos" that turned out faked, and, for a time at least, solid budget wireless headsets.

Mass media is still waiting for faster horses, but the real transhumanists already have one foot out of the physical world. (And sometimes four!)


How it happened internally is irrelevant to whether Facebook is responsible. Deploying systems they do not properly control or understand does not shield them from legal or moral responsibilities!

There is a trail of people who signed off on this implementation. It is the fault of one or more people, not machines.


>Deploying systems they do not properly control or understand does not shield against legal or normal responsibilities!

We can argue the "moral" aspect until we're both blue in the face, but did Facebook have any legal responsibility to ensure its systems didn't contain sensitive data?


Once upon a time, we were told not to share our real name or personal info online with strangers. That remains wisdom!


Network effects and moats, sadly.


I always thought hacking scenes in sci-fi were unrealistic, but if you're cooking up AI-fortified code lasagna at your endpoints, there is going to be a mishmash of vulnerabilities: expert, robust thought will be spread very thin by the velocity that systemic forces push developers toward.


It's about clarity of intent at the call site. Passing by mutable ref looks like `foo`, the same as passing by value, but passing a value by pointer for mutation is textually, readably different: `&foo`. That's the purpose of the pass-by-pointer style.

You could choose to textually "tag" passing by mutable ref by passing `&foo` but this can rub people the wrong way, just like chaining pointer outvars with `&out_foo`.


If you want clarity of intent, define dummy `in` and `out` macros, but please don't make clean reference-taking APIs a mess by turning them into pointers for no good reason.

