> With JSON, you often send ambiguous or non-guaranteed data. You may encounter a missing field, an incorrect type, a typo in a key, or simply an undocumented structure. With Protobuf, that’s impossible. Everything starts with a .proto file that defines the structure of messages precisely.

This deeply misunderstands the philosophy of Protobuf. proto3 doesn't even support required fields. https://protobuf.dev/best-practices/dos-donts/

> Never add a required field, instead add `// required` to document the API contract. Required fields are considered harmful by so many they were removed from proto3 completely.

Protobuf clients need to be written defensively, just like JSON API clients.
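
To make that concrete, here is a rough sketch of what "defensive" tends to look like in Go against generated proto3 code; the pb.User type and its Id/Email/Address fields are hypothetical stand-ins for whatever your .proto actually defines:

    package client

    import (
        "fmt"

        pb "example.com/gen/userpb" // hypothetical generated package
    )

    // HandleUser treats every field as potentially absent: proto3 has no
    // required fields, so "decoded successfully" does not mean "complete".
    func HandleUser(u *pb.User) error {
        // Generated getters are nil-safe, so this also covers a nil message.
        if u.GetEmail() == "" {
            return fmt.Errorf("user %d has no email", u.GetId())
        }
        // Absent sub-messages decode to nil; check before relying on them.
        if u.GetAddress() == nil {
            return fmt.Errorf("user %d has no address", u.GetId())
        }
        return nil
    }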



The blog seems to contain other, similar misunderstandings: for example, the parallel article arguing against SVG images doesn't count the ability to scale images freely as a benefit of vector formats.


https://aloisdeniel.com/blog/i-changed-my-mind-about-vector-... seems fairly clearly to be talking about icons of known sizes, in which case that advantage disappears. (I still feel the article is misguided: the benefit of runtime-determined scaling should have been mentioned, and I see no benchmarks supporting its performance theses. I'd be surprised if the difference were anything but negligible; vector graphics pipelines are getting increasingly good, the best ones do not work in the way described, and they could in fact be more efficient than raster images, at least for simpler icons like those shown.)


> seems fairly clearly to be talking about icons of known sizes, in which case that advantage disappears.

That's the point: obliviousness to different concerns and their importance.

Among mature people, the main reason to use SVG is scaling vector graphics across different contexts: resolution-elastic final rendering, automatically exporting bitmap images from easy-to-maintain vector sources, altering the images programmatically as many icon collections do. Worrying about file sizes and rendering speed is a luxury for situations that allow switching to bitmap images without serious cost or friction.


Are there display pipelines that cache the bitmaps generated from the SVGs at my device's resolution, instead of redoing all the slower parsing etc. from scratch every time, getting the benefits of both worlds? And you could still have runtime-defined scaling by "just" rebuilding the cache?


Haiku (OS) caches the vector icons rendered from HVIF[1][2] files, which are used extensively in its UI.

I didn't find details of the caching design. Possibly it was mentioned to me by waddlesplash on IRC[3].

[1] 500 Byte Images: The Haiku Vector Icon Format (2016) http://blog.leahhanson.us/post/recursecenter2016/haiku_icons...

[2] Why Haiku Vector Icons are So Small | Haiku Project (2006) https://www.haiku-os.org/articles/2006-11-13_why_haiku_vecto...

[3] irc://irc.oftc.net/haiku


> The drawback to using vector images is that it can take longer to render a vector image than a bitmap; you basically need to turn the vector image into a bitmap at the size you want to display on the screen.

Indeed, it would be nice if one of these blogs explained the caching approach used to tackle that drawback.

Another issue, I think, especially at smaller sizes, is that pixel snapping might be imperfect and require "hints" like in fonts. I wonder whether these icons suffer from that or address it.


Increasingly, I think you'll find that the most efficient format for simple icons like these actually isn't raster, due to (simplifying aggressively) hardware acceleration. We definitely haven't reached that stage in wide deployment yet, but multiple C++ and Rust projects exist where I strongly suspect it's already the case, at least on some hardware.


The best place for such a cache is a GPU texture; a shader that does simple texture mapping instead of rasterizing shapes would cost more memory reads in exchange for fewer calculations.


Icons no longer come in fixed sizes. There are numerous DPI/scaling settings even if the "size" doesn't change.


The article goes into that: it builds a sprite map of at least the expected scaling factors.


There are no "expected" scaling factors anymore.


It’s also conflating the serialization format with contracts.


Most web frameworks do both at the same time, to the point where having to write code that enforces a type contract after deserializing is a dealbreaker for me. I want to be able to define my DTOs in one place, once, and have them both deserialize and enforce types/format. Anything else is a code smell.


I'm in the same boat. I mostly write Rust and Python. Using serde_json and Pydantic, you get deserialization and validation at the same time, which lets you deserialize into really "tight" types.

Most of my APIs are internal APIs that accept breaking changes easily. My experience with protobufs is that it was created to solve problems in large systems with many teams and APIs, where backwards compatibility is important. There are certainly systems where you can't "just" push through a breaking API change, and in those cases protobufs make sense.


> My experience with protobufs is that it was created to solve problems in large systems with many teams and APIs

Also, significant distribution, such that it’s impossible to ensure every system is updated in lockstep (at least not without significant downtime), and high tail latencies: e.g. a message could be stashed in a queue or database and processed hours or days later.


I feel like that's fine, since both things go hand in hand anyway. And if choosing JSON as the format comes with a rather high number of contract breaches, it might just be easier to switch the format instead of fixing the contract.


Unless a violation of that contract can lead to a crash or security vulnerability...


The post is about changing the serialization format so that enforcing contracts becomes easier, and I am defending the post, so I don't understand what you're hinting at here.


Then reject the request if it is incomplete?


Isn't the core issue here just language and implementation differences between clients and servers?

I went all in on Go's marshalling concept, and am using my Gooey framework [1] on the client side nowadays. If you can get past Go's language limitations, it's pretty nice to use and _very_ typesafe. Just make sure to tag the private fields with json:"-" so they can't be injected.

[1] shameless drop https://github.com/cookiengineer/gooey
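
For anyone who hasn't used the tag, a minimal sketch with plain encoding/json of what that buys you for fields that are exported in Go but private to the API (the User/IsAdmin names are invented for illustration):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type User struct {
        Name string `json:"name"`
        // IsAdmin is excluded from (un)marshalling entirely, so a client
        // cannot inject it; only server-side code can set it.
        IsAdmin bool `json:"-"`
    }

    func main() {
        var u User
        _ = json.Unmarshal([]byte(`{"name":"ada","IsAdmin":true}`), &u)
        fmt.Printf("%+v\n", u) // {Name:ada IsAdmin:false}
    }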


Skew is an inherent problem of networked systems no matter what the encoding is. But, once the decoding is done, assuming there were no decoding errors in either case, at least with protobuf you have a statically typed object.

You could also just validate the JSON payload, but most people don't bother. And then they just pass the JSON blob around to all sorts of functions, adding, modifying, and removing fields until nobody knows for sure what's in it anymore.
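
For what it's worth, in Go the "bother" can be pretty small. A sketch using just the standard library; the CreateOrder shape is invented for the example:

    package main

    import (
        "encoding/json"
        "fmt"
        "strings"
    )

    type CreateOrder struct {
        SKU      string `json:"sku"`
        Quantity int    `json:"quantity"`
    }

    func decodeOrder(body string) (CreateOrder, error) {
        var req CreateOrder
        dec := json.NewDecoder(strings.NewReader(body))
        dec.DisallowUnknownFields() // reject typo'd or undocumented keys
        if err := dec.Decode(&req); err != nil {
            return req, err
        }
        // Missing keys just leave zero values behind, so "required"
        // still has to be enforced by hand after decoding.
        if req.SKU == "" || req.Quantity <= 0 {
            return req, fmt.Errorf("invalid order: %+v", req)
        }
        return req, nil
    }

    func main() {
        _, err := decodeOrder(`{"sku":"abc","qty":2}`) // "qty" is a typo'd key
        fmt.Println(err)                               // json: unknown field "qty"
    }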


> You could also just validate the JSON payload, but most people don't bother.

I don't think I have ever worked somewhere that didn't require people to validate inputs.

The only scenario I can think of is prototypes that made it to production, and even when one is thrown over the wall, I'll make it clear that it's unsupported until it meets minimum requirements. Who does it is less important than it happening.


The convention at every company I've worked at was to use DTOs. So yes, JSON payloads are in fact validated, usually with proper type validation as well (though unfortunately that part is technically optional, since we work in PHP).

Usually it's not super strict, as in it won't fail if a new field suddenly appears (but will if one that's specified disappears), but that's a configuration thing we explicitly decided to set this way.


I think the OP meant something far simpler (and perhaps less interesting), which is that you simply cannot encounter key errors due to missing fields, since all fields are always initialized with a default value when deserializing. That's distinct from what a "required" field is in protobuf.
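
That behavior is easy to see with the Go protobuf runtime and one of its well-known message types: an empty payload decodes without error and every field simply comes back as its default.

    package main

    import (
        "fmt"

        "google.golang.org/protobuf/proto"
        "google.golang.org/protobuf/types/known/durationpb"
    )

    func main() {
        // An empty byte slice is a valid proto3 message with no fields set.
        d := &durationpb.Duration{}
        if err := proto.Unmarshal([]byte{}, d); err != nil {
            panic(err)
        }
        // Absent fields come back as zero values, never as "key errors".
        fmt.Println(d.Seconds, d.Nanos) // 0 0
    }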


Depending on the language/library, you can get exactly the same behavior with JSON.


> Protobuf clients need to be written defensively, just like JSON API clients.

Oof. I'd rather just version the endpoints and have required fields. Defensive code is error-prone, verbose, harder to reason about, and still not guaranteed. It really feels like an anti-pattern.



