> Why does gRPC have to use such a non-standard term for this that only mathematicians have an intuitive understanding of? I have to explain the term every time I use it.

Who are you working with lol? Nobody I’ve worked with has struggled with this concept, and I’ve worked with a range of devs, including very junior and non-native-English speakers.

> Also, it doesn’t pass my “send a friend a cURL example” test for any web API.

Well yeah. It’s not really intended for that use-case?

> The reliance on HTTP/2 initially limited gRPC’s reach, as not all platforms and browsers fully supported it

Again, not the intended use-case. Where does this web-browsers-are-the-be-all-and-end-all-of-tech attitude come from? Not everything needs to be based around browser support. I do agree on HTTP/3 support lacking though.

> lack of a standardized JSON mapping

Because JSON has an extremely anaemic set of types that either fail to encode the same semantics, or require all sorts of extra verbosity to encode. I have the opposite experience with protobuf: I know the schema, so I know what to expect and that I'll get valid data; I don't need to rely on "look at the JSON to see if I got the field capitalisation right".

> It has made gRPC less accessible for developers accustomed to JSON-based APIs

Because god forbid they ever had to learn anything new right? Nope, better for the rest of us to just constantly bend over backwards to support the darlings who “only know json” and apparently can’t learn anything else, ever.

> Only google would think not solving dependency management is the solution to dependency management

Extremely good point. Will definitely be looking at Buf the next time I touch GRPC things.

GRPC is a lower-overhead, binary RPC for server-to-server or client-server use cases that want better performance and the faster integration that a shared schema/IDL permits. Being able to drop in some proto files and automatically have a package with the methods available, without having to spend time wiring up URLs and writing types and parsing logic, is amazing. Sorry it's not a good fit for serving your webpage; criticising it for not being good at web stuff is like blaming a tank for not winning street races.

GRPC isn't without its issues and shortcomings: I'd like to see better enums and a stronger type system, and definitely HTTP/3 or raw QUIC transport.
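To illustrate the "drop in some proto files" workflow (and the "unary" term from upthread): a single .proto file declares the whole contract, and codegen turns it into client/server stubs. A minimal sketch, with made-up service and message names, showing the four gRPC call shapes:

```protobuf
syntax = "proto3";

package example.v1;

// Hypothetical service showing the four gRPC call shapes.
service EchoService {
  // "Unary": one request in, one response out.
  rpc Echo(EchoRequest) returns (EchoResponse);
  // Server streaming: one request, a stream of responses.
  rpc EchoStream(EchoRequest) returns (stream EchoResponse);
  // Client streaming: a stream of requests, one response.
  rpc CollectEchoes(stream EchoRequest) returns (EchoResponse);
  // Bidirectional streaming: both sides stream.
  rpc EchoChat(stream EchoRequest) returns (stream EchoResponse);
}

message EchoRequest {
  string message = 1;
}

message EchoResponse {
  string message = 1;
}
```

Running this through protoc (or Buf) with a language plugin yields typed methods for each RPC, which is where the "no wiring up URLs or parsing logic" win comes from.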



I use protobuf to specify my protocol, then generate a Swagger/OpenAPI spec, and use Swagger codegen to generate REST client libraries. For a proxy server I have to fill in some stub methods that parse the JSON and turn it into a gRPC call, but for the gRPC server there's a library that generates a REST service listener which just calls into the gRPC server code. It works fine. I had to annotate the proto file to say which REST path to use.
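The proto annotation mechanism this workflow usually relies on is the `google.api.http` option (used by grpc-gateway and similar tools to derive REST paths). A sketch with hypothetical service and field names:

```protobuf
syntax = "proto3";

package example.v1;

import "google/api/annotations.proto";

service UserService {
  rpc GetUser(GetUserRequest) returns (User) {
    // Maps this RPC to GET /v1/users/{user_id} for the generated
    // REST/JSON gateway; the {user_id} segment binds to the
    // request field of the same name.
    option (google.api.http) = {
      get: "/v1/users/{user_id}"
    };
  }
}

message GetUserRequest {
  string user_id = 1;
}

message User {
  string user_id = 1;
  string display_name = 2;
}
```

From this one file you can generate both the gRPC stubs and an OpenAPI spec, so the REST surface stays in sync with the proto contract.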


>> Also, it doesn’t pass my “send a friend a cURL example” test for any web API.

> Well yeah. It’s not really intended for that use-case?

Until $WORKPLACE is invaded by Xooglers who want to gRPC all the things, regardless of whether or not there's any benefit over just using HTTPS. Internal service with dozens of users in a good week? Better use gRPC!


Oh yeah, no technology can design against being improperly deployed. I certainly don’t advocate for GRPC-ing-all-the-things! Suitable services only!


Hey, author here:

> Why does gRPC have to use such a non-standard term for this that only mathematicians have an intuitive understanding of? I have to explain the term every time I use it.

>> Who are you working with lol? Nobody I’ve worked with has struggled with this concept, and I’ve worked with a range of devs, including very junior and non-native-English speakers.

This is just a small complaint. It's super easy to explain what unary means but it's often infinitely easier to use a standard industry term and not explain anything.

>> Also, it doesn’t pass my “send a friend a cURL example” test for any web API.

> Well yeah. It’s not really intended for that use-case?

Yeah, I agree. Being easy to use isn't the intended use-case for gRPC.

>> The reliance on HTTP/2 initially limited gRPC’s reach, as not all platforms and browsers fully supported it

> Again, not the intended use-case. Where does this web-browsers-are-the-be-all-and-of-tech attitude come from? Not everything needs to be based around browser support. I do agree on http/3 support lacking though.

I did say browsers here but the "platform" I am thinking of right now is actually Unity, since I do work in the game industry. Unity doesn't have support for HTTP/2. It seems that I have different experiences than you, but I still think this point is valid. gRPC didn't need to be completely broken on HTTP/1.1.

>> lack of a standardized JSON mapping

> Because JSON has an extremely anaemic set of types that either fail to encode the same semantics, or require all sorts of extra verbosity to encode. I have the opposite experience with protobuf: I know the schema, so I know what I expect to get valid data, I don’t need to rely on “look at the json to see if I got the field capitalisation right”.

I agree that it's much easier to stick to protobuf once you're completely bought-in, but not every project is greenfield. Before a well-defined JSON mapping existed, along with tooling that adhered to it, it was very hard to transition from JSON to protobuf. Now it's a lot easier.

>> It has made gRPC less accessible for developers accustomed to JSON-based APIs

> Because god forbid they ever had to learn anything new right? Nope, better for the rest of us to just constantly bend over backwards to support the darlings who “only know json” and apparently can’t learn anything else, ever.

No comment. I think we just have different approaches to teaching.

>> Only google would think not solving dependency management is the solution to dependency management

> Extremely good point. Will definitely be looking at Buf the next time I touch GRPC things.

I'm glad to hear it! I've had nothing but excellent experiences with Buf tooling and their employees.

> GRPC is a lower-overhead, binary RPC for server-to-server or client-server use cases that want better performance and the faster integration that a shared schema/IDL permits. Being able to drop in some proto files and automatically have a package with the methods available, without having to spend time wiring up URLs and writing types and parsing logic, is amazing. Sorry it's not a good fit for serving your webpage; criticising it for not being good at web stuff is like blaming a tank for not winning street races.

Without looping in the frontend (aka the web), the contract-based philosophy of gRPC is much less compelling. Without that, you have to maintain a completely different contract language between service-to-service (protobuf) and frontend-to-service (maybe OpenAPI). For the record: I very much prefer protobuf as the "contract source of truth" over OpenAPI. gRPC-Web exists because people wanted to make this work, but they built their street racer with some tank parts.

> GRPC isn’t without its issues and shortcomings- I’d like to see better enums and a stronger type system, and defs http/3 or raw quic transport.

Totally agree!


> It's super easy to explain what unary means but it's often infinitely easier to use a standard industry term and not explain anything.

What's the standard term? While I agree that unary isn't widely known, I don't think I have ever heard of any other word used in its place.

> gRPC didn't need to be completely broken on HTTP/1.1.

It didn't need to per se (although you'd lose a lot of the reason for why it was created), but as gRPC was designed before HTTP/2 was finalized, it was still believed that everyone would want to start using HTTP/2. HTTP/1 support seemed unnecessary.

And as it was designed before HTTP/2 was finalized, it is not like it could have ridden on the coattails of libraries that have since figured out how to commingle HTTP/1 and HTTP/2. They had to write HTTP/2 from scratch in order to implement gRPC, so supporting HTTP/1 as well would have greatly ramped up the complexity.

Frankly, their assumption should have been right. It's a sorry state that they got it wrong.


> Hey, author here:

Hello! :)

>> Well yeah. It’s not really intended for that use-case?

> Yeah, I agree. Being easy to use isn't the intended use-case for gRPC.

I get the sentiment, for sure; I guess it's a case of tradeoffs? GRPC traded "ability to make super easy curl calls" for "better features and performance for the hot path". Whilst it's annoying that it's not easy, I don't feel it's super fair to notch up a "negative point" for this. I agree with the sentiment though: if you're trying to debug things from _first_ principles alone in GRPC-land, you're definitely going to have a bad time. Whether that's the right approach is, I feel, pretty subjective.

> I did say browsers here but the "platform" I am thinking of right now is actually Unity, since I do work in the game industry. Unity doesn't have support for HTTP/2. It seems that I have different experiences than you…

Ahhhh totally fair. I probably jumped the gun on this with my own webby biases, which in turn probably explains the differences in my/your next few paragraphs too, and my general frustration with browsers/FE devs, which shouldn't be catching everyone else in the collateral fire.

> No comment. I think we just have different approaches to teaching.

Nah, I think I was just in a bad mood haha. I've been burnt by working with endless numbers of stubbornly lazy FE devs at the last few places I've worked, and my tolerance for them is running out. I also didn't consider the use-case you mentioned of game dev/being beholden to the engine, which is a bit unfair of me. Under this framing, it's a difficult spot: the protocol wants to provide a certain experience and behaviour, and people like yourself want to use it, but are constrained by some pretty minor things that said protocol seems to refuse to support for no decent reason. I guess it's possibly an issue for any popular-yet-specialised thing: what happens when your specific-purpose tool finds significant popularity in areas that don't meet your minimum constraints? Ignore them? Compromise on your offering? Made all the worse by Google behaving esoterically at the best of times lol.

You mentioned that some GRPC frameworks have already moved to support http/3, do you happen to know which ones they are?


This is probably not exhaustive but I think these frameworks can support HTTP/3 today:

- The standard gRPC library for C#, grpc-dotnet

- It may already be possible in Rust with Tonic using the Hyper HTTP transport

- It's possible in Go if you use ConnectRPC with quic-go

- This is untested, but I believe many gRPC-Web implementations in the browser might "just work" with HTTP/3 as well, as long as the browsers are informed of the support via the "Alt-Svc" header and the server supports it.
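For reference, the Alt-Svc mechanism mentioned in the last bullet is just a response header (RFC 7838) the server sends over HTTP/1.1 or HTTP/2 to advertise an HTTP/3 endpoint; whether a given gRPC-Web stack then upgrades is, as noted, untested. A sketch of what such a header looks like:

```
Alt-Svc: h3=":443"; ma=86400
```

Here `h3` names the HTTP/3 protocol, `:443` is the advertised port on the same host, and `ma=86400` caches the advertisement for a day.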


> Yeah, I agree. Being easy to use isn't the intended use-case for gRPC.

Sick burn. I like it, especially since most use of gRPC seems to be cargo-culting.



