This is not what the article says at all.

The article is about the constraints of computation, scaling of current inference architecture, and economics.

It is completely unrelated to your claim that cognition is entirely separate from computation.


I tend to agree. Cloudflare and Vercel were able to mitigate with WAF rules, but it's not immediately clear what a user or vendor can do to implement mitigations themselves other than updating their dependencies (quickly!).

IMO the CVE announcement could have been better handled. This was a severity 10. If other mitigations are viable and you know about them, you have a responsibility to disclose them in order to best protect the safety of the billions of users of React applications.

I wonder how many applications are still vulnerable.


FWIW it looks like OpenRouter's two providers for this model (one of which is DeepSeek itself) are only running it at around 28 tps at the moment.

https://openrouter.ai/deepseek/deepseek-v3.2

This only bolsters your point. Will be interesting to see if this changes as the model is adopted more widely.


After reading the post I kept thinking about two other pieces, and only later realized it was Taylor who had submitted it. His most recent essay [0] actually led me to the Commoncog piece “Are You Playing to Play, or Playing to Win?” [1], and the idea of sub-games felt directly relevant here.

In this case, running a studio without using or promoting AI becomes a kind of sub-game that can be “won” on principle, even if it means losing the actual game that determines whether the business survives. The studio is turning down all AI-related work, and it’s not surprising that the business is now struggling.

I’m not saying the underlying principle is right or wrong, nor do I know the internal dynamics and opinions of their team. But in this case the cost of holding that stance doesn’t fall just on the owner; it also falls on the people who work there.

Links:

[0] https://taylor.town/iq-not-enough

[1] https://commoncog.com/playing-to-play-playing-to-win/


This [1] link is absolutely golden, thanks!

Stripe Billing definitely leaves something to be desired, and I agree that a layer on top can improve the DX tremendously.

What differentiates you from competitors like Lago and Autumn?


Gotta say we admire both teams, and have a lot of respect for anyone trying to make progress in this space.

As far as differences: both are an additional service you need to bolt on in addition to signing up for Stripe. We're aiming to consolidate onboarding into a single provider that handles both billing and payment processing. A lot of work still to do on our side, but that's where we want to end up: you get your dream devex without needing to sign up for two products.

Both are essentially billing-only services where you bring your API key. We have a billing engine that we built from scratch, and are actually processing the payments, currently using Stripe Connect under the hood.

Lago seems to still require you to deal with webhooks - if not theirs, then Stripe's - and is focused on "billing as a write operation" (their first-class concern is producing a correct, well-formed charge or invoice object). We want to solve both the "read" side (what features can my customer access? what balance does their usage meter have?) and the "write" side: more conventional billing operations like charges, prorations, and converting free trials to paid.

With Autumn we're tackling a similar problem, but they currently still require you to use Stripe Billing + your API key. So you'll be paying for Stripe Billing + Autumn (unless you self-host). Over time, as we get deeper into the money movement side of things, our paths will look more different, as more of our devex will include smoother ways to handle funds flows, tax compliance, etc.

And compared to both, at least from what I can tell from the outside, we're putting a relatively larger share of our brain cycles towards making our SDK and docs deeply intuitive for coding agents.

We want to design our default integration path around the assumption that you will have a coding agent doing most of the actual work. As a result we've got features like an MCP-first integration path that makes it easy for your coding agent to ask our docs pointed questions that may come up as it integrates Flowglad, and a dynamically generated integration guide (a markdown file) that takes your codebase context into account. A lot of that is the result of our own trial and error trying to integrate payments with coding agents, and we're going to be investing a lot more time and care into that experience going forward.


Thanks for the thoughtful reply. Best of luck!

If they are serious, they should realize that "80% accuracy" is almost meaningless for this kind of classifier. They should publish a confusion matrix if they haven't already.
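To make that concrete, here's a toy example with made-up numbers (nothing to do with their data), using scikit-learn: with an 80/20 class split, a classifier that catches only 4 of the 20 positives still scores "80% accuracy."

```python
# Toy illustration (made-up labels, not their data): accuracy alone hides
# how badly the minority class is handled; the confusion matrix exposes it.
from sklearn.metrics import accuracy_score, confusion_matrix

y_true = [0] * 80 + [1] * 20                      # 80 negatives, 20 positives
y_pred = [0] * 76 + [1] * 4 + [1] * 4 + [0] * 16  # most positives missed

print(accuracy_score(y_true, y_pred))   # 0.8  -> "80% accuracy"
print(confusion_matrix(y_true, y_pred))
# [[76  4]
#  [16  4]]  -> only 4 of 20 positives were actually caught
```

Per-class precision/recall (or just the full matrix) is the bare minimum for a claim like that.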


I haven’t tried it myself, but if you’re asking specifically about the human models, the article says they’re not generating raw meshes from scratch. They extract the skeleton, shape, and pose from the input and feed that into their HMR system [0], which is a parametric human model with clean topology.

So the human results should have a clean mesh. But that’s separate from whatever pipeline they use for non-human objects.

[0]: https://github.com/facebookresearch/MHR


Doable for http and https, but if you're running it in a browser environment, you'll eventually run into issues with CORS and other protocols. To get around this you need a proxy server running elsewhere that exposes the lower layers of the network stack.
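Roughly what I mean by that proxy, as a sketch (not production code): it assumes the third-party `websockets` package, and the target host/port are placeholders. The browser speaks WebSocket to the proxy, and the proxy opens the raw TCP connection the browser sandbox won't let you open directly.

```python
# Sketch: WebSocket <-> TCP relay. The browser connects to ws://proxy:8765,
# and this process opens the raw TCP socket and shuttles bytes both ways.
# Assumes the third-party `websockets` package; host/port are placeholders.
import asyncio
import websockets

TARGET_HOST = "192.0.2.1"  # placeholder, e.g. a telnet-able machine
TARGET_PORT = 23           # a protocol the browser cannot speak on its own

async def relay(ws, path=None):  # `path` kept for older websockets versions
    reader, writer = await asyncio.open_connection(TARGET_HOST, TARGET_PORT)

    async def ws_to_tcp():
        async for message in ws:
            writer.write(message if isinstance(message, bytes) else message.encode())
            await writer.drain()

    async def tcp_to_ws():
        while data := await reader.read(4096):
            await ws.send(data)

    # Whichever direction closes first tears the whole relay down.
    done, pending = await asyncio.wait(
        {asyncio.create_task(ws_to_tcp()), asyncio.create_task(tcp_to_ws())},
        return_when=asyncio.FIRST_COMPLETED,
    )
    for task in pending:
        task.cancel()
    writer.close()

async def main():
    async with websockets.serve(relay, "0.0.0.0", 8765):
        await asyncio.Future()  # run until killed

asyncio.run(main())
```

This is roughly the job tools like websockify do, just with more care around framing and security.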


This is exactly what [0] does. Try it out. If you know the IP you can even log in to another open browser window via telnet.

[0] https://github.com/s-macke/jor1k


Aha! Now I see I'm talking to the expert on the topic ;) Thanks for the link. I'll check this out.


Very cool! I'm curious as to how it compares with WASIX in terms of both compatibility and performance.

Also tangentially related: I'd love to see a performant build of Node.js compatible with this runtime (or really any flavor of WASM), but I think you'd run into the same issues that I have with WASIX. Namely build headaches, JIT, and wasm(-in-wasm) support. I'd explore it myself but I've already sunk way more time than is reasonable on that endeavor.


The designer obviously knows a thing or two. I enjoyed the fun presentation that others seem to dislike.

Where I ran into trouble was the readability of the annotations on the visuals. The tiny font combined with the low contrast was too much for me. I found myself squinting and trying to get close to my monitor. Eventually I had to move on, even though I was enjoying the content.

