You're producing technical debt. At some point you will invest more time fixing the vibe slop than it would have cost you to do the work yourself in the first place. A lot of vibe-coding just feels like shifting responsibility and resources from "development" to "incident response".
1) Testing it myself, 2) vibe-coding tests, 3) team members doing the work from scratch and trying to break it, 4) making sure I can read and reason through it even though I didn't come up with it myself.
You joke and folks downvote, but this is my biggest issue with WebStorm. I'm seriously considering switching for the first time in 16 years. Zed is quite snappy. The Claude Code integration in VS Code is brilliant. I've used the CLI in the JetBrains terminal. I had no idea I could revisit past conversations until I used the VS Code extension!
Zed is snappy in the same way that Notepad++ is snappy: if you skip 10% of the language features, you can avoid the hard work. Unfortunately, that means non-trivial projects show false-positive errors everywhere.
They are not just more expensive, they are also slower. Last time I compared them, AWS ARM64 instances could easily run jobs 30% faster, for the same CPU/memory count, than those that GitHub offers.
Yikes! They seem to be gunning for services like WarpBuild, which we've used for a couple years to keep our costs low. The $0.002 per minute on top of WarpBuild's costs is exactly GitHub's new pricing scheme.
I'm happy for competition, but this seems a bit foul since we users aren't getting anything tangible beyond the promise of improvements and investments that I don't need.
The lever that matters the most with the new $0.002/min tax is to reduce the number of minutes consumed.
Given that GitHub runners are as slow as ever, it's actually a point in our favor even compared to self-hosting on AWS etc. However, it makes the value harder to communicate <shrug>.
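To put rough numbers on it: a team burning 100,000 runner-minutes a month would pay an extra 100,000 × $0.002 = $200/month under the new fee, so cutting minutes in half cuts that line item in half too.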
But you do have to figure out which ones are salient to you and how they map to your specific app's lifecycle.
As a very small example: would you need to handle `charge.succeeded` and `payment_intent.succeeded`? How would you dedupe processing those events versus `customer.subscription.created`? Today, integrating a payment processor requires a lot of incidental knowledge about its specific approach to webhook events.
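To make that concrete, here's a rough TypeScript sketch of the kind of dedupe I mean. The event names are real Stripe types, but the in-memory idempotency store and the choice to key fulfillment on the PaymentIntent id are my own assumptions, not anything Stripe prescribes:

```typescript
import Stripe from "stripe";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

// Hypothetical in-memory idempotency store; in production this would be a DB table.
const processed = new Set<string>();

async function fulfillPayment(paymentIntentId: string): Promise<void> {
  console.log(`fulfilling ${paymentIntentId}`); // ship the goods, send the email, etc.
}

// Both charge.succeeded and payment_intent.succeeded can fire for the same payment,
// so key the work on the PaymentIntent id rather than on the event id.
async function fulfillOnce(paymentIntentId: string): Promise<void> {
  const key = `fulfill:${paymentIntentId}`;
  if (processed.has(key)) return;
  await fulfillPayment(paymentIntentId);
  processed.add(key);
}

export async function handleWebhook(rawBody: string, signature: string): Promise<void> {
  // Verify the signature and parse the event payload.
  const event = stripe.webhooks.constructEvent(
    rawBody,
    signature,
    process.env.STRIPE_WEBHOOK_SECRET!
  );

  switch (event.type) {
    case "charge.succeeded": {
      const charge = event.data.object as Stripe.Charge;
      if (charge.payment_intent) await fulfillOnce(charge.payment_intent as string);
      break;
    }
    case "payment_intent.succeeded": {
      const intent = event.data.object as Stripe.PaymentIntent;
      await fulfillOnce(intent.id);
      break;
    }
    case "customer.subscription.created": {
      // A different lifecycle step entirely; provision the subscription here.
      break;
    }
    default:
      // Ignore event types we haven't opted into handling.
      break;
  }
}
```

The point is that the dedupe key has to come from your own domain model, not from the event stream, and nothing in the event catalog hands you that mapping.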
Yeah, this isn't great today. We have been exploring webhook bundles/groups for common integration shapes that make it a bit easier to make the right choices based on what you're doing, and we hope to have something out to help here soon.
Having to figure out which of the hundreds of Stripe event types we need to handle, and which ones overlap, was the most stressful part of adopting their system. Simplification here is welcome.
Sure, but everything from the Stripe UI down to their API has feature-creeped. I remember using it 10 years ago and getting it working in 10-20 minutes. Last month I set it up for my new project and it took almost a whole day.
I'd bet the typical payments integration has more complex requirements now than those from 10 years ago. That's what usually happens as a space becomes more important and, therefore, more regulated. Usually new entrants will come in and try to provide a tidier interface for solving an increasingly complex problem. In payments, as others in the comments have pointed out, that process is hampered by the gatekeepers involved.
You'd have to commit to building an amazing developer experience and navigating bank partnerships and compliance (per country), risk, antifraud, etc.
That's why the payments devex feels so behind the DB, hosting, or auth devex today.
I like MCP for _remote_ services such as Linear, Notion, or Sentry. I authenticate once and Claude has the access it needs to the remote data. Same goes for my team, by committing the config.
Can I “just call the API”? Yeah, but that takes extra work, and my goal is to reduce extra work.
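For a sense of the "extra work": even a small read from something like Linear means every developer managing their own API key and writing throwaway glue like this (a sketch; the GraphQL query and env var name are my assumptions):

```typescript
// Minimal "call the API directly" sketch against Linear's GraphQL endpoint.
// Every developer needs their own LINEAR_API_KEY and manages it themselves,
// which is the per-service setup a committed MCP config avoids.
async function listMyIssues(): Promise<void> {
  const response = await fetch("https://api.linear.app/graphql", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Linear's personal API keys go directly in the Authorization header.
      Authorization: process.env.LINEAR_API_KEY ?? "",
    },
    body: JSON.stringify({
      query: `{ issues(first: 5) { nodes { identifier title } } }`,
    }),
  });
  const data = await response.json();
  console.log(JSON.stringify(data, null, 2));
}

listMyIssues().catch(console.error);
```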
Client-generated IDs are necessary for distributed or offline-first systems. My company, Vori, builds a POS for grocery stores. The POS generates UUIDv7 IDs for all data it creates and that data is eventually synced to our backend. The sync time can range from less than 1 second for a store with fast Internet to hours if a store is offline.
Is a collision possible? Yes, but the likelihood of a collision is so low that it's not worth agonizing over (although I did when I was designing the system).
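For anyone curious, the generation side is small enough to sketch by hand; per RFC 9562 a UUIDv7 is a 48-bit Unix-millisecond timestamp followed by the version/variant bits and random bits (this is an illustrative sketch, not our production code):

```typescript
import { randomBytes } from "node:crypto";

// Rough UUIDv7 sketch per RFC 9562: 48-bit ms timestamp, then version 7 in the
// high nibble of byte 6, the variant bits in byte 8, and random bits everywhere else.
function uuidv7(): string {
  const bytes = randomBytes(16);
  const ms = BigInt(Date.now());

  // Bytes 0-5: 48-bit big-endian Unix timestamp in milliseconds.
  for (let i = 0; i < 6; i++) {
    bytes[i] = Number((ms >> BigInt(8 * (5 - i))) & 0xffn);
  }

  bytes[6] = (bytes[6] & 0x0f) | 0x70; // version 7
  bytes[8] = (bytes[8] & 0x3f) | 0x80; // variant 0b10

  const hex = bytes.toString("hex");
  return `${hex.slice(0, 8)}-${hex.slice(8, 12)}-${hex.slice(12, 16)}-${hex.slice(16, 20)}-${hex.slice(20)}`;
}

console.log(uuidv7());
```

The leading timestamp is what keeps the IDs roughly insertion-ordered in the backend's indexes even when a store syncs hours late, and the 74 random bits are what make a collision not worth agonizing over.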
Seems nice. It would be better if it could be run as a script or agent, instead of a plugin, so it could work against hosted installations on AWS or Google Cloud (both of which limit extensions).
While that'd be nice, I ended up deciding I probably don't want something like this installed on my prod RDS Postgres anyway. Instead I can easily run it on local dev/staging Postgres instances and test the prod config without running pg extensions on the prod instances. It looks like running it on a database dump from prod covers all the tests except the Cluster Rules, which feels like a good tradeoff for me.
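As a flavor of what I mean: a catalog-only check (say, "tables without a primary key") works just as well against a local restore of a prod dump as against prod itself. This is my own sketch, not the tool's actual implementation; the connection string and query are assumptions:

```typescript
import { Client } from "pg";

// Run a catalog-only health check against a local restore of a prod dump,
// instead of installing anything on the prod RDS instance.
async function tablesWithoutPrimaryKey(): Promise<void> {
  const client = new Client({ connectionString: "postgres://localhost:5432/prod_dump" });
  await client.connect();
  try {
    const { rows } = await client.query(`
      SELECT c.relnamespace::regnamespace AS schema, c.relname AS table
      FROM pg_class c
      WHERE c.relkind = 'r'
        AND c.relnamespace::regnamespace::text NOT IN ('pg_catalog', 'information_schema')
        AND NOT EXISTS (
          SELECT 1 FROM pg_constraint con
          WHERE con.conrelid = c.oid AND con.contype = 'p'
        )
    `);
    for (const row of rows) {
      console.log(`missing primary key: ${row.schema}.${row.table}`);
    }
  } finally {
    await client.end();
  }
}

tablesWithoutPrimaryKey().catch(console.error);
```

Pointing this kind of thing at a throwaway local restore keeps prod untouched while still exercising the prod schema.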