Fair enough, I thought what I'd originally written for that section was too wordy, so I asked Claude to rewrite it. I'll go a bit lighter on the AI editing next time. Here's most of the original with the code examples omitted:
Watching Tuomas' initial talk about Linear's realtime sync, one of the most appealing aspects of their design was the reactive object graph they developed. They've essentially made it possible for frontend development to be done as if it's just operating on local state, reading/writing objects in an almost Active Record style.
The reason this is appealing is that when prototyping a new system, you typically need to write an API route or RPC operation for every new interaction your UI performs. The flow often looks like:
- Think of the API operation you want to call
- Implement that handler/controller/whatever based on your architecture/framework
- Design the request/response objects and share them between the backend/frontend code
- Potentially, write the database migration that will unlock this new feature
Jazz offers the same benefits via its sync + observable object graph. Write a schema in the client code, run the Jazz sync server or use Jazz Cloud, then just work with what feel like plain JS objects.
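To make "feels like plain JS objects" concrete, here's a tiny generic sketch of the reactive-object idea (this is not Jazz's or Linear's actual API, just the pattern): a `Proxy` that lets you read and write ordinary properties while notifying subscribers on every write.

```typescript
// Hypothetical illustration of a reactive object graph node.
// Writes look like plain property assignment; listeners fire on change,
// which is where a sync engine or UI re-render would hook in.
type Listener = () => void;

function observable<T extends object>(target: T, onChange: Listener): T {
  return new Proxy(target, {
    set(obj, prop, value) {
      (obj as any)[prop] = value; // write like a plain object...
      onChange();                 // ...and notify whoever is watching
      return true;
    },
  });
}

// Usage: reads and writes are ordinary property access.
let renders = 0;
const task = observable({ title: "Draft post", done: false }, () => renders++);
task.done = true; // plain assignment; triggers the listener once
```

A real sync engine would also persist and replicate each write, but the appeal is exactly this surface: no request/response objects, no handler per interaction.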
That's an interesting question. I will open a Jira ticket to schedule a meeting with the Product Team, and they will assign you several stories with acceptance criteria written by AI.
This is great, I've been wanting to do something like this after finding an old series about making games in the terminal (https://www.youtube.com/watch?v=xW8skO7MFYw). I'll need to check out Bevy now.
Thanks! That's not harsh at all, it's exactly what Tim and I are looking for. It's honest detailed feedback with great reasoning. I appreciate you taking the time to provide so much detail.
The way you're describing iOS is similar to how nitric works. Developers indicate in code "I'm reading from this bucket"; it's a request, not an order, so they're not actually configuring the permissions system. That request is collected into a graph with other requests (for resources, permissions, etc.) and passed via an API to a provider to fulfill.
If you want to change what "read" means you're free to do that in the provider without changing a single line of application code. But you also get the benefit on the Ops side of not needing to read the application code to figure out what permissions it needs to work: that part is generated, so you can't miss anything.
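The pattern described above can be sketched in a few lines (names here are illustrative, not nitric's actual SDK): application code declares intent, the declarations accumulate into a spec, and a provider later walks that spec to provision real permissions.

```typescript
// Sketch of "request, not order": declarations are collected, not executed.
type Action = "read" | "write";

interface ResourceRequest {
  resource: string;
  actions: Action[];
}

// The "graph" of requests a deploy-time provider would fulfill.
const requested: ResourceRequest[] = [];

function bucket(name: string) {
  return {
    allow(...actions: Action[]) {
      // Record the intent; nothing touches IAM here.
      requested.push({ resource: `bucket:${name}`, actions });
      return { name }; // handle the app code uses for reads/writes
    },
  };
}

// Application code states what it needs and nothing more.
const photos = bucket("photos").allow("read");
```

Because `requested` is data rather than side effects, a provider can turn it into IAM policies, Terraform, or anything else without parsing the application code.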
If you want to output Terraform or config files or something else like you do today, to enable audits and keep it alongside the code, you can do that easily.
I did browse the docs and repos[0]. I'm saying that "IfC" as formulated here[1] is an anti-pattern and what I refer to as misguided.
> IaC tools like Pulumi, Terraform, AWS CDK, Ansible and others bring repeatability to infrastructure by giving you scripts that you can use across different environments or projects to provision infrastructure. This code/config is in addition to your application code, typically with a tight brittle coupling between the two.
If there is a tight and brittle coupling, this is not an issue stemming from either the IaC approach or any of those tools. It's an issue of poor design.
I'm curious what designs you use to avoid the issues. For example, if your code needs to access a resource (e.g. making a call to send an event to a cloudwatch event bus or SNS topic on AWS), how do you deal with things like:
- Consuming AWS client libraries (or APIs) in your application code
- Avoiding writing cloud specific mocks/etc. for testing that same code
- Avoiding env vars with resource names/ARNs/etc. to use with that code, that then need to be duplicated in the Terraform/other IaC without typos etc.
- The code not knowing whether it has the permissions needed to make those calls, so it can't be guaranteed to be correct before deploying and testing in a live environment
- No longer needing that topic in future, but it lingers in the IaC because the two are unaware of each other
To me those are all examples of the application code "getting involved in the infrastructure it's running on", which I agree it has no business doing.
Nitric deals with these things by separating that code into a separate module whose sole responsibility is dealing with the cloud, exposed through a common interface for any cloud/environment.
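A minimal sketch of that separation (the interface and names are hypothetical, not nitric's real API): app code depends on a small events interface, cloud-specific code lives behind it, and tests use an in-memory fake instead of cloud-specific mocks.

```typescript
// App code depends only on this interface, never on an AWS client library.
interface EventPublisher {
  publish(topic: string, payload: unknown): Promise<void>;
}

// An AWS implementation would wrap the SNS client and resolve the real
// topic ARN itself; it's the only module that knows about the cloud.
// For tests, an in-memory fake stands in, so no cloud mocks are needed.
class InMemoryPublisher implements EventPublisher {
  sent: { topic: string; payload: unknown }[] = [];
  async publish(topic: string, payload: unknown) {
    this.sent.push({ topic, payload });
  }
}

// Application logic is cloud-agnostic and testable before any deploy.
async function notifyOrderShipped(events: EventPublisher, orderId: string) {
  await events.publish("order-shipped", { orderId });
}

const fake = new InMemoryPublisher();
await notifyOrderShipped(fake, "ord_123");
```

This addresses the list above directly: no AWS SDK imports in application code, no cloud-specific mocks, and no resource-name env vars to keep in sync with the IaC by hand.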