Very good questions! I still need to flesh out the patterns. While streams of streams are common in functional programming environments, I simply haven't seen them in class-based streaming patterns anywhere, so they're things to figure out.
Of course, more streams means more resource utilization, there's no getting around that, but that's the cost of parallelism. The use of `yield*` should keep the overhead to a bare minimum. Since the streams are left alone and aren't consumed until needed, that should preserve some of the back-pressure behavior, although I do need to look into that more deeply.
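To make the idea concrete, here's a minimal sketch of a "stream of streams" using async generators (the names are illustrative, not from any real library): the outer stream yields inner streams without consuming them, and `yield*` delegation forwards each pull, and with it the back-pressure, directly to the inner stream.

```javascript
// Inner stream: a plain async generator producing labeled values.
async function* innerStream(label, count) {
  for (let i = 0; i < count; i++) {
    yield `${label}:${i}`;
  }
}

// Outer stream: yields whole inner streams *unconsumed*.
// Nothing inside them runs until a consumer pulls from them.
async function* streamOfStreams() {
  yield innerStream("a", 2);
  yield innerStream("b", 2);
}

// Flatten with yield*: delegation hands each pull straight to the
// inner stream, keeping per-item overhead to a bare minimum.
async function* flatten(streams) {
  for await (const stream of streams) {
    yield* stream;
  }
}

async function main() {
  const out = [];
  for await (const value of flatten(streamOfStreams())) {
    out.push(value);
  }
  return out; // ["a:0", "a:1", "b:0", "b:1"]
}
```

Because the inner generators are lazy, a slow consumer of `flatten(...)` naturally slows production in whichever inner stream is currently delegated to, which is the back-pressure behavior I'd want to preserve.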
How the system handles errors probably doesn't have a single solution that works for all frameworks, so I think it should be up to the specific requirements of each use case, but there's also definitely more work to do to explore the options and the patterns.
These are all things I definitely want to hear ideas for as well!
The next thing I'm exploring is applying these patterns to web rendering, which will be a real stress test for how they can be used.
They cited the demo of a port having performance issues several months before release. That seems like a complete non-story to me. It's brand new hardware, and nobody has experience porting to or optimizing for its specific specifications yet. Early ports being unstable and requiring optimizations all the way up to or even past the release date is standard at this point in a hardware cycle.
I think there is a need for the AI counseling use case, but it should not be provided by a general purpose AI assistant. It should be designed by professional psychologists and therapists, with greater safeguards like human check-ins to make sure users get the help they need.
The best way to stop this is to make those safeguards stronger and completely shut down the chat to refer users to seek help from a better service. Unfortunately those services don't really exist yet.
There would be false positives and that'll be annoying, but I think it's worth it to deal with some annoyance to ensure that general purpose AI assistants are not used for counseling people in a vulnerable mental state. They are not aligned to do that and they can easily be misaligned.
Apple has vertical integration between their hardware and operating system, meaning they have way more control. They can adapt their software to optimize their hardware in ways competitors can't.
For the Linear example, I would argue that adding too much flexibility would make the tool worse. Rigid workflows are consistent, making them more robust and scalable: they work predictably and need less guidance or intervention.
When you find a rigid workflow that is both widely applicable and useful, that's how the most valuable companies are made. Those workflows can scale up with little intervention, giving them incredibly high yield.
I think the new challenge is introducing flexibility in a controlled manner, so we can minimize the inconsistency added in pursuit of broader goals. That way the workflow can strike the right balance between robustness and flexibility for the task at hand.
The call was being relayed to the pilot by the flight supervisor. While he wasn't "on" the call, I think it's fair to say he was "part" of the call. He's still getting live tech support and trying to troubleshoot the plane while flying it. He had an intermediary, but I don't think that fundamentally changes the scenario.
I think there's just some nuance to the scenario. The pilot wasn't directly on the call, but he was participating in it, with the flight supervisor relaying the information.
I'd compare it to being in the room with someone on a conference call while they relay the conversation both ways. I would still say you were participating in the call even though you weren't directly on it.
Also, he did initiate the call so "F-35 pilot held" is imprecise, but not totally wrong. Either way, the pilot was in an active tech support session with the plane engineers, making this one of the most intense tech support calls in history.
I use AI 90% of the time for research, learning, and prototyping. It's increased my interest because before, I felt I didn't have time to do all those things; now I can expand my knowledge so much faster that greater challenges feel attainable.