Hacker News | dlojudice's comments

The text reminded me of one of Veritasium's latest videos [1] about power laws, self-organized criticality, percolation, etc., and it also includes a wildfire simulation.

[1] https://www.youtube.com/watch?v=HBluLfX2F_k


Cursor Composer appears to have this type of coupling and uses IDE resources better than other models on average.


Node-based workflows for AI generation seem to be the right approach. Being able to chain different models (Flux for realism, Sora for video, etc.) and do actual editing in between steps is way more useful than single-shot prompting. The ComfyUI comparison is obvious, but this looks a bit more polished. The branching/remixing workflow could be interesting for iteration. Also, being able to create an entire workflow with just one prompt would be nice.


Congratulations on your work. I spent the day working with a mix of the Composer, Sonnet 4.5, and Gemini 2.5 Pro models. In terms of quality, Composer seems to perform well compared to the others; I have no complaints so far. I'm still using Claude for planning/starting a task, but Composer performed very well in execution. What I've really enjoyed is the speed. I had already tested other fast models, but with poor quality. Composer is the first one that combines speed and quality, and the experience has been very enjoyable.


Good point. Many people (including me) switched to Apple Silicon with the hope (or promise?) of having just one computer for work and leisure, given the potential of the new architecture. That didn't happen, or happened only partially, which amounts to the same thing.

In my case, for software development, I'd be happy with an entry-level MacBook Air (now with a minimum of 16GB) for $999.


Going further, what can I build with it? Basically, can I use Python in a Tauri project, or can I use Tauri in a Python project?


Tauri in a Python project.


+ experimental JIT compiler

This could be the beginning of something very promising.


The problem is, it's too late. Most performant Python code I've seen and written isn't using numba; it's using numpy to vectorize. And sadly, there's a lot of wasted iteration when doing that just to be faster than scalar code. My point being: that code won't speed up at all without a rewrite.
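A minimal sketch of the trade-off described above (function names are mine, for illustration): the scalar loop and the numpy version compute the same result, but only the numpy one is fast, and making the loop version fast later would require exactly this kind of rewrite.

```python
import numpy as np

def scalar_sum_of_squares(xs):
    # Pure-Python loop: one interpreted iteration per element.
    total = 0.0
    for x in xs:
        total += x * x
    return total

def vectorized_sum_of_squares(arr):
    # Single numpy call: the loop runs in C, not in the interpreter.
    return float(np.dot(arr, arr))

data = np.arange(1000, dtype=np.float64)
# Same answer either way; only the execution model differs.
assert scalar_sum_of_squares(data) == vectorized_sum_of_squares(data)
```

The vectorized form often does "wasted" work (no early exit, temporaries for intermediate arrays), but it still wins against the interpreted loop, which is the point being made.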


Introducing JIT features opens a lot of opportunities beyond numerical numpy/numba vectorisation. There are endless amounts of hot loops, data shuffling, garbage collection, and monomorphisation that could be done in real-world Python that would benefit a lot, much like V8 has done for JS.
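A hedged illustration of the kind of hot loop meant here: pure-Python code whose types are stable (monomorphic) across iterations, which is exactly the pattern a specializing JIT, like V8's or CPython's experimental one, can in principle compile to fast machine code without any numpy rewrite. The function is a made-up example, not from any library.

```python
def checksum(values):
    # int in, int out on every iteration: a monomorphic hot loop,
    # the classic target for JIT specialization.
    acc = 0
    for v in values:
        acc = (acc * 31 + v) % 1_000_003
    return acc

result = checksum(range(10_000))
assert 0 <= result < 1_000_003
```

Code like this is everywhere in "real-world" Python (parsers, serializers, request handling) and cannot be vectorized away, which is why a JIT helps where numpy does not.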


I guess my point is that truly performant Python code, at least for number crunching, uses vectorized numpy functions instead of loops, and the overhead of type checking for those is fairly minimal. I have a PR in on a compute-heavy Python program where I tried using numba to JIT. Timing was within the margin of error between numpy and numba (even though the numba code could exit the loop early, which was why I was trying it), except with numba I'd be adding a dependency, and it's more work to maintain the algorithms myself instead of relying on numpy.

Of the JS code I've seen, it's mostly written in JS, so making JS faster makes JS faster. With Python, the fast code is written outside Python. It's too late by about 20 years; the world won't rewrite itself into native Python modules.


> and the overhead on type checking for those is fairly minimal

Well, yeah; the underlying C code assumes the type that was described to it by the wrapper (generally via the .dtype of an array), so it's O(1).
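A small sketch of why that check is O(1): the element type is a single tag on the array object, not a property of each element, so the wrapper inspects it once regardless of array length.

```python
import numpy as np

a = np.arange(5, dtype=np.int64)
b = np.arange(5, dtype=np.float32)

# The dtype lives on the array, not on the elements.
assert a.dtype == np.int64
assert b.dtype == np.float32

# A million-element array carries exactly the same single dtype tag,
# so dispatching to the right C kernel costs the same as for 5 elements.
big = np.zeros(1_000_000, dtype=np.float64)
assert big.dtype == np.float64
```

Contrast this with a plain Python list, where every element is a full object whose type must be checked individually.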

But I do wonder what the experience of Numpy has been like for the PyPy users.


> Tinker is a flexible API for efficiently fine-tuning open source models with LoRA.

It would be great if they offered inference from the trained model as well. Ideally pay per token.


OpenRouter should be responsible for this quality control, right? It seems to me to be the right player in the chain with the duties and scale to do so.


I see some pessimism in the comments here but honestly, this kind of product is something that would make me pay for ChatGPT again (I already pay for Claude, Gemini, Cursor, Perplexity, etc.). At the risk of lock-in, a truly useful assistant is something I welcome, and I even find it strange that it didn't appear sooner.


Truly useful?

Personal take, but the usefulness of these tools to me is greatly limited by their knowledge latency and limited modality.

I don't need information overload on what playtime gifts to buy my kitten or some semi-random but probably not very practical "guide" on how to navigate XYZ airport.

Those are not useful tips. It's drinking from an information firehose that'll lead to fatigue, not efficiency.


I doubt there would be this level of pessimism if people thought this was progress toward a truly useful assistant.

Personally, it sounds like negative value. Maybe a startup that's not doing anything else could iterate something like this into a killer app, but my expectation that OpenAI can do so is very, very low.


Pessimism is how people now signal their savviness or status. My autistic brain took some time to understand this nuance.

