
Complete newbie here - some questions, if I may!

This stuff can run on a local machine without internet access, correct?

And it can pretty much match Nano Banana? https://github.com/PicoTrex/Awesome-Nano-Banana-images/blob/...

Also -- what are the specs for a machine to run it (even if slowly!)



This model can be run completely offline, yes. You'll need anywhere from 60-200 GB of memory (either all VRAM for high speeds, a combination of VRAM and system RAM, or just CPU + RAM). The active parameter count is really low (3B), so it'll likely run fine even on CPU. You should get 10-15+ tokens/second even on old DDR4 systems. Offload some experts to a GPU (can be as little as 8-16 GB of VRAM) and you'll see higher speeds.
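If you'd rather not wait for the inference apps to catch up, here's a minimal sketch of running it offline with Hugging Face transformers. This assumes a recent transformers release that includes Qwen3-Next support, accelerate installed for device_map, and that the Instruct checkpoint is published as "Qwen/Qwen3-Next-80B-A3B-Instruct" (repo id is my assumption):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen3-Next-80B-A3B-Instruct"  # assumed repo id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",   # keep the checkpoint's native dtype
        device_map="auto",    # spill layers to CPU RAM when VRAM runs out
    )

    messages = [{"role": "user", "content": "Explain mixture-of-experts in one paragraph."}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output_ids = model.generate(input_ids, max_new_tokens=256)
    print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))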

This has nothing to do with Nano Banana or image generation. For that you want the Qwen Image Edit[1] models.

1 - https://huggingface.co/Qwen/Qwen-Image-Edit
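For completeness, a rough sketch of running that image-edit model locally via diffusers; I'm assuming the QwenImageEditPipeline class and its image/prompt arguments match the model card, so treat the exact names as provisional:

    import torch
    from PIL import Image
    from diffusers import QwenImageEditPipeline  # assumed class name from the model card

    pipe = QwenImageEditPipeline.from_pretrained(
        "Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16
    )
    pipe.to("cuda")  # needs a GPU with enough VRAM, or swap in CPU offloading

    image = Image.open("input.png").convert("RGB")
    result = pipe(image=image, prompt="Replace the sky with a sunset")
    result.images[0].save("output.png")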


What you mean is Qwen Image and Qwen Image Edit; you can run those on a local machine, using the Draw Things application for example.

The model discussed here is a text model, so similar to ChatGPT. You'll also be able to run it on your local machine, but not yet, as apps need to be updated with Qwen3 Next support (llama.cpp, Ollama, etc.).


> This stuff can run on a local machine without internet access, correct?

Yes.

> And it can pretty much match Nano Banana?

No. Qwen3-Next is not a multimodal model; it has no image generation capability.


Isn't this one a text model?


Ah, maybe! I am lost reading this page with all the terminology.


You'll get used to it.

Make sure to lurk on r/LocalLlama.


> Make sure to lurk on r/LocalLlama.

Please do take everything you read there with a grain of salt though, as the "hive-mind" effect is huge there, even compared to other subreddits.

I'm guessing the combination of a huge influx of money, reputations on the line, and a high-traffic community is ripe for both hive-minding and influence campaigns.



