I have it connected to a local Gemma model running in Ollama. I use it to quickly summarize webpages (nobody really wants to read fifteen minutes of personal anecdotes before getting to the one paragraph that actually has the relevant information) and to find information within a page, kind of like Ctrl-F on steroids.
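(Under the hood this is just Ollama's local HTTP API; for the curious, roughly the same summarize call from a terminal looks like the sketch below. The model tag and prompt are only examples, not necessarily what I run:)

    curl http://localhost:11434/api/generate -d '{
      "model": "gemma3:4b",
      "prompt": "Summarize this page in three bullet points: <pasted page text>",
      "stream": false
    }'

The response comes back as JSON with the summary in the "response" field; the browser extension just does the same request for you and renders the result.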
The machine is sitting there anyway, and the extra electricity cost is buried in the hours of gaming that GPU also gets, so I haven't noticed it yet. If you game, the graphics card is going to be obsolete long before the small amount of extra wear is obvious. YMMV if you don't already have a gaming rig lying around.
Something like this I wouldn't mind: privacy-focused, local-only models that let you use your own existing services. Can you give a quick pointer on how to connect Firefox to Ollama?
The default AI integration doesn't seem to support this. The only thing I could find that does is called PageAssist, and it's a third-party extension. Is that what you're using?
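(For anyone else trying this: whichever extension you end up with, the Ollama side of the setup is the same. A rough sketch, assuming Ollama is already installed; the model tag is just an example, and the CORS line is needed because Ollama rejects cross-origin requests from browser extensions by default:)

    # pull whichever Gemma build fits your VRAM; the tag here is an example
    ollama pull gemma3:4b
    # allow Firefox extensions (moz-extension:// origins) to call the local API
    OLLAMA_ORIGINS="moz-extension://*" ollama serve
    # then point the extension at http://localhost:11434, the default port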