I don't get this. Isn't this the same as saying "I taught my 5 year old to calculate integrals" by typing them into Wolfram Alpha? The actual relevant cognitive task (integrals in my example, "seeing" in yours) is outsourced to an external API.
Why do I need gpt-oss-120B at all in this scenario? Couldn't I just call, e.g., the gemini-3-pro API directly from the Python script?
'Calculating' an integral is usually done by applying a series of fairly abstract mathematical tricks. There is usually no deeper meaning applied to the solving. If you have profound intuition, you can guess the solution to an integral by 'inspection'.
What part here is the knowing or understanding? Does solving an integral symbolically provide more knowledge than numerically or otherwise?
Understanding the underlying functions themselves and the areas they sweep: has substitution or integration by parts actually provided you with that?
Parent says “I taught my 5yo how to” — this means their 5yo learned a process.
OP says “I taught LLM how to see”, and this should mean the LLM (which is capable of being taught/learning) internalized how to see. It did not; it was given a tool that does the seeing and tells it what things are.
People are very interested in getting good local LLMs with vision integrated, and so they want to read about it. Next to nobody would click on the honest “I enabled an LLM to use a Google service to identify objects in images”, which is what OP actually did.
Booyah yourself, this is like being able to call two APIs and calling it learning? I thought you did some VLM stuff with a projection.
Confused as to why you wouldn’t integrate a local VLM if you want a local LLM as the backbone. Plenty of 8B-30B VLMs out there that are visually competent.
Next, try actually teaching it to see by training a projector with a vision encoder on gpt-oss.
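For anyone unfamiliar, that is roughly the LLaVA-style recipe: freeze a vision encoder, train a small "projector" that maps its patch embeddings into the LLM's token-embedding space, and feed those projected tokens in ahead of the text. A minimal sketch of just the projector part, with placeholder dimensions (not gpt-oss's actual config):

    # Sketch only: vision_dim / llm_dim are placeholders, not gpt-oss's real sizes.
    import torch
    import torch.nn as nn

    class VisionProjector(nn.Module):
        def __init__(self, vision_dim=1024, llm_dim=2880):
            super().__init__()
            # Two-layer MLP, as in LLaVA-1.5; a single Linear also works.
            self.mlp = nn.Sequential(
                nn.Linear(vision_dim, llm_dim),
                nn.GELU(),
                nn.Linear(llm_dim, llm_dim),
            )

        def forward(self, patch_embeddings):   # (batch, patches, vision_dim)
            return self.mlp(patch_embeddings)  # (batch, patches, llm_dim)

    # Training idea: keep the vision encoder (and optionally the LLM) frozen,
    # prepend the projected image tokens to the caption's token embeddings,
    # and optimize only the projector on next-token loss over image/caption pairs.
    projector = VisionProjector()
    fake_patches = torch.randn(2, 256, 1024)   # stand-in for encoder output
    image_tokens = projector(fake_patches)     # ready to prepend to text embeddings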
too slow bro
might be slower, but then it can get the actual image as input, not just some description of it
Looks like a TOS violation to me to scrape Google directly like that. While the concept of giving a text-only model 'pseudo vision' is clever, I think the solution in its current form is a bit fragile. SerpAPI, the Google Custom Search API, etc. exist for a reason; for anything beyond personal tinkering, this is a liability.
> Looks like a TOS violation to me to scrape Google directly like that
If something was built by violating TOSes, and you use that thing to commit more TOS violations against the ones who committed the original TOS violations to build it, do they cancel each other out?
Not about GPT-OSS specifically, but say you used Gemma for the same purpose instead, for this hypothetical.
Your TOS violations just have to enable you to get big enough to fund your own legal defense and pay typical corporate fines, and then you’re ok.
If you do it and give it away they will come for you.
You mean like an “unclean hands” defense?
https://en.wikipedia.org/wiki/Clean_hands
Coolest thing about it is, it's one pip install to give your local model the ability to see, do Google Searches, and use News, Shopping, Scholar, Maps, Finance, Weather, Flights, Hotels, Translate, Images, Trends, etc.
Easiest and fastest way and the impact is massive
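For anyone curious what that kind of one-package integration tends to look like in practice, here is a rough sketch of the usual OpenAI-style tool-calling loop against a local server. The google_lens helper, the endpoint, and the port below are hypothetical placeholders, not the actual project's API:

    # Hypothetical sketch: "google_lens" stands in for whatever helper the
    # package really exposes; the endpoint assumes a local OpenAI-compatible
    # server (e.g. llama.cpp's llama-server) on port 8080.
    import json
    import requests

    def google_lens(image_path: str) -> str:
        # Placeholder for the package's real lens lookup.
        return f"(lens results for {image_path} would go here)"

    TOOLS = [{
        "type": "function",
        "function": {
            "name": "google_lens",
            "description": "Identify objects in an image via Google Lens",
            "parameters": {"type": "object",
                           "properties": {"image_path": {"type": "string"}},
                           "required": ["image_path"]},
        },
    }]

    resp = requests.post("http://localhost:8080/v1/chat/completions", json={
        "messages": [{"role": "user", "content": "What is on my desk? See desk.jpg"}],
        "tools": TOOLS,
    }).json()

    for call in resp["choices"][0]["message"].get("tool_calls") or []:
        if call["function"]["name"] == "google_lens":
            result = google_lens(**json.loads(call["function"]["arguments"]))
            # ...append result as a {"role": "tool", ...} message and call the model again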
Isn’t SerpAPI about scraping Google through residential proxies as a service?
And they are getting sued. https://blog.google/innovation-and-ai/technology/safety-secu...
105 comments from a few months ago.
https://news.ycombinator.com/item?id=46329109
All AI models are built on stolen data; that's fair war.
Thought this was "Hacker News", bro.
> GPT-OSS-120B, a text-only model with zero vision support, correctly identified an NVIDIA DGX Spark and a SanDisk USB drive from a desk photo.
But wasn't it Google Lens that actually identified them?
You eventually get hit with a CAPTCHA with the Playwright approach.
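For context, a minimal sketch of that kind of browser-driven flow and the point where it usually falls over. The block-page check is just a heuristic, not the project's actual code:

    # Sketch only: open Google Lens in a headless browser and bail out if
    # Google serves its CAPTCHA/block interstitial (the usual failure mode
    # once the traffic gets flagged as automated).
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://lens.google.com/")
        blocked = "/sorry/" in page.url or "recaptcha" in page.content().lower()
        if blocked:
            print("Hit the CAPTCHA wall; scraping stops working here")
        browser.close()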
Have you tried Llama? In my experience it has been strictly better than GPT-OSS, but it might depend on specifically how it is used.
Have you tried GPT-OSS-120b MXFP4 with reasoning effort set to high? Out of all models I can run within 96GB, it seems to consistently give better results. What exact llama model (+ quant I suppose) is it that you've had better results against, and what did you compare it against, the 120b or 20b variant?
How are you running this? I've had issues with Opencode formulating bad messages when the model runs on llama.cpp. Jinja threw a bunch of errors and GPT-OSS couldn't make tool calls. There's an issue for this on Opencode's repo, but it seems like it's been waiting for a couple of weeks.
> What exact llama model (+ quant I suppose) is it that you've had better results against
Not llama, but Qwen3-coder-next is on top of my list right now. Q8_K_XL. It's incredible (not just for coding).
Again, you're not specifying which GPT-OSS you're talking about; there are two versions, 20b and 120b. Not to mention that if you have a consumer GPU, you're most likely running it with additional quantization too, but you're not saying what version.
> Jinja threw a bunch of errors and GPT-OSS couldn't make tool calls.
This was an issue for a week or two when GPT-OSS initially launched, as none of the inference engines had properly implemented support for it, especially around tool calling. I'm running GPT-OSS-120b MXFP4 with LM Studio and directly with llama.cpp; the recent versions handle it well and I have no errors.
However, when I've tried either 120b or 20b with additional quantization (not the "native" MXFP4 ones), I've seen that they have trouble with the tool syntax too.
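A quick way to check whether a given build or quant is emitting well-formed tool calls is to hit the local OpenAI-compatible endpoint with a trivial tool and see whether the arguments parse as JSON. A minimal sketch, assuming llama.cpp's llama-server (started with --jinja so the model's own chat template is used) on its default port:

    # Smoke test for tool-call syntax against a local OpenAI-compatible server.
    import json
    import requests

    resp = requests.post("http://localhost:8080/v1/chat/completions", json={
        "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
        "tools": [{"type": "function", "function": {
            "name": "get_weather",
            "parameters": {"type": "object",
                           "properties": {"city": {"type": "string"}},
                           "required": ["city"]}}}],
    }).json()

    for call in resp["choices"][0]["message"].get("tool_calls") or []:
        try:
            json.loads(call["function"]["arguments"])   # broken quants often fail here
            print("well-formed tool call:", call["function"]["name"])
        except json.JSONDecodeError:
            print("malformed arguments:", call["function"]["arguments"])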
> Not llama
What does your original comment mean, then? You said Llama was "strictly" better than GPT-OSS; which specific model variant are you talking about, or did you miswrite somehow?
GPT-OSS-120B runs like hell on my DGX Spark
The MXFP4 variant, I suppose? My setup (RTX Pro 6000) does around 140 tok/s with llama.cpp and around 160 tok/s with vLLM.