fm2606 3 days ago

gpt-oss-120b is amazing. I created a RAG agent to hold most of the GCP documentation (separate download, parsing, chunking, etc.). ChatGPT finished a 50-question quiz in 6 minutes with a score of 46/50. gpt-oss-120b took over an hour but got 47/50. All the other local LLMs I tried were small and performed way worse, less than 50% correct.

I ran this on an i7 with 64 GB of RAM and an old Nvidia card with 8 GB of VRAM.

EDIT: Forgot to say what the RAG system was doing: answering a 50-question multiple-choice test about GCP and cloud engineering.
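
For anyone curious, the answer loop had roughly the following shape. This is just a Python sketch for illustration; the endpoint, model name, and retrieve() step are placeholders, not my actual code.

    # Rough sketch of the answer loop: pull relevant doc chunks, ask the local
    # model for the letter of the correct choice, tally the score.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

    def retrieve(question: str, k: int = 5) -> list[str]:
        # Placeholder: return the k most similar doc chunks from the vector store.
        return ["...relevant GCP documentation chunk..."]

    def answer(question: str, options: dict[str, str]) -> str:
        context = "\n\n".join(retrieve(question))
        choices = "\n".join(f"{letter}. {text}" for letter, text in options.items())
        resp = client.chat.completions.create(
            model="gpt-oss-120b",  # whatever name your runner exposes
            messages=[
                {"role": "system", "content": "Answer with only the letter of the correct choice."},
                {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}\n{choices}"},
            ],
        )
        return resp.choices[0].message.content.strip()[:1].upper()

    quiz = [  # one sample item; the real test had 50 of these
        {"question": "Which service runs containers without managing servers?",
         "options": {"A": "Compute Engine", "B": "Cloud Run", "C": "Bare Metal Solution"},
         "correct": "B"},
    ]
    score = sum(answer(q["question"], q["options"]) == q["correct"] for q in quiz)
    print(f"{score} / {len(quiz)}")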

embedding-shape 3 days ago

> gpt-oss-120b is amazing

Yup, I agree, easily the best model you can run on local hardware today, especially when reasoning_effort is set to "high", but "medium" does very well too.

I think people missed how great it is because a bunch of the runners botched their implementations at launch; it wasn't until 2-3 weeks later that you could properly evaluate it. Once I could run the evaluations myself on my own tasks, it became evident how much better it is.

If you haven't tried it yet, or you tried it very early after the release, do yourself a favor and try it again with updated runners.
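
If your runner exposes an OpenAI-compatible endpoint, setting the effort looks roughly like this. A minimal sketch: the base URL and model name depend on your setup, and some runners only pick the effort up from a "Reasoning: high" line in the system prompt rather than from a request field.

    # Ask a local gpt-oss server for high reasoning effort. Base URL and model
    # name are whatever your runner exposes; extra_body is honored by some
    # servers, others read the effort from the system prompt instead.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

    resp = client.chat.completions.create(
        model="gpt-oss-120b",
        messages=[
            {"role": "system", "content": "Reasoning: high"},
            {"role": "user", "content": "Summarize the tradeoffs of eventual consistency."},
        ],
        extra_body={"reasoning_effort": "high"},
    )
    print(resp.choices[0].message.content)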

whatreason 2 days ago

What do you use to run gpt-oss here? Ollama, vLLM, etc.?

  • embedding-shape 2 days ago

    Not the parent, but I'm a frequent user of GPT-OSS and have tried all the different ways of running it. The choice goes something like this:

    - Need batching + highest total throughput? vLLM. It's complicated to deploy and install, though, and you need special versions for top performance with GPT-OSS.

    - Easiest to manage + fast enough: llama.cpp. Easier to deploy as well (just a binary) and super fast, getting ~260 tok/s on an RTX Pro 6000 for the 20B version (a quick way to check tok/s yourself is sketched below).

    - Easiest for people who aren't used to running shell commands, or who need a GUI and don't care much about performance: Ollama

    Then if you really wanna go fast, try to get TensorRT running on your setup, and I think that's pretty much the fastest GPT-OSS can go currently.
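
    If you want to sanity-check tok/s numbers on your own hardware, timing one completion against whichever OpenAI-compatible endpoint the runner exposes gets you close enough. A rough sketch (port and model name depend on the runner):

        # Quick-and-dirty tok/s check against a local OpenAI-compatible server
        # (llama.cpp, vLLM and Ollama all expose one). Port/model are placeholders.
        import time
        from openai import OpenAI

        client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

        start = time.perf_counter()
        resp = client.chat.completions.create(
            model="gpt-oss-20b",
            messages=[{"role": "user", "content": "Explain how a B-tree stays balanced."}],
            max_tokens=512,
        )
        elapsed = time.perf_counter() - start

        generated = resp.usage.completion_tokens  # may include reasoning tokens
        print(f"{generated} tokens in {elapsed:.1f}s -> {generated / elapsed:.0f} tok/s")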

giorgioz 2 days ago

On what hardware did you manage to run gpt-oss-120b locally?

rovr138 3 days ago

> I created a RAG agent to hold most of GCP documentation (separate download, parsing, chunking, etc)

If you could share the scripts you used to gather the GCP documentation, that'd be great. I've had an idea to do something like this, and the part I don't want to deal with is getting the data.

  • fm2606 2 days ago

    I tried scripts but got blocked, so I used wget to download them.

gkfasdfasdf 2 days ago

What were you using for RAG? Did you build your own or use an off-the-shelf solution (e.g. OpenWebUI)?

  • fm2606 2 days ago

    I used pgvector, chunking on paragraphs. I saved the answers to a flat text file and then parsed out what I needed.

    For parsing and vectorizing the GCP docs I used a Python script. For reading each quiz question, getting a text embedding, and submitting it to the LLM, I used Spring AI.

    It was all roll your own.

    But like I stated in my original post, I deleted it without a backup or version control. I deleted the wrong directory. Rookie mistake, and I know better.
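
    From memory, the ingestion side had roughly the following shape. A sketch only: the embedding model, table name, and connection string below are placeholders, not necessarily what I actually used.

        # Rough sketch of the ingestion pipeline: split each doc into paragraphs,
        # embed them, store them in a pgvector table. Model/table/DSN are placeholders.
        import glob

        import psycopg2
        from pgvector.psycopg2 import register_vector
        from sentence_transformers import SentenceTransformer

        model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model (384 dims)

        conn = psycopg2.connect("dbname=gcp_docs")  # placeholder DSN
        cur = conn.cursor()
        cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
        conn.commit()
        register_vector(conn)

        cur.execute("""
            CREATE TABLE IF NOT EXISTS chunks (
                id serial PRIMARY KEY,
                source text,
                content text,
                embedding vector(384)
            )
        """)

        for path in glob.glob("gcp-docs/**/*.txt", recursive=True):
            with open(path, encoding="utf-8") as f:
                text = f.read()
            # Chunk on paragraphs: split on blank lines, drop tiny fragments.
            paragraphs = [p.strip() for p in text.split("\n\n") if len(p.strip()) > 40]
            if not paragraphs:
                continue
            for para, emb in zip(paragraphs, model.encode(paragraphs)):
                cur.execute(
                    "INSERT INTO chunks (source, content, embedding) VALUES (%s, %s, %s)",
                    (path, para, emb),
                )

        conn.commit()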

lacoolj 3 days ago

You can run the 120b model on an 8 GB GPU? Or are you running this on the CPU with the 64 GB of RAM?

I'm about to try this out lol

The 20b model is not great, so I'm hoping 120b is the golden ticket.

  • gunalx 3 days ago

    In many cases I have had better results with the 20b model than the 120b model, mostly because it is faster and I can iterate on prompts more quickly to coerce it into following instructions.

    • embedding-shape 2 days ago

      > had better results with the 20b model, over the 120b model

      The quality and accuracy of the responses from the two are vastly different though, if tok/s isn't your biggest priority, especially when using reasoning_effort "high". 20B works great for small-ish text summarization and title generation, but for even moderately difficult programming tasks, 20B fails repeatedly while 120B gets it right on the first try.

  • fm2606 3 days ago

    With everything I run, even the small models, some amount goes to the GPU and the rest goes to RAM.

  • fm2606 3 days ago

    Hmmm...now that you say that, it might have been the 20b model.

    And like a dumbass I accidentally deleted the directory, and it wasn't backed up or under version control.

    Either way, I do know for a fact that the gpt-oss-XXb model beat ChatGPT by one answer: ChatGPT got 46/50 in 6 minutes, and gpt-oss got 47/50 in 1+ hour. I remember because I was blown away that I could get that type of result running locally, and I had texted a friend about it.

    I was really impressed, but disappointed by the huge disparity in time between the two.