Comment by punnerud
I changed it to run 100% locally with ollama:8b: https://github.com/punnerud/g1
Haven't updated the Readme yet.
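Until the Readme catches up, here is a minimal sketch of what a fully local call can look like, assuming the ollama Python client with an already pulled 8B model (the model tag and prompt are placeholders, not necessarily the exact code in the fork):

```python
# Minimal sketch of one fully local chat call via the ollama Python client.
# Assumes `pip install ollama` and `ollama pull llama3.1:8b` have been run;
# model tag and prompts are illustrative placeholders.
import ollama

response = ollama.chat(
    model="llama3.1:8b",
    messages=[
        {"role": "system", "content": "You are a careful step-by-step reasoner."},
        {"role": "user", "content": "How many r's are in the word strawberry?"},
    ],
)

# The reply text lives under message -> content.
print(response["message"]["content"])
```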
I just tried it with phi3.5:3.8b-mini-instruct-fp16 - it didn't work with the base question, though interestingly the reasoning decided that strawberry was spelt s-t-r-a-w-b-e-r - which explains why the AIs have such a hard time with this question. I also tried it with my current favourite programming question - What programming language is this whole line of code using? `def obfuscated_fibonacci(x)` - and like all the AIs, it was convinced the answer was python (the correct answer is ruby - python needs a trailing colon - but most LLMs will swear blind that it's python). It didn't even consider ruby as a possibility. Nobody uses ruby anyway :D
Thanks for the fork and the suggestions though - looks like I'll be having fun with this over the week!
Maybe we could improve it more by combining it with embeddings?
It’s a way to convert a text or response into an array of numbers that can be used for similarity lookups.
I made a way to query large datasets of text strings: https://github.com/punnerud/search-embeddings-llama3.1
It can be used to let the model explore a graph of knowledge, as long as the graph stays related to the original question, and to explore different solutions at the same time without repeating itself (a branch gets linked back to similar earlier answers and stopped).
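Roughly what I mean, as a sketch: embed each candidate branch locally and stop when it is too similar to one already explored. The endpoint follows the ollama embeddings API; the model tag, threshold and helper names are just placeholders for illustration.

```python
# Sketch: embed candidate reasoning branches via the local ollama
# /api/embeddings endpoint and skip branches that are near-duplicates
# of ones already visited. Model tag and threshold are assumptions.
import requests
import numpy as np

OLLAMA_URL = "http://localhost:11434/api/embeddings"

def embed(text: str, model: str = "llama3.1:8b") -> np.ndarray:
    """Convert a text into a vector using the local ollama embeddings endpoint."""
    r = requests.post(OLLAMA_URL, json={"model": model, "prompt": text})
    r.raise_for_status()
    return np.array(r.json()["embedding"], dtype=np.float32)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

visited: list[np.ndarray] = []

def should_explore(branch_text: str, threshold: float = 0.9) -> bool:
    """Return False if this branch is a near-duplicate of one already explored."""
    vec = embed(branch_text)
    if any(cosine(vec, seen) >= threshold for seen in visited):
        return False  # linked back to a similar answer -> stop this branch
    visited.append(vec)
    return True

for candidate in ["Count the letters one by one.", "Count letters one at a time."]:
    verdict = "explore" if should_explore(candidate) else "skip (too similar)"
    print(candidate, "->", verdict)
```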
You should also try phi-3-small 7B - it seems much better at reasoning according to https://livebench.ai