Comment by mooreds 3 months ago

Is the output as good?

I'd love the ability to run the LLM locally, since that would make it easier to use on non-public code.

fforflo 3 months ago

It's decent enough. But you'd probably have to use a model like Llama 2, which may set your GPU on fire.
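
For anyone wanting to try this, here's a minimal sketch of running a Llama 2 chat model locally with Hugging Face transformers. The model ID and 4-bit settings are illustrative assumptions (you need transformers, torch, and bitsandbytes installed, plus license access to the model on the Hub), and even quantized, the 7B variant wants several GB of VRAM:

    # Minimal local-inference sketch using Hugging Face transformers.
    # Assumes transformers, torch, and bitsandbytes are installed, and
    # that access to meta-llama/Llama-2-7b-chat-hf has been granted.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    model_id = "meta-llama/Llama-2-7b-chat-hf"  # illustrative choice

    # 4-bit quantization keeps VRAM use down (roughly 4-5 GB for 7B),
    # which is what stops the GPU from catching fire.
    quant_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,
    )

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=quant_config,
        device_map="auto",  # place layers on GPU/CPU automatically
    )

    prompt = "Summarize what this function does: def add(a, b): return a + b"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

Since nothing leaves the machine, this setup works on private codebases; the trade-off is output quality and speed relative to hosted models.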