Comment by erikig
Hardware: MacBook Pro M4 Max, 128GB
Platform: LM Studio (primarily) & Ollama
Models:
- qwen/qwen3-coder-30b A3B Instruct 8-bit MLX
- mlx-community/gpt-oss-120b-MXFP4-Q8
For code generation, especially on larger projects, these models aren't as good as the cutting-edge foundation models. For summarizing local git repos/libraries, generating documentation, and simple offline command-line tool use, they do a good job.
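For anyone trying this setup: LM Studio serves loaded models through an OpenAI-compatible local API (default `http://localhost:1234/v1`), so you can script against them with nothing but the standard library. A minimal sketch, assuming a model is loaded and the local server is running; the model ID is whatever LM Studio shows for your download:

```python
import json
from urllib import request

BASE_URL = "http://localhost:1234/v1"  # LM Studio's default local server


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(model: str, prompt: str) -> str:
    """POST a chat request to the local server and return the reply text."""
    payload = json.dumps(build_chat_request(model, prompt)).encode()
    req = request.Request(
        f"{BASE_URL}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Example call against a locally loaded model (ID is an assumption).
    print(chat("qwen/qwen3-coder-30b", "Summarize what a git rebase does."))
```

Ollama exposes a similar OpenAI-compatible endpoint (on port 11434 by default), so the same script works there by changing `BASE_URL`.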
I find these communities quite vibrant and helpful too:
Since you are on a Mac, if you need some kind of code-execution sandbox, check out Coderunner[1], which is built on Apple's container framework and provides a way to execute any LLM-generated code without risking arbitrary code execution on your machine.
I have recently added Claude skills support to it, so all the Claude skills can be executed locally on your Mac too.
1. https://github.com/instavm/coderunner