Comment by robot-wrangler 2 days ago
Any automation/agents/etc. around that that you could share, or is it a pretty manual process? I'm working on something similar.
After hitting the inevitable problems with LLMs trying to read/write more obscure targets like Alloy, I've been trying to decide whether it's better to a) create a Python wrapper for the obscure language, b) build an MCP tool suite for validate/analyze/run, or c) go all the way to custom models, fine-tuning, synthetic data and all that. Rough sketch of what I mean by (a) below.
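For (a), this is roughly the shape I had in mind: a thin wrapper that shells out to the Alloy Analyzer jar and reports pass/fail plus output. The jar name and the `exec` subcommand are assumptions about the CLI, not a documented interface, so treat this as a sketch:

```python
import subprocess
import tempfile
from pathlib import Path

# Hypothetical location of the Alloy Analyzer jar; adjust for your install.
ALLOY_JAR = Path("org.alloytools.alloy.dist.jar")

def check_alloy(model_source: str, timeout_s: int = 60) -> tuple[bool, str]:
    """Write an Alloy model to a temp file and run it through the analyzer.

    Returns (ok, combined output). The `exec` subcommand is an assumption
    about the jar's CLI; swap in whatever entry point your distribution
    actually exposes.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".als", delete=False) as f:
        f.write(model_source)
        model_path = Path(f.name)
    try:
        result = subprocess.run(
            ["java", "-jar", str(ALLOY_JAR), "exec", str(model_path)],
            capture_output=True, text=True, timeout=timeout_s,
        )
        return result.returncode == 0, result.stdout + result.stderr
    finally:
        model_path.unlink(missing_ok=True)
```

The same function is basically what an MCP `validate` tool would wrap anyway, so (a) and (b) end up being the same plumbing behind a different transport, which is partly why I can't decide.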
I’m purely at the experimentation stage, no automation, no agents; just ‘these are the design docs, this is the existing code base, let’s get a simple Alloy model started’ and interactively building from there. I was concerned about the same things you mention, but starting very small with a tight development loop worked well with GPT 5.1 high. I wouldn’t try to zero-shot the whole model unsupervised… yet.
The first step before a Python/TS wrapper would be to put a single-file manual into the context, as is customary for non-primary targets, but I didn’t even reach the stage where that’s necessary ;) Sketch of that step below, in case it’s useful.
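That "manual in context" step is really just string assembly. A minimal sketch, assuming a hypothetical local `alloy-reference.md` cheat sheet and an OpenAI-style chat message format (both are placeholders, not anything I've actually wired up):

```python
from pathlib import Path

# Hypothetical single-file Alloy manual / cheat sheet kept next to the project.
REFERENCE = Path("alloy-reference.md").read_text()

def build_prompt(design_doc: str, request: str) -> list[dict]:
    """Assemble a chat prompt that front-loads the Alloy reference material."""
    system = (
        "You write Alloy 6 models. Use only syntax that appears in the "
        "reference below.\n\n# Alloy reference\n" + REFERENCE
    )
    user = f"# Design doc\n{design_doc}\n\n# Task\n{request}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]
```

So far the model has been doing fine on small increments without the reference in context, which is why I haven't bothered yet.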