Comment by parsabg
Thank you!
I like that suggestion. Saved prompts seem like an obvious addition, and having templating within them makes sense. I wonder how well "for each of the following websites do X" prompts would work (i.e. having the LLM do the enumeration rather than the client); my intuition is that it wouldn't be as robust because of the long accumulated context. (A rough sketch of the client-side alternative is below.)
Edit: forgot to mention that it already supports Ollama.
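For illustration, a minimal sketch of the client-side enumeration, assuming a hypothetical `runPrompt` helper that stands in for whatever sends a single prompt to the agent (none of these names are from the extension):

```ts
// Client-side enumeration: the client expands the template and sends one
// short prompt per site, so each run starts from a fresh context instead of
// accumulating every site into one long conversation.
// `runPrompt` is a hypothetical stand-in for the agent call.
async function runTemplatePerSite(
  template: string,
  sites: string[],
  runPrompt: (prompt: string) => Promise<string>,
): Promise<Record<string, string>> {
  const results: Record<string, string> = {};
  for (const site of sites) {
    results[site] = await runPrompt(template.replace("{{site}}", site));
  }
  return results;
}

// Example usage (hypothetical agent handle):
// await runTemplatePerSite(
//   "Open {{site}} and summarize the main headline",
//   ["https://example.com", "https://example.org"],
//   agent.run,
// );
```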
Yeah, that "for each" needs to be code rather than a prompt. Ideally you only use the LLM the first time you run the task; once it has "figured out the path", you run that directly through code.
So for the example above, the user might have to: run "do this for this website", then save it as a macro, then create a template from it, then run the template with input: [list of 10 websites].
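A rough sketch of that flow, with every type and helper hypothetical (`planWithLLM` stands in for the first LLM-driven run, `execute` for a deterministic replayer):

```ts
// "Use the LLM once, then replay as code": a saved macro is just the recorded
// steps with a {{site}} placeholder that gets filled in per input.
type Step =
  | { kind: "goto"; url: string }
  | { kind: "click"; selector: string }
  | { kind: "extract"; selector: string };

interface Macro { steps: Step[] }

// First run: let the LLM figure out the path and record it as a macro.
async function recordMacro(
  task: string,
  planWithLLM: (task: string) => Promise<Step[]>, // hypothetical
): Promise<Macro> {
  return { steps: await planWithLLM(task) };
}

// Template run: substitute each input into the saved steps and replay them
// directly through code, with no LLM in the loop.
async function runTemplate(
  macro: Macro,
  sites: string[],
  execute: (steps: Step[]) => Promise<void>, // hypothetical deterministic runner
): Promise<void> {
  for (const site of sites) {
    const steps = macro.steps.map((s) =>
      s.kind === "goto" ? { ...s, url: s.url.replace("{{site}}", site) } : s,
    );
    await execute(steps);
  }
}
```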