Comment by anerli a day ago

So this is a path that we definitely considered. However, we think it's a half-measure to generate actual Playwright code and just run that: you still have a brittle test at the end of the day, and once it breaks you'd need to pull in an LLM to try to adapt it anyway.

Instead of caching actual code, we cache a "plan" of specific web actions that are still described in natural language.

For example, a cached "typing" action might look like: { variant: 'type'; target: string; content: string; }

The target is a natural language description. The content is what to type. Moondream's job is simply to find the target; we then click into that target and type the content. This means it can be fully vision-based and not rely on the DOM at all, while still being very consistent. Moondream is also trivially cheap to run since it's only a 2B model. If it can't find the target, or its confidence changes significantly (measured via token probabilities), that's an indication the action/plan requires adjustment, and we can dynamically swap in the planner LLM to decide how to adjust the test from there.
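As a rough sketch of the flow described above (the type and function names here are illustrative assumptions, not the actual Magnitude API), executing a cached action looks something like: locate the natural-language target, and if the lookup fails or confidence drops below a threshold, fall back to the planner LLM.

```typescript
// Hypothetical sketch of the cached-action fallback flow. All names
// (Locator, runTypeAction, CONFIDENCE_FLOOR) are illustrative.
type TypeAction = { variant: 'type'; target: string; content: string };

interface Hit { x: number; y: number; confidence: number }

interface Locator {
  // Moondream-style lookup: natural-language target description ->
  // screen position plus a confidence derived from token probabilities.
  find(target: string): Promise<Hit | null>;
}

const CONFIDENCE_FLOOR = 0.8; // assumed threshold, tuned per model

async function runTypeAction(
  action: TypeAction,
  locator: Locator,
  click: (x: number, y: number) => Promise<void>,
  type: (text: string) => Promise<void>,
  replan: (a: TypeAction) => Promise<void>, // planner-LLM fallback
): Promise<'executed' | 'replanned'> {
  const hit = await locator.find(action.target);
  if (!hit || hit.confidence < CONFIDENCE_FLOOR) {
    // Target not found, or confidence shifted: the plan needs adjusting.
    await replan(action);
    return 'replanned';
  }
  await click(hit.x, hit.y);
  await type(action.content);
  return 'executed';
}
```

The key point is that the cached artifact is the natural-language plan, not coordinates or selectors, so the cheap vision model re-grounds it on every run.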

ekzy a day ago

Did you consider also caching the coordinates returned by Moondream? I understand that it's cheap, but it could be useful for detecting whether an element has changed position, since that may indicate a regression.

  • anerli 21 hours ago

    So the problem is that if we cache the coordinates and click blindly at the saved positions, there's no way to tell if the interface changes or if we're actually clicking the wrong things (unless we try something hacky like listening for events on the DOM). Detecting whether elements have changed position would definitely be feasible, though: when re-running a test with Moondream, we could compare against the coordinates of the last run.
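That last idea, comparing this run's Moondream coordinates against the cached ones from the previous run, could be sketched like this (function name and the pixel tolerance are assumptions for illustration):

```typescript
// Illustrative position-regression check: flag targets whose located
// position drifted beyond a tolerance since the last run. Names and
// the tolerance value are hypothetical.
interface Point { x: number; y: number }

const DRIFT_TOLERANCE_PX = 8; // assumed: small layout jitter is acceptable

function detectDrift(
  lastRun: Record<string, Point>, // target description -> cached position
  thisRun: Record<string, Point>, // target description -> fresh position
): string[] {
  const moved: string[] = [];
  for (const [target, prev] of Object.entries(lastRun)) {
    const cur = thisRun[target];
    if (!cur) continue; // target not re-located this run
    const dist = Math.hypot(cur.x - prev.x, cur.y - prev.y);
    if (dist > DRIFT_TOLERANCE_PX) moved.push(target);
  }
  return moved;
}
```

Note this only works as a diagnostic on top of fresh lookups; the clicks themselves still use the newly located coordinates, so a moved element is reported rather than mis-clicked.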