Comment by OtherShrezzing a day ago

> Simple command-line tools that implement obscure hashing and encryption algorithms are straightforward initial targets, but this approach can easily extend to more complex software, such as websites, professional software, and games.

> Each replication task consists of a detailed specification and a reference implementation. The central idea is that AI models are trained to produce an implementation that precisely matches the reference behavior.
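
For concreteness, I read that as an exact-match grading harness, roughly like the sketch below (the command names, input sampling, and all-or-nothing reward are my assumptions, not spelled out in the article):

```python
import subprocess

def replication_reward(candidate_cmd, reference_cmd, test_inputs):
    """All-or-nothing reward: 1.0 only when the candidate's stdout and
    exit code match the reference on every sampled input."""
    for inp in test_inputs:
        cand = subprocess.run(candidate_cmd, input=inp, capture_output=True)
        ref = subprocess.run(reference_cmd, input=inp, capture_output=True)
        if (cand.stdout, cand.returncode) != (ref.stdout, ref.returncode):
            return 0.0
    return 1.0

# Hypothetical usage: grade a model-written hasher against the reference binary.
# reward = replication_reward(["./candidate"], ["./reference"], [b"", b"hello\n"])
```

Anything short of byte-identical behavior scores zero, which is what makes these replication tasks rather than open-ended ones.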

I really don't see the connection between those statements in the article and the assertion near its start that:

> Doing this effectively will produce RL models with strong few-shot, task-agnostic abilities capable of quickly adapting to entirely new tasks.

The piece gives no clear reason why narrow, well-scoped 1-person-day tasks should scale up to 10,000-person-year projects. If they did, we'd expect far more 10,000-person-year projects in the real economy, because firms' learning curve for scaling up would approximate a straight line. Instead, there are very few 10,000-person-year projects and very many 1-person-day projects.

It seems more likely that this will spend an unimaginable amount of compute to produce models that are incredibly good at a very precise form of IP theft, and not especially good at any generalisable skill. It's ludicrously rare for an engineer (or author, illustrator, etc.) to be tasked with "create a pixel-perfect reimplementation of this existing tool".

Comment by rightbyte a day ago

> models which are incredibly good at a very precise form of IP theft

Do I smell a big success? Copyright laundering is the killer app of AI thus far.