Comment by hackmaxim 2 days ago

Yes, of course!

Tasks must be coding problems to be completed in a set of open-source repositories. We have eight repositories now and will add more in the future. You can source tasks by identifying commits merged into these repos and creating the following:

- A well-defined task description (what should the AI do?)
- A golden solution (can be sourced from the implementation in the merge commit)
- A test patch (can be sourced from the merge commit; this is a testing suite that verifies whether the AI's solution is correct)
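As a rough sketch of how the golden solution and test patch could be pulled out of a merged commit: the snippet below splits a commit's diff into a non-test patch and a test patch. `MERGE_SHA` and the `tests/` directory are assumptions here, not part of the platform's spec; adjust both to the actual repo layout.

```shell
# Hypothetical sketch: deriving a task's raw materials from a merged commit.
# MERGE_SHA is a placeholder for the merge commit you are adapting; the
# tests/ path is an assumed convention and varies by repository.
MERGE_SHA=HEAD

# Golden solution: everything the commit changed outside the test suite
git diff "${MERGE_SHA}^1" "${MERGE_SHA}" -- . ':(exclude)tests/' > golden_solution.patch

# Test patch: the test-suite changes that verify a candidate solution
git diff "${MERGE_SHA}^1" "${MERGE_SHA}" -- tests/ > test_patch.patch
```

The task description itself still has to be written by hand; the point of the split is that applying only `test_patch.patch` to the pre-merge tree gives a failing suite that the AI's solution must make pass.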

If you can make a task that is hard enough for our AI, you will get paid a fixed amount. That's it!

Once you are approved onto the site, there will be a more detailed tech spec + payment info + a tutorial.

causal 2 days ago

So this has to be a meaningful PR for the open source repository? You can't just invent arbitrary coding challenges?

This seems like trying to get free code contributions with a weird sort of gambling mechanic attached.

  • hackmaxim 2 days ago

    Nothing is free, of course, since we transparently pay per task (and as previously stated, the pay is very generous). There are certainly guidelines for what kinds of tasks you can submit to our platform, but beyond that, you can make any verifiable coding task in any of our qualifying open-source repositories.

    We also definitely do not require you to write new code into the repo. As I said, you can adapt existing merged commits into coding tasks.

    • nl 2 days ago

      How do we know in advance if something is going to be hard enough?

      For example I have a bunch of closed PRs on my Vibe-Prolog project that took multiple attempts to get right: https://github.com/nlothian/Vibe-Prolog/pulls?q=is%3Apr+is%3...

      As a specific example: https://github.com/nlothian/Vibe-Prolog/pull/214 which implements https://github.com/nlothian/Vibe-Prolog/issues/204

      I'd be very interested if it is eligible!

      • hackmaxim a day ago

        Something like this is reasonable. To clarify, we can't support arbitrary tasks yet on our submission platform; the tasks have to belong to a set of supported repositories. But tasks of this difficulty level generally tend to be hard enough.

        You can't know for sure what is going to be the right difficulty in advance; however, you can definitely develop intuition for it as you make more tasks and better understand the model's strengths and weaknesses!

  • carterschonwald 2 days ago

    I've seen more than one flaky startup do this to fluff adoption metrics and perceived activity. It was frustrating.