Comment by euleriancon a day ago

> The truth that may be shocking to some is that open source contributions submitted by users do not really save me time either, because I also feel I have to do a rigorous review of them.

This truly is shocking. If you are reviewing every single line of every package you intend to use, how do you ever write any code?

adastra22 20 hours ago

That’s not what he said. He said he reviews every line of every pull request submitted to his own projects. Wouldn’t you?

abenga a day ago

You do not need to review every line of every package you use, just the subset of the interface you import/link and use. You have to review every line of code you commit into your project. I think attempting to equate the two is dishonest dissembling.

  • euleriancon a day ago

    To me, the friend's point is exactly what you said: you don't need to review every line of code in a package, just the interface. The author misses the point that there truly is code that you trust without seeing it. At the moment AI code isn't as trustworthy as a well-tested package, but that isn't intrinsic to the technology, just a byproduct of the current state. As AI code becomes more reliable, it will likely become the case that you only need to read the subset of the interface you import/link and use.

    • bluefirebrand 21 hours ago

      This absolutely is intrinsic to the workflow

      Using a package that hundreds of thousands of other people use is low risk; it is battle-tested

      It doesn't matter how good AI code gets: a unique solution that no one else has ever touched is always going to be more brittle and risky than an open source package with tons of deployments

      And yes, if you are using an Open Source package that has low usage, you should be reviewing it very carefully before you embrace it

      Treat AI code as if you were importing from a git repo with 5 installs, not a huge package with Mozilla funding

    • root_axis 21 hours ago

      > At the moment AI code isn't as trustworthy as a well-tested package, but that isn't intrinsic to the technology, just a byproduct of the current state

      This remains to be seen. It's still early days, but self-attention scales quadratically. This is a major red flag for the future potential of these systems.
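
      For a rough sense of what "scales quadratically" means here, a minimal back-of-the-envelope sketch (plain dense attention assumed; the function and numbers are illustrative, not taken from any particular model):

        # Illustrative only: rough cost of vanilla self-attention for a
        # sequence of length n with head dimension d.
        def attention_cost(n, d=128):
            flops = 4 * n * n * d    # QK^T and scores @ V are each roughly 2*n*n*d FLOPs
            score_entries = n * n    # the attention score matrix alone is n x n
            return flops, score_entries

        for n in (1_000, 10_000, 100_000):
            flops, entries = attention_cost(n)
            print(f"n={n:>7}: ~{flops:.1e} FLOPs, {entries:.1e} score entries")

      Going from 10k to 100k tokens multiplies both numbers by 100, which is the scaling concern.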