Comment by arthurfirst 2 days ago

19 replies

I get the moral argument and even agree with it, but we are a minority, and of course we expect to be able to sell our professional skills -- but if you are 'right' and out of business nobody will know. Is that any better than 'wrong' and still in business?

You might as well work on product marketing for AI, because that is where the client dollars are allocated.

If it's hype, at least you stayed afloat. If it's not, maybe you find a new angle if you can survive long enough? Just survive and wait for things to shake out.

order-matters 2 days ago

Yes, actually - being right and out of business is much better than being wrong and in business when it comes to ethics and morals. I am sure you could find a lot of moral values you would simply refuse to compromise on for the sake of business. The line between moral value and heavy preference, however, is blurry - and is probably where most people have AI placed on the moral spectrum right now. Being out of business shouldn't be a death sentence, and if it is then maybe we are overlooking something more significant.

I am in a different camp altogether on AI, though, and would happily continue to do business with it. I genuinely do not see the difference between it and the computer in general. I could even argue it's the same as the printing press.

What exactly is the moral dilemma with AI? We are all reading this message on devices built off of far more ethically questionable operations. That's not to say two things can't both be bad, but it just looks to me like people are using the moral argument as a way to avoid learning something new while virtue signaling how ethical they are, while at the same time refusing to sacrifice, for ethical reasons, the things they are already accustomed to once they learn more about them. It just all seems rather convenient.

The main issue I see discussed is unethical model training, but let me know of others. Personally, I think you can separate the process from the product. A product isn't unethical just because unethical processes were used to create it. The creator/perpetrator of the unethical process should be held accountable and all benefits taken back so as to kill any perceived incentive to perform the actions, but once the damage is done why let it happen in vain? For example, should we let people die rather than use medical knowledge gained unethically?

Maybe we should target these AI companies if they are unethical: stop them from training any new models using the same unethical practices, hold them accountable for their actions, and distribute the intellectual property and profits gained from existing models to the public. But models that are already trained can actually be used for good, and I personally see it as unethical not to use them.

Sorry for the ramble, but it is a very interesting topic that should probably have as much discussion around it as we can get.

  • arthurfirst 2 days ago

    > Yes, actually - being right and out of business is much better than being wrong and in business when it comes to ethics and morals.

    Yes, but since you are out of business you no longer have an opportunity to fix that situation or adapt it to your morals. It's final.

    Turning the page is a valid choice though. Sometimes a clean slate is what you need.

    > Being out of business shouldn't be a death sentence, and if it is then maybe we are overlooking something more significant.

    Fair point! It feels like a death sentence when you put so much into it though -- a part of you IS dying. It's a natural reflex to revolt at the thought.

    > For example, should we let people die rather than use medical knowledge gained unethically?

    Depends if you are doing it 'for their own good' or not.

    Also, the ends do not justify the means in the world of morals we are discussing -- that is pragmatism/utilitarianism and belongs to the world of the material, not the ideal.

    Finally - Who determines what is ethical? Beyond the 'golden rule'? This is the most important factor. I'm not implying ethics are ALL relative, but beyond the basics they are, and who determines that is more important than the context or the particulars.

    • order-matters 2 days ago

      >Yes, but since you are out of business you no longer have an opportunity to fix that situation or adapt it to your morals. It's final.

      Lots of room for nuance here, but generally I'd say it's more pragmatic to pivot your business to one that aligns with your morals and is still feasible, rather than convince yourself you're going to influence something you have no control over while compromising on your values. I am going to emphasize the relevance of something being an actual moral or ethical dilemma vs. something being a very deep personal preference or a matter of identity/personal branding.

      >Fair point! It feels like a death sentence when you put so much into it though -- a part of you IS dying. It's a natural reflex to revolt at the thought.

      I agree, it is a real loss and I don't mean for it to be treated lightly, but if we are talking about morals and potentially feeling forced to compromise them in order to survive, we should acknowledge it's not really a survival situation.

      >Depends if you are doing it 'for their own good' or not.

      What do you mean by this?

      I am not posing a hypothetical. Modern medicine has plenty of contributions from unethical sources. Should that information be stripped from medical textbooks, and should doctors who use it to inform their decisions be threatened with losing their licenses, until we find an ethical way to relearn it? Knowing that this would likely allow large amounts of suffering to go untreated that could otherwise have been treated? I am sincerely trying not to make this sound like a loaded question.

      Also, this is not saying the means are justified. I want to reiterate my point of explicitly not justifying the means and saying the actors involved in the means should be held maximally accountable.

      I would think, from your stance on the first point, that you would agree here - as by separating the product from the process, you are able to adapt it to your morals.

      >Finally - Who determines what is ethical?

      I agree that philosophically speaking all ethics are relative, and I was intending to make my point from the perspective of navigating the issues as an individual, not as a collective making rules to enforce on others. So you. You determine what is ethical to you.

      However, there are a lot of systems already in place for determining what is deemed ethical behavior in areas where most everyone agrees some level of ethics is required. This is usually done through consensus and committees, with people who are experts in ethics and experts in the relevant field it's being applied to.

      AI is new and this oversight does not exist yet, and it is imperative that we all participate in the conversation, because we are all setting the tone for how this stuff will be handled. Every org may do it differently, and then whatever happens to be common practice will be written down as the guidelines.

    • johnnyanmac 2 days ago

      >It's final.

      You should tell that to all the failed businesses Jobs had or was ousted from. Hell, Trump hasn't really had a single successful business in his life.

      Nothing is final until you draw your last breath.

      >Who determines what is ethical? Beyond the 'golden rule'?

      To be frank, you're probably not the audience being appealed to in this post if you have to suggest 'ethics can be relative'. This is clearly a group of craftsmen offering their hands and knowledge. There are entire organizations that have guidelines if you need some legalese sense of what 'ethical' means here.

  • int_19h 15 hours ago

    > What exactly is the moral dilemma with AI? We are all reading this message on devices built off of far more ethically questionable operations.

    The main difference is that for those devices, the people negatively affected by those operations are far away in another country, and we're already conditioned to accept their exploitation as "that's just how the world works" or "they're better off that way". With AI, the people affected - those whose work was used for training, and those who lose jobs because of it - are much closer. For software engineers in particular, these are often colleagues and friends.

  • sambuccid a day ago

    > The creator/perpetrator of the unethical process should be held accountable and all benefits taken back so as to kill any perceived incentive to perform the actions, but once the damage is done why let it happen in vain?

    That's very similar to other unethical processes (for example, child labour), and we see that governments are often either too slow to move or just not interested, and that's why people try to influence the market by changing what they buy.

    It's similar for AI: some people don't use it so that they don't pay the creators (in money or in personal data) to train the next model, and at the same time they signal to the companies that they wouldn't be future customers of the next model.

    (I'm not necessarily in the group of people avoiding AI, but I can see their point.)

  • derangedHorse a day ago

    > but once the damage is done why let it happen in vain?

    Because there are no great ways to leverage the damage without perpetuating it. Who do you think pays for the hosting of these models? And what do you mean by distributing the IP and profits to the public? If this process were facilitated by the government, I don't have faith they'd be able to allocate capital well enough to keep the current operation sustainable.

johnnyanmac 2 days ago

>but if you are 'right' and out of business nobody will know. Is that any better than 'wrong' and still in business?

Depends. Is it better to be "wrong" and burn all your goodwill for any future endeavors? Maybe, but I don't think the answer is clear-cut for everyone.

I also don't fully agree with us being the "minority". The issue is that the majority of investors are simply not investing anymore. Those remaining are playing high-stakes roulette until the casino burns down.

trial3 2 days ago

> but if you are 'right' and out of business nobody will know. Is that any better than 'wrong' and still in business?

yes [0]

[0]: https://en.wikipedia.org/wiki/Raytheon

  • anonymars 2 days ago

    Can you... elaborate?

    • jfindper 2 days ago

      Not the parent.

      I believe that they are bringing up a moral argument, which I'm sympathetic to, having quit a job before because I found that my personal morals didn't align with the company's, and the cognitive dissonance of continuing to work there was weighing heavily on me. The money wasn't worth the mental fight every day.

      So, yes, in some cases it is better to be "right" and be forced out of business than "wrong" and remain in business. But you have to look beyond just revenue numbers. And different people will have different ideas of "right" and "wrong", obviously.

      • arthurfirst 2 days ago

        Moral arguments are a luxury of thinkers, and only a small percentage of people can be reasoned with that way anyway. In most cases you can manipulate people on morals, but you cannot reason with them.

        Agreed that you cannot be in a toxic situation and not have it affect you -- so if THAT is the case, by all means exit ASAP.

        If it's a perceived ethical conflict, the only rule you need to worry about is the golden rule -- and I do not mean 'he who has the gold makes the rules', I mean the real one. If that conflicts with what you are doing, then also probably make an exit -- but many do not care, trust me... They would take everything from you and feel justified as long as they are told (just told) it's the right thing. They never ask themselves. They do not really think for themselves. This is most people. Sadly.

      • anonymars 2 days ago

        But the parent didn't really argue anything; they just linked to a Wikipedia article about Raytheon. Is that supposed to intrinsically represent "immorality"?

        Have they done more harm than, say, Meta?