marricks 4 days ago

They feel behind because they didn't have the smart guy with a new idea a few years back, and HE decided to work at a place which started as open.

Playing catch-up and trying to attract talent from the hot new thing, OpenAI, requires incentives beyond lots of money. I contend actually being open helps.

I'm sure that's one reason Facebook has an open source model: scientists can care about ethics and can be attracted to openness.

michaelt 4 days ago

> They feel behind because they didn't have the smart guy with a new idea a few years back, and HE decided to work at a place which started as open.

The "Attention Is All You Need" guys all worked at Google. Google is where they are despite having the smart guys with a new idea a few years back.

Of course, IMHO it wouldn't have helped Google if they'd kept the transformer architecture secret. They'd have fumbled it because they didn't realise what they had.

  • zozbot234 4 days ago

    Didn't Google have the LaMDA model pretty early, which was even described as "sentient" at some point? That doesn't look "fumbled" to me.

    • michaelt 4 days ago

      What Google did was sit on their ass, not deigning to release anything. In the meantime, OpenAI became a $150 billion company. And Anthropic came out with Claude, and Facebook with Llama, and Mistral with their models.

      Only then did Google realise there might be something to this LLM stuff - so they responded with Bard, a product so poorly received they later had to completely rebrand it. Looks like they didn't have a "sentient" model up their sleeve after all. Then the updated, rebranded model had a bunch of image generation embarrassments of its own.

      Admittedly, they have recovered somewhat since then; they're second on some performance leaderboards, which is respectable.

      But there was a real tortoise-and-hare situation where they thought they were so far ahead they had time for a nap, until they got overtaken. Any lead they had from inventing transformers and being the only people with TPUs has been squandered.

      • cubefox 4 days ago

        I have the impression they regarded generative AI as too dangerous. They never considered making PaLM or LaMDA or Chinchilla or Imagen publicly available until the success of ChatGPT put them at a competitive disadvantage.