Show HN: OpenAI/reflect – Physical AI Assistant that illuminates your life

(github.com)

89 points by Sean-Der 4 days ago

56 comments

I have been working on making WebRTC + embedded devices easier for a few years. This is a hackathon project that pulled some of that together; I hope others build on it, or that it inspires them to play with hardware. I worked on it with two other people, and I had a lot of fun with some of the ideas that came out of it.

* Extensible/hackable - I tried to keep the code as simple as possible so others can fork and modify it easily.

* Communicate with light. With function calling, the assistant changes the bulb's color so it can match your mood or feelings (see the sketch after this list).

* Populate info from clients you control. I wanted to experiment with having it guide you through yesterday and today.

* Phone as control. Setting up new devices can be frustrating. I liked that this didn't require any WiFi setup; it just routes everything through your phone. It's also nice that the device doesn't actually hold any sensitive data.
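For the function-calling bullet above, here's a minimal sketch of what an on-device handler could look like. This is not the repo's actual code: the set_light_color tool name, its {"r", "g", "b"} argument schema, and the strip handle are assumptions; it uses ESP-IDF's cJSON and led_strip components.

    #include "cJSON.h"
    #include "led_strip.h"

    extern led_strip_handle_t strip; /* assumed: configured at boot */

    /* Hypothetical handler for a set_light_color tool call whose
       arguments arrive as JSON, e.g. {"r": 255, "g": 80, "b": 0}. */
    void handle_set_light_color(const char *args_json)
    {
        cJSON *args = cJSON_Parse(args_json);
        if (args == NULL)
            return;

        const cJSON *r = cJSON_GetObjectItem(args, "r");
        const cJSON *g = cJSON_GetObjectItem(args, "g");
        const cJSON *b = cJSON_GetObjectItem(args, "b");

        /* cJSON_IsNumber is NULL-safe, so missing keys fall through. */
        if (cJSON_IsNumber(r) && cJSON_IsNumber(g) && cJSON_IsNumber(b)) {
            led_strip_set_pixel(strip, 0, r->valueint, g->valueint, b->valueint);
            led_strip_refresh(strip);
        }

        cJSON_Delete(args);
    }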

Sean-Der 4 days ago

I have also been working with Daily on https://github.com/pipecat-ai/pipecat-esp32

I see so much potential if I can make hardware hacking + WebRTC easy. Not just for AI assistants, but for security cameras + robotics too. If anyone has questions/ideas/feedback, I'm here to help :)

voxelizer 4 days ago

I love seeing that hackathons are encouraged inside OpenAI and, most importantly, that the results are shared :)

  • tesch1 2 days ago

    A cynic might wonder if this is just another way for a corporation selling advertising to get more of your data. Who is sharing more? :)

kelseydh 3 days ago

It annoys me a lot that the current smart-home devices, such as Amazon Alexa or Google Home, can't hold the kind of pleasant conversations OpenAI's models can.

  • crimsoneer 3 days ago

    The way the Gemini Google Assistant rollout has been SO SLOW is utterly baffling to me.

  • godelski 3 days ago

    I honestly can't tell if this comment is joking, serious, or AI lol

    • HPsquared 3 days ago

      LLMs have a lot of advantages over humans for making conversation.

      Even setting aside the main advantages (24/7 availability, and the ability to talk about basically any topic for as much or as little time as you want), they also get basically every obscure reference/analogy/metaphor and how it ties into the topic at hand.

      Usually when you're talking to another person, the intersection of obscure references you can confidently make (with the assumption your partner will understand them) is much more limited. I enjoy making those random connections so it's a real luxury to have a conversation partner that gets them all.

      Another one is simply the time and attention they can bring to things a normal person would never have time for. I wouldn't want to talk someone's ear off unless I was paying them, and even then I don't want to subject someone to topics of interest only to myself.

      (Edit: I suppose it's the final apotheosis of the atomised individual leaving all traces of community behind)

      • eloisius 3 days ago

        > final apotheosis of the atomised individual leaving all traces of community behind

        It's not. In 10 years this is going to look as dumb as the biohacker wetware bros surgically embedding RFID chips in their hands. There's much more to communication (and life) than receiving pantomimed validation for your obscure references. You could be throwing away opportunities to connect with another person who would genuinely share your interests and help you grow as a person. Having a useless magnet in your fingertip is going to seem brilliant compared to ending up socially withdrawn and mentally unwell because you traded human companionship for a chat bot.

        • HPsquared 3 days ago

          I think it's a much bigger social phenomenon already. Social talk will become even more a matter of performance, positioning and signalling, rather than something pursued for enjoyment of the thing itself.

          Maybe I'm just weird but LLM conversations seem generally more interesting and enlightening, even in these early iterations of the technology.

      • godelski 3 days ago

        Honestly, it sounds like you need a therapist, not a LLM. I'm not saying this as some quip, I'm saying this because what you wrote is that concerning.

OJFord 3 days ago

Why does this need hardware other than the phone? It could just be an app on the phone, couldn't it?

  • Sean-Der 3 days ago

    I was interested in the ‘hands-free’ idea.

    If I put these devices throughout my house, it would allow me to switch AI personalities by proximity.

    You can also use the device without your phone. These devices are also very cheap; I think you could do an audio-only version for around $5.
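    To illustrate the proximity idea (a hypothetical sketch, not code from the repo: the room IDs, prompts, and prompt_for_room helper are all invented), each device could carry a room ID that selects the system prompt when a session starts:

        #include <string.h>

        /* Hypothetical mapping from a device's room ID to the system
           prompt used when a session starts on that device. */
        struct persona {
            const char *room_id;
            const char *system_prompt;
        };

        static const struct persona personas[] = {
            { "kitchen", "You are a cheerful cooking assistant." },
            { "office",  "You are a terse productivity coach." },
        };

        const char *prompt_for_room(const char *room_id)
        {
            for (size_t i = 0; i < sizeof(personas) / sizeof(personas[0]); i++) {
                if (strcmp(personas[i].room_id, room_id) == 0)
                    return personas[i].system_prompt;
            }
            return "You are a helpful assistant."; /* default persona */
        }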

Telemakhos 4 days ago

Somewhere in here there's a joke about how many tokens it takes to turn on a lightbulb.

lagrange77 4 days ago

Is it my browser, or does the video in the readme not have sound?

  • Sean-Der 4 days ago

    No sound! The YouTube video in the README does.

    I was tempted to put Erik Satie in the README video, but didn't want to risk copyright issues.

countfeng 3 days ago

It would be perfect if it could intelligently link any device you've authorized.

  • Sean-Der 3 days ago

    Can you describe more? I would love to build/try it!

TZubiri 4 days ago

I get that this is as-is, but I wonder whether so many ultra-alpha products dilute the OpenAI brand and create redundancy in the product line. It feels like the opposite of Apple's carefully planned product design and product line.

Let's see if it pays off.

  • Sean-Der 4 days ago

    This is just a hackathon project. Not a product in any way.

    My primary work is on backend WebRTC servers. This was just a fun outlet to do client and embedded work. I love writing C and working with microcontrollers; I just can't seem to find a way to do it full time :(

    • dasickis 3 days ago

      We could help you find a pathway there :)

  • tuckerman 4 days ago

    For a developer platform, having examples is useful as a starting point for new projects.

    Also, I'm not sure if it's similar at OpenAI, but when I was at Google it was much easier to get approval to put an open-source project under the Google GitHub org than under my personal account.

  • jgalt212 4 days ago

    They're selling shares at a $500B valuation. The market is telling them everything they are doing is amazing.

    • TZubiri 4 days ago

      Is it possible to differentiate feedback on the initial success of ChatGPT from feedback on whatever came after it?

      It's possible those investments are just the OpenAI owners selling their 2023 ChatGPT success and its profit share.