Comment by joshstrange

Comment by joshstrange a day ago

120 replies

I could not agree more with this. 90% of AI features feel tacked on and useless and that’s before you get to the price. Some of the services out here are wanting to charge 50% to 100% more for their sass just to enable “AI features”.

I’m actually having a really hard time thinking of an AI feature other than coding AI feature that I actually enjoy. Copilot/Aider/Claude Code are awesome but I’m struggling to think of another tool I use where LLMs have improved it. Auto completing a sentence for the next word in Gmail/iMessage is one example, but that existed before LLMs.

I have not once used the features in Gmail to rewrite my email to sound more professional or anything like that. If I need help writing an email, I’m going to do that using Claude or ChatGPT directly before I even open Gmail.

petekoomen a day ago

One of the interesting things I've noticed is that the best experiences I've had with AI are with simple applications that don't do much to get in the way of the model, e.g. chatgpt and cursor/windsurf.

I'm hopeful that as devs figure out how to build better apps with AI we'll have have more and more "cursor moments" in other areas in our lives

  • dangus a day ago

    Perhaps the real takeaway is that there really is only one product, two if you count image generation.

    Perhaps the only reason Cursor is so good is because editing code is so similar to the basic function of an LLM without anything wrapped around it.

    Like, someone prove me wrong by linking 3 transformative AI products that:

    1. Have nothing to do with "chatting" to a thin wrapper (couldn't just be done inside a plain LLM with a couple of file uploads added for additional context)

    2. Don't involve traditional ML that has existed for years and isn't part of the LLM "revolution."

    3. Has nothing to do with writing code

    For example, I recently used an AI chatbot that was supposed to help me troubleshoot a consumer IoT device. It basically regurgitated steps from the manual and started running around in circles because my issue was simply not covered by documentation. I then had to tell it to send me to a human. The human had more suggestions that the AI couldn't think of but still couldn't help because the product was a piece of shit.

    Or just look at Amazon Q. Ask it a basic AWS question and it'll just give you a bogus "sorry I can't help with that" answer where you just know that running over to chatgpt.com will actually give you a legitimate answer. Most AI "products" seem to be castrated versions of ChatGPT/Claude/Gemini.

    That sort of overall garbage experience seems to be what is most frequently associated with AI. Basically, a futile attempt to replace low-wage employees that didn't end up delivering any value to anyone, especially since any company interested in eliminating employees just because "fuck it why not" without any real strategy probably has a busted-ass product to begin with.

    Putting me on hold for 15 minutes would have been more effective at getting me to go away and no compute cycles would have been necessary.

    • leoedin 19 hours ago

      Outside of coding, Google's NotebookLM is quite useful for analysing complex documentation - things like standards and complicated API specs.

      But yes, an AI chatbot that can't actually take any actions is effectively just regurgitating documentation. I normally contact support because the thing I need help with is either not covered in documentation, or requires an intervention. If AI can't make interventions, it's just a fancy kind of search with an annoying interface.

      • dangus 17 hours ago

        I don’t deny that LLMs are useful, merely that they only represent one product that does a small handful of things well, where the industry-specific applications don’t really involve a whole lot of extra features besides just “feed in data then chat with the LLM and get stuff back.”

        Imagine if during the SaaS or big data or containerizaiton technology “revolutions” the application being run just didn’t matter at all. That’s kind of what’s going on with LLMs. Almost none of the products are all that much better than going to ChatGPT.com and dumping your data into the text box/file uploader and seeing what you get back.

        Perhaps an analogy to describe what I mean would be if you were comparing two SaaS apps, like let’s say YNAB and the Simplifi budget app. In the world of the SaaS revolution, the capabilities of each application would be competitive advantages. I am choosing one over the other for the UX and feature list.

        But in the AI LLM world, the difference between competing products is minimal. Whether you choose Cursor or Copilot or Firebase Studio you’re getting the same results because you’re feeding the same data to the same AI models. The companies that make the AI technologies basically don’t have a moat themselves, they’re basically just PaaS data center operators.

    • miki123211 18 hours ago

      Everything where structured output is involved, from filling in forms based on medical interview transcripts / court proceedings / calls, to an augmented chatbot that can do things for you (think hotel reservations over the phone), to directly generating forms / dashboards / pages in your system.

      • jajko 17 hours ago

        If thats the best current llms can do, my job is secured till retirement

        • ben_w 12 hours ago

          The best that current LLMs can do is PhD-level science questions and getting high scores in coding contests.

          Your job? Might be secure for a lifetime, might be gone next week. No way to tell — "intelligence" isn't yet so well understood to just be an engineering challenge, but it is so well understood that the effect on jobs may be the same.

    • ZephyrBlu 17 hours ago

      Two off the top of my head:

      - https://www.clay.com/

      - https://www.granola.ai/

      There are a lot of tools in the sales space which fit your criteria.

      • dangus 17 hours ago

        Granola is the exact kind of product I’m criticizing as being extremely basic and barely more than a wrapper. It’s just a meeting transcriber/summarizer, barely provides more functionality than leaving the OpenAI voice mode on during a call and then copying and pasting your written notes into ChatGPT at the end.

        Clay was founded 3 years before GPT 3 hit the market so I highly doubt that the majority of their core product runs on LLM-based AI. It is probably built on traditional machine learning.

    • ghaff a day ago

      I have used LLMs for some simple text generation for what I’m going to call boilerplate, eg why $X is important at the start of a reference architecture. But maybe it saved me an hour or two in a topic I was already fairly familiar with. Not something I would have paid a meaningful sum for. I’m sure I could have searched and found an article on the topic.

    • edanm a day ago

      > Perhaps the only reason Cursor is so good is because editing code is so similar to the basic function of an LLM without anything wrapped around it.

      I think this is an illusion. Firstly, code generation is a big field - it includes code completion, generating entire functions, and even agenting coding and the newer vibe-coding tools which are mixes of all of these. Which of these is "the natural way LLMs work"?

      Secondly, a ton of work goes into making LLMs good for programming. Lots of RLHF on it, lots of work on extracting code structure / RAG on codebases, many tools.

      So, I think there are a few reasons that LLMs seem to work better on code:

      1. A lot for work on it has been done, for many reasons, mostly monetary potential and that the people who build these systems are programmers.

      2. We here tend to have a lot more familiarity with these tools (and this goes to your request above which I'll get to).

      3. There are indeed many ways in which LLMs are a good fit for programming. This is a valid point, though I think it's dwarfed by the above.

      Having said all that, to your request, I think there are a few products and/or areas that we can point to that are transformative:

      1. Deep Research. I don't use it a lot personally (yet) - I have far more familiarity with the software tools, because I'm also a software developer. But I've heard from many people now that these are exceptional. And they are not just "thing wrappers on chat", IMO.

      2. Anything to do with image/video creation and editing. It's arguable how much these count as part of the LLM revolution - the models that do these are often similar-ish in nature but geared towards images/videos. Still, the interaction with them often goes through natural language, so I definitely think these count. These are a huge category all on their own.

      3. Again, not sure if these "count" in your estimate, but AlphaFold is, as I understand it, quite revolutionary. I don't know much about the model or the biology, so I'm trusting others that it's actually interesting. It is some of the same underlying architecture that makes up LLMs so I do think it counts, but again, maybe you want to only look at language-generating things specifically.

      • dangus 16 hours ago

        1. Deep Research (if you are talking about the OpenAI product) is part of the base AI product. So that means that everything building on top of that is still a wrapper. In other words, nobody besides the people making base AI technology is adding any value. An analogy to how pathetic the AI market is would be if during the SaaS revolution everyone just didn’t need to buy any applications and directly used AWS PaaS products like RDS directly with very similar results compared to buying SaaS software. OpenAI/Gemini/Claude/etc are basically as good as a full blown application that leverage their technology and there’s very limited need to buy wrappers that go around them.

        2. Image/video creation is cool but what value is it delivering so far? Saving me a couple of bucks that I would be spending on Fiverr for a rough and dirty logo that isn’t suitable for professional use? Graphic designers are already some of the lowest paid employees at your company so “almost replacing them but not really” isn’t a very exciting business case to me. I would also argue that image generation isn’t even as valuable as the preceding technology, image recognition. The biggest positive impact I’ve seen involves GPU performance for video games (DLSS/FSR upscaling and frame generation).

        3. Medical applications are the most exciting application of AI and ML. This example is something that demonstrates what I mean with my argument: the normal steady pace of AI innovation has been “disrupted” by LLMs that have added unjustified hype and investment to the space. Nobody was so unreasonably hyped up about AI until it was packaged as something you can chat with since finance bro investors can understand that, but medical applications of neural networks have been developing since long before ChatGPT hit the scene. The current market is just a fever dream of crappy LLM wrappers getting outsized attention.

    • otabdeveloper4 20 hours ago

      LLMs make all sorts of classification problems vastly easier and cheaper to solve.

      Of course, that isn't a "transformative AI product", just a regular old product that improves your boring old business metrics. Nothing to base a hype cycle on, sadly.

      • molf 16 hours ago

        Agree 100%.

        We built a very niche business around data extraction & classification of a particular type of documents. We did not have access to a lot of sample data. Traditional ML/AI failed spectacularly.

        LLMs have made this super easy and the product is very successful thanks to it. Customers love it. It is definitely transformative for them.

    • kybernetikos a day ago

      This challenge is a little unfair. Chat is an interface not an application.

      • RedNifre a day ago

        Generating a useful sequence of words or word-like tokens is an application.

    • aetherspawn 14 hours ago

      Is Cursor actually good though? I get so frustrated at how confidently it spews out the completely wrong approach.

      When I ask it to spit out Svelte config files or something like that, I end up having to read the docs myself anyway because it can’t be trusted, for instance it will spew out tons of lines to configure every parameter as something that looks like the default when all it needs to do is follow the documentation that just uses defaults()

      And it goes out of its way to “optimise” things that actually picks the wrong options versus the defaults which are fine.

    • whiddershins a day ago

      LLMs in data pipelines enable all sorts of “before impossible” stuff. For example, this creates an event calendar for you based on emails you have received:

      https://www.indexself.com/events/molly-pepper

      (that’s mine, and is due a bugfix/update this week. message me if you want to try it with your own emails)

      I have a couple more LLM-powered apps in the works, like next few weeks, that aren’t chat or code. I wouldn’t call them transformative, but they meet your other criteria, I think.

      • semi-extrinsic a day ago

        What part of this can't be done by a novice programmer who knows a little pattern matching and has enough patience to write down a hundred patterns to match?

teeray a day ago

> This demo uses AI to read emails instead of write them

LLMs are so good at summarizing that I should basically only ever read one email—from the AI:

You received 2 emails today that need your direct reply from X and Y. 1 is still outstanding from two days ago, _would you like to send an acknowledgment_? You received 6 emails from newsletters you didn’t sign up for but were enrolled after you bought something _do you want to unsubscribe from all of them_ (_make this a permanent rule_).

  • namaria a day ago

    I have fed LLMs PDF files, asked about the content and gotten nonsense. I would be very hesitant to trust them to give me an accurate summary of my emails.

    • HdS84 a day ago

      One of our managers uses Ai to summarize everything. Too bad it missed important caveats for an offer. Well, we burned an all nighters to correct the offer, but he did not read twenty pages but one...

      • namaria a day ago

        I don't know if this is the case but be careful about shielding management from the consequences of their bad choices at your expense. It all but guarantees it will get worse.

      • BeetleB a day ago

        Did he pull all nighters to fix it? If not, it wasn't "too bad" for him. I doubt he'll change his behavior.

      • pjc50 20 hours ago

        Where's the IBM slide about "a machine cannot be held accountable, therefore a machine should never make a management decision"?

        Of course, often it's quite hard to hold management accountable either.

        • checkyoursudo 17 hours ago

          Isn't a solution to assign vicarious liability to whomever approves the use of the decision-making machine?

  • nradov a day ago

    LLMs are terrible at summarizing technical emails where the details matter. But you might get away with it, at least for a while, in low performing organizations that tolerate preventable errors.

    • imp0cat a day ago

      This. LLMs seem to be great for 90+% of stuff, but sometimes, they just spew weird stuff.

    • HDThoreaun 17 hours ago

      If I get a technical email I read it myself. The summary just needs to say technical email from X with priority Y about problem Z

  • FabHK a day ago

    I got an email from the restaurant saying "We will confirm your dinner reservation as soon as we can", and Apple Intelligence summarizing it as "Dinner reservation confirmed." Maybe it can not only summarize, but also see the future??

    • rcarmo a day ago

      Well, at least it doesn’t make up words. The Portuguese version of Apple Intelligence made up “Invitaçāo” (think “invitashion”) and other idiocies the very first day it started working in the EU.

  • koolba a day ago

    > LLMs are so good at summarizing that I should basically only ever read one email—from the AI

    This could get really fun with some hidden text prompt injection. Just match the font and background color.

    Maybe these tools should be doing the classic air gap approach of taking a picture of the rendered content and analyzing that.

  • joshstrange a day ago

    What system are you using to do this? I do think that this would provide value for me. Currently, I barely read my emails, which I'm not exactly proud of, but it's just the reality. So something that summarized the important things every day would be nice.

  • amrocha 12 hours ago

    I fed an LLM the record of a chat between me and a friend, and asked it to summarize the times that we met in the past 3 months.

    Every time it gave me different results, and not once did it actually get it all right.

    LLMs are horrible for summarizing things. Summarizing is the art of turning low information density text into high information density text. LLMs can’t deal in details, so they can never accurately summarize anything.

  • throwaway290 a day ago

    What is the reason to unsub ever in that world? Are you saying the LLM can't skip emails? Seems like an arbitrary rule

danielbln a day ago

I enjoy Claude as a general purpose "let's talk about this niche thing" chat bot, or for general ideation. Extracting structured data from videos (via Gemini) is quite useful as well, though to be fair it's not a super frequent use case for me.

That said, coding and engineering is by far the most common usecase I have for gen AI.

  • joshstrange a day ago

    Oh, I'm sorry if it wasn't clear. I use Claude and ChatGPT to talk to about a ton of topics. I'm mostly referring to AI features being added to existing SaaS or software products. I regularly find that moving the conversation to ChatGPT or Claude is much better than trying to use anything that they may have built into their existing product.

rcarmo a day ago

The e-mail agent example is so good that it makes everything else I’ve seen and used pointless by comparison. I wonder why nobody’s done it that way yet.

sanderjd a day ago

I think the other application besides code copiloting that is already extremely useful is RAG-based information discovery a la Notion AI. This is already a giant improvement over "search google docs, and slack, and confluence, and jira, and ...".

Just integrated search over all the various systems at a company was an improvement that did not require LLMs, but I also really like the back and forth chat interface for this.

dale_glass 21 hours ago

I find that ChatGPT o3 (and the other advanced reasoning models) are decently good at answering questions with a "but".

Google is great at things like "Top 10 best rated movies of 2024", because people make lists of that sort of thing obsessively.

But Google is far less good at queries like "Which movies look visually beautiful but have been critically panned?". For that sort of thing I have far more luck with chatgpt because it's much less of a standard "top 10" list.

  • joshstrange 16 hours ago

    o3 has been a big improvement on Deep Research IMHO. o1 (or whatever model I originally used with it) was interesting but the results weren't always great. o3 has done some impressive research tasks for me and, unlike the last model I used, when I "check its work" it has always been correct.

knightscoop a day ago

I wonder sometime if this is why there is such an enthusiasm gap over AI between tech people and the general public. It's not just that your average person can't program; it's that they don't even conceptually understand why programming could unlock.

nicolas_t a day ago

I like perplexity when I need a quick overview of a topic with references to relevant published studies. I often use it when researching what the current research says on parenting questions or education. It's not perfect but because the answers link to the relevant studies it's a good way to get a quick overview of research on a given topic

bamboozled a day ago

Have you ever been cooking and asked Siri to set a timer? That's basically the most used AI feature outside of "coding" I can think of.

  • joshstrange 15 hours ago

    Setting a timer and setting a reminder. Occasionally converting units of measure. That's all I can rely on Siri (or Alexa) for and even then sometimes Siri doesn't make it clear if it did the thing. Most importantly, "set a reminder", it shows the text, and then the UI disappears, sometimes the reminder was created, sometimes not. It's maddening since I'm normally asking to be reminded about something important that I need to get recorded/tracked so I can "forget" it.

    The number of times I've had 2 reminders fire back-to-back because I asked Siri again to create one since I was _sure_ it didn't create the first one.

    Siri is so dumb and it's insane that more heads have not rolled at Apple because of it (I'm aware of the recent shakeup, it's about a decade too late). Lastly, whoever decided to ship the new Siri UI without any of the new features should lose their job. What a squandered opportunity and effectively fraud IMHO.

    More and more it's clear that Tim Cook is not the person that Apple needs at the helm. My mom knows Siri sucks, why doesn't the CEO and/or why is he incapable of doing anything to fix it. Get off your Trump-kissing, over-relying-on-China ass and fix your software! (Siri is not the only thing rotten)

bigstrat2003 a day ago

Honestly I don't even enjoy coding AI features. The only value I get out of AI is translation (which I take with a grain of salt because I don't know the other language and can't spot hallucinations, but it's the best tool I have), and shitposting (e.g. having chatGPT write funny stories about my friends and sending it to them for a laugh). I can't say there's an actual productive use case for me personally.

  • genewitch 20 hours ago

    I've anecdotally tested translations by ripping the video with subtitles and having whisper subtitle it, and also asking several AI to translate the .srt or .vtt file (subtotext I think does this conversion if you don't wanna waste tokens on the metadata)

    Whisper large-v3, the largest model I have, is pretty good, getting nearly identical translations to chatgpt or whatever, Google's default speech to text. The fun stuff is when you ask for text to text translations from LLMs.

    I did a real small writeup with an example but I don't have a place to publish nor am I really looking for one.

    I used whisper to transcribe nearly every "episode" of the Love Line syndicated radio show from 1997-2007 or so. It took, iirc, several days. I use it to grep the audio, as it were. I intend to do the same with my DVDs and such, just so I never have to Google "what movie / tv show is that line from?" I also have a lot of art bell shows, and a few others to transcribe.

    • farrelle25 20 hours ago

      > I used whisper to transcribe nearly every "episode" of the Love Line syndicated radio show from 1997-2007 or so.

      Yes - second this. I found 'Whisper' great for that type of scenario as well.

      A local monastery had about 200 audio talks (mp3). Whisper converted them all to text and GPT did a small 'smoothing' of the output to make it readable. It was about half a million words and only took a few hours.

      The monks were delighted - they can distribute their talks in small pamplets / PDFs now and is extra income for the community.

      Years ago as a student I did some audio transcription manually and something similar would have taken ages...

      • genewitch 12 hours ago

        I actually was asked by Vermin Supreme to hand-caption some videos, and i instantly regretted besmirching the existing subtitles. I was correct, the subtitles were awful, but boy, the thought of hand-transcribing something with Subtitle Edit had me walking that back pretty quick - and this was for a 4 minute video - however it was lyrical over music, so AI barely gave a starting transcription.

    • pjc50 20 hours ago

      I wanted this to work with Whisper, but the language I tried it with was Albanian and the results were absolutely terrible - not even readable English. I'm sure it would be better with Spanish or Japanese.

      • ben_w 19 hours ago

        According to the Common Voice 15 graph on OpenAI's github repository, Albanian is the single worst performance you could have had: https://github.com/openai/whisper

        But for what it's worth, I tried putting the YouTube video of Tom Scott presenting at the Royal Institute into the model, and even then the results were only "OK" rather than "good". When even a professional presenter and professional sound recording in a quiet environment has errors, the model is not really good enough to bother with.

Ntrails 21 hours ago

> Auto completing a sentence for the next word in Gmail/iMessage is one example

Interestingly, I despise that feature. It breaks the flow of what is actually a very simple task. Now I'm reading, reconsidering if the offered thing is the same thing I wanted over and over again.

The fact that I know this and spend time repeatedly disabling the damned things is awfully tiresome (but my fault for not paying for my own email etc etc)

  • genewitch 21 hours ago

    I've been using Fastmail in lieu of gmail for ten or eleven years. If you have a domain and control the DNS, I recommend it. At least you're not on Google anymore, and you're paying for fastmail, so it feels better - less like something is reading your emails.

tomjen3 12 hours ago

I really like my speech-to-text program, and I find using ChatGPT to look up things and answer questions is a much superior experience to Google, but otherwise, I completely agree with you.

Companies see that AI is a buzzword that means your stock goes up. So they start looking at it as an answer to the question: "How can I make my stock go up?" instead of "How can I create a better product", and then let the stock go up from creating a better product.

apwell23 a day ago

garmin wants me to pay for some gen-ai workout messages on connect plus. Its the most absurd AI slop of all. Same with strava. I workout for mental relaxation and i just hate this AI stuff being crammed in there.

Atleast clippy was kind of cute.

  • nradov a day ago

    Strava employees claim that casual users like the AI activity summaries. Supposedly users who don't know anything about exercise physiology didn't know how to interpret the various metrics and charts. I don't know if I believe that but it's at least plausible.

    Personally I wish I could turn off the AI features, it's a waste of space.

    • rurp a day ago

      Anytime someone from a company says that users like the super trendy thing they just made I take it with a sizeable grain of salt. Sometimes it's true, and maybe it is true for Strava, but I've seen enough cases where it isn't to discount such claims down to ~0.

    • genewitch 20 hours ago

      The guy at the Wendy's drive thru has told me repeatedly that most people don't want ketchup so they stopped putting it in bags by default.

  • danielbln a day ago

    Strava's integration is just so lackluster. It literally turns four numbers from right above the slop message into free text. Thanks Strava, I'm a pro user for a decade, finally I can read "This was a hard workout" after my run. Such useful, much AI.

  • bigstrat2003 a day ago

    At this point, "we aren't adding any AI features" is a selling point for me. I've gotten real tired of AI slop and hype.

  • sandspar a day ago

    I use AI chatbots for 2+ hours a day but the Garmin thing was too much for me. The day they released their AI Garmin+ subscription, I took off my Forerunner and put it in a drawer. The whole point of Garmin is that it feels emotionally clean to use. Garmin adding a scammy subscription makes the ecosystem feel icky, and I'm not going to wear a piece of clothing that makes me feel icky. I don't think I'll buy a Garmin watch again.

    (Since taking off the watch, I miss some of the data but my overall health and sleep haven't changed.)

Andugal a day ago

> I’m actually having a really hard time thinking of an AI feature other than coding AI feature that I actually enjoy.

If you attend a lot of meetings, having an AI note-taker take notes for you and generate a structured summary, follow-up email, to-do list, and more will be an absolute game changer.

(Disclaimer, I'm the CTO of Leexi, an AI note-taker)

  • AlexandrB a day ago

    The catch is: does anyone actually read this stuff? I've been taking meeting notes for meetings I run (without AI) for around 6 months now and I suspect no one other than myself has looked at the notes I've put together. I've only looked back at those notes once or twice.

    A big part of the problem is even finding this content in a modern corporate intranet (i.e. Confluence) and having a bunch of AI-generated text in there as well isn't going to help.

    • Karrot_Kream a day ago

      When I was a founding engineer at a(n ill-fated) startup, we used an AI product to transcribe and summarize enterprise sales calls. As a dev it was usually a waste of my time to attend most sales meetings, but it was highly illustrative to read the summaries after the fact. In fact many, many of the features we built were based on these action items.

      If you're at the scale where you have corporate intranet, like Confluence, then yeah AI note summarizing will feel redundant because you probably have the headcount to transcribe important meetings (e.g. you have a large enough enterprise sales staff that part of their job description is to transcribe notes from meetings rather than a small staff stretched thin because you're on vanishing runway at a small startup.) Then the natural next question arises: do you really need that headcount?

    • bee_rider a day ago

      I thought the point of having a meeting-notes person was so that at least one person would pay attention to details during the meeting.

      • jethro_tell a day ago

        I thought it was so I could go back 1 year and say, 'I was against this from the beginning and I was quite vocal that if you do this, the result will be the exact mess you're asking me to clean up now.'

        • bee_rider a day ago

          Ah, but a record for CYA and “told you so”, that’s pure cynicism. “At least one person paying attention” at least we can pretend the intent was to pair some potential usefulness with our cynicism.

      • gus_massa a day ago

        Also, ensure that if the final decition was to paint the the bike shed green, everyone agree it was the final decitions. (In long discusions, sometimes people misunderstand which was the final decition.)

        • soco a day ago

          If they misunderstood they will still disagree so the meeting notes will trigger another mail chain and, you guessed right, another meeting.

    • bluGill a day ago

      What is the problem?

      Notes are valuable for several reasons.

      I sometimes take notes myself just to keep myself from falling asleep in an otherwise boring meeting where I might need to know something shared (but probably not). It doesn't matter if nobody reads these as the purpose wasn't to be read.

      I have often wished for notes from some past meeting because I know we had good reasons for our decisions but now when questioned I cannot remember them. Most meetings this doesn't happen, but if there were automatic notes that were easy to search years latter that would be good.

      Of course at this point I must remind you that the above may be bad. If there is a record of meeting notes then courts can subpoena them. This means meetings with notes have to be at a higher level were people are not comfortably sharing what every it is they are thinking of - even if a bad idea is rejected the courts still see you as a jerk for coming up with the bad idea.

      • namaria a day ago

        Accurate notes are valuable for several reasons.

        Show me an LLM that can reliably produce 100% accurate notes. Alternatively, accept working in a company where some nonsense becomes future reference and subpoenable documentation.

    • falcor84 a day ago

      I agree, and my vision of this is that instead of notes, the meeting minutes would be catalogued into a vector store, indexed by all relevant metadata. And then instead of pre-generated notes, you'll get what you want on the fly, with the LLM being the equivalent of chatting with that coworker who's been working there forever and has context on everything.

    • Yizahi 19 hours ago

      You can probably buy another neural net SAAS subscription to summarize the summaries for you :)

  • yesfitz a day ago

    Is Leexi's AI note-taker able to raise its hand in a meeting (or otherwise interrupt) and ask for clarification?

    As a human note-taker, I find the most impactful result of real-time synthesis is the ability to identify and address conflicting information in the moment. That ability is reliant on domain knowledge and knowledge of the meeting attendees.

    But if the AI could participate in the meeting in real time like I can, it'd be a huge difference.

    • bdavisx a day ago

      If you are attending the meeting as well as using an AI note-taker, then you should be able to ask the clarifying question(s). If you understand the content, then you should understand the AI notes (hopefully), and if you ask for clarification, then the AI should add those notes too.

      Your problem really only arises if someone is using the AI to stand in for them at the meeting vs. use it to take notes.

      • yesfitz a day ago

        I'll pretend you asked a few questions instead of explaining my work to me without understanding.

        1. "Why can't you look at the AI notes during the meeting?" The AI note-takers that I've seen summarize the meeting transcript after the meeting. A human note-taker should be synthesizing the information in real-time, allowing them to catch disagreements in real-time. Not creating the notes until after the meeting precludes real-time intervention.

        2. "Why not use [AI Note-taker whose notes are available during the meeting]?" Even if there were a real-time synthesis by AI, I would have to keep track of that instead of the meeting in order to catch the same disagreements a human note-taker would catch.

        3. "What problem are you trying to solve?" My problem is that misunderstandings are often created or left uncorrected during meetings. I think this is because most people are thinking about the meeting topics from their perspective, not spending time synthesizing what others are saying. My solution to this so far has been human note-taking by a human familiar with the meeting topic. This is hard to scale though, so I'm curious to see if this start-up is working on building a note-taking AI with the benefits I've mentioned seem to be unique to humans (for now).

  • bluGill a day ago

    But that isn't writing for me, it is taking notes for me. There is a difference. I don't need something to write for me - I know how to write. What I need is someone to clean up grammar, fact check the details, and otherwise clean things up. I have dysgraphia - a writing disorder - so I need help more than most, but I still don't need something to write my drafts for me: I can get that done well enough.

  • Yizahi 19 hours ago

    In my company have a few "summaries" made by Zoom neural net, which we share for memes on the joke chats, they are so hilariously bad. No one uses that functionality seriously. I don't know about your app, but I've yet to see a working note taker in the wild.

  • joshstrange a day ago

    I've used multiple of these types of services and I'll be honest, I just don't really get the value. I'm in a ton of meetings and I run multiple teams but I just take notes myself in the meetings. Every time I've compared my own notes to the notes that the the AI note taker took, it's missing 0-2 critical things or it focuses on the wrong thing in the meeting. I've even had the note taker say essentially the opposite of what we decided on because we flip-flopped multiple times during the meeting.

    Every mistake the AI makes is completely understandable, but it's only understandable because I was in the meeting and I am reviewing the notes right after the meeting. A week later, I wouldn't remember it, which is why I still just take my own notes in meetings. That said, having having a recording of the meeting and or some AI summary notes can be very useful. I just have not found that I can replace my note-taking with an AI just yet.

    One issue I have is that there doesn't seem to be a great way to "end" the meeting for the note taker. I'm sure this is configurable, but some people at work use Supernormal and I've just taken to kicking it out of of meetings as soon as it tries to join. Mostly this is because I have meetings that run into another meeting, and so I never end the Zoom call between the meetings (I just use my personal Zoom room for all meetings). That means that the AI note taker will listen in on the second meeting and attribute it to the first meeting by accident. That's not the end of the world, but Supernormal, at least by default, will email everyone who was part of the the meeting a rundown of what happened in the meeting. This becomes a problem when you have a meeting with one group of people and then another group of people, and you might be talking about the first group of people in the second meeting ( i.e. management issues). So far I have not been burned badly by this, but I have had meeting notes sent out to to people that covered subjects that weren't really something they needed to know about or shouldn't know about in some cases.

    Lastly, I abhor people using an AI notetaker in lieu of joining a meeting. As I said above, I block AI note takers from my zoom calls but it really frustrates me when an AI joins but the person who configured the AI does not. I'm not interested in getting messages "You guys talked about XXX but we want to do YYY" or "We shouldn't do XXX and it looks like you all decided to do that". First, you don't get to weigh in post-discussion, that's incredibly rude and disrespectful of everyone's time IMHO. Second, I'm not going to help explain what your AI note taker got wrong, that's not my job. So yeah, I'm not a huge fan of AI note takers though I do see where they can provide some value.

  • yoyohello13 a day ago

    We've had the built-in Teams summary AI for a while now and it absolutely misses important details and nuance that causes problems later.

  • soco a day ago

    I'm not a CTO so maybe your wold is not my world, but for me the advantage of taking the notes myself is that only I know what's important to me, or what was news to me. Teams Premium - you can argue it's so much worse than your product - takes notes like "they discussed about the advantages of ABC" but maybe exactly those advantages are advantageous to know right? And so on. Then like others said, I will review my notes once to see if there's a followup, or a topic to research, and off they go to the bin. I have yet to need the meeting notes of last year. Shortly put: notes apps are to me a solution in search of a problem.

  • UncleMeat 16 hours ago

    You do you.

    I attend a lot of meetings and I have reviewed the results of an AI note taker maybe twice ever. Getting an email with a todo-list saves a bit of time of writing down action items during a meeting, but I'd hardly consider it a game changer. "Wait, what'd we talk about in that meeting" is just not a problem I encounter often.

    My experience with AI note takers is that they are useful for people who didn't attend the meeting and people who are being onboarded and want to be able to review what somebody was teaching them in the meeting and much much much less useful for other situations.