Comment by abe_m a day ago

I think this is along the lines of the AI horseless carriage[1] topic that is also on the front page right now. You seem to be describing the current method as operated through an AI intermediary. I think the power of AI for CAD will sit at a higher level than lines, faces, and holes. It will be more along the lines of "make a bracket between these two parts", "make this part bolt to that other part", or "attach this pump to this gear train" (where the AI determines that the pump uses an SAE 4-bolt flange of a particular size and a splined connection, then adds the required features to the housing and shafts). I think it will operate on higher structures than current CAD typically works with, and I don't think it will be history-tree and sketch based like Solidworks or Inventor. I suspect it will be more of a direct modelling approach.

I also think integrating FEA to let the AI check its work will be part of it. When you tell it to make a bracket between two parts, it can check the weight of the two parts and some environmental specification from a project definition, then auto-configure FEA to verify the correct number of bolts, material thickness, etc. If it made the bracket from folded sheet steel, you could then tell it you want a cast aluminum bracket, and it could redo the work.
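The bolt-count check described above can be reduced to a toy calculation. This is purely illustrative: the function, the safety factor, and the bolt capacity figure are invented here, not taken from any real CAD or FEA tool:

```python
import math

def bolts_required(supported_mass_kg: float,
                   bolt_shear_capacity_n: float,
                   safety_factor: float = 3.0,
                   g: float = 9.81) -> int:
    """Toy sizing rule: enough bolts to carry the static load with margin.

    A real system would run FEA on the actual geometry; this only shows
    the kind of closed-loop check the comment describes.
    """
    design_load_n = supported_mass_kg * g * safety_factor
    return max(1, math.ceil(design_load_n / bolt_shear_capacity_n))

# e.g. a 250 kg assembly hung from bolts assumed good for ~6 kN each
print(bolts_required(250, 6000))  # -> 2
```

Swapping "folded sheet steel" for "cast aluminum" would then just mean re-running the same kind of check with different material properties.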

[1] https://news.ycombinator.com/item?id=43773813

jillesvangurp a day ago

It's also going to be about diagnosing issues. "This part broke right here, explain why and come up with a solution", "Evaluate the robustness of this solution", "Can I save some material and reduce the weight", etc.

Those are the kind of high level questions that an LLM with a decent understanding of CAD and design might be able to deal with soon and it will help speed up expensive design iterations.

A neat trick with current LLMs is to give them screenshots of web pages and ask some open questions about the design, information flow, etc. It will spot things that expert designers would comment on as well. It will point out things that are unclear, etc. You can go far beyond just micro managing incremental edits to some thing.

Mostly the main limitation with LLMs is the imagination of the person using it. Ask the right questions and they get a lot more useful. Even some of the older models that maybe weren't that smart were actually quite useful.

For giggles, I asked chatgpt to critique the design of HN. Not bad. https://chatgpt.com/share/6809df2b-fc00-800e-bb33-fe7d8c3611...

  • wavefrontbakc 21 hours ago

    I think the cost of mistakes is the major driving force behind where you can adopt tools like these. Generating a picture of a chair with five legs? No big deal. Generating supports for a bridge that'll collapse next week? Big problem.

    > It will point out things that are unclear, etc. You can go far beyond just micro managing incremental edits to some thing.

    When prompted, an LLM will also point things out when they're perfectly clear. An LLM is just text prediction, not magic.

    • ben_w 20 hours ago

      > I think the cost of mistakes is the major driving force behind where you can adopt tools like these. Generating a picture of a chair with five legs? No big deal. Generating supports for a bridge that'll collapse next week? Big problem

      Yes, indeed.

      But:

      Why can LLMs generally write code that even compiles?

      While I wouldn't trust current setups, there's no obvious reason why even a mere LLM cannot be used to explore the design space when the output can be simulated to test its suitability as a solution — even in physical systems, this is already done with non-verbal genetic algorithms.
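      A minimal sketch of that generate-and-simulate loop, with a stand-in cost function playing the role of the simulator (every name, threshold, and constant here is invented for illustration):

```python
import random

def simulate(thickness_mm: float) -> float:
    """Stand-in 'simulator': penalize failure (too thin) plus weight.
    A real pipeline would call FEA here; this just returns a cost to minimize."""
    strength_deficit = max(0.0, 5.0 - thickness_mm)  # "fails" below 5 mm
    weight = 0.8 * thickness_mm                      # heavier as it thickens
    return 100.0 * strength_deficit + weight

def evolve(generations: int = 50, pop_size: int = 20, seed: int = 0) -> float:
    """Generate-and-test: propose candidates, score them with the simulator,
    keep the best quarter, mutate the survivors into the next generation."""
    rng = random.Random(seed)
    pop = [rng.uniform(1.0, 20.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=simulate)
        survivors = pop[: pop_size // 4]
        pop = [max(0.1, t + rng.gauss(0, 0.5))
               for t in survivors
               for _ in range(pop_size // len(survivors))]
    return min(pop, key=simulate)

print(round(evolve(), 1))  # settles near the 5 mm strength limit
```

The point is that the LLM (or here, random mutation) only has to propose; the simulator is what decides whether a candidate survives.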

      > LLM is just text prediction, not magic

      "Sufficiently advanced technology is indistinguishable from magic".

      Saying "just text prediction" understates how big a deal that is.

      • wavefrontbakc 19 hours ago

        >While I wouldn't trust current setups, there's no obvious reason why even a mere LLM cannot be used to explore the design space when the output can be simulated to test its suitability as a solution

        Having to test every assertion sounds like a not particularly useful application, and the more variables there are, the more it seems like throwing completely random things at the wall and hoping something works.

        You should use a tool for its purpose. Relying on text prediction to judge clarity is like relying on Teams icons being green as a measure of actual productivity: a very vague, only incidentally coinciding signal.

        You could use a text predictor for things that rely on "how would this sentence usually complete" and get right answers. But that is a very narrow field; I can mostly imagine entertainment benefiting a lot.

        You could misuse a text predictor for things like "is this <symptom> alarming?" and get a response that is statistically likely given the training material, but completely wrong for the person asking, again at very high cost for failing to do what it was never meant to do. You can often demonstrate the trap by re-rolling your answer to any question a couple of times and seeing how the answer varies, from mildly different to completely reversed, depending on whatever seed you land on.

        • ben_w 13 hours ago

          > Having to test every assertion sounds like a not particularly useful application, and the more variables there are, the more it seems like throwing completely random things at the wall and hoping something works

          That should be fully automated.

          Instead of anchoring on "how do I test what ChatGPT gives me?", think "Pretend I'm Ansys Inc.*, how would I build a platform that combines an LLM to figure out what to make in the first place from a user request, with all our existing suite of simulation systems, to design a product that not only actually meets the requirements of that user request, but also actually proves it will meet those requirements?"

          * Real company which does real sim software

      • aredox 18 hours ago

        >Saying "just text prediction" understates how big a deal that is.

        Here on HN we often see posts insisting on the importance of "first principles".

        Your embrace of "magic", an unknown black box that does seemingly wonderful things which usually blow up in one's face and carry a hidden cost, is the opposite of that.

        LLMs are just text prediction. That's what they are.

        >Why can LLMs generally write code that even compiles?

        Why can I copy-paste code and it compiles?

        Try to use an LLM on code there is little training material about, for example PowerQuery or Excel, and you will see it bullshit and fail. Even Microsoft's own LLM does.

    • sharemywin 15 hours ago

      Isn't it closer to concept prediction layered on top of text prediction, because of the multiple levels? It compresses text into concepts using layers of embeddings and neural encoding, then predicts the next concept based on multiple areas of attention, then decompresses that to find the right words to convey the concept.
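      That compress/attend/decompress pipeline can be caricatured in a few lines of numpy. This is a toy single-head attention step with random weights, nothing like a trained model, and all the names are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d = 10, 4                 # toy vocabulary size and embedding width
E = rng.normal(size=(vocab, d))  # embedding table: token id -> "concept" vector

tokens = np.array([1, 4, 7])     # compress: text becomes vectors
x = E[tokens]                    # shape (3, d)

# one attention step: each position mixes in information from the others
scores = x @ x.T / np.sqrt(d)
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
ctx = weights @ x                # context-aware vectors ("concepts")

# decompress: score every vocabulary entry against the last position
logits = ctx[-1] @ E.T
next_token = int(np.argmax(logits))
print(next_token)
```

A real LLM stacks many such layers with learned weights, but the shape of the computation, embed, attend, unembed, is the same.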

    • baq 20 hours ago

      The text of every Nobel-winning physics theory was predicted in someone's head, too.

  • Nevermark 21 hours ago

    > Mostly the main limitation with LLMs is the imagination of the person using it. Ask the right questions and they get a lot more useful.

    Completely agree.

    We get waves of comments on HN downplaying model abilities or their value.

    Many people don’t seem to explore and experiment with them enough. I have 3 screens. The left one has two models on it. The right one has a model & a web browser for quick searches. I work on the largest middle screen.

    Extreme maybe, but I use them constantly resulting in constant discovery of helpful new uses.

    I web search maybe 10% of what I did six months ago.

    The quirks are real, but the endless upsides models deliver when you try things were unobtainium, from humans or machines, until LLMs.

    • wavefrontbakc 21 hours ago

      I've only found useful answers when I've begun something from zero. On topics I have even a reasonable grasp of, I start to spot large, dangerous holes in what a model very confidently claims is the right solution. Stuff like bringing a camping stove inside a tent to avoid rainy weather, never mind that doing so is suicidal at the same time.

      >I web search maybe 10% of what I did six months ago.

      Me too, though this is driven more by the total cliff-fall in web search result quality.

      • sharemywin 16 hours ago

        I'm not sure what prompt you used to get it to tell you that.

        what should I do with my camping stove if I want to avoid the rain

        ChatGPT said: If you want to avoid the rain while using your camping stove, here are your best options:

        Shelter Options: Use a Tarp: Set up a tarp high above your stove area with good airflow. Make sure it's not enclosed and water runs off away from you.

        Camp Kitchen Tent: Lightweight pop-up canopies or dedicated camp kitchen shelters are perfect. Just make sure it’s ventilated.

        Picnic Table Shelter: Many campgrounds have covered areas you can cook under — check for those.

        Vestibule (Cautiously): If you have a tent with a large vestibule, you can use the stove there, only if it's very well-ventilated and you're extremely cautious (fire and carbon monoxide risks are real).

        Important Safety Tips: Never use a stove inside a closed tent — risk of carbon monoxide poisoning and fire.

        Keep the stove on a stable, flat surface.

        Avoid cooking right under trees with low-hanging branches.

        Ensure your shelter is tall enough — heat rises.

        Would you like a quick sketch or image of a tarp setup that works well in rain?

      • cdecl 11 hours ago

        The problem I have with this conclusion is that "trust but verify" long predates AI models. People can, and have been, posting total bullshit on the internet since time immemorial. You have never _not_ needed to actually validate the things you are reading.

  • dmd 21 hours ago

    > Not bad

    It reads like a horoscope to me.

  • krige 20 hours ago

    > Not bad

    I'm sorry, but it's pretty bad. The hierarchy complaint is bogus, so is the navigation-overload one, it hallucinates the background as white, and the rest is very generic.

  • otabdeveloper4 18 hours ago

    > wanting AI to make decisions

    That's a mega-yikes for me.

    Go ahead and do something stupid like that for CEO or CTO decisions, I don't care.

    But keep it out of industrial design, please. Lives are at stake.

alnwlsn a day ago

You're right, but I think we have a long way to go. Even our best CAD packages today don't work nearly as well as advertised. I dread to think what Dassault or Autodesk would charge per seat for something that could do the above!

  • abe_m a day ago

    I agree. I think a major hindrance to the current pro CAD systems is being stuck with the feature-history tree and rather low-level features. Considerable amounts of requirements data are just added to a drawing free-form, without semantic, machine-readable meaning. Lots of tolerancing, fits, GD&T, datums, etc. are just lines in a PDF. There is the move to MBD/PMI and the NIST-driven STEP digital thread, but the state of CAD is a long way from that being common. I think we need to get to the data being embedded in the model à la MBD/PMI, but then go beyond it. Definitions of threads, gear or spline teeth, ORB and other hydraulic ports don't fit comfortably into the current system. There needs to be a higher-level machine-readable capture, and I think that is where the LLMs may be able to step in.
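    As a hypothetical example of that higher-level, machine-readable capture, a pump interface could carry its flange and spline semantics directly. The field names and all dimension values below are invented for illustration; they are not taken from SAE J744, ANSI B92.1, or any real MBD schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SplinedConnection:
    standard: str   # e.g. an involute spline standard
    teeth: int
    pitch: str

@dataclass(frozen=True)
class PumpInterface:
    flange: str            # e.g. a named 4-bolt flange size
    bolt_circle_mm: float
    spline: SplinedConnection

# A tool (or an LLM) can now query the interface semantically instead of
# reverse-engineering it from lines in a PDF drawing.
iface = PumpInterface(
    flange="SAE 4-bolt, size B",
    bolt_circle_mm=146.0,
    spline=SplinedConnection(standard="ANSI B92.1", teeth=13, pitch="16/32"),
)
print(iface.spline.teeth)  # -> 13
```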

    I suspect the next step will be such a departure that it won't be Siemens, Dassault, or Autodesk that do it.

coderenegade a day ago

I think this is correct, especially the part about how we actually do modelling. The topological naming problem is really born from the fact that we want to do operations on features that may no longer exist if we alter the tree at an earlier point. An AI model might find it easier to work directly with boolean operations or meshes, at which point there is no topological naming problem.
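A toy illustration of that direct, history-free style is constructive solid geometry over signed distance functions: booleans are evaluated against the shape as it is now, so there is no tree to replay and no faces that need stable names. A pure-Python sketch, not how any particular CAD kernel actually works:

```python
# Signed distance functions: negative inside the solid, positive outside.
def sphere(cx, cy, cz, r):
    return lambda x, y, z: ((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2) ** 0.5 - r

def union(a, b):       # a point is inside if it's inside either solid
    return lambda x, y, z: min(a(x, y, z), b(x, y, z))

def difference(a, b):  # inside a but not b, e.g. drilling a hole
    return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

body = sphere(0, 0, 0, 2.0)
core = sphere(0, 0, 0, 1.0)
part = difference(body, core)  # a hollow shell, defined with no feature history

print(part(1.5, 0, 0) < 0)  # -> True: inside the shell wall
print(part(0.5, 0, 0) < 0)  # -> False: inside the hollowed-out core
```

Because `part` is just a composition of booleans, editing it means rebuilding the composition, not patching references into an ordered history.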