Comment by sdwr a day ago

I think this is a backwards approach, especially for children.

Gen AI is magical, it makes stuff appear out of thin air!

And it's limited, everything it makes kinda looks the same.

And it's forgetful, it doesn't remember what it just did.

And it's dangerous! It can make things that never happened.

Starting with theory might be the simplest way to explain, but it leaves out the hook. Why should they care?

jraph a day ago

As a child, I think I would have been annoyed by such a presentation. There are science magazines for children that can explain pretty complex stuff just fine.

It's also critical not to leave out the ethical topics (resource consumption, e-waste production, concerns about how the source data is harvested - both how the crawling can DDoS websites and how authors are not necessarily happy with their work ending up in the models).

superfluous_g a day ago

Getting folks to care is essential in my experience. In coaching adults, "what's in it for me?" is the end of my first section and forms the basis of their first prompt. It's also how I cover risk, i.e. "How do I not damage my credibility?" If you're asking people to break habits and processes, you've got to make them want to.

That said, the hands-on approach here is great and also foundational in my experience.

westurner 19 hours ago

> Starting with theory might be the simplest way to explain,

Brilliant's AI course has step-by-step interactive text-generation LLMs trained on Taylor Swift lyrics and terms-of-service text, with comprehension quizzes and gamified points.

Here's a quick take:

LLM AIs are really good at generating bytes that look like other bytes, but they aren't yet very good at caring whether what they've generated is correct. Reinforcement Learning is one way to help with that.
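
One toy way to see the flavor of that idea: best-of-n reranking against a made-up reward score. It's a much simpler cousin of actual RL training, and generate() and reward() here are stand-ins I invented for illustration, not real models.

    import random

    FACTS = {"sky": "blue", "grass": "green"}   # toy ground truth

    def generate(topic):
        # stand-in "LLM": emits plausible-looking text, right or wrong
        return f"the {topic} is {random.choice(['blue', 'green', 'purple'])}"

    def reward(topic, text):
        # stand-in reward signal: 1.0 if the claim matches the known fact
        return 1.0 if FACTS[topic] in text else 0.0

    candidates = [generate("sky") for _ in range(8)]
    print(max(candidates, key=lambda t: reward("sky", t)))  # usually "the sky is blue"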

AI Agents are built on LLMs. An LLM (Large Language Model) is a trained graph of token transition probabilities (a "Neural Network" (NN); a "learning computer," as in The Terminator (1984)). LLMs are graphical models. Example training text: "Clean your room. The grass is green and the sky is blue. Clean it well."
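
To make "graph of token transition probabilities" concrete, here's a toy sketch: plain bigram counts over the example text above, sampled forward. It's nowhere near a real neural LLM, but it shows the transition-graph idea.

    # Toy "language model": count which token follows which, then sample.
    # A real LLM learns these transition tendencies with a neural network
    # over far more data; this only shows the "graph of transitions" idea.
    import random
    from collections import defaultdict

    text = "clean your room . the grass is green and the sky is blue . clean it well"
    tokens = text.split()

    transitions = defaultdict(list)          # token -> tokens seen after it
    for a, b in zip(tokens, tokens[1:]):
        transitions[a].append(b)

    word = "the"
    out = [word]
    for _ in range(6):                       # sample a short continuation
        if word not in transitions:
            break
        word = random.choice(transitions[word])
        out.append(word)
    print(" ".join(out))                     # e.g. "the sky is green and the grass"

Run it a few times and you'll get outputs like "the sky is green", fluent-looking but wrong, which is the hallucination problem in miniature.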

AI Agents fail where LLMs fail at "accuracy": they hallucinate even when given human-curated training data.

There are lots of new methods for AI Agents built on LLMs that build on "Chain of Thought": basically feeding the model's output back in as input a bunch of times, in a loop.
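
A rough sketch of that loop, with call_llm() as a made-up placeholder for whatever model you'd actually call:

    # Sketch of the "feed the output back in" loop behind chain-of-thought
    # style agents. call_llm() is a placeholder, not a real API.
    def call_llm(prompt: str) -> str:
        # a real implementation would call an actual model here
        return f"(next thought about: {prompt[-40:]})"

    def run_agent(task: str, max_steps: int = 5) -> str:
        context = task
        for _ in range(max_steps):
            thought = call_llm(context)
            context += "\n" + thought        # this pass's output is the next pass's input
            if "DONE" in thought:            # without a stopping condition the loop
                break                        # just keeps feeding on itself
        return context

    print(run_agent("Plan how to clean the room."))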

But if you've ever heard a microphone that's too close to a speaker, you're already familiar with runaway feedback loops that need intervention.

There aren't as many new Agentic AIs built on logical reasoning and inference, and there aren't as many AI Agents built on the scientific method, which we know to be crucial to safety and QA in engineering.