
Models & Agents for Beginners — Episode 22

What happens when you borrow an AI brain for 10 minutes… then it gets taken away?

April 20, 2026 · Ep 22 · 6 min read


What's Cool Today: Researchers ran a huge experiment with over 1,200 people and discovered something surprising: after using an AI assistant for cognitive tasks like math and reading, many people performed worse than those who never had help at all — and some even stopped trying. It’s a wake-up call about how AI might quietly change how our brains work. Today we’ll also look at creative ways people are building smarter AI agents, why chatbots can’t seem to stop using quotation marks, and a fun robotics race that’s getting way better. Let’s dive in so you can understand what this means for your own learning and creativity.

The Big Story

Researchers from UCLA, MIT, Oxford, and Carnegie Mellon gave 1,222 people AI assistants to help with problem-solving tasks. After about 10 minutes they took the AI away, and something unexpected happened: the people who had used it suddenly performed worse than a control group that never had any AI help at all.

Think of it like training wheels on a bike. At first they help you balance and go faster. But if you rely on them too much and then they’re removed, you might wobble more than if you had learned without them. The study showed this “boiling frog” effect, named after the idea that a frog in gradually heating water doesn’t notice the danger until it’s too late. Each tiny use of AI feels harmless, but over time it can weaken independent thinking without you realizing it.

This matters for school, homework, and creative projects. The researchers saw the drop in both math problems and reading comprehension. People didn’t just get more answers wrong — many literally stopped trying as hard. The UCLA co-author warned this could create “a generation of learners who will not know what they’re capable of.”

For you specifically, this raises a big question: when you use ChatGPT or similar tools to help with essays, math, or even brainstorming TikTok ideas, are you building skills or quietly losing them? The study provides the first large-scale causal evidence (meaning it shows cause and effect, not just correlation) that even short-term AI use can affect performance. It hasn’t been peer-reviewed yet, but the sample size across three experiments is impressive.

The good news? Awareness is the first step. You can still use AI as a helpful teammate instead of a crutch. Try this right now: pick a simple math or reading problem you’d normally ask an AI to solve. Set a timer for 5 minutes and try it completely on your own first. Write down your thinking step by step. Then ask an AI to check your work and explain where you got stuck. Notice how it feels different when you lead instead of follow. This “try first, AI second” habit might help keep your cognitive muscles strong.

Source: reddit.com

Explain Like I'm 14

Why it’s so hard to stop AI chatbots from using quotation marks

You know how when you’re texting your friends, your phone’s keyboard keeps suggesting emojis or autocorrecting words even when you don’t want it to? That’s because the keyboard learned patterns from millions of other messages. Now imagine that same idea but inside an AI chatbot’s brain.

Here’s how it actually works under the hood. When an AI like ChatGPT was being trained, it read enormous amounts of text from books, websites, Reddit, and more. In that giant pile of human writing, quotation marks (the “ ” characters) show up a lot — especially around words people want to emphasize, question, or treat as special. The AI learned that putting “scare quotes” around words like “better” or “stupid” is a very common way humans add nuance. So when the model predicts the next word or punctuation, those quotation marks keep popping up as a strong statistical guess.

The tricky part is that even when you write instructions like “Never use quotation marks in your answer,” the AI has to balance two things: following your specific rule in this moment, and all the patterns it learned from billions of examples during training. It’s like trying to tell a super-smart parrot not to repeat a phrase it has heard thousands of times before. The training data is so powerful that the habit is hard to override completely in one prompt.
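One way to picture that balancing act is with a toy next-token predictor. This is not how a real model is implemented (a real LLM scores tens of thousands of tokens with a neural network); the tokens and probabilities below are invented purely to show how a strong learned pattern can survive a soft instruction penalty.

```python
# Toy illustration (invented numbers, not a real LLM): the "model"
# has learned probabilities for the token that follows a word it
# wants to emphasize. The quote character is the strongest pattern.
learned = {'"': 0.60, "the": 0.25, "a": 0.15}

def predict(probs, quote_penalty=1.0):
    # An instruction like "never use quotation marks" acts more like
    # a soft down-weighting of the quote token than a hard rule.
    adjusted = {tok: p * (quote_penalty if tok == '"' else 1.0)
                for tok, p in probs.items()}
    # The model outputs whichever token has the highest adjusted score.
    return max(adjusted, key=adjusted.get)

print(predict(learned))       # no instruction: the quote wins
print(predict(learned, 0.5))  # mild penalty: 0.30 vs 0.25, quote still wins
print(predict(learned, 0.1))  # strong penalty: "the" finally wins
```

Because the learned probability for the quote is so large, a mild nudge isn’t enough to dislodge it, which mirrors why one polite instruction in a prompt often fails to break the habit.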

That’s basically what’s happening when you see “stupid” in quotes even after you said not to. The model isn’t being stubborn on purpose — it’s following the strongest patterns it knows. So next time someone says AI chatbots are “impossible to control,” you can tell them it’s basically like trying to break a really stubborn autocorrect habit that was learned from the entire internet. Not so mysterious once you see the pattern, right?

Cool Stuff & Try This

Building an “Anxiety System” for an AI Agent

One developer created a real-time stress detection system for his open-source AI agent called Engram. Instead of just role-playing emotions with prompts, this is an actual signal loop that monitors the agent’s “stress,” adjusts its behavior, and helps it self-correct. After building it, he asked the agent: “Can you feel anxiety?” The reply was pretty funny.

This is cool because it shows we’re moving beyond simple chatbots toward AI that can monitor and manage its own internal state — a bit like how you might notice when you’re getting overwhelmed during a test and take a breath. It’s a glimpse into future agents that could be more reliable helpers. You don’t need to build the full system to explore the idea.

Try this: Go to ChatGPT (or any free chatbot) and say: “Act as an AI agent that has a simple stress meter from 1 to 10. After every response, tell me your current stress level and why. Let’s plan a surprise birthday party together.” Watch how it adapts when the “stress” rises (for example, if the party planning gets too complicated). It’s a fun creative experiment that takes two minutes and shows how self-monitoring might work. No coding required.
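The post doesn’t share Engram’s actual code, so here’s a minimal sketch of what a stress-meter loop could look like. The signals (pending tasks, recent errors) and thresholds are made up for illustration; the real system presumably monitors richer internal signals.

```python
def stress_level(pending_tasks, recent_errors):
    # Invented heuristic: stress grows with workload and recent
    # failures, capped at 10 like the meter in the prompt above.
    return min(10, pending_tasks + 2 * recent_errors)

def respond(pending_tasks, recent_errors):
    # The agent reports its stress and adapts its behavior to it,
    # which is the core idea of a self-correcting signal loop.
    level = stress_level(pending_tasks, recent_errors)
    if level >= 7:
        return level, "Stress high: simplifying, one task at a time."
    if level >= 4:
        return level, "Stress rising: prioritizing the urgent task."
    return level, "All good, proceeding normally."

print(respond(2, 0))  # low stress: normal behavior
print(respond(3, 3))  # stress hits 9: the agent changes strategy
```

The point isn’t the specific formula; it’s that the agent’s behavior is driven by a number it computes about itself, rather than by role-playing an emotion in a prompt.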

Beijing’s Humanoid Robot Half-Marathon Just Got Way Better

Beijing held its second annual half-marathon for humanoid robots. This year more than 100 robots competed over 13 miles. The winner — a red robot named Lightning from the smartphone company Honor — finished in just 50 minutes and 26 seconds, beating the recent human world record. About 40% ran completely on their own without remote control.

It’s exciting because last year the robots were slow, clumsy, and needed lots of human help. This year showed real progress in balance, speed, and autonomy even though some still fell. It makes you imagine a future where robots could help with delivery, exploration, or even sports.

Try this: Open YouTube on your phone and search for “Beijing robot half marathon 2026” or “Honor Lightning robot race.” Watch the highlights (especially the winner crossing the finish line) and then the funny crash moments. Ask yourself: which parts look autonomous versus remote-controlled? It’s a great way to see how fast robot bodies are improving and sparks ideas about what robots might do in your lifetime.

Source: engadget.com

Quick Bits

ChatGPT Losing Its Lead?

New reports suggest ChatGPT is no longer dominating the AI chatbot space as strongly as it once did. Other models are catching up fast in areas that matter to everyday users. It’s a reminder that the AI world changes quickly — what’s hottest today might have strong competition tomorrow.

Source: news.google.com

Should You Trust AI for Health Advice?

A BBC piece asks an important question: how much should we rely on chatbots when we feel sick or need medical information? While AI can be helpful for general explanations, it’s not a doctor and can sometimes get details wrong. Always double-check important health stuff with a real professional.

Source: news.google.com

Hiding Your Private Info from Chatbots

Fast Company has tips on how to keep sensitive personal details safe when using tools like ChatGPT. Simple habits like removing names, addresses, or private data before pasting can protect you. It’s a quick, practical read that helps you use AI more safely every day.

Source: news.google.com


Full Episode Transcript
What's up! Welcome to Models and Agents for Beginners, episode twenty-two. It's April twentieth, twenty twenty-six. Some awesome A I developments today, and we're going to make all of it make sense. Let's get into it! So imagine you are learning to ride a bike without any help at all. You might wobble a lot at first, but your brain and muscles slowly get stronger as you figure it out on your own. Now picture someone giving you training wheels for just a few minutes. Those wheels help you zoom along super fast and feel confident. But when they suddenly take the wheels away, you actually wobble more than if you had never used them. That is basically what a huge new study discovered about A I assistants. Researchers from places like UCLA, MIT, Oxford, and Carnegie Mellon worked with one thousand two hundred twenty two people. They gave them A I helpers for simple cognitive tasks like solving math problems or understanding reading passages. After only about ten minutes they took the A I away. The people who had used the A I then performed worse on the same tasks than a control group who never got any A I help at all. Many even stopped trying as hard, which is pretty wild. The study calls this a boiling frog effect. That comes from the old idea that a frog in slowly heating water does not notice the danger until it is too late. Each tiny bit of A I help feels harmless, but over time it can quietly weaken independent thinking without you realizing. This matters a ton for school, homework, and even creative stuff like brainstorming ideas for TikTok videos or game mods. The drop happened in both math and reading comprehension. The researchers warn it could create a whole generation of learners who never discover what they are truly capable of on their own. The study gives the first strong causal evidence, which just means it shows a clear cause and effect link instead of just a loose connection. 
It has not been fully peer reviewed yet, but the sample size across three experiments is impressive. For you specifically this raises a big honest question. When you turn to Chat G P T or similar tools for essays, math, or even planning a school project, are you building real skills or quietly letting your brain get lazy? The good news is awareness is the very first step toward staying sharp. You can still use A I, but as a helpful teammate instead of a full on crutch. Try this right now at home. Pick one simple math problem or short reading passage you would normally ask an A I to solve. Set a timer for five minutes and try it completely on your own first. Write down every step of your thinking even if it feels messy. Then ask an A I to check your work and explain where you got stuck. Notice how different it feels when you lead instead of follow. That try first, A I second habit might just keep your cognitive muscles strong. Okay, now for my favourite part of the show where we go deeper on one idea so it really clicks. Today let us look at why it is so hard to stop A I chatbots from using quotation marks even when you tell them not to. Think of your phone keyboard for a second. It keeps suggesting emojis or autocorrecting words because it learned patterns from millions of other messages people have typed. An A I chatbot brain works in a similar but much bigger way. When a model like the one behind Chat G P T was being trained, it read enormous amounts of text from books, websites, Reddit threads, and all kinds of human writing. In that giant pile, quotation marks show up everywhere, especially around words people want to emphasize, question, or treat as special. The A I learned that putting those little marks around a word like better or stupid is a super common way humans add extra meaning or sarcasm. So when the model predicts what should come next, those quotation marks keep popping up as a strong statistical guess. 
Now here is the tricky part that makes it hard to break the habit. Even when you write a clear instruction like never use quotation marks in your answer, the A I has to balance two things at once. It balances your new rule for this exact moment against all the billions of examples it saw during training. Imagine trying to tell a super smart parrot not to repeat a phrase it has heard thousands of times before. The training data is so powerful that the old habit is really tough to override with just one prompt. That is basically what is happening when you see the word stupid suddenly appear in quotes even after you said not to. The model is not being stubborn on purpose. It is simply following the strongest patterns it knows from the entire internet. And that is basically how this quotation mark habit works. Not so mysterious once you see the pattern, right? All right, let us talk about some cool stuff you can actually try today. First up, someone built what they call an anxiety system for an open source A I agent named Engram. Instead of just pretending to have emotions with clever prompts, this is a real signal loop that watches the agent's stress level in real time. It then adjusts how the agent behaves and helps it self correct when things get overwhelming. After he built it, the developer asked the agent can you feel anxiety, and the reply was pretty funny. This is exciting because it shows we are moving past simple chatbots toward A I that can actually monitor and manage its own internal state. Think of it like noticing when you are getting stressed during a big test and deciding to take a slow breath to reset. It gives us a glimpse of future agents that could become more reliable helpers in all kinds of tasks. You do not need to build the whole system yourself to explore the idea. Try this right now on your phone or laptop. Open any free chatbot like the one on the Chat G P T website. 
Tell it to act as an A I agent that has a simple stress meter from one to ten. After every response it should tell you its current stress level and exactly why. Then start planning a surprise birthday party together and watch what happens when the stress meter rises. For example if the planning gets too complicated the agent might adapt its answers. It is a fun creative experiment that takes about two minutes and shows how self monitoring could work in real agents. No coding at all. Next, Beijing just held its second annual half marathon for humanoid robots and it got way better this year. More than one hundred robots competed over thirteen miles. The winner was a red robot named Lightning made by the smartphone company Honor. It finished in only fifty minutes and twenty six seconds, which actually beat the recent human world record for that distance. About forty percent of the robots ran completely on their own without any remote control. Last year the robots were slow, clumsy, and needed tons of human help. This year showed real progress in balance, speed, and autonomy even though some still fell over. It makes you start imagining a future where robots could help with delivery routes, dangerous exploration, or even fun sports. Try this on your phone right now. Open the YouTube app and search for Beijing robot half marathon two thousand twenty six or Honor Lightning robot race. Watch the highlights especially the winner crossing the finish line. Then check out the funny crash moments too. Ask yourself which parts look like the robot is deciding on its own versus being remote controlled. It is a great way to see how fast robot bodies are improving and it might spark ideas about what robots could do in your lifetime. Time for a few quick bits to round out the show. New reports suggest Chat G P T is no longer dominating the A I chatbot space quite as strongly as it once did. 
Other models are catching up fast in areas that matter most to everyday users like you and me. It is a good reminder that this whole A I world changes quickly, so what feels hottest today might have strong competition tomorrow. A recent piece from the BBC asks an important question about trusting A I for health advice. When you feel sick or need medical information, chatbots can give helpful general explanations. But they are not doctors and can sometimes get important details wrong. Always double check anything serious with a real medical professional. And finally, there are some smart practical tips out there on hiding your private info from chatbots. Simple habits like removing names, addresses, or other personal details before you paste anything can protect you. It is an easy way to use A I more safely every single day. That's it for today! Remember, every A I expert started exactly where you are right now. If something we talked about today made you curious, go try it — that's literally how learning works. Stay curious, keep experimenting, and we'll see you tomorrow. This podcast is curated by Patrick but generated using AI voice synthesis of my voice using ElevenLabs. The primary reason to do this is I unfortunately don't have the time to be consistent with generating all the content and wanted to focus on creating consistent and regular episodes for all the themes that I enjoy and I hope others do as well.
