Models & Agents for Beginners
Date: March 22, 2026
A new AI that helped build itself? Plus safe ways to test models without breaking anything — today’s news is wild.
What's Cool Today: Chinese company MiniMax just released a model called M2.7 that reportedly took an active role in improving its own training. That’s like a student helping design their own exam! We’ll also look at four smart ways companies test new AI models before letting them loose on everyone, why a publisher just canceled a horror novel over AI worries, and a surprising survey about how UK students are really using AI. Stick around — there are things you can try today.
The Big Story
A Chinese AI company called MiniMax has released a new model named M2.7 that reportedly helped develop itself. Instead of humans doing all the work, the model used “autonomous optimization loops” to improve its own training process and still achieved competitive results on tests.
Think of it like this: imagine you’re learning math and instead of just doing practice problems, you also invent better ways to study and then test those methods on yourself. That’s roughly what this model did. It ran its own experiments on how it learns, adjusted its training approach, and got smarter without a human rewriting every rule.
This matters because most AI models are built entirely by teams of engineers who decide exactly how to train them. If models can start helping design their own learning process, it could speed up how fast new AIs improve and possibly make them better at complex tasks. For students and creators, this points toward future tools that understand their own weaknesses and fix them — kind of like a super-smart homework helper that knows when it’s guessing.
The big “so what” for you is that AI is moving from something humans fully control to systems that can participate in their own growth. That raises cool questions about how much we should trust AI to improve itself and what kinds of safeguards we need.
You can’t try M2.7 directly today, but you can experiment with the idea of “self-checking” AI right now. Open ChatGPT or Google Gemini (both free) and try this: ask it a question you know the answer to, then say “Now rate your confidence in that answer from 1 to 10 and explain why.” Watch how it evaluates itself — that’s the beginning of the uncertainty-aware systems experts are building.
Source: the-decoder.com
Explain Like I'm 14
How safe testing strategies for AI models actually work
You know how when you want to try a new hairstyle, you don’t just chop off all your hair at once? You might test it on a small section first, ask a couple friends what they think, or even use an app to see a photo of the change before committing. That same careful approach is exactly how smart companies test new AI models before showing them to millions of users.
Here’s the simple version: imagine your school has two lunch menus — the normal one everyone knows and a brand-new experimental one. Instead of forcing everyone to eat the new food, they could try four safe methods. First, they could randomly give half the students the new menu and half the old one (this is called A/B testing) and see who’s happier. Second, they could test the new menu only with one small group, like just the Year 9s, before rolling it out to the whole school (this is canary testing — like sending a “canary” in first to check if it’s safe).
Third, they could mix both menus together so every student gets some old and some new items in random order (interleaved testing) and compare reactions bite by bite. Finally, they could prepare the new menu in the kitchen but keep serving the old one, while quietly measuring how the new recipes would have performed (this is shadow testing). The real menu never changes until they’re sure the new one is better.
And that’s basically what these four controlled strategies are doing with AI. Companies run the new model alongside the old one in careful ways so they can catch problems before anyone gets a bad experience. Next time you hear about “deploying ML models to production,” you can tell your friends — it’s basically the same caution you’d use before changing your whole hairstyle. Not so scary, right?
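If you know a little Python, the lunch-menu strategies can be sketched as simple routing logic. This is only a toy, under loose assumptions: `old_model` and `new_model` are made-up stand-ins for real AI systems, and a real deployment would add metrics, logging, and gradual rollout controls on top.

```python
shadow_log = []  # where shadow-test results get recorded for later comparison

def old_model(x):
    return x * 2        # the trusted model everyone currently uses

def new_model(x):
    return x * 2 + 1    # the candidate model (made slightly different so we can compare)

def ab_route(user_id, x):
    # A/B testing: split users 50/50 by id, so each user always gets the same version
    return new_model(x) if user_id % 2 == 0 else old_model(x)

def canary_route(user_id, x, canary_pct=5):
    # Canary testing: only a small slice of users (here 5%) sees the new model
    return new_model(x) if user_id % 100 < canary_pct else old_model(x)

def shadow_route(user_id, x):
    # Shadow testing: the user always gets the old model's answer;
    # the new model runs silently and we only record what it *would* have said
    shadow_log.append((user_id, x, new_model(x)))
    return old_model(x)

def interleave(old_results, new_results):
    # Interleaved testing: alternate items from each model's ranked list
    mixed = []
    for a, b in zip(old_results, new_results):
        mixed += [a, b]
    return mixed
```

Notice that `shadow_route` never changes what the user sees, exactly like the kitchen secretly cooking the new menu: the comparison happens entirely in the log.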
Source: marktechpost.com
Cool Stuff & Try This
95% of UK Students Use AI — Here’s What They Actually Think — The Decoder
A new survey found that 95% of British students are using generative AI, but their feelings are totally split. Some say it helps them understand topics better and makes learning more interesting. Others worry it’s quietly replacing their ability to think for themselves. Universities are struggling to keep up with these changes.
This is super relevant if you’re in school or college right now. AI isn’t some far-off thing — it’s already part of how most students work. The survey shows both the excitement and the honest concerns, which is a great reminder that you get to decide how you use these tools.
Try this today: Open a free AI like ChatGPT, Claude, or Gemini. Pick something you’re studying this week — maybe a history topic or science concept. First write your own one-paragraph explanation. Then ask the AI to explain the same thing. Compare the two. Ask yourself: “Did the AI help me understand it better, or did I just copy?” This exact experiment is what thousands of students are figuring out right now.
Source: the-decoder.com
Publisher Cancels Horror Novel Over AI Concerns — TechCrunch
Hachette Book Group decided not to publish a horror novel called “Shy Girl” because they were worried the text was generated by artificial intelligence. This shows how seriously some big publishers are taking the question of AI-created books.
It matters because a lot of people care whether a story was written by a human or by AI. For anyone who loves reading, writing stories, or dreams of publishing one day, this news highlights the ongoing conversation about creativity and AI.
You can try something related right now: Go to any free AI writing tool (like ChatGPT) and ask it to “Write the first page of a horror story about a shy girl who discovers something strange in her mirror.” Then write your own version of the same scene. Compare them. Notice what feels different. This simple side-by-side test helps you see what the conversation is really about.
Source: techcrunch.com
Quick Bits
OpenAI Plans to Nearly Double Its Staff
OpenAI is reportedly growing from about 4,500 employees to 8,000 by the end of 2026. They’re hiring in engineering, research, product development, sales, and even “technical ambassadors” who help businesses use AI better. It shows how fast the biggest AI companies are still expanding.
Source: engadget.com
Minecraft Theme Park Coming to London in 2027
A permanent Minecraft World theme park is planned to open inside Chessington World of Adventures in 2027. It will have a roller coaster, interactive adventures, and block-built playscapes. While not directly AI news, it shows how our favorite games are jumping from screens into real life — and AI will likely help design experiences like this in the future.
Source: engadget.com
Gemini’s New Task Automation Looks Impressive (But Clunky)
Google’s Gemini can now take over simple tasks like ordering food or booking rides in certain apps. It’s still limited and a bit slow, but it’s an early look at AI agents that can actually use apps for you.
Source: theverge.com