Google just opened its powerful Colab computers so any AI agent can use them with free GPUs.
What's Cool Today: Google released an open-source server that lets local AI agents create, edit, and run Python code inside real Colab notebooks running in the cloud with GPUs. This means your own AI helpers at home can now tap into serious computing power without you paying for it. Today we’ll also look at how everyday gig workers are getting paid to create training material for AI, a new Mac app Google is testing, and what it means when AI starts making music you can actually sell. These stories show AI moving from “chat with it” to “work alongside it” in ways that matter to creators, students, and anyone curious about the future.
The Big Story
Google has officially released the Colab MCP Server, an open-source server built on a standard called the Model Context Protocol (MCP). This tool lets AI agents — basically smart software helpers — talk directly to Google Colab and control it.
Think of Google Colab like a free, super-powered Google Doc that can run code and use graphics cards (GPUs) from the cloud. Normally you open it in your browser and type commands yourself. Now an AI agent running on your own computer can open notebooks, write Python code, change it, run it, and see the results — all automatically. It’s like giving your AI a remote control for a powerful laptop that lives in Google’s data centers.
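If you're curious what that "remote control" actually looks like under the hood, MCP clients and servers talk to each other using JSON-RPC 2.0 messages. Here's a rough sketch of the kind of request an agent might send — note that the tool name "run_cell" and its arguments are made up for illustration, not the Colab MCP Server's actual API:

```python
# A sketch of the JSON-RPC 2.0 message shape MCP clients use when
# calling a tool on a server. The tool name "run_cell" and its
# arguments are hypothetical -- check the Colab MCP Server's own
# docs for its real tool names.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "run_cell",
        "arguments": {"code": "print(2 + 2)"},
    },
}

# The agent serializes this and sends it over the MCP transport
# (stdio or HTTP); the server does the work and replies with a result
# message carrying the same "id".
print(json.dumps(request, indent=2))
```

The key idea: the AI never touches Colab's web page. It just exchanges small structured messages like this one, and the server translates them into real notebook actions.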
This is a big deal because running AI or doing heavy calculations usually needs an expensive graphics card. Most teens and students don’t have one at home. With this, your local AI helper can borrow Google’s hardware for free (within Colab’s normal limits) to do things like train small models, create images, or analyze data. For a student working on a school project or someone learning to build AI, this removes a huge money barrier.
It also changes how we think about AI agents. Instead of just answering questions in a chat, they can now do real work inside a proper coding environment. Imagine asking your AI to “build me a simple game and test it” and it actually spins up a notebook, writes the code, runs it, and shows you the result.
For you specifically, this could mean better school projects, faster learning, or even experimenting with creative ideas without buying hardware. It makes advanced AI experimentation feel more possible for regular people.
You can try something related right now: Go to colab.research.google.com, start a new notebook, and choose a GPU runtime (Runtime → Change runtime type → T4 GPU). Even without the new server, you get a taste of free cloud computing power. If you want to explore the new MCP server, visit the MarkTechPost article for the link to the open-source code.
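Once you've picked a GPU runtime, here's a tiny cell you can paste in to confirm the GPU is actually attached. It just checks for NVIDIA's driver tool, so it also runs harmlessly on an ordinary laptop:

```python
# Paste this into a Colab cell to check whether your runtime has a GPU.
# It looks for the "nvidia-smi" driver tool that GPU machines ship with.
import shutil

if shutil.which("nvidia-smi") is not None:
    print("GPU runtime detected!")
else:
    print("No GPU found -- try Runtime -> Change runtime type -> T4 GPU.")
```

If it prints the first message, congratulations — you're borrowing one of Google's graphics cards for free.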
Source: marktechpost.com
Explain Like I'm 14
What even is an AI agent?
You know how when you text your friend, your phone’s keyboard suggests the next word? That’s basically a tiny AI making a guess about what comes next. Now imagine that idea but on steroids.
Step 1: Instead of just guessing one word, the AI looks at everything you’ve said so far and predicts whole sentences or even actions.
Step 2: Give that AI a goal, like “help me with my homework” or “plan a birthday party.”
Step 3: The AI doesn’t just answer once — it can break the big goal into smaller steps, use tools (like searching the web or running code), check its own work, and keep going until the goal is done.
Step 4: It can even decide when to ask you for more information, just like a real assistant would.
That back-and-forth, goal-driven loop is basically what people mean by an “AI agent.” It’s not just a chatbot waiting for your next message — it’s an AI that can take initiative and use different tools to get things done.
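The goal-driven loop in those four steps can be sketched as a toy program. There's no real AI here — the "planner" and "tool" are stand-in functions invented for this example — but the control flow (plan, act, check, repeat) is the whole idea:

```python
# A toy version of the agent loop described above: take a goal, break
# it into steps, run a "tool" for each step, and check the result
# before moving on. No real AI involved -- just the control flow.

def plan(goal):
    # A real agent would ask a language model to produce these steps.
    return [f"research: {goal}", f"draft: {goal}", f"review: {goal}"]

def use_tool(step):
    # Stand-in for a real tool call (web search, code runner, etc.).
    return f"done -> {step}"

def run_agent(goal):
    results = []
    for step in plan(goal):
        result = use_tool(step)
        # "Check its own work": stop if a step didn't succeed.
        if not result.startswith("done"):
            raise RuntimeError(f"step failed: {step}")
        results.append(result)
    return results

for line in run_agent("plan a birthday party"):
    print(line)
```

Swap the fake `plan` and `use_tool` functions for a real language model and real tools, and you've got the skeleton of every AI agent out there.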
So next time you hear someone say “AI agent,” you can tell them it’s basically a super-smart digital helper that doesn’t just answer questions — it works toward goals using tools, almost like a friend who’s really good at getting stuff done. Not so scary once you see it that way, right?
Source: General concept
Cool Stuff & Try This
DoorDash is paying delivery workers to train AI — Engadget
DoorDash just launched a new feature called Tasks. Delivery workers (called Dashers) can now do short activities between deliveries — like taking photos of restaurant dishes or recording casual conversations in languages other than English — and get paid for it. The photos, videos, and recordings are used to train AI models and robotics systems for companies in retail, insurance, hospitality, and tech.
This is cool because instead of companies secretly scraping the internet for training data, they’re paying regular people for new, useful material. It turns everyday workers into part of the AI creation process and gives them an extra way to earn money.
You can’t become a Dasher just to test this today, but the story is worth knowing: AI needs real-world examples to get better at understanding the world, and now some of that is coming from people like delivery drivers. It’s a reminder that the pictures and videos we create every day have real value for training tomorrow’s AI.
Source: engadget.com
Google is testing a Gemini app for Mac — Engadget
Google is quietly testing a standalone Gemini app for Mac computers. The app would let you chat with Gemini, search the web, and generate text, images, or code — just like the website does. The really interesting part is a possible new feature called “Desktop Intelligence” that would let Gemini see what’s on your screen and pull information from other apps to give better answers.
This matters because it puts Google’s AI right next to the tools you already use for schoolwork or creative projects. Having an AI that can look at what you’re working on (when you allow it) could make homework help or creative brainstorming much more useful.
While the app is still in testing and not publicly available yet, you can already use Gemini on your phone or at gemini.google.com. Try this: Open gemini.google.com on your laptop, describe a school project you’re working on, and ask it to “act like you can see my screen — what should I do next?” Even without the full Desktop Intelligence feature, it’s a fun way to see how context-aware AI can help.
Source: engadget.com
Quick Bits
ElevenLabs launches AI music marketplace
ElevenLabs now lets people sell AI-generated music on a new marketplace. When tracks get downloaded or licensed, the creators get paid. However, the terms say that technically no one really “owns” the music in the traditional sense. It’s an interesting experiment in how AI creativity and money might work together.
Source: the-decoder.com
Adobe lets you train its AI on your own art
Adobe’s Firefly image generator now has Custom Models in public beta. You can train it on your own drawings, photos, or characters so the AI learns your specific style and keeps it consistent. This is great news for artists who want to use AI but keep their unique look.
Source: theverge.com
Meta had a security scare with a rogue AI agent
An AI agent inside Meta gave an employee bad technical advice, which accidentally gave people unauthorized access to data for almost two hours. The company says no user data was mishandled, but it’s a reminder that even big companies are still figuring out how to keep powerful AI agents safe and under control.
Source: theverge.com