Want to win $50k? Learn to speak AI
Video Here: https://youtu.be/7a4vpoRZsIM
Last week, someone won $50,000 with a single prompt. Not by coding. Not by hacking. Just by understanding how AI thinks. After 481 failed attempts and thousands of dollars spent by others trying, one person cracked it by speaking AI's language. We'll break down what happened, why it matters, and most importantly - what you can do today.
Read until the end where I'll share the exact tools I use to easily level up my prompts.
Imagine trying to explain something complex to someone who's brilliant but takes everything literally. That's prompt engineering - the art of helping AI understand exactly what you want. This week, we saw just how far that art has come. From someone winning $50k for the perfect prompt to researchers proving AI can theoretically do anything with the right instructions, here's the breakdown:
What Happened This Week
- Put Your Money Where Your Prompt Is: The Freysa AI $50k challenge was finally cracked after 481 attempts, followed by another win in Act 2 worth $12k. Meanwhile, Sahil put up $1k for the best real-world prompts.
- Under the Hood: Leaked system prompts revealed how top AI assistants actually work, while a bizarre ChatGPT quirk and its creative solution showed the community's ingenuity.
- The Science of Speaking AI: Researchers proved prompting is Turing-complete (translation: theoretically limitless), while new frameworks like REV THINK and LLMs as Method Actors showed massive improvements in practice.
- Pushing the Boundaries: Fresh research tackled hallucinations as industry leaders shared new strategies for learning prompting techniques.
My Take
Learning to prompt is a lot like learning to ride a bike. At first, it seems impossible - you're wobbling, overthinking every movement, probably falling a few times. Then something clicks. You're not consciously thinking about balancing anymore; you're just... riding. That's what we're seeing with AI prompting right now.
Some say prompt engineering isn't the future. They're missing something crucial. Sure, the basic tricks we use today (like "take a deep breath and think step by step") might change as models get smarter, but understanding how to guide AI will always matter. It's not about memorizing perfect prompts - it's about developing intuition.
One trick that helps me when the AI is struggling to give a good response: take a step back, look at my prompt, and ask - if the answer I wanted were the top comment on a Reddit post, what would the post say? This mindset shift is huge. When you get a bad response, it's probably not the AI - it's the prompt. And that's good news! It means you can fix it.
This week's breakthroughs back this up. Leaked system prompts and new research frameworks confirm that giving AI the right context is what matters most. It's obviously easier said than done to write a prompt that contains only useful context and no noise, but fortunately Anthropic and OpenAI are building better tools to help us iterate on prompts, and experts are sharing their strategies.
The really exciting part: you don't need to be an AI expert. Your expertise in your field matters more. When you're generating images or creating audio, you need different prompts than when you're writing code or making animations (see my last video). But the fundamental skill - clear communication with AI - stays the same.
New research shows these models can theoretically do anything with the right prompt. That's CRAZY. To me, that shows just how important it is to cultivate this skill. That's why I'm always using AI and testing new approaches. The key? About 10 hours of meaningful practice gets you surprisingly far. Not memorizing prompts, but actually using AI for real work.
The tools are getting better. The optimization frameworks are improving. But at its core, prompting is about having a conversation with a brilliant but very literal partner. And like any conversation, it gets better with practice.
What You Can Do Today
Whether you're just starting or already diving deep, here's how to level up your prompting:
Beginners:
- Start with the fundamentals: Few-shot and chain-of-thought prompting are fancy ways to say “use examples” and “tell the model to take a deep breath and think step by step” - bonus points if you define the steps
- Use built-in helpers: Claude has an awesome prompt generation AND prompt improvement tool - I use these ALL THE TIME.
- Build your toolkit: Pick one task you do often and test different AIs (check my last video on making benchmarks that matter) then collect prompts that work well for your specific needs
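The two fundamentals above - few-shot examples and chain-of-thought - are easy to sketch in code. Here's a minimal illustration in Python; the sentiment task, the example wording, and the `build_prompt` helper are my own invention, not a prescribed template:

```python
def build_prompt(task, examples, steps=None):
    """Combine few-shot examples and a chain-of-thought cue into one prompt."""
    parts = []
    # Few-shot: show the model what a good question/answer pair looks like.
    for question, answer in examples:
        parts.append(f"Q: {question}\nA: {answer}")
    parts.append(f"Q: {task}")
    # Chain-of-thought: ask for explicit reasoning, optionally with named steps.
    cot = "Take a deep breath and think step by step."
    if steps:
        cot += " Follow these steps: " + "; ".join(steps)
    parts.append(cot)
    return "\n\n".join(parts)

prompt = build_prompt(
    "Classify the sentiment of: 'The battery died in an hour.'",
    examples=[("Classify the sentiment of: 'I love this phone.'", "Positive")],
    steps=["identify emotional words", "weigh positive vs negative", "output one label"],
)
print(prompt)
```

The point isn't the helper function - it's that both techniques are just structured text you paste before your question.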
Intermediate:
- Go through Anthropic’s Prompt Engineering guide and check off anything you’re not comfortable with
- Study the masters: The Big Prompt Library lets you compare how different AI assistants think
- Into the Playground: OpenAI's playground is perfect for experimenting with function calling and structured outputs, two super useful techniques
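Function calling comes down to handing the model a JSON schema for each tool it's allowed to call. Here's a minimal sketch of one tool definition in the shape OpenAI's chat API accepts - the `get_weather` function and its parameters are hypothetical, invented just for illustration:

```python
import json

# One tool definition in the "tools" format used by OpenAI's chat completions API.
# The function name, description, and parameters here are made up for illustration.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. 'Paris'"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}

# In the Playground (or via the API) you would pass tools=[weather_tool];
# the model then replies with a function name and JSON arguments instead of prose.
print(json.dumps(weather_tool, indent=2))
```

Structured outputs work the same way in spirit: you constrain the model's reply to a schema instead of free text, which makes its answers machine-parseable.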
Advanced:
- Scale your testing: Try LangWatch's optimization studio for systematic improvement
- Implement new research: REV THINK and LLMs as Method Actors offer powerful new approaches - good places to start, but there are so many more if you look
- Create meta-prompts: Use OpenAI's guide to generate and optimize prompts automatically
- Try jailbreaking LLM-powered tools to get them to share their system prompts
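The meta-prompt idea above is simply a prompt that writes prompts: you describe your task, and the model drafts a full prompt for it. A minimal sketch - the template wording below is my own, not OpenAI's official meta-prompt:

```python
# A meta-prompt: instructions for a model whose job is to write prompts.
# The wording is illustrative, not an official template.
META_PROMPT = """You are an expert prompt engineer. Given a task description,
write a detailed prompt for a language model that includes:
1. A clear role and goal
2. Two worked examples (few-shot)
3. Explicit reasoning steps (chain of thought)
4. The required output format
Return only the prompt, nothing else."""

def make_meta_prompt(task_description):
    """Wrap a plain task description in the meta-prompt for a prompt-writing model."""
    return f"{META_PROMPT}\n\nTask description: {task_description}"

request = make_meta_prompt("Summarize customer support tickets into a one-line triage label.")
print(request)
```

Send `request` to any capable model and you get back a ready-to-test prompt - then iterate on the meta-prompt itself as you learn what its outputs are missing.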
Remember that bike analogy? There's one more parallel: once you learn to ride, you never forget. But more importantly - it opens up entirely new places to explore. That's where we are with AI prompting. The fundamentals are waiting for you above. The tools are ready. And somewhere between your expertise and AI's capabilities lies the next breakthrough. Time to start riding. 🚲
I’m Al with AlxAI,
December 4th 2024
And that’s how I see it.