Episode 25
Why Your Messy Human Brain Is Smarter Than Any AI
The smartest man in the world mathematically proved airplanes were impossible. One year later, the Wright Brothers flew at Kitty Hawk. What did two bicycle mechanics see that a genius with perfect data and flawless logic couldn't? And why did the U.S. Army discover that teaching their elite soldiers to think more like computers was actually making them worse at their jobs?
Here's the thing nobody tells you: every time you open an app, fill out a form, or stare at a menu trying to translate "I want something Italian that's healthy" into the categories a machine demands, you're training yourself to think like a computer. And computers, it turns out, are probability machines that can only see what's already happened. Your brain is a possibility machine that can imagine what's never existed. You just forgot how to use it. This episode will remind you—and it might change how you make every decision from here on out.
Sign up for the Dumbify newsletter: https://www.david-carson.com/
Dumbify celebrates ideas so weird, wrong, or wildly impractical… they just might be brilliant. Hosted by David Carson, a serial entrepreneur behind multiple hundred-million-dollar companies and the go-to secret weapon for companies looking to unlock new markets through unconventional thinking. Dumbify dives into the messy, counter-intuitive side of creativity — the “dumb” ideas that built empires, broke rules, and ended up changing everything.
Transcript
... "I'm sitting in Panera Bread last Thursday with my sandwich, and I'm watching this guy, clearly on his lunch break, trying to order from one of those touchscreen kiosks. He taps the screen. The kiosk says, 'Select your bread. Sourdough, white, whole grain, French baguette, ciabatta.'" He stares at it, taps nothing. I can see him trying to figure out what he wants. Finally, he mutters to himself, "'I don't know, I just want something Italian that's healthy.' The kiosk blinks back at him. 'Select your bread.' He randomly picks ciabatta, probably because it sounds Italian. Now the screen says, 'Select your protein, turkey, ham, roast beef, bacon, tuna, chicken.'" He's scrolling, scrolling. "'I don't wanna build a sandwich,' he says to the screen. 'I just want something that tastes Italian that's healthy.' The kiosk doesn't understand anything he's saying. The kiosk wants him to pick a protein. He backs out, tries to start over, gets stuck at the bread screen again. I watch him go in circles for three minutes. Bread, protein, back, bread again, before he just gives up and walks out. No sandwich. And I'm sitting there looking down at my own lunch, which I'd ordered maybe five minutes earlier from an actual human at the counter. I'd walked up and said, 'Hey, I want a sandwich that tastes like Thanksgiving.' She didn't even pause. 'Turkey, stuffing, cranberry sauce, little gravy on sourdough, side salad instead of chips.' Perfect, two minutes. Done." That woman went on break right after. That's why this guy got stuck with the kiosk. And I'm realizing that kiosk isn't stupid. It's actually incredibly sophisticated. It's got every ingredient cataloged, every combination possible, perfect memory, consistent execution, but it needed him to think like it thinks. It needed him to start with bread, then protein, then toppings, then condiments, because that's how its database is structured. That's its decision tree. That guy didn't wanna think in categories. 
He wanted to think in outcomes. An Italian sandwich that's healthy, that's a perfectly clear thought to another human. To the kiosk, it's meaningless noise. And here's what's making me crazy. Everything around us is doing this now. Every app, every form, every algorithm, every AI interface, they all want us to think in their structure, their categories, their logic, their way. It's like asking a dolphin to climb a tree. You could build scaffolding. You could train it. You could technically force it to happen, but everyone's going to be miserable. The dolphin's going to be terrible at it, and you're completely ignoring what dolphins are actually genius at. We're not computers. We think in stories, in feelings, in possibilities, in something Italian that's healthy, but we've built a world that demands we translate everything into computer speak before we're allowed to participate. What if the smartest thing you could do right now is to stop trying to think like a computer? Welcome to Dumbify, the only show that celebrates the fact that your brain is not a calculator, and that's your secret weapon. I'm your host, David Carson. Let's get dumb.
‘DUMBIFY’ THEME SONG:Dumbify, let your neurons dance. Put your brain in backwards pants. Genius hides in daft disguise. Brilliance wears those googly eyes. So honk your nose and chase that spark. Dumb is just smart in the dark. Dumbify, yell it like a goose. It's thinking wrong on purpose with juice.
David Carson:Okay, so here's what we're told about intelligence. The smarter you are, the more you can process information, identify patterns, and make data-driven predictions about the future. Basically, the smarter you are, the more you think like a really good computer. And listen, I get why this idea took hold. Computers are incredible at certain things. They can crunch numbers we can't touch. They can find patterns in massive datasets. They can beat us at chess, and Go, and Jeopardy. But here's what we forgot. Computers are probability machines. They look at what happened before and predict what will happen next. They're playing the odds. Humans, we are possibility machines. We can imagine things that have never happened, but could happen. And that's not just a fun party trick. According to research by a neuroscientist named Angus Fletcher, it's literally the difference between the kind of intelligence that created airplanes and the kind of intelligence that proved airplanes were mathematically impossible. Today, we're talking about why your messy, inefficient, storytelling human brain is actually smarter than any AI if you remember how to use it. Let me tell you about the world's smartest man being spectacularly wrong. His name was Lord Kelvin. Born William Thomson in 1824, he was basically the human version of a supercomputer.... he made groundbreaking discoveries in thermodynamics. He helped lay the first transatlantic telegraph cable. He had an absolute unit of temperature named after him. This guy was serious. And in 1895, at the height of his powers, Lord Kelvin declared, with complete confidence, "Heavier-than-air flying machines are impossible." Not unlikely. Not improbable. Impossible. He ran the numbers. He did the physics. He analyzed every attempt at flight up to that point, all failures. He looked at bird flight and concluded that human-scale aviation violated fundamental principles of physics and engineering. The data was clear. 
The probability of an airplane? 0%. Then, in 1902, seven years later, he said it again. Just in case anyone thought he'd softened, he reiterated that "no balloon and no airplane will ever be practically successful." One year later, December 17th, 1903, Orville and Wilbur Wright flew 120 feet in 12 seconds at Kitty Hawk, North Carolina. Now, Lord Kelvin wasn't being arrogant or stupid. He was being computational. He was thinking in probability. And probabilistically, he was right. Based on every single data point available, powered human flight had a 0% chance of success. But the Wright Brothers weren't thinking in probability. They were thinking in possibility.
And here's where it gets interesting. According to Angus Fletcher's research in his book, Primal Intelligence, we know exactly why the Wright Brothers could see what Lord Kelvin couldn't. Their father, Milton Wright, had this weird rule. The boys could skip school any day they wanted as long as they spent that day reading a novel from his bookshelf. So Wilbur and Orville would stay home and read Charles Dickens. They'd immerse themselves in narratives about people doing things that had never been done before, and this did something to their brains. It trained them to think in story, which is another way of saying it trained them to think in possibility. When they looked at the airplane problem, they didn't ask, "What does the data say?" They asked, "What doesn't violate the laws of physics?" Flight didn't violate gravity. It didn't violate lift. It had never happened, but it was possible. That's a completely different cognitive mechanism than probability. Probability looks backward at patterns. Possibility looks at constraints and asks, "What could exist within these rules that's never existed before?" The Wright Brothers saw a possibility. Lord Kelvin saw a probability of zero. Both were using intelligence, but only one type of intelligence creates the future.
David Carson:Albert Einstein made the same move when light refused to behave like everything else. A computer would never look at light and say, "Let's make everything work like the exception." That's computational suicide. But human brains? We evolved to do exactly that, and we're really, really good at it when we remember how.
‘TIME FOR SCIENCE’ THEME SONG:Time for science. Time to get unnecessarily nerdy with it. 'Cause nerding out is what we do. And we're not going to apologize for it. Get ready for...
David Carson:So in 2021, Angus Fletcher, who's a professor at Ohio State's Project Narrative, got a knock on his door. And to his surprise, it was the United States Army. Now, Fletcher is an English professor who studies Shakespeare and neuroscience, so this was unexpected. But the army had a problem. They'd invented computers back in World War II. Literally, they built ENIAC, the first electronic general-purpose computer, and they'd also, around the same time, invented something called ideation, basically brainstorming and design thinking. The idea was computers can predict the future with data and ideation can generate creative options. Together, they'd create the perfect decision-making system, except it didn't work. The army noticed that their special operations soldiers, the ones who had to get dropped into chaos and improvise new plans when everything went sideways, were getting worse at their jobs the more they trained in ideation and computer-style thinking. So they brought in Fletcher and said, "Explain what's happening in the brains of our best operators, the ones who can see the future faster, who can spot threats nobody else sees, who can create plans that work in situations with no data." And Fletcher discovered something wild. The brain is not a computer.
SFX:[gasps]
David Carson:Like, at all.
SFX:[gasps]
David Carson:Not even close.
SFX:No.
SFX:Yes.
David Carson:A computer is built on transistors. It processes information through electrical gates that are either on or off. It learns by absorbing massive amounts of data and finding patterns through a process called induction.
The human brain? It's built on animal neurons, which are over half a billion years old.
SFX:Oh, yeah.
David Carson:And animal neurons evolved for one specific purpose, to act intelligently in uncertainty. Fletcher describes it like this ...
SONG:A computer is a calculator. The human brain is a Swiss Army knife. And what can you do with a Swiss Army knife that you can't do with a computer? Basically everything, except math.
David Carson:Here's the key difference. Computers think in probability because they learn from data. The more data they have, the better they perform. But the moment you put them in a foggy, uncertain, rapidly-changing environment where data is thin, they break down completely. Human brains evolved to do the opposite. We're designed for volatility. Our intelligence comes from our ability to improvise, to initiate new actions, to learn through rapid feedback, and to create entirely new approaches when old ones don't work. And the way we do this, through narrative cognition, through story. Fletcher worked with Army Special Operations to develop training based on this. Instead of teaching brainstorming, they taught operators to spot exceptions to rules, double down on those exceptions, turn them into new plans, test them rapidly, iterate when they don't work. The results? Operators saw threats faster. They healed quicker from trauma. In life-and-death situations, they made better decisions. Their performance in volatile environments went way up. Then they tested it on civilians, entrepreneurs, doctors, engineers, managers, salespeople, coaches, teachers, NFL players. Same results. Leadership improved. Innovation increased. People coped better with change and uncertainty. Then they tried it on students as young as eight years old. It worked there, too. Because here's what Fletcher proved. The ability to think in possibility instead of probability isn't magic. It's not some mystical creativity that you either have or don't. It's a specific neurological process that runs on mechanisms computers literally cannot replicate. Computers can't think in negatives. They can't imagine something not happening. They can't understand causation, only correlation. They can't think in narrative, only in data. And they can't create genuine novelty. They can only recombine what already exists. Your brain can do all of that. 
You just forgot, because you've been training yourself to think like a machine.
WORD OF THE DAY THEME’ SONG:Dumb, dumb, dumb, dumb, dumb word of the day. Dumb word of the day. It's a word. It's dumb. Use responsibly.
David Carson:All right. It's time for my absolute favorite part of the show. It's time for Dumb Word of the Day. And today's dumb word is "apophenia," spelled A-P-O-P-H-E-N-I-A. Apophenia. It's a beautiful word that describes a very human problem. Apophenia is the tendency to perceive meaningful connections between unrelated things, to see patterns that aren't actually there. The term was coined in 1958 by German psychiatrist Klaus Conrad, who was studying the early stages of schizophrenia. But you don't have to have schizophrenia to experience apophenia. We all do it constantly. You know when you're scrolling through your phone and you suddenly see three unrelated posts about... I don't know, hedgehogs, and you think, "The universe is trying to tell me something about hedgehogs"? That's apophenia. Or, when you're wearing your lucky socks and your team wins, so obviously the socks caused the victory. Apophenia. Here's why it matters for today's episode. We've built an entire economic system on industrial-grade apophenia. We call it data-driven decision-making. We look at correlations in massive datasets and assume they mean something. We find patterns in past behavior and assume they predict future behavior. We're pattern-matching machines who've convinced ourselves that if we just find enough patterns, we can eliminate uncertainty. That's computer thinking. That's probability thinking. And it works great, until it doesn't. Let's try using it in a sentence. "Kevin's apophenia led him to believe that because all five of his successful marketing campaigns had been launched on Tuesdays, he should only launch campaigns on Tuesdays, which is why he missed the perfect Wednesday opportunity that would have made him a millionaire." Poor Kevin. Moving on. So look, I did something dumb. I decided to test this out in my own life. I've been working on growing this podcast for a minute now, and I've gotten very good at thinking in probabilities.
I know which topics tend to perform well. I know which episode structures get the most downloads. I know what works, kinda. But I was stuck. Growth had plateaued. And every time I'd think about trying something new, I'd look at the data and go, "But that's never worked before." After reading Fletcher's book, I realized I was asking the wrong question. I was asking, "What does the data say?" when I should've been asking, "What's possible?" So, last month, I did an episode about something I'd never covered before, in a format I'd never tried, with zero data suggesting it would work. I just asked, "Is this possible within my constraints? Do I have something interesting to say here? Does it violate any laws, legal or otherwise?" No, no, and no. So, I did it. That episode has become one of my most downloaded of all time. And here's the thing. I could never have predicted that with probability thinking. The data would have told me not to do it. But possibility thinking said, "This could work, and there's only one way to find out." I'm not saying data is useless. Lord Kelvin was a genius, and his kind of probability thinking has helped to build the modern world, no question. But when you hit a roadblock, or when the environment has changed rapidly, you need a new plan. That's when you need to switch from asking, "What happened before?" to asking, "What could happen next?" So, here's your challenge this week, and I'm calling it the exception expedition. It has three basic steps. Step one, catch yourself pattern-matching. This week, pay attention to how often you make decisions based on, "This is what successful people always do." Notice when you dismiss something because, "That's never worked before," or, "Nobody does it that way." Every time you catch yourself thinking in patterns, write it down. No judgment, just notice. Step two, find your exception. Look at your life and find one thing that's working but doesn't fit the pattern. 
Maybe you're most productive at midnight, even though everyone says morning is optimal. Maybe you had your best business idea in the shower, not in a brainstorming session. Maybe your healthiest relationship is with someone who breaks all of your type rules. Find the exception, the thing that works but shouldn't according to the data. Step three, double down on it. Instead of treating it as a weird outlier, ask, "What if this is the rule? What if my whole approach was built around this exception instead of trying to fit this exception into my old approach?" You don't have to blow up your life. Just do one thing this week based on your exception instead of your pattern. Bonus points: do something that has zero probability based on your past, but is possible within your actual constraints. Ask someone out who's not your type. Apply for a job you're not qualified for. Make art in a medium you've never touched. Don't ask, "What does the data say about my chances?" Ask, "Is this possible?" Then try it and see what happens. Share your results. Tag me. I want to hear about your exceptions.
David Carson:And that's our show. Thanks for getting dumb with me today. If you enjoyed this episode, share it with someone who needs to remember they're not a calculator. Subscribe wherever you listen to podcasts, and leave a review if you're feeling generous. If you want even more dumbness from the Dumbify Dummyverse, get the Dumbify newsletter at david-carson.com. I'm your host, David Carson. Until next time, keep thinking in possibilities.
