The Speculation Engineer
asking ai to predict the future & tracking its success rate
CURRENT RECORD:
0-1
ABOUT THIS PROJECT:
This project started inside my head during a presentation called “You To The Power of AI” by Dharmesh Shah, Co-Founder & CTO of HubSpot. More specifically, it began about 2 minutes into the presentation when Dharmesh became the first person (ever) to clearly explain how LLMs work.
Here's the transcript of that moment:
(02:00) First: what is a Large Language Model? The simple answer is: glorified autocomplete. That’s intentionally reductive because it helps strip away the mystique. The technical name we use is LLM — but think of it as autocomplete with a PhD in everything. It predicts the next token (roughly 3/4 of a word) based on everything it saw during training. If you can picture the model as an extremely sophisticated prediction engine, a lot of the scary and the useful stuff starts making sense.
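That "glorified autocomplete" idea can be sketched in a few lines of code. The toy below is a bigram model: it counts which word most often follows each word in a tiny sample text, then "predicts" by always picking the most frequent follower. Real LLMs use neural networks over tokens (not word counts), but the predict-the-next-piece loop is the same shape. The corpus, function names, and everything else here are illustrative, not from the talk.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each word follows each other word."""
    words = text.lower().split()
    followers = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        followers[prev][nxt] += 1
    return followers

def predict_next(followers, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = followers.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

corpus = "the model predicts the next word and the next word again"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # prints "next" ("next" follows "the" twice, "model" once)
```

Swap the ten-word corpus for a meaningful slice of the internet, swap word counts for a transformer, and you have the "autocomplete with a PhD in everything" Dharmesh is describing.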
Up to this moment, the hype-cycle & its sleazy hype-men & hype-women had already said so little, so loudly, & so often that my soul went numb the second anything vaguely AI-related hit my senses.
And why wouldn't I be numb to it? Their LinkedIn profiles were familiar & the message seemed like the same, vapid horseshit as always. Sure, they'd swapped out meaningless words like blockchain, augmented reality, web 3, IoT, and non-fungible tokens for new words like prompt engineering, LLM, hallucination, generative AI, and generative pre-trained transformer. You know what they say:
Fool me once, strike one. Fool me twice? Strike three!
Either way: I'll be damned if more than 6.7% of the people shouting from the rooftops (of LinkedIn) have a grasp of the subject deep enough to create as simple an explanation as the one above.
The people who know their shit can explain it clearly. That'll never change.
Prior to hearing Dharmesh's explanation, I'd only heard one thing about AI that I believed:
"The AI Transformation is happening whether you're ready for it or not."
And the reason I believed it?
Because the people saying it are the tech executives who have the power & motivation to make it true. Their power comes from years of carefully manufacturing our dependence on their technology. Their motivation is rooted in a desire to stay in power. And to keep that power they need to endlessly create more & more value for their shareholders.
Repackaging old truths and selling them as disruptive innovation shrouded by confusing jargon is the standard operating procedure. Only now, the innovation can be marketed as being more intelligent than its users.
So ready or not, here comes the AI Transformation: complete with frustrating imperfections that will leave you feeling gaslit and impossibly vague pricing that will nickel and dime you into providing shareholder value.
So when Dharmesh, who seems to genuinely take the time to understand things deeply, broke it down in simple, understandable terms rather than relying on hyperbole or jargon, I actually heard what he was saying.
All the bullshit, noise, & confusion on the topic suddenly made sense. It's not actual intelligence. It just has access to an uncanny amount of digitized information & enough computing power to process & contextualize it so swiftly that it guesses correctly often enough to seem brilliant.
And when it's wrong, half the time its users won't notice. The other half will say it's just "hallucinating". The same way I'm hallucinating my way to last place in a third of my fantasy football leagues: by guessing wrong.