2025

Recipes – A Pattern for Common Code Transformations

I did a thing. A very silly, very meta thing. I vibe-coded a CLI tool that summarizes YouTube videos, recorded myself making the tool, and then used the tool to summarize the video of me making the tool. And now, dear reader, you are reading a blog post that was largely generated from that summary.

But the real star of the show isn't the tool or the video – it's the Recipe Pattern, a way to encapsulate repetitive coding work into a write-once, reusable doc.

Recipe Bot

Supercharging LLM Classifications with Logprobs

I was just reading the classification chapter of Jay Alammar and Maarten Grootendorst's excellent book Hands-On Large Language Models. I felt inspired to extend their work and show yet another cool trick you can do with LLM-based text classification. In the book they demonstrate how an LLM can be used as a "hard" classifier to determine the sentiment of movie reviews. By "hard" I mean that it gives a single concrete answer: "positive" or "negative". However, we can go one better! Using "this one simple trick"™ we can build a "soft" classifier that returns the probability of each class rather than a single concrete choice. This makes it possible to tune the classifier – you can set a decision threshold on the probabilities so that classifications align optimally with a labeled training set.
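To make the trick concrete, here's a minimal sketch of a soft sentiment classifier built on the OpenAI chat completions API, which can return per-token logprobs. The model name, prompt, and class tokens are my own illustrative choices, not the book's code:

```python
# Minimal sketch: turn an LLM "hard" classifier into a "soft" one via logprobs.
# Assumes the OpenAI Python SDK; model and prompt are illustrative choices.
import math
from openai import OpenAI

client = OpenAI()

def classify_soft(review: str) -> dict[str, float]:
    """Return P(positive) and P(negative) for a movie review."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat model that exposes logprobs
        messages=[
            {"role": "system",
             "content": "Classify the movie review. Answer with exactly one word: positive or negative."},
            {"role": "user", "content": review},
        ],
        max_tokens=1,          # we only need the single class token
        logprobs=True,
        top_logprobs=5,        # inspect the top alternatives for that token
    )
    # Exponentiate the logprobs of our two class tokens to get probabilities.
    probs = {"positive": 0.0, "negative": 0.0}
    for cand in resp.choices[0].logprobs.content[0].top_logprobs:
        token = cand.token.strip().lower()
        if token in probs:
            probs[token] += math.exp(cand.logprob)
    # Renormalize over just the two classes so they sum to 1.
    total = sum(probs.values()) or 1.0
    return {label: p / total for label, p in probs.items()}

scores = classify_soft("A dazzling, heartfelt triumph from start to finish.")
print(scores)  # e.g. {'positive': 0.98, 'negative': 0.02}
```

With probabilities in hand, the cutoff between "positive" and "negative" stops being a fixed 0.5 and becomes a knob you can fit against your labeled data.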

Soft Classification

Fire Yourself First: The E-Myth Approach to Iterative AI App Development

I've always been interested in entrepreneurship, so, early on in my career, I asked my financial advisor for book recommendations about startups. He handed me "The E-Myth" by Michael Gerber – a book about... building food service franchises? In the heat of the dot-com explosion, this wasn't exactly the startup guide I was hoping for, but its core message stuck with me and turned out to be surprisingly relevant to the problems I regularly hear about when talking with people building reliable LLM applications.

Fire Yourself