Prompt Engineering

Context Engineering Requires AI Empathy

A big part of context engineering comes down to empathy. ... Does this sound surprising?

Consider this: LLMs have been trained to act like humans. So when you're building an AI agent, it's a useful exercise to put yourself in its shoes and walk around a bit. I like to think of the AI agent as an AI intern showing up for its first day of work. How would you feel if you came in for your first day of work and the boss handed you 50 pages to read? What if you only learned what you were supposed to do with this information after you had already read the 50 pages? And what if the instructions were poorly written, ambiguous, and impossible to achieve with the tools provided!?

In this post I'll go over several places where I've learned to empathize with the AI intern. By understanding the world from its unique vantage point, you can build better context for your agents and drastically improve the quality of your AI application.

Spec-Driven Development

Roaming RAG – RAG without the Vector Database

Let's face it, RAG can be a big pain to set up, and even more of a pain to get right.

There are a lot of moving parts. First you have to set up retrieval infrastructure. This typically means standing up a vector database and building a pipeline to ingest the documents, chunk them, convert them to vectors, and index them. In the LLM application, you have to pull in the appropriate snippets from the documentation and present them in the prompt so that they make sense to the model. And things can go wrong: if the assistant isn't providing sensible answers, you've got to figure out whether it's the fault of the prompt, the chunking, or the embedding model.
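To make those moving parts concrete, here's a minimal sketch of that pipeline in Python. Everything in it is illustrative: `embed` is a toy stand-in for a real embedding model, and the in-memory list of chunks stands in for a vector database.

```python
from dataclasses import dataclass

import numpy as np


def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in for a real embedding model: hash words into a bag-of-words vector."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    return vec


@dataclass
class Chunk:
    text: str
    vector: np.ndarray


def ingest(documents: list[str], chunk_size: int = 500) -> list[Chunk]:
    """Chunk each document and embed every chunk (a plain list plays the role of the index)."""
    chunks = []
    for doc in documents:
        for start in range(0, len(doc), chunk_size):
            text = doc[start:start + chunk_size]
            chunks.append(Chunk(text=text, vector=embed(text)))
    return chunks


def retrieve(query: str, index: list[Chunk], k: int = 3) -> list[str]:
    """Return the k chunks most similar to the query by cosine similarity."""
    q = embed(query)

    def score(c: Chunk) -> float:
        return float(np.dot(q, c.vector)) / (np.linalg.norm(q) * np.linalg.norm(c.vector) + 1e-9)

    return [c.text for c in sorted(index, key=score, reverse=True)[:k]]


def build_prompt(query: str, snippets: list[str]) -> str:
    """Present the retrieved snippets to the model alongside the question."""
    context = "\n\n".join(snippets)
    return f"Answer the question using only the documentation below.\n\n{context}\n\nQuestion: {query}"
```

Each of those pieces, the chunking strategy, the embedding model, the retrieval step, and the prompt assembly, is a place where answers can quietly go wrong.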

If your RAG application is serving documentation, then there might be an easy alternative. Rather than setting up a traditional RAG pipeline, put the LLM assistant to work. Let it navigate through the documentation and find the answers. I call this "Roaming" RAG, and in this post I'll show you how it's done.
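As a rough sketch of the idea (the function names and markdown-splitting below are my own illustration, not the implementation from the post), you expose the documentation itself as tools the assistant can call:

```python
import re


def parse_sections(markdown: str) -> dict[str, str]:
    """Split a markdown document into sections keyed by their headings."""
    sections: dict[str, str] = {"(intro)": ""}
    current = "(intro)"
    for line in markdown.splitlines():
        heading = re.match(r"^#+\s+(.*)", line)
        if heading:
            current = heading.group(1).strip()
            sections[current] = ""
        else:
            sections[current] += line + "\n"
    return sections


def list_sections(sections: dict[str, str]) -> str:
    """Tool 1: give the assistant an outline of the docs it can navigate."""
    return "\n".join(f"- {title}" for title in sections)


def read_section(sections: dict[str, str], title: str) -> str:
    """Tool 2: let the assistant expand whichever section looks relevant."""
    return sections.get(title, f"No section named {title!r}.")
```

The assistant starts from the outline, decides which headings look promising, and reads its way to an answer; there's no chunking, embedding, or vector store to maintain.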

Roaming RAG

Cut the Chit-Chat with Artifacts

Most chat applications are leaving something important on the table when it comes to user experience. Users aren't satisfied with just chit-chatting with an AI assistant; they want to work on something with the assistant's help. This is where the prevailing conversational experience falls short.

Asset-Aware Assistant