The 1996 Space Jam website is still online. You can visit it right now at spacejam.com/1996/. No CSS, no JavaScript frameworks, no backend. Just HTML tables, GIF images, and a twinkling star background. Warner Bros. built it to promote one of their biggest movies of the year, and the design team later called it a nightmare to create. “No one had really done web development to that point,” is how they described it. It was, as the Internet Archive put it, “created at a time when the exact relevance of websites in the spectrum of mass media promotion was still being worked out.”

This was state of the art.

In the late 1990s, web designers didn’t have CSS as a reliable tool. CSS was first proposed in 1994 and became a W3C recommendation in 1996, but many browsers didn’t support it fully until well into the 2000s. The most common technique for building a page layout was to design the entire thing as an image in Photoshop, slice it into rectangles, and then reconstruct those rectangles using HTML tables as scaffolding. Spacer GIFs, invisible 1x1 pixel images, were resized and jammed between elements to push things into position. It was the digital equivalent of building furniture out of duct tape and cardboard. It worked, sort of, if you didn’t look too hard and your visitor happened to be using the same browser you tested in.

And browsers were their own problem. The Browser Wars between Netscape and Internet Explorer meant each one shipped exclusive, incompatible HTML features. A site that looked right in Netscape could fall apart in IE. Meanwhile, Geocities and Angelfire let anyone build a website, and the results were auto-playing MIDI music, scrolling marquee text, animated GIFs of construction workers, and guest books. Corporate web design didn’t look all that different from someone’s personal site; websites were still a new frontier.

Nobody had best practices because best practices hadn’t been invented yet.

But out of that chaos, patterns emerged. Tables gave way to CSS. Flash had its moment and died. Responsive design appeared when smartphones demanded it. What started as creative chaos became an industry with recognized practices, shared vocabulary, and professional standards.

LLMs are at that same inflection point right now. People are experimenting, hacking together workflows, and discovering what works through trial and error. Some patterns have already been named: Andrej Karpathy coined vibe coding and later agentic engineering, researchers formalized RAG, and practitioners have written about prompt chaining. Others are still emerging from practitioners doing the daily work. This article maps the ones I’ve observed and started using, from both my own experience and the broader community.

Why you should care

Before generative AI, I wanted to build a Chrome extension. A simple one. All it needed to do was remove YouTube’s recommended videos, strip out Facebook’s newsfeed, and get rid of a few other distracting components on those pages. Conceptually straightforward: match the right elements on the page and remove them.

In practice, it meant reading through Chrome’s extension documentation, understanding content scripts and manifest files, debugging injection timing, figuring out why elements kept reappearing after the page dynamically loaded new content, and doing the whole cycle of trial and error that comes with working in an unfamiliar API for the first time. Hours of work for something that amounted to matching and removing a handful of DOM elements.

That ratio felt wrong at the time, and now it’s obvious why. With an LLM, you describe what you want, iterate on the result, and within a fraction of the time you have something that works. The barrier between “I have an idea” and “I have a working thing” has collapsed. For personal-use tools and side projects, this is the fun part. You get to focus on the creative problem instead of grinding through documentation for the hundredth time.

But the value goes beyond side projects. It extends into the tedious-but-necessary parts of daily work that nobody loves.

I used to spend a significant amount of time writing stories in our team’s tracking system. There was a specific template to follow, specific details to include, acceptance criteria to formalize and refine. Each story took real effort to get right. Now: template plus a simple prompt describing the feature. The LLM fills it out. I review it, adjust where needed, and submit. What used to take thirty minutes takes two. And it’s not just story writing. It’s every process like this: formatting reports, drafting communications, writing boilerplate configuration files. The mundane time savings compound across every working day.

And the people around you are already figuring this out.

One engineer on my team invested some time setting up a workflow to upgrade dependency versions in a project using an AI agent framework. The pattern was specific enough to be reliable and general enough to apply broadly. Once it worked, it could be used by practically every other project we had. One person’s investment became a force multiplier for the whole team.

That’s the dynamic playing out across engineering teams everywhere. Someone figures out a pattern, shares it, and the group gets faster. The gap between people who have built LLMs into their workflow and those who haven’t is becoming visible. Not because the technology replaces anyone, but because it amplifies the people who learn to use it. Just as knowing your way around the internet became a baseline expectation for knowledge workers, knowing how to work effectively with LLMs is heading in the same direction. You want to be building these skills now, not catching up later.

One way to close that gap is to start recognizing the patterns that are already emerging. The dependency upgrade workflow I just described didn’t come out of nowhere. It was a specific instance of a more general approach: create a reusable, structured process and let the LLM execute it. That approach has a shape. It has tradeoffs. It has situations where it works well and situations where it doesn’t. And it’s not the only one.

The more I’ve worked with LLMs across different projects, the more I’ve noticed these recurring patterns. Some are simple habits that take five minutes to adopt. Others are multi-step workflows that require more setup but pay off over time. What follows is a map of the ones I’ve found most useful, with enough detail to understand each concept and see where it applies. Future articles will go deeper into each one, with links added here as they’re published. Think of this as the table of contents for an ongoing series.

The patterns

The Panel of Experts Pattern

Ask the same question to multiple different LLMs and compare the answers. Claude, GPT, Gemini, whichever models you have access to. The simplest version is opening three browser tabs.

The value is in the comparison. When three different models give you the same answer, you can be more confident it’s correct. When they disagree, that’s where the interesting investigation starts. One model might catch an edge case another missed. One might frame the problem in a way that reveals an assumption you hadn’t examined. The disagreements are often more useful than the agreements.
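The simplest automation of this is a small fan-out helper that collects each model’s answer and flags where they diverge. The sketch below is illustrative only: `panel_of_experts`, the stub lambdas, and the answers are hypothetical stand-ins for real API clients.

```python
from collections import Counter

def panel_of_experts(question, models):
    """Fan one question out to several models and group the answers.

    `models` maps a model name to a callable returning its answer; in
    real use these callables would wrap API clients (Claude, GPT, Gemini).
    """
    answers = {name: ask(question) for name, ask in models.items()}
    tally = Counter(answers.values())
    top_answer, votes = tally.most_common(1)[0]
    return {
        "answers": answers,
        # Consensus only counts if at least two models agree.
        "consensus": top_answer if votes > 1 else None,
        # The divergent answers are where the investigation starts.
        "disagreements": {n: a for n, a in answers.items() if a != top_answer},
    }

# Stub callables standing in for real model API calls.
models = {
    "model_a": lambda q: "use a queue",
    "model_b": lambda q: "use a queue",
    "model_c": lambda q: "use a ring buffer",
}
result = panel_of_experts("What structure fits a bounded producer/consumer?", models)
```

The structure mirrors the manual three-tab version: collect, tally, then look hardest at whatever didn’t converge.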

This is the same instinct behind getting multiple quotes from contractors, or reading reviews from several sources before making a purchase. You’re not trusting any single source. You’re looking for convergence and paying attention to divergence.

In practice, I’ve found this most valuable for architectural decisions and unfamiliar domains, situations where I’m not sure I’m asking the right question in the first place. A single model will give you a confident answer. Three models giving you three different confident answers tells you the problem is more nuanced than it first appeared.

Deep dive: coming soon

The Context Compaction Pattern

LLMs have a finite context window. They can only hold so much of your conversation in memory at once. In long working sessions, this creates a practical problem: important decisions you made early in the conversation get pushed out as newer messages take up space. The quality of the model’s responses starts to degrade. Researchers call this “context rot,” and if you’ve ever noticed an LLM seeming to forget what you agreed on twenty minutes ago, you’ve experienced it.

Rather than starting over with a fresh conversation, you can proactively compress.

The approach I’ve found useful is maintaining a work log, a markdown file that acts as a running session diary. It captures what was investigated, what was decided, what’s still open. Periodically, you ask the agent to run a compaction on it: keep open questions, key decisions, and active experiments. Discard anything that’s recoverable from git history or the current codebase. Condense completed work into brief summaries.
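One way to make the compaction repeatable is to keep the instructions as a fixed prompt and splice the current log into it. A minimal sketch; the prompt wording is my own and an assumption, not a standard:

```python
# The exact wording here is an assumption -- adapt it to your own work log.
COMPACTION_PROMPT = """\
Compact the work log below. Rules:
- Keep open questions, key decisions, and active experiments verbatim.
- Condense completed work into one-line summaries.
- Discard anything recoverable from git history or the current codebase.
Return only the compacted log, in the same markdown structure.

--- WORK LOG ---
{log}
"""

def build_compaction_request(log_text: str) -> str:
    """Wrap the current work log in the fixed compaction instructions."""
    return COMPACTION_PROMPT.format(log=log_text)
```

Having the rules written down once, rather than improvised each session, is what makes the habit stick.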

Think of it like keeping a meeting’s action items while throwing away the full transcript. The context stays focused on what matters for the next step.

This pattern is already built into some tools at the infrastructure level. Claude Code, for example, auto-compacts when the context window fills up. But you don’t have to wait for tooling to adopt this. Maintaining a work log yourself and deliberately compacting it is a habit that makes long sessions dramatically more productive. The discipline of deciding what to keep and what to discard is itself valuable. It forces you to clarify what actually matters in your current context.

(Inspired by the work log concept from randsinrepose.com.)

Deep dive: coming soon

The Context Bridge Pattern

You’ve solved a problem before, or a teammate has, or an open-source project has. You need to implement something similar in a different codebase. The traditional approach is to study the existing implementation, hold the relevant pieces in your head, switch to the new project, and start writing code informed by what you remember.

The Context Bridge pattern replaces that mental juggling with a structured knowledge transfer.

Step one: point the LLM at the existing project and ask it to read and understand the codebase. Step two: narrow the focus. Ask it to document specifically how a particular feature works: the architecture, the key decisions, the implementation steps. Step three: have it produce a portable markdown file with that context. Step four: open a new session for your new project, feed that markdown file as context, and ask it to implement a similar feature based on the reference document.
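A bridge file produced in step three might look something like this. Every specific here (the feature, the decisions, the steps) is hypothetical; the point is the structure: architecture, key decisions, implementation steps.

```markdown
# Bridge: rate-limiting middleware (from project-a)

## Architecture
- Token-bucket limiter wrapped around the HTTP handler chain.
- Limits keyed by client ID, held in an in-process map.

## Key decisions
- Token bucket over fixed window, to avoid burst spikes at window edges.
- Limits are configuration-driven, not hard-coded.

## Implementation steps
1. Define the bucket type (capacity, refill rate, last-refill timestamp).
2. Wrap the router with middleware that checks and consumes a token.
3. Return 429 with a Retry-After header when the bucket is empty.
```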

The markdown file is the bridge. It carries the relevant knowledge from one project’s context into another’s, without you needing to hold it all in your head or re-explain it from scratch.

This works because the bottleneck in cross-project knowledge transfer isn’t understanding. It’s packaging. You probably understand how the existing feature works. The hard part is extracting exactly the right context, at the right level of detail, and presenting it in a way that’s useful for implementation. LLMs are good at this particular kind of synthesis, and the resulting document is reusable. You can hand the same bridge file to a different team member, a different agent, or a future version of yourself working on a third project.

It’s like taking detailed notes on how one team solved a problem and handing those notes to a different team so they don’t start from scratch. The LLM is both the note-taker and the reader.

Deep dive: coming soon

The Agent-Updater Pattern

Most people interact with LLMs in a conversational, one-off way. You ask a question, get an answer, and start fresh next time. The Agent-Updater pattern breaks out of that cycle by using the LLM to create reusable, structured prompt files that you feed back to it in future sessions.

In tools like VS Code’s Copilot, these are sometimes called “skills”: markdown files with structured prompts, context, and instructions that act as small, focused agents for specific tasks. The key insight is that you don’t have to write these by hand. You can describe what you need, “create a skill that reviews pull requests for security issues, focusing on input validation and authentication,” and the LLM generates the structured file for you. You review it, refine the edge cases, and save it.
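The generated skill file for that example might look roughly like this. Front-matter fields and their names vary by tool, so treat the shape as illustrative rather than any specific product’s format:

```markdown
---
name: security-review
description: Review a pull request for security issues.
---

Review the pull request diff you are given for security problems.
Focus on input validation and authentication.
For each finding, report the file, the line, the severity, and a suggested fix.
If you find nothing, say so explicitly instead of inventing issues.
```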

Now anyone on your team can invoke that skill. The LLM is building its own tools, and you’re reviewing and curating them.

This matters because it shifts LLM usage from a personal skill to a team capability. Instead of everyone on the team independently crafting their own prompts for the same recurring task, you produce a vetted, portable artifact that encodes the team’s standards and expectations. It’s the difference between everyone writing their own SQL queries from scratch and having a shared library of tested, parameterized queries.

The Agent-Updater pattern often uses the Template Pattern described below. You might have a base template for what a skill file should look like, and the LLM fills in the specifics for each new use case.

Deep dive: coming soon

The Template Pattern

This is the most fundamental pattern, and also the most abstract, which is why it’s last.

The Template Pattern combines a reusable template with user-specific context to generate a new artifact. The template encodes structure and standards. The user provides intent and specifics. The LLM bridges the gap.

You’ve probably used this pattern without naming it. If you’ve ever copied a standard format, pasted it into an LLM conversation, and asked the model to fill it in based on some description you provided, that’s the Template Pattern. It’s a fill-in-the-blank system, but one where the AI understands the blanks’ context and fills them in with judgment, not just text substitution.

At the simple end, consider a platform that requires specific configuration files for every new project. The files are largely identical except for a few particular details. Instead of copy-pasting and hand-editing, you feed the template plus your requirements to the LLM. It generates the configuration, you review it, done. The template encodes what “correct” looks like. Your input provides what’s specific to this instance.
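That simple case can be sketched in a few lines: a fixed template with marked blanks, plus a prompt that hands the LLM the template and the instance-specific description. Both the template and the prompt wording below are illustrative assumptions, not any platform’s actual format.

```python
# A hypothetical configuration template; <...> marks the blanks to fill.
CONFIG_TEMPLATE = """\
service:
  name: <NAME>
  port: <PORT>
  healthcheck: /healthz
"""

FILL_PROMPT = """\
Fill in the <...> placeholders in this configuration template using the
project description below. Keep everything else exactly as written.

Template:
{template}

Project description:
{description}
"""

def build_fill_prompt(description: str) -> str:
    """Combine the fixed template with instance-specific context."""
    return FILL_PROMPT.format(template=CONFIG_TEMPLATE, description=description)
```

The template carries the standard, the description carries the intent, and the review step stays with you.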

At the more sophisticated end, GitHub’s Spec Kit is built entirely on this pattern. Spec Kit structures AI-assisted development into phases: you describe what you want to build, and the tool maps your input through a series of template files. The first phase, /specify, takes your high-level feature description and generates a detailed specification. The second phase, /plan, takes that spec plus your technical direction and produces an implementation plan. The third phase, /tasks, breaks the plan into small, reviewable work items. Each phase is a template that the LLM fills in with the context from the previous phase and your input. The templates encode what a good spec looks like, what a thorough plan covers, how tasks should be scoped. Your input provides the specifics of what you’re building and how.

I’ve used Spec Kit on several projects, and the thing that stands out is how much of the value comes from the templates themselves. They capture decisions about structure and completeness that you’d otherwise have to remember and enforce yourself. The LLM handles the synthesis. You handle the review.

The reason I call this pattern cross-cutting is that it shows up inside many of the others. The Agent-Updater pattern uses a template for what a skill file should look like. The Context Bridge produces a templated markdown document. The story-writing workflow from the “why you should care” section is a template plus a prompt. Wherever you have a recurring structure that gets customized per use case, you have a candidate for the Template Pattern. Once you start looking for it, you’ll see it everywhere.

Deep dive: coming soon

The patterns are still emerging

There are many more patterns out there that others have named and formalized. On the engineering side, patterns like RAG, prompt chaining, and Chain of Thought describe how LLMs process and retrieve information. Martin Fowler’s team at Thoughtworks has been cataloging emerging GenAI patterns from their client work, noting that these are still “early days” for the field. On the human collaboration side, Ethan Mollick’s research at Wharton describes Centaur and Cyborg models for how people divide work with AI. Simon Willison has written extensively about practical LLM workflows for code. These are worth exploring in their own right, and I may cover them in future entries in this series.

But the patterns in this article come from a different place. They come from daily practice, from the workflows that people are building right now as they figure out how to make LLMs genuinely useful in their work. Much like the Gang of Four’s software design patterns, which weren’t invented from scratch but emerged organically as developers recognized recurring solutions across projects, the patterns here are observations, not prescriptions.

These patterns aren’t final answers. Some will be refined, renamed, or merged as the field matures. New ones will appear that nobody has thought of yet. That’s the nature of an emerging field.

In 1996, the people who learned to build with HTML tables and spacer GIFs were the ones who became the web designers and developers who built the modern internet. They didn’t have best practices handed to them. They developed them through experimentation, through sharing what worked, and through gradually turning chaos into craft.

Right now, the people learning to work with LLMs effectively, building patterns, sharing what works, developing instincts for what these tools can and can’t do, are building the same kind of foundational advantage.

The question isn’t whether these patterns will become standard practice. It’s which ones we haven’t discovered yet.