Possibility Debt

I both love and hate a Saturday with nothing planned. There is something genuinely appealing about waking up with a clear schedule and the feeling that I can do whatever I want. But what actually happens on those days is rarely what I imagined. I don’t make meaningful progress on any of the projects I’ve been meaning to get to. I don’t fully relax either, which would be an equally acceptable outcome. Instead, I drift. I start something, think of something else I could be doing, switch to that, remember a third thing, and by the end of the day I’ve touched everything and finished nothing. The absence of constraints, which was supposed to be the gift, turns out to be the problem.

I’ve been thinking about this pattern because I recently came across an essay by Francesco, co-founder of Cua AI, that describes something remarkably similar happening to software developers using AI coding tools. He calls it “vibe coding paralysis”—the syndrome of wanting to do so much, and being able to do so much, that you end up finishing nothing. The AI can scaffold a new feature in minutes, so while it’s working on one thing, you spin up another. Then a third idea hits, and why not, the tool can handle it. An hour later you have five things in flight, you can’t remember what the original task was, and you feel further behind than when you started. The capability is real. The productivity is an illusion.

What resonated with me wasn’t the developer-specific details—the worktrees and parallel sessions and terminal windows. It was the underlying dynamic, which I recognized immediately from my own Saturdays and from something I’ve been watching happen in education. I want to give it a broader name: possibility debt. It has two dimensions: the growing gap between what you could do and what you actually do, and the growing gap between what you start and what you actually finish. The first is paralysis—you can’t choose, so you don’t act. The second is fragmentation—you act on everything, and finish nothing.

One of the sharpest lines in the essay is this: “Value comes from finished things that work.” That’s it. Not from things you started, not from things you’re planning to get back to, not from the impressive list of projects you have in flight. Value comes from finished things that work. The 37signals folks have been saying a version of this for years—you’re better off building half a product than a half-assed product. But AI tools make the half-assed path so frictionless that it barely feels like a choice. You accumulate starts the way you accumulate browser tabs, and each one feels productive in the moment.

Possibility debt accumulates whenever capability outpaces intention. It’s the half-started weekend projects. It’s the five browser tabs of AI tools you signed up for and never learned properly. It’s the growing list of things you know you could be doing with these new technologies but aren’t, and the low-grade guilt that comes with knowing the list exists. Unlike a to-do list, which at least has the decency to be finite, possibility debt is open-ended. Every new tool, every new feature, every “you could also use AI for…” adds to the balance. And unlike financial debt, there’s no statement that arrives to tell you how much you owe.

Checklist usage is a special emphasis area for the FAA, and for good reason. In aviation, a checklist isn’t a constraint on your freedom—it’s one of the things proven to make flight safer. But the power of a checklist isn’t just in what’s on it. As Atul Gawande argues in The Checklist Manifesto, a good checklist is deliberately short—focused on what he calls “the killer items,” the steps that are most dangerous to skip. It is explicitly not a comprehensive guide to everything you could or should be doing. The whole point is that it cuts through the overwhelming complexity of a task and says: these are the things that matter too much to skip. Everything else can wait.

Before I start the engine, I check fuel quantity, mixture rich, primer in and locked, master switch on, beacon on, clear prop. There are a hundred other things I could be thinking about—weather at my destination, my passenger’s comfort, whether I filed my flight plan—but the checklist doesn’t ask me to think about those yet. It gives me the next thing to do, and by doing so, it takes the entire overwhelming possibility of flight and makes it a sequence of manageable actions. I don’t have to decide what to do. I don’t have to weigh options. The checklist is the result of those decisions having already been made, and of the accumulated lessons of everyone who made them before me.

The Saturday problem is an absence-of-checklist problem. There’s no sequence, no structure, no external system telling me what comes next. And so the entire day becomes a decision—not one decision, but an endless series of micro-decisions about what to do right now, each one competing with every other possible thing I could be doing instead. Decision fatigue sets in quickly, and the path of least resistance becomes doing a little of everything, which is functionally the same as doing nothing.

I bring this up because I think we are inadvertently creating a massive possibility debt problem in education with AI.

The current narrative around AI in education is almost entirely about expanding capability. AI can help teachers differentiate instruction. It can generate assessments. It can build interactive simulations. It can analyze student data. It can draft IEPs. It can create lesson plans tailored to individual learning styles. It can translate materials into multiple languages. It can provide real-time feedback on student writing. The list is, for all practical purposes, infinite.

And we present this list to teachers as if it’s good news.

I’ve sat in enough professional development sessions to know what actually happens when you tell an already-overwhelmed teacher that AI can do thirty new things for them. They don’t feel empowered. They feel behind. Every new capability that they’re not yet using becomes another item on an invisible ledger of things they should be doing but aren’t. The possibility debt compounds, and the emotional weight of it—the sense that everyone else has figured this out and they’re falling further behind—can be paralyzing.

This is not a new dynamic in education technology. Every wave of new tools has come with the same implicit promise: this will make your job easier. And every wave has, at least initially, made the job harder, because the tools arrive without the constraints that would make them usable. A new learning management system doesn’t just give you a new capability; it gives you a hundred new decisions to make about how to configure it, what to put in it, how to integrate it with your existing workflow. The capability is real, but the cognitive overhead of figuring out how to use it effectively can dwarf the time it was supposed to save.

AI is this dynamic on a different scale entirely. Previous technology waves added a finite set of new capabilities—a new tool could do three or four things, and you could evaluate them and make a decision. AI tools are open-ended. A large language model doesn’t do three things; it does whatever you can think to ask it. This means the possibility space isn’t just large, it’s effectively unbounded. And an unbounded possibility space, without constraints to navigate it, produces exactly the paralysis that Francesco describes in developers and that I experience on my free Saturdays.

The instinct of many AI literacy efforts, including my own, has been to address this by expanding awareness—showing people more of what’s possible, more use cases, more examples of how others are using these tools. And that instinct isn’t wrong exactly, but I think it’s incomplete. Showing someone twenty things they could do with AI when they haven’t figured out how to do one of them well doesn’t reduce possibility debt. It increases it. It’s the equivalent of responding to someone’s empty Saturday by handing them a list of fifty activities.

What I think we actually need—and I’m still working through this—are constraints. Not limitations on the technology, but structures that help people navigate the possibility space without being overwhelmed by it. Checklists, in the Gawande sense—not comprehensive guides to everything AI can do, but deliberately short lists of the killer items, the things most dangerous to skip. Not “here are all the things AI can do” but “here is the one thing to try this week.” Not “AI can transform your teaching” but “take this one assignment you already give and run it through this specific process.” The value of the constraint is as much in what it leaves out as in what it includes.

This is what I keep coming back to. In my previous post about comprehension debt, I was thinking about the gap between what we use and what we understand. Possibility debt is the companion problem—the gap between what we could use and what we do use, and the weight of that gap on the people we’re trying to help. Both debts accumulate silently. Both get dangerous when you don’t acknowledge them. And both, I think, point to the same underlying challenge: the tools have gotten ahead of our ability to integrate them into our lives and work in a way that’s sustainable.

The developers in Francesco’s essay are figuring this out by imposing their own constraints—timeboxing their planning, reading code line by line, asking “what did I actually finish today?” Teachers need the equivalent. Not more possibilities. Better constraints.

I don’t have the checklist yet. But I’m increasingly convinced that building one is more important than adding more items to the list of things AI can do.