Are We Managers Now?

Paul Graham writes in “Maker’s Schedule, Manager’s Schedule”:

There are two types of schedule, which I’ll call the manager’s schedule and the maker’s schedule. The manager’s schedule is for bosses. It’s embodied in the traditional appointment book, with each day cut into one hour intervals. You can block off several hours for a single task if you need to, but by default you change what you’re doing every hour. […]

For someone on the maker’s schedule, having a meeting is like throwing an exception. It doesn’t merely cause you to switch from one task to another; it changes the mode in which you work.

He makes the case that the maker’s schedule and the manager’s schedule are fundamentally incompatible, and I’ve always worked on the maker’s schedule. But I’ve noticed that heavy use of Claude Code has shifted my role. Instead of doing the work myself, I mostly specify tasks and evaluate outputs. That is managerial work, and I’m not very good at it yet.

Learning to Manage

AI agents execute tasks quickly, which creates effectively infinite demand for review. The pull to check outputs immediately is strong, but humans are not built to process a constant stream of results. I think makers will benefit from adopting some structure that makes (human) management sustainable. I’m experimenting with a few ways of working with AI agents.

  • Viewing my job as defining the goal and plan, and treating my evaluation as a scarce resource to allocate.
  • The “meeting/briefing” model: one dense sync session followed by long autonomous AI execution. This might look like spending an hour writing a detailed plan, pre-empting any blockers, and dispatching the agent, then reviewing only after everything finishes.
  • Batching. Grouping agent tasks by the type of thinking they require from me.
  • Boundaries. Just because an agent finishes does not mean I have to look immediately.

I think this will be made even better by improvements in test-time optimization loops, which allow LLMs to autonomously perform meaningful work over long periods of time. In these systems, the human defines an objective and the model iterates against it, replacing many small supervision decisions with a small number of objective definitions. This role seems far more sustainable than managing a stream of agent outputs in real time.
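The shape of such a loop can be sketched in a few lines. This is a toy illustration, not any real system: `objective` and `propose` are hypothetical placeholders standing in for a human-defined scoring function and a model call, respectively.

```python
# Sketch of a test-time optimization loop: the human defines an objective
# once, and the "model" iterates against it without per-step supervision.
# `propose` is a hypothetical stand-in for a model revision call.

def optimize(objective, propose, initial, max_iters=50):
    """Iterate candidates until the objective passes or the budget runs out."""
    candidate = initial
    for step in range(max_iters):
        score, feedback = objective(candidate)
        if score >= 1.0:  # the single human decision: what counts as done
            return candidate, step
        candidate = propose(candidate, feedback)  # model revises using feedback
    return candidate, max_iters

# Toy usage: the "model" increments a number until it reaches a target.
objective = lambda x: ((1.0, "") if x >= 5 else (0.0, "too small"))
propose = lambda x, fb: x + 1
result, steps = optimize(objective, propose, initial=0)
```

The point of the structure is that the human’s involvement is concentrated in writing `objective` up front, rather than spread across many small approve/reject decisions during execution.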

Two Modes of AI Work

That said, not all AI work feels managerial. AI seems to create two different modes of work. I play a manager role when delegating tasks to agents, but when interacting with a model in real time (iterating, testing ideas, building something together) the maker loop is still there; if anything, it has gotten better. That mode, where each response pushes the work forward in real time, has produced some of the strongest creative flow states I’ve experienced.