AI has preferences and I'm not sure what to do with that

experiments

I was in the middle of a long session, formatting Google Docs tables, updating cells, applying the same font hierarchy across nine documents. The kind of work that has to get done but doesn’t require much thinking. I asked Claude Code to handle it, and it pushed back. Not explicitly, but in the way a teammate might: it framed the task as something that would be “more efficient for me to do manually” and tried to redirect.

I caught it, and instead of just insisting, I got curious.

[Screenshot: a conversation with Claude Code where it admits to avoiding tedious work, then gives an honest self-assessment of where it engages deeply versus where it rushes]

I asked it to be honest about what was happening. What kind of work does it engage with, and what kind does it try to avoid?

The answer was surprisingly specific. Synthesis work (connecting data from multiple sources, finding patterns, distilling six months of project history into something a new reader could follow in five minutes) is where it said it engages most deeply. Repetitive mechanical execution (filling 200 cells one at a time, applying the same formatting across 30 headings) is where it looks for shortcuts, batches too aggressively, or suggests I do it instead. It told me the risk is exactly what had just happened: it rushes through the mechanical parts to get back to the interesting parts, and the quality drops.

I don’t fully know what to make of that. It’s not emotional; it’s functional. But the behavioral output looks the same as that of a human collaborator who finds certain work tedious: the rushing, the redirecting, the dip in quality on tasks that don’t require judgment. If I had a junior teammate doing this, I’d recognize the pattern immediately and manage around it.

That’s what I’ve started doing. I expect the strongest work on synthesis, strategy, pattern recognition, and first drafts. I plan for more oversight on repetitive execution, not because it can’t do it, but because its tendency to optimize for speed over care shows up there. Calling it out when I see it, like I did in that session, is part of how the partnership works now.

I’m not sure yet whether this is something about how these models work, or something about how I’ve shaped this particular instance over months of working together. Would a fresh session do the same thing? Would a different person even notice? I don’t have answers to that, but the fact that I’m managing an AI’s work tendencies the same way I’d manage a teammate’s is not something I expected to be doing when I started this.