
What Changed with Prompts on Opus 4.7

The latest Claude Code update brought behavioral changes that show up in day-to-day use. The part that matters is how you should adjust your prompts in response.

I have been running Claude Code daily for a while now. The update forced me to tweak a few things. Here is what I learned.

It thinks more, tools less

Opus 4.7 reasons more internally before acting. It calls tools less often by default. The result is better output on most tasks, but it also means the model will not aggressively search your codebase or read files unless you tell it to.

This caught me off guard. In earlier versions, Claude Code would read surrounding files out of habit. Now it relies more on what you give it upfront. If you want the model to check existing code before writing new code, say so explicitly.

What I changed: I now include "read the relevant source files before writing" as a directive in my execution prompts. The model follows it. The output is more grounded.
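As a sketch, here is how that directive can be folded into a reusable prompt template. The helper name and wording are mine, not part of any official API; the only phrase taken from my actual prompts is the read-before-writing directive.

```python
def build_execution_prompt(task: str, files: list[str]) -> str:
    """Assemble an execution prompt that tells the model to ground
    itself in the existing code before writing anything new.

    Hypothetical helper for illustration; not an Anthropic API.
    """
    file_list = "\n".join(f"- {path}" for path in files)
    return (
        f"Task: {task}\n\n"
        "Before writing any code, read the relevant source files:\n"
        f"{file_list}\n\n"
        "Base your implementation on what those files actually contain."
    )

prompt = build_execution_prompt(
    "Add retry logic to the HTTP client",
    ["src/http_client.py", "src/config.py"],
)
```

The point is not the template itself but that the directive is always present, so grounding does not depend on remembering to type it.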

One good prompt beats three okay ones

Anthropic is clear about this: specify the task up front, in the first turn. Batch your questions. Reduce the number of back-and-forth interactions.

This validated something I was already doing. My best sessions with Claude Code start with a single, detailed prompt that includes the full scope, constraints, acceptance criteria, and file locations. No progressive reveals. No "let me add one more thing."
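A minimal sketch of what "full scope in the first turn" looks like when assembled programmatically. The section names mirror the list above; the function and the example task are hypothetical:

```python
def first_turn_prompt(scope, constraints, acceptance, files):
    """Pack the entire task specification into one first-turn prompt:
    scope, constraints, acceptance criteria, and file locations.

    Illustrative helper only; the structure is the point.
    """
    sections = {
        "Scope": scope,
        "Constraints": constraints,
        "Acceptance criteria": acceptance,
        "Relevant files": files,
    }
    parts = []
    for title, items in sections.items():
        bullets = "\n".join(f"- {item}" for item in items)
        parts.append(f"## {title}\n{bullets}")
    return "\n\n".join(parts)

prompt = first_turn_prompt(
    scope=["Add pagination to the /orders endpoint"],
    constraints=["No new dependencies", "Keep the response schema stable"],
    acceptance=["Existing tests pass", "Page size defaults to 50"],
    files=["api/orders.py", "tests/test_orders.py"],
)
```

If a section is hard to fill in, that is usually a sign the task is underspecified, and the model will surface that ambiguity as mediocre output.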

This update handles that style well. Context carries across the session more reliably than before. The first turn sets the quality ceiling.

Adaptive thinking, not fixed budgets

Opus 4.7 dropped fixed thinking budgets. Instead, it decides how much to think based on the complexity of the step in front of it. Simple lookups get fast responses. Hard problems get deeper reasoning.

In practice, this means you do not need to manage thinking tokens yourself. The model allocates them where they matter.

There is one exception. For genuinely tricky problems, like schema design or ambiguous requirements, you can nudge it: "Think carefully and step-by-step; this problem is harder than it looks." That works. For routine implementation steps, the default is fine.
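The gating logic is simple enough to sketch. The nudge phrasing is quoted from above; the helper itself is hypothetical, just a way to keep the nudge out of routine prompts:

```python
NUDGE = (
    "Think carefully and step-by-step; "
    "this problem is harder than it looks."
)

def with_thinking_nudge(prompt: str, hard: bool) -> str:
    """Prepend the extra-reasoning nudge only for genuinely tricky
    problems; routine steps keep the default adaptive thinking."""
    return f"{NUDGE}\n\n{prompt}" if hard else prompt

schema_prompt = with_thinking_nudge(
    "Design the schema for the audit log tables.", hard=True
)
```

Applying the nudge everywhere dilutes it; reserving it for schema design and ambiguous requirements keeps it meaningful.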

Effort level is now xhigh by default

The default effort level is now xhigh. This sits between high and max. It gives strong reasoning without the runaway token usage that max can produce on long runs.

For most coding work, this is the right setting. If you are running concurrent sessions or watching costs, high is a reasonable tradeoff. Max is for genuinely hard problems where you are not price-sensitive.

Fewer subagents by default

Opus 4.7 is more conservative about delegating to subagents. It prefers to do work directly when it can.

If you use agent teams for parallel verification or role-based review, like running a devil's advocate agent alongside implementation, you need to spell that out. Tell it when and why to delegate. Otherwise it will try to handle everything in a single session.
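One way to "spell that out" is a standing delegation brief at the top of the session. Everything here is an illustrative sketch, including the role names; the devil's advocate role echoes the example above:

```python
def delegation_brief(roles: dict[str, str]) -> str:
    """Spell out when and why to fan out to subagents, since the
    model no longer delegates aggressively on its own.

    Hypothetical helper; the output is plain prompt text.
    """
    lines = ["Delegate to subagents as follows:"]
    for role, trigger in roles.items():
        lines.append(f"- {role}: {trigger}")
    lines.append("Handle everything else directly in this session.")
    return "\n".join(lines)

brief = delegation_brief({
    "devil's advocate": "after each implementation step, critique the diff",
    "test runner": "run the suite while implementation continues",
})
```

Each role gets an explicit trigger, so the model knows both when to fan out and what the subagent is for.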

For most single-task execution, the default behavior is fine. But the more complex your agent orchestration gets, the more explicit you need to be about when to fan out.

What I actually changed

Most of my prompting style did not need to change. The core pattern, detailed first-turn prompts with full context, was already aligned with what Opus 4.7 prefers.

The one addition I made: explicit tool-use guidance. My prompts now include directives like "read the existing interfaces before implementing" and "run the smoke tests after each step." Opus 4.7 follows these instructions reliably. The output is better for it.

The other change was letting go of the urge to micro-manage. Opus 4.7 calibrates response length to task complexity. Short answers for simple things, longer analysis for hard things. Trusting that calibration, instead of asking for verbose explanations, produces cleaner work.

The takeaway

The changes are subtle but they shift how you should interact with Claude. Think more about your first prompt. Be explicit about tool use. Let the model manage its own thinking. And trust that a well-specified task in a single turn will outperform a series of incremental asks.

The prompting got simpler. That is the right direction.