AI Mode
How AI mode works: querying the agent, tool calling, and session context.
How It Works
AI mode is the default mode in Codiv. When the gutter shows > in cyan, everything you type is sent to the daemon’s AI agent over IPC.
The agent then:
- Receives your query with the current session context
- Reasons about what to do
- Calls tools as needed (bash, read, write, edit, grep, glob)
- Streams the response back to your terminal in real time
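The tool set above can be sketched as a simple dispatcher. This is an illustrative sketch, not Codiv's actual API: the `Sandbox` type and its in-memory file map stand in for the real tool handlers, and only three of the six tools are stubbed.

```rust
use std::collections::HashMap;

// Hypothetical in-memory sandbox standing in for the real tool handlers.
struct Sandbox {
    files: HashMap<String, String>,
}

impl Sandbox {
    // Dispatch a tool call by name; a real agent would also route
    // bash, edit, and glob through handlers like these.
    fn run_tool(&mut self, name: &str, args: &[&str]) -> Result<String, String> {
        match (name, args) {
            ("read", [path]) => self
                .files
                .get(*path)
                .cloned()
                .ok_or_else(|| format!("no such file: {path}")),
            ("write", [path, content]) => {
                self.files.insert(path.to_string(), content.to_string());
                Ok(format!("wrote {path}"))
            }
            ("grep", [pattern, path]) => {
                let text = self.files.get(*path).ok_or("no such file")?;
                Ok(text
                    .lines()
                    .filter(|l| l.contains(*pattern))
                    .collect::<Vec<_>>()
                    .join("\n"))
            }
            (other, _) => Err(format!("unknown tool or bad args: {other}")),
        }
    }
}

fn main() {
    let mut sb = Sandbox { files: HashMap::new() };
    sb.run_tool("write", &["main.rs", "fn main() {}\nfn helper() {}"]).unwrap();
    println!("{}", sb.run_tool("grep", &["helper", "main.rs"]).unwrap()); // prints "fn helper() {}"
}
```

Keeping every tool behind one `run_tool` entry point is what lets the agent loop treat tool calls uniformly, whatever the LLM asks for.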
AI mode is active on startup. Press Tab on empty input to switch to Command mode, and Tab again to return.
Streaming Responses
AI responses stream token-by-token to the TUI as they are generated. You see the agent’s thinking and output appear progressively, rendered as markdown with syntax highlighting.
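Token-by-token streaming can be sketched with a channel between an agent thread and the renderer. `spawn_agent` and the hard-coded token source are illustrative assumptions, not Codiv's internals; the point is that each token is rendered on arrival rather than after the full reply.

```rust
use std::sync::mpsc;
use std::thread;

// Sketch: an agent thread forwards tokens as they are "generated";
// dropping the sender closes the stream.
fn spawn_agent(tokens: Vec<String>) -> mpsc::Receiver<String> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        for t in tokens {
            if tx.send(t).is_err() {
                break; // receiver hung up
            }
        }
    });
    rx
}

fn main() {
    let tokens: Vec<String> = ["Fixing ", "the ", "failing ", "test."]
        .iter()
        .map(|s| s.to_string())
        .collect();
    let mut rendered = String::new();
    for token in spawn_agent(tokens) {
        rendered.push_str(&token); // progressive render, token by token
        print!("{token}");
    }
    println!();
    assert_eq!(rendered, "Fixing the failing test.");
}
```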
Tool Calling Loop
The agent operates in a loop:
Query ──▶ LLM Reasoning ──▶ Tool Call ──▶ Tool Result ──▶ LLM Reasoning ──▶ ...
                                                                │
                                                                ▼
                                                         Final Response
On each iteration, the LLM decides whether to call another tool or produce a final response. This enables multi-step tasks: the agent might grep for a pattern, read the matching file, edit it, and then run tests — all from a single query.
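The loop's control flow can be sketched as follows. `Step` and the stub `next_step` are hypothetical: a real agent would send the full history back to the model on every iteration instead of using a canned decision.

```rust
// Hypothetical step type: on each iteration the LLM either requests a
// tool call or produces the final response, which ends the loop.
enum Step {
    ToolCall(String),
    Final(String),
}

// Stub "LLM": ask for one tool call, then finish.
fn next_step(history: &[String]) -> Step {
    if history.is_empty() {
        Step::ToolCall("cargo test".to_string())
    } else {
        Step::Final(format!("Done after {} tool call(s).", history.len()))
    }
}

fn run_agent() -> String {
    let mut history = Vec::new();
    loop {
        match next_step(&history) {
            Step::ToolCall(cmd) => {
                // Execute the tool and feed its result into the next
                // reasoning step (stubbed here as a log entry).
                history.push(format!("ran: {cmd}"));
            }
            Step::Final(answer) => return answer,
        }
    }
}

fn main() {
    println!("{}", run_agent()); // prints "Done after 1 tool call(s)."
}
```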
Session Timeline
Codiv maintains a unified timeline that includes:
- Shell commands you have run and their output
- AI queries and responses
- Tool calls made by the agent
This context flows into the LLM prompt, so the agent is aware of what you have been doing. If you just ran cargo test and some tests failed, the agent knows about it when you ask “fix the failing test.”
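One way to picture the timeline flowing into the prompt, with an `Event` enum mirroring the three kinds above. The enum and `to_prompt` formatting are illustrative assumptions, not Codiv's actual prompt format.

```rust
// Hypothetical unified timeline event.
enum Event {
    Shell { cmd: String, output: String },
    Query(String),
    ToolCall(String),
}

// Flatten the timeline into prompt text so the LLM sees recent activity.
fn to_prompt(timeline: &[Event]) -> String {
    timeline
        .iter()
        .map(|e| match e {
            Event::Shell { cmd, output } => format!("$ {cmd}\n{output}"),
            Event::Query(q) => format!("user: {q}"),
            Event::ToolCall(t) => format!("tool: {t}"),
        })
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    let timeline = vec![
        Event::Shell {
            cmd: "cargo test".into(),
            output: "test result: FAILED. 1 failed".into(),
        },
        Event::Query("fix the failing test".into()),
    ];
    // The failed test run is part of the prompt, so the agent knows
    // exactly which failure "the failing test" refers to.
    println!("{}", to_prompt(&timeline));
}
```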
Shared Shell State
When the AI agent runs bash commands, it operates in the same shell environment as you. Environment changes made by the AI, such as cd, export, and source, are immediately reflected in your terminal, and changes you make are visible to the AI on its next command.
This works because the agent’s bash commands are routed through your terminal’s co-process using a lease-based protocol rather than running in a separate shell. The agent acquires a shell lease, executes commands, and releases the lease when done. The result is seamless integration — if the AI runs cd src/ and then you type ls, you see the contents of src/.
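The lease idea can be sketched with a mutex guard: whoever holds the guard drives the single shared shell, and dropping the guard releases the lease. This is only an analogy under stated assumptions; the real protocol runs over IPC to the terminal's co-process, and `Shell` here is a made-up stand-in for that shared state.

```rust
use std::sync::{Arc, Mutex};

// Illustrative shared shell state; `cwd` stands in for the whole
// environment (working directory, exports, sourced files).
struct Shell {
    cwd: String,
}

fn agent_turn(shell: &Arc<Mutex<Shell>>) {
    let mut lease = shell.lock().unwrap(); // acquire the shell lease
    lease.cwd = "src/".to_string();        // agent runs `cd src/`
}                                          // guard drops: lease released

fn main() {
    let shell = Arc::new(Mutex::new(Shell { cwd: "./".to_string() }));
    agent_turn(&shell);
    // Your next command runs in the directory the agent changed to.
    println!("cwd is now {}", shell.lock().unwrap().cwd); // prints "cwd is now src/"
}
```

Because there is one shell rather than two, neither side can drift out of sync with the other; the lease only serializes who is typing into it.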
Conversation Compaction
As a session grows, the conversation history can exceed the LLM’s context window. Codiv handles this with conversation compaction:
- Auto-compaction — when the input token count exceeds a threshold, the daemon automatically compacts the conversation by summarizing older turns into a Summary event and discarding the originals. This happens between agent turns, so it never interrupts a response.
- Manual compaction — type /compact in AI mode to trigger compaction on demand. This is useful if you want to free up context space before starting a new line of work in the same session.
After compaction, the agent retains a concise summary of everything that happened earlier, plus the most recent turns in full detail.
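The shape of compaction can be sketched as: keep the most recent turns verbatim and replace everything older with a single summary entry. The turn-count threshold and bracketed placeholder summary are illustrative assumptions; Codiv triggers on token counts and produces a real LLM summary.

```rust
// Replace all but the `keep_recent` newest turns with one summary entry.
fn compact(turns: &mut Vec<String>, keep_recent: usize) {
    if turns.len() <= keep_recent {
        return; // still under the threshold, nothing to do
    }
    let cut = turns.len() - keep_recent;
    let recent = turns.split_off(cut); // most recent turns, kept in full
    let summary = format!("[summary of {} earlier turn(s)]", turns.len());
    *turns = Vec::with_capacity(recent.len() + 1);
    turns.push(summary);
    turns.extend(recent);
}

fn main() {
    let mut turns: Vec<String> = (1..=5).map(|i| format!("turn {i}")).collect();
    compact(&mut turns, 2);
    println!("{turns:?}"); // ["[summary of 3 earlier turn(s)]", "turn 4", "turn 5"]
}
```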
Safety Confirmations
When the agent wants to run a potentially dangerous command (like rm, git push --force, or writing to system files), Codiv pauses and asks for confirmation:
Agent wants to run: rm -rf target/
Risk level: MEDIUM
[y] Allow [n] Deny [a] Always allow this command
You can allow the command once, deny it, or add it to your permanent allowlist. See the Safety & Audit design doc for details on risk classification.
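A minimal sketch of the allowlist check: classify the command, then prompt only when it is risky and not already allowlisted. The pattern rules in `classify` are invented for illustration; Codiv's actual risk classification is defined in the Safety & Audit design doc.

```rust
// Hypothetical risk levels matching the confirmation prompt above.
#[derive(Debug, PartialEq)]
enum Risk {
    Low,
    Medium,
    High,
}

// Illustrative classifier, not Codiv's real rules.
fn classify(cmd: &str) -> Risk {
    if cmd.contains("--force") || cmd.contains("/etc/") {
        Risk::High
    } else if cmd.starts_with("rm ") {
        Risk::Medium
    } else {
        Risk::Low
    }
}

// Low-risk and allowlisted commands run without a prompt.
fn needs_confirmation(cmd: &str, allowlist: &[&str]) -> bool {
    classify(cmd) != Risk::Low && !allowlist.contains(&cmd)
}

fn main() {
    let allowlist = ["rm -rf target/"]; // user pressed [a] for this one earlier
    println!("{:?}", classify("rm -rf target/"));                     // Medium
    println!("{}", needs_confirmation("ls", &allowlist));             // false
    println!("{}", needs_confirmation("rm -rf /tmp/x", &allowlist));  // true
    println!("{}", needs_confirmation("rm -rf target/", &allowlist)); // false
}
```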