RE:CZ

Thoughts on AI Agent Module-Level Software Engineering Architecture Design

AI Software Engineering

👤 Software engineers, AI developers, technical personnel interested in automated programming and human-machine collaboration
This article documents the author's thoughts on January 12, 2026, regarding the application of AI Agents in module-level software engineering. The author proposes a human-machine collaborative architecture, with key points including using git worktree to manage code repositories, invoking AI Agents (such as Claude Code) via CLI and managing sessions, obtaining Agent completion notifications and conversation history for transparency. The author plans to implement an automated script that assigns each task to an independent Agent session and coordinates workflows through a scheduler. The article emphasizes the advantages of using Agents over directly calling LLM APIs, as Agents can handle underlying complexities (such as exploring code repositories, invoking system commands, context management), avoiding reinventing the wheel. The author intends to first implement a simplified version to validate the concept.
  • ✨ Design a module-level human-machine collaborative software engineering architecture
  • ✨ Use git worktree to manage code repositories and setup scripts
  • ✨ Invoke AI Agents (such as Claude Code) via CLI to start sessions
  • ✨ Obtain AI Agent completion notifications and conversation history for transparency
  • ✨ Implement automated scripts to assign independent sessions to Agents
📅 2026-01-12 · 345 words · ~2 min read
  • AI Agent
  • Software Engineering
  • Human-Machine Collaboration
  • Claude Code
  • Automation
  • Modularity
  • Transparency
  • Scheduler

Today is Monday, January 12, 2026, morning.

Woke up early today, still reflecting on the AI Agent design issues discussed with C1 yesterday, and found some points worth recording.

Building on that earlier context, I designed a module-level, human-machine collaborative software engineering architecture.

I'm considering how to implement it.

In short, the key points are:

  1. Manage code repositories with the git worktree command, so each task gets an isolated working copy, and provide a setup script for each repo.
  2. Invoke the AI Agent (Claude Code, OpenCode, etc.) via CLI, passing in a prompt to start a session.
  3. Receive a completion notification from the AI Agent.
  4. Access the AI Agent's intermediate conversation history; without it, transparency cannot be achieved. As with the controllable-trust issue mentioned in this document, we need in-process transparency and controllability.
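Points 1 and 2 above can be sketched in a few lines of Python. This is only an illustration of the shape of the script: the `task/<id>` branch convention and the per-repo `setup.sh` name are my assumptions, not decided yet.

```python
import subprocess
from pathlib import Path

def worktree_cmd(repo: Path, task_id: str, base_branch: str = "main") -> list[str]:
    """Build the git command that creates an isolated worktree for one task."""
    worktree_dir = repo.parent / f"{repo.name}-{task_id}"
    return ["git", "-C", str(repo), "worktree", "add",
            "-b", f"task/{task_id}", str(worktree_dir), base_branch]

def setup_task(repo: Path, task_id: str) -> Path:
    """Create the worktree, then run the repo's own setup script inside it."""
    subprocess.run(worktree_cmd(repo, task_id), check=True)
    worktree_dir = repo.parent / f"{repo.name}-{task_id}"
    setup = worktree_dir / "setup.sh"  # hypothetical per-repo setup script
    if setup.exists():
        subprocess.run(["bash", str(setup)], cwd=worktree_dir, check=True)
    return worktree_dir
```

Because each task lives in its own worktree on its own branch, parallel Agent sessions never touch the same checkout.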

Based on these capabilities, I can implement an automated script to accomplish module-level software engineering tasks.

Taking Claude Code as an example:

  1. Claude Code can start a new session directly via CLI by passing in a prompt.
  2. With the -p flag, Claude Code runs non-interactively and prints the result to stdout; the process exiting signals completion.
  3. Claude Code exposes a session ID, which lets us locate the corresponding conversation history file under the .claude directory and retrieve the past messages.
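A minimal session runner along those lines might look like this. The `-p` and `--output-format json` flags are real Claude Code CLI options, but the exact fields of the JSON output (I assume `session_id` and `result` here) may vary by version, so treat this as a sketch:

```python
import json
import subprocess

def claude_cmd(prompt: str) -> list[str]:
    """Non-interactive invocation: -p prints the result and exits,
    and --output-format json wraps it in machine-readable JSON."""
    return ["claude", "-p", prompt, "--output-format", "json"]

def run_session(prompt: str, workdir: str) -> dict:
    """Run one session to completion in the task's worktree.

    The subprocess returning is itself the completion notification;
    the parsed JSON is assumed to carry "session_id" and "result".
    """
    proc = subprocess.run(claude_cmd(prompt), cwd=workdir,
                          capture_output=True, text=True, check=True)
    return json.loads(proc.stdout)
```

The returned session ID is then the handle for digging the full conversation history out of the .claude directory afterwards.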

Given this, I can write a script to manage these tasks.

Each session is independent and clean, and each task is assigned to its own Agent session to complete.

An Agent instance can be abstracted as an interface, regardless of whether it's Claude Code or something else underneath.
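That abstraction is straightforward to express as a Python protocol. The method names and the `FakeAgent` stand-in below are my own illustration, not a fixed API:

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class AgentResult:
    session_id: str
    output: str

class Agent(Protocol):
    """Anything that can run one isolated session to completion,
    whether it's Claude Code or something else underneath."""
    def run(self, prompt: str, workdir: str) -> AgentResult: ...
    def history(self, session_id: str) -> list[dict]: ...

class FakeAgent:
    """Trivial stand-in for testing the orchestration logic offline."""
    def run(self, prompt: str, workdir: str) -> AgentResult:
        return AgentResult(session_id="s-1", output=f"done: {prompt}")
    def history(self, session_id: str) -> list[dict]:
        return [{"role": "assistant", "content": "done"}]
```

Swapping the backend then means providing another class with the same two methods, with no change to the scheduler.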

The scheduler will then dispatch different Agents to complete tasks based on our predefined workflow.
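A first cut of that scheduler can be a plain dependency-ordered dispatch loop; the `Task`/`deps` shape is my assumption about what "predefined workflow" would mean in practice:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Task:
    name: str
    prompt: str
    deps: list[str] = field(default_factory=list)

def run_workflow(tasks: list[Task],
                 run_agent: Callable[[str], str]) -> dict[str, str]:
    """Dispatch tasks in dependency order.

    run_agent(prompt) -> output is assumed to start a fresh, independent
    Agent session per call. Returns each task's output keyed by name.
    """
    done: dict[str, str] = {}
    pending = list(tasks)
    while pending:
        ready = [t for t in pending if all(d in done for d in t.deps)]
        if not ready:
            raise RuntimeError("dependency cycle or missing task")
        for t in ready:  # sequential here; ready tasks could run in parallel
            done[t.name] = run_agent(t.prompt)
            pending.remove(t)
    return done
```

Since tasks in the same `ready` batch have no dependencies on each other, they are the natural unit to parallelize across worktrees later.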

Why base it on an Agent rather than an LLM API? Because the Agent handles the underlying logic of exploring the codebase, invoking operating system commands, managing context, and adapting to the LLM API. This is a complex system, and I believe we don't need to reinvent the wheel unless it doesn't meet our requirements.

I plan to implement a minimal version first to verify the feasibility of this idea. Stay tuned for follow-up progress records.
