It is now Thursday, January 29, 2026, at noon.
I adjusted my schedule to wake up in the morning.
Yesterday, I discussed the RFC¹ feature of LegionMind with C1. He mentioned:
- After modularization, the overall pace of work has slowed down, making early-stage documentation alignment even more important.
- It is a bit like moving toward a waterfall development model as the cost and effort of implementation increase.
- Is there an agile model suited to AI's working speed? That would have to be AI-autonomous agile.
- Providing AI with facts, and enabling it to correctly understand those facts and the intentions behind them, becomes more important; this makes effective intermediate reviews and agile iterations possible.
My comment on this is:
AI autonomy is correct because human intervention is too costly. But how can AI achieve autonomy? The core lies in the alignment of scientific perspectives.
The Core of AI Autonomy is the Alignment of Scientific Perspectives
The core of AI autonomy lies in the alignment of scientific perspectives. In other words, AI needs to understand and follow human scientific concepts and methodologies.
Human intentions are sometimes vague, changeable, and impossible to measure in advance. It is often futile to expect humans to align every detail of their intentions with AI before experimental results are available. In a human's initial expression, only the values are relatively stable; the other details of intent usually need continuous adjustment and optimization during the experimental process.
However, AI can still make efforts to align with basic human scientific perspectives.
For example, Occam's Razor: Entities should not be multiplied beyond necessity. This is a very common heuristic in the scientific community. AI can adopt this principle to optimize RFCs.
When AI generates RFCs, it often adds many seemingly ideal but impractical features. This increases the complexity and cost of implementation. Readers who have used the PLAN mode will likely relate to this. For instance, adding complex error-handling mechanisms to simple functions, designing iteration plans spanning six months, or introducing unnecessary technology stacks.
Therefore, AI needs to learn how to simplify its goals and avoid over-engineering and unnecessary complexity. By adopting Occam's Razor, AI can manage its goals more effectively, thereby achieving more efficient autonomy.
AI should use an adversarial architecture for RFC generation: a reviewing AI challenges every design point in the RFC, asking the generating AI to explain why that point is necessary. If the necessity of a design point cannot be reasonably defended, the point is removed. The generating AI must support its design choices with factual constraints.
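The control loop of this adversarial review can be sketched in a few lines. This is a minimal illustration only: in a real system both roles would be played by LLMs, and the names (`DesignPoint`, `review_rfc`) and the sample design points are hypothetical, not part of any actual LegionMind API.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DesignPoint:
    name: str
    justification: Optional[str] = None  # a verifiable supporting fact, or None

def review_rfc(points: List[DesignPoint]) -> List[DesignPoint]:
    """Adversarial review pass: a design point survives only if the
    generating side defended it with a factual justification (Occam's Razor)."""
    return [p for p in points if p.justification]

# Hypothetical RFC contents for illustration.
rfc = [
    DesignPoint("retry-with-backoff", "rate-limit errors observed in gateway logs"),
    DesignPoint("six-month iteration roadmap"),   # no supporting fact: removed
    DesignPoint("pluggable storage backend"),     # no supporting fact: removed
]
print([p.name for p in review_rfc(rfc)])  # ['retry-with-backoff']
```

The design choice mirrors the text: the burden of proof sits with the generator, and the default outcome for an undefended feature is deletion, not discussion.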
Factual constraints come from facts provided by the Supervisor, facts obtained through environmental exploration, or facts retrieved from external knowledge bases. AI needs to learn how to use these facts to support its design choices. These facts must be verifiable by third parties.
Facts are defined by designing a verifiable experiment 🧪, which has always been the forte of the scientific community. AI can apply the same practice in engineering by writing experimental code: the experiment plan is the code itself, and the result of running it proves the truth of the fact. Anyone, including humans and RFC reviewers, can run the experiment to verify the fact.
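As a concrete sketch of "the experiment plan is the code itself": the script below encodes a hypothetical factual claim (membership tests on a Python `set` are faster than on a large `list`) as an experiment that any third party can rerun. The claim is chosen only for illustration; it stands in for whatever fact an RFC design point relies on.

```python
import timeit

def experiment() -> bool:
    """A verifiable experiment: running it proves or refutes the claim
    'set membership is faster than list membership for large collections'."""
    data = list(range(100_000))
    as_set = set(data)
    # Time the worst case for the list: the element is at the very end.
    t_list = timeit.timeit(lambda: 99_999 in data, number=100)
    t_set = timeit.timeit(lambda: 99_999 in as_set, number=100)
    return t_set < t_list  # True means the experiment confirms the fact

print(experiment())
```

Because the experiment is self-contained code, the RFC reviewer does not have to trust the generator's assertion; it can execute the script and accept or reject the fact based on the observed result.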
Footnotes
1. Yes. It refers to the IETF's RFC (Request for Comments) standard. In the Module-Level Human-AI Collaborative Software Engineering Architecture, the protocol specs are what we name RFCs. Their function is very similar to that of IETF RFCs: we hope AI can describe module functions and interfaces in RFC style. ↩