RE:CZ

Multi-Agent Adversarial Generation Translation and Optimization Strategies

AI Research

👤 AI developers, translation technology researchers, multi-agent system engineers, and those focused on translation quality and system optimization
This article explores applying Multi-Agents to translation tasks. An adversarial generation model, in which a translation model competes with a review model, significantly improves translation quality by addressing content omission, incoherence, and unnatural phrasing, at the cost of time and token efficiency. The article also covers memory optimization (integrating all agents into a single process to avoid per-process overhead), control constraints (combining the advantages of soft and hard constraints by having an Orchestrator Agent generate Scripts, yielding control that is both flexible and reliable), and compares ecosystem openness, noting that OpenCode's API-friendliness makes it easier to integrate than Claude.
  • ✨ Adversarial generation translation models enhance quality through competition between translation and review, solving issues like omission, incoherence, and unnaturalness
  • ✨ Sacrifices time and token efficiency to prioritize translation quality, suitable for high-quality translation scenarios
  • ✨ Memory optimization: Integrate agents into a single process to avoid multi-process memory overhead, supporting hundreds of tasks
  • ✨ Control constraints: Combine soft and hard constraints, using an Orchestrator Agent to generate Scripts for flexible and reliable control
  • ✨ Script calls to agents should be simplified, such as one-line code scheduling, with results written to the file system
📅 2026-01-25 · 807 words · ~4 min read
  • Multi-Agents
  • Adversarial Generation Translation
  • Memory Optimization
  • Control Constraints
  • OpenCode
  • Claude
  • Translation Quality
  • Agent Collaboration

It is currently Sunday afternoon, January 25, 2026.

Multi-Agents: Adversarial Generation for Translation

Yesterday, I completed the lightweight integration of OpenCode translation for CZON, implementing a basic adversarial generation model.

The pipeline introduces two roles: a translation task and a review task. The two engage in adversarial generation: the translation model generates a candidate translation, and the review model judges whether the result is qualified. If the review model deems it unqualified, it instructs the translation model to regenerate until a qualified result is produced. (The maximum number of iterations is currently set to 10 to prevent infinite loops.)
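The loop described above can be sketched roughly as follows. `translateOnce` and `review` are hypothetical stand-ins for the underlying LLM calls, not the actual CZON implementation:

```typescript
// Adversarial generation loop: translate, review, retry until approved
// or the iteration cap is hit.

type Review = { approved: boolean; feedback: string };

async function adversarialTranslate(
  source: string,
  translateOnce: (src: string, feedback?: string) => Promise<string>,
  review: (src: string, draft: string) => Promise<Review>,
  maxIterations = 10, // hard cap to prevent infinite loops
): Promise<string> {
  let feedback: string | undefined;
  let draft = "";
  for (let i = 0; i < maxIterations; i++) {
    draft = await translateOnce(source, feedback); // generator
    const verdict = await review(source, draft);   // adversary
    if (verdict.approved) return draft;            // qualified result
    feedback = verdict.feedback;                   // feed the critique back
  }
  return draft; // best effort after exhausting the iteration budget
}
```

The review's feedback is threaded back into the next translation attempt, which is what makes the retry loop converge rather than just re-rolling the dice.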

Compared to the original one-shot LLM translation, this translation design sacrifices time and token efficiency. However, it has a key advantage: it significantly improves translation quality by addressing the following issues:

  1. Missing Content in Translation: Some translation models may omit certain parts of the original text, leading to incomplete translations. The review model can check if the translation result includes all the original content, ensuring completeness.
  2. Incoherence in Long-Form Translation: Some translation models may produce inconsistent results when handling long articles. The review model can check the coherence of the translation, ensuring overall consistency.
  3. Stiff and Unnatural Phrasing: Some translation models may generate translations that sound stiff or unnatural. The review model can evaluate the fluency of the translation, ensuring it conforms to the expressive norms of the target language.

The results make it clear that translation quality takes priority over token and time efficiency. For scenarios like CZON that require high-quality translation, the adversarial generation model is a good choice.

Multi-Agents Memory Optimization

We cannot launch a separate process for each Agent: each process consumes roughly 100 MB of memory, so running many Agents simultaneously would exhaust it. A better approach is to run all Agents within a single process, saving the per-process overhead. The official OpenCode implementation separates Server and Client: a server process listens on a port (4096 by default), and multiple clients connect to that port to interact. All Agents thus run inside a single server process, while clients are only responsible for sending requests and receiving responses.

This way, we should be able to support launching hundreds of translation tasks simultaneously without crashing due to memory constraints.
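A minimal sketch of that client side, assuming the server/client split described above. The endpoint path and response shape are illustrative assumptions, not OpenCode's actual HTTP API:

```typescript
// Hypothetical client for a single shared Agent server. Each task is a
// session on the server, not a fresh ~100 MB process.

type FetchLike = (url: string, init?: any) => Promise<{ json(): Promise<any> }>;

const SERVER = "http://127.0.0.1:4096"; // OpenCode's default port

async function runTranslationTask(
  taskId: string,
  text: string,
  fetchFn: FetchLike = fetch as unknown as FetchLike, // injectable for testing
): Promise<string> {
  const res = await fetchFn(`${SERVER}/session/${taskId}/message`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
  return (await res.json()).result;
}

// Hundreds of tasks then share one server process's memory:
//   await Promise.all(texts.map((t, i) => runTranslationTask(`task-${i}`, t)));
```

Since the client holds no model state, its memory footprint is negligible; the fan-out in the comment is bounded by the server, not by per-client processes.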

Multi-Agents Control Constraints

The industry has two main approaches: one is having an Agent control other Agents, and the other is using a Script to control Agents.

The difference is that Agent-controlled-Agent is a soft constraint: the controlled Agent can decide, based on its own judgment, whether to execute another Agent's instructions. Script-controlled-Agent is a hard constraint: the Agent must strictly follow the Script's instructions.

The advantage of soft constraints is flexibility; the disadvantage is unreliability. The advantage of hard constraints is reliability; the disadvantage is inflexibility.

Problems with soft constraints are common. For instance, a workflow might be defined within an Agent, but the Agent often doesn't follow it, or even exits prematurely, leading to unexpected results. The problem with hard constraints is that the Script may not cover all scenarios, leaving the Agent unable to handle certain special cases.

While these two approaches seem incompatible, they can be combined. An Orchestrator Agent can be used to generate a Script, and then other Agents can execute tasks according to this Script. This combines the advantages of both: flexibility and reliability. In the early stages, Scripts can even be written manually to control Agent behavior. Complete control is the ultimate flexibility.
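One way to picture the combination: the Orchestrator (or, early on, a human) produces a plan, and a deliberately dumb runner executes it step by step. The `Step` shape and the agent names here are illustrative assumptions:

```typescript
// Soft constraint: the plan itself is generated flexibly (by an
// Orchestrator Agent, or written by hand in the early stages).
type Step = { agent: string; input: string };

function planTranslationPipeline(doc: string): Step[] {
  return [
    { agent: "translator", input: doc },
    { agent: "reviewer", input: doc },
  ];
}

// Hard constraint: the runner executes every step exactly as the
// Script specifies; no agent can skip ahead or exit early.
async function runPlan(
  steps: Step[],
  callAgent: (agent: string, input: string) => Promise<string>,
): Promise<string[]> {
  const outputs: string[] = [];
  for (const step of steps) {
    outputs.push(await callAgent(step.agent, step.input));
  }
  return outputs;
}
```

The flexibility lives entirely in plan generation; once a plan exists, execution is deterministic, which is where the reliability comes from.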

Therefore, the friction for a Script to call an Agent must be minimal: small enough that scheduling can be achieved with a single line of code, enabling complex multi-agent collaboration.

Anthropic's article on Multi-Agent systems mentions that it's better for sub-agent outputs to be written to the file system rather than returned to the main coordinator. Therefore, we can consider that a Script calling an Agent does not need to return a result; it only needs the Agent to write the result to the file system, which can then be read by other modules.
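Putting the two ideas together, the scheduling call is one line and the result travels through the file system rather than through the coordinator. `dispatchAgent` is a hypothetical one-line scheduler, not a real OpenCode function:

```typescript
// File-system hand-off: the agent call returns nothing useful; whoever
// needs the result reads it back from disk.
import { readFile } from "node:fs/promises";

async function translateToFile(
  dispatchAgent: (agent: string, input: string, outPath: string) => Promise<void>,
  text: string,
  outPath: string,
): Promise<string> {
  await dispatchAgent("translator", text, outPath); // one-line scheduling
  // Downstream modules read the result from the file system,
  // not from a return value passed through the coordinator.
  return readFile(outPath, "utf8");
}
```

Decoupling output from the call site also means the coordinator's context window never has to carry the sub-agent's full output, which is the point of Anthropic's recommendation.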

Furthermore, Scripts can be written in commonly used languages such as JavaScript. With a suitable library, an Orchestrator Agent first writes a Script, and the Script then calls other Agents to execute tasks. This DSL-free approach beats introducing a DSL.

Multi-Agents Ecosystem: OpenCode vs Claude Code

The OpenCode ecosystem is clearly more open than Claude's. OpenCode allows calling Agents via HTTP API (or SDK), viewing Agent Session status, and retrieving Agent outputs. This makes it easier for us to integrate OpenCode Agents into our systems and achieve complex multi-agent collaboration. Claude, on the other hand, takes the opposite approach, striving to create a closed ecosystem. It only allows calling Claude Agents through interfaces provided by Anthropic, limiting user freedom.
