As engineered systems evolve toward multi-agent architectures built on autonomous LLM-based agents, traditional governance approaches that rely on static rules or fixed network structures fail to address the dynamic uncertainties of real-world operation. This paper presents a framework that embeds adaptive governance directly into the design of sociotechnical systems by separating the agent interaction network from the information flow network. The system comprises strategic LLM-based system agents that engage in repeated interactions and a reinforcement learning (RL)-based governing agent that dynamically modulates information transparency. Unlike conventional approaches that require direct structural interventions or payoff modifications, our framework preserves agent autonomy while promoting cooperation through adaptive information governance: at each timestep, the governing agent learns to strategically adjust disclosure, determining which contextual or historical information each system agent can access. Experimental results demonstrate that this RL-based governance significantly enhances cooperation relative to static information-sharing baselines. This work establishes information transparency as a dynamic design parameter and shows how governance considerations can be embedded into complex engineering systems from the design phase. While validated on the repeated Prisoner's Dilemma, an abstract yet challenging governance setting, the framework offers a flexible strategy for fostering desired collective outcomes across diverse sociotechnical engineering applications, from human-robot collaboration to autonomous vehicle networks and future multi-agent AI systems.
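The governance lever described in the abstract, per-timestep control over what history each agent may observe, can be sketched in a toy repeated Prisoner's Dilemma. Everything below is illustrative: the reactive `agent_move` rule, the payoff matrix, and the static `transparent`/`opaque` disclosure policies are simplified stand-ins, not the paper's LLM agents or its learned RL governor.

```python
import random

COOPERATE, DEFECT = 0, 1
# Standard Prisoner's Dilemma payoffs: (row player, column player)
PAYOFFS = {(0, 0): (3, 3), (0, 1): (0, 5), (1, 0): (5, 0), (1, 1): (1, 1)}

def agent_move(last_opponent_move, informed):
    """Hypothetical reactive agent: plays tit-for-tat when the governor
    discloses the opponent's last move, and defects defensively when it
    cannot observe the opponent at all."""
    if informed:
        return COOPERATE if last_opponent_move is None else last_opponent_move
    return DEFECT

def run_episode(governor_policy, rounds=50):
    """Play `rounds` steps of the repeated PD under a disclosure policy.

    `governor_policy(t, history)` returns a pair of booleans: whether to
    reveal the opponent's last action to agent 0 and to agent 1. Returns
    total welfare (sum of both agents' payoffs) over the episode."""
    history = [None, None]  # last move played by agent 0 and agent 1
    total = 0
    for t in range(rounds):
        reveal0, reveal1 = governor_policy(t, history)
        a0 = agent_move(history[1], reveal0)
        a1 = agent_move(history[0], reveal1)
        p0, p1 = PAYOFFS[(a0, a1)]
        total += p0 + p1
        history = [a0, a1]
    return total

# Two static baselines for the information-transparency lever:
opaque = lambda t, h: (False, False)      # never disclose -> mutual defection
transparent = lambda t, h: (True, True)   # always disclose -> mutual cooperation

print(run_episode(transparent), run_episode(opaque))  # prints: 300 100
```

With these reactive agents, full transparency sustains mutual cooperation (welfare 6 per round) while opacity locks in mutual defection (welfare 2 per round), illustrating why disclosure is a meaningful control variable. In the paper's framework the fixed policies above would be replaced by an RL governor that chooses disclosure adaptively at each timestep.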



