Marco Nurisso
Talk recording
Intelligent systems must deploy internal representations that are simultaneously structured, to support broad generalization, and selective, to preserve input identity. We expose a fundamental tradeoff between these two properties. Assuming limits on the agent's ability to compute representational similarities, we derive closed-form expressions that pin the probabilities of correct generalization and identification to a universal Pareto front, independent of the geometry of the input space. Extending the analysis to multiple simultaneous inputs predicts that well-generalizing representations impose hard constraints on the agent's parallel processing capacity, echoing known results in cognitive science and artificial intelligence. A minimal ReLU network trained end-to-end reproduces these laws: a finite resolution emerges during learning, and the empirical trajectories closely follow the theoretical curves. Finally, we show that analogous limits appear in the markedly more complex setting of large language models (LLMs) and vision-language models (VLMs) prompted to perform simple behavioral tasks.
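
The abstract does not spell out the experimental details, but a minimal sketch of the kind of setup it describes might look like the following: a shared ReLU representation trained end-to-end to serve both a generalization readout (category) and an identification readout (input identity). The dataset, the bottleneck width, and the two-head architecture here are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of the kind of experiment the abstract describes:
# a minimal ReLU network whose shared representation must support both
# generalization (category) and identification (input identity).
# All sizes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

n_classes, n_per_class, dim = 8, 16, 32
n_inputs = n_classes * n_per_class

# Fixed random inputs: each input belongs to a category (cluster around a
# class center) and also carries a unique identity label.
centers = torch.randn(n_classes, dim)
x = centers.repeat_interleave(n_per_class, dim=0) + 0.3 * torch.randn(n_inputs, dim)
y_class = torch.arange(n_classes).repeat_interleave(n_per_class)
y_id = torch.arange(n_inputs)

# Shared ReLU representation with a deliberately narrow bottleneck, plus two
# linear readouts: one for the category, one for the identity.
bottleneck = 4
encoder = nn.Sequential(
    nn.Linear(dim, 64), nn.ReLU(),
    nn.Linear(64, bottleneck), nn.ReLU(),
)
class_head = nn.Linear(bottleneck, n_classes)
id_head = nn.Linear(bottleneck, n_inputs)

params = (list(encoder.parameters())
          + list(class_head.parameters())
          + list(id_head.parameters()))
opt = torch.optim.Adam(params, lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(2001):
    z = encoder(x)
    # Joint objective: the same representation is pulled toward structure
    # (merging inputs within a class) and selectivity (keeping them apart).
    loss = loss_fn(class_head(z), y_class) + loss_fn(id_head(z), y_id)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 500 == 0:
        with torch.no_grad():
            acc_gen = (class_head(z).argmax(1) == y_class).float().mean()
            acc_id = (id_head(z).argmax(1) == y_id).float().mean()
        print(f"step {step:4d}  generalization {acc_gen:.2f}  identification {acc_id:.2f}")
```

With a narrow bottleneck, the identity head typically saturates below perfect accuracy while the category head does not, giving a crude analogue of the structured-versus-selective tension; widening `bottleneck` tends to trade the balance back toward identification.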



