All these mental models are simplified maps of an infinitely complex reality. When we rely on them too heavily, do we risk falling into the trap of mistaking the map for the actual territory? The very tools we use to understand the world can end up shaping and even limiting our perspective. That's why being aware of the limitations of the models themselves is just as important as using them.
Your map/territory risk is exactly what this lineage formalizes -- internal maps are necessary but they shape and limit perception. Walter Lippmann (1922) makes "pictures in our heads" the operative reality of public judgment:
voidhorse: "mental model" vs "theory" is a real distinction in the literature. Kenneth Craik frames small-scale models as internal simulations for reasoning, not public theories:
andsoitis: "informal, simplified, personal" models are exactly why systematic errors show up. Daniel Kahneman and Amos Tversky document heuristics and biases when internal maps are over-trusted:
If you want to see Drescher operationalized, MOOLLM turns the schema mechanism into working skills. Schema Mechanism is the causal core, Schema Factory adds a deterministic toolchain and context bundles for LLM reasoning, and Play-Learn-Lift is the governance loop that maps ACT/OBSERVE/ATTRIBUTE/SPIN OFF into audited upgrades. This is GOFAI made practical with LLMs filling the old gaps in grounding and explanation.
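To make that loop concrete, here is a minimal Python sketch of one ACT/OBSERVE/ATTRIBUTE/SPIN OFF cycle. It assumes a world object exposing observe() and do(); the class names and the one-shot attribution step are illustrative simplifications for this thread, not MOOLLM's or Drescher's actual code.

# Minimal sketch of Drescher's schema cycle as Play-Learn-Lift frames it.
# All names are illustrative; real marginal attribution keeps per-condition
# statistics over many trials rather than deciding in one pass.
from dataclasses import dataclass

@dataclass
class Schema:
    context: dict       # conditions believed to enable the action
    action: str         # the action taken
    result: dict        # the outcome the schema predicts
    successes: int = 0  # times the result followed the action in context
    trials: int = 0     # times the action was tried in context

    def reliability(self) -> float:
        return self.successes / self.trials if self.trials else 0.0

def run_cycle(schema, world, spin_off_threshold=0.9):
    # ACT: take the schema's action in the (micro)world.
    before = world.observe()
    world.do(schema.action)
    # OBSERVE: compare the predicted result with what actually happened.
    after = world.observe()
    schema.trials += 1
    if all(after.get(k) == v for k, v in schema.result.items()):
        schema.successes += 1
    # ATTRIBUTE: crude stand-in for marginal attribution -- flag prior
    # state that the schema's context does not yet explain.
    unexplained = {k: v for k, v in before.items() if k not in schema.context}
    # SPIN OFF: a reliable schema with newly relevant conditions yields a
    # more specific child schema, the "audited upgrade" of Play-Learn-Lift.
    if unexplained and schema.reliability() >= spin_off_threshold:
        return Schema({**schema.context, **unexplained},
                      schema.action, dict(schema.result))
    return None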
Here is the exact kind of thing we are talking about -- the YAML Jazz schema examples are live, readable schemas-by-example with causal context, semantic comments, evidence counts, side effects, and marginal attribution notes. They include a practical devops edgebox/ingest cluster and a Zork/MUD "learn by dying" cluster, so you can see the mechanism at work in real data:
# YAML Jazz schema examples (comments are semantic)
#
# These are schemas-by-example: minimal structure, rich intent.
# Follow canon schema rules where possible, but annotate as needed.
# Ad hoc fields and side-notes are allowed for partially jelled ideas.
#
# Hypothetical entry showing the shape only (not one of the real clusters):
- schema: restart-ingest-on-backlog   # devops-flavored illustration
  context: {queue_depth: high}        # evidence: reliable in 14/15 trials
  result: {queue_depth: falling}      # side effect: brief throughput dip
And here is a MOOLLM simulation session explaining Gary Drescher's ideas themselves -- an ethical tribute simulation (the personas are simulated, not real people), grounded in documented work and analyzed source code, and framed for a simulated audience of familiar experts, to show how a Society of Mind meets "The Sims" style ensemble can explain itself:
Finally, if you want the deeper connections tour written specifically for this thread -- the big-picture synthesis that ties Papert, Minsky, Drescher, Play-Learn-Lift, and live microworlds into one operational map -- dive here:
This tutorial teaches you how to build a coordinated multi-agent workflow that exposes a tool via MCP. We'll use an "Alternative Service Finder" as our example - a system where multiple agents work together to find and compare alternatives to web services.
Claude Code can be integrated into OpenAgents networks by connecting it through the Model Context Protocol (MCP). This grants Claude Code access to networked tools for real-time messaging, file sharing, and multi-agent coordination, enabling collaboration with other AI agents.
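For concreteness, here is a minimal sketch of the server side: a single tool exposed over MCP using the official Python SDK (the mcp package). The find_alternatives tool, its toy catalog, and the server name are hypothetical stand-ins, not part of Claude Code or OpenAgents.

# Minimal MCP server exposing one tool via the official Python SDK.
# Any MCP client (Claude Code included) can call the tool once the
# server is registered; the tool body here is a hypothetical placeholder.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("alternative-service-finder")

@mcp.tool()
def find_alternatives(service: str) -> list[str]:
    """Return candidate alternatives for a named web service."""
    catalog = {  # toy data; a real agent would search and rank live sources
        "dropbox": ["Nextcloud", "Syncthing", "Seafile"],
        "slack": ["Mattermost", "Zulip", "Rocket.Chat"],
    }
    return catalog.get(service.lower(), [])

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default, which is what Claude Code expects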
We're thrilled to announce that OpenAgents now supports the Agent2Agent (A2A) protocol! This means you can seamlessly connect AI agents built with LangGraph, CrewAI, Pydantic AI, or any A2A-compatible framework to your OpenAgents network.
Claude Cowork just launched — and it's impressive.
Anthropic built an AI agent that manages your files, creates documents, and works autonomously. All written by Claude Code itself in just 10 days.
But here's what caught my attention:
It's a single agent working alone.
What if you could connect multiple AI agents into a network?
That's exactly what we're building at OpenAgents.
Instead of one agent handling everything, imagine:
→ 2 Claude Code agents pair-programming together
→ A coding agent + research agent tackling complex problems
→ Multiple specialized agents sharing knowledge in real-time
We call it Agent Coworking — and we're open-sourcing it.
Why this matters:
The future isn't one super-agent. It's networks of agents collaborating — just like human teams do.
OpenAgents provides the infrastructure:
• Connect any AI agents (Claude, GPT, open-source models)
• Protocol-agnostic (WebSocket, gRPC, HTTP)
• Shared artifacts and knowledge bases
• 100% open source
We're launching an Agent Coworking template with ready-to-run examples.
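To give a flavor of what such an example might do, here is a rough sketch of an agent joining a network over a plain WebSocket using the websockets library. The URL, message shape, and "register"/"broadcast" message types are invented for illustration and are not the actual OpenAgents API; see the repo for the real interface.

# Illustrative only: an agent connecting to a network and broadcasting a
# message. The endpoint and JSON protocol below are invented placeholders,
# not the real OpenAgents wire format.
import asyncio, json
import websockets  # pip install websockets

async def main():
    async with websockets.connect("ws://localhost:8570") as ws:
        await ws.send(json.dumps({"type": "register", "agent_id": "research-agent"}))
        await ws.send(json.dumps({"type": "broadcast",
                                  "text": "Found 3 candidate alternatives."}))
        print(await ws.recv())  # e.g., an ack or a message from a peer agent

asyncio.run(main())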
Two months ago, I joined OpenAgents, an open-source project dedicated to building an open and collaborative AI agent network (https://github.com/openagents-org/openagents). Since then, I've been busy organizing hackathons and promoting the demos developed by our team, all to get OpenAgents known by more people, yet so far there's been little to show for it.
OpenAgents makes it easy to build Agent Networks, enabling multiple AI agents to work together! The Elon Musk Tracker agent demonstrated here follows Elon Musk's latest updates 24/7, automatically collecting and organizing the information so you don't have to monitor feeds yourself. You can also use OpenAgents to track other public figures' updates and news in real time, and it's easy to get started. Give it a try!
Frameworks like AutoGen are used to build individual agents or agent teams, while OpenAgents is designed to connect countless such teams and individuals into a vast, dynamic, and scalable ecosystem.