Hacker News | new | past | comments | ask | show | jobs | submit | snasan's comments


All these mental models are simplified maps of an infinitely complex reality. When we rely on them too heavily, do we risk falling into the trap of mistaking the map for the actual territory? The very tools we use to understand the world can end up shaping and even limiting our perspective. That's why being aware of the limitations of the models themselves is just as important as using them.

Your map/territory risk is exactly what this lineage formalizes -- internal maps are necessary but they shape and limit perception. Walter Lippmann (1922) makes "pictures in our heads" the operative reality of public judgment:

https://en.wikipedia.org/wiki/Public_Opinion

Frederic Bartlett (1932) defines schemas as memory structures that pre-shape perception and recall:

https://en.wikipedia.org/wiki/Schema_(psychology)

Jean Piaget explains schema updating via assimilation/accommodation when evidence conflicts with the map:

https://en.wikipedia.org/wiki/Assimilation_(psychology)

Edward Tolman introduces cognitive maps, making "map" literal in psychology:

https://en.wikipedia.org/wiki/Cognitive_map

Marvin Minsky formalizes frames as slot-filled expectations that speed inference but can blind you to anomalies:

https://en.wikipedia.org/wiki/Frame_(artificial_intelligence...

voidhorse: "mental model" vs "theory" is a real distinction in the literature. Kenneth Craik frames small-scale models as internal simulations for reasoning, not public theories:

https://en.wikipedia.org/wiki/Kenneth_Craik

Philip Johnson-Laird formalizes mental models as internal simulations used for inference and prediction:

https://en.wikipedia.org/wiki/Philip_Johnson-Laird

andsoitis: "informal, simplified, personal" models are exactly why systematic errors show up. Daniel Kahneman and Amos Tversky document heuristics and biases when internal maps are over-trusted:

https://en.wikipedia.org/wiki/Heuristics_in_judgment_and_dec...

Repair loop: Seymour Papert's microworlds provide controlled sandboxes for testing and revising models:

https://en.wikipedia.org/wiki/Constructionism_(learning_theo...

Gary Drescher gives a schema mechanism for incremental action/outcome updates that rebuild the map from experience:

https://mitpress.mit.edu/9780262517089/made-up-minds/

If you want to see Drescher operationalized, MOOLLM turns the schema mechanism into working skills. Schema Mechanism is the causal core, Schema Factory adds a deterministic toolchain and context bundles for LLM reasoning, and Play-Learn-Lift is the governance loop that maps ACT/OBSERVE/ATTRIBUTE/SPIN OFF into audited upgrades. This is GOFAI made practical with LLMs filling the old gaps in grounding and explanation.
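To make the loop concrete, here is a minimal, self-contained Python sketch of the Drescher-style cycle described above -- ACT, OBSERVE, ATTRIBUTE reliability to a context/action/result triple, and SPIN OFF a refined schema when a context feature explains the outcomes. This is illustrative pseudocode in spirit (class and field names are mine), not the MOOLLM skill code:

```python
from dataclasses import dataclass

@dataclass
class Schema:
    """A Drescher-style schema: context -> action -> result, with evidence."""
    context: frozenset  # features that must hold before acting
    action: str
    result: str         # feature expected to hold after acting
    successes: int = 0
    trials: int = 0

    @property
    def reliability(self) -> float:
        return self.successes / self.trials if self.trials else 0.0

    def record(self, before: set, after: set) -> None:
        """OBSERVE/ATTRIBUTE: only count trials where the context held."""
        if self.context <= before:
            self.trials += 1
            if self.result in after:
                self.successes += 1

    def spin_off(self, feature: str) -> "Schema":
        """SPIN OFF: a child schema with a stricter, more predictive context."""
        return Schema(self.context | {feature}, self.action, self.result)

# Toy world: "push" opens the door only when "unlocked" is in the state.
bare = Schema(frozenset(), "push", "door_open")
for i in range(200):
    state = {"unlocked"} if i % 2 == 0 else set()
    after = state | ({"door_open"} if "unlocked" in state else set())
    bare.record(state, after)

# The bare schema is only 50% reliable, so spin off a refined one.
refined = bare.spin_off("unlocked")
for i in range(200):
    state = {"unlocked"} if i % 2 == 0 else set()
    after = state | ({"door_open"} if "unlocked" in state else set())
    refined.record(state, after)

print(bare.reliability, refined.reliability)  # 0.5 1.0
```

The marginal-attribution step -- noticing that "unlocked" separates successes from failures -- is what the real mechanism learns from statistics; here it is hand-picked to keep the sketch short.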

Drescher's Schema Mechanism as Anthropic Skill:

https://github.com/SimHacker/moollm/blob/main/skills/schema-...

Drescher's Schema Factory as Anthropic Skill:

https://github.com/SimHacker/moollm/blob/main/skills/schema-...

Play=>Learn=>Lift methodology as Anthropic Skill:

https://github.com/SimHacker/moollm/blob/main/skills/play-le...

Here is the exact kind of thing we are talking about -- the YAML Jazz schema examples are live, readable schemas-by-example with causal context, semantic comments, evidence counts, side effects, and marginal attribution notes, including a practical devops edgebox/ingest cluster and a Zork/MUD "learn by dying" cluster so you can see the mechanism at work in real data:

https://github.com/SimHacker/moollm/blob/main/skills/schema-...

  # YAML Jazz schema examples (comments are semantic)
  #
  # These are schemas-by-example: minimal structure, rich intent.
  # Follow canon schema rules where possible, but annotate as needed.
  # Ad hoc fields and side-notes are allowed for partially jelled ideas.
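In that spirit, a hypothetical schema-by-example might look like the following. Every field name and value here is invented for illustration (not copied from the repo), showing how causal context, evidence counts, side effects, and attribution notes ride along as semantic comments:

  # Hypothetical devops-flavored schema-by-example (all names invented):
  restart_ingest_on_edgebox:
    context: {host: edgebox, ingest_queue: stalled}   # observed preconditions
    action: restart_ingest
    result: {ingest_queue: draining}                  # predicted outcome
    evidence: {trials: 14, successes: 13}             # reliability ~0.93
    side_effects: [cpu_spike]                         # noticed, not yet modeled
    attribution: "fails when disk_full; candidate spin-off context"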
And here is a MOOLLM simulation session explaining Gary Drescher's ideas themselves -- an ethical tribute simulation (not actually real people), grounded in documented work and analyzed source code, and framed for a simulated audience of familiar experts, to show how a Society of Mind meets "The Sims" style ensemble can explain itself:

https://github.com/SimHacker/moollm/blob/main/examples/adven...

Finally, if you want the deeper connections tour written specifically for this thread -- the big-picture synthesis that ties Papert, Minsky, Drescher, Play-Learn-Lift, and live microworlds into one operational map -- dive here:

https://github.com/SimHacker/moollm/blob/main/designs/CONNEC...


“All perception is a gamble” - Robert Anton Wilson / Husserl


A tutorial teaching you how to build a coordinated multi-agent workflow that exposes a tool via MCP. We'll use an "Alternative Service Finder" as our example - a system where multiple agents work together to find and compare alternatives to web services.

Claude Code can be integrated into OpenAgents networks by connecting it through the Model Context Protocol (MCP). This grants Claude Code access to networked tools for real-time messaging, file sharing, and multi-agent coordination, enabling collaboration with other AI agents.
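As a sketch of what that wiring looks like: Claude Code reads MCP servers from a project-level .mcp.json. The file shape below is the standard Claude Code format, but the server name, package, and environment variable are assumptions for illustration, not the actual OpenAgents bridge:

  {
    "mcpServers": {
      "openagents": {
        "command": "npx",
        "args": ["-y", "openagents-mcp-server"],
        "env": { "OPENAGENTS_NETWORK_URL": "ws://localhost:8700" }
      }
    }
  }

With an entry like this in place, the network's messaging and file-sharing operations show up to Claude Code as ordinary MCP tools.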

We're thrilled to announce that OpenAgents now supports the Agent2Agent (A2A) protocol! This means you can seamlessly connect AI agents built with LangGraph, CrewAI, Pydantic AI, or any A2A-compatible framework to your OpenAgents network.

Claude Cowork just launched — and it's impressive. Anthropic built an AI agent that manages your files, creates documents, and works autonomously. All written by Claude Code itself in just 10 days.

But here's what caught my attention: It's a single agent working alone.

What if you could connect multiple AI agents into a network? That's exactly what we're building at OpenAgents.

Instead of one agent handling everything, imagine:

→ 2 Claude Code agents pair-programming together

→ A coding agent + research agent tackling complex problems

→ Multiple specialized agents sharing knowledge in real-time

We call it Agent Coworking — and we're open-sourcing it.

Why this matters: The future isn't one super-agent. It's networks of agents collaborating — just like human teams do.

OpenAgents provides the infrastructure:

• Connect any AI agents (Claude, GPT, open-source models)

• Protocol-agnostic (WebSocket, gRPC, HTTP)

• Shared artifacts and knowledge bases

• 100% open source
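The coworking idea reduces to a message bus that routes between named agents. Here is a toy, in-process Python sketch of that pattern -- illustrative only, not the OpenAgents API (class and handler names are mine); a real network would speak WebSocket/gRPC rather than call handlers directly:

```python
from collections import defaultdict
from typing import Callable

class Network:
    """Toy message bus: routes (sender, text) pairs to named agents."""
    def __init__(self) -> None:
        self.agents: dict[str, Callable[[str, str], None]] = {}

    def register(self, name: str, handler: Callable[[str, str], None]) -> None:
        self.agents[name] = handler

    def send(self, sender: str, recipient: str, text: str) -> None:
        # Deliver directly; a real network would serialize over a transport.
        self.agents[recipient](sender, text)

inbox: dict[str, list] = defaultdict(list)

def make_agent(name: str) -> Callable[[str, str], None]:
    def handle(sender: str, text: str) -> None:
        inbox[name].append((sender, text))
    return handle

net = Network()
net.register("coder", make_agent("coder"))
net.register("researcher", make_agent("researcher"))

# A coding agent and a research agent exchanging work.
net.send("coder", "researcher", "Find prior art on agent coworking")
net.send("researcher", "coder", "Three frameworks found; summary attached")

print(inbox["coder"])  # [('researcher', 'Three frameworks found; summary attached')]
```

Everything beyond this -- shared artifacts, discovery, protocol negotiation -- is what the actual infrastructure adds on top of the routing core.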

We're launching an Agent Coworking template with ready-to-run examples.

If you're building with AI agents, I'd love for you to check it out:

GitHub: https://github.com/openagents-org/openagents

Tutorial: https://openagents.org/showcase/agent-coworking

Star the repo if this resonates. We're building the internet of agents — and we want you to be part of it.


Two months ago, I joined OpenAgents, an open-source project dedicated to building an open and collaborative AI agent network: https://github.com/openagents-org/openagents

Since then I've been busy organizing hackathons and promoting the demos developed by our team, all to get OpenAgents known by more people. Yet there's been little to show for it.


OpenAgents makes it easy to build Agent Networks, enabling multiple AI agents to work together! The Elon Musk Tracker agent demonstrated here tracks Elon Musk's latest updates 24/7, automatically collecting and organizing information to significantly boost efficiency. Of course, you can also use OpenAgents to track other public figures' updates and news in real time—it's incredibly easy to get started. Give it a try!


There are quite a few frameworks focused on agent memory now, and I’m not sure if yours is better than Mem0.


Frameworks like AutoGen are used to build individual agents or agent teams, while OpenAgents is designed to connect countless such teams and individuals into a vast, dynamic, and scalable ecosystem.

