The post does a good job selling the “why” (hot-row contention + strict serializability), but it would help the community to see a clearer “when.” A practical rubric would look like: if X% of writes touch Y “house accounts” with RTT Z, OLTP throughput on MVCC/row-lock engines caps at ~N TPS while TB sustains ~M TPS, giving an apples-to-apples path from workload shape to DB choice. Framed that way, TB becomes a “consensus-backed integer ALU” you pair with a string DB, not a general OLTP replacement.
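To make the rubric concrete, here is roughly how I'd model the cap (my own assumptions, not numbers from the post): if the hot row's lock or latest version is held for about one RTT per transfer, the hot row serializes at ~1/RTT commits per second, and total throughput can't exceed that rate divided by the fraction of writes hitting the row.

```python
# Back-of-envelope model for the rubric above (a sketch, not a benchmark).
# Assumptions (mine): writes to one hot "house account" serialize on that row,
# and the row lock / latest version is held for roughly one round trip
# (interactive transaction or synchronous commit inside the critical section).

def hot_row_tps_cap(rtt_s: float, hot_write_fraction: float) -> float:
    """Upper bound on total TPS when `hot_write_fraction` of writes must
    serialize on a single hot row whose lock is held ~rtt_s seconds."""
    hot_row_rate = 1.0 / rtt_s            # serialized commits on the hot row
    return hot_row_rate / hot_write_fraction

# Example: 1 ms intra-region RTT, 80% of transfers credit one fee account.
# Cap ~= (1 / 0.001) / 0.8 = 1,250 TPS, no matter how many cores you add.
print(f"{hot_row_tps_cap(rtt_s=0.001, hot_write_fraction=0.8):,.0f} TPS cap")
```

Two numbers, and you land right in the “hundreds to low thousands of TPS” regime the thread keeps arguing about.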
Two gaps I keep hearing in this thread that could unblock adoption:
1. Reference architectures: serverless (Workers/Lambda) patterns, auth/VPN/stunnel/WireGuard blueprints, and examples for “OLGP control plane + TB data plane” (a rough sketch of that split follows this list).
2. Scaling roadmap: the single-core, single-leader design is philosophically clean—what’s the long-term story when a shard/ledger outgrows a core or a region’s latency budget?
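On item 1, the split I'd want to see documented looks something like this. A minimal sketch under my own assumptions: Postgres holds the strings (emails, KYC, metadata) and maps them to integer account IDs, while TB holds only accounts, balances, and transfers. `pg` and `tb_create_transfer` are hypothetical stand-ins, not the real SDK surface.

```python
from uuid import uuid4

def pay_invoice(pg, tb_create_transfer, payer_email: str, payee_email: str, amount: int):
    # Control plane (Postgres/OLGP): strings, auth, metadata -> integer account IDs.
    with pg.cursor() as cur:
        cur.execute(
            "SELECT email, tb_account_id FROM accounts WHERE email = ANY(%s)",
            ([payer_email, payee_email],),
        )
        ids = dict(cur.fetchall())

    # Data plane (TigerBeetle): the balance-changing integer work, one call,
    # idempotent via a client-generated 128-bit transfer ID.
    transfer_id = uuid4().int & ((1 << 128) - 1)
    tb_create_transfer(
        id=transfer_id,
        debit_account_id=ids[payer_email],
        credit_account_id=ids[payee_email],
        amount=amount,
        ledger=1,
        code=1,
    )
    return transfer_id
```

The point is the separation of concerns: everything variable-length and policy-shaped stays in the OLGP store, and TB only ever sees fixed-size integer work.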
Also +1 to publishing contention-heavy, real-world case studies (e.g., “fee siphon to a hot account at 80–90% contention”) with end-to-end SLOs and failure drills. That would defuse the “100–1,000 TPS” debate and make the tradeoffs legible next to Postgres, FDB, and Redis.