Looks cool, but as others have said, it’s really hard to try every similar project because they all promise the same thing, and I haven’t seen any of them provide benchmarks.
Claude Code keeps all the conversation logs on disk, right? Why not parse them asynchronously and then use hooks to enrich the context as the conversation goes? (I mean this in the broadest, most generic way — I guess we’d have to embed them, do some RAG… the whole thing.)
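A minimal sketch of what I mean, with heavy caveats: the transcript location and the JSONL record shape (`{"role": ..., "content": ...}`) are assumptions on my part, not a documented format, and the word-overlap scorer is just a stand-in for real embeddings + vector search. A hook would then prepend whatever `retrieve()` returns to the next turn’s context.

```python
# Hypothetical sketch: index on-disk JSONL transcripts and retrieve
# relevant snippets to inject via a hook. The directory layout and
# record shape are assumptions; swap the keyword scorer for embeddings
# in a real version.
import json
from pathlib import Path


def parse_transcripts(log_dir):
    """Yield message strings from every *.jsonl transcript under log_dir."""
    for path in Path(log_dir).glob("**/*.jsonl"):
        for line in path.read_text().splitlines():
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip partial/in-progress writes
            content = record.get("content")
            if isinstance(content, str) and content.strip():
                yield content


def retrieve(query, chunks, k=3):
    """Crude stand-in for embedding search: rank chunks by word overlap."""
    q = set(query.lower().split())
    scored = sorted(chunks, key=lambda c: -len(q & set(c.lower().split())))
    return scored[:k]
```

The async part would just be running `parse_transcripts` on a timer or file-watcher so the index stays warm between turns.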
Yep, parsing logs + async RAG works fine if you’re staying inside a single tool.
The issue we ran into when building agent systems was portability. Once you want multiple agents or models to share the same evolving context, each tool reconstructing its own memory from transcripts stops scaling.
We’re less focused on “making agents smarter” and more on avoiding fragmentation when context needs to move across agents, tools, or people — for example, using context created in Claude from Codex, or sharing specific parts of that context with a friend or a team.
That’s also why benchmarks are tricky here. The gains tend to show up as less duplication and less state drift rather than a single accuracy metric. What would constitute convincing proof in this space for you?
I’m slowly working on rigel.sh, a next generation Remote Desktop solution. Not AI related, but I got tired of janky desktop streaming / remote management solutions (I use the windows app on my Mac and Raspberry Pi Connect online) so I decided to build my own.
You install a client on the system you want to manage and enroll it with a single command, just like Tailscale. I’ve built a nice web application where users can manage and access their devices, set up monitoring, and configure alerts (for now it only tracks basic stuff like CPU/RAM…)
The whole thing uses WebRTC for p2p connections, and it’s very snappy because the entire graphics pipeline is fully custom (the encoding/decoding is platform-specific too). I’ve managed to get latency and quality on par with Parsec in many scenarios (I still have some work to do here since this isn’t my specialty).
I plan on making the whole thing open source with a permissive license and also offering a paid SaaS in early 2026 (early February, I think?). I plan on offering a free hobby tier and then paid business/pro features, but time will tell!
I haven’t even built a landing page yet, but if you’re interested, write me at ramon@rigel.sh
Hey, thanks for the comment! Not sure about that right now. I plan on doing a Show HN and maybe posting some content on Reddit and Product Hunt (I’ve built a startup before, but in the B2B space, and sales are very different).
Website: AI generated, slop
Comment replies: AI generated, also slop
“Fair point — that’s totally valid…” come on, are we supposed to just pretend this is an acceptable submission? I’m all in for vibe coding but at least be upfront about it and don’t waste other people’s time and energy.
Reminds me of the guy who recently spammed PRs to the OCaml compiler but this time the script is flipped and all the confusion is self inflicted.
I wonder how long it will take us to see a vibe-coded, slop-covered OS or database or whatever (I guess the “braveness” of these slop creators will be — or already is? — directly proportional to the quality of the SOTA coding LLMs).
Do we have a term for this yet? I mean the person, not the product (slop)
I mean, maybe the content is not AI generated (I wouldn’t say it is), but the website does have an AI-generated smell to it. From the colors to the shapes, it looks like Sonnet or Opus definitely made some tweaks.
Spaniard here. Let me know if I can somehow help navigate all of that. I’m very interested in history and everything related to the 1400–1500 period (although I’m not an expert by any definition), and I’d love to see what modern technology could do here, especially OCR and VLMs.
This looks great, but I’m a bit confused about what actually exists right now. The site uses the present tense (“it is”, “we are”), but I couldn’t find anything after browsing and clicking around for about 15 minutes. From the “log”, it sounds like only the parser for R Prime is implemented, and that R Prime itself is just a precursor to the actual language that will be used to develop the whole system (from scratch?). Does that mean that R Prime has to be fully developed before work on its successor can even begin?
If anything is already working, where’s the code? Can people contribute yet?
Not trying to nitpick, but it’s hard to tell what’s real vs. vaporware (beyond the author’s very impressive systems/language design and writing abilities).
The website also mentions a device, but I gather that’s many years away too, right? I mean, how long will it take to actually build everything the website describes?