
Super cool approach. What ended up being the hardest part of bypassing macOS’s lack of monitor mode—was it USB throughput constraints or getting reliable timing for TX/RX?

Reliable timing across TX/RX was by far the biggest hurdle. Raw USB access and overall throughput haven't been as big of an issue as I thought they would be.

Hey HN, Rachel here, another member of the team on the product/design side. Happy to answer questions about the use cases or where we're taking the UX. Matt covered the technical bits well, but please let me know about any UX enhancements or bugs you think of or run into!


How do agents handle the Stripe webhook setup? That's always been the gnarliest part for me manually.


Webhooks are the part where most agent-built auth falls apart. Here's how Corral handles it:

The server-express.ts template generates the webhook route with the raw body parser before express.json() (Stripe requires the raw body for signature verification — agents almost always get this wrong). The route handles checkout.session.completed, customer.subscription.updated, and customer.subscription.deleted events and auto-updates the user's plan in your database.
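
To make that concrete, the pattern looks roughly like this (a simplified sketch, not the literal server-express.ts template; updateUserPlan is a hypothetical stand-in for the database update):

    import express from "express";
    import Stripe from "stripe";

    const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);
    const app = express();

    // Hypothetical stand-in for the template's "update the user's
    // plan in your database" step.
    function updateUserPlan(event: Stripe.Event) {
      /* ... */
    }

    // The webhook route gets its own raw body parser. Stripe verifies
    // the signature against the exact bytes it sent, so express.json()
    // must not parse this route's body first.
    app.post(
      "/api/corral/webhook",
      express.raw({ type: "application/json" }),
      (req, res) => {
        let event: Stripe.Event;
        try {
          event = stripe.webhooks.constructEvent(
            req.body,
            req.headers["stripe-signature"] as string,
            process.env.STRIPE_WEBHOOK_SECRET!
          );
        } catch {
          return res.status(400).send("Invalid signature");
        }

        switch (event.type) {
          case "checkout.session.completed":
          case "customer.subscription.updated":
          case "customer.subscription.deleted":
            updateUserPlan(event);
            break;
        }
        res.json({ received: true });
      }
    );

    // JSON parsing for every other route comes after the webhook route.
    app.use(express.json());

The template also layers idempotency on top of this, since Stripe can deliver the same event more than once.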

So when your agent runs corral init, the webhook endpoint is already in your server at /api/corral/webhook, with Stripe signature verification wired in. Your agent just needs to:

1. corral stripe sync — creates the products/prices in Stripe
2. Set STRIPE_WEBHOOK_SECRET in .env
3. For local dev: stripe listen --forward-to localhost:3000/api/corral/webhook

That's it. The agent doesn't have to figure out raw body parsing, event routing, or idempotency — the template handles all of it. And since corral doctor checks for the webhook secret in your env, the agent gets told if it's missing.

The worst Stripe webhook bugs I found during testing were (1) express.json() parsing the body before the webhook route sees it, and (2) agents putting the webhook route after auth middleware that rejects unsigned requests. Both are baked into the template ordering now.
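
In middleware terms, the safe ordering boils down to this (again a sketch; webhookHandler and requireAuth are placeholders for whatever your app actually uses):

    import express from "express";

    const app = express();

    // Hypothetical placeholders for this sketch.
    const webhookHandler: express.RequestHandler = (req, res) => {
      res.json({ received: true });
    };
    const requireAuth: express.RequestHandler = (req, res, next) => next();

    // 1. Webhook first, with its own raw parser: before express.json()
    //    and before any auth middleware that rejects unsigned requests.
    app.post(
      "/api/corral/webhook",
      express.raw({ type: "application/json" }),
      webhookHandler
    );

    // 2. Then global JSON parsing.
    app.use(express.json());

    // 3. Then auth for the rest of the API.
    app.use("/api", requireAuth);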


ah makes sense, excited to hook it up to a project


This validates something we keep seeing: the bottleneck for AI agents isn't intelligence, it's that tooling isn't yet built for how they actually work. Structured specs and parseable errors > docs written for humans. Really cool to see this ship.


Thanks for following along since day 1!


Fair question. The core will always stay open source and free. We’ll monetize around it with things like managed hosting, enterprise support, and compliance options (HIPAA, SOC 2, etc.). Basically, we make money when teams want someone to stand behind it in production, not for using the software itself. But let us know if you have other ideas! We're still new to open source.


So to clarify, does this mean you don't plan to go down the route of having a "community edition" vs an "enterprise edition" with missing features in the former?


The split model leaves too many holes to make it really useful for the community. When we add things like "authentication", we'll ship the plugins (e.g. an Okta integration for enterprises). We will do our best to maintain all of the plugins, but if there are 30 different auth providers, we'll have to rely on the community to maintain the smaller ones. Enterprises will pay us to ENSURE everything is up to date, safe, etc.

The Support + Service model has been proven by large and small companies alike - it's also one that will survive the coming AI contraction.


Correct for now. We've got a ways to go to hammer out the real details of enterprise. We wouldn't want a world where not having the enterprise-y add-ons would hinder the core value prop.


Yep! We were working on an authentication startup (https://news.ycombinator.com/item?id=30615352) and built it to $1.5M in ARR, but then we saw an even bigger pain point: local AI is hard. When we tried building a corporate knowledge base with RAG and local models, we hit the same wall: a painful gap between prototype and production.

Production-ready enterprise AI requires solving model management, RAG pipelines, model fine-tuning, prompt engineering, failover, cost optimization, and deployment orchestration. You can’t just be good at one or two of these; you have to be great at all of them, or your project won't succeed. And so LlamaFarm was born!

Monetization-wise, we're open source and free forever, with revenue coming from enterprise support, managed deployments, and compliance packages. Basically, companies pay for confidence, not code.


Just found and fixed a bad link at the bottom of the website - thanks again for pointing that out!


Hey thanks! Sorry about the broken link - here's a better docs link for now: https://docs.llamafarm.dev/docs/intro. Mind sharing where it's broken?


Yea, of course. I was trying to click the docs link from the homepage on llamafarm.dev, from two different networks and two different browsers: Edge and Brave. Neither worked. Phone didn’t either. It takes me to a Supabase link that errors out. Hope that helps! Thanks for the link! (Btw, I don’t see any errors in the browser console.)


Thank you! Found and fixed two broken links on the website - appreciate the comment and the detailed testing.


Hey thanks! I'm Rachel from LlamaFarm; we actually use LlamaIndex as one of our components. It's great for RAG, and we didn't want to reinvent what they've already done. LlamaFarm is about bundling the best of open source into a complete, production-ready AI project framework. Think of us as the integration and orchestration layer that makes LlamaIndex, plus model management, plus prompt engineering, plus deployment tools all work together seamlessly.

Where LlamaIndex gives you powerful RAG primitives, we give you the full production system - the model failover when OpenAI is down, the strategy system that adapts from development to production, the deployment configs for Kubernetes. We handle all the boring stuff that turns a RAG prototype into a system that actually runs in production. One YAML config, one CLI command, and you have everything from local development to cloud deployment. :)


Hey thanks for the feedback! We added a note under the encoded section to address this - do you think that helps?

