Hacker News | lukaesch's comments

Not local. Inference is the only part not written in Rust so far.

I am using Replicate to run Docker images with a pipeline based on faster-whisper, VAD, pyannote, and a custom LLM enhancement flow.

Thanks for sharing candle/ort. Interesting to see the WASM in-browser opportunities.


Congrats on the impressive project! Sharing structs between the frontend and backend sounds very helpful. I've experienced the advantage of shared FE and BE code in JS/TS projects, and it’s definitely valuable.

What led you to choose Leptos for the frontend besides the shared code advantage?


For frontend work in Rust, there are Yew, Sycamore, Dioxus, and Leptos to choose from. Leptos fits me best in terms of DX.


That's awesome! Reaching the point where subscribers cover costs is a huge milestone. Rust's reliability making it easy to revisit projects is a big plus. How do you find Rocket and Actix compared to Axum? Why Tera instead of Askama? Would love to hear your take!


Rocket/Actix/Axum are all similar enough; I've used them all and I find Rocket the most ergonomic for web apps, Actix for pure backend APIs. I have spent less time working in Axum since it is the newest, but I don't have anything "bad" to say about it.

I started with Rocket for Notado, but because there was a long period of pre v0.5 stagnation, I went with Actix for Kullish. These days I'd be happy starting a new project with either.

As for Tera over Askama, I don't think Askama was around when I started building Notado; Tera was the first templating engine I used in Rust and I learned its ins and outs pretty well, so now it's just the default thing that I reach for whenever I'm building a web app.


Thanks for the motivation! I completely agree: using Rust to build a web app is truly refreshing. I've worked with Java, TypeScript, and Python in recent years, and it's gratifying to catch most errors at compile time instead of at runtime.


Absolutely, give it another try! I've spent much of my life with GCed languages too. Axum, SQLite with SQLx, and Askama form a dev-friendly combo.

Once you grasp Rust's concepts, you'll find the control and efficiency rewarding. It's worth the effort!


Thanks. Interesting. What best practices did you learn along your side project for integrating Rust with Elixir/Phoenix, especially around managing CRUD operations and keeping the system efficient?


Right now I run them as two separate services. Phoenix handles all the user signup and management, the landing page, docs, and vending API keys, while I use Rust for the API product itself, which does some CPU-bound work where Elixir would have been weaker. They both share a Postgres database, where the Rust API is restricted to the API keys table.

There is also Rustler, which lets you call Rust code from Elixir directly as NIFs, but I opted to deploy them separately.


Thanks for the hint!

We hit the front page on Hacker News and received many submissions, which led to network errors when fetching some podcasts. I've just added retry logic and scheduling for new podcast fetches and pushed these changes to production.

The Criminal[1] podcast is now added. Due to the current demand, our transcription queue is a bit backed up, so the transcript for this podcast will appear in a few days. Feel free to check back then.

I'm actively working on securing more GPUs to scale this process efficiently, so you won't have to wait as long in the future.

Would it be interesting if I added email "transcription ready" notifications for those who submit a new podcast?

[1] https://www.audioscrape.com/podcast/criminal


Oh yeah, I totally understand the current circumstances; just trying to be helpful in debugging.

I think a notification feature makes a lot of sense, but it depends on how long the wait tends to be. In terms of setting expectations it might be better to display the current backlog and an estimate about when the transcription might be done (though of course both would be even better than either).


Thanks for the feedback! I'm glad to hear you find the notification idea useful. I'm considering displaying the current backlog and estimated completion times as you propose.

How would you expect to get an understanding of the backlog? Would a dedicated page for the entire backlog be helpful, or would you prefer to see which episodes are being transcribed on each podcast page?

Any insights on what would be most helpful for you would be greatly appreciated!


I think there's a range of ways you could communicate it. Kind of depends on the structure of how the backlog gets churned through. Maybe each episode card includes a sigil for the transcript that's either a green circle with a check or a short summary of how long you expect before you process it (6d, 6h, 15m, etc)? That's also kind of busy - maybe you end up putting a single element at the top of the podcast page saying how long before the next episode will be transcribed and perhaps how long before all episodes might be transcribed (i.e. you need to wait at least this X long and probably not longer than Y)?

Having a big central page for all backlogs sounds cool, but I imagine I would probably care about the expected delay for a particular podcast / episode of a podcast most of the time?


Suggestion: you probably have fairly easy access to how long the most recently completed task took?

Perhaps on the submission form include a low-resolution indication of that, so people's expectations are set before they enter a podcast link? Round it off to the nearest minute/hour/day, and format it appropriately:

"Processing times are currently around 3 hours"

or

"Processing times are currently around 17 minutes"


Thanks! I was inspired by levelsio's meme about having MVPs in a single index.php file. Traditionally, I've organized codebases into folders, but I started questioning whether that's necessary.

Folders often just add an extra layer to search through. It's basically a search param. With Neovim and strict naming conventions, I've found managing everything in one file works quite well. Keyboard navigation can make folders feel like a hassle in the vi context. This setup has been effective so far, though potential downsides might appear later on.


I like to use LazyVim: <space> s S does a global symbol search, which is super handy and works great with any Cargo project thanks to rust-analyzer.


Might want to check out Harpoon: https://github.com/ThePrimeagen/harpoon/tree/harpoon2

No comment on your one-file approach one way or the other, but like all of us, you will need to deal with tree-shaped projects at some point, and I've found Harpoon to be a good solution for that. Global marks can only get you so far.

Good luck with Audioscrape btw.


I can confirm this! I initially tried shortcuts with unwrap, but once the structure is solid, it's on par with TS/Python frameworks.


Thanks for asking! Using SQLite with SQLx in Rust provides type-safe SQL queries, which enhances safety by catching errors at compile time.

As for backups, the setup is straightforward: just periodically copy the SQLite file to a secure location. Since it's a single binary and a database file, this keeps things simple and low-maintenance.


I hope you're calling the `backup` command in sqlite. A simple copy can leave sqlite db files in an inconsistent state from which sqlite can't recover.
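For reference, a minimal sketch of the online backup from the CLI (the database path and table are made up; `.backup` takes a transactionally consistent snapshot even while the app is writing):

```shell
# Create a tiny example database (stand-in for the real app.db)
sqlite3 /tmp/app.db "CREATE TABLE IF NOT EXISTS notes(id INTEGER PRIMARY KEY, body TEXT);"
sqlite3 /tmp/app.db "INSERT INTO notes(body) VALUES ('hello');"

# Use SQLite's online backup rather than cp: the copy is consistent
# even if another process is mid-transaction.
sqlite3 /tmp/app.db ".backup '/tmp/app-backup.db'"

# The backup is a complete, readable database file.
sqlite3 /tmp/app-backup.db "SELECT body FROM notes LIMIT 1;"
```

The same mechanism is available programmatically via the SQLite backup API if you'd rather trigger it from the app itself.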


Thanks for the hint. I didn't know that. So far nothing has gone wrong, but I'll start using the backup command.


"reliable software"

