Hacker News | adamnemecek's comments

Consider adding support for Typst.

Interesting idea! Typst is compelling (Rust-based too). Not on immediate roadmap but could be a future addition. TeX is heavier but possible via external tools + pipeline feature.

Or even better, TeX. I realize capital bought out even basic typesetting but let's not encourage this

Typst is open-source.

Open source doesn't mean relinquished from capital by any means. I also don't blame the author of typst. But TeX is truly free from capital, and that should mean far more than the aesthetics of a nicer interface.

Integration with typst will be more straightforward than latex.

Yes, at the cost of dragging people into subscription software. Fuck off

How?

Markdown was cool for a while. I have switched to typst and boy is that an improvement. It’s the love child of latex and markdown. With markdown you’d still have to embed latex, while typst has its own thing that is nicer than latex.

I've been enjoying Typst. I worry that much of it is too complex for many end users. I'm musing about having end users draft stuff in markdown, then render that markdown with Typst templates.

You don’t have to use those parts, you can use it as markdown.
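For illustration, a minimal Typst document really does read like Markdown until you opt into more (a hedged sketch; the `#set` rule at the end is just one example of the scripting layer):

```typst
= A Heading

Some _emphasis_ and *strong text*, much like Markdown.

- bullet lists work as you'd expect
- inline math uses Typst's own notation: $x^2 + y^2 = r^2$

// styling and scripting are opt-in, e.g. numbered headings:
#set heading(numbering: "1.")
```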

Good call, I've had success with:

     pandoc -f gfm -t typst -o file.typ file.md 
and, as you'll know, it's easy to add a template if required.

Pandoc is cool but I hate writing my own scripts for it.

the icing on the cake would be gitlab, github, etc. rendering typst like markdown

Typst is lovely.

I started using nushell around April 2024. While it is much better than other shell languages, the lack of proper types is painful.

Shell languages make sense if you believe in the Unix philosophy. For me, the problem with the Unix philosophy is the endless serialization and deserialization as well as lack of proper error handling. So nushell for me is a good answer to an ill-posed question.

The approach I have been taking is a single binary with a billion subcommands and aliasing these. I currently have two of these in Rust, one in Swift. I tried going all Rust but integrating with certain aspects of macOS is just so much simpler in Swift.

The recent push to make CLI development suck less has helped a lot; argument parsing, for example, used to be painful but is largely a solved problem now.
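The "one binary, many subcommands" pattern above can be sketched like this (a hypothetical example using Python's argparse subparsers, which play the role clap plays in Rust; the `tool`, `greet`, and `version` names are made up):

```python
import argparse

# Hypothetical sketch of the "one binary, many subcommands" pattern;
# each subcommand gets its own parser and arguments.
def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="tool")
    sub = parser.add_subparsers(dest="cmd", required=True)

    greet = sub.add_parser("greet", help="say hello")
    greet.add_argument("name", nargs="?", default="world")

    sub.add_parser("version", help="print the version")
    return parser

def run(argv):
    args = build_parser().parse_args(argv)
    if args.cmd == "greet":
        return f"hello {args.name}"
    return "0.1.0"

print(run(["greet", "ada"]))  # hello ada
```

A shell alias per subcommand (e.g. `alias greet='tool greet'`) then gives each one a short name, which is the aliasing step the comment describes.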


Serialization/deserialization will always be needed unless all your programs share the same ABI. It's just that in nushell's case you aren't serializing to human-readable text.

Endless serialization and deserialization is what makes me hate Bash and love PowerShell, which solves this issue by piping full objects between commands (with cmdlets like ConvertTo-Json to serialize at the boundary).
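The contrast the last two comments describe can be sketched in Python (hypothetical data; nushell and PowerShell apply this idea at the shell level):

```python
import json

# Text pipeline: every stage serializes to flat strings and the next
# stage has to re-parse them.
text = "alice 42\nbob 7\n"                      # stage 1 emits text
parsed = [line.split() for line in text.splitlines()]
big = [name for name, n in parsed if int(n) > 10]

# Structured pipeline (the nushell/PowerShell model): values keep their
# types between stages, so nothing is re-parsed along the way.
records = [{"name": "alice", "n": 42}, {"name": "bob", "n": 7}]
big_structured = [r["name"] for r in records if r["n"] > 10]

assert big == big_structured == ["alice"]
print(json.dumps(big_structured))  # serialize once, at the boundary
```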

> Abstractions don’t remove complexity. They move it to the day you’re on call.

Then they are bad abstractions. I get where he is coming from, but the entire field is built on abstractions that let you translate, say, a matmul into shuffling electrons without doing the shuffling yourself.


They are very different.


AI is capital intensive because autodiff kinda sucks.


> run them like their own little ramshackle empire for their personal enrichment

If you are from the US, I’m laughing my butt off, the irony is not lost on me.


There is a Metal Obj-C API, Metal implementation is C++.


No it's not - the compiler for MSL is of course C++ because it's LLVM but the runtime is absolutely written in objc (there weren't even C++ bindings until recently).


No, I mean what is inside the Objective-C objects. Essentially everything on macOS has an Objective-C API but is implemented using C++. Have you ever noticed the ".cxx_destruct" method on nearly all objects?

What you are talking about are C++ wrappers around Metal Objective-C API. Yes, it is weird as they are going C++ -> Objective-C -> C++. Why not go directly? Because Apple does not ship C++ systems frameworks.

The term is Objective-C++.


Modern ML builds on two pillars: GPUs and autodiff. Given that GPUs are running out of steam, I wonder what we should focus on now.


The price, power, and size: make it cheap, low-power, and small enough for mobile. One way to do this is inference in 4, 2, or 1 bit. Also, GPUs are parallel, and most tasks can be split across several GPUs; just by adding more you can, in theory, scale up to infinity. So datacenters aren't going anywhere, they will still dominate.

Another way is CPU + fast memory, like Apple does. It's limited but power-efficient.

It looks like, as the ecosystem develops, we need the whole spectrum: big models plus tools running in datacenters, smaller models running locally, and even smaller ones on mobile devices and robots.
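The low-bit inference idea mentioned above can be sketched as symmetric 4-bit weight quantization (a hedged toy example with made-up numbers, not any particular library's scheme):

```python
# Toy sketch of symmetric int4 quantization: store weights as integers
# in [-7, 7] plus one float scale, cutting memory roughly 8x vs fp32.
def quantize_int4(weights):
    scale = max(abs(w) for w in weights) / 7.0
    q = [max(-7, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

w = [0.9, -0.35, 0.05, 0.7]
q, s = quantize_int4(w)
w_hat = dequantize(q, s)
# round-trip error is bounded by half a quantization step
assert all(abs(a - b) <= s / 2 + 1e-9 for a, b in zip(w, w_hat))
```

Real schemes quantize per-channel or per-group and pack two int4 values per byte, but the accuracy/size trade-off is the same idea.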


My point is that revising autodiff is overdue.


* revisiting


If you like this, check out typst https://typst.app

