
Rust prevents the footgun, but also prevents shooting in the direction where your foot would be even if it isn't there.

There are absolutely places where that is required and in Rust those situations become voodoo to write.

C++ by default has more complexity, but it has the same complexity regardless of domain.

Rust by default has much less complexity, but in obscure situations off the beaten path the complexity ramps up dramatically, far above C++.

This is not an argument for or against either language; it's a compromise in language design. You can choose to dislike the compromise, but that doesn't mean it was the wrong one, it just means you don't like it.

A relatively simple but illustrative example: I want variable X to be loaded into a register in this function and only written to memory at the end of the function.

That is complex in C/C++, but you can look at the generated assembly and attempt to coerce the compiler into it.

In Rust everything is so abstracted that I wouldn't know where to begin looking to coerce the compiler into generating that machine code, and I might just decide to implement it in asm, which defeats the point of using a high-level language.

Granted, you might go the FFmpeg route and just choose to do that regardless, but Rust makes it much harder.

You don't always need that level of control but when you do it seems absurdly complex.


> I want variable X to be loaded into a register in this function and only written to memory at the end of the function.

> That is complex in C/C++, but you can look at the generated assembly and attempt to coerce the compiler into it.

> In Rust everything is so abstracted that I wouldn't know where to begin looking

I don't know if I fully understand what you want to do, but (1) controlling register allocation is the realm of inline asm, be it in C, C++, or Rust. And (2) if "nudging" the compiler is what you want, then it's literally the same thing in Rust as in C++, it's a matter of inspecting the asm yourself or plonking your function onto godbolt.
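
To make the "nudging" concrete, here's a minimal C++ sketch (hypothetical names; nothing here is guaranteed by the language, you verify it by reading the emitted asm):

    #include <cstddef>
    #include <cstdint>

    // Hypothetical example: accumulate into a local and touch memory once.
    // With optimisations on, mainstream compilers will normally keep `acc`
    // in a register for the whole loop; the store at the end is the only
    // write of the result to memory. Nothing guarantees this, so check the asm.
    void accumulate(const std::uint64_t* data, std::size_t n, std::uint64_t* out) {
        std::uint64_t acc = 0;              // expected to stay in a register at -O2
        for (std::size_t i = 0; i < n; ++i)
            acc += data[i];
        *out = acc;                         // the single write to memory
    }

If you need a specific register rather than just "a register", that's inline asm territory in either language (GCC/Clang extended asm, Rust's asm! macro), and at that point both are equally manual.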


I think the issue is that a naive translation of the source into asm, which is somewhat simulated by -O0, produces usable code for C, while it doesn't for Rust.

This.

I agree that you will probably just end up writing asm, but that was a trivial example; there are non-trivial examples involving jump tables, unrolled loops, etc.

Effectively, weird optimisations that rely on the gap between the abstract machine the compiler targets and the real hardware. There are just more abstractions in Rust than in C++ by virtue of the safety mechanisms; it's just plain not possible to have the one without the other.

The hardware can legally do things that Rust either cannot allow, or can only allow if you write extremely convoluted code; C/C++ is closer to the metal in that regard.

Don't get me wrong, I am all for the right abstractions; they allow insane optimisations that humans couldn't dream of, but there is a flip side.


My high-level understanding of the UB concept is that it means false positives to the question "Is this a valid program?". Given that the philosophy of C is mostly "do what the programmer wrote, no questions asked", the language is designed so that the probability of false negatives goes to zero. This obviously means that the number of false positives goes up.
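
A minimal C++ illustration of such a false positive (purely for exposition): the compiler accepts the program without a word, yet it was never a valid program.

    #include <climits>

    // Accepted by the compiler with no diagnostics, yet not a valid program:
    // signed integer overflow is undefined behaviour, so the optimiser may
    // assume it never happens (e.g. treat x + 1 > x as always true).
    int bump(int x) {
        return x + 1;            // UB when x == INT_MAX
    }

    int main() {
        return bump(INT_MAX);    // compiles cleanly; behaviour is undefined
    }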

Rust basically takes the opposite approach of making false positives go to zero, which makes the false negatives go up, which you need to work around with unsafe or type gymnastics.

The third approach is to make both false positives and false negatives zero by restricting the set of programs, which is what non-systems languages do.


Are you sure Claude didn't do exactly the same thing, and the harness, Claude Code, just hid it from you?

I have seen AI agents fall into the exact loop the GP described and need manual intervention to get out of it.

Also, blindly having the AI migrate code from "spaghetti C" to "structured C++" sounds more like a recipe for turning "spaghetti C" into "fettuccine C++".

Sometimes it's hidden data structures and algorithms you want to formalize when doing a large-scale refactor. I have found that AIs are definitely able to identify those, but it's definitely not their default behaviour, and they fall out of that behaviour pretty quickly if not constantly reminded to do so.


> Are you sure Claude didn't do exactly the same thing, and the harness, Claude Code, just hid it from you?

What do you mean? Are you under the impression I'm not even reading the code? The code is actually the most important part because I already have working software but what I want is working software that I can understand and work with better (and so far, the results have been good).


Reading the code and actually understanding the code are not the same thing.

"This looks good", vs "Oh that is what this complex algorithm was" is a big difference.

Effectively, to verify that the code is not just being rewritten into the same code with C++ syntax and conventions, you need to understand the original C code. That means the hard part was not the code generation (via LLM or fingers) but the understanding, and I'm unsure the AI can do that high-level understanding, since I have never gotten it to produce it without explicitly telling it to.

Effectively, "x.c, y.c, z.c implements a DSL but is convoluted and not well structured, generate the same DSL in C++" works great. "Rewrite x.c, y.c, z.c into C++ buildings abstractions to make it more ergonomic" generally won't recognise the DSL and formalise it in a way that is very easy to do in C++, it will just make it "C++" but the same convoluted structure exists.


> Reading the code and actually understanding the code are not the same thing.

Ok. Let me be more specific then. I'm "understanding" the code since that's the point.

> I'm unsure the AI can do that high-level understanding, since I have never gotten it to produce it without explicitly telling it to.

My experience has been the opposite: it often starts by producing a usable high-level description of what the code is doing (sometimes imperfectly) and then proposes refactors that match common patterns -- especially if you give it enough context and let it iterate.

> "Rewrite x.c, y.c, z.c into C++ buildings abstractions to make it more ergonomic" generally won't recognise the DSL and formalise it in a way that is very easy to do in C++, it will just make it "C++" but the same convoluted structure exists.

That can happen if you ask for a mechanical translation or if the prompt doesn't encourage redesign. My prompt was literally to make it well-designed, idiomatic C++, and it did that. Inside the LLM's training data is a whole bunch of C++ code, and it seems to be leaning on that.

I did direct some goals (e.g., separating device-specific code and configuration into separate classes so adding a device means adding a class instead of sprinkling if statements everywhere). But it also made independent structural improvements: it split out data generation vs file generation into pipeline/stream-like components and did strict separation of dependencies. It's actually well designed for unit testing and mocking even though I didn't tell it I wanted that.
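
Roughly the shape I mean, as a simplified hypothetical sketch (made-up names, not the actual generated code):

    #include <cstddef>
    #include <string>
    #include <vector>

    // Hypothetical sketch of the class-per-device structure: device-specific
    // behaviour sits behind an interface, so supporting a new device means
    // adding one class instead of another branch in every if chain.
    struct Device {
        virtual ~Device() = default;
        virtual std::string name() const = 0;
        virtual std::vector<std::byte> generateData() const = 0;
    };

    struct DeviceA final : Device {
        std::string name() const override { return "DeviceA"; }
        std::vector<std::byte> generateData() const override { return {}; }
    };

    // File generation only sees the interface, which is also what makes it
    // straightforward to unit test with a mock device.
    void writeFiles(const Device& dev) {
        (void)dev;  // ... stream dev.generateData() into the output files ...
    }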

I'm not claiming it has human-level understanding or that it never makes mistakes -- but "it can't do high-level understanding" doesn't match what I'm seeing in practice. At minimum, it can infer the shape of the application well enough to propose and implement a much more ergonomic architecture, especially with iterative guidance.

I had to have it introduce some "bugs" for byte-for-byte matching because it had generalized some of the file generation and the original C code generated slightly different file structures for different devices. There's no reason for this difference; it's just different code trying to do the same thing. I'll probably remove these differences when the whole thing is done.


That clarifies a lot.

So effectively it was at least partly guided refactoring. Not blind vibe coding.


> Which is a huge risk factor for Rust, especially in today's context of the Linux kernel. If I have an object created/handled by external native code, how do I make sure that it respects Rust's lifetime/aliasing rules?

Can you expand on this point? Are you worried about whether the external code is going to free the memory out from under you? That is beyond what any guarantee can cover: the compiler cannot guarantee what happens at runtime, no matter what the author of a language wants. The CPU will do what it's told; it couldn't care less about Rust's guarantees even if you built your code entirely in Rust.

When you are interacting with the real world and real things, you need to work with different assumptions: if you don't trust that the data will remain unmodified, copy it.

No matter how many abstractions you put on top of it, there is still lightning in a rock messing with 1s and 0s.


This is more akin to selling a car to an adult who cannot drive, who then proceeds to ram it through their garage door.

It's perfectly within the capabilities of the car to do so.

The burden of proof is much lower, though, since the worst that can happen is that you lose some money or, in this case, hard drive contents.

For the car, the seller would be investigated because there was a possible threat to life; for an AI, it's buyer beware.


I think the general public has a MUCH better grasp on the potential consequences of crashing a car into a garage than of some sort of auto-run terminal command mode in an AI agent.

These are being sold as a way for non-developers to create software; I don't think it's reasonable to expect that kind of user to have the same understanding as an actual developer.

I think a lot of these products avoid making that clear because the products suddenly become a lot less attractive if there are warnings like "we might accidentally delete your whole hard drive or destroy a production database."


I don't think that's entirely true. Seeking mastery does not imply being a master.

If you have only ever seen one pattern to solve a problem (a trivial example being inheritance) and therefore apply it to the best of your ability, then you have achieved mastery within your ability. Once you see a different pattern, composition, you can then master that too, and master identifying when each is suitable.
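
To make that trivial example concrete, a hypothetical C++ sketch of the two patterns:

    #include <functional>
    #include <iostream>
    #include <string>
    #include <utility>

    // Inheritance: the logger "is a" file writer, so it is welded to one output.
    struct FileWriter {
        void write(const std::string& s) { std::cout << s << '\n'; }
    };
    struct InheritingLogger : FileWriter {
        void log(const std::string& msg) { write(msg); }
    };

    // Composition: the logger "has a" sink, so the destination can be swapped
    // out or mocked without touching the logger itself.
    struct ComposedLogger {
        explicit ComposedLogger(std::function<void(const std::string&)> sink)
            : sink_(std::move(sink)) {}
        void log(const std::string& msg) { sink_(msg); }
    private:
        std::function<void(const std::string&)> sink_;
    };

Neither is wrong in itself; the mastery is in recognising which one the situation calls for.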

Lack of mastery is just using inheritance despite seeing alternative patterns.

Naturally, mastery also includes seeking alternative solutions, but just because a codebase uses inferior patterns does not mean those who came before did not strive towards mastery; it's possible that they didn't know better at the time and now cannot get the time to revise the work.

There's always a juggling act in the real world.

Assume incompetence and not malice, and remember that incompetence is not a permanent state of being. A person without experience can be seen as incompetent but quickly become competent with training or experience; the code they wrote while inexperienced still stems from that incompetence.

Strive to see your previous self as incompetent (learn something new every day)


Could you write us a nice blog post or article with performance metrics to prove this?

You might be correct, but at this point your statement is as much a lie as the parent's.


Because a bigger number doesn’t translate to higher perceived performance…

The only compelling reason that I want to upgrade my Sandy Bridge chip is AVX2.

So it is instruction set, not perf; sure, there will be improved performance, but most of the things that are actually performance issues are already handed off to the GPU.

On that note, probably ReBAR and PCIe 4 as well, but those aren’t dramatic differences; if the CPU is really a problem (renders/compilation), then the work gets offloaded to different hardware.


> Because a bigger number doesn’t translate to higher perceived performance…

When the numbers are that far apart, there is definitely room to perceive a performance improvement.

2011-era hardware is dramatically slower than what’s available in 2025. I occasionally go back and use a machine that is less than 10 years old, and it’s surprising how much less responsive it feels, even with a modern high-speed SSD installed.

Some people just aren’t sensitive to slow systems. Honestly a great place to be because it’s much cheaper that way. However, there is definitely a speed difference between a 2011 system and a 2025 system.


Choice of things like desktop environments matters a lot. I’m using Xfce or LXDE or something (I can’t tell without checking top), and responsiveness for most stuff is identical between 2010 Intel and a Ryzen 9.

The big exceptions are things like “apt-get upgrade”, but both boxes bottleneck on Starlink for that. Modern games and compilation are the other obvious things.


> The big exceptions are things like…

> Modern games and compilation are the other obvious things.

I mean, if we exempt all of the CPU-intensive things, then the speed of your CPU doesn’t matter.

I don’t have a fast CPU for the low-overhead things, though. I buy one because the speedup when I run commands or compile my code adds up when I’m doing 100 or 1000 little CPU-intensive tasks during the day. A few seconds or minutes saved here and there adds up, multiplied by 100 or 1000 times per day. Multiply that by 200 working days per year and the value of upgrading a CPU (after over a decade) is very high on the list of ways you can buy more time. I don’t care so much about something rendering in 1 frame instead of 2 frames, but when I type a command and have to wait idly for it to complete, that’s just lost time.


Believe it or not, "good enough" often is good enough. Regardless of how big the numbers are.


The comment claimed there wasn’t a perceivable difference.

That’s different from acknowledging that newer hardware is faster but deciding current hardware is fast enough.


Especially on a single core, everything is painfully slow. I tried to install Linux on a PPC iMac G5 five years ago and had to admit that it was never going to be a good experience, even for basic usage.


Agreed that if you’re not using NVMe (for example), that non-CPU upgrade will translate into the biggest perceived benefit.

Then again, not many Sandy Bridge mobos supported NVMe.


I did get a PCI Express to M.2 adapter and installed an NVMe drive.

That was indeed the biggest upgrade ever.


I went from a Sandy Bridge (i5 2500k) to a Ryzen 9 3950x, and the perceived performance improvement was insane. You also have to take into account RAM and PCIe generation bumps, NVMe, etc.


I’ll take the fight on algorithmic complexity any day.

There are many cases where O(n^2) will beat O(n).

Utilising the hardware can make a bigger difference than algorithmic complexity in many cases.

Vectorised code on linear memory vs unvectorised code on data scattered around the heap.


I sincerely hope you are joking...


Big-O notation drops the coefficient; sometimes that coefficient is massive enough that O(N) only beats O(N^2) at billions of iterations.

Premature optimisation is a massive issue; spending days on finding a better algorithm is often not worth the time spent, since the worse algorithm was plenty good enough.

The real world beats algorithmic complexity many, many times: you spend ages building a complex data structure with heap allocations scattered all over the place to get O(N), while it's significantly faster to just do the stupid thing that sits in linear memory.
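
A contrived C++ sketch of what I mean (hypothetical; the crossover point depends entirely on n and your hardware): membership tests against a small flat vector are O(n) each, yet they often beat an O(1)-per-lookup hash set because the scan is over contiguous memory with no hashing, pointer chasing, or extra allocations.

    #include <algorithm>
    #include <unordered_set>
    #include <vector>

    // "Stupid thing in linear memory": O(n) per query over one contiguous buffer.
    bool contains_linear(const std::vector<int>& v, int x) {
        return std::find(v.begin(), v.end(), x) != v.end();
    }

    // "Clever" structure: O(1) average per query, but hashing plus heap buckets.
    bool contains_hashed(const std::unordered_set<int>& s, int x) {
        return s.count(x) != 0;
    }

For small collections the linear scan routinely wins despite the worse big-O; measure on your own data before trusting either claim.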


AES67 is the open standard; Ravenna and Dante are extensions/alterations of it (well, Dante preceded AES67, but it can run in an AES67 compatibility mode), and none of them require FPGAs. They usually use FPGAs to keep latency very low, but they work just fine with any network card that supports PTPv2, and in Dante’s case it’s not even that strict.

Go grab the Ravenna docs; they’re pretty close to the spec for AES67, with added details for how to communicate metadata. You will find that it, SMPTE 2110 and the like are all built on top of existing standards (RTP, PTP, amongst others); even AVB, which has much stricter latency requirements, is the same. These aren’t complex proprietary standards; they are standards which just specify restrictions on, and interactions between, other standards.

What I’m getting at is that Klark Teknik and Behringer refusing to use these standards as their interconnect makes them the industry outlier; the only other example in this discussion which still has relevance is Allen and Heath, and they now do actually support Dante stage boxes on their models.

Twinlan and the other examples were never the only options; DiGiCo and Soundcraft support MADI by default, and Yamaha effectively spurred Dante into existence in the live industry. Their proprietary protocols are there to solve problems that cannot be solved with the standard interconnect, usually latency or channel count or both.


> These aren’t complex proprietary standards,

They aren't proprietary, but they are complex standards, and sometimes somewhat incomplete. AES67, for instance, is somewhat crippled by not having a control plane (for that you need AES70, not well supported, or various NMOS standards). AVB requires special network switches.

Behringer doesn't really refuse to support standards. They offer Dante and MADI cards for their X32 and Wing mixers. They have AES50 built in as well. I think StageConnect and UltraNet are intended to be a cheaper/simpler/more limited alternative for people who don't need the full physical range and flexibility of Dante.


I don’t know how they do it, but in general the big thing you need to think about is how to handle reactivity.

There is no reason you cannot implement MVVM/MVC or whatever else in plain JS for reactivity; it’s just that React et al. abstract that away for you.

You are effectively choosing some method of implementing and abstracting the observer pattern, plus handling cleanup, since you can easily get into a situation where you have dangling references to objects that are never cleaned up (memory leaks), or pretty nasty GC hangs which can make the UI feel bad to use.

