This is a bit like saying everyone would be a bit less jaded if keeping the plane in the air wasn't hung over the Boeing 737 MAX 8 designers' heads by certain communities and used as an existential threat to the company.


Commenting from the sidelines:

Doesn't modern C++ offer the ability to write memory safe code? The primary difference from Rust, on this front, is that Rust is memory safe by default (and allows the developer to override memory safety with code explicitly declared as unsafe), while C++ requires the developer to make a conscious effort to avoid unsafe code (without providing facilities to declare code as safe or unsafe). While this means that C++ is problematic, it does not make Rust automagically safer, particularly when interfacing with C/C++ code (as would be the case when interfacing with Linux syscalls or most C or C++ based libraries).

I guess what I'm saying is that Rust is great when dealing exclusively with Rust libraries, since the code is either memory safe by default or easier to audit (because it is explicitly, or at least implicitly, declared unsafe). On the other hand, it is not guaranteed to be memory safe. While this may sound like nitpicking, the distinction is important from the perspective of the end user, who is unlikely to ever audit the code yet may be swayed by being told that it is written in a memory safe language.
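
To make the FFI point concrete, this is roughly what that boundary looks like; a minimal sketch, hand-declaring a libc function rather than pulling in the libc crate:

    // Hand-declared binding to a C function; normally you'd use the `libc` crate.
    // (On the Rust 2024 edition this block would need to be `unsafe extern "C"`.)
    extern "C" {
        fn getpid() -> i32;
    }

    fn main() {
        // The compiler refuses to let you call across the FFI boundary outside
        // an `unsafe` block, so every such call site is easy to grep for in an audit.
        let pid = unsafe { getpid() };
        println!("pid = {pid}");
    }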


> C++ requires the developer to make a conscious effort to avoid unsafe code

The problem is much worse than how you put it. I've written C++ for more than a decade and it's painful to constantly have to worry about memory safety due to the enormous complexity of the language. Even if you are super diligent, you will make a mistake and it will bite you when you least expect it. Relying on clang-tidy, non-default compiler flags, sanitizers and other tools is not only not enough, but a constant source of headaches about how to integrate them with your build system and project requirements.


Admittedly, I am more of a hobbyist when it comes to C++ development. I try to keep track of things, but I started learning the language before it was standardized and I switched to other languages shortly after it was standardized (never mind the introduction of memory-safe options in the standard library, which occurred in the 2000s). That said, memory safety has been a consideration, and a feature, for nearly 20 years now, so people should have been taught how to approach it for nearly as long. Sure, you can break the rules. Sure, anyone working with older code would have been exposed to bad code. Yet it shouldn't be a universal problem unless people are deliberately seeking out shortcuts (since writing memory safe code in C++ is messier than writing unsafe code).


All languages offer the ability to write memory-safe code. It's just that doing so is very difficult in C and C++. The benefit of Rust isn't really the assurance of safety that's provided by not using the `unsafe` keyword. After all, pretty much all Rust programs do use the `unsafe` keyword. The benefit of Rust is a combination of many design decisions that make it easy to write safe code.

For example, everyday operations in Rust almost all have defined behavior. In C++, by contrast, it is extremely easy to run into undefined behavior by accident and make your code do something bizarre. On the practical side, I have never once gotten a segfault writing Rust, but I have many times in C++.
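
A contrived illustration of what I mean: the same out-of-bounds index that is undefined behavior on a std::vector is a well-defined, bounds-checked panic in Rust.

    fn main() {
        let v = vec![1, 2, 3];
        // Indexing past the end is a well-defined panic here, never silent
        // corruption; the equivalent `v[3]` on a std::vector is undefined behavior.
        let x = v[3];
        println!("{x}");
    }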


I think one of the fundamental differences between the SOTA C++ approach to memory safety (e.g. extending unique_ptr/shared_ptr) and Rust is that C++ doesn't try to enforce having a single mutable reference to a variable and still relies on strict aliasing heuristics, and so cannot claim to be fully deterministic. Still, use-after-free and memory leaks should be impossible.

It'll still let you do a bunch of stuff Rust doesn't, and it's up to the programmer to decide whether that's good or not.
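
For comparison, this is the kind of aliasing Rust refuses to compile (error E0499) but C++ happily accepts; just a sketch of the rule, not a real program:

    fn main() {
        let mut v = vec![1, 2, 3];
        let a = &mut v;
        // error[E0499]: cannot borrow `v` as mutable more than once at a time
        let b = &mut v;
        a.push(4);
        b.push(5);
    }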


> Still, use-after-free and memory leaks should be impossible.

Use-after-free is still possible in modern C++ via std::span/std::string_view/etc. outliving the backing object.
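
For contrast, the direct Rust translation of that pattern (a &str outliving the String it borrows from) is rejected by the borrow checker, so it never reaches runtime. A sketch:

    fn main() {
        let view: &str;
        {
            let backing = String::from("temporary");
            // error[E0597]: `backing` does not live long enough
            view = &backing;
        }
        // In the string_view version, this line would read freed memory.
        println!("{view}");
    }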


> Doesn't modern C++ offer the ability to write memory safe code?

Can you name an example of a non-trivial C++ program that's memory safe? The only examples I can think of went to extraordinary lengths with formal methods and none of them are widely used.


A better analogy might be how Ada was hung over the heads of those using C and C++, to the point of Ada being mandated by law in certain niches.

And then Ada software caused the loss of US$370 million with Ariane 5.


> to the point of Ada being mandated by law in certain niches.

> And then Ada software caused the loss of US$370 million with Ariane 5.

This seems like a bit of a non-sequitur? Ariane is an EU rocket and the flight you're referring to was carrying an EU payload, and I don't think it was ever subject to the US DoD Ada mandate or an EU equivalent (which I'm not sure ever existed?).

(Also a bit of a nitpick, but I don't think the US DoD Ada Mandate was a law per se; it was a DoD policy and not something the US Congress passed).

It's probably somewhat disputable whether the Ariane failure was "due to Ada" or whether other higher-level concerns were responsible (e.g., reusing the Ariane 4 software without revalidation).


I don't think many (if any) programmers would imagine 100M lines of code to mean 100,000 lines of code and not 1,000,000...


What was your dissertation, and how would Nano-Banana-pro with Veo 3.1 have helped it?


I was working on semantic segmentation. I used to spend a long time creating graphics for presenting at conferences. I had a link showing the results, but people were saying I was sharing too many links, so I deleted it. But these tools, together with ChatGPT, can write a paper in a week, which used to take me 6 months.


> because a "bazooka" is easy defeated by the same ratchet mechanism

That's an argument for capitulation in general: it's not an argument specifically against extending the scope to include services.


Why isn't AI just another application that can be run on the device? Surely we expose the necessary interfaces through the OS and the application goes from there?


> Startups belong to the latter.

Except that Apple, Intel, Tesla, etc have all received US government investment [1]. TSMC is a product of the Taiwanese state! Government investment can be done well, and seeds excellent companies.

[1]: https://www.sba.gov/blog/2024/2024-02/white-house-sba-announ...


Denmark has a large hearing aid industry due to lots of government funding for hearing aids, and a large wind turbine industry due to funding for wind farms. So stimulating demand can work to build or strengthen an industry, but what Denmark and the EU are doing with GPUs is stimulating supply in Europe and demand in the US. I would be surprised if that didn't end up strengthening US industry rather than EU industry.


I wouldn't count the wind turbine industry in the same category as startups; energy production and infrastructure are classic government investment schemes.


It doesn't matter if government-funded startups have been successful. It's not the government's job to provide capital to high-risk ventures. They should provide public services for the people and regulate the private sector in the interest of the people.


Well, it depends on your definition of high risk. Basic research is definitely high risk, in that the return on investment may happen generations from now, if ever. I'd argue that funding basic research through universities etc. is part of the government's job.


And funding for basic research should be made so that the results belong to the public, for example through universities as you mentioned. Not through private startups.


I don't understand why your comment is downvoted.

The comment you're replying to is tainted by survivorship bias. We see the successful companies that got government funding, but not the failures. Maybe we'd have more innovation and competition without the government picking these specific winners.

Ironically, one of the companies you mentioned (Apple) now operates in an environment with very little competition and regularly faces antitrust claims.

Government picking winners may actually reduce competition in the long run. The key difference: when private money picks wrong, it's their loss. When government picks wrong, it's taxpayer money.


This sounds like a terrible approach to accounting. Surely large public companies should account for their expenditure at greater fidelity than billions.


Glad that every statement of personal opinion is now "propaganda"...


This isn't a subject that needs opinions. "Microsoft donated to a new ball room for the east wing of the White House." is a neutral, factual statement.


As is "Donald Trump ordered the demolition of the existing east wing of the White House, in order to construct a new ballroom on that site. Microsoft has funded this construction work."


You do know you can have high availability without using cloud providers? E.g. you run a second server in a different datacenter as a standby that can take over, etc.


I mean the (virtual) machine itself has these guarantees. You can set the entire rack on fire and your VM will continue to operate or else you're owed compensation for the SLA violation.

You can add redundant machines with a failover. You then need to calculate how likely the failover is to fail, how likely the machines are to fail, etc. How likely is the switch to fail. You need engineers with pager rotations to give 24 hour coverage, etc.

What I'm saying is that the cloud providers give you strong guarantees and this is reflected in their pricing. The guarantees apply to every service you consume because with independent failures, the probability of not failing is multiplicative. If you want to build a reliable system out of N components then you need to have bounds on the reliability of each of the components.
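
As a toy illustration of that multiplication (made-up numbers, not anyone's actual SLA):

    fn main() {
        // Three independent components, each 99.9% available.
        let per_component: f64 = 0.999;
        let composite = per_component.powi(3); // ~0.9970
        let hours_per_year = 24.0 * 365.0;
        println!(
            "composite ~ {:.4}, expected downtime ~ {:.0} h/year (vs ~ {:.0} h for one component)",
            composite,
            (1.0 - composite) * hours_per_year,
            (1.0 - per_component) * hours_per_year,
        );
    }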

Your business may not need this. Your side project almost certainly doesn't. But the money you save isn't free, it's compensation for taking on the increased risk of downtime, losing customer data, etc.

I would be interested to see a comparison of the costs of running a service on Hetzner with the same reliability guarantees as a corresponding cloud service. On the one hand we expect some cloud service markup for convenience. On the other hand they have economies of scale. So it's not obvious to me which is cheaper.


> What I'm saying is that the cloud providers give you strong guarantees and this is reflected in their pricing.

And yet, they go offline all the time.


just, and, and, and ...

If you need it, you soon wish the Lego blocks pulled IAM all the way through and worked with a common API.


In the previous post on the Fly.io blog (https://fly.io/blog/litestream-revamped/), a really cool thing was mentioned!

> In Litestream, we’re solving the problem a different way. Modern object stores like S3 and Tigris solve this problem for us: they now offer conditional write support. With conditional writes, we can implement a time-based lease. We get essentially the same constraint Consul gave us, but without having to think about it or set up a dependency.

Reading this blog post though, I couldn't see any reference to this. Is this supported in Litestream v0.5.0, or will it be for a future release?

