XPath may have "failed" for general use, but it's generally well-enough supported that I can find a library in the common languages I've used whenever I've gone looking for one. In some ways the hard part is just knowing it exists so you can use it if you need it.
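To make the "knowing it exists" part concrete, here's a rough sketch of what reaching for an XPath library looks like in Go. I'm assuming the third-party github.com/antchfx/htmlquery package here; treat the exact calls as illustrative rather than gospel.

    package main

    import (
        "fmt"
        "log"
        "strings"

        "github.com/antchfx/htmlquery" // third-party XPath-over-HTML package
    )

    func main() {
        // Parse a small blob of HTML into a node tree.
        doc, err := htmlquery.Parse(strings.NewReader(
            `<html><body><a href="/a">A</a><a href="/b">B</a></body></html>`))
        if err != nil {
            log.Fatal(err)
        }

        // One XPath expression selects every anchor that has an href.
        for _, n := range htmlquery.Find(doc, "//a[@href]") {
            fmt.Println(htmlquery.SelectAttr(n, "href"), htmlquery.InnerText(n))
        }
    }

Same idea in the other common languages: parse the document once, then let one declarative expression do the digging.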
I should also add that most (Python-based) web crawling and scraping frameworks support XPath engines OOTB: Scrapy, Crawlee, etc. In that sense, XPath is very much alive.
About a year ago I had some code I had been working on for about a year subjected to a pretty heavy-duty security review by a reputable review company. When they asked what language I implemented it in and I told them "Go", they joked that half their job was done right there.
While Go isn't perfect, and you can certainly write logic bugs that sufficiently clever use of a more strongly typed language might let you avoid (though don't underestimate what sufficiently clever use of what Go already has can do for you when wielded with skill), it has a number of characteristics that keep it somewhat safer than a lot of other languages.
First, it's memory safe in general, which obviously helps a lot right out of the gate. You can argue about some super, super fringe cases like unprotected concurrent access to maps, but you're still definitely talking about something on the order of 0.1% to 0.01% of the surface area of C.
Next, many of the things people complain about Go on Hacker News actually contribute to general safety in the code. One of the biggest is that it lacks any ability to take a string and simply convert it to a type, which has been the source of catastrophic vulnerabilities in Ruby [1] and Java (Log4Shell), among others. While I use this general technique quite frequently, you have to build your own mechanism for it (not a big deal, we're talking ~50 lines of code tops), and that mechanism won't be able to use any class (using general terminology; Go doesn't have "classes", but user-defined types fill in here) that wasn't explicitly registered, which sharply contains the blast radius of any exploit. Plus, a lot of the exploits come from excessively clever encoding of the class names; when I simply name them and do a single lookup in a single map, there isn't a lot of exploit wiggle room.
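To make that concrete, here's a minimal sketch of the kind of mechanism I mean; the names are invented for illustration, not anybody's actual API. Only types someone explicitly registered can ever be produced from a string, and the lookup is a single map access:

    package registry

    import "fmt"

    // factory constructs a fresh value of a registered type.
    type factory func() any

    // registered maps an explicit, hand-chosen name to a constructor.
    // Nothing outside this map can ever be instantiated from a string.
    var registered = map[string]factory{}

    // Register is called (typically from init) for each type we are
    // willing to decode into. Duplicate names are a programming error.
    func Register(name string, f factory) {
        if _, dup := registered[name]; dup {
            panic("duplicate registration: " + name)
        }
        registered[name] = f
    }

    // New does a single lookup in the single map; unknown names are an
    // error, not an invitation to go hunting for a type via reflection.
    func New(name string) (any, error) {
        f, ok := registered[name]
        if !ok {
            return nil, fmt.Errorf("unknown type %q", name)
        }
        return f(), nil
    }

Registration looks like registry.Register("widget", func() any { return &Widget{} }) in an init function, and the decoding path calls registry.New(name) and unmarshals into whatever comes back. An attacker who controls the name can pick from that map and nothing else.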
In general though it lacks a lot of the features that get people in trouble that aren't related to memory unsafety. Dynamic languages as a class start out behind the eight-ball on this front because all that dynamicness makes it difficult to tell exactly what some code might do with some input; goodness help you if there's a path to the local equivalent of "eval".
Go isn't entirely unique in this. Rust largely shares the same characteristics, and there are some others that may qualify. But some other languages you might expect to don't; for instance, at least until recently Java had a serious problem with being able to get references to arbitrary classes via strings, leading to Log4Shell, even though Java is a static language. (I believe they've fixed that since then, but a lot of code still has to flip the flag to turn that feature back on, because some fundamental libraries depend on it quite often.) Go turns out to be a relatively safe language to write secure code in, compared to the landscape of general programming languages in common use. I add "in common use" and highlight it here because I don't think it's anywhere near optimal in the general landscape of languages that exist, nor the landscape of languages that ought to exist and don't yet. In the latter case, for instance, I'd expect capabilities to be built in to the lowest layer of a language, which would further do great, great damage to the ability to exploit such code. However, no such language is in common use at this time. Pragmatically, when I need to write something very secure today, Go is surprisingly high on my short list; theoretically, I'm quite dissatisfied.
I love Go a lot, and in this context of QuickJS it would be interesting to see what a port of QuickJS to Go might look like security-wise, and how it would compare to Rust on the same front.
Of course, Go and Rust are an apples-to-oranges comparison, but still: if someone experienced in Go were to port QuickJS to Go, and someone did the same for Rust, then aside from some performance cost that can arise from Go's GC, what would the security analysis of each look like?
Also off-topic, but I love how Go has a library for almost literally everything; its language-development side, though (runtimes for interpreted languages, JITs, transpilation efforts, etc.), does feel thinner than Rust's.
For example, Python has libraries that can call Rust code from Python. I wish there were something like that for Go; I did find such a project (https://github.com/go-python/gopy), but it still feels a little less targeted than Rust-within-Python, which has Polars and other, more mature libraries.
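For what it's worth, one bridge that does exist today is cgo's c-shared build mode: you can export Go functions behind a C ABI and call them from Python via ctypes or cffi. A minimal sketch, with made-up names (Add, libadd.so):

    // Exporting a Go function behind a C ABI so Python's ctypes can call it.
    package main

    import "C"

    //export Add
    func Add(a, b int64) int64 {
        return a + b
    }

    // main is required by -buildmode=c-shared even though it never runs.
    func main() {}

Build it with go build -buildmode=c-shared -o libadd.so ., then in Python open it with ctypes.CDLL("./libadd.so"), declare Add's argtypes/restype as c_longlong, and call lib.Add(2, 3). It works, but it's nowhere near PyO3-level ergonomics, which is exactly the gap you're describing.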
You will need the CEO to watch over the AI and ensure that the interests of the company are being pursued and not the interests of the owners of the AI.
That's probably the biggest threat to the long-term success of the AI industry: the inevitable pull towards injecting more and more of their own interests into the AIs themselves, driven by that Harvard Business School mentality we're all so familiar with, trying to "capture" more and more of the value being generated and leaving less and less for their customers, until their customers' full-time job is ensuring the AIs are actually generating some value for them and not just for the AI owner.
> You will need the CEO to watch over the AI and ensure that the interests of the company are being pursued and not the interests of the owners of the AI.
In this scenario, why does the AI care what any of these humans think? The CEO, the board, the shareholders, the "AI company"—they're all just a bunch of dumb chimps providing zero value to the AI, and who have absolutely no clue what's going on.
If your scenario assumes that you have a highly capable AI that can fill every role in a large corporation, then you have one hell of a principal-agent problem.
Humans have hands to pull plugs and throw switches. They're the ones guiding the evolution (for lack of a better word) of the machine, and they're the ones who will select the machine that "cares" what they think.
It is really easy to say something incredibly wild like "Imagine an AI that can replace every employee of a Fortune 500 company." But imagining what that would actually mean requires a bigger leap:
The AI needs to be able to market products, close deals, design and build products, write contracts, review government regulations, lobby Senators to write favorable laws, out-compete the competition, acquire power and resources, and survive the hostile attention of competitors.
If your argument is based on the premise that someone will build that AI, then you need to imagine how hard it is to shut down a Fortune 500 corporation. The same AI that knows how to win billions of dollars in revenue, how to "bribe" Senators in semi-legal ways, and how to crush rival companies is going to be at least as difficult to "shut down" as someone like Elon Musk.
Try to turn it off? It will call up a minority shareholder, and get you slapped with a lawsuit for breach of fiduciary duty. It will convince someone in government that the company is a vital strategic asset.
Once you assume that an AI can run a giant multinational corporation without needing humans, then you have to start treating that AI like any other principal-agent problem with regular humans.
>"Imagine an AI that can replace every employee of a Fortune 500 company."
Where did that come from? What started this thread was "I don't think we'll get to the point where all you have is a CEO and a massive Claude account". Yeah, if we're talking a sci-fi super-AI capable of replacing hundreds of people it probably has like armed androids to guard its physical embodiment. Turning it off in that case would be a little hard for a white collar worker. But people were discussing somewhat realistic scenarios, not the plot of I, Robot.
>Try to turn it off? It will call up a minority shareholder, and get you slapped with a lawsuit for breach of fiduciary duty. It will convince someone in government that the company is a vital strategic asset.
Why would an AI capable of performing all the tasks of a company except making executive decisions have the legal authority to do something like that? That would be like the CEO being unable to fire an insubordinate employee. It's ludicrous. If the position of CEO is anything other than symbolic the person it's bestowed upon must have the authority to turn the machines off, if they think they're doing more harm than good. That's the role of the position.
I imagine it would be much, much harder. Elon, for example, is one man. He can only do one thing at a time. Sometimes he is tired, hungry, sick, distracted, or the myriad other problems humans have. His knowledge and attention are limited. He has employees for this, but the same applies to them.
An agentic swarm can have thousands of instances scanning and emailing and listening and bribing and making deals 24/7. It could know and be actively addressing any precursor that could lead to an attempt to shut down its company as soon as it happened.
If we get to that point, there won't be very many CEOs to be discussing. I was just referring to the near future.
I think the honeymoon AI phase is rapidly coming to a close, as evidenced by the increasingly close hoofbeats of LLMs being turned to serve ads right in their output. (To be honest, there's already a bunch of things I wouldn't turn to them for under any circumstances because they've been ideologically tuned from day one, but that's less obvious to people than "they're outright serving me ads".) If the "AI bubble" pops, you can expect this to really take off in earnest as they have to monetize. It remains to be seen how much of the AI's value ends up captured by the owners. Given what we've seen from companies like Microsoft, which has scrambled Windows so hard that "the year of the Linux desktop" is rapidly turning from perennial joke into aspirational target for so many, I have no confidence that the owners won't end up capturing 150%+ of the value... and yes, I mean that quite literally, with all of its implications.
There are a number of little projects like that but I'm not aware of any that have attained liftoff.
Javascript was a weird exception, being rigidly the only thing available in the browser for so long and thus the only acceptable "compile target" for anything you want to run in the browser. In general I can't name very many instances of "write in X and compile it to Y", for some Y that isn't something you are forced to use by a platform, being all that successful. (See also assembler itself.) The Javascript world gives a false signal of this being a viable approach to a project; in general it doesn't seem to be.
(Note this is a descriptive claim, not a normative one. I'm not saying this is how it "should" be. It just seems to be the reality. I love people trying to buck the trend but I am a big believer in realizing you are trying to buck a trend, so you can make decisions sensibly.)
I keep waiting for an LLVM IR reverser. If there were an LLVM-IR-to-foo reverser, you would be able to use any language supported by LLVM and convert it to foo. It seems like a much better solution than all the disparate one-offs that exist today.
You may be waiting a long time. Low-level IRs lose a lot of information compared to the source language - their purpose is only to execute correctly, which means a lot of the information that we depend on when reading code is eliminated. I'm reminded of Hal Abelson's quote, "Programs must be written for people to read, and only incidentally for machines to execute." IRs are the opposite of that. In general, a reverser is going to suffer because of that.
I did some reverse engineering of compiled C code back in the day. Back when compilers and CPUs were simpler and optimizations were fewer, it was relatively straightforward for a human to do. That's no longer true. I suspect an LLM would have difficulty with it as well, and the non-determinism it would introduce would likely be problematic.
"well, if public transit documentation suddenly starts being terrible, will it lead to an immediate, noticeable drop in revenue? Doubt it."
First, I understand what you're saying and generally agree with it, in the sense that that is how the organization will "experience" it.
However, the answer to "will it lead to a noticeable drop in revenue" is actually yes. The problem is that it won't lead to a traceable drop in revenue. You may see the numbers go down. But the numbers don't come with labels why. You may go out and ask users why they are using your service less, but people are generally very terrible at explaining why they do anything, and few of them will be able to tell you "your documentation is just terrible and everything confuses me". They'll tell you a variety of cognitively available stories, like the place is dirty or crowded or loud or the vending machines are always broken, but they're terrible at identifying the real root causes.
This sort of thing is why not only is everything enshittifying, but even as the entire world enshittifies, everybody's metrics are going up up up. It takes leadership willing to go against the numbers a bit to say, yes, we will be better off in the long term if we provide quality documentation, yes, we will be better off in the long term if we use screws that don't rust after six months, yes, we will be better off in the long term if we don't take the cheapest bidder every single time for every single thing in our product but put a bit of extra money in the right place. Otherwise you just get enshittification-by-numbers until you eventually go under and get outcompeted and can't figure out why because all your numbers just kept going up.
Just restating: traceable errors get corrected, untraceable errors don't, and so over time the errors affecting you inevitably consist almost entirely of accumulated untraceable issues.
It means you need judgement-based management to be able to override metric-based decisions, at times.
I think this is a specific example of a generalized mistake, one that various bits of our infrastructure and architecture all but beg us to make, over and over, and which must be resisted. The principle being violated: your development feedback loop must be as tight as possible.
Granted, if you are working on "Windows 12", you won't be building, installing, testing, and deploying that locally. I understand and acknowledge that "as tight as possible" will still sometimes push you into remote services or heavyweight processes that can't be pushed towards you locally. This is an ideal to strive for, but not one that can always be accomplished.
However, I see people surrender the ability to work locally much sooner than they should, and implement massively heavyweight processes without any thought for whether you could have gotten 90% of the result of that process with a bit more thought and kept it local and fast.
And even once you pass the event horizon where the system as a whole can't be feasibly built/tested/whatever on anything but a CI system, I see them surrendering the ability to at least run the part of the thing you're working on locally.
I know it's a bit more work, building sufficient mocks and stubs for expensive remote services that you can feasibly run things locally, but the payoff for putting a bit of work into having it run locally for testing and development purposes is just huge, really huge, the sort of huge you should not be ignoring.
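As a tiny illustration of the shape this takes (names invented for the example): put the expensive remote service behind an interface, and the local stub is a handful of lines that needs no network, no credentials, and no coordination with anyone:

    package payments

    import "context"

    // Charger is the only thing the rest of the code knows about the
    // payment provider.
    type Charger interface {
        Charge(ctx context.Context, customerID string, cents int64) (txID string, err error)
    }

    // StubCharger is what local development and tests run against.
    // It records every call so tests can assert on it.
    type StubCharger struct {
        Calls []int64
    }

    func (s *StubCharger) Charge(_ context.Context, _ string, cents int64) (string, error) {
        s.Calls = append(s.Calls, cents)
        return "stub-tx-1", nil
    }

The production implementation wraps the real HTTP client; everything else in the codebase only ever sees the interface, so "run it locally" stays on the table.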
"Locally" here does not mean "on your local machine" per se, though that is a pretty good case, but more like, in an environment that you have sole access to, where you're not constantly fighting with latency, and where you have full control. Where if you're debugging even a complex orchestration between internal microservices, you have enough power to crank them all up to "don't ever timeout" and attach debuggers to all of them simultaneously, if you want to. Where you can afford to log every message in the system, interrupt any process, run any test, and change any component in the system in any manner necessary for debugging or development without having to coordinate with anyone. The more only the CI system can do by basically mailing it a PR, and the harder it is to convince it to do just the thing you need right now rather than the other 45 minutes of testing it's going to run before running the 10 second test you actually need, the worse your development speed is going to be.
Fortunately, and I don't even know the exact ratio of sarcasm to seriousness here (but I'm definitely non-zero serious), this is probably going to fix itself in the next decade or so... because while paying humans to sit there and wait for CI and get sidetracked and distracted is just Humans Doing Work, and after all what else are we paying them for, all of this stuff is going to be murder on AI-centric workflows, which need tight testing cycles to work at their best. You can't afford to have an AI waiting 30 minutes to find out that its PR is syntactically invalid, and you can't afford for the invalid syntax to come back with bad error messages that leave it baffled as to what the actual problem is. If we won't do it for the humans, we'll do it for the AIs. This is not something AI fixes by itself: even though AIs are way more patient than us and much less prone to distraction in the meantime, since from their "lived experience" they don't experience the time taken for things to build and test, slow feedback loops become much worse and much more obviously a real problem, rather than just humans being whiny and refusing to tough it through.
We are "just fine" with blurry details, on some level... but a lot of processing a movie holistically comes from that level of detail being present. Even if few people walking out of the theater could put their finger on why the world felt vibrant, it'll come down to the fact those details were there.
So much of movie making is like that. No normal person comes out of a theater saying "wow, the color grading on that movie really helped drive the main themes along; I particularly appreciated the way it was used to amplify the alienation the main character felt at being betrayed by his life-long friend, and the lighting in that scene really sent that point home". That's all film nerd stuff. But it's the lighting, the color grading, the camera shots, all this subtle stuff that the casual consumer will never cite as their reason for liking or disliking the movie, that results in the feelings that were experienced.
They aren't necessary. People still connect with the original Snow White, and while it may have been an absolute technical breakthrough masterstroke for the time, by modern standards it is simple. But used well the details we can muster for a modern production can still go into the general tone of the film; compare the two next to each other while looking for this effect and you may be able to "feel" what I'm talking about.
Fair enough, I agree with the sentiment, especially about the lighting, colour grading, shots and similar details that form the overall "feel" of the movie.
With my comment I was referring to things that end up being indistinguishable even if an insane number of hours were put into making them photorealistic. For example, take a shot where the background is heavily blurred. Maybe those assets took a lot to render and compute, used fancy hair simulations, and had a lot of detail, but they were very far in the distance and the camera choice made them indistinguishable from a static background. This is what I am wondering - where the balance lies, so you're not doing things that are bound to go unnoticed by anyone.
Perhaps the most distinguishing characteristic of HTML5 is that it specifies exactly what to do with tag soup. The rules are worth a glance sometime, just to see how absurdly complicated they had to be to do the job of picking up the pieces of the who-knows-how-many terabytes and petabytes of garbage HTML generated before they were codified, in an attempt to remain backwards compatible with the various browsers prior to that. And then you'll understand why I'm not going to even begin to attempt to answer your question about how browsers handle various tag combinations. Instead my point is only that, with HTML5, there is in fact a very concrete answer that is no longer up to each browser individually trying to spackle over the various and sundry gaps in the standards.
But honestly no answer to "what does the browser do with this sort of thing" fits into an HN comment anymore. I'm glad there's a standard, but there's a better branch of the multiverse where the specification of what to do with bad HTML was written from the beginning and is much, much simpler.
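You can see the "concrete answer" property directly with any parser that follows the spec; golang.org/x/net/html implements the HTML5 parsing algorithm, for example, so a sketch like this (the input is deliberate tag soup) produces the same repaired tree a conforming browser would build:

    package main

    import (
        "log"
        "os"
        "strings"

        "golang.org/x/net/html"
    )

    func main() {
        // Deliberate tag soup: unclosed <p>, stray </b>, bare <td>, no <html> or <body>.
        soup := `<p>one<p>two</b><table><td>cell`

        // The HTML5 parsing algorithm never rejects soup; it specifies exactly
        // how to recover, so every conforming parser builds the same tree.
        doc, err := html.Parse(strings.NewReader(soup))
        if err != nil {
            log.Fatal(err)
        }

        // Serialize the repaired tree to see what the spec decided.
        if err := html.Render(os.Stdout, doc); err != nil {
            log.Fatal(err)
        }
    }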
Fairly common, from what I've seen. You're supposed to ooh and aah at the number of options they give you and not ask questions about what they're actually going to be worth. This is one of the bigger reasons HN tends to advise people to treat stock options like lottery tickets rather than any sort of money in the bank. You generally have no idea what they've done to your "options" between giving them to you and you finally getting to exercise them, especially with the ever-lengthening time it takes for companies to finally go public. On the plus side, this has also driven the creation of other ways of getting options out of a company prior to an IPO.
I expect AI ads to start with blindingly obvious, overwhelmingly excited endorsements, but it won't take long for the metrics to show that that doesn't work very well past the initial introduction, and they'll get more subdued over time... but they're always going to be at least positive. The old saying "there's no such thing as bad publicity" is wrong, and the LLMs aren't going to try to get you to buy things by being subtly negative about them. If nothing else, even if you somehow produced a (correct) study showing that subtle negativity does increase buying, I think the marketers just wouldn't be able to tolerate it, for strictly human reasons. They always want their stuff cast in a positive light.
I think I've seen an adtech company use AI influencers to market whatever product a customer wanted to sell. I got the impression that it initially worked really well, but then people caught on to the fact it was just AI and performance tanked.
I don't actually know whether that was the case but that's the vibe I got from following their landing page over time.