I think you would need a complicated set of metrics to claim something like "improved" that wasn't caveated to death. An immediate conflict: total number of articles vs. impressions of articles flagged with POV bias. If both go up, has the site improved?
I find I trust Wikipedia less these days, though still more than LLM output.
I can't think of a better accidental metric than that!
I'll go ahead and speculate that the number of incoherent sentences per article has gone down substantially over the last decade, probably due to the relevant tooling getting better over the same period.
I'm pretty sure the point parent was trying to make is that you can't get other people to leave their phones at home and there is very little recourse if a private citizen decides to record you without your consent from their phone in a public space. There's of course a difference in the powers involved, but people have had their lives ruined because somebody captured a video of them out of context or in their worst moment.
More crazy than the Cold War? The World Wars? The great flu pandemic? The Chinese Civil Wars? The European Wars of Religion? The Black Death? The Mongol Invasions?
I can't think of a single year in human history when the world wasn't crazy (maybe with the exception of a couple of years in the late 1990s).
I don't understand why the Baby Boomers are the ones that get blamed for restrictive zoning. The oldest were in their early 30s and the youngest couldn't even vote during the peak of single-family zoning activity in the 1970s.
By this logic, you'd also have to blame Gen X and Millennials, as they've all been voting for at least a decade (and together have outnumbered Baby Boomers for several), but single-family zoning continues to persist.
I, a Millennial with a very active voter record, increasingly do. What seems to be the problem? Perhaps I should point out that representatives still must bring issues to bear, and the average age of representatives has only very recently dipped into even high Gen X territory. Regardless, if your goal is to spread a little of the recent blame around, by all means, don't let me stop you. We should do something about it.
So I guess it’s the desegregation advocates who are technically at fault? Like that’s when people started using sneaky tricks to maintain segregation. At least before that it was above board and didn’t distort all these other basically unrelated areas (like zoning, and school district “preferences”, etc).
Because they’re the ones who’ve been fighting to keep it. I don’t give a shit what happened in the 70s before I was born. I’ve been watching boomers fight for low density zoning since the 90s.
New technology (cheap sensors, machine intelligence models) is already providing law enforcement with a wide array of new tools for identifying and building a legal case against people committing crimes. I don't see any reason to believe the law will somehow become unenforceable without gimping encrypted communications.
As with many things, the degree matters. It is an imposition on your liberty to require identification when boarding an airplane, and an imposition on your liberty to ban everyone from flying altogether. But one clearly restricts your liberty more than the other. When choosing between different solutions to a problem, I think picking the one that limits your freedom the least is a reasonable rule of thumb.
AI should help people achieve their ultimate goals, not their proximate goals. We want it to provide advice on how to alleviate their suffering, not how to kill themselves painlessly. This holds true even for subjects less fraught than suicide.
I don't want a bot that blindly answers my questions; I want it to intuit my end goal and guide me towards it. For example, if I ask it how to write a bubblesort script to alphabetize my movie collection, I want it to suggest that maybe that's not the most efficient algorithm for my purposes, and ask me if I would like some advice on implementing quicksort instead.
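To make the example above concrete, here is a minimal sketch (the movie titles and function name are my own illustration, not from the original comment) contrasting a hand-rolled bubble sort with Python's built-in sort, which is the kind of alternative a goal-aware assistant might steer you toward:

```python
def bubble_sort(items):
    """O(n^2) bubble sort: fine for a shelf of DVDs, wasteful at scale."""
    items = list(items)  # work on a copy; don't mutate the caller's list
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

movies = ["Solaris", "Alien", "Metropolis", "Brazil"]

# For a real collection, the built-in sorted() (Timsort, O(n log n))
# gives the same alphabetized result with far less code:
assert bubble_sort(movies) == sorted(movies)
print(sorted(movies))  # ['Alien', 'Brazil', 'Metropolis', 'Solaris']
```

A helpful assistant would ideally notice that the end goal is "an alphabetized collection," not "a bubblesort script," and suggest the built-in sort before writing either.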
I agree. I also think this ties in with personalization: being able to understand people's long-term goals. The current personalization efforts of models feel like more of a hack than what they should be.
I could be mistaken, but my understanding was that the people most likely to interact with the suicidal or near-suicidal (e.g., 988 suicide hotline attendants) aren't actually mental health professionals; most of them are volunteers. The script they run through is fairly rote and by the numbers (the Question, Persuade, Refer framework). Ultimately, of course, a successful intervention will result in the person seeing a professional for long-term support and recovery, but preventing a suicide and directing someone to that provider seems well within the capabilities of an LLM like ChatGPT or Claude.
Outdoor lighting is a lot cheaper now than it was in the 1970s. I think we can give it another shot after 50 years. And it's worth pointing out that Arizona has gone without DST for the last 50 years and seems to be doing fine.
Interestingly, part of the UK approach then was to make street lighting more efficient: around that time, a lot of low-pressure sodium lamps were installed. They used so little energy that they were only beaten for efficiency by LEDs in this decade, but the monochromatic yellow light was seen as unacceptable by some countries, which continued to use inefficient high-pressure mercury and, later, high-pressure sodium.
I miss the humble SOX lamp, to be honest; they made night look like night rather than a poor approximation of day. They also had benefits for wildlife, much of which is insensitive to the 589 nm wavelength, as well as for astronomy, where the light is easily filtered out.
It wasn't the best definition of AGI, but I think if you had asked an interested layman five years ago whether a system that could pass the Turing test was AGI, they would have said yes.
When I was in college ~25 years ago, I took a class on the philosophy of AI. People had come up with a lot of weird ideas about AI, but there was one almost universal conclusion: that the Turing test is not a good test for intelligence.
The least weird objection was that the premise of the Turing test is unscientific. It sees "this system is intelligent" as a logical statement and seeks to prove or disprove it in an abstract model. But if you perform an experiment to determine if a real-world system is intelligent, the right conclusion for the system passing the test is that the system may be intelligent, but a different experiment might show that it's not.
Douglas Hofstadter wrote Gödel, Escher, Bach nearly 50 years ago; it won a Pulitzer Prize and the National Book Award, and it was featured in the popular press. It’s been on lots of college reading lists, and from 2007 online coursework for high school students was available from MIT. The FBI concluded that the 2001 anthrax scare was in part inspired by elements of the book, which was found in the attacker’s trash.
Anyone who’s wanted to engage with the theories and philosophy surrounding artificial intelligence has had plenty of material that gets fairly in-depth in asking and exploring these same questions. A lot of people seem to think this is all bleeding-edge novelty (at least, the underlying philosophical and academic ideas getting discussed in popular media), when in fact the industry is predicated on very old philosophy + decades-old established technology + relatively recent neuroscience + modern financial engineering.
That said, I don’t want to suggest a layperson is likely to have engaged with any of it, so I understand why this will be the first time a lot of people will have ever considered some of these questions. I imagine what I’m feeling is fairly common to anyone who’s got a very niche interest that blows up and becomes the topic of interest for the entire world.
I think there are probably some very interesting, as-yet undocumented phenomena occurring as a product of the unbelievably vast resources sunk into what is otherwise a fairly niche kind of utility (LLMs specifically, and machine learning more broadly). I’m optimistic that some very transformational technologies will come from it, although whether it will produce anything like “AGI”, or ever justify these levels of investment? Both seem rather unlikely.