Something like 5% of the time when I pair my AirPods to my Apple Watch to go for a run, only one of them pairs. So if I've actually started running, I then have to circle back to get the headphone case, unpair them, stick them back in the case, and hope it works after I close the lid for a minute.
This is a great point. It would make a lot more sense simply to require a 25-foot easement along the lines of the checkerboard for unrestricted public access or a road. That would have the effect of forcing the ranchers to move their fences back ~12 feet.
In compensation, ranchers could be given the right to create structures or rights-of-way on those same easements to connect their diagonal pieces so as to make them more usable, as long as the public retains a reasonable right of access.
This situation honestly makes me wonder how the ranchers even use these squares, since they face the exact same access problem, just with the opposite corners.
I understood the situation here to be that the same private owner owned all of the private squares in this particular area. I would assume that most private owners won't be interested in buying squares deep in the checkerboard for access reasons.
Siri's awfulness really is a thing to behold. I haven't used an android phone in a while. For those users out there, does its voice assistant actually work?
The current situation on Google's Android Pixel phones is odd. The old non-LLM Google Assistant works well in a limited domain: things like setting alarms, phoning by name, etc. It's similar in scope to Siri, but with better voice recognition and better context awareness. However, Google is desperate to kill Google Assistant and force all Pixel users to use Gemini instead.

Gemini 3 is a very good LLM, and far, far, far more versatile than Google Assistant. But Gemini won't do the simple things as reliably as the old Assistant. Setting an alarm works maybe 90% of the time with Gemini. If you asked the old Assistant "What time is it?" it would respond "It's 4:40 PM". If you ask Gemini "What time is it?" it will sometimes respond "It's 4:40 PM CDT in {your city}", but sometimes it will say "It's four four zero Pee Em in {your city}", and sometimes it will do a web search. Results are spotty in other areas like voice dialing.

I've retained the old Assistant, because I want to do the basic things far more often than I want to verbally vibe code. But rumor has it Google is going to disable the old Assistant in March, forcing all users onto Gemini for voice commands. Unless Gemini gets much better at handling simple tasks by then, Pixel users will end up with a voice assistant much more frustrating than Siri.
Not to mention all the useless LLM fluff Gemini has. I turned off Gemini because simple questions like "What's 1 USD in AUD" would be met with 30 seconds of "As a large language model, I can't provide investment advice, so it's always important to check information yourself [...], but one Australian dollar is approximately $0.65" (note the conversion in the wrong direction). By comparison, Google Assistant just gives you a number straight away.
It's Gemini, so at least it's smartish and has some integration with the rest of the ecosystem, so it can do some assistant work as long as it's read-mostly, but integration with the rest of the phone is almost non-existent. It also struggles in noisy environments and in mixed-language situations.
Exactly. Looks like everybody's complaining that Siri isn't a better Ask Jeeves, when that's not the design goal. What people expect is an LLM that has full access to the phone. Nobody's even remotely close to shipping that.
My Siri-initiated timers are always done with my phone, probably 50 or more each week (work stuff). The only time I get a failure is when I release the side button too quickly. I've made certain the spoken feedback is enabled to reduce the risk of me making that mistake. (Settings > Siri > Siri Responses > Prefer Spoken Responses)
As for, "What time is it?"... Try activating Siri and only saying, "Time."
I suspect that's the main difference; if you're trying to use hands-free voice activation via "hey Siri" you get a much different experience than if you can touch the watch/phone to trigger Siri first.
And thinking back over it, more than half the failures are complete failures, i.e. it likely never activated at all. Very few are "it set a timer, but for the wrong time".
Good chance that's what captures our different Siri experiences. The few times I've done it spoken was always with AirPods, and I always waited for the Siri reply (been a while; is it, "Uh-huh"?) after I said, "Hey, Siri." But my experience activating Siri with speech is so minimal that I wouldn't trust it as a basis for anything broader.
I wonder if we are getting different versions based on geolocation (I'm in Europe), because my experience is the absolute opposite of this. I actually had the thought "maybe I should switch to Apple to stop having to deal with this" just this week (although from reading this thread, Siri sounds just as bad).
My experience is only through android auto and it honestly makes me furious how bad it is. There is absolutely no other tech product in my life that gets even close to how bad voice commands are handled in Android.
In my experience, literally everything sucks:
- single-language voice recognition (me speaking in English with an accent)
- multi-language voice recognition (English commands that include localised names from the country I'm in)
- action in context (understand what I'm actually asking it to do)
- supported actions (what it can actually do)
Some practical examples from just this week:
- I had to repeat three times that "no, I don't want to reply" because I made the mistake of getting Google to read a WhatsApp message while driving, and it got stuck in the "would you like to reply" loop (it almost always gets stuck - it's my go-to example to show people how bad it is)
- I asked it to queue a very specific playlist on Spotify, and it just couldn't get it right (no matter how specific my command was, I couldn't get it to play a playlist from MY account instead of an unrelated public playlist)
- I asked to add a song to a playlist, and it said it couldn't do that (at least it understood what I was asking? maybe)
And in general I gave up trying to use google maps through voice commands, because it's just not capable of understanding an English command if it contains a street/location name pronounced in the local language/accent.
If only it didn't have so many limitations imposed for safety or by permissioning. "Hey Google, call my wife." Thirty seconds later: "Something has gone wrong." Or "Hey Google, play _______ on YouTube Music." "Playing something else on YouTube Music." It's stupid.
I used to use it for that. A few months ago I got an Android system update and it no longer works for that. It just does web searches if I try. Now it's trying to push me into this thing where it takes a screenshot and tells me what's on the screen. I've never once cared about that.
Failing to find any way to get the alarm thing back, I turned off the entire assistant thing.
It works okay. I like that it's universal (the same assistant on my phone, on my home devices, in my car, in my earbuds). I like that it does tasks right, but you have to know how to phrase them (my most common is probably "remind me to X tomorrow at X time"). Setting alarms and timers, creating calendar events, asking about the weather on a specific day or in a specific place, asking how long it'll take to walk/drive somewhere -- all good. But anything more complicated than that and you get erratic behaviour. From what I've seen with my friends interacting with Siri, I'd say they're about equal in capability.
It works pretty well for me, but doesn't do nearly what I'd expect.
E.g. I can talk to it like I would ChatGPT and it works well. But I can't be like "hey, I want to get dinner with my wife on our anniversary, please book the best available option in my city for fine dining".
It's still way better than Siri, which feels like a voice CLI to me (same as Alexa, which is very low quality IME)
I don’t think I’d want to talk to a voice assistant like that. Maybe it’s a generational thing? Things like that are ambiguous enough discussing them with human beings and a big part of things like voice assistants is understanding how it’s going to interpret and execute a response based on what I say to it.
Could you share a bit about your use case/experience? Siri does what I need it to do— send messages, create reminders and calendar entries, look up basic facts and cites the source, play music, add things to lists, etc. I’m curious if you’re trying to do things that I haven’t, or if you’re just having a very different experience with those same things? Or maybe just have higher expectations for it?
Edit: why in God's name are people downvoting me for politely asking about someone's differing experience?
Ah I never felt inspired to use it on a computer and always use physical volume controls in the car and through headphones, so I wouldn’t have run into that. It does seem like something that should be a day-one sort of feature.
Not true for me at all, it fails at the most basic tasks, sometimes even at tasks it has done before. Three examples:
- "Timer 5 minutes" -> Loading spinner is shown. Siri disappears after a few seconds. No error, no confirmation. I then have to manually check if the timer was set or not (it was not).
- "Turn on the lights in the living room", to which it responds "Sorry, I cannot do that". I have Philips Hue lights that are connected to Apple Home; of course Siri can do that. It did that before.
- "Add toothpaste to my shopping list". The shopping list is a list I have in Reminders. It then tries to search for the query on Google. I then tried "Add toothpaste to the list 'shopping list' in Reminders", which worked, but if I have to be this wordy, it's no longer convenient.
There are many more simple cases in which Siri always / sometimes fails. I also have the feeling that it performs far worse if asked in my native language (German) than in English.
Yeah that’s strange. I set timers constantly both at home and work and I can’t recall a single time it hasn’t worked. I periodically add things to lists without issue. I have zero experience using it in another language. Maybe their testing sucked for that?
The very first thing I tried when Siri was released -- "set an alarm for ten minutes before sunset" -- still doesn't work. "What time is sunset?" and "Set an alarm for 5:03 PM" both worked on day one, and still work. Zero progress.
Interesting. I think I probably mentally separate the information retrieval realm and the command execution realm more than makes sense for the interface. There’s no apparent reason that shouldn’t work based on what the user is given.
The only time I find Siri useful, or I should say ~potentially~ useful, is while driving: texting, calling, and asking basic facts. The number of times I've heard "I can't show you that right now" after basic questions is insane. I just stopped asking it questions. Recently I asked "what engine is in a 2022 F-150". Trying it without CarPlay now, it literally just displays text. It should be able to TTS those results. What on earth have they been working on, if not things like that?
I know there at least used to be a setting to specify if you get a verbal or text response based on whether or not the phone is locked. Maybe that would get it to stop just displaying text?
I pretty much only use it when I can’t look at the phone so I’m not sure if it’s still there.
Interesting— I use that functionality constantly and listen to a wide variety of artists, some of them pretty obscure. Do you use Apple Music or another service?
I can think of one time recently where, no matter how I prompted it to play an album (decades old, but probably triple platinum), it kept playing some Cardi B song with the band's name in the title instead… but that's probably like a 1-in-2000-request problem. Maybe it's a genre thing?
It’s extremely hit or miss for me. Sometimes it works and I’m amazed. Other times it fails to play my main playlist that I’ve played 1000 times before.
Yea, I’m boggled. At this point Siri should be able to parse and understand a wide variety of forms of the same command, but it still seems to fail. This should be doable even without LLMs.
Yeah I can ask it to play specific editions of specific EPs named the same or similar thing to the albums or whatever and it rarely screws up. There’s got to be something significantly different in our approach. I wish I could test it.
‘Siri, turn on torch’. Used to work; now all I get is “sorry, Torch isn’t available right now”. This is at night, when the phone is plugged in and I need it to work as a nightlight to go open the bedroom door to let the dog in or out, without blasting myself awake with the main phaser array next to my bed.
There probably are some very large values whose trajectories go on forever. It's interesting how the structure of numbers must change at some point, seemingly just based on magnitude: at huge scales, some kind of density of structure related to factors builds up and eventually hits a critical point. An "island of forever", a Collatz island, above what we've tested. It would be cool to discover that. I wonder how big it is? Probably hundreds of millions of decimal digits. But it's likely dense enough that if you search randomly, you have a good chance of landing on the island.
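For what it's worth, here is a minimal Python sketch of what "testing" a number against the conjecture means in practice; the step cap is a hypothetical stand-in for "might go on forever", since no finite cutoff can actually prove divergence:

```python
def collatz_steps(n: int, max_steps: int = 100_000):
    """Count steps for n to reach 1 under the Collatz map (n/2 if even, 3n+1 if odd).

    Returns None if the trajectory exceeds max_steps -- a crude proxy for
    "possibly diverges", since a longer cutoff might still terminate.
    """
    steps = 0
    while n != 1:
        if steps >= max_steps:
            return None
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps
```

Every value checked to date returns a finite count (e.g. 27 famously takes 111 steps); a genuine "island of forever" would be a value for which no cutoff ever suffices, which is why exhaustive search alone can never confirm such an island, only fail to rule it out.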
I had to go on Youtube to listen to some of the music mentioned here, as I'm pretty out of the loop on it. Given what I heard I honestly think we're basically at the point where AI can generate equivalent or even better music. It's just very simple and doesn't feel particularly innovative or noteworthy.
Point being, I think it's likely this person is one of the last pop stars.
Actually, as I'm writing this, I realized that probably the music being produced by this person is actually done by a computer. So, maybe she's in the first wave of totally artificial pop stars.
> Actually, as I'm writing this, I realized that probably the music being produced by this person is actually done by a computer. So, maybe she's in the first wave of totally artificial pop stars.
Her main collaborator, co-creator and producer of many years is the artist AG Cook, who founded the label PC Music. He appears often in her music videos and gets mentioned in her lyrics. His own solo work plays a lot with pairing the artificial and the organic, taking the "slick" aesthetics of electronic pop to abrasive extremes and placing it next to vulnerability and gentleness.
It makes me sad to think you have formed this opinion on her more-than-decade-long career, which spans a variety of genres and many collaborations, based on a few Brat songs you may have listened to.
Music is a subjective thing, but what I like about Charli XCX is that her albums have a completely different sound from album to album, yet are consistently fun to listen to, as if each puts you in a certain energetic mood. Listening to the albums in full, you can tell many tracks experiment with the genre. For example, she brought a niche thing like hyperpop to mainstream listeners in the two albums before Brat.
The novelty in pop music is not usually in the harmony. The novelty is usually in the presentation. The idea is that you hook the audience with familiarity (nostalgia) and then keep them with a novel expression of it. In recent years, this means really strange synth patches and vocal effects.
On the contrary, it's sad that he can't see how his "I Don't Buy What I Can't Understand" strategy wins again. Buffett thrives when bubbles burst.
More than a third of Berkshire's valuation is now cash.
Berkshire has $350 billion USD in cash and cash equivalents. They accumulate cash because they don't find enough things worth buying (all stocks seem too expensive for them). When the bubble bursts, there will be an undershoot, and Berkshire will start buying.
He's been through a few bubble bursts. He used to run basically a hedge fund but wound it up in 1969 at near the top of that bubble and switched to running Berkshire.
Things like this really favor models offered from countries that have fewer legal restrictions. I just don't think it's realistic to expect people not to have access to these capabilities.
It would be reasonable to add a disclaimer. But as things stand I think it's fair to consider talking to ChatGPT to be the same as talking to a random person on the street, meaning normal free-speech protections would apply.
This is what really scares me about people using AI. It will confidently hallucinate studies and quotes that have absolutely no basis in reality, and even in your own field you're not going to know whether what it's saying is real or not without following up on absolutely every assertion. But people are happy to completely buy its diagnoses of rare medical conditions based on what, exactly?
GPT-5 thinking is one of the biggest offenders and it's quite incredible to me that you think it doesn't hallucinate. It makes me strongly suspect your own judgment is impaired. Also, what do you mean asking for "reproducible examples"? Is it somehow not a valid example if it only sometimes makes up citations?
The researchers compared ChatGPT-4 with its earlier 3.5 version and found significant improvements, but not enough.
In one example, the chatbot confidently diagnosed a patient’s rash as a reaction to laundry detergent. In reality, it was caused by latex gloves — a key detail missed by the AI, which had been told the patient studied mortuary science and used gloves.
...
While the researchers note ChatGPT did not get any of the answers spectacularly wrong, they have some simple advice.
“When you do get a response be sure to validate that response,” said Zada.
Which should be standard advice in most situations.
>I think it's fair to consider talking to ChatGPT to be the same as talking to a random person on the street
That’s not how companies market AI though. And the models themselves tend to present their answers in a highly confident manner.
Without explicit disclaimers, a reasonable person could easily believe that ChatGPT is an authority in the law or medicine. That’s what moves the needle over to practicing the law/medicine without a license.
The problem with these kinds of bets is the Fed Put. That's the invisible force levitating stocks. I don't really see that changing unless/until the country genuinely enters a debt or currency crisis. The path is unsustainable, but they'll keep it going as long as they possibly can.
The Fed can take more active measures than just managing rates. I lost some money by unexpectedly finding myself on the opposite side of US government policy - and its dollar firehose - during COVID. Shorting travel-related stocks can be a losing bet if the government wants to "shore up" share prices by directly injecting hitherto-unheard-of amounts of liquidity into the market.
I had early "insider" info on COVID admissions from a Pulmonologist spouse, and an understanding of exponential growth (doubling every few days).
I took the initiative to be the first person to WFH in my org, which I did as a pre-emptive quarantine since I was at risk of being infected: several of my spouse's colleagues subsequently got infected. Unfortunately, I didn't connect my WFH circumstances with investing in Zoom.
Uhh... Powell's term as chairman ends in May 2026. His term as a board member ends Jan 2028. Senate confirmation is irrelevant because Trump will not nominate him again for anything.
It's even worse than that, as Palantir is a Party business. Betting against that is like betting whether specific people were going to be airbrushed out of photos in Stalin's Russia. And if you have that kind of insider insight, why short instead of making a positive bet on whomever the new Party darlings are going to be?
Maybe it makes sense based on the dynamic of the Party needing to run through scapegoats? One could possibly see that Palantir is about to be thrown under the bus, but only connected insiders will know who its exact replacement will be? Personally I don't see signs of Palantir being close to the chopping block though.
Anyone have a solution for this?