
I don’t think that’s completely fair. It basically puts Apple in the same bucket as Google or OpenAI. Google obviously tracks everything you do for ads, recommendations, AI, you name it. They don’t even hide it, it’s a core part of their business model.

Apple, on the other hand, has made a pretty serious effort to ensure that no employee can access your data on these AI systems. That’s hugely different! They’re going as far as severely restricting logging and observability, building and designing their own chips and operating systems, and ensuring that clients will refuse to talk to non-audited systems.

Yes, we can’t take Apple’s word for it. But I think the third party audits are a huge part of how we trust, and also verify, that this system will be private. I don’t think it’s fair to claim that “Apple knows what you’re doing.” That implies that someone, at some level at Apple, can at some point access the data sent from your device to this private cloud. That does not seem to be true.

I think another facet of trust here is that a rather big part of Apple’s business model is privacy. They’ve been very successful financially by creating products that generate money in other ways, and it’s very much not necessary or even a sound business idea for them to do something else.

While I think it’s fair to be skeptical about the claims without 3rd party verification, I don’t think it’s fair to say that Apple’s approach isn’t better for your data and privacy than OpenAI’s or Google’s. (Which I think is the broad implication — OpenAI tracks prompts for its own model training, not to resell, so it’s also “only OpenAI knows what you’re doing.”)



What makes you think that internal access control at Apple is any better than Google's, Microsoft's or OpenAI's? Google employees have long reported that you can't access user data with standard credentials, for example.

Also, what makes you think that Apple's investments in chip design and OS are superior to Google's? Google is known for OpenTitan and other in-house silicon projects. It's also been working on secure enclave tech (https://news.ycombinator.com/item?id=20265625), which has been open source for years.

You're making unverifiable claims about Apple's actual implementation of the technical systems and policies it is marketing. Apple also sells ads (App Store, but other surfaces as well) and you don't have evidence that your AI data is not being used to target you. Conversely, not all user data is used by Google for ad targeting.


It’s not about technology. It’s about their business.

Apple generally engineers their business so that there isn’t an incentive to violate those access controls or principles. That’s not where the money is for them.

Behavior is always shaped by rewards and punishments. Positive reinforcement is always stronger.


One hundred percent this.

All these conversations always end up boiling down to someone thinking they’re being clever for pointing out you have to trust a company at the end of the day when it comes to security and privacy.

Yes. Valid. So if you have to trust someone, doesn’t it make sense for it to be someone who has built protecting privacy into their core value proposition, versus a company that has baked violating your privacy into their value prop?


It's not about being clever, it's about being perceptive. Apple's cloud commitment has a history of being sketchy, whether it's their government alliance in China, the FIVE-EYES/PRISM membership in America, or their obsession with creating "private" experiences that rely on the benefit of the doubt.

Apple doesn't care about you, the individual. Your value as a singular customer is worthless. They do care about the whole; a whole that governments can threaten to exclude them from if they don't cooperate with domestic surveillance demands. How far off do you really think American iCloud is from China? If Apple is willing to backdoor one server, what's stopping them from backdooring them all? If they're willing to lie about notification security, what's stopping them from lying about server integrity too?

And worst of all, Apple markets security. That's it; you can't go verify their veracity outside the dinky little whitepapers they publish. You can't know for sure if they have privacy violation baked into their system because you can't actually verify anything. You simply have to guess, and the best guess you can make gets based off whatever Apple markets as "true" to you. In reality, we can do better with security and should probably expect more from one of the largest consumer technology brands in the world. Simply assuming that they aren't violating user privacy is an absurd thing to gamble your security on.


If you are the target of a nation state level actor, you are already fucked. Most of us just don’t want our behavior sold to our insurance companies or whatever. Apple doesn’t do that because it would kill their brand for very little return.


This is the part that’s always so humorous to me about the super tinfoil hat security crowd. They think they’re in the plot of Mr Robot or something. When for the most part, no one actually cares about them at all.

My dad fits into this category. So worried about being “tracked by the government.” He’s not a dissident. He’s not a journalist. Not a freedom fighter. Just deeply inconveniencing his kids with some of his tech choices.

But if these people were the targets of APTs, all the massive technology lifestyle changes they’ve made to supposedly protect themselves wouldn’t really matter.


I don’t really bother much about security either, but I hate that any argument against people caring about privacy is along the lines of “I have nothing to hide”. Especially on the note of Apple, I remember when a dad was flagged as a pedophile because Apple found photos of his kid in his iCloud and their algorithm decided to get him raided. It’s about control: when you hand your data over to 3rd parties of any kind you are giving up control, and one day that will bite you in the ass in some way. I am willing to take that risk, and you are too, but I still think not wanting that is totally valid. A type of angst I find much more stupid is people being scared of AI taking over the world, HAL x Terminator style…


So this whole thing is about you being angry that your dad doesn't use iMessage?

Sounds like your dad is the cool dude, and you're the tech-obsessed weirdo. Do you visit him often?


Nah he uses iMessage. He’s not that obstinate.

He’s otherwise a good dude. Just makes some tech choices here and there as if he’s a former CIA agent on the run that sort of just make you chuckle and shake your head.


That's the convenient line of blind apathy they rely on, to sell iPhones. If people cared, they would object to owning an iPhone just from the material and labor cost of it... but they don't. It's a running joke that nobody cares what next year's iPhone looks like as long as the trade-in value is good. Apple couldn't kill their brand if they tried, past this point. People don't pay attention anyways.

Which is why it's good for us to demand more from capable companies. Apple looks good when they're scared, and the market wins when they're forced to compete in novel and interesting ways. Success breeds complacency, the rest is distant history.


> And worst of all, Apple markets security. That's it; you can't go verify their veracity outside the dinky little whitepapers they publish. You can't know for sure if they have privacy violation baked into their system because you can't actually verify anything.

Oh, boy, but this is deeply false. Apple literally provides security researchers with models of their devices to verify their security claims on their most important cash cow, the iPhone.

This is just an incredibly bold and verifiably false claim.

Wow.


Apple has tried suing researchers, before: https://www.theverge.com/2021/8/11/22620014/apple-corellium-...

On top of that, they fail to commit to iOS security on the level of AOSP and don't let researchers create hardened variants or custom patches. With actively-distributed exploits like Pegasus still being used, that's the sort of behavior that turns your userbase into a stationary target. Giving researchers iPhones is insultingly useless.

Apple vehemently opposes the concept of anyone securing their iPhone except them. They have a well-documented habit of ignoring vulnerabilities and offering zero compensation for the discovery of zero-days. Apple's ambivalence towards the security research sector is one of the only things they're known for among hacker communities. It is "verifiably false" in the sense that Apple spends quite a lot of money marketing the opposite of what they actually do in reality (not that you should be surprised by that).


Can you explain to me how I might use such a device to verify the security properties of iBoot?


You lose all credibility when you start yakking about FIVE EYES, etc. If you're the target of intelligence services, the advice you need is eloquently delivered in the movie "Goodfellas". That is: "Don't talk on the fucking phone."

American companies are subject to US law, full stop. Global technology companies have to balance interests to operate globally. China requires a local partner to operate services in the PRC, thus Apple and Microsoft (and others) operate with a business partner in that market.

From a business perspective, there's little or no incentive for Apple to take measures to collect information on you systematically - they do not monetize it and won't devote resources to its collection. However, not being responsive to government requests, demands, or orders for information will result in punitive action. So they comply.

No company cares about you. They don't love or hate you. There's no moral purity - the competitive platform is owned by a company that owns the advertising market and has a long history of extracting every sinew of data to create profiles that allow for maximally efficient ad delivery. Engaging in whataboutism isn't productive.


That's a false dichotomy. You may have to trust someone but that someone could be something else than an opaque for-profit company.


Give me some examples of benevolent non profits that provide anywhere near the level of consumer services as a company like Apple.


I'll do better, here's a benevolent nonprofit that goes beyond what Apple provides to ensure top-notch consumer service: https://grapheneos.org/


They're not trying to be clever, they're trying to point out the very important philosophy of maximizing self-reliance that so many people like you eschew.

How do you distinguish between a company who 'has built protecting privacy into their core value proposition' and one who just says they've done so?

What are you going to do if a major privacy scandal comes out with Apple at the center? If you wouldn't jump ship from Apple after a major privacy scandal then why does your input on this matter at all?

Some people feel that is inevitable so it's best to just rip that bandaid off now.


I'm taking aim at the Google bros who try to raise these arguments to muddy the waters into a sort of false equivalence between Apple and Google.

If you're already using a dumb phone and eschewing modern software services, then I'm not really talking to you. Roll on brother/sister, you are living your ideals.

> How do you distinguish between a company who 'has built protecting privacy into their core value proposition' and one who just says they've done so?

The business incentives. Apple's brand and market valuation depend to some extent on being the secure, privacy-oriented company you and your family can trust, while Google's valuation and profit depend almost entirely on exploiting as much of your personal data as they can possibly get away with. The business models speak for themselves.

Does this guarantee privacy and security? Does Apple have a perfect track record here? No of course not, but again if these are my two smartphone choices it seems fairly clear to me.


> but again if these are my two smartphone choices it seems fairly clear to me.

If you really perceive this as a binary choice, I have no idea how you could conclude that iOS is more secure than the Android Open Source Project.

...of course, it's not just a choice between a Google-spyware phone or an Apple-spyware phone. Many people like to reduce it to that so they can rationalize whichever company they pick, but in reality you have many choices including no smartphone at all. On Android's side, the Open Source images have enabled rigorous cross-referencing in OS capability, as well as forks that reduce the already-limited attack surface. Apple has a long track-record of letting zero-days fester in their inbox and failing to communicate promptly to security researchers, even for actively-exploited vulnerabilities.

It's not a "false equivalency" to highlight how Google, Apple and Microsoft all fold over like wet paper when the intelligence agencies come around. It's not a coincidence, either; all of those companies are enrolled in the NSA's domestic warrantless surveillance program.


> but in reality you have many choices including no smartphone at all.

Oh come on man. This is why these conversations often aren’t even worth having.


I'm sorry, hopefully you come back to reality soon. I just went 2 weeks without touching a smartphone, I'm certain you can too.


I think you’re the one not living in reality.

But, hey, at least the NSA won’t get ya.


If you can live without a cellphone, you're not living in reality? Interesting argument.

I wonder how all those people did it in the 90s and 00s and before the age of smartphones.


In those dark derelict days, before the brilliant shining light of creation endowed man with the Subway App.


Simple, everyone around them also didn’t have cellphones.

Reality is based in a context.

Or are we going to go to even more “get off my lawn” kind of places and talk about how ancient man survived quite fine without the internet?


You know this is a growing trend with teens, right?

Like to eschew smartphones and just use basic feature phones and to interact in real physical settings and not digital ones.

There's a growing and warranted push back to pervasive and addictive digital technology.


Alright. Take care.


I worked for Google for almost 14 years. Never did they, nor any other engineer or even product manager I know of, ever suggest snooping into cloud customer data, especially for customers using Shielded VMs and Customer Managed Encryption Keys for attached storage (https://cloud.google.com/kubernetes-engine/docs/how-to/using...). I've never seen even the slightest hint of it, and the security people at Google are incredibly anal about the design and enforcement of these things.

This stuff is all designed so that even an employee with physical access to the machine would find it very difficult to get at the data. It's encrypted at rest with customer keys and held in enclaves in volatile RAM. If you detached the computer or disk, you'd lose access. You'd have to perform an attack by somehow injecting code into the running system, but Shielded VMs/GKE instances make that very hard.
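
For intuition, the envelope-encryption idea behind customer-managed keys looks roughly like the sketch below. This is a toy illustration, not Google's actual implementation; the class and function names are invented:

    # Envelope encryption sketch: the provider stores only ciphertext plus a
    # wrapped (encrypted) data key; the key-encryption key stays in the
    # customer-controlled KMS and is never handed to the provider.
    from cryptography.fernet import Fernet

    class CustomerKMS:                                   # hypothetical stand-in for a real KMS
        def __init__(self):
            self._kek = Fernet(Fernet.generate_key())    # key-encryption key, never leaves the KMS
        def wrap(self, data_key: bytes) -> bytes:
            return self._kek.encrypt(data_key)
        def unwrap(self, wrapped: bytes) -> bytes:
            return self._kek.decrypt(wrapped)

    def provider_store(kms: CustomerKMS, plaintext: bytes):
        data_key = Fernet.generate_key()                 # fresh per-object data key
        ciphertext = Fernet(data_key).encrypt(plaintext)
        return ciphertext, kms.wrap(data_key)            # the provider keeps only these two blobs

    def provider_read(kms: CustomerKMS, ciphertext: bytes, wrapped_key: bytes) -> bytes:
        # Reading back requires the customer's KMS to unwrap the data key first.
        return Fernet(kms.unwrap(wrapped_key)).decrypt(ciphertext)

    kms = CustomerKMS()
    blob, wrapped = provider_store(kms, b"attached-disk contents")
    assert provider_read(kms, blob, wrapped) == b"attached-disk contents"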

I am not a Google employee anymore, but this common tactic of just throwing out "oh, their business model contains an ad model, ergo they will sell anything and everything, and violate contracts they sign to steal private data from your private cloud" is a bridge too far.


That's becoming less the case. As Apple's advertising and services revenue grows and hardware sales slow, they have an increasing incentive to mine your data the same as any company does. They already use quite a bit of data on the location and content personalization front. I would argue that Apple perhaps cares about protecting your data more from malicious third parties (again, like any company should - it's never good for FAANG when data leaks or is abused), but they are better at it (and definitely better at marketing it).


> What makes you think that internal access control at Apple is any better

There are multiple verified stories about the lengths Apple goes to internally to keep things secret.

I saw a talk years ago about (I think) booting up some bits of the iCloud infrastructure, which required two different USB keys holding different key material to boot. Both keys were then destroyed, so that nobody knows the encryption keys and nobody can decrypt the contents.
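
The underlying idea is essentially a 2-of-2 secret split. A minimal sketch of the concept (not Apple's actual ceremony; names and mechanism are just for illustration):

    # 2-of-2 key split: the boot key only exists while both shares are present;
    # destroy either share and the key is unrecoverable.
    import secrets

    def split_key(key: bytes) -> tuple[bytes, bytes]:
        share_a = secrets.token_bytes(len(key))               # random pad, e.g. on key/card A
        share_b = bytes(k ^ a for k, a in zip(key, share_a))  # key XOR pad, on key/card B
        return share_a, share_b

    def recombine(share_a: bytes, share_b: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(share_a, share_b))

    boot_key = secrets.token_bytes(32)
    a, b = split_key(boot_key)
    assert recombine(a, b) == boot_key   # either share alone is indistinguishable from random noise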


The stories about Apple keeping things secret are usually about protecting its business secrets from normal people, up to and including probably illegal actions.

Using deniable, one-time keys etc. is... not that unusual. In fact, I'd say I'm more worried about the use of random USB keys there instead of a proper KMS.

(There are similar stories from Google about how doing a cold start can be difficult when you end up with a loop in your access controls: a fortunately simulated cold start showed that they couldn't physically access the necessary KMS to bootstrap the system, because the access controls depended, after many layers, on the very system being cold-started.)


They used smartcards, not USB keys.


Which probably were just key transport devices from offline secured KMSes


What's funny is that, in all these orgs, it ends up being the low-tech vulns that compromise you in the end: physical access, social engineering, etc. That said, I'm really impressed by the technical lengths Apple goes to. The key-burning thing reminds me of ICANN's Root KSK Ceremonies.


Destroyed? Where? In all the places where they were stored? Or just in some of them? How can you tell? You still need to trust that they didn't copy them somewhere.


It's impossible to use any technology if you don't trust anyone.

Any piece of technology MAY have a backdoor or secondary function you don't know of and can't find out without breaking said device.


That was the point of my response. Somewhere in the chain one must trust something without any proof.


That's not even getting to the fact that Apple is also running a display ads business: https://searchads.apple.com/


Indeed. Apropos of this: new features[1] to insert ads into videos in native apps.

[1]: https://developer.apple.com/videos/play/wwdc2024/10114/


Such a lazy take. Yes, they show ads based on what you search for in the App Store. They will also show apps based on location if the customer opts in to that feature. No other data is used. No browsing history, no purchase history, nothing like what other companies are collecting.

https://searchads.apple.com/privacy


Glancing at your comment history I can't help but notice that most of your comments are related to defending Apple, even at points where the consensus on HN is that Apple is obviously in the wrong. I applaud you, sir.


Eventually the addressable market for iPhones will saturate, but the growth imperative will remain.

If I were king of Apple and I truly valued user privacy, I would be careful not to tie any revenue streams to products that entail the progressive violation of user privacy.


"I think another facet of trust here is that a rather big part of Apple's business model is privacy. They've been very successful financially by creating products that generate money in other ways, and it's very much not necessary or even a sound business idea for them to do something else."

If a third party wants that data, whether the third party is an online criminal, government law enforcement or a "business partner", this idea that Apple's "business model" will somehow negate the downsides of "cloud computing", online advertising and internet privacy is futile. Moreover, it is a myth. Apple is spending more and more on ad services, we can see this in its SEC filings. Before he died, Steve Jobs was named on an Apple patent application for showing ads during boot. The company uses "privacy" as a marketing tactic. There is no evidence of an ideological or actual effort to avoid the so-called "tech" company "business model". Apple follows what these companies do. It considers them competitors. Apple collects a motherlode of user data and metadata. A company that was serious about privacy would not do this. It's a cop-out, not a trade-off.

To truly avoid the risks of cloud computing, online advertising and associated privacy issues, choosing Apple instead of Google is a half-baked effort. Anyone who was serious about it would choose neither.

Of course, do what is necessary, trust whomever; no one is faulting anyone for making practical choices, but let's not pretend choosing Apple and trusting it solves these problems introduced by so-called "tech" company competitors. Apple pursues online advertising, cloud computing and data collection. All at the expense of privacy. With billions in cash on hand, it is one of the wealthiest companies on Earth; does it really need to do that?

In the good old days, we could call Apple a hardware company. The boundaries were clear. Those days are long gone. Connect an Apple computer to a network and watch what goes over the wire with zero user input, destined for servers controlled by the mothership. There is nothing "private" about that design.


> Of course, do what is necessary, trust whomever; no one is faulting anyone for making practical choices, but let's not pretend choosing Apple and trusting it solves these problems introduced by so-called "tech" company competitors. Apple pursues online advertising, cloud computing and data collection. All at the expense of privacy. With billions in cash on hand, it is one of the wealthiest companies on Earth; does it really need to do that?

Yeah. I feel like the conversation needs some guard rails like, "Within the realm of big tech, which has discovered that one of its most profitable models is to make you the product, Apple is really quite privacy friendly!"


Disclaimer: I used to work on Google Search Ads quality models

> Google obviously tracks everything you do for ads, recommendations, AI, you name it. They don’t even hide it, it’s a core part of their business model.

This wasn't the experience I saw. Google is intentional about which data from which products go into their ads models (which are separate from their other user modeling), and you can see things like which data of yours is used in ads personalization on https://myadcenter.google.com/personalizationoff or in the "Why this ad" option on ads.

> and it’s very much not necessary or even a sound business idea for them to do something else

I agree that Apple plays into privacy with their advertising and product positioning. I think assuming all future products will be privacy-respecting because of this is over-trusting. There is _a lot_ of money in advertising / personal data.


"ensuring that clients will refuse to talk to non-audited systems."

I'm trying to understand if this is really possible. I know they claim so, but is there any info on how this would prevent Apple from executing code different from what is presented for audit?


The servers provide a hash of their environment to clients, who can compare it to the published list of audited environments.

So the question is: could the hash be falsified? That’s why they’re publishing the source code to firmware and bootloader, so researchers can audit the secure boot foundations.
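
Roughly, the client-side check works like the sketch below (a toy illustration, not Apple's actual protocol; the measurement values, list, and function names are invented). The hard part - making the reported measurement trustworthy in the first place - is exactly what the hardware root of trust and signed attestations are for:

    # Refuse to talk to any node whose measurement isn't on the published,
    # audited list. Proving the measurement is honest (secure boot, signed
    # attestation) is deliberately omitted here.
    AUDITED_MEASUREMENTS = {
        "measurement-of-release-A",   # placeholder values for published, audited builds
        "measurement-of-release-B",
    }

    def verify_node(reported_measurement: str) -> bool:
        return reported_measurement in AUDITED_MEASUREMENTS

    def send_request(reported_measurement: str, payload: bytes) -> None:
        if not verify_node(reported_measurement):
            raise ConnectionRefusedError("node is not running an audited environment")
        # ...encrypt the payload to a key bound to that attested environment and send it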

I am sure there is some way that a completely malevolent Apple could design a weakness into this system, so they could spend a fortune on the trappings while still being able to access user information - information they could never use without exposing the lie and being crushed under class actions and regulatory assault.

But I reject the idea that that remote possibility means the whole system offers no benefit users should consider in purchasing decisions.


I'm sure I'm missing something, but isn't that just an untrusted server self-reporting its own hash? Apple publishes the bootloader source, and we'd have to assume it's what's actually running and honestly reporting the hash of the OS it's hosting. So we need to go earlier in the chain. In the end, from afar, we don't know if we're communicating with an actual Secure Enclave/SGX or whatever, or with something that just acts like one.

Matt Green has posted about it, so I'm sure it's been thought out - but it's hard to understand how it doesn't just depend on employees doing the right thing, since if you could, you wouldn't need all the rigmarole.


You're not missing anything.



Unless they pass all keys authorized by the system to third parties that ensure appropriate auditing, none.

And at least after my experiences with the T2 chip, I consider Apple devices to be always owned by Apple first...


It's completely fair, because regardless of third party audits, chips, etc, there are backdoors right along the line that are going to provide Apple and the government with secret legal access to your data. They can simply go to a secret court, receive a secret judgment, and be authorised to secretly view your data. Does anyone really think this is not already the case? There is no transparency. A licensed third party auditor would not be able to tell you this. We have to operate with the awareness that all data online is already not private - no need to pretend/imagine that Apple's marketing is actually true, and that it is possible to buy online privacy utopia.


The best protection against "secret orders" is to use mathematics.

Build your system so that it can't be decrypted, don't log anything etc. Mullvad has been doing this with VPNs and law enforcement has tested it - there's nothing for them to get.

Same has been proven with Apple not allowing FBI to open an iPhone, because it'd set a precedent. Future iPhone versions were made so that it's literally impossible for even Apple to open a locked iPhone.

There's no reason why they wouldn't go to the same lengths on their private cloud compute. It's the one thing they can do that Google can't.
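
The "even we can't decrypt it" trick is, at its core, deriving the key on-device from something only the user has (the passcode, entangled with a per-device secret), so the server only ever stores ciphertext. A minimal sketch of the concept - parameters and names are illustrative, not Apple's or Mullvad's actual design:

    # Key derived on-device from the passcode plus a per-device salt; the cloud
    # stores only ciphertext and never sees the key material.
    import base64, os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

    def derive_key(passcode: str, device_salt: bytes) -> bytes:
        kdf = Scrypt(salt=device_salt, length=32, n=2**15, r=8, p=1)
        return base64.urlsafe_b64encode(kdf.derive(passcode.encode()))

    device_salt = os.urandom(16)                       # stays in the device's secure element
    key = derive_key("123456", device_salt)
    ciphertext = Fernet(key).encrypt(b"user data")     # this is all the server ever holds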


> Same has been proven with Apple not allowing FBI to open an iPhone, because it'd set a precedent.

I thought the outcome of that case was that no precedent was set, since the iPhone was unlocked before the FBI could test their argument in court.

> Future iPhone versions were made so that it's literally impossible for even Apple to open a locked iPhone.

Firmware signed by Apple is what runs to verify your biometrics and decide whether or not to unlock the device. At any point Apple could sign firmware with a backdoor for this processor which lets them unlock any phone. How did they prevent this in future iPhone versions?
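
The trust model here is easy to see in a sketch: the device only checks that an image verifies under the vendor's public key, so whoever holds the matching private key can sign anything the device will accept. A toy illustration only, obviously not Apple's real boot chain:

    # Secure-boot-style check reduced to its essence: "is this image signed by
    # the vendor key baked into the device?" Holding that private key means
    # being able to sign a backdoored image too.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    vendor_private = Ed25519PrivateKey.generate()   # held by the vendor
    VENDOR_PUBLIC = vendor_private.public_key()     # baked into the device

    def device_accepts(image: bytes, signature: bytes) -> bool:
        try:
            VENDOR_PUBLIC.verify(signature, image)  # raises if the signature is bad
            return True
        except InvalidSignature:
            return False

    legit = b"firmware-v18"
    backdoored = b"firmware-v18-with-unlock-bypass"
    assert device_accepts(legit, vendor_private.sign(legit))
    assert device_accepts(backdoored, vendor_private.sign(backdoored))   # nothing in the math stops this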

> There's no reason why they wouldn't go to the same lengths on their private cloud compute. It's the one thing they can do that Google can't.

They did go to the same lengths: they have the ability to see your data whenever they choose to, since they own the signing keys.


> Build your system so that it can't be decrypted

Now you can't debug anything.

> Mullvad has been doing this with VPNs

Mullvad do not need to store any data at all. In fact, any data that they store is a risk. Minimising the data stored minimises their risk. The only thing they need to store is keys.

Look, if you want to ask an AI service if this photo has a dog in it, that's simple and requires no state other than the photo. If you want to ask whether it has my dog in it, that's a whole 'nother kettle of fish. How do you communicate the descriptors that describe your dog? How do you generate them? On device? That'll drain your battery in very short order.

> Apple not allowing FBI to open an iPhone, because it'd set a precedent

Because they didn't follow process.

> Future iPhone versions were made so that it's literally impossible for even Apple to open a locked iPhone.

They don't need to; just hack the iCloud backup. Plus, it's not impossible, it's just difficult. If you own the key authority then it's less hard.


> Same has been proven with Apple not allowing FBI to open an iPhone, because it'd set a precedent. Future iPhone versions were made so that it's literally impossible for even Apple to open a locked iPhone.

Right, but I have no reason to think that this isn't a marketing ploy either, just another story. There is simply no way that Apple is as big as it is, without providing whatever data the government requires. Corporations and governments are not your friend.


Apple will obey government orders to give data they have and can access.

No government order short of targeting a specific backdoored update to a specific person will allow them to give data they can't access.

And if you're doing something that can make a TLA force Apple to create a targeted iOS update just for you, it's not something regular people can or should worry about.

Apple keeps normal people safe from mass surveillance; being protected from the CIA/NSA requires going full Snowden, and that's not a technological problem - you need to change the way you live.


> No government order short of targeting a specific backdoored update to a specific person

I'm failing to see what the challenge would be here. Apple can technically do that. The government can force them to do that.


Do you not remember Edward Snowden? Eg this sort of info:

> The scandal broke in early June 2013, when the Guardian newspaper reported that the US National Security Agency (NSA) was collecting the telephone records of tens of millions of Americans.

> The paper published the secret court order directing telecommunications company Verizon to hand over all its telephone data to the NSA on an "ongoing daily basis".

https://www.bbc.com/news/world-us-canada-23123964

You seem to think that 10 years on, under cover of secret orders, this is NOT going on now. Not Apple!

People's lovely trusting natures in corporations and government never ceases to amaze me.


"telephone data" != "contents of every phone call"


Contents of communications aren't as important as you may think; metadata is extremely dangerous.


You and I have no idea.


> Does anyone really think this is not already the case?

I don't think this is already the case, and I think the article is an example of safeguards being put into place (in this particular scenario) to prevent it.


On the basis of not having information, cos all this occurs out of sight, you believe this is not the case. Ok.


If you’re presenting a conspiracy theory, you have to at least poke holes in the claims you consider false.

Under the system described in the linked paper, your scenario is not possible. In fact, the whole thing looks to be designed to prevent exactly that scenario.

Where do you see the weakness? How could a secret order result in undetectable data capture?


No. The information is all out there - secret courts, secret judgements, it's all been put out there. I don't need to dissect any technical information to recognise that I cannot know what I do not know.

In case anyone was uncertain about whether to trust what we are told - we heard from the Snowden revelations that the US government was collecting millions of phone records.

So, we are told there are secrets, and we are told that there are mechanisms in place to prevent this information from being made public.

You are also free to believe that the revelations are no longer relevant... I'd like to hear the reason.

IMO - the reverse is the case - in that you need to show why Apple have now become trustworthy. Why would Apple not be subject to secret judgements?

I know there is a lot of marketing spin about Apple's privacy - but do you really think that they would actually confront the government system, in a way that isn't some further publicity stunt? Can one confront the government and retain a license to operate, do you think? Is it not probable that the reality is that Apple have huge support from the government?

Perhaps this kind of idea is hard to understand - that one can make a big noise about privacy, and about how one is doing this or that to prevent access, all the while ensuring that access is provided to authorised parties. Corporations can say this sort of thing with a straight face - it's not a privacy issue to provide private information, it's a (secret) legal issue!

Sorry, but secret courts and secret judgements, along with existing disclosure that millions were being spied upon, means one needs to expect the worst.


Fair, go ahead and expect the worst, and handwave away any attempts to mitigate.

But I'm not sure where that leaves you. Is it just a nihilistic "no security matters, it's all a show" viewpoint?


It is fair; I don't accept attempts to mitigate. The trust is gone, and nothing can recover it. The idea of trusting government and corporations was ridiculous in the first place, as these entities are not your friends.

You wouldn't expect a repeat abuser to stop abusing just because of 'time' or a marketing campaign. And yet this is the case here. People keep looking to their tormentors for solutions.

Not expecting healing from those also inflicting the trauma, ie changing one's expectations, seems like a minimum effort/engagement in my view, but it's somehow inconceivable.


Apple uses your information for advertising as well.

https://www.apple.com/legal/privacy/data/en/apple-advertisin...

It also exempts itself from the normal tracking opt-outs in iOS. There is _another_ set of settings you need to opt out of to disable _their_ advertising tracking.

https://support.apple.com/en-us/105131


I think it’s pretty fair. This example isn’t about Apple but about Microsoft: we’ve had a decade-long period where Microsoft has easily been the best IT business partner for enterprise organisations. I’ve never been much of a fan of Microsoft personally, but it’s hard to deny just how good they are at building relationships with enterprise. I can’t think of any other tech company that knows enterprise the way Microsoft does, but I think you get the point… anyway, they too are beginning to “snoop” around.

Every Teams meeting we have is now transcribed by AI, and while it’s something we want, it’s also a lot of data in the hands of a company where we don’t fully know what happens with it. Maybe they keep it safe and only really share it with the NSA or whichever American sneaky agency listens in on our traffic. Which isn’t particularly tin-foil-hat: we’ve semi-recently had a spy scandal where it was, somewhat unrelatedly (this wasn’t the scandal itself), revealed that our own government basically lets the US snoop on every internet exit node our country has. It is what it is when you’re basically a form of vassal state to the US. Anyway, with the increased AI monitoring tools built directly into Microsoft products, we’re now handing over more data than ever.

To get to the point: we’re currently seeing some debate on whether Chromebooks and Google education/workspaces should be allowed in schools. Which is a good debate. Or at least it would be if the alternative wasn’t Microsoft… Because does it really matter if it’s Google or Microsoft that invades your privacy?

Apple is increasingly joining this trend. Only recently it was revealed that new Apple devices have some sort of radio built into them, even though it’s not on their tech sheets. Or in other words, Apple has now joined the trend of devices that can form their own network by being near other Apple devices. Similar to how Samsung and most car manufacturers have operated for years now.

And again it sort of leads to… does it really matter if it’s Google or Apple that intrudes on your privacy? To some degree it does, of course; I’d personally rather have Microsoft or Apple spy on me, but I would frankly prefer it if no one spied on me.



