
A16Z is basically funding toxic fungi growing on the face of society at this point. So much of what they do seems to be a bet that people will want to pay money to do antisocial things and avoid the consequences.

I have a theory: They realized the right approach is to focus purely on the yes/no of what you choose to consume, rather than trying to optimize the consumption experience itself.

Remember how YouTube and Netflix used to let you rate things on 1-5 stars? That disappeared in favor of a simple up/down vote.

Most services are driven by two metrics: consumption time and paid subscriptions. How much you enjoy consuming something does not directly impact those metrics. The providers realized the real goal is to find the minimum-quality thing you will still consume and then serve you everything above that line.

Trying to find the closest match possible was actually the wrong goal; it pushed you to rank things and set standards for yourself. The best thing for them was for you to focus on simple binary decisions rather than curating the best experience.

They are better off having you begrudgingly consume 3 things rather than excitedly consuming 2.

The algorithmic suggestion model is to find the cutoff line of what you're willing to consume and then surface everything above that line, ranked on how likely you are to actually push the consume button rather than on how much you'll enjoy it. The majority of that content (due to the nature of a bell curve) sits barely above the line.
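
A minimal sketch of that dynamic in code (illustrative only: the item names, scores, and the 0.5 cutoff are made-up assumptions to show the shape of the idea, not any real service's recommender):

    # Hypothetical "cutoff line" recommender: keep everything the user is
    # willing to consume, then rank by the chance they press the consume
    # button rather than by predicted enjoyment.
    from dataclasses import dataclass

    @dataclass
    class Item:
        title: str
        p_consume: float   # predicted probability the user starts watching
        enjoyment: float   # predicted enjoyment (never used in the ranking)

    def recommend(items: list[Item], cutoff: float = 0.5) -> list[Item]:
        above_line = [it for it in items if it.p_consume >= cutoff]
        return sorted(above_line, key=lambda it: it.p_consume, reverse=True)

    catalog = [
        Item("Show you'd genuinely love", p_consume=0.55, enjoyment=0.95),
        Item("Background filler", p_consume=0.80, enjoyment=0.40),
        Item("Mediocre sequel", p_consume=0.70, enjoyment=0.45),
    ]

    for it in recommend(catalog):
        print(f"{it.title}: p_consume={it.p_consume}, enjoyment={it.enjoyment}")
    # The barely-above-the-line items win, because they maximize the
    # consumption metric even though you'd enjoy them the least.

Note that enjoyment never enters the sort key; that's the whole point of the cutoff-line framing.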


Those debates just show why Grokipedia is needed. Truth seeking is the opposite, it's when you demand crowdsourced consensus that debates become endless and stupid.

Here's a cut and dried case: the BBC admitted recently to broadcasting faked video of a Trump speech. It wasn't a mistake and the lying was institutional in nature, i.e. an internal whistleblower tried to get it fixed and BBC management up to the top viewed it as OK to broadcast video they knew was fake. Even when it was revealed publicly, they still defended it with logic like "OK, maybe Trump didn't say that but it's the sort of thing he might have said".

So the BBC can't be considered a reliable source, yet Wikipedia cites it all over the place. This problem was debated here:

https://en.wikipedia.org/wiki/Wikipedia:Reliable_sources/Not...

The discussion shows just how stupid Wikipedia has become. Highlights include:

1. Calling The Telegraph a tabloid (it's not)

2. Not reading the report ("What exactly was edited incorrectly?", "it's just an allegation")

3. Circular logic: "This just seems like mudslinging unless this is considered significant by less partisan publications" but their definition of "less partisan" means sources like the BBC, that just lied for partisan reasons.

4. Shooting the messenger for not being left wing enough.

5. Not fixing the problem: "Closed as per WP:SNOW. There is no indication whatsoever that there is consensus to change the status of the BBC as a generally reliable source, neither based on the above discussion nor based on this RfC".

Wikipedia is as broken as can be. It institutionally doesn't care that its "reliable sources" forge video evidence to manipulate politics. As long as left wing people turn up to defend it, there is no consensus, and nothing will change even if those people clearly don't even bother reading what happened. The death of Wikipedia will be slow, but it will be thoroughly deserved.


They deserve it. Look, Microsoft and Meta and Google are all evil companies too, but all of them actually make good-faith contributions to Open Source projects. Meta maintains React.js and Zstd, Microsoft does tons of open work on TypeScript, VS Code and the Linux kernel. Google handles the AOSP, the Blink engine, Go, Angular, JAX, gRPC... the list goes on. They're brimming with genuine, self-evident goodwill, even when their businesses detract from humanity.

So: what exactly are Apple's big, selfless contributions? XNU is source-available but unusable without buying proprietary Apple hardware. iBoot is a mystery that has to be reverse-engineered before it can be used the way UEFI can. Open standards like Vulkan are ignored for political reasons, CUPS is basically derelict, WebKit killed KHTML because sharing was too hard, and APFS and Metal are both still undocumented despite promised transparency. CoreML is proprietary but can't compete with CUDA, iPads can't use QEMU despite supporting it in hardware, all the Apple Silicon DeviceTree code is private, competing runtimes like Corellium get sued, and security researchers get ignored. Swift, as an offering to the Open Source community, is a punch line at the end of a 20-year-long gag.

Apple does genuinely nothing to advance the wellbeing of common computing for mankind. Apple leaves behind no charitable contributions to anything that does not ensure its absolute preservation as a business. Combined with the proven anticompetitive damage that the App Store inflicts on the burgeoning mobile market, they are unequivocally a net-negative force and aren't hated enough for their parasitic influence on global software production.


It doesn't hinge entirely on that. There's a lot of ambient background context here too.

The idea Google is hostile to long tail indie content isn't exactly a groundbreaking claim, it's been obvious and widely discussed for years. They've been losing the original culture for a long time. Google circa 2000-2010 was very libertarian. It believed in a large decentralized web in which Google helped all users with all queries, without passing judgement. If it was obscure and you wanted it, Google would reliably surface it in the first page of results every single time. This was the Google that believed in the indie web so much it purchased Blogger.

Starting around 2010-2012 the rate at which they hired new grads went up quite sharply (I was there and saw it). The average level of experience dropped sharply. These recruits brought with them the new authoritarian politics of the university campus. Around 2015-2016 you start to see Google openly engage in political activism, tossing the hard-won reputation for neutrality in the trash. Unfortunately, this new worldview was incompatible with the prior commitment to the indie web. Whereas the Google of Matt Cutts cared a lot about surfacing tiny sites, the new Google became highly suspicious of any content that wasn't from sources they deemed "reliable", "authoritative" etc [1]. They defined these terms to mean basically any large left-leaning source, without reference to objective metrics. Put simply: if it's on .gov, .edu or one hop removed then it's reliable, and if it's not then it isn't.

This shows up in how easy it now is to find queries where Google gives you the exact opposite of what you're asking for, no matter how clearly you specify the search terms. This would once have been considered a high-severity code yellow; now it's by design. The open web won out over AOL partly because old Google fostered it, but one gets the feeling that Google now views its child with disgust. Can you imagine Google purchasing Substack, as they once did with Blogger? It's unthinkable. They'd undoubtedly view it as a hive of villainy and scum. In the event they did buy it, the first thing they'd do is delete most of its content.

Unfortunately, you can't be both anti-misinformation and pro-open-web. These two things are irreconcilable. Either the world is complex and anyone might have insight to contribute, or it's simple and the right answer is always found via traversing a shallow hierarchy of trusted sources.

So: does your random indie travel blog "demonstrate expertise" or "authoritativeness" as defined by someone who has been through the Ivy League universities? No. Are these the sorts of sites that can eventually become big and a recognized source of authoritative expertise, given enough nourishment from the watering can of unbiased search? Yes! That's how the web grew to start with. But Google doesn't care anymore and with the loss of its primary patron the open web is in its twilight years. As the author says: he was invited to Google HQ to hear an apology, and also to be told nothing will change. The new web is no different to AOL except in minor technical details, because that's how the woke generation like it.

[1] e.g. https://support.google.com/websearch/answer/12395529?hl=en


> But others said the admissions exam and additional application requirements are inherently unfair to students of color who face socioeconomic disadvantages. Elaine Waldman, whose daughter is enrolled in Reed’s IHP, said the test is “elitist and exclusionary,” and hoped dropping it would improve the diversity of the program.

Recognizing gifted students is inherently discriminatory. Because these are the numbers:

Average IQ [1]

- Ashkenazi Jews - 107-115

- East Asians - 110

- White Americans - 102

- Black Americans - 90

There are other numbers from other sources, but they all rank in that order. There's a huge amount of denial about this. There are more articles trying to explain this away than ones that report the results.

(Average US Black IQ has been rising over the last few decades, but the US definition of "Black" includes mixed race. That may be a consequence of intermarriage producing more brown people, causing reversion to the mean. IQ vs 23andMe data would be interesting. Does anyone collect that?)

Gladwell's new book, "Revenge of the Tipping Point", goes into this at length. The Ivy League is struggling to avoid becoming majority-Asian. Caltech, which has no legacy admissions, is majority-Asian. So is UC Berkeley.[3]

Of course, this may become less significant once AI gets smarter and human intelligence becomes less necessary in bulk. Hiring criteria for railroads and manufacturing up to WWII favored physically robust men with moderate intelligence. Until technology really got rolling, the demand for smart people was lower than their prevalence in the population.

We may be headed back in that direction. Consider Uber, Doordash, Amazon, and fast food. Machines think and plan, most humans carry out the orders of the machines. A small number of humans direct.

[1] https://iqinternational.org/insights/understanding-average-i...

[2] https://www.brookings.edu/articles/the-black-white-test-scor...

[3] https://opa.berkeley.edu/campus-data/uc-berkeley-quick-facts


It's lacking agency, it's not doing enough of that quacking and walking, it doesn't have memory, and it doesn't have self-consciousness.

The Panpsychist view is a good starting point, but ultimately it's too simple. (It's just a spectrum, yes, and?) However, what I found incredibly powerful is Joscha Bach's model(s) of intelligence and consciousness.

To paraphrase: intelligence is the ability to model the subject(s) of a mind's attention, and consciousness is a model that contains the self too. (Self-directed attention. Noticing that there's a feedback loop.)

And this helps us understand that these AIs currently have their intelligence outside of their self; they have no agency over (control of) their attention, and they have little persistence for forming models based on that attention. (The formed attention-model lives in the prompt, and it does not get integrated back into the trained model.)
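
A rough sketch of that last point, assuming nothing beyond a generic frozen model behind a stand-in generate() function (the names here are hypothetical, not any particular API): the per-conversation "attention-model" exists only as a prompt that is rebuilt and re-sent every turn, while the trained weights never change.

    # Everything the system "learns" mid-conversation lives in `history`,
    # which is re-serialized into the prompt each turn; the trained model
    # itself is frozen and never absorbs any of it.
    FROZEN_WEIGHTS = object()  # stands in for weights fixed at training time

    def generate(weights, prompt: str) -> str:
        # Placeholder for a forward pass through a frozen model.
        return f"<reply conditioned on {len(prompt)} chars of context>"

    history: list[str] = []  # the only persistence the system has

    def chat_turn(user_msg: str) -> str:
        history.append(f"User: {user_msg}")
        prompt = "\n".join(history)  # the attention-model, rebuilt every turn
        reply = generate(FROZEN_WEIGHTS, prompt)
        history.append(f"Assistant: {reply}")
        return reply

    chat_turn("Here is who I am and what I care about.")
    chat_turn("Do you remember?")  # only while it still fits in the prompt
    # Delete `history` and the "self-model" is gone; FROZEN_WEIGHTS is untouched.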


There are a few things that have led to this trend, none of which are obviously 'wrong' or dysfunctional:

1. Better GFX quality. Modern screens have vastly higher pixel resolutions than before and vastly higher than any embedded device. Also everything now has to be anti-aliased by default and we expect sophisticated Unicode support everywhere.

This means UI is lovely and sharp - our stuff just looks fantastic compared to the Windows 95 era. But this came at a really high and non-linear cost increase, because you can't do CPU rendering and keep up the needed pixel rates anymore (see the rough arithmetic after this list). This has caused a lot of awkwardness, complexity and difficulty lower down in the stack as people try to move more and more graphics work to the GPU but hit problems of internal code complexity, backwards compatibility etc.

2. Windows dominance ended, meaning apps have to be platform neutral at reasonable cost. In turn that means you can't use the OS native GUI widgets anymore unless you're writing mobile apps or some artisanal macOS app - you have to use some cross platform abstraction. This also led to the widespread use of GCd languages for UI, because ain't nobody got time for mucking around with refcounting and memory ownership in their UI code anymore.

3. For various reasons like distribution and sandboxing, browsers met people's needs better than other ways of writing apps, but browser rendering engines are massively constrained in how much they can improve, again due to the backwards compatibility requirements of the web. Flash demonstrated that viscerally back when it was around. So a lot of potential performance got lost in the transition to web apps, and memory usage exploded due to the highly indirect and complicated DOM rendering model, which in turn needs layers of (non-mmap-shareable) code to make it digestible for devs.

4. Browser devs lost confidence in language-based sandboxes and so moved to process-based sandboxes, but a process is an extremely heavyweight thing - there's a lot of CPU cost from all the IPC and context switching, and it's especially expensive in terms of memory overhead.
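
To make point 1 concrete, here is the back-of-the-envelope pixel arithmetic (the display sizes and refresh rates below are my own assumed examples, not figures from the original comment):

    # Rough fill-rate arithmetic: how many pixels per second a renderer must
    # touch just to repaint the screen, before anti-aliasing or compositing.
    def fill_rate(width: int, height: int, fps: int, bytes_per_px: int = 4):
        pixels_per_s = width * height * fps
        return pixels_per_s, pixels_per_s * bytes_per_px / 1e9  # (px/s, GB/s)

    for name, w, h, fps in [
        ("Win95-era 800x600 @ 30 Hz", 800, 600, 30),
        ("1080p @ 60 Hz", 1920, 1080, 60),
        ("4K @ 60 Hz", 3840, 2160, 60),
    ]:
        px, gb = fill_rate(w, h, fps)
        print(f"{name}: {px / 1e6:.0f} Mpx/s, ~{gb:.2f} GB/s of framebuffer writes")
    # 4K@60 is roughly 35x the pixel rate of the Win95-era case, which is why
    # software rasterization stopped being viable and the work moved to the GPU.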

You ask why embedded is different. Others here are asking why games are different. This is simple enough:

1. Embedded apps don't care about OS independence, don't care about security or sandboxing and only sometimes have large hi-res displays. If you're on a 40MHz CPU your display is probably a dinky LCD. You can lose the abstractions and write much closer to the metal.

2. Game engines and GPUs co-evolved with the needs of games driving GPU features and capabilities. In contrast, nobody was buying a hot new NVIDIA card to make their browser scroll faster. Games also benefit from historically being disposable software in which the core tech isn't really evolved over a long period of time, so devs can start over from scratch quite frequently without backwards compatibility being a big deal. Normal application software can't justify this. Of course games are going the same way as app software with Unreal becoming a kind of OS for games, but ultimately, it's shipped with the app every time and porting titles between major engine versions is rare, so they can change things up every so often to get better performance.

Could things have been different? Maybe. If just a tiny handful of decisions had been different in the late 90s then Jobs would never have come back to Apple and the dominance of Windows would never have ended. The iPhone would never have happened and Android would have remained a BlackBerry competitor at best, with a UI to match. If the Windows team had executed better and paid more attention to security basics like sandboxing, ActiveX could have remained a common and viable way to ship apps inside the browser. Flash might still be around, because it was ultimately Google and Apple who killed it off by fiat - Microsoft wanted to compete via Silverlight, but were by then sufficiently respectful of anti-trust concerns that they wouldn't have simply announced they were going to murder it in cold blood.

So it's easy to imagine a parallel universe where our tech stack looks very different. But, this is the one we live in.


Same here. When watching the Elton John movie, I could see all the artificial image enhancements made to the picture to make it look better, burning the sincerity out of it in the process. Same for the sound, which put the characters' mouths completely out of sync with the music (in the first scene the voices are singing intensely, but you can see the faces aren't breaking a sweat; uncanny valley ensued).

This week, I started (and gave up on) watching "The Witches", which has clearly been designed by committee. All the ingredients are nice, but there is this feeling that they are in there because people thought they should be, not because anyone felt they should. It was not storytelling, it was product manufacturing. We were being sold concepts: this is a lovable character, here is a tragedy to bond over, here is the bad guy, she is scary, oh, look at the grandma you wish you had, and so on.

The problem is, they do that because it works. The Star Wars remakes made bank. The Disney live-action remakes did as well.

If I discuss this with my friends, they don't seem to mind, they enjoy it.

Now, I ate at McDonald's for years and enjoyed it, so I get that you can very well enjoy scientifically crafted, quick and satisfying experiences.

But at least for movies, it's becoming hard for me. Maybe I have rose-tinted glasses about movies of the past, but I watched Groundhog Day last week, and it felt nothing like this. A pure creative and fun experience.

I guess it's a blessing: it will save me time and money.

On the other hand, it leaves a bad taste in my mouth. Don't know why I'm bitter about this, cause it's not like it's an important thing, but I react strongly to it.


Is there a hosted service for LibreOffice Online? I wish I could find a better replacement for Google Drive.

She who pays the piper calls the tune

> Any new kind of "internet" that depends on traditional infrastructure that is under the control of states/corporations will end up being exactly like the one we have now.

There is plenty of evidence to suggest otherwise: decentralised or federated versions of all kinds of things on the web exist, and work, today... just no one uses them, of course.

The problem is that none of these ideas seems able to attain dominance, because that's not in their nature; it is in the nature of capitalism at the deep end, which already has a grasp on the things that most decentralised tech is trying to replace.

The exceptions are decentralised technologies that have no corporate competition, e.g. BitTorrent. I think that's a very useful observation. It's a classic "make a better horse" scenario: it's difficult to see the future when you are focused on present technology. I think the way to make the centralised web obsolete is not to merely decentralise its existing manifestation, but to make something entirely, fundamentally different, because corporations don't innovate in that dimension, so they can't compete, and can't dominate early enough to monopolize it.


You fail to take into account the Eternal September effect.[1] Some people are aware, but others are not because they're new (or for various other reasons, such as just beginning to get interested, etc.), and new people will always be coming along.

Also, some people might know but might have forgotten or had their attention shifted to something else. It's good to periodically bring it back.

There will always, always be a need to inform people. Providing information and good/better arguments for one's position will always be a valid and valuable service.

Acting constructively on that information is also needed, of course, but they're not mutually exclusive.

[1] - https://en.wikipedia.org/wiki/Eternal_september


No it's not; no one is rewriting their front ends every year to change frameworks. React was released 6 years ago, and Angular fully released about 3 years ago. Just because some person decided to build something that solved their problem and may help others doesn't mean you need to adopt it and change everything. This argument that everyone needs to slow down because some people don't like hearing about change is ridiculous.

No other platform is as widespread and accessible as the web. There are more developers able to target it, for sites spanning a massive range of uses, than any other platform ever. So of course there are going to be different needs for different use cases.

