Hacker News | bko's comments

I never understood the point of this kind of comment. It doesn't add any value to the discussion. It's basically two paragraphs with a presupposition (openai bad) and a note about how virtuous the author is for canceling his subscription. No explanation, argument, or nuance. It's just virtue signaling. Actually... I guess I do know the point of this kind of comment. I just don't know why these comments get upvoted, even if you do agree openai bad.

I think people forgot how bad it was. It was much more fragmented before, but instead of being fragmented by service it was fragmented by time. Sure, you had access to Seinfeld, but you could watch one or two episodes a night, at 8pm and 11pm.

I also remember basic cable without any movie channels was around $60, and with some movie channels it was >$100. And that's not inflation adjusted. You can easily get 3 or 4 of the top streaming services for $100 today.

Finally, claiming there are more ads on these services is a joke. There were roughly 20 minutes of commercials for every hour of programming, meaning about 1/3 of the time you were watching commercials. And not just any commercials, the same commercials over and over. There were even cases of shows being sped up on cable to squeeze in more commercials.

I get it, everyone wants everything seamless and for next to nothing, but claiming that 90s cable was even comparable is absurd.

https://www.digitaltrends.com/home-theater/how-networks-spee...




Seinfeld was syndicated. It aired for a long time on TBS, and also on Comedy Central after 2021, Nick at Nite briefly, and TV Land more recently.

I'm not sure what your point is.


Seinfeld only ran until 1998. Not sure what people buying the rights in 2021 has to do with the OP's comment.

> Everyone wants an untrackable unblockable currency

What are you talking about? Crypto is defined by its trackability (an immutable, permissionless, verifiable ledger of every transaction in history). Please refrain from commenting on things you're unfamiliar with.


That's not universally true; there is a class of privacy coins whose transactions are not (at least in theory) traceable.

I'd argue that untraceability is actually closer to the original anarchist vision, and that the transparent ledger is a bug of the first implementation, not a feature. It recreates the very problems this kind of money was originally trying to solve (i.e. electronic money without government overreach, the US using the modern banking system as a political pressure tool, etc.).


Most accidents happen because people are human: they aren't paying attention, are inebriated, aren't experienced enough as drivers, or are reckless.

It's not fair to say that vision-based models will "make the same mistakes people do", since >99% of the mistakes people make would be avoidable if those issues were addressed. And a computer can easily address all of those issues.


Which means the mistakes vision-based models make today are unique to them.

Why make things more complicated than they need to be? Humans don't have lidar and we are the only intelligence that can reliably drive. Lidar just seems like feature engineering, which has proven to be a dead end in most other AI applications (bitter lesson).

https://www.cs.utexas.edu/~eunsol/courses/data/bitter_lesson...


> Why make things more complicated than they need to be? Humans don't have lidar and we are the only intelligence that can reliably drive.

Because we want self driving cars to be safer than human driven cars.

If humans had built in lidar we would use it when driving.


Read the comment again. It's not that vision is "good enough"; it's that feature engineering doesn't work.

Self driving cars are not equipped with human brains so this doesn’t really make sense.

“We should achieve self driving cars via replicating the human brain” strikes me as an incredibly inefficient and difficult way to solve the problem.


Then you deeply underestimate how difficult the problem is, and deeply misunderstand where all the effort has been spent in developing autonomous vehicles.

If all the effort has been spent in trying to replicate the human brain then I am comfortable saying that is a mistake.

We have a tool that can tell with great accuracy how far away an object is. The suggestion that we should ignore it and rely on cameras that have to guess it because “that’s how humans work” is absurd, frankly.


Before you can learn how far away an object is, you must decide: which laser return corresponds to which object? In fact, what counts as an object? Where does a tree stop and become a fallen tree branch? Is that object moving towards me? Does the apparent velocity of this point represent the fact that the object is moving, or rotating, or flexing, or dividing, or all four? Is that object moving towards me but that's ok because it's a car that's going to stay in its lane? What's a lane? What's my laser return for where the lane is? Should I stop at this intersection? What's my laser return for whether the light is red? Am I in the blind spot of the car in front of me? Is he about to shift into my lane because he doesn't see me? What laser return do I get to tell me whether his indicator is on?

The problem of understanding what is happening in front of you while driving is preposterously more complicated than just a point cloud of distances. That is .01% of the problem. To solve the remaining 99.99%, you need interpretation of photons and sound waves into a semantic understanding that gives you predictive power to guess how the physical world will evolve and avoid breaking the rules of the road. Show me a mechanized way of understanding the causes of how the physical structure of the world is about to evolve, and I'll show you something that is imitating a human brain, however poorly. The cameras give you _plenty_ of data to determine 3D structure, at a higher resolution than the laser, without being emissive, for cheaper. It's a completely reasonable approach to focus your limited computational hardware on interpreting the data you have instead of adding more modalities with their own limitations that (according to nature) are demonstrably unnecessary.

The world is more complicated than slogans and pitchforks and Elon Bad.


People get into accidents not because they don't know with great accuracy how far away an object is.

They get into accidents because they make bad decisions and get distracted.

If AI makes better decisions and doesn't get distracted, the number of accidents will already be greatly reduced compared to humans.

Having lidar in addition to cameras will be of marginal benefit (but a benefit, to be sure) once you realize what is actually important: proper modeling of the environment. And for that, cameras provide better data than lidar, so you will still want cameras anyway.

The focus on lidar is really a red herring. You merely push the computational budget you have to understanding a point cloud instead of vision. You're back to square 1 of "how can I properly model the environment given this sensory modality". This is the part that essentially needs human level understanding of the world that you're missing.

As the other commenter says, you deeply misunderstand the problem.


This knee-jerk reply is old and tired, and the counterarguments are well-trod at this point. Even if cameras-only can build a car that’s as good as humans, why should we settle for “as good as” humans, who cause 40,000 fatalities a year in the US? If we can do better than humans with more advanced sensors, we are practically morally obligated to do that.

I would bet a large portion of fatalities comes from distracted or bad driving, not from human sight being insufficient.

Phrased a different way, I would expect lidar to help marginally, but it is safer driving in general that will bring down fatalities. This could be done with cameras.


Yes! The smart and nuanced panoply of replies to the GP are a wonderful counterbalance to people "just saying things that pop into their head" -- which is unfortunately how I view a lot of human speech nowadays :/

> we are the only intelligence that can reliably drive.

Science would like to point out that rats can also learn to drive

https://theconversation.com/im-a-neuroscientist-who-taught-r...


yeah but not reliably, they often totally space on their commitments to pick you up from the airport, etc

If you had to choose between picking someone up at the airport or dragging a slice of pizza twice your size down the NYC subway stairs, what would YOU do?

Humans can drive with eyes only, but we are better drivers when we can also use other senses like hearing. If humans had lidar we would use it when driving.

The bitter lesson I think is a great way of explaining the logic behind Tesla's strategy. People aren't getting it.

Whether or not it'll actually work remains to be seen, but it's a perfectly reasonable strategy. One counterargument would be that the bitter lesson can be applied to LIDAR too; you don't have to use that data for feature engineering just because it seems well suited for it.


Don't cars already use a ton of sensors that don't reproduce human senses and ways of doing things?

There was a small group of doomers and sci-fi obsessed, terminally online people who said all these things. Everyone else said it's a better Google and can help them write silly haikus. Coders thought it could write a lot of boilerplate code.

I think you're overthinking it. She probably just has a lot of real-people connections and drives the algo toward meaningful interactions. When a ghost logs in, they don't know what to show, so they default to "general" spam, which is just AI-generated women.

This is very likely.

It reminds me of people who browse YouTube logged off: they see garbage, spam, rage bait, and sexy girls doing sexy stuff.

But I browse logged in and my carefully curated subscriptions mean I mostly get good quality, relevant recommendations, and almost zero rage bait or outrageous stuff.


The algorithm is not optimised for meaningful interactions; even 10 years ago I couldn't get it to mostly show friends and family, even after fighting it for a week.

The algorithm is optimized to show you content you tend to engage with. You couldn't get it to show you meaningful interaction because you didn't engage with it.

Do your friends and family interact on Facebook? You could run an experiment to see if it adapts.

> When a ghost logs in ... so default to "general"

I do this with youtube - and I get to see what is broadly popular.

It is grim.


Lol! "Facebook's not bad, you're just a loser"

I suppose at the margin they can compete. But there's also a lot of uncertainty when adding a single route can wipe out your entire business. Also, private services often pick off the most profitable opportunities, leaving the leftover, most undesirable services to the public option. Consider student debt. If you're going to Harvard for an MBA or to Yale Law School, there are private options like SoFi that will give you a better rate than federal loans. That serves as a selection bias and leaves the worst, highest-risk loans that no private lender would fund to be backed by the government.

Or another example: Google Analytics. It's an awful service in many ways, but because it's free and can afford to be free, it captured 80-90% of web analytics. I would not want to compete with GA.


> There are plenty of ways to evaluate that without charging a fee. You can track utilisation without needing to charge for it.

The point is that utilization is dramatically different when something is "free". Many times the marginal user values it just above 0, and having that person on reduces the value for everyone else. Charging something, anything, weeds out the very marginal people you don't want using the service. Same concept with email: if we had a marginal fee to send emails (a fraction of a cent), it would solve spam pretty much overnight. Things shouldn't be "free".

The student in your example would gladly pay, as he has no other options.


> "people you don't want using the service"

That's a subjective evaluation, which doesn't have clear criteria.

> Things shouldn't be "free"

More good reasons to hate this government.


I think the market opportunity is to create a standard and eventually get labels to include your standard in addition to their traditional labeling.

Figure out the variables (like shape, inseam, width, whatever else) for each article of clothing. Then freely distribute this and begin to catalog popular items. You can crowdsource some of this. The idea is that people will look up clothes on your scale.

Then after you index a lot of clothes, you can search by exact measurements, and then you can hit up clothing manufacturers to use their proprietary code in their marketing or promote their brands on your site.
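The catalog-and-search idea above can be sketched in a few lines. This is just an illustration under made-up assumptions: the `Garment` class, its measurement fields, the example items, and the `find_matches` helper are all hypothetical, not an existing product or standard.

```python
# Minimal sketch of a measurement-indexed clothing catalog:
# each garment carries its measured dimensions, and a search
# returns items within a tolerance of the shopper's measurements.
from dataclasses import dataclass


@dataclass
class Garment:
    brand: str
    style: str
    chest_cm: float
    waist_cm: float
    inseam_cm: float


def find_matches(catalog, chest, waist, inseam, tol=2.0):
    """Return garments whose measurements are within `tol` cm of the query."""
    return [
        g for g in catalog
        if abs(g.chest_cm - chest) <= tol
        and abs(g.waist_cm - waist) <= tol
        and abs(g.inseam_cm - inseam) <= tol
    ]


# Hypothetical crowdsourced entries.
catalog = [
    Garment("BrandA", "slim jean", 96, 81, 81),
    Garment("BrandB", "relaxed jean", 104, 90, 79),
]

matches = find_matches(catalog, chest=95, waist=80, inseam=82)
```

A real index would want more dimensions (shape, rise, stretch) and fuzzier matching, but the core is just a filter over measured attributes rather than vendor size labels.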


This works in theory, until you discover as the article did, that all manufacturers use one clothing shape — hourglass — and so if your measurements aren’t “bust == hips, waist := bust - 10” then your search engine finds few or no results.
