Hacker News | polygamous_bat's comments

Money laundering is also a pretty large use case (https://www.justice.gov/usao-dc/pr/additional-12-defendants-...)


> There is no credible accusation that X itself is tricking people here.

That is a purely subjective opinion: I have talked to elderly people who assumed "blue checkmark = celebrity" and were therefore confused about why there are so many such interactions on trivial posts.


Ignorant people sometimes have stupid thoughts. This is not an actual problem, or anything that governments or media companies need to fix.

Even under previous Twitter management, there were a lot of verified accounts who weren't celebrities by any reasonable definition. So only a moron would have ever believed that "blue checkmark = celebrity". We can't protect morons from themselves and it's pointless to even try.


Calling people stupid is a common and low-quality excuse to not regulate. It's part of how societies start to fail. If some percentage of people are mistaken about something, the reality of that is all that matters, regardless of how stupid you personally think those people are.


Nah. There's no evidence to support your claim. You're just making things up to try to find a plausible, friendly sounding excuse to justify government censorship. Citation needed.

Life is hard. It's even harder when you're stupid. Government regulation can never change that reality.


>government censorship

If I trick someone, I get a fine; if a multi-billion-dollar company does it, that's censorship?


The multi-billion-dollar company hasn't tricked anyone here, so your comment makes no sense.


> Ignorant people sometimes have stupid thoughts. This is not an actual problem, or anything that governments or media companies need to fix.

The European Union thinks that it is an actual problem though, one that governments or media companies need to fix.


Whoa, there's nothing trivial about ten thousand mechanical turks wishing each other good morning on a loop bub


> To take a hypothetical extreme: If all cars but one on the road were Teslas, it would not be meaningful to point out that there have been far more fatalities with Teslas.

However, in such a case, “base rate fallacy” would prevent you from blaming Tesla even if it had a 98% fatality rate. How do you square that? What happens if other companies aren’t putting self driving cars out yet because they aren’t happy with the current rate of accidents, but Tesla just doesn’t care?


> What happens if other companies aren’t putting self driving cars out yet because they aren’t happy with the current rate of accidents, but Tesla just doesn’t care?

You handle it the same way any new technology is introduced. Standards and regulations, and these evolve over time.

When the first motor car company started selling cars, pedestrians died. The response wasn't to ban cars altogether.

The appropriate response would be to set some rules, examine the incidents, see if any useful information can be gleaned.

And of course, once more models are out there with self driving abilities, we compare between them as well.

Here, we can get better data than what's in the article: what is the motorcyclist death rate for cars with no automated driving? If, per mile, it's higher than for Teslas with automated driving, then Tesla is already ahead. The article is biased right from the get-go: it compares only cars with "self-driving" (whatever that means) capabilities, and inappropriately frames the conversation.

If I'm a motorcyclist, I want to know two things:

1. If all cars were replaced with self-driving Teslas, am I safer than the status quo?

2. If all self-driving Teslas were replaced with other manufacturers' self-driving cars, am I safer than the status quo?

The article fails to answer these basic questions.
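The per-mile comparison above is easy to get wrong with raw counts. Here's a minimal sketch of the arithmetic; the numbers are entirely made up for illustration and are not real fatality statistics:

```python
def fatality_rate(deaths: int, miles: float) -> float:
    """Deaths per million miles driven."""
    return deaths / miles * 1_000_000

# Hypothetical, made-up numbers purely to illustrate the comparison:
tesla_deaths, tesla_miles = 5, 400_000_000            # self-driving Teslas
baseline_deaths, baseline_miles = 120, 3_000_000_000  # human-driven cars

tesla_rate = fatality_rate(tesla_deaths, tesla_miles)
baseline_rate = fatality_rate(baseline_deaths, baseline_miles)

# The raw counts (5 vs. 120) say nothing by themselves; only the
# per-mile rates are comparable across fleets of different sizes.
print(f"Tesla:    {tesla_rate:.4f} deaths per million miles")
print(f"Baseline: {baseline_rate:.4f} deaths per million miles")
```

With these invented numbers the Tesla rate (0.0125) is below the baseline (0.04), even though the baseline fleet has more total deaths; with real data the conclusion could go either way.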


> As such, while 5 > 0, and that's a problem, what we don't know (and perhaps can't know), is how that adjusts for population size.

This puts the burden on companies that may hesitate to put their "self-driving" systems out there because those systems have trouble detecting motorcyclists. There is a solid possibility that self-driving isn't being rolled out by others because they have a higher regard for human life than Tesla and its exec.


There can still be product catalogues, and people will still shop in person or through online resellers. You can still start off with a limited-time discount so people can try your product. None of these require advertising; they are organic ways of getting the word out there.


> Letting people communicate freely is a good thing in its own right, and fundamental to so many other good things we enjoy

I would argue that paid advertisement is a force distorting free speech. In a town square, if you can pay to have the loudest megaphone to speak over everyone else, soon everyone would either just shut up and leave or not be able to speak properly, leaving your voice the only voice in the conversation. Why should money be able to buy you that power?


I mean, most town squares have no restriction on using a megaphone, and yet town squares have not been drowned out and rendered useless by megaphones. Even if that did happen, it would be a very poor analogue to generic advertising, which cannot drown out conversation. At best it would be an argument against megaphones over a certain volume, i.e. certain methods of communication might be reasonable to restrict, but restricting the ideas that can be expressed by megaphone is indefensible.


> Why should money be able to buy you that power?

why shouldn't it?

If somebody believes that their message is important enough to outbid everybody else, their message ought to be the one that is displayed.


> If somebody believes that their message is important enough to outbid everybody else, their message ought to be the one that is displayed.

Sometimes (often?) people with a lot of money may not believe in speech but in suppressing speech. However, money should not allow for suppressing speech, for example by buying a giant megaphone and speaking over people.

By your logic, paying people $500 to heckle at your political opponent's rally is fine. It may be legally okay, but it is a moral hazard, and for a better society we should try to better distinguish between "free" speech and "bought and paid for" speech.


If they believe their message is important, they should go grassroots: talk to people and convince people to talk to other people. Trust me, if the message is good, people will volunteer their time.

The reality is that, more often than not, these messages are self-serving and profit-driven, many times borderline fraudulent in their claims, or questionable at best.


> The reality is that more often than not these messages are self serving and profit driven

The reality is that all messages, even those you think ought to be grassroots, are self-serving. It's just that a message you like is self-serving for you as well as for its deliverer, while those "advertising" messages are self-serving for someone other than you (or your tribe).

Therefore, this is just a thinly disguised way to try to suppress the messages of those whose self-interest does not align with your own, rather than an altruistic position.


Because that means something Elon Musk thinks is 0.00001% important outbids 99.99999% of people's opinions on anything.


Good information is valuable. Before the internet existed, people paid good money for newspapers and magazines because they provided good information that people found valuable.


o1 does not show the reasoning trace at this point. You may be confusing the final answer with the <think></think> reasoning trace in the middle; it's shown pretty clearly in r1.
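For concreteness, the visible trace in an R1-style completion can be separated from the final answer with a simple regex. This is a sketch under the assumption that the reasoning is wrapped in `<think>...</think>` tags (as DeepSeek-R1 does); the function name and behavior are my own, not any official API:

```python
import re

def split_r1_output(text: str) -> tuple[str, str]:
    """Split an R1-style completion into (reasoning, final_answer).

    Assumes the chain of thought is wrapped in <think>...</think>;
    everything after the closing tag is treated as the final answer.
    """
    m = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if m is None:
        # No visible trace (e.g. a model that hides its CoT, like o1).
        return "", text.strip()
    reasoning = m.group(1).strip()
    answer = text[m.end():].strip()
    return reasoning, answer

sample = "<think>The user asked for 2+2. That is 4.</think>The answer is 4."
reasoning, answer = split_r1_output(sample)
```

For o1 there is simply no `<think>` span in the API output to extract, which is the point of the comment above.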


Is this different data or different annotation?


I wasn't really referring to the UI so much as to the fact that it does this at all. The thinking in DeepSeek trails off into its own nonsense before it answers, whereas I feel OpenAI's is far more structured.


All you get out of o1 is

    Reassessing directives

    Considering alternatives

    Exploring secondary and tertiary aspects

    Revising initial thoughts

    Confirming factual assertions

    Performing math

    Wasting electricity
... and other useless (and generally meaningless) placeholder updates. Nothing like what the <think> output from DeepSeek's model demonstrates.

As Karpathy (among others) has noted, the <think> output shows signs of genuine emergent behavior. Presumably the same thing is going on behind the scenes in the OpenAI omni reasoning models, but we have no way of knowing, because they consider revealing the CoT output to be "unsafe."


o1 does not output the full CoT tokens, they are not comparable.


> The part of this that doesn’t jibe with me is the fact that they also released this incredibly detailed technical report on their architecture and training strategy. The paper is well-written and has a lot of specifics. Exactly the opposite of what you would do if you had truly made an advancement of world-altering magnitude.

I disagree completely with this sentiment. This was in fact the trend for a century or more (see inventions ranging from the polio vaccine to "Attention Is All You Need" by Vaswani et al.) before "Open"AI became the biggest player on the market and Sam Altman tried to bag all the gains for himself. Hopefully, we can reverse this trend and go back to a time when world-changing innovations were shared openly so they could actually change the world.


Exactly. There's a strong case for being open about advancements in AI. Secretive companies like Microsoft, OpenAI, and others are undercut by DeepSeek and any other company on the globe that wants to build on what they've published. Politically, there are more reasons why China should not become the global center of AI and fewer reasons why the US should remain the center of it. Therefore, an approach that enables AI institutions worldwide makes more sense for China at this stage. The EU, for example, now has even less reason to form a dependency on OpenAI and Nvidia, which works to the advantage of China and Chinese AI companies.


Even the "Language Models are Unsupervised Multitask Learners" paper was pretty open; I'd say even more open than the R1 paper.


I'm not arguing for or against the altruistic ideal of sharing technological advancements with society; I'm just saying that having a great model architecture is really not a defensible value proposition for a business. It may be more accurate to say that publishing everything in detail indicates the advancement is likely not defensible, not that it isn't significant.


Here is a great interview. They don’t seem to care that much about money. They are already profitable.

https://www.chinatalk.media/p/deepseek-ceo-interview-with-ch...

> Money has never been the problem for us; bans on shipments of advanced chips are the problem.


> I think a better example would be You (AirBnB Host) rent a house to Person and Person loses the house key.

This is not a direct analogue. A closer analogy would be the guest making a copy of the key (why?) without my direct consent (signing a 2138-page "user agreement" doesn't count) and, at some later point when I am no longer renting to them, losing that copy.


I'm still much more interested in the answer to who is liable for the robbery.

Just the robber? Or are any of the key-copiers (as opposed to the key-losers) liable as well?


I don't really care about the answer to that specific question, where there's only one household.

What I will say is that the guy who has copies of 20,000 people's keys should get in trouble if he loses his hoard.

