Hacker News

There's a lot of problems with AI that need some carefully thought out regulation, but infringing on rights granted by IP law still isn't theft.


It's theft. But not all IP theft, or theft in general, is morally equivalent. A poor person stealing a loaf of bread or pirating a movie they couldn't afford is just. A corrupt elite stealing poor farmers' food or stealing content from small struggling creators is not.


>pirating a movie they couldn't afford is just

I wish this argument would die. It's so comically false, and is just used to allow people to pave over their cognitive dissonance with the real misfortunes of a small minority.

I am a millennial and rode the wave of piracy as much as the next 2006 computer nerd. It was never, ever, about not being able to afford these things, and always about how much you could get for free. For every one person who genuinely couldn't afford a movie, there were at least 1000 who just wanted it free.


Speak for yourself. For many more it's about being unwilling to support the development of tech that strips users of their ability to control the devices that they ostensibly own.

I happily pay for my media when there's a way to do so, without simultaneously supporting the emplacement of telescreens everywhere you look.


> For every one person who genuinely couldn't afford a movie, there were at least 1000 who just wanted it free.

You have this backwards. There are way more poor people who can't afford things than there are people who can afford whatever they want.


Genuinely cannot afford is different than can't afford.

Genuinely cannot afford means you don't have the $15 to buy the movie after paying for necessities.

Cannot afford tends to mean "I bought a 72" OLED last week so no way I'm spending another $1400 on a movie collection".


How many people can afford to pay cash for a 72" OLED?

If you have to use credit to "afford" such things, then you can't actually afford them.


Right, you should be stealing them?


When you steal a loaf of bread, somebody's loaf of bread is missing. That's worlds apart from making an unauthorized copy of something.


Ask yourself: who owns the IP you're defending? It's not struggling artists, it's corporations and billionaires.

Stricter IP laws won't slow down closed-source models with armies of lawyers. They'll just kill open-source alternatives.


How do you expect open source alternatives to exist when they cannot enforce how you use their IP? Open source licenses exist and are enforced under IP law. This is part of the reason AI companies have been pushing hard for IP reform: they want IP laws decimated for thee but not for me.


Under copyright laws, if HN's T's & C's didn't override it, anything I write and have written on HN is my IP. And the AI data hoarders used it to train their stuff.


Calling a HN comment “intellectual property” is like calling a table saw in your garage “capital”. There are specific regulatory contexts where it might be somewhat accurate, but it’s so different from the normal case that none of our normal intuitions about it apply.

For example, copyright makes it illegal to take an entire book and republish it with minor tweaks. But for something short like an HN comment this doesn’t apply; copyright always permits you to copy someone’s ideas, even when that requires using many of the same words.


People seem, either intentionally or unintentionally (largely from being taught by the intentional ones), not to know what training an AI involves.

I think most people think that AI training means copying vast troves of data onto ChatGPT hard drives for the model to actively reference.


Let's meet in the middle: only allow AI data hoarders to train their stuff on your content if the model is open source. I can stand behind that.


Uh no.

a) The model and the data

b) Why are we meeting in the middle?


I never advocated "stricter IP laws". I would however point out the contradiction between current IP laws being enforced against kids using BitTorrent while unenforced against billionaires and their AI ventures, despite them committing IP theft on a far grander scale.


And it doesn't even infringe on IP rights.


Agreed. Regulate AI? Sure, though I have zero faith politicians will do it competently. But more IP protection? Hard pass. I'd rather abolish patents.


I think one of the key issues is that most of these discussions are happening at too high of an abstraction level. Could you give some specific examples of AI regulations that you think would be good? If we actually start elevating and refining key talking points that define the direction in which we want things to go, they will actually have a chance to spread.

Speaking of IP, I'd like to see some major copyright reform. Maybe bring down the duration to the original 14 years, and expand fair use. When copyright lasts so long, one of the key components for cultural evolution and iteration is severely hampered and slowed down. The rate at which culture evolves is going to continue accelerating, and we need our laws to catch up and adapt.


> Could you give some specific examples of AI regulations that you think would be good?

Sure, I can give you some examples:

- deceiving someone into thinking they're talking to a human should be a felony (prison time, no exceptions for corporations)

- ban government/law-enforcement use of AI for surveillance, predictive policing or automated sentencing

- no closed-source AI allowed in any public institution (schools, hospitals, courts...)

- no selling or renting paid AI products to anyone under 16 (free tools only)


> - deceiving someone into thinking they're talking to a human

This is gonna be as enforceable as the CAN-SPAM Act. (i.e. you will get a few big cases, but it's nothing compared to the overall situation)

How do you prove it in court? Do we need to record all private conversations?


If you think spam is bad now imagine if trillion dollar corporations could do it. Just because something isn't perfect doesn't mean it doesn't help.


I like where you're going. How about we just ban closed source software of any kind from public institutions?


> Could you give some specific examples of AI regulations that you think would be good?

AI companies need to be held liable for the outputs of their models. Giving bad medical advice, buggy code etc should be something they can be sued for.


90% of the time I'm pro anything that causes a problem for the big corporations, but buggy code? C'mon.

It's a pile of numbers. People need to take some responsibility for the extent to which they act on its outputs. Suing OpenAI for bugs in the code is like suing a palm reader for a wrong prediction. You knew what you were getting into when you initiated the relationship.



