
"If an algorithm doesn't work, then responsibility lies with the person who deployed the software. Real people cannot become laboratory rats for the testing of an algorithm."

I'm wondering what the HN opinions are on this view. My guess is that the argument will be that this will stifle innovation. But Facebook has such clout that its algorithms have a very direct effect on people, up to and including the security of a state. I hear Facebook was a major contributor to the ethnic cleansing of the Rohingya. Is algorithm deployment subject to (government) audit? Or are there alternatives to ensure the algorithm does nothing bad?



We tend to judge people by their intentions rather than their results. E.g. personal bugbear: drivers who kill pedestrians or cyclists are largely not punished for it. More abstractly, governments and organisations pushed for adoption of diesel cars out of concern for CO2 emissions; turns out that has killed many people via air pollution. We don't generally hold those organisations culpable for those deaths.

I don't think "algorithms" changes this at all. People who have bad intentions are guilty. People who ignore or bypass established safety protocols are reckless. But people who made a good-faith effort to do something legitimate and turned out to be mistaken? There's no real precedent for punishing those, and it doesn't seem fair to make a special exception just because technology is involved.


> But people who made a good-faith effort to do something legitimate and turned out to be mistaken? There's no real precedent for punishing those

Yes there is: if they failed to thoroughly consider the consequences of their actions, we would call it negligence.

In law this is called "mens rea", or "guilty mind", and it exists in a hierarchy (accidental, negligent, reckless, knowing, intentional), and we punish people differently based on which level of guilt they have.

There's another axis that courts consider when deciding charges and sentences called "actus reus", or "guilty act", and it's supposed to represent how responsible the person is for the harm rendered. This also exists on a spectrum, from "proximate cause" (you did the thing that immediately caused the harm) to "coincidence" (you stabbed a voodoo doll and the person got hurt, with no chain of causality linking the two events).

I would argue that Facebook engineers were negligent with respect to, like, the fake news fiasco, though probably not very responsible, being enablers rather than actors. Same with the racial loan targeting scandal.

The depression experiment, I would call reckless and a proximate cause of whatever harm was done. There is no way in hell that a secret experiment to make people depressed by manipulating their news feeds would pass an IRB, and Facebook should have been punished for it.


When your actions have an impact on the world it's meaningless to make a distinction between negligence and criminal intent. The victims of the Whatsapp riots in India or of the Facebook-driven pogroms in Myanmar don't care if Facebook was the actor or the enabler. These people are dead, they don't have the luxury of caring.


This argument would call for an end to the distinction between manslaughter and homicide, as an example.

It’s a great example of why we don’t allow victims, or relatives of victims, to exact their own justice. Because they don’t care about distinctions that we, as a society, have decided are important.


> This argument would call for an end to the distinction between manslaughter and homicide, as an example.

Corporations should probably not enjoy that distinction in the same way that people do. The sooner we dispense with the harmful fiction of corporate personhood, the better. Either way, let’s not conflate people and the law with corporations and the law.


> drivers who kill pedestrians or cyclists are largely not punished for it

You might want to qualify this with the part of the world you're speaking about (especially given that the article is on a German site). For instance, in the Netherlands, given a collision between a motorized vehicle and a non-motorized one (pedestrian or bike), the motorized vehicle is automatically at fault (for liability and insurance purposes, etc.).


Everybody knows that the Netherlands has a strong distinction between manslaughter (or whatever they call it there) and murder. Intent matters; that is the point that was being communicated.


Audits won't change much if the algorithm is learning from new data, which is all the rage right now. You'd have to inspect the data, and think about all possible cases. Innovation doesn't have to be stifled by regulation, but ad revenue possibly will be, with weaker targeting accuracy. Is that really so bad for the consumer? Maybe I will finally start seeing ads that broaden my interests, instead of ads too similar to what I already own and know to be of interest to me? Or maybe each product should have an ad-free paid version?


You wonder what happened. Back then, when biotech and genetic engineering were going to become hugely impactful, Paul Berg and colleagues organized the Asilomar Conference, and the impact of that meeting is still felt in the field. Now we have deep learning on Google-sized natural language corpora, and there isn't a word on its impact on society.


I can't say I agree that there isn't a word. There have been plenty of conversations about it, resulting in things like https://www.partnershiponai.org/

How honest those efforts are is, in my opinion, worth questioning, but they are talking about it.


> I hear Facebook was a major contributor to the ethnic cleansing of the Rohingya

Yeah, people claim that. But does the claim hold water? I think Facebook was just used because it was a convenient way to communicate, and if they hadn't used Facebook they would have used any of the thousands of other ways to communicate.


Except that it's not a "claim", it's fact. In Myanmar, as in many other developing economies, Facebook simply "is" the internet via its Free Basics program.

Facebook was zero-rated with the state-owned PTT, Myanma Posts and Telecommunications. The majority of people cannot afford the kind of data plans people in the West take for granted. For many, their "internet" is largely confined to Facebook's walled garden. So no, they don't have "thousands of other ways" to communicate. Practically speaking, the "convenient" way also happens to be the only way.

Maybe do a bit of research on the issue before summarily dismissing it.


It is curious how Free Basics is largely overlooked in discussions about Facebook's role as a publisher or platform with respect to speech and access.

When it's the only way to access the internet in an affordable manner, Facebook becomes the de facto authority in control of what readers have access to.

Dangerous.


Indeed, and FB's internet.org/Free Basics has amassed 100 million users in developing countries at this point.

Curiously FB's Free Basics pulled out of Myanmar recently:

https://theoutline.com/post/4383/facebook-quietly-ended-free...


Your argument is that Facebook was the de facto communications medium in the region, so it’s a fact that they contributed?

Did Boeing contribute to 9/11?


Here, Facebook was a medium designed for maximum dispersion of information among groups of people (remember the pious crusade, "connecting the world"?). Carried to the point where waves of toxic, destructive information can propagate across that medium, that is just an insane thing to unleash upon the world without some really rigorous safety measures. The problem is that Facebook intentionally optimized for virality and completely ignored, repeatedly and aggressively, critiques that not all information dissemination is inherently good.

I don't know whether that is or is not corporate negligence, but we should consider it so. It is detrimental to humanity, and they have continued to try to ignore the immense responsibility the world now finds them holding. If they cannot very quickly act as responsible stewards of this immense power, they should not be entrusted with it. That is how government has always worked, and guess what! Making infrastructure public so that it can be regulated, monitored and managed for the public good turns out to be a good thing. FB is social infrastructure and needs to either accept and act on its deep responsibilities or cease trying to be social infrastructure.


Yes they were in fact complicit. They provided and subsidized a platform and then didn't bother to enforce their own community standards on that platform.

Perhaps read the BSR Myanmar report[1] and then read FB's own blog post where they agree with many of those findings in the BSR report[2].

[1] https://fbnewsroomus.files.wordpress.com/2018/11/bsr-faceboo...

[2] https://newsroom.fb.com/news/2018/11/myanmar-hria/

Your comparison of Boeing and 9/11 is beyond absurd even as a strawman.


That's an odd comparison to make; Boeing did not exert control over the passengers, crew, or flight path. Facebook exerts control over membership, access and exposure.


I think letting governments have power to control which algorithms get deployed is a bad thing. Instead, individuals need to cultivate a better intuition on how and why granting a large amount of power to a single entity is detrimental for them in the long term. They also need to develop tools and heuristics that lessen the probability of this happening.

Power needs to be distributed as much as possible, so that no single entity can do a large amount of damage and have a large amount of leverage. In other words, Facebook only has as much power as the people using it let them have. The people need to become acutely aware of this and act accordingly.


No. We have elected representatives to study these questions in depth. You can't ask every citizen to be an expert in ethics, genetics, medicine, computer science, virology, etc. so they can make the obvious/good/true/rational choice.

That's why societies have laws. So we can know what to do or not when faced with such questions instead of passing the burden to the individual under the guise that it's "freeing people".


>No. We have elected representatives to study these questions in depth.

Except the elected representatives are dumb as rocks. I'm sure they studied how the cookie law would work really in depth before voting on it. That's why the cookie law worked out so perfectly, right? Oh wait, it didn't do ANYTHING other than cost companies millions of euros and wasted the time of the general public. It had ZERO benefits and a CS student could tell you that the way it was designed it could never have any benefits. I guess the only benefit of the law is to show that lawmakers continue to be incompetent at technology.

>That's why societies have laws. So we can know what to do or not when faced with such questions instead of passing the burden to the individual under the guise that it's "freeing people".

Societies have laws so that the regular person thinks that everyone plays by the same rules, but that's not true in practice. Laws are intentionally written to be vague and are not consistently enforced. This has led to a situation where there are so many laws that everyone breaks several every year, and as a result, if somebody higher up the chain doesn't like what you're doing, they can use the state to harass you.

If laws were truly about knowing what to do, then it would be impossible to graduate from mandatory education without having studied every single law that will apply to you in most circumstances. Furthermore, vague laws wouldn't be written and would get removed from legal codes. Likewise, laws that are only selectively enforced would be removed, because laws can only be just if they apply to everyone consistently.


> That's why societies have laws. So we can know what to do or not when faced with such questions instead of passing the burden to the individual under the guise that it's "freeing people".

Yes, but laws can be devastating just as easily. In an age where the abuse and misuse of the ever-growing body of laws is so rampant, I find it hard to believe that having even more badly written laws is a good solution.

Legislation is at best a necessary evil: baggage, a clunky, heavy stone which you have to lug with you all day, every day. It serves a purpose, but you want to have as little of it as possible.


I get where you’re coming from, but it seems any proposal that requires every single person to change in order to be protected from some harm is destined to fail, or would at least be incomplete and take decades to establish itself. Judging by how populism is tearing our societies apart, we may not have that time.

Indeed, the problem of coordinated action is so prevalent in any society that we have developed a host of tools to tackle it: morality and religion (“don’t eat pork”), heuristics (“no free lunch”), and, yes, government (SEC, FAA, FDA, ...).

The adjacency to free speech may increase the downside risk. But if the alternative is between government intervention and what is essentially “before flying, check that the pilot does not appear drunk”, the former seems to be the obvious choice. If the industry or the public wish to avoid that fate, they would have to come up with something in the middle: self-restraint (ha), or possibly a private certification scheme like the MPAA.

It’s also notable that we do have examples very close to the issue of free speech that did not go down the slippery slope, namely public-airwaves television (where regulation is somewhat silly, but in no way restricts viewpoints, other than the view that sex is a healthy activity), and the aforementioned MPAA.


> I get where you’re coming from, but it seems any proposal that requires every single person to change in order to be protected from some harm is destined to fail, or would at least be incomplete and take decades to establish itself. Judging by how populism is tearing our societies apart, we may not have that time.

Perhaps we may not have the time, but I'm unconvinced it is so. There is also no need for every single person to change, just a critical mass and not necessarily drastically either. I propose that even some awareness of this issue in the majority would be enough.

It is also not unprecedented that the overall stance of society changes on some issue drastically over time. I'm thinking of ideas such as the abolition of slavery, women's suffrage, the acceptance of non-white people as equal members of society (yes, the change is not complete, but it is drastically different compared to a hundred years ago), the awareness of sexually transmitted diseases and so forth.

The awareness of the danger of letting any single corporation have too much power seems like a good addition to this class of ideas.


>If the industry or the public wish to avoid that fate, they would have to come up with something in the middle: self-restraint (ha), or possibly a private certification scheme like the MPAA.

Sounds to me like you've accepted government tyranny as being the default. You also make it sound as though government intervention fixes all the problems - it doesn't. You STILL have to check whether the pilot is drunk before the flight and sometimes they still are drunk during the flight even with all of the regulations in the industry.


Sounds to me like you have a problem with nuance: meat inspectors != tyranny, if only because they are widely accepted, and created & overseen by democratically elected governments.

And either you never fly or frequently lie: there’s no chance they let you examine the pilot as a normal passenger on a commercial flight. Plus, being drunk was obviously just an example. Are you going to subject them to a three-day practical examination? And how are you going to provide tested pilots conducting those exams, without being lost in the recursion of your little populist fantasy?


The whole point of having a government is to deduplicate the work that each individual would otherwise have to do. Instead of constantly having to second-guess whether the company you're dealing with has malicious intentions or is acting negligently (something which you might not be able to notice without specialized knowledge), you can rely on the government having set a standard of legal behavior that the company is incentivized to follow. Any attempt to decentralize power that also decentralizes the work required to exercise that power is going to fail in the majority of cases.

That doesn't mean you can only choose between "anything goes" and "here's a list of government-approved algorithms, everything else is illegal". But it does mean that instead of individuals developing tools and heuristics to protect themselves, the government should mandate the creation of such tools.

That's essentially what GDPR is about. Instead of having to find out for yourself how your data is being used, the company needs to tell you. Instead of having to write a crawler yourself to get your data back out, the company has to provide it to you. And so on.

The power of making the final decision still remains with the user, but the government's job is to ensure that the decision is as easy as possible.


> That doesn't mean you can only choose between "anything goes" and "here's a list of government-approved algorithms, everything else is illegal". But it does mean that instead of individuals developing tools and heuristics to protect themselves, the government should mandate the creation of such tools.

I agree with almost everything you wrote in your comment (including this quoted paragraph), but still maintain it is paramount that power be decentralized as much as possible. Similarly, that doesn't mean the only choices are between "power is completely decentralized such that no entity has greater power than any other" and "power lies solely within one central entity". Instead, the power of any government, corporation or organization should be kept bounded, with strict, irremovable limitations. The exact method for achieving this goal remains somewhat elusive, but I think the greatest prosperity has occurred in those instances where it was achieved.

The trouble with solely relying on governments is scope creep. Seemingly inevitably the government eventually gravitates towards regulating more and more, until it is effectively prescribing an allowed list of algorithms, burdening the whole ecosystem in the process. You mention GDPR, but in order for the argument to stay balanced, in the same breath you should also mention Articles 11 and 13 of the EU Copyright Directive.

I also consider it essential for individuals to take matters into their own hands to some degree. They absolutely should develop tools and heuristics that help them achieve the goals they want. Should some of the burden be taken away from the individual, by codifying tried and true principles into law? Of course!

As a closing thought, perhaps the model of the huge, centralized social network is ultimately a deeply flawed one and no amount of regulation can help it. Perhaps the end game, once the average of all individual desires has been codified into law, is that the traditional social network becomes economically non-viable. Centralizing all communication under a single large entity certainly sounds deeply wrong to me personally. And if it is so, the users will eventually realize this.


>That doesn't mean you can only choose between "anything goes" and "here's a list of government-approved algorithms, everything else is illegal". But it does mean that instead of individuals developing tools and heuristics to protect themselves, the government should mandate the creation of such tools.

Except the government doesn't understand even the basics of how the internet works. And you want them to regulate the specifics of algorithms?

>That's essentially what GDPR is about. Instead of having to find out for yourself how your data is being used, the company needs to tell you. Instead of having to write a crawler yourself to get your data back out, the company has to provide it to you.

Here's an idea: DON'T GIVE OUT THE DATA IN THE FIRST PLACE. Ask for more controls over your data in browsers, because none of these rules are going to actually protect you. Yes, Google in Europe can't legally steal your data, but do you think a Chinese company cares? Of course not, because the EU does not have jurisdiction over them. All GDPR did was make European and above-the-board corporations and citizens worse off for some security theater, but they didn't actually deal with the problem.

GDPR has also made using the web frustrating, since every single website now has a pop-up and some sites are outright blocked in the EU. GDPR is a terrible example of government intervention, because it shows that the people making decisions don't understand what they're making decisions about.


GDPR is a direct reaction to large American corporations (Facebook, Google, Microsoft, etc.) not doing the right thing of their own accord for European users. They should make data collection opt-in, not opt-out.

It’s very hard to “not give data out in the first place”. You don’t have to give it out, they’ll do anything they can to take it from you without asking.

I’ve recently had to email Airbnb to cancel my account because I don’t want to be tracked by Facebook, and Airbnb puts that tracking on every page, logged in or not. Airbnb’s view is that I have to create a Facebook account and then set privacy settings there. They refuse to honour Do Not Track headers.

Airbnb won’t even let me log in to delete my account. Instead I have to email them and hope they’ll actually delete my data and account.

When you have companies making things as difficult as possible for users and using sophisticated tax avoidance to crush local competition on price you need legislation with teeth.


Isn't that exactly what A/B testing is? I'd say lack of consent is the unethical part of it.
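For what it's worth, the mechanics are trivial; here's a rough sketch of what deterministic A/B bucketing typically looks like (the experiment name and helper function are made up for illustration). Note that nothing in the mechanism itself asks for consent; that part is entirely up to the operator.

    import hashlib

    def assign_variant(user_id: str, experiment: str,
                       variants=("control", "treatment")) -> str:
        """Deterministically bucket a user into a variant for a named experiment."""
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        return variants[int(digest, 16) % len(variants)]

    # The same user always lands in the same bucket for a given experiment.
    print(assign_variant("user-42", "feed-ranking-v2"))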


If holding people responsible for their actions, whether carried out via an algorithm or not, stifles innovation, I don't care. I mean, yeah, fomenting genocide through algorithms is genuinely innovative. Protecting people is more important.



