What I struggle to understand is how esteemed AI researchers like LeCun and many others could work for such a company and give it even more power and edge. How can they justify this as scientists?
The huge influx of money into research (here AI, but it could be anything in the future), combined with the dysfunctions of academia, is very troubling.
It is not just that. Some people in this field just have questionable ethics, unfortunately.
Some are just childish and have this close-minded, ignorant macho attitude that tough men, as they style themselves, should not care about ethics.
Then there are others who just don't care either way as long as they get paid to play with their toys.
+1. I'm a foreigner from Mexico-- back there, all engineering majors study ethics in school as part of the requirements. I studied abroad in the US and was surprised that not only was there no ethics requirement, but many engineering majors actually shunned anything having to do with the humanities in a very "us vs. them", "we do real work", "this is not important" manner.
It's hard to measure what impact a couple of classes could have, and it would certainly vary from person to person, but it would at least set the stage for conversation and be an acknowledgement of the importance of ethics in engineering.
To add an anecdote to the pile, I went to school in the US and had to pass an ethics in engineering course to complete degree requirements for Computer Science in the late 90s.
I joined a year ago to lead the AI team (Yann is part of it but I am not answering for him here). It's a choice as I don't have to work any longer. Why am I there?
- I believe Facebook's products are good for the world. They have had an extremely positive impact on my family in particular.
- It is the one place where AI can have a really positive impact on the world.
- It has the most talented set of people I have ever worked with. Not just the AI team but every single person I meet there.
- I believe in Zuck. Despite all the bashing, he is one of the most thoughtful and visionary leaders I have worked for.
That said, I don't agree with everything the company has done. But Facebook is a place where you are free to disagree openly, and so far my team and I have always been able to do what we considered the right thing to do.
[Edit: agreeing with the comments that I should have written "is one place" instead of "is the one place"]
> - I believe Facebook's products are good for the world. They have had an extremely positive impact on my family in particular.
I see that as trading off some positive impact at a personal level for a much larger negative social impact. Many people do this quite often in various situations for different reasons. Not holding this against you at this point.
> - It is the one place where AI can have a really positive impact on the world.
You'd need to substantiate how Facebook is "the one place" for making this claim, and also explain how all the evil things the company has done all these years, including news of tracking teens recently, gets somehow compensated for or adjusted against the positive impact you claim it can have. Without looking at all the negative things the company has done, this is just daydreaming, IMNSHO.
> - I believe in Zuck. Despite all the bashing, he is one of the most thoughtful and visionary leaders I have worked for.
Thoughtful and visionary don't necessarily imply that it's good for everyone else. He doesn't seem visionary in what he says or writes. He may have a vision for himself, that's for sure. He's shrewd, cunning, obstinate and all that, but "thoughtful and visionary" on a broader level is really a huge stretch of imagination. Also consider the reason why the WhatsApp founders left the company.
> But Facebook is a place where you are free to disagree openly
It doesn't look like many employees disagree openly in the company, or they don't follow up with disagreements when the CEO and COO shoot things down. The employees at Google, another company which thrives on profiling and tracking people, have shown a lot more disagreement in public in recent times (though only over a few things). I haven't seen anything as vehement, or as frequent, from Facebook employees (I have to research whether something like that has even happened). So there's something else going on in the company (maybe Facebook employees who realize the negative impact of the company and how disagreements aren't encouraged just quit silently?). From the outside it doesn't look like a company that accepts or even allows disagreements. It looks like one person at the top vetoes everything that doesn't match his strategy. Again, consider the reason why the WhatsApp founders left the company.
Thanks for your clear reply. There's only one thing though from what you mentioned that is not vague or subjective:
> It is the one place where AI can have a really positive impact on the world.
Can you develop on this please?
Why do you think a social network (however efficient it may be with e.g. targeted ads) will have a bigger impact on its members versus e.g. AI in healthcare? or finance? or education? which will have a truly global impact.
Not OP, and I don't agree with "the one place", but I think I might see his point from a content moderation perspective.
The possible effect of AI in other fields I think is overstated, or worries me because it might take jobs. I'm skeptical about AI in education and don't really see how it could fit. I think the negatives in education come from a broken system and not necessarily a lack of "an efficient way to extract information from data".
Dropping more automation in finance will just help extract value more efficiently, not necessarily create it, which if anything I think is a negative impact on the world.
Content moderation sounds like a big positive though. It is necessary, but not really a job many people have or many people "should" have-- there's a lot of violent, gory, traumatic things those people have to sift through.
I do think healthcare can be augmented by AI and doctors working together in a way that doesn't cut jobs and increases the effectiveness of treatment. Whether that will happen or this will be an excuse to cut staff is a worry, though the implications if the effectiveness of diagnosis and/or treatment increases are not to be understated-- quite literally life-changing.
Perhaps, but if you notice that anecdotally, it may be that you and your network are becoming more experienced and therefore more attractive to Facebook.
I feel the same way about Carmack. Such wasted potential that could've been so much better used at a company like SpaceX or even one like Epic Games or Valve.
Ha! Let me explain to you how the sausage is made at [American] universities. You're a fresh-faced researcher with a brand new PhD and you got super, super lucky and landed one of the very few tenure-track positions at a research university available each year. On your first day of orientation, your department head hands you a piece of paper with a number written on it. The number varies every year based on the university and the economy. Maybe this year it's $500,000, maybe a few years from now it's $1M. That number is how much money in research grants you need to bring in to the university over the next 6 years.
If you bring that money in, congratulations, you get tenure; now you get a chance to work on whatever interests you for the rest of your life. But if you fall short of that goal, you're denied tenure; GTFO. You might think being a really good teacher will save you; it won't. You might think that volunteering for some annoying university committee will save you; it won't. You may think writing some really impactful papers (that nevertheless fail to bring in research grant money) will save you; it won't, unless you're really, really, really close to your monetary goal.
So how do you get across the hump? In your first year or two, you think that ethical projects can totally work for you. You bring in a $10K grant here, a $25K grant there. Then you realize you're far short of your goal. So you do what most successful professors do. You start taking DARPA money, DOD money, DOJ money, Amazon or Google or Facebook money. You start building (or facilitating the building of) technology that kills people, that selects people to kill, that monitors populations for trouble-makers or for people susceptible to advertising campaigns. Of course, those aren't the words used in your grant proposals, instead you keep it really abstract. But deep down, you know what your research is going toward.
I know how academia works. Being a theoretical particle physicist, though, I don't have to worry about the societal impact of my research, and I can't be influenced by industry.
You seem to be unhappy with your situation. And you also seem to suggest that the choice boils down to either work in a hypocritical (re ethics) environment in academia or be openly cynical about it and join facebook and the likes.
Well. I think the world is bigger than this. And we always have choice. Be it academic prestige, good salaries, power, ethics, etc.
There are plenty of small companies trying hard to make a positive impact in the world (not just with words like Zuck) and which would be happy to have you. But then the price to pay is your academic dream.
It's easier because.. FB has access to millions of people that universities do not? Because fbook can manipulate the minds, emotions and money and politics of highly targeted groups without anyone knowing and the universities could never do that, or would never do that?
Is it because fbook has less ethics / laws / rules in place? Because universities need to answer to the community they are in and have the fear of pitchforks and loss of funding / donors / non-monopoly and need to think of their future and reputation and fbook does not?
What other reasons could there be?
This is something that indeed, unfortunately, we probably should have had regulators talking about long ago.
> It's easier because.. FB has access to millions of people that universities do not? Because fbook can manipulate the minds, emotions and money and politics of highly targeted groups without anyone knowing and the universities could never do that, or would never do that?
> Is it because fbook has less ethics / laws / rules in place? Because universities need to answer to the community they are in and have the fear of pitchforks and loss of funding / donors / non-monopoly and need to think of their future and reputation and fbook does not?
> What other reasons could there be?
Money. Lots and lots of money, a scarce resource at most universities.
Well, this is true only if your research is well aligned with fb's interests. Not the case in universities, where academic freedom, albeit not perfect, still exists to a large extent.
Simple: not everyone shares the same concern regarding privacy or that sjw stuff; moreover, fb has a vast amount of data and also funding, so it's very suitable for AI research.
Even if he refuses to work with fb, pretty sure a lot of other people will be happy to replace him.
SJW is a weird insult to use here, the outcry isn't anything to do with SJWs. FAANG in general are actually very pro SJW in the gender/racial sense, see Damore's firing, the twitter feed of senior devs at any of them.
They're the epitome of the liberal elite; privacy is totally orthogonal as an issue.
You can say right now that "not everyone cares about privacy" and it seems reasonable, but in a decade or two it will sound like saying "not everyone cares about environmentalism" does now, it's ignorance that will slowly bear fruit.
Unfortunately, I don't think your personal definition of the word is in step with common parlance, hence my comment.
>Social justice warrior (SJW) is a pejorative term for an individual who promotes socially progressive views, including feminism, civil rights, and multiculturalism, as well as identity politics.
> Even if he refuses to work with fb, pretty sure a lot of other people will be happy to replace him.
I think this applies to some extent to entry-level positions. I know plenty of people coming out of school who need the money, especially considering how aggressive the economic system in the US is toward young people (student debt, impossible housing, a tough job market for beginners, the responsibility of saving for an uncertain future due to the lack of safety nets, etc.). It is also the case that many people could replace you: entry-level projects will be done, regardless of whether you do them or not.
Nonetheless, there's a point at which you're set. You could take another job without worries, and in fact you're in demand. Moreover, you're effective at what you do, and you have started contributing to decision-making or really bringing efficiency into projects. At this point, other people will be happy to replace you, sure, but they won't be the same. There's more responsibility on you.
And lastly, you reach the point at which some of these top researchers are. They're beyond fine both economically and career-wise-- they don't "need this job". They're also crucial. In their projects it's either them or no one. They are very responsible for their work.
> Even if he refuses to work with fb, pretty sure a lot of other people will be happy to replace him.
Good for them. When I see a dog turd on the sidewalk, I don't step into it because otherwise, other people might. I make sure to not step into it for purely personal reasons -- it's me and my shoes that matter, not the dog turd regardless of its ambitions, nor people who might not pay attention to where they're walking.
Inspired by your comment, I'll make my point more graphically: Nazi camps were also suitable for medical and psychological testing. Nazi and even interned doctors did all sorts of experiments on prisoners. And if they had refused to do so, pretty sure other people would have been happy to replace them. Not everyone shared the same concerns regarding human life.
It depends how you see things. imho some things are intrinsically wrong, doing them because "someone else would have lined up to take advantage of the opportunities" doesn't change the fact that it's wrong in the first place.
In just the same way, if something wrong is done by a lot of people, that doesn't make it right (smoking, drinking &c.).