My highly cynical take, as someone doing research in ML, is that all these interdisciplinary AI studies, AI policy, etc. are mostly grifters trying to get in on the gold rush.
What has AI policy produced so far, really? Algorithm analysis and safety research is done on the algorithmic side, not on the policy side. Every time I have read an AI policy paper, it has been a listing of desirable properties with absolutely no input on how to take any real steps towards them. In the meantime, the realities of algorithm-driven mass surveillance and censorship are moving ahead at a rapid pace.
Possible conclusion 1: We just need way more AI policy research.
Possible conclusion 2: AI policy is done by the wrong people and deep algorithmic understanding is a prerequisite.
Possible conclusion 3: Current AI policy is just used as a fig-leaf by tech companies who hire a few policy essay-writers without substance. The lack of progress in that area is a feature, not a bug.
Not mutually exclusive, not exhaustive. Feel free to point to valuable policy research.
A common pattern I see in papers written by people on the technical side is:
"The correct way of ensuring algorithmic fairness is X" where X is something they already happened to work on.
In various papers X could be robust optimization, differential privacy, causal inference, adversarial training, etc. The papers are mostly good, actually, but it's sort of putting the cart before the horse.
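For concreteness, here's a minimal sketch of the kind of target such papers formalize, using one of the simplest criteria (demographic parity); the predictions and protected attribute below are made up for illustration:

    import numpy as np

    def demographic_parity_gap(y_pred, group):
        # Gap in positive-prediction rates across groups: zero means
        # the classifier flags each group at the same rate.
        rates = [y_pred[group == g].mean() for g in np.unique(group)]
        return max(rates) - min(rates)

    # Hypothetical predictions and a binary protected attribute.
    y_pred = np.array([1, 0, 1, 1, 0, 0])
    group  = np.array([0, 0, 0, 1, 1, 1])
    print(demographic_parity_gap(y_pred, group))  # 2/3 - 1/3 = 0.33

The hard part, and where the X-of-the-day papers come in, is training a model that does well on a criterion like this without wrecking accuracy.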
I think there might be legitimate value in having a bunch of lawyers, sociologists, and philosophers set a target more-or-less in ignorance, and let the people on the technical side try to hit it.
Of course even better would be interdisciplinary collaboration, but that's hard: there aren't many incentives in its favor (where do you publish? any given venue is worthless to half of the authors), and it requires humility on the part of everyone involved.
Not surprising. Computer scientists have a bit of experience with computation, after all. It would actually be more concerning if a bunch of new stuff was being invented whole cloth.
> but it's sort of putting the cart before the horse.
Not at all. ML algorithms are just fucking algorithms and computer scientists have been thinking about what it means for an algorithm to be correct since... Turing.
And have been proving various theorems about ML algorithms in particular since at least the 60s.
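(For example, Novikoff's 1962 perceptron convergence theorem: on linearly separable data with margin γ and inputs of norm at most R, the perceptron makes at most (R/γ)² mistakes.)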
Complaints that "AI safety looks a lot like previous CS research" are basically equivalent to observations that "neural nets have been around for a lot longer than alexnet".
> I think there might be legitimate value in having a bunch of lawyers, sociologists, and philosophers set a target more-or-less in ignorance, and let the people on the technical side try to hit it.
I disagree. This is how you end up with endless navel gazing about trolley problems while actual vehicles kill people by accelerating without control because redundant parts are too expensive and engineers don't have enough voice. Philosophers are rarely interested in honest-to-god engineering ethics, which almost always boils down to "pay well enough to hire good people, and then listen to the good people you're paying good money to have around".
> Philosophers are rarely interested in honest-to-god engineering ethics [...]
Would you mind expanding on this? I would expect representatives of a field of knowledge that has an area called 'ethics' to be more concerned about ethics than your run-of-the-mill engineer.
I don't think it's money grubbing. I think it's a genuine attempt to connect with the zeitgeist and be relevant, paired with a fundamental misunderstanding about what's actually happening inside self-driving groups.
But the intention doesn't really matter.
What matters is the utility of the output!
In fact, somewhat ironically, I think a lot of the good work on ethics for AI is coming out of engineering, business, statistics, and economics departments. And those academic departments do tend to be a bit more "money grubbing" relative to philosophy :-)
Have you looked at the list of authors on the paper? If you are not familiar with these people and their work then I suggest that you do some reading and thinking.
I have an experience to share. I was there at the start of the web in the sense that I was building early web servers. I contributed nothing, but I was very excited and interested - as were tens of thousands of Computer Scientists.
I did not imagine any of the negative consequences of the web that have emerged. Very few people did. The community was blindsided, and I think that we have a reasonable excuse, just as a person t-boned at a junction has a reasonable excuse. We didn't see it coming, and it had never happened to us before.
With AI the fears and concerns raised in the media are mostly stupid, but there are genuine potential societal harms in terms of loss of freedom, dignity and opportunity that could easily arise if we don't set the system to favour human values over commercial and state ones.
We will not have the excuse of ignorance and surprise this time, and so I salute informed, reasonable and open people who are making the effort to engage with these issues.
And did your early experience make you "foresee" the future negative developments? Reading back into those days, the biggest fear seems to have been the loss of jobs and human isolation; nobody predicted the privacy problems. Making highly speculative policy predictions and then basing decisions upon them runs the risk of attacking the wrong problems at the wrong time, or retarding progress for the wrong reasons.
Yes, there are risks in foresight and avoidance. Drivers can swerve into oncoming traffic and so forth. However I posit that for every driver that swerves into danger there are scores or more who swerve out of harm's way.
The point of my post was that I did not foresee, nor did almost anyone, but we were idealistic and as a community we had not had the pervasive impact that Computer Science has now had on people's lives. We now have a far greater resource, experience and responsibility to innovate responsibly and with care. Delinquent and careless research that creates harm is unacceptable.
I don’t mean to sound entirely too rude, but I want to ask the following hypothetical question. You are working at a research lab at a research university. You need to hire someone who is going to help you do research on new techniques for interpretable machine learning. Do you hire
1. Someone with extensive knowledge of machine learning, and a much smaller (but existent) interest in the societal importance of interpretability
Or
2. A policy guy who knows tons about why these things should be interpretable, but has a very limited knowledge of machine learning?
Now, let’s say the position is policy relating to machine learning. Do you hire
1. The guy with extensive knowledge of policy who knows something of machine learning
Or
2. The guy with extensive knowledge of ML who knows a bit of policy?
It just seems like common sense, and it's hard not to read a lot of comments as high-ego engineers arguing that their tribe needs more sinecures and special treatment.
I work in this space and regularly interact with folks on both sides of that aisle.
I would hire both. If you want to do anything serious in the space, you need the resources to hire a well-rounded team. Maybe in five years this won't be true, but at the moment this is a space where technological capabilities are driving policy making decisions.
If I had to choose one, the answer depends on whether this is a soldier or a general.
If it's a soldier ($ or $$ salary), I would choose a CS PhD who has demonstrated an interest and aptitude for learning about policy. There are a lot of opportunities for that person to learn about policy including part-time fellowships with think tanks and federal agencies, sometimes even embedded in a lawmaker's staff. Conversely, taking a policy person and getting them to the point where they can adequately process the fire hose of AI fairness/safety/explainability research is going to require a lot more effort.
If it's a general ($$$ or $$$$ salary), I would choose whoever I could manage to hire that has the most influence in whatever agency or legislature is most relevant to the policies I want to push.
But again, especially for soldiers, it's a false choice, and if you find yourself in a situation where you have to make this choice then you need to focus on fundraising instead of your first hire...
Seems to be #2. There are more people worried about a computer making decisions impacting them than there are people who understand the algorithms themselves.
It's not clear to me how this "field" of study is any different from existing academic analyses of automation or computing. AI doesn't change what computers do; it affects only how it's done. The problems inherent in automation are as old as windmills, falling water, and certainly electricity.
Sure, a few modern fields are affected more than others, but improving a voice recognition or visual object identification component by 20% doesn't fundamentally change the system using such code to a degree that warrants a new academic discipline of study.
A new legal field of study on automation makes more sense to me. Algorithmic bias, product or service liability, baseline accountability, the standards of due diligence and safety -- these are rising concerns exacerbated by the recent increase in automation, AI-based or not. But these problems are hardly unique to AI any more than the misdirection of elections was unique to manipulation of digital social media.
I'm also doubtful that the byproducts of automation by corporations and nations can be meaningfully addressed by a bunch of academic computer scientists.
> AI doesn't change what computers do; it affects only how it's done.
Except AI is very much changing how things are done as well as what computers do. Image recognition, natural language processing, self piloted vehicles, these are all novel applications for machines which carry significant real world risks to life and property.
Moreover, AI in its current state is a complex black box with unpredictable outputs for given inputs, which may be chaotic - see, for example, adversarial attacks. If we want a better handle on, say, regularizing outputs in a predictable way, or at least a clearer window into the occluded complexity of neural network decision making, a whole new field may very well be warranted. There's a reason we are calling these program outputs behaviors now; it's a new type of loosely deterministic computing until we develop a deeper understanding, which will not be trivial.
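To illustrate what "adversarial attacks" means here: a minimal sketch of the classic fast gradient sign method, in PyTorch, with a toy model and made-up data standing in for a real classifier:

    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps=0.03):
        # Nudge the input a small step in the direction that most
        # increases the loss; often enough to flip the prediction
        # while the change is imperceptible to a human.
        x = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x), y).backward()
        return (x + eps * x.grad.sign()).detach()

    model = torch.nn.Linear(4, 2)   # toy stand-in for a trained network
    x, y = torch.randn(1, 4), torch.tensor([0])
    x_adv = fgsm(model, x, y, eps=0.5)
    print(model(x).argmax().item(), model(x_adv).argmax().item())  # may differ

That an eps-sized nudge can change the answer is exactly the kind of input-output instability being described.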
These absolutely are AI as far as the term has been used for a significant amount of time. There is no value in trying to redefine it after, what, 70 years?
> There is no value in trying to redefine it after, what, 70 years?
That implies there was any meaningful definition to begin with. That is entirely an illusion, and people are using this to mislead investors (not that I'll ever shed a tear for them).
People absolutely use it usefully; that it can be used as marketing fluff is a far more recent thing. If it's never been used usefully and isn't now, there's no point arguing about it, because that war has been lost; and if it has been used usefully, redefining it makes no sense.
Arguing about definitions is easily one of the least relevant parts of any discussion so I'll leave it here. I just wish AI topics didn't always have someone say it wasn't "real AI".
AI is “do you wish you had a slave”. I can’t think of any other definition that fits all the use cases. It’s also more obvious why it’s a snake oil term: slaves are people you can communicate meaningfully with in both directions.
Effective policymaking takes into account both the precise technical terminology and the real-world usage of a concept. Otherwise, it wouldn't be able to tackle any issue for which popular usage doesn't take into account technical nuance, leaving those fields in a lawless limbo.
Sure, just make it illegal to say you're selling AI technology. Problem solved. I'm absolutely fine with the use of the term in the context of regulating its abuse.
Real-world usage would imply people use it in a meaningful way. I have yet to see any evidence of this.
When electricity first came about, it also 'just changed how things were done', a better way to do what we did with diesel and lamp-oil. But the consequences to society over time were much bigger than just more light...
If you lead, or aspire to lead, major projects, I submit that a reasonable definition of 'major' includes sitting at the table in national capitals with senior policy makers. They will expect you to be as invested in understanding their concerns, and the concerns of their constituents, as they are in trying to understand your project and your objectives. Reading articles like this, and I read and annotated the whole thing, is the homework you are going to have to do. Yes, they should all learn category theory and linear algebra, and they haven't. That doesn't mean you suddenly have insight on the evolution of the problem spaces they've spent their entire, obviously successful careers navigating. So maybe pay the penance for your genius, and invest the time to understand something about their efforts as they fumble around in the dark. At least they had the decency to put it in writing for you to read, or ridicule as the case may be.
Also, feel free to look up some of the authors on this paper. They don't suck.
I'm inclined to believe a substantial number of researchers are currently being deliberately fuzzy about what "AI" can and cannot do. Why not call it algorithms and statistics? I think lay-people have a very skewed understanding of what has already been achieved through AI. They may also not understand the word algorithms, but at least it doesn't make them think of Skynet.
For example, if you asked a person on the street whether there exists an "artificially intelligent supercomputer" somewhere that could help you plan all aspects of a small but entertaining dinner party, they would probably say yes. They imagine that you could just ask IBM Watson to help out, and he'd tell you what to do. This is completely false. "AI" systems are very fragile, and yes, we could build something that plans dinner parties, but we'd have to start over if we needed to plan a kid's birthday party instead. It's very far from strong AI.
Ten years ago, when it was mostly IBM telling lies, the end result was a couple of billion dollars wasted by hapless healthcare conglomerates. That's already bad. But now we have people from MIT, Stanford, Harvard, Yale, and more embracing the term AI and relying on unfounded hype to push for funding. It would be much less sexy if we called it "Facebook/Google/etc enable unfair/discriminatory advertising by combining intensive data collection with logistic regression, sometimes in multiple layers, and with some graph algorithms thrown in". But it would be a much better starting point for a well-informed debate.
I'm not trying to minimize the importance of algorithms in our world, but a healthy discussion should be based on a sound understanding of the facts on the ground, and AI hype is not helping with that. I strongly prefer the less hyperbolic terminology adopted by someone like Aaron Roth at UPenn, e.g. see the blurb for "The Ethical Algorithm: The Science of Socially Aware Algorithm Design".
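Since "intensive data collection plus logistic regression" can sound abstract, here is roughly what that unglamorous core amounts to in code; a minimal sketch on synthetic data, not drawn from any real ad system:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic stand-in for collected user features (age, browsing
    # history, location, ...) and an observed outcome (clicked or not).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20))
    y = (X @ rng.normal(size=20) + rng.normal(size=1000) > 0).astype(int)

    model = LogisticRegression().fit(X, y)
    print(model.predict_proba(X[:3])[:, 1])  # per-user "propensity" scores

Stack a few of these in layers and throw in some graph algorithms for the social-network part, and you have most of what gets marketed as "AI" in ad targeting.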
Bonus AI rant: For the celebration of the new MIT Schwarzman College of Computing, which is a huge expansion of the arguably most important computer science department in the world, there was a discussion panel on AI consisting of MIT President Reif, Henry Kissinger (war criminal?, Theranos board member, wannabe AI expert), Tom Friedman (columnist of limited substance), and Stephen Schwarzman (businessman, coined the "increased taxes on carried interest are like Hitler's invasion of Poland" analogy, brought the dough). How the heck is that the inaugural panel!?!
The point of the article is that we need a science to address the plain fact that the behavior of modern AI and machine learning systems, especially neural networks, is unpredictable and incomprehensible given the current state of knowledge. It is more about addressing the limitations of AI than hyping it.
Most actual researchers are entirely sick of the hype and are desperately trying to correct it, to no avail. I think laypeople just have a strong desire to get overexcited about AI and nothing is going to stop them.
Maybe researchers would like that, but PR is important even for academic researchers. Sexy and hyped topics get more media coverage, more funding and lead to more impressive CVs and careers.
Laypeople only have this desire because of endless marketing and propaganda by companies like IBM, Tesla and Google, uncorrected by people who should know better.
There is no "AI" and anyone who uses that term is mentally ill or selling snake oil.
"Means and motive matter as much as ends. AIs don’t operate in isolation. Somebody designs them, somebody gathers the data to train them, somebody decides how to use the answers they give. Those human-scale decisions are –or should be– documented and understandable, especially for AIs operating in larger domains for higher stakes. It’s natural to want to ask a programmer how you can trust an AI. More revealing is to ask why they do."
Is there a rigorous definition of “machine behavior” as a research field?
I can see how a good-enough marriage of ML/NN to classic symbolic AI could yield a system capable of higher-order, intention-based “behavior” in complex circumstances. But we’re still a long way from achieving that, as far as I know.