They are not conspiracy theories, because they don't assume a concerted effort by these entities, just a common interest; their actions converge to reinforce the same phenomenon.
The author is a systems biologist writing about sociology, so the paper should be read with the vocabulary of sociology, not of colloquial language.
> The pragmatic interest on the part of industry is natural, since the behaviorist approach that has appealed to many AI researchers aligns with the profit motives of surveillance capitalism.
Still, the language is loaded, the examples of claims (of AI proponents) are cherry-picked, the limitations of the technology are misrepresented, and results are dismissed because they don't take "historical context" or "emotions visible clearly on the faces" into account.
It doesn't read like a scientific paper at all - or is this what papers in non-STEM fields look like in general?
The language is loaded because it's part of an ongoing discourse that, by your own admission,
> is this what papers in non-STEM fields look like in general?
you're not familiar with. And yes, this is quite a good paper by sociology standards. It was written by a STEM guy, which I think is why I like his style of writing.
> the examples of claims (of AI proponents) are cherry-picked,
This is not hard science, where you have to establish hard rules and a single counter-example breaks your argument. He's commenting on a trend that we can all relate to. Is it a vocal minority, or is it actually the vast majority of the industry/media? For that you could go back to the data, but that's not the goal of the paper, and it doesn't invalidate his thesis anyway, as long as the narrative dominates the public discourse.
Agreed - unfortunately, I'm not familiar with the field at all :(
> it doesn't invalidate his thesis anyway
But, but, his thesis is that we're inevitably heading in the direction of a dystopia with "robo judges" and scientific pursuit being judged based on "metrics"... And that "surveillance capitalism" companies and the proponents of AI are power-hungry demons who plan to use AI to force some unspecified "psychology model" on society as a whole!
Well, maybe that's how it is - I suspect it's not like that, but I can't know for sure. My objection is that, whatever the plans of evil corporations and traitors-to-humanity scientists, our current technology is nowhere near enabling any of the "changes to society" the author fears. The "superhuman intelligence" is not going to surface for a long time, and the "bots" which "write quality editorials and replace journalists" will probably be realized something like 5 years before that "superhuman intelligence", so also in the (very) far future. As for funding science, it's not AI (nor AI proponents) who come up with "metrics", but people, and they have done so for at least the past two centuries. Yes, it's dangerous, but it has nothing to do with AI. The paper makes it sound like the hype around AI is an imminent danger to society as we know it, but isn't it infinitely more probable that this hype will follow thousands of others and simply die out?
I guess what I want to say is that the gap in complexity between current ML-based solutions (impressive in their own right) and any kind of genuine understanding is so vast that worrying about what will happen when AI becomes capable of the latter in no way justifies the sensationalist tone of the article.
Then again, I could be misreading the author due to my unfamiliarity with the field, so maybe the tone isn't particularly sensationalist by its standards.
> But, but, his thesis is that we're inevitably heading in the direction of a dystopia with "robo judges" and scientific pursuit being judged based on "metrics"... And that "surveillance capitalism" companies and the proponents of AI are power-hungry demons who plan to use AI to force some unspecified "psychology model" on society as a whole!
I would argue that this is the present, not the future.
> Well, maybe that's how it is - I suspect it's not like that, but I can't know for sure. My objection is that, whatever the plans of evil corporations and traitors-to-humanity scientists, our current technology is nowhere near enabling any of the "changes to society" the author fears.
You don't need advanced technology for that. The existing technology is more than enough, and we're already seeing its devastating effects on society. I don't think the actual progress of technology is relevant to the author's thesis: if it looks intelligent, they will apply the narrative and profit from it.
The author is talking about how the promise of a yet-to-come AGI helps build a narrative today that is used to exploit people. That is one thing. The dystopia is a critique of the narrative itself, which would lead to even further deterioration of the social fabric if it keeps being pursued. This is completely independent of whether the promise of AGI (or something similar) is ever fulfilled. As long as the narrative is believable, it will be used.