If computers and machines are inherently oppressive for the reasons stated, then so is all of existence. The cosmic machinery that governs these computers is really at the root, and that machinery’s “ruthlessness” extends beyond modern technology.
The “sinister ruthlessness” the author projects is also observed on a much broader scale in nature and isn’t limited to human-made machines.
I think there’s a very important conversation to be had about the role of technology in our lives and the ways it can be harmful. But I don’t believe that anthropomorphizing it, assigning it terms like “sinister” or “ruthless”, or viewing it as “inherently oppressive”, moves that conversation in a useful direction.
In a word: yes. The universe is cold, ruthless, and does not care about us. I make no claims otherwise.
Society isn't that, though. Not least because nobody is particularly happy about the above fact and most people are trying to get away from it, not embrace it. And so, society is made up of people who have the capacity for empathy and adaptation to the nuances of others. So the relevant comparison here is not the timeframe before human civilization came to exist, but the society which existed prior to computerisation.
The adoption of computers within a society largely replaces human-to-human interactions with human-to-computer interactions, or interactions between humans which are now modulated and controlled by a computer (human-computer-human). That adoption trend inherently converts relationships between humans which are adaptable and contain a component of empathy, to interactions controlled by non-adaptive machinery with no capacity for empathy.
The adoption of computers as a technology within the context of a society seems, to me, to make society inherently more oppressive and less humane for this reason. (The main exception I would grant is computers used only by people who have freely chosen to own them (as per the comment above), and to which other people aren't non-consensually subjected.)
> The adoption of computers within a society largely replaces human-to-human interactions with human-to-computer interactions, or interactions between humans which are now modulated and controlled by a computer (human-computer-human). That adoption trend inherently converts relationships between humans which are adaptable and contain a component of empathy, to interactions controlled by non-adaptive machinery with no capacity for empathy.
I agree for a subset of tech interactions (not all machinery, or even most, resembles this), and this is the primary conversation we need to be having, IMO.
> The adoption of computers as a technology within the context of a society seems, to me, to make society inherently more oppressive and less humane for this reason
This is where I don’t agree, and I don’t see how this follows from the former paragraph.
Zooming out again for a moment: from the cold indifference of the universe, infinite possibilities emerge. That same coldness gives rise to the most horrible things one can imagine, but so too the most wonderful things one can imagine.
I’m old enough to remember life before everyone had a computer, and for me, that life was significantly worse. My ability to connect with people outside of my immediate circles is how I survived and eventually escaped a deeply harmful environment.
I do think that technology can magnify humanity’s best and worst impulses, and makes dangerous people more dangerous. But it also enables global collaboration and connection on a scale, and with positive outcomes, that were simply impossible before.
Again, I think there’s a really important conversation to be had about the pitfalls of technology. But to position it as “inherently” anything seems unproductive at best and extremely counterproductive at worst. To position it as inherently oppressive seems to ignore the many ways it is anything but that.
Next time you're enjoying a beautiful sunset, think about whether the entire universe conspiring to produce such a beautiful spectacle for you to enjoy is really a cold and ruthless place.
Yes, bad things happen, but the Universe can be good to us and bad to us.
I think you misunderstood my comment. My point was not that the only thing that exists is cold ruthlessness, but that the claim that technology is inherently oppressive does not make sense. Doesn’t make sense for the reasons you state: it brings both good and bad. And doesn’t make sense because the same factors are in play as you zoom out beyond specific technologies and observe the broader machinery at work.
The cold ruthlessness of the universe is what makes the sunset so beautiful.
The beauty of the sunset is what makes the universe so cold and ruthless.
These are not statements about causality, but rather about the spectrum of possible phenomena and how we experience it. The highest highs are balanced by the lowest lows. Beauty and ugliness are interdependent concepts that co-emerge.
> Yes, bad things happen, but the Universe can be good to us and bad to us
The universe doesn’t have agency as far as we know. It is not bestowing good and bad on us.
Things happen based on the laws of nature. We interpret them and assign them labels like good and bad based on our individual experience of what we see. I’m picking this nit because I think the primary error in the original piece was to assign intrinsic properties/essence to objects of technology (or all of existence) when in reality these are mental phenomena layered on top of whatever is actually happening.
I think the author (Hugh Landau) is making a point about the nature of machines. I accept this point, and the related one that humans can use machines to enforce ruthlessness, and that the machine interface means that there's no human who can readily be blamed for the ruthlessness.
However, the point is a little bit undermined by Landau's erroneous assumption about slam-door railway stock in the UK. It absolutely was not abandoned to improve schedule-keeping; on the contrary, it was abandoned because it was unsafe as hell.
Can you elaborate? I don't see anything to support this claim from the link you provide.
You are correct that the completion of the phase-out of slam-door rolling stock in the UK will relate to modern safety (and accessibility) regulations, amongst other factors. But that is different to what motivated the initial introduction of remotely controlled doors, which as that article states, dates back to the London Underground in the 1920s.
While it might be the case that centrally controlled doors are mostly safer (with some exceptions), that fact alone doesn't imply that safety was the primary motivating factor behind the historical adoption trend. So it's an interesting claim, but unless I'm missing something in the link, I don't see it supporting this claim.
> "These newer units are safer as the doors have central locking ... In the past, the doors on slam-door trains could be opened at any time, even while the train was moving."
> "Due to a number of high-profile accidents in the 1990s, the manually-locked slam doors were supplemented with electronic, driver- or guard-operated central locking before they were gradually phased out in favour of sliding doors through the 2000s, resulting in a sharp decline in the number of deaths per year from passengers falling from trains"
> "Some units had individual compartments, each with its own door and no access to any other part of the train; however, these were unpopular due to security concerns and the lack of access to toilets ... The phase out was speeded up after the Murder of Deborah Linsley in such an individual compartment in 1988."
The asbestos in many of these slam-door trains is less relevant to the central point, but was also a factor in the transition.
The first quote is simply a statement that central locking is safer. I don't think that's really disputed, but it's not the same as saying that it was the defining motivation, especially in the 1920s.
The second quote relates to the greater adoption in the 1990s. But this is long after the initial adoption by the London Underground in the 1920s, and presumably these safety issues during the period 1920-1990 weren't so great as to be a showstopper, even if a safer design is preferable. This suggests to me that there was some other, much stronger motivating factor behind the development of the technology in the 1920s on the London Underground, with safety being a trailing motivator.
The third relates to a design issue entirely orthogonal to the design of the doors.
> presumably these safety issues during the period 1920-1990 weren't so great as to be a showstopper, even if a safer design is preferable.
Many things are not considered "showstoppers" during the early (or even later) stages of a technology despite obvious ongoing harm. For example, there were 42,000 motor vehicle fatalities in the US in 2022. Despite this being a large loss of human life, it's not deemed a "showstopper", largely because we don't really have better options and because of the freedom these vehicles enable. Now let's assume for a moment that self-driving tech is perfected, and would theoretically cut deaths by 75%. The safer design would be preferable, but for purely practical reasons probably would not be widespread for decades to come.
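(Back-of-envelope, to make the hypothetical concrete: a 75% reduction on 42,000 deaths would leave 42,000 × 0.25 = 10,500 deaths per year, i.e. roughly 31,500 lives saved annually, and even a gain that large probably wouldn't force immediate adoption.)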
> This suggests to me that there was some other, much stronger motivating factor behind the development of the technology in the 1920s on the London Underground, with safety being a trailing motivator
How does this suggest that?
> The third relates to a design issue entirely orthogonal to the design of the doors.
The third relates to the cultural context driving adoption of updated designs, and the point is that this transition can't be reduced to any single factor.
I'm really trying to understand your position here, but you seem to continue reverting to your thesis and framing everything in terms of that thesis instead of explaining why the thesis is justified.
The context of this discussion is my claim that "the adoption of electric, centrally controlled doors was naturally motivated in major part by timeliness."
You're claiming that this is refuted by the Wikipedia article, but I don't see any evidence for that. To be clear, I'm open to evidence that this isn't the case; there just isn't any there, because the article discusses motives for adopting centrally controlled doors in the 1990s, when the technology was already widely available. It doesn't tell us anything about the initial motives for developing the technology for centrally controlled train doors many decades earlier, just that an additional motive showed up later and drove some additional (late) adoption.
Regardless of your claim or any of the objections to it, what happened in the 1920s happened. In principle, there's an answer to the question: "why did this transition happen?", and the goal of a conversation like this is to try to get closer to the truth. In practice, it's often hard to actually get solid answers without diving deep into the historical record, and rarely will there ever be a single traceable driving force or solitary "this is the reason".
You seem to be constraining the reasons you're willing to entertain to fit your opinions instead of finding the evidence to back them, and that's what I keep pushing back against.
The GP pointed out that there are indeed documented reasons for the transition that have nothing to do with timetables. So far, that's the only form of evidence that's been offered for any of the perspectives shared here. Simply restating your opinion and demanding the same kind of evidence that you have not yourself yet supplied is not sufficient.
> You're claiming that this is refuted by the Wikipedia article
I'm not (to be fair, it's possible that the GP was, but I don't know). I was reacting to this:
> I don't see anything to support this claim from the link you provide.
Bottom line: you wrote the article and made major claims about the nature of tech. The burden of proof is on you to justify those claims. Categorically rejecting actual evidence that may have explanatory value while offering no evidence of your own is making it increasingly difficult to take the position seriously.
The reality is probably some combination of all of the above: the newer designs kill fewer people, are more efficient, and satisfied a growing cultural discomfort with the perceived danger. The outcome of this combination is not purely good or purely bad. It is neither oppressive nor perfectly beneficial.
The title really is the thesis. But the author backs it up only with opinion.
Like many of these articles, the author seems to apply all these adjectives that impute intent. No technology is "inherently oppressive" nor is any technology "ruthless" because machines themselves have no intent. I believe this anthropomorphic sleight of hand is deliberate because the author does get around to changing the topic to "machine-assisted ruthlessness."
He might as well say that "tools are inherently oppressive" because any tool can be used by oppressive humans to enforce their own oppression or ruthlessness. This is the same as opining that knives are inherently violent. The argument is illogical, and an appeal to emotion.
This misses the point that computers are a technology. As with any technology, the choice to adopt it is always made by a human actor with some knowledge of the relevant tradeoffs. Ergo, wilful adoption of a technology which (necessarily) possesses the attributes I describe can be taken as tacit consent to those attributes and the collateral damage they cause. There is a human actor hiding behind every life ruined by "computer says no".
Saying that no technology is inherently oppressive is strange to me. Are prisons not by design an intentionally oppressive technology?
The human perception of agency in inanimate objects is curiously variable. To my understanding, this is a factor in why some people are more religious and some less so. Since my article is written from the perspective of someone who perceives agency in inanimate objects, my guess is that it's less compelling to people who aren't wired this way. This is not a criticism of anyone, just an observation of an interesting human neurological variability. Other constructions of the same argument are of course possible.
In any case, the point here is that a human actor decides to adopt computers as a technology; and that that act generally replaces a human interaction which was previously social, human-to-human, and therefore on some level adaptive and modulated by empathy, whereas a human-computer interaction is inherently uncaring. The adoption of computer technology in society has led to a generalised trend of replacing human-human (social) interactions with human-computer or human-computer-human interactions, which generally remove all opportunities for adaptation or empathic variation.
This leads to my view that computers as a technology in society are inherently oppressive.
Though I suppose one possible qualification would be that this possibly is only the case when applied as a technology by a different party to the party which will be subject to it. A computer in someone's home by their free choice, which is purely controlled by them (which is therefore not any modern computer, I should note) seems like an exception.
My position is that the technology does not possess the attributes you claim it does.
Prisons are not a technology. Prisons are buildings that use various technologies, including optics, metallurgy, concrete, electronics... etc. But prisons are an application.
Adoption of computers, and the decision to use them to oppress or to free, is entirely within the purview of the adopter. That is, the only people to blame are the ones who set the technology to a specific purpose, those who apply it.
Your essay confuses application with existence, and I don't think that makes sense. A car exists. It can be used to drive kids to school, or it can be used to mow down pedestrians. There is nothing inherent in the technology with relation to intent. I think the same is true of computers.
I disagree that prisons are a technology, as that doesn't fit the term.
However, even if we were to consider prisons a technology, there are good applications and bad applications. Prisons may be used to keep violent predators away from the civil society they might harm. Prisons may also be used as a deterrent against crime, also beneficial to a civil society. That prisons are used as profit centers by corporations that have also achieved regulatory capture is an evil of the people running those corporations and of the corrupt officials who look the other way to line their pockets. But neither the good nor the evil has anything to do with the concrete, steel, monitoring systems, and plumbing of a prison.
Humans are good or evil, or sometimes a complicated mixture of both. Shifting the animus to the inanimate shifts the responsibility, and I doubt even clear-thinking religious people would be on-board with that. (I'm not, and I'm religious.)
Action, intent, and agency are human things. Ascribing them to technology hearkens back to animism, not rationalism. The spirit of the river made it jump its banks and flood because the river had an ill temper. The computer was ruthless when it calculated Bob's pay. That's not how our universe works.
I agree with the premise of "machine-assisted ruthlessness." I simply disagree with the notion that the ruthlessness or oppressiveness is inherent to the tech.
I would absolutely consider prisons a technology, most obviously because we can see human societies where prisons are not a viable technology. If a member of a more primitive tribe starts killing people, the pragmatic solution is to kill them, not incarcerate them. Incarceration imposes technological prerequisites (e.g. can you build structures strong enough to hold people involuntarily?) and high or extremely high logistical costs (see the cost of housing a prisoner for a year in the UK). You ultimately also need guards, and the spare labour in society to allocate to that task instead of potentially more important tasks, like obtaining food. Whether that is feasible is in turn determined by human productivity in the various fields of production, which in the case of food is determined by the availability of agricultural technology (50% of the UK workforce used to work in agriculture; now it's only about 2%, to my recollection). Incarceration on the scale we see today is a relatively recent phenomenon.
Obviously such technology can be good or bad.
In my view the IT community falls into the trap of a narrow definition of the term, which now has been supplanted by an even narrower definition where technology just means "IT".
See the other thread above for my thoughts on the latter part. The adoption of a technology is done by the decision of a human. Nobody's claiming that computers have imposed themselves on society...
All "technology" (using the broad definition) is a manifestation of human ideas and the evolution of those ideas over time. Human thought preceded the creation of prisons or the repurposing of existing structures as prisons. And even the idea of a "prison" has to be collapsed into a broader category of "secure building", which is a technology that emerged because of just how useful it is to have a robust place to live in and avoid the elements, intruders, etc.
To frame prisons as "inherently oppressive" is to pretend that a prison is not just another building with the label "prison" on the outside built to a certain specification of security. Much like a church is just a building where people gather to practice their faith. I realize that these buildings have some unique features that are optimized for their particular purpose, but at each level you drill down, those technologies almost universally co-emerged because of other human needs.
These buildings do not have intrinsic essences, nor were the underlying techniques of engineering used to build them solely developed for the purpose of incarcerating people or praying to various gods.
> The adoption of a technology is done by the decision of a human
And so is the inception of the original idea that led to the establishment of the technology. The tools we build are a manifestation of collective human consciousness at any given point in time. That goes not just for the ways we use technology, but also for the technologies we invent and decide to build in the first place.
The call is coming from inside the house. The thing we need to be focusing on is our own nature, and how we tend to oppress each other. This is a problem that isn't solved by assigning properties like "oppressive" to technology. Technology is not oppressive. People are.
> Nobody's claiming that computers have imposed themselves on society...
Oppression is generally something that humans do to other humans (I agree that they use tools along the way). By saying that computers are inherently oppressive, you're implying that computers are somehow imposing themselves on society. If a human has to be involved for the oppression to occur, the computer is by definition not inherently oppressive.
I don't think I'm presenting an "IT perspective", as my view actually broadens the notion of what technology is to include products and applications. My objections to this conclusion are philosophical, not stylistic.
According to Flusser, these things are not machines; they are apparatus. Machines process things; apparatus process information. They accept inputs and return outputs. They emulate theories and thoughts, like rules in games (which differentiate them from toys); these theories/rules are part of what makes them what they are: to achieve some output, certain inputs have to be provided.
Ruthless? They are far from unpredictable. How can something like that be ruthless if you know how it will respond to specific inputs?