It seems to imply a demand that technology either not change things socially, or that if it does, the changes will be bad. Both readings are flat-out impossible to act on, since the inventor(s) don't even know ahead of time what will change. It is neither reasonable nor actionable.
If it doesn't imply that, it is as meaningless as the dumb Luddite line "Science never sacrifices, it always murders." How the fuck is a process of discovery supposed to sacrifice anything when it isn't even an entity? Attributing responsibility for murders to it is similarly daft; why not blame the murders on violence itself while you're at it?
Sure, scientists should come out with ethical codes of "what not to do" after infamous incidents, like say Harry Harlow's gratuitous monkey torture or human experimentation on the vulnerable, but the moral culpability falls on the men responsible, not on everyone else who engages in the same general process. It would be as nonsensical as declaring all eaters responsible for some depraved cannibal eating an infant!
The question is how a group of people will decide on their own key performance indicators. Different groups have different KPIs, either intrinsic ones or some statistic, e.g. GDP, ESG indicators, real return, etc.
> Despite the social consequences.

If your only or main KPI ignores the social consequences, then you're doing it despite the social consequences.

But I get your point; it's a well-known school of thought, based on selfishness, I'd guess.