On the morals of network research and beyond


It seems to me that a long chain of moral actors is involved, and is or was required, for the harm, presented as being the responsibility of the researchers alone, to materialize. By choosing to focus on the researchers, the other moral agents are made invisible, including those that profit massively from the architectures that enable this harm, both monetarily and politically, as well as those doing the actual harm.

There is a difference between a medical researcher administering a drug that may, through a natural and amoral process, lead to harm, and the case being considered here. In the setting of the networking research, at least two other moral actors have to misinterpret information and act, in ways I would consider immoral, for harm to occur: the surveillance box manufacturers and the state representatives. The full architecture of the web and the internet enables it. I would argue that the bulk of responsibility — and the spotlight of moral outrage — should fall on these actors. Placing it squarely on the researchers makes a mockery of the discussion of the ethical implications of our technological artefacts.

Dispelling three key fallacies

  • The first fallacy is that things we do not like (some may brand them “immoral”) happen because others do not think of the moral implications of their actions. In fact, it is entirely possible that they do, and nonetheless decide to act in a manner we do not like.

  • The second fallacy is that ethics, and research ethics more specifically, comes down to a “common sense” variant of “do no harm” — and that is that. In fact, ethics as a philosophical discipline is extremely deep, and there are plenty of entirely legitimate ways to argue that doing harm is perfectly fine.

  • Finally, we should dispel, in conversations about research ethics, the myth that morality equals legality. Much of the surveillance that enables this harm is perfectly legal; it should probably be our responsibility to highlight the immorality of that state of affairs before writing public reviews about the immorality of a hypothetical censorship detection system.

Thus, I would argue, if one is to make an ethical point relating to the values and risks of technology, one has to make it in the larger context of how technology is fielded and used, the politics around it, who has power, who makes the money, who does the torturing and the killing, and why. Technology lives within a big moral picture that a research community has a responsibility to comment on. Focusing moral attention on the microcosm of a specific hypothetical use case — just because it is the closest to our research community — misses the point, silently perpetuating a terrible state of moral affairs.
