Vignette 4: Similarities to AI

Source: Effective Altruism Foundation, "Artificial Intelligence: Opportunities and Risks" (policy paper)

Measure 1: Promoting a factual, rational discourse is essential so that cultural prejudices can be dismantled and attention can be focused on the most pressing questions of safety.

Measure 2: Legal frameworks must be adapted to account for the risks and potential of new technologies. AI manufacturers should be required to invest more in the safety and reliability of their technologies, and principles such as predictability, transparency, and non-manipulability should be enforced so that the risk of (and potential damage from) unexpected catastrophes is minimized.

Measure 3: Can we as a society deal sensibly with the consequences of AI automation? Are our current social systems sufficiently prepared for a future in which the human workforce increasingly gives way to machines? These questions must be examined in detail. If necessary, proactive measures should be taken to cushion negative developments or steer them in a more positive direction. Proposals such as an unconditional basic income or a negative income tax are worth examining as possible ways to ensure a fair distribution of the gains from increased productivity.

Measure 4: Institutional measures to promote safety are worth developing, for example by granting research funding to projects that focus on analyzing and preventing risks in AI development. Policymakers should, in general, allocate more resources to the ethical development of future-shaping technologies.

Measure 5: Efforts toward international research collaboration (analogous to CERN's role in particle physics) should be encouraged. International coordination is particularly important in AI because it also reduces the risk of a technological arms race. A ban on all risky AI research would not be practicable, as it would merely drive a rapid and dangerous relocation of research to countries with lower safety standards.

Measure 6: Certain AI systems, particularly neuromorphic ones structured analogously to the human brain, are likely to have the capacity to suffer. Research projects that develop or test such AIs should be placed under the supervision of ethics commissions (analogous to animal research commissions).


The measures above have been discussed in the AI community. Consider how they might apply to biological robots (biobots).

Questions:

  1. What parallels are there between AI and biobots?
  2. How do the two technologies differ, and how do those differences affect what measures should be put in place?
  3. How can the EBICS community adapt these measures to biological robots?
  4. What main recommendations should EBICS make for future guidelines on the design and implementation of biological robots?