
Silja Vöneky

Key Elements of Responsible Artificial Intelligence – Disruptive Technologies, Dynamic Law1

“We’re making tools not colleagues, and the great danger is not appreciating the difference, which we should strive to accentuate, marking and defending it with political and legal innovations. (…) We don’t need artificial conscious agents. (…) We need intelligent tools.”2
Daniel C. Dennett

“We may hope that machines will eventually compete with men in all purely intellectual fields.”3
Alan M. Turing

One major challenge of the 21st century to humankind is the widespread use of Artificial Intelligence (AI). Hardly any day passes without news about the disruptive force of AI – both good and bad. Some warn that AI could be the worst event in the history of our civilization. Others stress the chances of AI diagnosing, for instance, cancer, or supporting humans in the form of autonomous cars. But because AI is so disruptive, the call for its regulation is widespread, including the call by some actors for international treaties banning, for instance, so-called “killer robots”. Nevertheless, until now there is no consensus on how and to what extent we should regulate AI. This paper examines whether we can identify key elements of responsible AI, spells out what exists as part of “top down” regulation, and how new guidelines, such as the 2019 OECD Recommendations on AI, can be part of a solution to regulate AI systems. In the end, a solution shall be proposed that is coherent with international human rights to frame the challenges posed by AI that lie ahead of us without undermining science and innovation; reasons are given why and how a human rights based approach to responsible AI should inspire a new declaration at the international level.

Introduction

Everything about AI is a hype. It is labeled a disruptive technology. Its transformative force is compared to that of electricity. It is said that just as electricity transformed people’s lives and industries 100 years ago, AI will now transform our lives.4 As we are incorporating AI systems into our life, we benefit from the efficiencies that come from AI systems (AIs).5 However, a technology like AI is, first of all, a tool. I argue, as the philosopher Daniel C. Dennett argues, that AIs are tools and should be regarded and treated as tools. They are tools with a specific quality and power, because AI systems can be used for multiple purposes, and will imitate and replace human beings in many intelligent activities, shape human behavior and even change us as human beings in the process6 in intended and unintended ways.7 But even if AIs could be in principle as autonomous as a person, they lack our vulnerability and mortality.8 This means that as long as we develop, sell and use AI, we can and have to decide how we frame the rules and norms governing AI. As always when we have the chance to get a new, powerful technological tool, we have to answer the question how we can make sure that we as a society will make the right choices – or at least minimize the risk that we will make the wrong choices; and how do we decide what is right and wrong – especially as the field of AI is an area hardly anybody understands fully.
I argue that these are questions that cannot be answered by individuals, corporations or States only, but have to be answered by the international community as a whole as well, because AI research, development and deployment, and the related effects, are not limited to the territory of a State but are transnational and global.

This paper is a starting point to discuss key elements of responsible AI. Although the notion of intelligence in Artificial Intelligence might suggest otherwise, AI as a technology is not per se “good”, neither is it “bad”. The first part spells out features of AI systems, and identifies benefits and risks of developing and using AI systems, in order to show challenges for regulating these tools (see below I). The international governance dimension is stressed in the second part. There I will look closer at the Recommendations on Artificial Intelligence by the Organisation for Economic Co-operation and Development (OECD) that were adopted in 2019 (see below II).9 These are the first universal international soft law rules that try to govern and frame AI in a general way. Thirdly, I argue that we should stress the link between human rights and the regulation of AI systems, and highlight the advantages of an approach in regulating AI that is based on legally binding human rights that are part of the existing international legal order (see below III).

I. AI Systems as Multipurpose Tools – Challenges for Regulation

1. Notions and Foundations

When we try to understand what AI means as a technology, we realize that there seem to be many aspects and applications relevant and linked to AI systems: from facial recognition systems to predictive policing, from the AI called AlphaGo playing the game Go, to social bots and algorithmic traders, from autonomous cars to – maybe even – autonomous weapons. A first question we should answer is: How can we explain AI to someone who does not know what AI is, but wants to join and should join the discourse on regulation and governance? A simple start would be to claim that a key feature of the field of AI is the goal to build intelligent entities.10 An AI system could be defined as a system that is intelligent, i.e. rational, in the way and to the extent that it does the “right thing”, given what it knows.11 However, this is only one definition of an AI system.

1 The background of this paper is my research on questions of democratic legitimacy in ethical decision making as a Director of an Independent Max Planck Research School in Heidelberg on biotechnology governance, and on the governance of existential risks as a Fellow at Harvard Law School (2015–2016). I am grateful for the inspiration and exchange with the members of our FRIAS Saltus Research Group “Responsible AI”, Philipp Kellmeyer (Neurology, Neuroethics), Oliver Müller (Philosophy), and Wolfram Burgard (Robotics) over the last months. I want to thank the research assistants Tobias Crone, Isabella Beck, Eva Böning, and Gideon Wheeler for their valuable support.
2 Daniel C. Dennett, What can we do?, in John Brockman (ed.), Possible Minds – 25 Ways of Looking at AI, 2019, 46, 51.
3 Alan M. Turing, Computing Machinery and Intelligence, Mind LIX, 1950, 433 et seq. (reprinted in Margaret A. Boden (ed.), The Philosophy of Artificial Intelligence, 1990, 65 et seq.).
4 Andrew Ng, in Martin Ford (ed.), Architects of Intelligence, 2018, 185, 190.
5 Iyad Rahwan/Manuel Cebrian/Nick Obradovich et al., Machine behaviour, Nature 568 (2019), 477, 484.
6 Norbert Wiener, The Human Use of Human Beings, 1954, 96.
7 Iyad Rahwan/Manuel Cebrian/Nick Obradovich et al., Machine behaviour, Nature 568 (2019), 478; Daniel C. Dennett, What can we do?, in John Brockman (ed.), Possible Minds – 25 Ways of Looking at AI, 2019, 43.
8 Daniel C. Dennett, ibid., 51 et seq.
9 OECD Recommendation of the Council on Artificial Intelligence, adopted 22.05.2019 (OECD Principles on AI); cf. OECD/LEGAL/0449, available at: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449; in German (unofficial translation) “Empfehlung des Rats zu künstlicher Intelligenz”, available at: http://www.oecd.org/berlin/presse/Empfehlung-des-Rats-zukuenstlicher-Intelligenz.pdf.
10 Stuart J. Russell/Peter Norvig, Artificial Intelligence – A Modern Approach, 3rd ed., 2016, 1. Others define the field of AI as “a field devoted to building artificial animals (or at least artificial creatures that – in suitable contexts – appear to be animals), and, for many, artificial persons (or at least artificial creatures that – in suitable contexts – appear to be persons).” For this and a discussion of different approaches see Selmer Bringsjord/Naveen Sundar Govindarajulu, Artificial Intelligence, in Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy (SEP), Winter 2019 Ed.
11 Stuart J. Russell/Peter Norvig, Artificial Intelligence – A Modern Approach, 3rd ed., 2016, 1.
12 Stuart J. Russell/Peter Norvig, ibid., 2.
13 The famous and often quoted so-called Turing Test by Alan M. Turing is a behavioral intelligence test that shall provide an operational definition of intelligence. According to this test a program passes the test if a human interrogator, after posing written questions via online typed messages for five minutes, cannot tell whether the written answers are given by a human being or a computer, cf. Alan M. Turing, Computing Machinery and Intelligence, Mind LIX, 1950, 433 et seq. (reprinted in Margaret A. Boden (ed.), The Philosophy of Artificial Intelligence, 1990, 40 et seq.); for a discussion see Stuart J. Russell/Peter Norvig, Artificial Intelligence – A Modern Approach, 3rd ed., 2016, 1036 et seq.
14 Iyad Rahwan/Manuel Cebrian/Nick Obradovich et al., Machine behaviour, Nature 568 (2019), 477, 483.
15 An algorithm is a process (or program) that a computer can follow. It, for instance, defines a process to analyze a dataset and identify patterns in the data; in more general terms it can be described as a sequence of instructions that are carried out to transform the input to the output, see John D. Kelleher, Deep Learning, 2019, 7; Ethem Alpaydin, Machine Learning – The New AI, 2016, 16.
16 Iyad Rahwan/Manuel Cebrian/Nick Obradovich et al., Machine behaviour, Nature 568 (2019), 477.
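For readers outside computer science, the textbook idea of an agent that does the “right thing, given what it knows” can be made concrete with a minimal sketch. The following toy example is not drawn from the works cited above; the actions, outcomes, probabilities and utility values are invented assumptions used only to illustrate the notion of “acting rationally”.

```python
# Toy illustration of a "rational agent": it selects the action that
# maximises expected utility, given its (possibly incomplete) knowledge.
# All states, actions, probabilities and utilities below are invented.

def expected_utility(action, belief, utility):
    """Weight the utility of each possible outcome by how likely the
    agent believes that outcome to be, given the action."""
    return sum(prob * utility[outcome]
               for outcome, prob in belief[action].items())

def choose_action(actions, belief, utility):
    """'Do the right thing, given what it knows': pick the action with
    the highest expected utility under the agent's current beliefs."""
    return max(actions, key=lambda a: expected_utility(a, belief, utility))

# What the agent "knows": its beliefs about the outcomes of each action.
belief = {
    "brake":      {"safe_stop": 0.95, "minor_delay": 0.05},
    "accelerate": {"safe_pass": 0.60, "collision": 0.40},
}
# How the agent values each outcome (hypothetical numbers).
utility = {"safe_stop": 10, "minor_delay": 8, "safe_pass": 12, "collision": -1000}

print(choose_action(["brake", "accelerate"], belief, utility))  # -> "brake"
```

The point of the sketch is only that “intelligence” is operationalised here as choosing well relative to given objectives and knowledge, which is precisely why the equation of intelligence and rationality discussed in the following paragraph is contested.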
The standard textbook quotes eight definitions by different authors laid out along two dimensions, including two aspects to measure the success of an AI system in relation to human performance (“thinking humanly”; “acting humanly”), and two aspects to measure the success of an AI system in relation to ideal performance (“thinking rationally”; “acting rationally”).12 But even if those are correct who state that AI is concerned with rational or intelligent behavior in artifacts, the underlying question is whether it is correct to state that the notion of “intelligence” means the same as the notion of “rationality”.13 It seems reasonable to claim that AI systems exhibit forms of intelligence that are qualitatively different to those seen in humans or animals as biological agents.14

As a basic description one might state that AI tools are based on complex or simple algorithms15 used to make decisions, and are created to solve particular tasks. Autonomous cars, for instance, must drive (in a given time without causing accidents or violating laws) to a certain place, and game-playing AI systems should challenge or even win against a human being.16 As AI is expected to fulfill a certain task, there are required preconditions for a system to be able to “do the right thing”. Depending on the areas of use, AI key capabilities are natural language processing (speech recognition), reasoning, learning, perception, and action (robotics). Especially learning17 is a key ability of modern AI systems,18 as for some problems it is unclear how to transform the input to the output.19 This means that algorithms are developed that enable the machine to extract functions from a dataset to fulfill a certain task.20 So-called deep learning, the field of machine learning that focuses on deep neural networks,21 is the central part of current AI systems if large datasets are available, as for face recognition on digital cameras22 or in the field of medicine to diagnose certain illnesses.23 Deep learning mechanisms that are able to improve themselves without human interaction and without rule-based programming already exist today.24 As John Kelleher puts it: “Deep learning enables data-driven decisions by identifying and extracting patterns from large datasets”.25 It is not astonishing that since 2012 the number of new deep learning AI algorithms has grown exponentially,26 but as the functional processes that generate the output are not clear (or at least hard to interpret), the problem of the complexity and opacity of algorithms that seem to be “black boxes” is obvious as well.27

2. Risks and Chances

The “black boxes” problem shows that it is important, if we think about AI regulation or governance, to look at the different risks and chances that can be linked to the development and use of AI systems. Questions of concern that are raised are related to our democratic order (news ranking algorithms, “algorithmic justice”), kinetics (autonomous cars and autonomous weapons), our economy and markets (algorithmic trading and pricing), and our society (conversational robots). A major and inherent risk, if a system learns from data, is that bias in AI systems can hardly be avoided. At least if AI systems learn from human-generated (text) data, they can or even will include health, gender or racial stereotypes.28 Some claim, however, that there are better ways for reducing bias in AI than for reducing bias in humans, so AI systems may be or become less biased than humans.29 Besides, there are risks of misuse, if AI systems are used to commit crimes, as for instance fraud.30 Another risk is that AI technologies have the potential for greater concentration of power.

17 The idea of a learning machine was discussed by Alan M. Turing, Computing Machinery and Intelligence, Mind LIX, 1950, 433 et seq. (reprinted in Margaret A. Boden (ed.), The Philosophy of Artificial Intelligence, 1990, 64 et seq.).
18 In general, different types of feedback can be part of the machine learning process. There is unsupervised learning (no explicit feedback is given), reinforcement learning (the system learns based on rewards or “punishments”), and supervised learning, which means in order to teach a system what a tea cup is, you have to show it thousands of tea cups, cf. Stuart J. Russell/Peter Norvig, Artificial Intelligence – A Modern Approach, 3rd ed., 2016, 706 et seq.
19 Ethem Alpaydin, Machine Learning – The New AI, 2016, 16 et seq.
20 John D. Kelleher, Deep Learning, 2019, 6; Ethem Alpaydin, Machine Learning – The New AI, 2016, 16 et seq.
21 John D. Kelleher, Deep Learning, 2019, 8.
22 John D. Kelleher, Deep Learning, 2019, 1: “Deep learning is the subfield of artificial intelligence that focuses on creating large neural network models that are capable of making accurate data-driven decisions.” Ethem Alpaydin, Machine Learning – The New AI, 2016, 104: “With few assumptions and little manual interference, structures similar to the hierarchical cone are being automatically learned from large amounts of data. (…) This is the idea behind deep neural networks where, starting from the raw input, each hidden layer combines the values in its preceding layer and learns more complicated functions of the input.”
23 Eric Topol, Deep Medicine, 2019, 9 et seq., 16 et seq.
24 See Yann LeCun et al., Deep Learning, Nature 521 (2015), 436–444, available at: http://www.nature.com/nature/journal/v521/n7553/full/nature14539.html.
25 John D. Kelleher, Deep Learning, 2019, 4.
26 Eric Topol, Deep Medicine, 2019, 10.
27 Iyad Rahwan/Manuel Cebrian/Nick Obradovich et al., Machine behaviour, Nature 568 (2019), 478.
28 Andrew Ng, in Martin Ford (ed.), Architects of Intelligence, 2018, 20; Gutachten der Datenethikkommission, 2019, 167 f.
29 Iyad Rahwan/Manuel Cebrian/Nick Obradovich et al., Machine behaviour, Nature 568 (2019), 478.
30 Stuart Russell, Human Compatible – Artificial Intelligence and the Problem of Control, 253 et seq.
31 W. Daniel Hillis, The First Machine Intelligences, in John Brockman (ed.), Possible Minds – 25 Ways of Looking at AI, 2019, 172, 173.
32 Norbert Wiener, The Human Use of Human Beings, 1954, 181.
33 Stuart Russell, Human Compatible – Artificial Intelligence and the Problem of Control, 103 et seq.; Iyad Rahwan/Manuel Cebrian/Nick Obradovich et al., Machine behaviour, Nature 568 (2019), 477 et seq.
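To illustrate, in the simplest possible terms, what it means that a machine “extracts functions from a dataset” and why bias in human-generated training data reappears in the output, consider the following sketch. The dataset, the group labels and the learned rule are invented for illustration only; no real system works in such a crude way.

```python
# Minimal sketch: "learning" a decision rule from labelled examples.
# The historical data below is invented and deliberately skewed, to show
# how bias present in the training data reappears in the learned rule.

from collections import defaultdict

# (group, score, past human decision); group "B" was approved less often
# at the same scores, i.e. the data itself already encodes a bias.
history = [
    ("A", 70, "approve"), ("A", 65, "approve"), ("A", 60, "approve"),
    ("A", 55, "reject"),
    ("B", 70, "reject"),  ("B", 65, "approve"), ("B", 60, "reject"),
    ("B", 55, "reject"),
]

def learn_thresholds(examples):
    """Extract a simple per-group rule from the data: approve above the
    lowest score that was ever approved for that group."""
    approved = defaultdict(list)
    for group, score, label in examples:
        if label == "approve":
            approved[group].append(score)
    return {group: min(scores) for group, scores in approved.items()}

def decide(model, group, score):
    """Apply the learned rule to a new case."""
    return "approve" if score >= model[group] else "reject"

model = learn_thresholds(history)
print(model)                   # {'A': 60, 'B': 65} -- a stricter rule for group B
print(decide(model, "A", 62))  # approve
print(decide(model, "B", 62))  # reject: the learned rule reproduces the skew
```

A deep learning system extracts far more complex and far less interpretable functions than this threshold rule, which is exactly why the “black box” and bias problems described above arise together.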
Those who are able to use this technology can become more powerful (corporations or governments),31 and can influence large numbers of people (for instance to vote in a certain way). It was Norbert Wiener who wrote in 1954 “(…) that such machines, though helpless by themselves, may be used by a human being or a block of human beings to increase their control over the rest of the race or that political leaders may attempt to control their populations by means not of machines themselves but through political techniques as narrow and indifferent to human possibility as if they had, in fact, been conceived mechanically.”32

If we think about regulation, we must not forget the unintended and unanticipated negative and/or positive consequences of AI systems, and that there might be a severe lack of predictability of these consequences.33 The use of AI will provide new and even better ways to improve our health system, to protect our environment and to allocate resources.34 However, plausible risk scenarios may show that the fear of the potential loss of human oversight is not per se irrational.35 They support the call for a “human in the loop”: that – for instance – a judge decides about the fate of a person, not an AI system, and a combatant decides about lethal or non-lethal force during an armed conflict, not an autonomous weapon. But to keep us as persons “in the loop” means that we need state-based regulation stressing this as a necessary precondition, at least in the areas where there are serious risks for the violation of human rights or human dignity. I agree with those who claim that it is important to understand the properties of AI systems if we think about AI regulation and governance, and that there is the need to look at the behavior of “black box” algorithms, similar to the behavior of animals, in real world settings.36

My hypothesis is that an AI system that serves human beings has to meet the “at least as good as a human being / human expert”37 threshold. This sets an even higher threshold than the one that is part of the idea of “beneficial machines”, defined as intelligent machines whose actions can be expected to achieve our objectives rather than their objectives.38

We also have to keep in mind the future development of AI systems and their interlinkage. I have spelled out so far features of so-called narrow AI or weak AI. Weak AI possesses specialized, domain-specific intelligence.39 In contrast, Artificial General Intelligence (AGI) will possess general intelligence, and strong AI could mean, as some claim, that AI systems “are actually thinking”.40 Whether there is a chance that AGI, and human-level or superhuman AI (the Singularity),41 will be possible within our lifetime is uncertain.42 It is not per se implausible to argue, as some scientists do, that an intelligence explosion leads to a dynamically unstable system, as smarter systems will have an easier time making themselves smarter,43 and that there will be a point beyond which it is impossible for us to make reliable predictions.44 And it seems convincing that if superintelligent AI was possible it would be a significant risk for humanity.45

3. Current and Future AI Regulation

a. Bases

For regulative issues, the differentiation of narrow AI versus AGI might be helpful as a starting point.

34 Iyad Rahwan/Manuel Cebrian/Nick Obradovich et al., Machine behaviour, Nature 568 (2019), 478.
35 Stressing the need to analyze risks, cf. Max Tegmark, Let’s Aspire to More Than Making Ourselves Obsolete, in John Brockman (ed.), Possible Minds – 25 Ways of Looking at AI, 2019, 76 et seq.; Stuart Russell, Human Compatible – Artificial Intelligence and the Problem of Control, 103 et seq.
36 Iyad Rahwan/Manuel Cebrian/Nick Obradovich et al., Machine behaviour, Nature 568 (2019), 478.
37 Depending on the area the AI system is deployed in, the system has to be measured against the human expert that usually is allowed to fulfil the task (as for instance an AI diagnosis system). This differs from the view of the German Datenethikkommission, as the commission argues that there is an ethical obligation to use AI systems if they fulfil a certain task better than a human, cf. Gutachten der Datenethikkommission, 2019, 172.
38 Stuart J. Russell, Human Compatible, 2019, pp. 172 et seq.
39 Some claim that weak AI means that AI-driven machines act “as if they were intelligent”, cf. Stuart J. Russell/Peter Norvig, Artificial Intelligence – A Modern Approach, 3rd ed., 2016, 1035.
40 Stuart J. Russell/Peter Norvig, ibid., 1035; Murray Shanahan, The Technological Singularity, 2015, 3.
41 The term “the Singularity” was coined in 1993 by the computer scientist and author Vernor Vinge; he was convinced that “[w]ithin thirty years, we will have the technological means to create superhuman intelligence,” and he concluded: “I think it’s fair to call this event a singularity (‘the Singularity’ for the purpose of this paper).” See Vernor Vinge, The Coming Technological Singularity: How to Survive in the Post-Human Era, in Geoffrey A. Landis (ed.), Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace (1993), 11, 12 (NASA Publication CP10129), available at: https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19940022856.pdf.
42 Stuart J. Russell, The Purpose Put into the Machine, in John Brockman (ed.), Possible Minds: 25 Ways of Looking at AI, 2019, 20 et seq., 26. Some experts predict that superhuman intelligence will happen by 2050, see e.g. Ray Kurzweil, The Singularity is Near, 2005, 127; for more forecasts, see Nick Bostrom, Superintelligence – Paths, Dangers, Strategies, 2014, 19–21.
43 Eliezer Yudkowsky, Artificial Intelligence as a positive and negative factor in global risk, in Nick Bostrom/Milan Ćirković (eds.), Global Catastrophic Risks, 2011, 341.
44 Max Tegmark, Will There Be a Singularity within Our Lifetime?, in John Brockman (ed.), What Should We Be Worried About?, 2014, 30, 32.
45 Stuart J. Russell, The Purpose Put into the Machine, in John Brockman (ed.), Possible Minds: 25 Ways of Looking at AI, 2019, 26.
46 For a similar approach, however less based in the risks for the violation of human rights, see Gutachten der Datenethikkommission, 2019, 173.
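What the call for a “human in the loop” could amount to at the level of system design can be sketched very roughly as follows. The decision categories, function names and the rule that high-impact decisions always require human confirmation are assumptions made for this illustration only, not a restatement of any existing legal standard.

```python
# Rough sketch of a "human in the loop" gate: an AI system may propose a
# decision, but decisions with serious consequences for a person are only
# acted upon after explicit confirmation by a responsible human.
# The categories and the confirmation interface are hypothetical.

HIGH_IMPACT = {"sentencing", "use_of_force", "medical_treatment"}

def execute_decision(category, ai_recommendation, human_confirm):
    """Return the decision that may actually be acted upon.

    `human_confirm` is a callable standing in for whatever interface lets
    the accountable person review, change or reject the recommendation.
    """
    if category in HIGH_IMPACT:
        return human_confirm(ai_recommendation)   # the human keeps the last word
    return ai_recommendation                      # low-impact: automation allowed

# Example: the human reviewer overrides the system's proposal.
decision = execute_decision(
    "sentencing",
    ai_recommendation="detain",
    human_confirm=lambda proposal: "release_on_bail",
)
print(decision)  # release_on_bail
```

The legal question discussed in this paper is, of course, not how such a gate is programmed, but whether and where state-based regulation makes it mandatory.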
It is more convincing, however, to find categories that show the possible (negative) impact of AI systems on core human rights, human dignity and on constitutional rights, such as protection against discrimination, the right to life, the right to health, the right to privacy, and the right to take part in elections, etc.46 From this perspective, even developments such as a fast take-off scenario, which means that an AGI system becomes super-intelligent because of a recursive self-improvement cycle,47 that are difficult to predict, must not be neglected, as we can think about how to frame low probability high impact scenarios in a proportional way.48

b. Sector-Specific Rules and Multilevel Regulation

When speaking about governance and regulation, it is important to differentiate between rules that are legally binding, on the one hand, and non-binding soft law, on the other hand. In the area of international, European Union, and national law, we see that at least parts of AI-driven technology are covered by existing sector-specific rules.

(1) AI Systems Driven by (Big) Data

The General Data Protection Regulation (GDPR)49 aims to protect personal data50 of natural persons (art. 1 (1) GDPR) and applies to the processing of this data even by wholly automated means (art. 2 (1) GDPR).51 The GDPR requires an informed consent52 of the consumer if somebody wants to use his or her data. It can be seen as sector-specific law governing AI systems, as AI systems often make use of large amounts of personal data. The general principles that are laid down for – inter alia – the processing of personal data (including lawfulness, fairness and transparency53) and the collection of personal data (purpose limitation) in art. 5 GDPR are applicable with regard to AI systems,54 and have to be implemented via appropriate technical and organizational measures by the controller (art. 25 GDPR).55 According to art. 22 GDPR we, as data subjects, have the right “not to be subject to a decision based solely on automated processing” that produces legal effects concerning the data subject or similarly affects him or her.56 Substantive legitimacy of these regulations is given because the GDPR is in coherence with the human rights that bind EU organs and can be reviewed and implemented by the European Court of Justice and the German Constitutional Court,57 especially art. 8 of the Charter of Fundamental Rights of the European Union (EUChHR)58 that lays down the protection of personal data.59

Like every regulation and law, the GDPR has lacunae, and there might be relevant lacunae in the area of AI-driven technology, as for instance with regard to brain data that is used for consumer technology.60 The decisive question is whether all relevant aspects of brain data protection are already covered by the protection of health data (art. 4 (15) GDPR) or biometric data (art. 4 (14) GDPR) that are defined in the regulation.61

(2) AI Systems as Medical Devices

Besides, there is the EU Regulation on Medical Devices (MDR),62 which governs certain AI-driven apps in the health sector and other AI-driven medical devices, for instance in the area of neurotechnology.63 And again, one has to ask whether this regulation is sufficient to protect the human dignity, life and health of consumers, as the impact on human dignity, life and health might be more far-reaching than that of the usual products that were envisaged by the drafters of the regulation. Although the new EU medical device regulation was adopted as recently as 2017, it includes only a so-called scrutiny process64 for high-risk products (certain class III devices), which is a consultation procedure prior to market approval. It is not a preventive permit procedure, differing from the permit procedure necessary for the market approval of new medicine (medicinal products), for which there is a detailed regulation at the national and even more at the European Union level,65 including a new Clinical Trial Regulation.66 That the preventive procedures differ depending on whether the object of the relevant laws is a “medical device” or a “medicinal product” is not convincing, if the risks involved for the health of a consumer are the same when comparing new drugs and certain new medical devices, as – for instance – new neurotechnology.

(3) AI Systems as (Semi-)Autonomous Cars

Sector-specific (top-down) regulation is already in force when it comes to the use of (semi-)autonomous cars. In Germany, the relevant national law was amended in 2017,67 before the competent federal ethics commission published its report,68 in order to include new highly or fully automated systems (§ 1a, § 1b and § 63 StVG). § 1a (1) StVG states that the operation of a car by means of a highly or fully automated driving function is permissible, provided the function is used for its intended purpose.69 However, what “intended purpose” means must be defined by the automotive company. Therefore § 1a (1) StVG contains a dynamic reference to the private standard-setting by a corporation that seems to be rather vague,70 especially if you think about the rule of law and the principle of “Rechtsklarheit”, which means that legal rules have to be clear and understandable.71

It is true even with regard to the applicable international treaties that sector-specific law can be amended and changed (even at the international level) if it is necessary to adapt the old rules to now AI-driven systems. The UN/ECE 1958 Agreement72 was amended in 2017 and 2018 (the UN Regulations No. 7973 and No. 13-H74) to have a legal basis for the use of (semi-)autonomous cars.75

The examples mentioned above show that detailed, legally binding laws and regulations are already in force to regulate AI systems at the international, European, and national level. According to this, the “narrative” is not correct which includes the claim that (top-down) state-based regulation lags (or: must lag) behind the technical development, especially in the area of a fast-moving disruptive technology such as AI. It seems rather convincing to argue instead that whether there is meaningful regulation in the field of AI depends on the political will to regulate AI systems at the national, European, and international level.

(4) AI Systems as (Semi-)Autonomous Weapons

The political will to regulate will depend on the interest(s) and preferences of states, especially with regard to economic goals and security issues, as in most societies (democratic or undemocratic) there seems to be broad consensus that economic growth of the national economy is a (primary) aim and providing national security is the most important legitimate goal of a state. This might explain why there are at the international level – at least until now – areas where there is no consensus to regulate AI systems, as a regulation is seen as a limiting force for economic growth and/or national security. This is obvious with regard to (semi-)autonomous weapons. Though a Group of Governmental Experts (GGE) on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (LAWS) was established in 2016 and has met in Geneva since 2017, convened through the Convention on Certain Conventional Weapons (CCW), and a report of the 2019 session of the GGE is published,76 there are only guiding principles affirmed by the Group.77 These guiding principles stress, inter alia, the need for accountability (lit. b and d)78 and risk assessment measures as part of the design (lit. g). However, there is no agreement on a meaningful international treaty, and it is still disputed whether the discussion within the GGE should be limited to fully autonomous systems.79 The mostly state-driven discussions at the CCW have shown that some States are arguing for a prohibition as part of a new international treaty, like Austria, yet other States, like Russia80 and the US,81 are stressing the advantages82 of the development and use of (semi-)autonomous weapons.

47 Andrew Ng, in Martin Ford (ed.), Architects of Intelligence, 2018, 202.
48 For a governance framework of superintelligent AI as an existential risk, see Silja Voeneky, Human Rights and Legitimate Governance of Existential and Global Catastrophic Risks, in Silja Voeneky/Gerald Neuman (eds.), Human Rights, Democracy, and Legitimacy in Times of Disorder, 2018, 160 et seq.
49 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27.04.2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC, in force since 25.05.2018, cf. OJEU L 119/1, 04.05.2016.
50 Art. 4 (1) GDPR: “‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person”.
51 However, art. 2 (2) lit. c and d GDPR excludes from the material scope the processing as defined in art. 4 (2) GDPR of personal data by a natural person in the course “of a purely personal or household activity”, and by the competent authorities for the purposes inter alia “of the prevention (…) or prosecution of criminal offences”.
52 Cf. art. 7, art. 4 (11) GDPR: “(…) ‘consent’ of the data subject means any freely given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her;”.
53 See as well art. 12 GDPR.
54 See art. 6 (4) GDPR.
55 With regard to the responsible and accountable person or entity (“the controller” according to art. 4 (7) GDPR) and further duties of the controller see art. 5 (2) (“accountability”), art. 32 (“security of processing”) and art. 35 GDPR (“data protection impact assessment”). For a discussion in Germany how to apply the GDPR to AI systems see, inter alia, the Entschließung der 97. Konferenz der unabhängigen Datenschutzaufsichtsbehörden des Bundes und der Länder, 03.04.2019 (“Hambacher Erklärung zur Künstlichen Intelligenz”), available at: https://www.datenschutzkonferenz-online.de/media/en/20190405_hambacher_erklaerung.pdf. For the claim that there is a need for a new EU Regulation for AI systems, see the Gutachten der Datenethikkommission, 2019, 180, proposing a “EU-Verordnung für Algorithmische Systeme” (EUVAS).
56 See as well Gutachten der Datenethikkommission, 2019, 191 et seq.
57 The German Constitutional Court declared to be competent to review the application of national legislation on the basis of the rights of the Charter of Fundamental Rights of the European Union even in an area that is fully harmonized according to EU law, cf. BVerfG, Decision of 06.11.2019, 1 BvR 276/17, Right to be forgotten II.
58 Cf. OJEC C 364/1, 18.12.2000.
59 Art. 8 EUChHR: “Protection of personal data. (1) Everyone has the right to the protection of personal data concerning him or her. (2) Such data must be processed fairly for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law. Everyone has the right of access to data which has been collected concerning him or her, and the right to have it rectified. (3) Compliance with these rules shall be subject to control by an independent authority.”
60 Cf. Oscar Schwartz, Mind-Reading Tech? How Private Companies could gain Access to Our Brains, The Guardian, 24.10.2019, available at: https://www.theguardian.com/technology/2019/oct/24/mind-reading-tech-private-companies-access-brains.
61 To discuss this in detail is beyond the scope of this paper, but it is one area of research of the Saltus-FRIAS Responsible AI Research Group the author is part of.
62 Regulation (EU) 2017/745 of the European Parliament and of the Council of 05.04.2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC, OJEU L 117/1, 05.05.2017. It came into force May 2017, but medical devices will have a transition time of three years (until May 2020) to meet the new requirements.
63 Art. 2 MDR: “(…) ‘medical device’ means any instrument, apparatus, appliance, software, implant, reagent, material or other article intended by the manufacturer to be used, alone or in combination, for human beings for one or more of the following specific medical purposes: (…)”. For exemptions see, however, art. 1 (6) MDR.
64 Cf. art. 54, 55, art. 106 (3), Annex IX Section 5.1, Annex X Section 6 MDR.
65 Cf. the Pharmaceutical legislation for medicinal products of human use, Vol. 1, including different Directives and Regulations, available at: https://ec.europa.eu/health/documents/eudralex/vol1_de.
66 §§ 21 et seq. Gesetz über den Verkehr mit Arzneimitteln (Arzneimittelgesetz, AMG), BGBl. I, 1626; Regulation (EU) No 536/2014 of the European Parliament and of the Council of 16.04.2014 on clinical trials on medicinal products for human use, OJEU L 158/1, 27.05.2014.
67 Art. 1 Achtes Gesetz zur Änderung des Straßenverkehrsgesetzes (8. StVGÄndG), 16.06.2017, BGBl. I 1648.
68 Ethik-Kommission „Automatisiertes und vernetztes Fahren“ des Bundesministeriums für Verkehr und digitale Infrastruktur, Report June 2017, available at: https://www.bmvi.de/SharedDocs/DE/Publikationen/DG/bericht-der-ethik-kommission.pdf?__blob=publicationFile.
69 § 1a (1) StVG: „Der Betrieb eines Kraftfahrzeugs mittels hoch- und vollautomatisierter Fahrfunktion ist zulässig, wenn die Funktion bestimmungsgemäß verwendet wird.“
70 This seems true even if the description of the intended purpose and the level of automation shall be “unambiguous” according to the rationale of the law maker, cf. BT-Drucks. 18/11300, 20: „Die Systembeschreibung des Fahrzeugs muss über die Art der Ausstattung mit automatisierter Fahrfunktion und über den Grad der Automatisierung unmissverständlich Auskunft geben, um den Fahrer über den Rahmen der bestimmungsgemäßen Verwendung zu informieren.“
71 Bernd Grzeszick, art. 20, in Roman Herzog/Rupert Scholz/Matthias Herdegen/Hans Klein (eds.), Maunz/Dürig Grundgesetz-Kommentar, para. 51–57.
72 Agreement Concerning the Adoption of Harmonized Technical United Nations Regulations for Wheeled Vehicles, Equipment and Parts which can be Fitted and/or be Used on Wheeled Vehicles and the Conditions for Reciprocal Recognition of Approvals Granted on the Basis of these United Nations Regulations.
73 Addendum 78: UN Regulation No. 79 Rev. 3, ECE/TRANS/WP.29/2016/57, ECE/TRANS/WP.29/2017/10 (as amended by paragraph 70 of the report ECE/TRANS/WP.29/1129), 30.11.2017, “Uniform provisions concerning the approval of vehicles with regard to steering equipment”: “2.3.4.1. ‘Automatically commanded steering function (ACSF)’ means a function within an electronic control system where actuation of the steering system can result from automatic evaluation of signals initiated on-board the vehicle, possibly in conjunction with passive infrastructure features, to generate control action in order to assist the driver. 2.3.4.1.1. ‘ACSF of Category A’ means a function that operates at a speed no greater than 10 km/h to assist the driver, on demand, in low speed or parking manoeuvring. 2.3.4.1.2. ‘ACSF of Category B1’ means a function which assists the driver in keeping the vehicle within the chosen lane, by influencing the lateral movement of the vehicle. 2.3.4.1.3. ‘ACSF of Category B2’ means a function which is initiated/activated by the driver and which keeps the vehicle within its lane by influencing the lateral movement of the vehicle for extended periods without further driver command/confirmation. 2.3.4.1.4. ‘ACSF of Category C’ means a function which is initiated/activated by the driver and which can perform a single lateral manoeuvre (e.g. lane change) when commanded by the driver. 2.3.4.1.5. ‘ACSF of Category D’ means a function which is initiated/activated by the driver and which can indicate the possibility of a single lateral manoeuvre (e.g. lane change) but performs that function only following a confirmation by the driver. 2.3.4.1.6. ‘ACSF of Category E’ means a function which is initiated/activated by the driver and which can continuously determine the possibility of a manoeuvre (e.g. lane change) and complete these manoeuvres for extended periods without further driver command/confirmation.”
74 Addendum 12-H: UN Regulation No. 13-H, ECE/TRANS/WP.29/2014/46/Rev.1 and ECE/TRANS/WP.29/2016/50, 05.06.2018, “Uniform provisions concerning the approval of passenger cars with regard to braking”: 2.20. “‘Automatically commanded braking’ means a function within a complex electronic control system where actuation of the braking system(s) or brakes of certain axles is made for the purpose of generating vehicle retardation with or without a direct action of the driver, resulting from the automatic evaluation of on-board initiated information.”
75 To understand the relevance of these regulations in a multi-level regulation system one has to take into account that other international, European and national provisions refer directly or indirectly to the UN/ECE Regulations, cf. e.g. art. 8 (5bis) and art. 39 of the Vienna Convention on Road Traffic; art. 21 (1), 29 (3), 35 (2) of the European Directive 2007/46/EC (“Framework Directive”); § 1a (3) StVG.
76 Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects (CCW), Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, Geneva, 25.–29.03.2019 and 20.–21.08.2019, Report of the 2019 session, CCW/GGE.1/2019/3, 25.09.2019, available at: https://undocs.org/en/CCW/GGE.1/2019/3.
77 Ibid., Annex IV, 13 et seq.
78 Ibid., Annex IV: (b) “Human responsibility for decisions on the use of weapons systems must be retained since accountability cannot be transferred to machines. This should be considered across the entire life cycle of the weapons system; (…) (d) Accountability for developing, deploying and using any emerging weapons system in the framework of the CCW must be ensured in accordance with applicable international law, including through the operation of such systems within a responsible chain of human command and control;”.
79 For this view and a definition see the working paper (WP) submitted by the Russian Federation, CCW/GGE.1/2019/WP.1, 15.03.2019, para. 5: “unmanned technical means other than ordnance that are intended for carrying out combat and support missions without any involvement of the operator”, expressly excluding unmanned aerial vehicles as highly automated systems.
80 WP submitted by the Russian Federation, CCW/GGE.1/2019/WP.1, para. 2: “The Russian Federation presumes that potential LAWS can be more efficient than a human operator in addressing the tasks by minimizing the error rate. (…).”
81 WP submitted by the USA, CCW/GGE.1/2019/WP.5, 28.03.2019, para. 2 lit. c: “Emerging technologies in the area of LAWS could strengthen the implementation of IHL, by, inter alia, reducing the risk of civilian casualties, facilitating the investigation or reporting of incidents involving potential violations, enhancing the ability to implement corrective actions, and automatically generating information on unexploded ordnance.”; cf. as well ibid., para. 15.
82 WP submitted by the Russian Federation, CCW/GGE.1/2019/WP.1, para. 10: “The Russian Federation is convinced that the issue of LAWS is extremely sensitive. While discussing it, the GGE should not ignore potential benefits of such systems in the context of ensuring States’ national security. (…)”.
83 WP submitted by France, CCW/GGE.2/2018/WP.3, stressing inter alia the principles of command responsibility, ibid. para. 6, and stressing a “central role for human command in the use of force” (para. 12): “(…) In this regard, the command must retain the ability to take final decisions regarding the use of lethal force including within the framework of using systems with levels of autonomy or with various artificial intelligence components.”
84 Even the German Datenethikkommission stresses that there is not per se a “red line” with regard to autonomous weapons as long as the killing of human beings is not determined by an AI system, Gutachten der Datenethikkommission, 2019, 180.
85 WP submitted by the Russian Federation, CCW/GGE.1/2019/WP.1, para. 7. For a different approach see the ICRC Working Paper on Autonomy, AI and Robotics: Technical Aspects of Human Control, CCW/GGE.1/2019/WP.7, 20.08.2019.
86 16.12.1971, 1015 U.N.T.S. 163, entered into force 26.03.1975. The BWC allows research on biological agents for preventive, protective or other peaceful purposes; however, this treaty does not provide sufficient protection against the risks of misuse of research because research conducted for peaceful purposes is neither limited nor prohibited.
87 29.01.2000, 2226 U.N.T.S. 208, entered into force 11.09.2003.
88 The Nagoya-Kuala Lumpur Supplementary Protocol on Liability and Redress to the Cartagena Protocol on Biosafety, 15.10.2010, entered into force 05.03.2018.
89 The term “international soft law” is understood in this paper to cover rules that cannot be attributed to a formal legal source of public international law and that are, hence, not directly legally binding but have been agreed upon by subjects of international law (i.e. States, international organizations) that could, in principle, establish international hard law; for a similar definition see Daniel Thürer, Soft Law, in Rüdiger Wolfrum (ed.), Max Planck Encyclopedia of Public International Law, 2012, Vol. 9, 271, para. 8. The notion does not include private rule making by corporations (including codes of conduct) or mere recommendations by stakeholders, non-governmental organisations and other private entities.
90 See above note 9.
91 Cf. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.
Germany and France83 do not support an international treaty but opted for a soft law code of conduct with regard to framing the use of those weapons.84 Besides, key elements of a governance regime of (semi-)autonomous weapons are unclear. What is meant by “human control over the operation of such systems” remains under discussion, even if a State declares this to be an important limiting factor. Russia, for instance, argues that “the control system of LAWS should provide for intervention by a human operator or the upper-level control system to change the mode of operation of such systems, including partial or complete deactivation”.85 With this, Russia eliminates meaningful human control as a necessary precondition to use (semi-)autonomous weapons. The “human in the loop” as a last resort of using lethal weapons and the subject of responsibility – with the last resort to convict somebody as a war criminal – is replaced by the upper-level control system that might be another AI system.

(5) First Conclusion

The examples mentioned above show the loopholes of the international regulation of AI systems, although there are specific rules in place in some areas, mostly at the European level. But more importantly, there is no coherent, general, or universal international regulation of AI as part of international hard law. Although there are lacunae in other areas as well (thus far no international treaty on existential and global catastrophic risks and scientific research exists), this widespread international non-regulation of AI research and development is different from another field of fast-moving technological progress: biotechnology. In the field of biotechnology there are treaties, like the Biological Weapons Convention (BWC),86 the Convention on Biological Diversity, the Cartagena Protocol on Biosafety,87 and the Kuala Lumpur Liability Protocol,88 that are applicable in order to prohibit research that is not aimed at peaceful purposes or to diminish risks related to the genetic modification of living organisms. Therefore, it is important to look closer at the first attempt to adopt general AI principles at the international level as part of international soft law.89

II. OECD AI Recommendations as International Soft Law

1. Basis and Content

The OECD issued recommendations on AI in 2019,90 and 43 States have adopted these principles,91 including relevant actors in the field of AI such as the US, South Korea, Japan, the UK, France, and Germany, as well as States that are not members of the OECD. The recommendations were drafted with the help of an expert group (AIGO) that consists of 50 members from – as the OECD writes – governments,92 academia, business, civil society etc., including IBM, Microsoft, Google, Facebook, DeepMind, as well as invited experts from MIT.93 The OECD claims that these Principles will be a global reference point for trustworthy AI.94 It refers to the notion of trustworthy AI, as did the High-Level Expert Group on AI (AI HLEG) set up by the EU, which published Ethics Guidelines on AI in April 2019 listing seven key requirements that AI systems shall meet to be trustworthy.95

The OECD recommendations state and spell out five complementary value-based “principles for responsible stewardship of trustworthy AI” (section 1):96 these are inclusive growth, sustainable development and well-being (1.1); human-centered values and fairness (1.2); transparency and explainability (1.3); robustness, security and safety (1.4); and accountability (1.5). In addition, AI actors – meaning those who play an active role in the AI system lifecycle, including organizations and individuals that deploy or operate AI97 – should respect the rule of law, human rights and democratic values (1.2 lit. a). These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognized labor rights. But the wording of the principles is very soft. For instance, AI actors should implement “mechanisms and safeguards, such as capacity for human determination, that are appropriate to the context and consistent with the state of the art” (1.2 lit. b). The recommendation about transparency and explainability (1.3) has only slightly more substance. It states that AI actors “[…] should provide meaningful information, appropriate to the context, and consistent with the state of art […] (iv.) to enable those adversely affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision.” Additionally, it states that “AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle, on a continuous basis to address risks related to AI systems, including privacy, digital security, safety and bias.” (1.4 lit. c). If we think that discrimination and unjustified biases are one of the key problems of AI,98 asking for a risk management approach to avoid these problems does not seem to be sufficient as a standard of AI actor (corporation) due diligence.

92 Germany did send one member (Policy Law: Digital Work and Society, Federal Ministry for Labour and Social Affairs), Japan two, as well as France, and the European Commission; South Korea did send three members, as did the USA (US Department of State, US Department of Commerce, US National Science Foundation).
93 Cf. https://www.oecd.org/going-digital/ai/oecd-aigo-membership-list.pdf.
94 Cf. OECD Website: What are the OECD Principles on AI?, https://www.oecd.org/going-digital/ai/principles/.
95 These are: 1. Human agency and oversight; 2. Technical robustness and safety; 3. Privacy and data governance; 4. Transparency; 5. Diversity, non-discrimination and fairness; 6. Societal and environmental well-being; 7. Accountability.
96 An AI system is defined as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.” Cf. I OECD AI Recommendations.
97 Ibid., I OECD AI Recommendations.
98 See above at note 28. See as well Gutachten der Datenethikkommission, 2019, 194.
99 Silja Vöneky, Recht, Moral und Ethik, 2010, 284 et seq.
100 See above at note 93.
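The Recommendations leave open what a “systematic risk management approach to each phase of the AI system lifecycle” (1.4 lit. c) would require in practice. One possible, purely hypothetical, reading is a documented risk register per lifecycle phase, as in the following sketch; the phases, risks and mitigations are invented and do not restate the OECD text.

```python
# Hypothetical sketch of a per-phase risk register for an AI system,
# loosely inspired by the OECD call for a "systematic risk management
# approach" across the lifecycle; phases, risks and mitigations are invented.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Risk:
    description: str
    severity: str          # e.g. "low", "medium", "high"
    mitigation: str        # empty string = not yet mitigated

@dataclass
class LifecyclePhase:
    name: str              # e.g. "data collection", "training", "deployment"
    risks: List[Risk] = field(default_factory=list)

    def open_high_risks(self):
        """Return high-severity risks for which no mitigation is documented."""
        return [r for r in self.risks if r.severity == "high" and not r.mitigation]

register = [
    LifecyclePhase("data collection", [
        Risk("personal data processed without valid consent", "high", "consent audit"),
        Risk("historical bias in labels", "high", ""),          # still open
    ]),
    LifecyclePhase("deployment", [
        Risk("no channel to challenge automated decisions", "medium", "appeal process"),
    ]),
]

for phase in register:
    for risk in phase.open_high_risks():
        print(f"unmitigated high risk in '{phase.name}': {risk.description}")
# -> unmitigated high risk in 'data collection': historical bias in labels
```

Whether such internal documentation duties amount to a sufficient due diligence standard is exactly what the critique in this section doubts.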
And the wor­ding with regard to accoun­ta­bi­li­ty is soft as well (1.5): “AI actors should be accoun­ta­ble for the pro­per func­tio­ning of AI sys­tems and for the respect of the abo­ve prin­ci­ples, based on their roles, the con­text and con­sis­tent with the sta­te for the art.” This does not mean and does not men­ti­on any legal lia­bi­li­ty or legal respon­si­bi­li­ty.” 2. (Dis-)Advantages and Legi­ti­ma­cy The OECD recom­men­da­ti­ons show some of the advan­ta­ges and dis­ad­van­ta­ges that we see in the area of inter­na­tio­nal soft law. The advan­ta­ges are that they can be draf­ted in a short peri­od of time (the working group star­ted in 2018); that they can include experts from the rele­vant fields and sta­te offi­ci­als; that they can spell out and iden­ti­fy an exis­ting over­lap­ping con­sen­sus of mem­ber sta­tes, here the OECD mem­ber sta­tes; and that they might deve­lop some kind of nor­ma­ti­ve force even if they are not legal­ly bin­ding as an inter­na­tio­nal treaty.99 Howe­ver, the dis­ad­van­ta­ges of the OECD recom­men­da­ti­ons are obvious as well. First­ly, the basis for the pro­ce­du­ral legi­ti­ma­cy is unclear as to which experts are allo­wed to par­ti­ci­pa­te is not enti­re­ly clear. In the field of AI, experts are employ­ed, paid, or clo­se­ly lin­ked to AI corporations100 hence, the advice they give is not (enti­re­ly) inde­pen­dent. If an Inter­na­tio­nal Orga­ni­sa­ti­on (IO) or Sta­te one wants to enhan­ce pro­ce­du­ral legi­ti­ma­cy for AI recom­men­da­ti­ons, one should rely on dif­fe­rent groups: one of the inde­pen­dent experts with no (finan­cial) links to cor­po­ra­ti­ons, one of the experts working for cor­po­ra­ti­ons, and a third group con­sis­ting of civil socie­ty and 1 8 O RDNUNG DER WISSENSCHAFT 1 (2020), 9–22 101 See below Part III. 102 The argu­ments at part III. 1.–3. were published in my paper Human Rights and Legi­ti­ma­te Gover­nan­ce of Exis­ten­ti­al and Glo­bal Cata­stro­phic Risks, in Sil­ja Voeneky/Gerald Neu­man (eds.), Human Rights, Demo­cra­cy, and Legi­ti­ma­cy in Times of Dis­or­der, 2018, 149. 103 For the basis on the con­cept and noti­on of “legi­ti­ma­cy”, see Sil­ja Voe­neky, Recht, Moral und Ethik, 2010, 130–162. For dis­cus­sion of the legi­ti­ma­cy of inter­na­tio­nal law, see Allen Buchanan, The Legi­ti­ma­cy of Inter­na­tio­nal Law, in Saman­tha Besson/John Tasiou­las (eds.), The Phi­lo­so­phy of Inter­na­tio­nal Law, 2010, 79–96; John Tasiou­las, Legi­ti­ma­cy of Inter­na­tio­nal Law, in Saman­tha Besson/ John Tasiou­las (eds.), The Phi­lo­so­phy of Inter­na­tio­nal Law, 2010, at 97–116. 104 A deon­to­lo­gi­cal theo­ry of ethics is one which holds that at least some acts are moral­ly obli­ga­to­ry regard­less of their con­se­quen­ces, see Robert G. Olson, in Paul Edward (ed.), The Ency­clo­pe­dia of Phi­lo­so­phy, 1967, 1–2, 343. 105 Which means that the­se views main­tain that “it is some­ti­mes wrong to do what pro­du­ces the best available out­co­me over­all” as the­se views incor­po­ra­te “agent-cent­red rest­ric­tions,” see Samu­el Scheff­ler, The Rejec­tion of Con­se­quen­tia­lism, 1994, 2. 106 On “direct” and “act” uti­li­ta­ria­nism, see Richard B. Brandt, Facts, Values, and Mora­li­ty, 1996, 142; for the noti­on of act-con­se­quen­tia­lism and clas­si­cal uti­li­ta­ria­nism see Samu­el Scheff­ler, supra note 105, at 2–3; for an over­view see John C. Smart, Uti­li­ta­ria­nism, in Paul Edward (ed.), The Ency­clo­pe­dia of Phi­lo­so­phy 1967, 7–8, 206. NGO mem­bers. 
States or IOs could then compare the recommendations, discuss the differences, and choose or combine the most convincing ones. Secondly, we have to discuss substantive legitimacy, because the OECD recommendations do not stress the responsibility of governments to protect human rights in the area of AI. They include only five recommendations to policymakers ("adherents", section 2) that shall be implemented in national policies and international cooperation consistent with the principles mentioned above. These include investing in AI research and development (2.1), fostering a digital ecosystem for AI (2.2), shaping an enabling policy environment for AI (2.3), building human capacity and preparing for labor market transformation (2.4), and international cooperation for trustworthy AI (2.5).

3. Second Conclusion

As a conclusion of this second part, one could state that the OECD recommendations lower the threshold too far and shift the focus too far away from States – as the main actors of the international community and as those obliged to protect human rights101 – towards private actors. This is a major disadvantage because, although these recommendations exist, it is still unclear what state obligations can be deduced from legally binding human rights – including the relevant human rights treaties and rules of customary law – with regard to the governance of AI. Besides, the recommendations that address private actors and their responsibilities are drafted in language that is too soft and vague. As a result, I argue that the OECD Recommendations could and should have been more meaningful with regard to standards of due diligence and responsibility in the age of AI for private actors and – even more so – with regard to state duties to protect human rights. The latter aspect might even lead to a trend of undermining state duties to protect human rights in times of AI – and this could undermine the relevance of human rights for AI regulation as a whole.

III. Legitimacy, Human Rights and AI Regulation

The question to be answered in this third part is: why are human rights decisive with regard to the regulation of AI, and how can we defend the link between legitimacy and human rights in the field of AI regulation?

1. Legitimacy

I start with the notion of legitimacy. As I have written before, legitimacy should be viewed primarily as a normative, not a descriptive, concept:102 it refers to standards of justification of governance, regulation and obligations. Hence, legitimate governance or regulation means that the guiding norms and standards have to be justifiable in a supra-legal way (i.e. they possess rational acceptability). If we think about international regulation, it seems fruitful to link the notion of "legitimate regulation" to the existing legal order of public international law. Without saying that legality is sufficient for legitimacy, I argue that guiding norms and standards have to be coherent with existing international law insofar as international law reflects moral (i.e. justified) values.103

2. Ethical Paradigms

We have to state that there are different ethical paradigms that can justify regulation in the field of AI in a supra-legal way.
One is the human rights-based approach, which can be considered a deontological concept,104 as the rightness or wrongness of conduct is derived from the character of the behavior itself.105 Another approach is utilitarianism, which can be described as the doctrine that "one should perform that act, among those that on the evidence are available to one, that will most probably maximise benefits".106 It seems important to note that the different normative ethical theories are based on reasonable grounds (i.e. they possess rational acceptability),107 and one cannot decide whether there is a theory that clearly trumps the others. Therefore, in looking for standards that are the bases of legitimate regulation of AI systems, it is not fruitful to decide whether one normative ethical theory is in general terms the most convincing one, but rather which ethical paradigm seems the most convincing with regard to the specific questions that we have to deal with when framing AI systems.

3. Human Rights-based AI Regulation

As I have argued before with regard to the regulation of existential risks,108 I argue that AI regulation and governance should be based on human rights, more precisely on legally binding human rights. Other ethical approaches shall not be ruled out as far as they are compatible with human rights. But I reject views that argue that utilitarian standards should be the primary standard to measure the legitimacy of an AI regulatory regime.109 The arguments supporting this claim are the following: regulating AI is a global challenge. Hence, it would be a major deficit not to rely on human rights. They are part of existing international law. They are not only rooted in moral discourse as universal values, but they also bind many, or even all, States (as treaty law or customary law), and they can be implemented by courts or other institutional means. They are laid down in human rights treaties such as the European Convention on Human Rights (ECHR)110 and the International Covenant on Civil and Political Rights (ICCPR). The latter is a universal human rights treaty that is binding on more than 170 States Parties,111 including major AI-relevant actors like the USA. What seems even more important is that when we turn to a human rights framework, we see that international legal human rights make it possible to spell out the decisive values that must be taken into account for assessing different AI research, development and deployment scenarios. In the area of AI research, freedom of research is decisive as a legally binding human right, encompassed by the rights to freedom of thought and freedom of expression laid down in the ICCPR as a universal international human rights treaty.
However, this freedom is not absolute: the protection, for instance, of the life and health of human beings, of privacy, and against discrimination are legitimate aims that can justify proportionate limitations of this right.112 The human rights framework therefore stresses that there is a need to find proportionate limitations in the field of AI research if there are dangers or risks113 for human life and health or privacy.
What limits to the freedom of research are justified depends on the probability of the realization of a risk114 and the severity of the possible harm. Therefore, demands of rational risk-benefit assessment can and should be part of the interpretation of human rights, as there is a need to avoid disproportionate means in order to minimize risks even in low or unknown probability cases: what proportionality means is linked to the risks and benefits one can reasonably anticipate in the area of AI. To carry out a risk-benefit assessment of the AI system in question, as far as this is possible and rational, is therefore an important element in implementing the human rights framework. Besides, even so-called first generation human rights, as laid down in the ICCPR, oblige States not only to respect but also to protect the fundamental rights of individuals.115 States parties are obliged by international human rights treaties to take appropriate (legal) measures to protect, inter alia, the life and health of individuals.116 And although States have wide discretion in how they protect human rights, the measures taken must not be ineffective. Last but not least, a human rights-based approach requires procedural rights for individuals to participate in the making of decisions that affect them in the area of AI developments. To rely on human rights means that we have to spell out in more detail how to enhance procedural legitimacy. These arguments might show that the core of the regulation and governance problem – that AI systems should serve us as human beings and not the other way around – can be expressed best on the basis of a human rights framework. It is correct that human rights law, even the right to life, does not aim to protect humanity as such, but to protect individuals.117 However, humanity consists of us as individuals.
Even if we do not argue that human rights protect future generations, we may not neglect that individuals born today can have a life expectancy of more than 70 years in many States, and these individuals are protected by human rights law. Hence, it seems consistent with the object and purpose of human rights treaties to view human rights law, and the duties of States towards human beings arising from human rights, over a 70-year period.

IV. Future AI Regulation

In this paper, I spell out the deficiencies of current AI regulations, including international soft law (parts I and II), and I argue why international law and international human rights are and should be the basis for a legitimate global AI regulation and risk reduction regime (part III). This approach makes it possible to develop rules with regard to AI systems in coherence with the relevant and morally justified values of a humane world order that aims for future scientific and technological advances in a responsible manner, including the human right to life, the right to non-discrimination, the right to privacy and the right to freedom of science. However, this is only a first step, as current human rights norms and treaties are a basis and a starting point. Therefore, there is a need – as a second step – to specify the general human rights by negotiating a human rights-based UN or UNESCO soft law declaration on "AI Ethics and Human Rights". This new declaration could and should avoid the disadvantages of the 2019 OECD AI recommendations.
For this, we should identify those areas of AI research, development, and deployment which entail severe risks for core human rights.118 A future universal "AI Ethics and Human Rights"119 declaration should include sector-specific rules, based on human rights, that protect the most vulnerable rights and human dignity at the international level – for instance, by protecting brain data. And this declaration could and should merge principles of "AI ethics",120 such as the principles of fairness, accountability, explainability and transparency,121 with human rights, as long as these principles of AI ethics are coherent with and specify human rights in the field of AI.122

Silja Vöneky is a Professor at the Albert-Ludwigs-Universität Freiburg, holds the Chair for Public International Law, Comparative Law and Legal Ethics (Lehrstuhl für Völkerrecht, Rechtsvergleichung und Rechtsethik), and is a Fellow of the FRIAS Saltus Group Responsible AI.

Footnotes

96 An AI system is defined as "a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy." Cf. I OECD AI Recommendations.

97 Ibid., I OECD AI Recommendations.

98 See above at note 28. See as well Gutachten der Datenethikkommission, 2019, 194.

99 Silja Vöneky, Recht, Moral und Ethik, 2010, 284 et seq.

100 See above at note 93.

101 See below Part III.

102 The arguments at Part III.1.–3. were published in my paper Human Rights and Legitimate Governance of Existential and Global Catastrophic Risks, in Silja Voeneky/Gerald Neuman (eds.), Human Rights, Democracy, and Legitimacy in Times of Disorder, 2018, 149.

103 For the basis of the concept and notion of "legitimacy", see Silja Voeneky, Recht, Moral und Ethik, 2010, 130–162. For a discussion of the legitimacy of international law, see Allen Buchanan, The Legitimacy of International Law, in Samantha Besson/John Tasioulas (eds.), The Philosophy of International Law, 2010, 79–96; John Tasioulas, Legitimacy of International Law, in Samantha Besson/John Tasioulas (eds.), The Philosophy of International Law, 2010, 97–116.

104 A deontological theory of ethics is one which holds that at least some acts are morally obligatory regardless of their consequences, see Robert G. Olson, in Paul Edward (ed.), The Encyclopedia of Philosophy, 1967, 1–2, 343.

105 Which means that these views maintain that "it is sometimes wrong to do what produces the best available outcome overall" as these views incorporate "agent-centred restrictions," see Samuel Scheffler, The Rejection of Consequentialism, 1994, 2.

106 On "direct" and "act" utilitarianism, see Richard B. Brandt, Facts, Values, and Morality, 1996, 142; for the notion of act-consequentialism and classical utilitarianism see Samuel Scheffler, supra note 105, 2–3; for an overview see John C. Smart, Utilitarianism, in Paul Edward (ed.), The Encyclopedia of Philosophy, 1967, 7–8, 206.

107 In order to argue this way we have to answer the question what our criteria of rational acceptability are. My answer is based on the arguments by the philosopher Hilary Putnam that our criteria of rational acceptability are, inter alia, coherence, consistency, and relevance; that "fact (or truth) and rationality are interdependent notions" but that, nevertheless, no neutral understanding of rationality exists, as the criteria of "rational acceptability rest on and presuppose our values", and the "theory of truth presupposes theory of rationality which in turn presupposes our theory of good". Putnam concluded that the theory of the good is "itself dependent upon assumptions about human nature, about society, about the universe (including theological and metaphysical assumptions)." See Hilary Putnam, Reason, Truth and History, 1981, 198, 201, 215.

108 This and the arguments at III.2. and 3. were published in my paper Human Rights and Legitimate Governance of Existential and Global Catastrophic Risks, in Silja Voeneky/Gerald Neuman (eds.), Human Rights, Democracy, and Legitimacy in Times of Disorder, 2018, 151 et seq.

109 In many cases, neither the risks nor the benefits of AI research and development can be quantified; the risk of misuse of AI systems by criminals, mentioned above, cannot be quantified; the unclear or unpredictable benefits of basic AI research cannot be quantified either – nevertheless, basic research may often be the necessary condition in order to achieve benefits for human beings in the long run. These are drawbacks of a utilitarian risk-benefit approach for some of the AI scenarios described above. For the lack of predictability surrounding the consequences of AI, cf. Iyad Rahwan/Manuel Cebrian/Nick Obradovich et al., Machine behaviour, Nature 568 (2019), 477. For a general discussion of the human rights approach versus utilitarianism see Herbert L. A. Hart, Between Utility and Rights, Colum. L. Rev. 79 (1979), 828. For a discussion of a combination of utilitarianism and other value-based approaches (autonomy, diversity) and reference to the Universal Declaration of Human Rights for the codification of moral principles applicable to future AI, see Max Tegmark, Life 3.0, 2017, 271–75.

110 The International Covenant on Civil and Political Rights, adopted by G.A. Res. 2200A (XXI), 16.12.1966, entered into force 23.03.1976, 999 U.N.T.S. 171, and the European Convention on Human Rights, adopted by the Members of the Council of Europe, 04.11.1950, available at: http://www.echr.coe.int/Documents/Convention_ENG.pdf.

111 Art. 18, 19 ICCPR; art. 9, 10 ECHR. A different approach is taken, however, in the Charter of Fundamental Rights of the European Union, art. 13 (Freedom of the arts and sciences). There it is expressly laid down that "The arts and scientific research shall be free of constraint. Academic freedom shall be respected." Similar norms are included in national constitutions, see e.g. Grundgesetz für die Bundesrepublik Deutschland, art. 5 (3) (23.05.1949), which states that "Arts and sciences, research and teaching shall be free. The freedom of teaching shall not release any person from allegiance to the constitution."

112 The legitimate aims for which the right of freedom of expression and the right of freedom of science can be limited according to the International Covenant on Civil and Political Rights and the European Convention on Human Rights are even broader. See art. 19 (3) ICCPR, art. 10 (2) ECHR.

113 Risk can be defined as an "unwanted event which may or may not occur", see Sven O. Hansson, Risk, in Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy, available at: https://plato.stanford.edu/entries/risk/. There is no accepted definition of the term in public international law; it is unclear how – and whether – a "risk" is different from a "threat", a "danger" and a "hazard", see Grant Wilson, Minimizing Global Catastrophic and Existential Risks from Emerging Technologies through International Law, Virginia Environmental L.J. 31 (2013), 307, 310.

114 AI governance means in many cases the governance of risks, as many impacts of AI are unclear and it is even unclear whether there will be something like AGI or a singularity, see above note 42. But human rights can be used as a basis for human-centered risk governance. It was Robert Nozick who showed that an extension of a rights-based moral theory to indeterministic cases is possible, as a duty not to harm other people can be extended to a duty not to perform actions that increase their risk of being harmed. See Silja Voeneky, Human Rights and Legitimate Governance of Existential and Global Catastrophic Risks, in Silja Voeneky/Gerald Neuman (eds.), Human Rights, Democracy, and Legitimacy in Times of Disorder, 2018, 153.

115 It is an obligation to protect, not only an obligation to respect; see U.N. Commission on Human Rights, Res. 2005/69, 29.04.2005, U.N. Doc. E/CN.4/2005/L.10/Add.17; Committee on Economic, Social and Cultural Rights, General Comment No 13, para. 46 (1999), reprinted in U.N. Doc. HRI/GEN/1/Rev.9, 72 (2008).

116 For the right to life, art. 6 (1) ICCPR, the second sentence provides that the right to life "shall be protected by law." In addition, the right to life is the precondition for the exercise of any other human right, part of customary international law and enshrined in all major general human rights conventions. The European Court of Human Rights has stressed the positive obligation to protect human life in several decisions; for an overview see Niels Petersen, Life, Right to, International Protection, in Rüdiger Wolfrum (ed.), Max Planck Encyclopedia of Public International Law, 2012, Vol. 6, 866. Nevertheless, the U.S. has not accepted that there exists a duty to protect against private interference due to art. 6 ICCPR; see Observations of the United States of America On the Human Rights Committee's Draft General Comment No. 36, On Article 6 – Right to Life, para. 30–38 (06.10.2017), available at: http://www.ohchr.org/EN/HRBodies/CCPR/Pages/GC36-Article6Righttolife.aspx.

117 An exception – as part of a soft law declaration – is art. 2 (b) of the Cairo Declaration on Human Rights in Islam, 05.08.1990, adopted by Organization of the Islamic Conference Res. No. 49/19-P (1990).

118 I rely on those human rights that are part of the human rights treaties; whether there is a need for new human rights in the time of AI, such as a right to digital autonomy (digitale Selbstbestimmung), as the German Datenethikkommission argues (cf. Gutachten der Datenethikkommission, 2019, available at: https://www.bmjv.de/SharedDocs/Downloads/DE/Themen/Fokusthemen/Gutachten_DEK_DE.pdf?__blob=publicationFile&v=5), or whether a new human right that could be claimed by corporations would undermine basic human rights of natural persons, is still open to discussion.

119 Similar to the UNESCO Declaration on "Bioethics and Human Rights", 19.10.2005, available at: http://portal.unesco.org/en/ev.php-URL_ID=31058&URL_DO=DO_TOPIC&URL_SECTION=201.html.

120 As was shown in Part II, at least some of the principles are already part of AI sector-specific regulation.

121 For the notion of and the need for transparency see Gutachten der Datenethikkommission, 2019, 169 et seq., 175, 185, 215 (Transparenz, Erklärbarkeit und Nachvollziehbarkeit).

122 Besides, there is an urgent need, with regard to risks related to AI systems, to have proactive preventive regulation in place, backed by meaningful rules for operator incentives to reduce risks beyond pure operator liability; for a proposal see a paper by Thorsten Schmidt/Silja Vöneky on "How to regulate disruptive technologies?" (forthcoming 2020).