
“We’re making tools not colleagues, and the great danger is not appreciating the difference, which we should strive to accentuate, marking and defending it with political and legal innovations. (…) We don’t need artificial conscious agents. (…) We need intelligent tools.”2
Daniel C. Dennett

“We may hope that machines will eventually compete with men in all purely intellectual fields.”3
Alan M. Turing

One major challenge of the 21st century to humankind is the widespread use of Artificial Intelligence (AI). Hardly any day passes without news about the disruptive force of AI – both good and bad. Some warn that AI could be the worst event in the history of our civilization. Others stress the chances of AI diagnosing, for instance, cancer, or supporting humans in the form of autonomous cars. But because AI is so disruptive, the call for its regulation is widespread, including the call by some actors for international treaties banning, for instance, so-called “killer robots”. Nevertheless, until now there is no consensus on how and to what extent we should regulate AI. This paper examines whether we can identify key elements of responsible AI, spells out what exists as “top down” regulation, and how new guidelines, such as the 2019 OECD Recommendations on AI, can be part of a solution to regulate AI systems. In the end, a solution shall be proposed that is coherent with international human rights to frame the challenges posed by AI that lie ahead of us without undermining science and innovation; reasons are given why and how a human rights based approach to responsible AI should inspire a new declaration at the international level.

Introduction

Everything about AI is a hype. It is labeled a disruptive technology. Its transformative force is compared to that of electricity.
It is said that just as electricity transformed peoples’ lives and industries 100 years ago, AI will now transform our lives.4 As we are incorporating AI systems into our life, we benefit from the efficiencies that come from AI systems (AIs).5 However, a technology like AI is, first of all, a tool. I argue, as the philosopher Daniel C. Dennett argues, that AIs are tools and should be regarded and treated as tools. They are tools with a specific quality and power, because AI systems can be used for multiple purposes, and will imitate and replace human beings in many intelligent activities, shape human behavior and even change us as human beings in the process6 in intended and unintended ways.7 But even if AIs could be in principle as autonomous as a person, they lack our vulnerability and mortality.8 This means that as long as we develop, sell and use AI, we can and have to decide how we frame the rules and norms governing AI. As always when we have the chance to get a new, powerful technological tool, we have to answer the question how we can make sure that we as a society will make the right choices – or at least minimize the risk that we will make the wrong choices; and how do we decide what is right and wrong – especially as the field of AI is an area hardly anybody understands fully. I argue that these are questions that cannot be answered

Silja Vöneky
Key Elements of Responsible Artificial Intelligence – Disruptive Technologies, Dynamic Law1

1 The background of this paper is my research on questions of democratic legitimacy in ethical decision making as a Director of an Independent Max Planck Research School in Heidelberg on biotechnology governance, and on the governance of existential risks as a Fellow at Harvard Law School (2015–2016).
I am grateful for the inspiration and exchange with the members of our FRIAS Saltus Research Group “Responsible AI”, Philipp Kellmeyer (Neurology, Neuroethics), Oliver Müller (Philosophy), and Wolfram Burgard (Robotics) over the last months. I want to thank the research assistants Tobias Crone, Isabella Beck, Eva Böning, and Gideon Wheeler for their valuable support.
2 Daniel C. Dennett, What can we do?, in John Brockman (ed.), Possible Minds – 25 Ways of Looking at AI, 2019, 46, 51.
3 Alan M. Turing, Computing Machinery and Intelligence, Mind LIX, 1950, 433 et seq. (reprinted in Margaret A. Boden (ed.), The Philosophy of Artificial Intelligence, 1990, 65 et seq.).
4 Andrew Ng, in Martin Ford (ed.), Architects of Intelligence, 2018, 185, 190.
5 Iyad Rahwan/Manuel Cebrian/Nick Obradovich et al., Machine behaviour, Nature 568 (2019), 477, 484.
6 Norbert Wiener, The Human Use of Human Beings, 1954, 96.
7 Iyad Rahwan/Manuel Cebrian/Nick Obradovich et al., Machine behaviour, Nature 568 (2019), 478; Daniel C. Dennett, What can we do?, in John Brockman (ed.), Possible Minds – 25 Ways of Looking at AI, 2019, 43.
8 Daniel C. Dennett, ibid., 51 et seq.
Ordnung der Wissenschaft 2020, ISSN 2197–9197; Ordnung der Wissenschaft 1 (2020), 9–22.
9 OECD Recommendation of the Council on Artificial Intelligence, adopted 22.05.2019 (OECD Principles on AI); cf. OECD/LEGAL/0449, available at: instruments/OECD-LEGAL-0449; in German (unofficial translation) “Empfehlung des Rats zu künstlicher Intelligenz” available at:
10 Stuart J. Russell/Peter Norvig, Artificial Intelligence – A Modern Approach, 3rd ed, 2016, 1.
Others define the field of AI as “a field devoted to building artificial animals (or at least artificial creatures that – in suitable contexts – appear to be animals), and, for many, artificial persons (or at least artificial creatures that – in suitable contexts – appear to be persons).” For this and a discussion of different approaches see Selmer Bringsjord/Naveen Sundar Govindarajulu, Artificial Intelligence, in Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy (SEP), Winter 2019 Ed.
11 Stuart J. Russell/Peter Norvig, Artificial Intelligence – A Modern Approach, 3rd ed, 2016, 1.
12 Stuart J. Russell/Peter Norvig, ibid., 2.
13 The famous and often quoted so-called Turing Test by Alan M. Turing is a behavioral intelligence test that shall provide an operational definition of intelligence. According to this test, a program passes the test if a human interrogator, after posing written questions via online typed messages for five minutes, cannot tell whether the written answers are given by a human being or a computer, cf. Alan M. Turing, Computing Machinery and Intelligence, Mind LIX, 1950, 433 et seq. (reprinted in Margaret A. Boden (ed.), The Philosophy of Artificial Intelligence, 1990, 40 et seq.); for a discussion see Stuart J. Russell/Peter Norvig, Artificial Intelligence – A Modern Approach, 3rd ed, 2016, 1036 et seq.
14 Iyad Rahwan/Manuel Cebrian/Nick Obradovich et al., Machine behaviour, Nature 568 (2019), 477, 483.
15 An algorithm is a process (or program) that a computer can follow. It, for instance, defines a process to analyze a dataset and identify patterns in the data; in more general terms it can be described as a sequence of instructions that are carried out to transform the input to the output, see John D. Kelleher, Deep Learning, 2019, 7; Ethem Alpaydin, Machine Learning – The New AI, 2016, 16.
16 Iyad Rahwan/Manuel Cebrian/Nick Obradovich et al., Machine behaviour, Nature 568 (2019), 477.
by individuals, corporations or States only, but have to be answered by the international community as a whole as well, because AI research, development and deployment, and the related effects, are not limited to the territory of a State but are transnational and global. This paper is a starting point to discuss key elements of responsible AI. Although the notion of intelligence in Artificial Intelligence might suggest otherwise, AI as a technology is not per se “good”, neither is it “bad”. The first part spells out features of AI systems, and identifies benefits and risks of developing and using AI systems, in order to show challenges for regulating these tools (see below I). The international governance dimension is stressed in the second part. There I will look closer at the Recommendations on Artificial Intelligence by the Organisation for Economic Co-operation and Development (OECD) that were adopted in 2019 (see below II).9 These are the first universal international soft law rules that try to govern and frame AI in a general way. Thirdly, I argue that we should stress the link between human rights and the regulation of AI systems, and highlight the advantages of an approach in regulating AI that is based on legally binding human rights that are part of the existing international legal order (see below III).

I. AI Systems as Multipurpose Tools – Challenges for Regulation

1. Notions and Foundations

When we try to understand what AI means as a technology, we realize that there seem to be many aspects and applications relevant and linked to AI systems: from facial recognition systems to predictive policing, from the AI called AlphaGo playing the game Go, to social bots and algorithmic traders, from autonomous cars to – maybe even – autonomous weapons. A first question we should answer is: How can we explain AI to someone who does not know what AI is, but wants to join and should join the discourse on regulation and governance? A simple start would be to claim that a key feature of the field of AI is the goal to build intelligent entities.10 An AI system could be defined as a system that is intelligent, i.e. rational, in the way and to the extent that it does the “right thing”, given what it knows.11 However, this is only one definition of an AI system. The standard textbook quotes eight definitions by different authors laid out along two dimensions: two aspects measure the success of an AI system in relation to human performance (“thinking humanly”; “acting humanly”), and two aspects measure the success of an AI system in relation to ideal performance (“thinking rationally”; “acting rationally”).12 But even if those are correct who state that AI is concerned with rational or intelligent behavior in artifacts, the underlying question is whether it is correct to state that the notion of “intelligence” means the same as the notion of “rationality”.13 It seems reasonable to claim that AI systems exhibit forms of intelligence that are qualitatively different from those seen in humans or animals as biological agents.14 As a basic description one might state that AI tools are based on complex or simple algorithms15 used to make decisions, and are created to solve particular tasks.
Autonomous cars, for instance, must drive (in a given time, without causing accidents or violating laws) to a certain place, and game-playing AI systems should challenge or even win against a human being.16 As AI is expected to fulfill a certain task, there are required preconditions for a system to be able to “do the
17 The idea of a learning machine was discussed by Alan M. Turing, Computing Machinery and Intelligence, Mind LIX, 1950, 433 et seq. (reprinted in Margaret A. Boden (ed.), The Philosophy of Artificial Intelligence, 1990, 64 et seq.).
18 In general, different types of feedback can be part of the machine learning process. There is unsupervised learning (no explicit feedback is given), reinforcement learning (the system learns based on rewards or “punishments”), and supervised learning, which means in order to teach a system what a tea cup is, you have to show it thousands of tea cups, cf. Stuart J. Russell/Peter Norvig, Artificial Intelligence – A Modern Approach, 3rd ed, 2016, 706 et seq.
19 Ethem Alpaydin, Machine Learning – The New AI, 2016, 16 et seq.
20 John D. Kelleher, Deep Learning, 2019, 6; Ethem Alpaydin, Machine Learning – The New AI, 2016, 16 et seq.
21 John D. Kelleher, Deep Learning, 2019, 8.
22 John D. Kelleher, Deep Learning, 2019, 1: “Deep learning is the subfield of artificial intelligence that focuses on creating large neural network models that are capable of making accurate data-driven decisions.” Ethem Alpaydin, Machine Learning – The New AI, 2016, 104: “With few assumptions and little manual interference, structures similar to the hierarchical cone are being automatically learned from large amounts of data.
(…) This is the idea behind deep neural networks where, starting from the raw input, each hidden layer combines the values in its preceding layer and learns more complicated functions of the input.”
23 Eric Topol, Deep Medicine, 2019, 9 et seq., 16 et seq.
24 See Yann LeCun et al., Deep Learning, Nature 521 (2015), 436–444, available at: n7553/full/nature14539.html.
25 John D. Kelleher, Deep Learning, 2019, 4.
26 Eric Topol, Deep Medicine, 2019, 10.
27 Iyad Rahwan/Manuel Cebrian/Nick Obradovich et al., Machine behaviour, Nature 568 (2019), 478.
28 Andrew Ng, in Martin Ford (ed.), Architects of Intelligence, 2018, 20; Gutachten der Datenethikkommission, 2019, 167 f.
29 Iyad Rahwan/Manuel Cebrian/Nick Obradovich et al., Machine behaviour, Nature 568 (2019), 478.
30 Stuart Russell, Human Compatible – Artificial Intelligence and the Problem of Control, 2019, 253 et seq.
31 W. Daniel Hillis, The First Machine Intelligences, in John Brockman (ed.), Possible Minds – 25 Ways of Looking at AI, 2019, 172, 173.
32 Norbert Wiener, The Human Use of Human Beings, 1954, 181.
33 Stuart Russell, Human Compatible – Artificial Intelligence and the Problem of Control, 2019, 103 et seq.; Iyad Rahwan/Manuel Cebrian/Nick Obradovich et al., Machine behaviour, Nature 568 (2019), 477 et seq.
right thing”. Depending on the areas of use, key AI capabilities are natural language processing (speech recognition), reasoning, learning, perception, and action (robotics).
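As a purely illustrative aside, outside the legal argument, the notion of an algorithm used here – a sequence of instructions that transforms an input to an output, which in machine learning is extracted ("learned") from example data rather than written by hand – can be made concrete in a few lines. All data and function names in this sketch are invented for illustration; real AI systems learn far more complicated functions:

```python
# Illustrative sketch only: "learning" as extracting a function from a dataset.
# A least-squares line is fitted to example (input, output) pairs; the fitted
# function is then used to transform new, unseen inputs to outputs.

def fit_line(points):
    """Extract slope and intercept from (x, y) example pairs by least squares."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    var_x = sum((x - mean_x) ** 2 for x, _ in points)
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in points)
    slope = cov_xy / var_x
    return slope, mean_y - slope * mean_x

# The "dataset": example input-output pairs shown to the system,
# instead of an explicitly programmed rule.
examples = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.0), (4.0, 8.1)]
slope, intercept = fit_line(examples)

def predict(x):
    """The learned function: transforms a new input to an output."""
    return slope * x + intercept

print(predict(5.0))  # output for an input the system has never seen
```

The sketch mirrors, in the simplest possible setting, the idea quoted from Kelleher and Alpaydin: the mapping from input to output is not programmed as a rule but identified as a pattern in the data.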
Especially learning17 is a key ability of modern AI systems,18 as for some problems it is unclear how to transform the input to the output.19 This means that algorithms are developed that enable the machine to extract functions from a dataset to fulfill a certain task.20 So-called deep learning, the field of machine learning that focuses on deep neural networks,21 is the central part of current AI systems if large datasets are available, as for face recognition on digital cameras22 or in the field of medicine to diagnose certain illnesses.23 Deep learning mechanisms that are able to improve themselves without human interaction and without rule-based programming already exist today.24 As John Kelleher puts it: “Deep learning enables data-driven decisions by identifying and extracting patterns from large datasets”.25 It is not astonishing that since 2012 the number of new deep learning AI algorithms has grown exponentially,26 but as the functional processes that generate the output are not clear (or at least hard to interpret), the problem of the complexity and opacity of algorithms that seem to be “black boxes” is obvious as well.27

2. Risks and Chances

The “black boxes” problem shows that it is important, if we think about AI regulation or governance, to look at the different risks and chances that can be linked to the development and use of AI systems. Questions of concern that are raised are related to our democratic order (news ranking algorithms, “algorithmic justice”), kinetics (autonomous cars and autonomous weapons), our economy and markets (algorithmic trading and pricing), and our society (conversational robots). A major and inherent risk if a system learns from data is that bias in AI systems can hardly be avoided.
At least if AI learns from human-generated (text) data, it can or even will include health, gender or racial stereotypes.28 Some claim, however, that there are better ways for reducing bias in AI than for reducing bias in humans, so AI systems may be or become less biased than humans.29 Besides, there are risks of misuse, if AI systems are used to commit crimes, as for instance fraud.30 Another risk is that AI technologies have the potential for greater concentration of power. Those who are able to use this technology can become more powerful (corporations or governments),31 and can influence large numbers of people (for instance to vote in a certain way). It was Norbert Wiener who wrote in 1954 “(…) that such machines, though helpless by themselves, may be used by a human being or a block of human beings to increase their control over the rest of the race or that political leaders may attempt to control their populations by means not of machines themselves but through political techniques as narrow and indifferent to human possibility as if they had, in fact, been conceived mechanically.”32 If we think about regulation, we must not forget the unintended and unanticipated negative and/or positive consequences of AI systems and that there might be a severe lack of predictability of these consequences.33 The
34 Iyad Rahwan/Manuel Cebrian/Nick Obradovich et al., Machine behaviour, Nature 568 (2019), 478.
35 Stressing the need to analyze risks, cf. Max Tegmark, Let’s Aspire to More Than Making Ourselves Obsolete, in John Brockman (ed.), Possible Minds – 25 Ways of Looking at AI, 2019, 76 et seq.; Stuart Russell, Human Compatible – Artificial Intelligence and the Problem of Control, 2019, 103 et seq.
36 Iyad Rahwan/Manuel Cebrian/Nick Obradovich et al., Machine behaviour, Nature 568 (2019), 478.
37 Depending on the area in which the AI system is deployed, the system has to be measured against the human expert that usually is allowed to fulfil the task (as for instance an AI diagnosis system). This differs from the view of the German Datenethikkommission, as the commission argues that there is an ethical obligation to use AI systems if they fulfil a certain task better than a human, cf. Gutachten der Datenethikkommission, 2019, 172.
38 Stuart J. Russell, Human Compatible, 2019, pp. 172 et seq.
39 Some claim that weak AI means that AI-driven machines act “as if they were intelligent”, cf. Stuart J. Russell/Peter Norvig, Artificial Intelligence – A Modern Approach, 3rd ed, 2016, 1035.
40 Stuart J. Russell/Peter Norvig, ibid., 1035; Murray Shanahan, The Technological Singularity, 2015, 3.
41 The term “the Singularity” was coined in 1993 by the computer scientist and author Vernor Vinge; he was convinced that “[w]ithin thirty years, we will have the technological means to create superhuman intelligence,” and he concluded: “I think it’s fair to call this event a singularity (“the Singularity” for the purpose of this paper).” See Vernor Vinge, The Coming Technological Singularity: How to Survive in the Post-Human Era, in Geoffrey A. Landis (ed.), Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace (1993), 11, 12 (NASA Publication CP10129), available at:
42 Stuart J. Russell, The Purpose Put into the Machine, in John Brockman (ed.), Possible Minds: 25 Ways of Looking at AI, 2019, 20 et seq., 26.
Some experts predict that superhuman intelligence will happen by 2050, see e.g., Ray Kurzweil, The Singularity is Near, 2005, 127; for more forecasts, see Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, 2014, at 19–21.
43 Eliezer Yudkowsky, Artificial Intelligence as a positive and negative factor in global risk, in Nick Bostrom/Milan Ćirković (eds.), Global Catastrophic Risks, 2011, at 341.
44 Max Tegmark, Will There Be a Singularity within Our Lifetime?, in John Brockman (ed.), What Should We Be Worried About?, 2014, 30, 32.
45 Stuart J. Russell, The Purpose Put into the Machine, in John Brockman (ed.), Possible Minds: 25 Ways of Looking at AI, 2019, 26.
46 For a similar approach, however less based on the risks for the violation of human rights, see Gutachten der Datenethikkommission, 2019, 173.
use of AI will provide new and even better ways to improve our health system, to protect our environment and to allocate resources.34 However, plausible risk scenarios may show that the fear of the potential loss of human oversight is not per se irrational.35 They support the call for a “human in the loop”: that – for instance – a judge decides about the fate of a person, not an AI system, and a combatant decides about lethal or non-lethal force during an armed conflict, not an autonomous weapon. But to keep us as persons “in the loop” means that we need state-based regulation stressing this as a necessary precondition, at least in the areas where there are serious risks for the violation of human rights or human dignity.
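The "human in the loop" demand can be stated, again purely as an illustration and not as a description of any existing system, as an architectural constraint: the automated component may only recommend, while the binding decision is reserved to an identified person. All names in this sketch are invented:

```python
# Illustrative sketch only: a "human in the loop" gate. The automated
# recommendation never takes legal effect on its own; a named human
# decision-maker must confirm or reject it, so accountability stays
# with a person. All identifiers here are hypothetical.

from dataclasses import dataclass

@dataclass
class Recommendation:
    subject: str          # the case or person concerned
    proposed_action: str  # what the AI system suggests
    model_confidence: float

def final_decision(rec: Recommendation, human_approves: bool, reviewer: str) -> dict:
    """Only the human reviewer's choice produces the binding outcome."""
    return {
        "subject": rec.subject,
        "action": rec.proposed_action if human_approves else "rejected",
        "decided_by": reviewer,           # a person, never the system
        "machine_role": "advisory only",  # the system merely recommends
    }

rec = Recommendation("case-42", "grant parole", 0.87)
outcome = final_decision(rec, human_approves=False, reviewer="Judge X")
print(outcome["action"], outcome["decided_by"])
```

The design choice the sketch isolates is that no code path lets `proposed_action` become the outcome without the `human_approves` input, which is the structural sense in which a judge, not the AI system, decides.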
I agree with those who claim that it is important to understand the properties of AI systems if we think about AI regulation and governance, and that there is the need to look at the behavior of “black box” algorithms, similar to the behavior of animals, in real world settings.36 My hypothesis is that an AI system that serves human beings has to meet the “at least as good as a human being / human expert”37 threshold. This sets an even higher threshold than the one that is part of the idea of “beneficial machines”, defined as intelligent machines whose actions can be expected to achieve our objectives rather than their objectives.38 We also have to keep in mind the future development of AI systems and their interlinkage. I have spelled out so far features of so-called narrow AI or weak AI. Weak AI possesses specialized, domain-specific, intelligence.39 In contrast, Artificial General Intelligence (AGI) will possess general intelligence, and strong AI could mean, as some claim, that AI systems “are actually thinking”.40 Whether there is a chance that AGI, and human-level or superhuman AI (the Singularity),41 will be possible within our lifetime is uncertain.42 It is not per se implausible to argue, as some scientists do, that an intelligence explosion leads to a dynamically unstable system, as smarter systems will have an easier time making themselves smarter,43 and that there will be a point beyond which it is impossible for us to make reliable predictions.44 And it seems convincing that if superintelligent AI was possible, it would be a significant risk for humanity.45

3. Current and Future AI Regulation

a. Bases

For regulative issues, the differentiation of narrow AI versus AGI might be helpful as a starting point.
It is more convincing, however, to find categories that show the possible (negative) impact of AI systems on core human rights, human dignity and on constitutional rights, such as protection against discrimination, the right to life, the right to health, the right to privacy, and the right to take part in elections, etc.46 From this perspective, even developments such as a fast take-off scenario, which means that an AGI system becomes super-intelligent because of
47 Andrew Ng, in Martin Ford (ed.), Architects of Intelligence, 2018, 202.
48 For a governance framework of superintelligent AI as an existential risk, see Silja Voeneky, Human Rights and Legitimate Governance of Existential and Global Catastrophic Risks, in Silja Voeneky/Gerald Neuman (eds.), Human Rights, Democracy, and Legitimacy in Times of Disorder, 2018, 160 et seq.
49 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27.04.2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC, in force since 25.05.2018, cf. OJEU L119/1, 04.05.2016.
50 Art. 4 (1) GDPR: “‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person”.
51 However, art. 2 (2) lit. c and d GDPR excludes from the material scope the processing as defined in art.
4 (2) GDPR of personal data by a natural person in the course “of a purely personal or household activity”, and by the competent authorities for the purposes inter alia “of the prevention (…) or prosecution of criminal offences”.
52 Cf. art. 7, art. 4 (11) GDPR: “(…) ‘consent’ of the data subject means any freely given, specific, informed and unambiguous indication of the data subject‘s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her;”.
53 See as well art. 12 GDPR.
54 See art. 6 (4) GDPR.
55 With regard to the responsible and accountable person or entity (“the controller” according to art. 4 (7) GDPR) and further duties of the controller see art. 5 (2) (“accountability”), art. 32 (“security of processing”) and art. 35 GDPR (“data protection impact assessment”). For a discussion in Germany how to apply the GDPR to AI systems see, inter alia, the Entschließung der 97. Konferenz der unabhängigen Datenschutzaufsichtsbehörden des Bundes und der Länder, 03.04.2019 (“Hambacher Erklärung zur Künstlichen Intelligenz”), available at: https://www.datenschutzkonferenz-online.de/media/en/20190405_hambacher_erklaerung.pdf. For the claim that there is a need for a new EU Regulation for AI systems, see the Gutachten der Datenethikkommission, 2019, 180, proposing a “EU-Verordnung für Algorithmische Systeme” (EUVAS).
56 See as well Gutachten der Datenethikkommission, 2019, 191 et seq.
57 The German Constitutional Court declared itself competent to review the application of national legislation on the basis of the rights of the Charter of Fundamental Rights of the European Union even in an area that is fully harmonized according to EU law, cf. BVerfG, Decision of 06.11.2019, 1 BvR 276/17, Right to be forgotten II.
58 Cf.
OJEC C 364/1, 18.12.2000.
59 Art. 8 EUChHR: “Protection of personal data. (1) Everyone has the right to the protection of personal data concerning him or her. (2) Such data must be processed fairly for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law. Everyone has the right of access to data which has been collected concerning him or her, and the right to have it rectified. (3) Compliance with these rules shall be subject to control by an independent authority.”
60 Cf. Oscar Schwartz, Mind-Reading Tech? How Private Companies could gain Access to Our Brains, The Guardian, 24.10.2019, available online at: oct/24/mind-reading-tech-private-companies-access-brains.
a recursive self-improvement cycle,47 that are difficult to predict, must not be neglected, as we can think about how to frame low probability high impact scenarios in a proportional way.48

b. Sector-Specific Rules and Multilevel Regulation

When speaking about governance and regulation, it is important to differentiate between rules that are legally binding, on the one hand, and non-binding soft law, on the other hand. In the areas of international, European Union, and national law, we see that at least parts of AI-driven technology are covered by existing sector-specific rules.

(1) AI Systems Driven by (Big) Data

The General Data Protection Regulation (GDPR)49 aims to protect personal data50 of natural persons (art. 1 (1) GDPR) and applies to the processing of this data even by wholly automated means (art. 2 (1) GDPR).51 The GDPR requires an informed consent52 of the consumer if somebody wants to use his or her data. It can be seen as sector-specific law governing AI systems, as AI systems often make use of large amounts of personal data.
The general principles that are laid down for – inter alia – the processing of personal data (including lawfulness, fairness and transparency53) and the collection of personal data (purpose limitation) in art. 5 GDPR are applicable with regard to AI systems,54 and have to be implemented via appropriate technical and organizational measures by the controller (art. 25 GDPR).55 According to art. 22 GDPR we, as data subjects, have the right “not to be subject to a decision based solely on automated processing” that produces legal effects concerning the data subject or similarly affects him or her.56 Substantive legitimacy of this regulation is given because the GDPR is in coherence with the human rights that bind EU organs and can be reviewed and implemented by the European Court of Justice and the German Constitutional Court,57 especially art. 8 of the Charter of Fundamental Rights of the European Union (EUChHR)58 that lays down the protection of personal data.59 Like every regulation and law, the GDPR has lacunae, and there might be relevant lacunae in the area of AI-driven technology, as for instance with regard to brain data that is used for consumer technology.60 The decisive question is whether all relevant aspects of brain data protection are already
61 To discuss this in detail is beyond the scope of this paper, but it is one area of research of the Saltus-FRIAS Responsible AI Research Group the author is part of.
62 Regulation (EU) 2017/745 of the European Parliament and of the Council of 05.04.2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC, OJEU L117/1, 05.05.2017.
It came into force in May 2017, but medical devices will have a transition time of three years (until May 2020) to meet the new requirements.
63 Art. 2 MDR: “(…) ‘medical device’ means any instrument, apparatus, appliance, software, implant, reagent, material or other article intended by the manufacturer to be used, alone or in combination, for human beings for one or more of the following specific medical purposes: (…)”. For exemptions see, however, art. 1 (6) MDR.
64 Cf. art. 54, 55, art. 106 (3), Annex IX Section 5.1, Annex X Section 6 MDR.
65 Cf. the Pharmaceutical legislation for medicinal products of human use, Vol. 1, including different Directives and Regulations, available at:
66 §§ 21 et seq. Gesetz über den Verkehr mit Arzneimitteln (Arzneimittelgesetz, AMG), BGBl. I, 1626; Regulation (EU) No 536/2014 of the European Parliament and of the Council of 16.04.2014 on clinical trials on medicinal products for human use, OJEU L 158/1, 27.05.2014.
67 Art. 1 Achtes Gesetz zur Änderung des Straßenverkehrsgesetzes (8. StVGÄndG), 16.06.2017, BGBl. I 1648.
68 Ethik-Kommission „Automatisiertes und vernetztes Fahren“ des Bundesministeriums für Verkehr und digitale Infrastruktur, Report June 2017, available at: DE/Publikationen/DG/bericht-der-ethik-kommission.pdf?__blob=publicationFile.
69 § 1a (1) StVG: „Der Betrieb eines Kraftfahrzeugs mittels hoch- und vollautomatisierter Fahrfunktion ist zulässig, wenn die Funktion bestimmungsgemäß verwendet wird.“
70 This seems true even if the description of the intended purpose and the level of automation shall be “unambiguous” according to the rationale of the law maker, cf.
BT-Drucks. 18/11300, 20: “Die Systembeschreibung des Fahrzeugs muss über die Art der Ausstattung mit automatisierter Fahrfunktion und über den Grad der Automatisierung unmissverständlich Auskunft geben, um den Fahrer über den Rahmen der bestimmungsgemäßen Verwendung zu informieren.“ 71 Bernd Grzeszick, art. 20, in Roman Herzog/Rupert Scholz/Matthias Herdegen/Hans Klein (eds.), Maunz/Dürig Grundgesetz-Kommentar, para. 51–57. 72 Agreement Concerning the Adoption of Harmonized Technical United Nations Regulations for Wheeled Vehicles, Equipment and Parts which can be Fitted and/or be Used on Wheeled Vehicles and the Conditions for Reciprocal Recognition of Approvals Granted on the Basis of these United Nations Regulations. covered by the protection of health data (art. 4 (15) GDPR) or biometric data (art. 4 (14) GDPR) that are defined in the regulation.61 (2) AI Systems as Medical Devices Besides, there is the EU Regulation on Medical Devices (MDR),62 which governs certain AI-driven apps in the health sector and other AI-driven medical devices, for instance, in the area of neurotechnology.63 And again, one has to ask whether this regulation is sufficient to protect the human dignity, life and health of consumers, as the impact on human dignity, life and health might be more far-reaching than with the usual products that were envisaged by the drafters of the regulation. Although the new EU medical device regulation was adopted in 2017, it includes a so-called scrutiny process64 for high-risk products (certain class III devices), which is a consultation procedure prior to market approval.
It is not a preventive permit procedure, differing from the permit procedure necessary for the market approval of new medicines (medicinal products), for which there is detailed regulation at the national and even more at the European Union level,65 including a new Clinical Trial Regulation.66 That the preventive procedures differ depending on whether the object of the relevant laws is a “medical device” or a “medicinal product” is not convincing if the risks involved for the health of a consumer are the same when comparing new drugs and certain new medical devices, such as – for instance – new neurotechnology. (3) AI Systems as (Semi-)Autonomous Cars Sector-specific (top-down) regulation is already in force when it comes to the use of (semi-)autonomous cars. In Germany, the relevant national law was amended in 2017,67 before the competent federal ethics commission published its report,68 in order to include new highly or fully automated systems (§ 1a, § 1b and § 63 StVG). § 1a (1) StVG states that the operation of a car by means of a highly or fully automated driving function is permissible, provided the function is used for its intended purpose.69 However, what “intended purpose” means must be defined by the automotive company. Therefore § 1a (1) StVG contains a dynamic reference to the private standard-setting by a corporation that seems rather vague,70 especially with regard to the rule of law and the principle of “Rechtsklarheit”, which means that legal rules have to be clear and understandable.71 It is true even with regard to the applicable international treaties that sector-specific law can be amended and changed (even at the international level) if it is necessary to adapt the old rules to now AI-driven systems.
The UN/ECE 1958 Agreement72 was amended in 2017 and 2018 (the Vöneky · Artificial Intelligence 15 73 Addendum 78: UN Regulation No. 79 Rev. 3, ECE/TRANS/WP.29/2016/57 ECE/TRANS/WP.29/2017/10 (as amended by paragraph 70 of the report ECE/TRANS/WP.29/1129), 30.11.2017, “Uniform provisions concerning the approval of vehicles with regard to steering equipment”: “‘Automatically commanded steering function (ACSF)’ means a function within an electronic control system where actuation of the steering system can result from automatic evaluation of signals initiated on-board the vehicle, possibly in conjunction with passive infrastructure features, to generate control action in order to assist the driver. ‘ACSF of Category A’ means a function that operates at a speed no greater than 10 km/h to assist the driver, on demand, in low speed or parking manoeuvring. ‘ACSF of Category B1’ means a function which assists the driver in keeping the vehicle within the chosen lane, by influencing the lateral movement of the vehicle. ‘ACSF of Category B2’ means a function which is initiated/activated by the driver and which keeps the vehicle within its lane by influencing the lateral movement of the vehicle for extended periods without further driver command/confirmation. ‘ACSF of Category C’ means a function which is initiated/activated by the driver and which can perform a single lateral manoeuvre (e.g. lane change) when commanded by the driver. ‘ACSF of Category D’ means a function which is initiated/activated by the driver and which can indicate the possibility of a single lateral manoeuvre (e.g. lane change) but performs that function only following a confirmation by the driver.
‘ACSF of Category E’ means a function which is initiated/activated by the driver and which can continuously determine the possibility of a manoeuvre (e.g. lane change) and complete these manoeuvres for extended periods without further driver command/confirmation.” 74 Addendum 12‑H: UN Regulation No. 13‑H, ECE/TRANS/WP.29/2014/46/Rev.1 and ECE/TRANS/WP.29/2016/50, 05.06.2018, “Uniform provisions concerning the approval of passenger cars with regard to braking”: 2.20. “‘Automatically commanded braking’ means a function within a complex electronic control system where actuation of the braking system(s) or brakes of certain axles is made for the purpose of generating vehicle retardation with or without a direct action of the driver, resulting from the automatic evaluation of on-board initiated information.” 75 To understand the relevance of these regulations in a multi-level regulation system one has to take into account that other international, European and national provisions refer directly or indirectly to the UN/ECE Regulations, cf. e.g. art. 8 (5bis) and art. 39 of the Vienna Convention on Road Traffic; art. 21 (1), 29 (3), 35 (2) of the European Directive 2007/46/EC (“Framework Directive”); § 1a (3) StVG. 76 Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects (CCW), Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, Geneva, 25.–29.03.2019 and 20.–21.08.2019, Report of the 2019 session, CCW/GGW.1/2019/3, 25.09.2019, available at: CCW/GGE.1/2019/3. 77 Ibid., Annex IV, 13 et seq.
78 Ibid., Annex IV: (b) “Human responsibility for decisions on the use of weapons systems must be retained since accountability cannot be transferred to machines. This should be considered across the entire life cycle of the weapons system; (…) (d) Accountability for developing, deploying and using any emerging weapons system in the framework of the CCW must be ensured in accordance with applicable international law, including through the operation of such systems within a responsible chain of human command and control;”. 79 For this view and a definition see the working paper (WP) submitted by the Russian Federation, CCW/GGE.1/2019/WP.1, 15.03.2019, para. 5: “unmanned technical means other than ordnance that are intended for carrying out combat and support missions without any involvement of the operator“, expressly excluding unmanned aerial vehicles as highly automated systems. UN Regulations No. 7973 and No. 13-H74) to have a legal basis for the use of (semi-)autonomous cars.75 The examples mentioned above show that detailed, legally binding laws and regulations are already in force to regulate AI systems at the international, European, and national level. Accordingly, the “narrative” that (top-down) state-based regulation lags (or: must lag) behind technical development, especially in the area of a fast-moving disruptive technology such as AI, is not correct. It seems rather convincing to argue instead that whether there is meaningful regulation in the field of AI depends on the political will to regulate AI systems at the national, European, and international level.
(4) AI Systems as (Semi-)Autonomous Weapons The political will to regulate will depend on the interest(s) and preferences of states, especially with regard to economic goals and security issues, as in most societies (democratic or undemocratic) there seems to be broad consensus that economic growth of the national economy is a (primary) aim and providing national security is the most important legitimate goal of a state. This might explain why there are at the international level – at least until now – areas where there is no consensus to regulate AI systems, as regulation is seen as a limiting force for economic growth and/or national security. This is obvious with regard to (semi-)autonomous weapons. Though a Group of Governmental Experts (GGE) on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (LAWS) was established in 2016 and has met in Geneva since 2017, convened through the Conference on Certain Conventional Weapons (CCW), and a report of the 2019 session of the GGE is published,76 there are only guiding principles affirmed by the Group.77 These guiding principles stress, inter alia, the need for accountability (lit. b and d),78 and risk assessment measures as part of the design (lit. g). However, there is no agreement on a meaningful international treaty, and it is still disputed whether the discussion within the GGE should be limited to fully autonomous systems.79 The mostly state-driven discussions at the CCW have shown that some States are arguing for a prohibition as 80 WP submitted by the Russian Federation, CCW/GGE.1/2019/WP.1, para. 2: “The Russian Federation presumes that potential LAWS can be more efficient than a human operator in addressing the tasks by minimizing the error rate.
(…).” 81 WP submitted by the USA, CCW/GGE.1/2019/WP.5, 28.03.2019, para. 2 lit. c: “Emerging technologies in the area of LAWS could strengthen the implementation of IHL, by, inter alia, reducing the risk of civilian casualties, facilitating the investigation or reporting of incidents involving potential violations, enhancing the ability to implement corrective actions, and automatically generating information on unexploded ordnance.”; cf. as well ibid., para. 15. 82 WP submitted by the Russian Federation, CCW/GGE.1/2019/WP.1, para. 10: “The Russian Federation is convinced that the issue of LAWS is extremely sensitive. While discussing it, the GGE should not ignore potential benefits of such systems in the context of ensuring States‘ national security. (…)”. 83 WP submitted by France, CCW/GGe.2/2018/WP.3, stressing inter alia the principles of command responsibility, ibid. para. 6, stressing a “central role for human command in the use of force” (para. 12): “(…) In this regard, the command must retain the ability to take final decisions regarding the use of lethal force including within the framework of using systems with levels of autonomy or with various artificial intelligence components.” 84 Even the German Datenethikkommission stresses that there is not per se a “red line” with regard to autonomous weapons as long as the killing of human beings is not determined by an AI system, Gutachten der Datenethikkommission, 2019, 180. 85 WP submitted by the Russian Federation, CCW/GGE.1/2019/WP.1, para. 7. For a different approach see the ICRC Working Paper on Autonomy, AI and Robotics: Technical Aspects of Human Control, CCW/GGE.1/2019/WP.7, 20.08.2019. 86 16.12.1971, 1015 U.N.T.S. 163, entered into force 26.03.1975.
The BWC allows research on biological agents for preventive, protective or other peaceful purposes; however, this treaty does not provide sufficient protection against the risks of misuse of research because research conducted for peaceful purposes is neither limited nor prohibited. 87 29.01.2000, 2226 U.N.T.S. 208, entered into force 11.09.2003. 88 The Nagoya-Kuala Lumpur Supplementary Protocol on Liability and Redress to the Cartagena Protocol on Biosafety, 15.10.2010, entered into force 05.03.2018. 89 The term “international soft law” is understood in this paper to cover rules that cannot be attributed to a formal legal source of public international law and that are, hence, not directly legally binding but have been agreed upon by subjects of international law (i.e. States, international organizations) that could, in principle, establish international hard law; for a similar definition see Daniel Thürer, Soft Law, in Rüdiger Wolfrum (ed.), Max Planck Encyclopedia of Public International Law, 2012, Vol. 9, 271, para. 8. The notion does not include private rule making by corporations (including codes of conduct) or mere recommendations by stakeholders, non-governmental organisations and other private entities. 90 See above note 9. 91 Cf. part of a new international treaty, like Austria, yet other States, like Russia80 and the US,81 are stressing the advantages82 of the development and use of (semi-)autonomous weapons. Germany and France83 do not support an international treaty but opted for a soft law code of conduct with regard to framing the use of those weapons.84 Besides, key elements of a governance regime for (semi-)autonomous weapons are unclear.
What is meant by “human control over the operation of such systems” is still discussed, even where a state declares that such control is an important limiting factor. Russia, for instance, argues that “the control system of LAWS should provide for intervention by a human operator or the upper-level control system to change the mode of operation of such systems, including partial or complete deactivation”.85 With this, Russia eliminates meaningful human control as a necessary precondition for the use of (semi-)autonomous weapons. The “human in the loop” as a last resort of using lethal weapons and the subject of responsibility – with the last resort to convict somebody as a war criminal – is replaced by the upper-level control system, which might be another AI system. (5) First Conclusion The examples mentioned above show the loopholes of the international regulation of AI systems, although there are specific rules in place in some areas, mostly at the European level. More importantly, there is no coherent, general, or universal international regulation of AI as part of international hard law. Although there are lacunae in other areas as well (thus far no international treaty on existential and global catastrophic risks and scientific research exists), this widespread international non-regulation of AI research and development is different from other fields of fast-moving technological progress, such as biotechnology.
In the field of biotechnology there are treaties, like the Biological Weapons Convention (BWC),86 the Convention on Biological Diversity, the Cartagena Protocol on Biosafety,87 and the Kuala Lumpur Liability Protocol88 that are applicable in order to prohibit research that is not aimed at peaceful purposes or to diminish risks related to the genetic modification of living organisms. Therefore, it is important to look more closely at the first attempt to adopt general AI principles at the international level as part of international soft law.89 II. OECD AI Recommendations as International Soft Law 1. Basis and Content The OECD issued recommendations on AI in 201990 and 43 States have adopted these principles,91 including relevant actors in the field of AI such as the US, South Korea, Japan, UK, France, and Germany, and States that are not members of the OECD. The recommendations were drafted with the help of an expert group (AIGO) that 92 Germany sent one member (Policy Law: Digital Work and Society, Federal Ministry for Labour and Social Affairs), Japan two, as well as France, and the European Commission; South Korea sent three members, as did the USA (US Department of State, US Department of Commerce, US National Science Foundation). 93 Cf. 94 Cf. OECD Website: What are the OECD Principles on AI?, https:// 95 These are: 1. Human agency and oversight; 2. Technical robustness and safety; 3. Privacy and data governance; 4. Transparency; 5. Diversity, non-discrimination and fairness; 6. Societal and environmental well-being; 7. Accountability. 96 An AI system is defined as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.
AI systems are designed to operate with varying levels of autonomy.” Cf. I OECD AI Recommendations. 97 Ibid., I OECD AI Recommendations. 98 See above at note 28. See as well Gutachten der Datenethikkommission, 2019, 194. 99 Silja Vöneky, Recht, Moral und Ethik, 2010, 284 et seq. 100 See above at note 93. consists of 50 members from – as the OECD writes – governments,92 academia, business, civil society etc., including IBM, Microsoft, Google, Facebook, DeepMind, as well as invited experts from MIT.93 The OECD claims that these Principles will be a global reference point for trustworthy AI.94 It refers to the notion of trustworthy AI, as did the High-Level Expert Group on AI (AI HLEG) set up by the EU, which published Ethics Guidelines on AI in April 2019 listing seven key requirements that AI systems shall meet to be trustworthy.95 The OECD recommendations state and spell out five complementary value-based “principles for responsible stewardship of trustworthy AI” (section 1):96 these are inclusive growth, sustainable development and well-being (1.1); human-centered values and fairness (1.2.); transparency and explainability (1.3.); robustness, security and safety (1.4.); and accountability (1.5.). In addition, AI actors – meaning those who play an active role in the AI system lifecycle, including organizations and individuals that deploy or operate AI97 – should respect the rule of law, human rights and democratic values (1.2. lit. a). These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognized labor rights. But the wording of the principles is very soft.
For instance, AI actors should implement “mechanisms and safeguards, such as capacity for human determination, that are appropriate to the context and consistent with the state of the art” (1.2. lit. b). The recommendation about transparency and explainability (1.3.) has only slightly more substance. It states that AI actors “[…] should provide meaningful information, appropriate to the context, and consistent with the state of art […] (iv.) to enable those adversely affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision.” Additionally, it states that “AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle, on a continuous basis to address risks related to AI systems, including privacy, digital security, safety and bias.” (1.4 lit. c). If we think that discrimination and unjustified biases are one of the key problems of AI,98 asking for a risk management approach to avoid these problems does not seem to be sufficient as a standard of AI actor (corporation) due diligence. And the wording with regard to accountability is soft as well (1.5): “AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles, the context, and consistent with the state of art.” This does not mean and does not mention any legal liability or legal responsibility. 2. (Dis-)Advantages and Legitimacy The OECD recommendations show some of the advantages and disadvantages that we see in the area of international soft law.
The advantages are that they can be drafted in a short period of time (the working group started in 2018); that they can include experts from the relevant fields and state officials; that they can spell out and identify an existing overlapping consensus of member states, here the OECD member states; and that they might develop some kind of normative force even if they are not legally binding as an international treaty.99 However, the disadvantages of the OECD recommendations are obvious as well. Firstly, the basis for their procedural legitimacy is unclear, as it is not entirely clear which experts are allowed to participate. In the field of AI, experts are employed, paid, or closely linked to AI corporations;100 hence, the advice they give is not (entirely) independent. If an International Organisation (IO) or a State wants to enhance procedural legitimacy for AI recommendations, it should rely on different groups: one of independent experts with no (financial) links to corporations, one of experts working for corporations, and a third group consisting of civil society and 101 See below Part III. 102 The arguments at part III. 1.–3. were published in my paper Human Rights and Legitimate Governance of Existential and Global Catastrophic Risks, in Silja Voeneky/Gerald Neuman (eds.), Human Rights, Democracy, and Legitimacy in Times of Disorder, 2018, 149. 103 For the basis on the concept and notion of “legitimacy”, see Silja Voeneky, Recht, Moral und Ethik, 2010, 130–162.
For discussion of the legitimacy of international law, see Allen Buchanan, The Legitimacy of International Law, in Samantha Besson/John Tasioulas (eds.), The Philosophy of International Law, 2010, 79–96; John Tasioulas, Legitimacy of International Law, in Samantha Besson/John Tasioulas (eds.), The Philosophy of International Law, 2010, 97–116. 104 A deontological theory of ethics is one which holds that at least some acts are morally obligatory regardless of their consequences, see Robert G. Olson, in Paul Edwards (ed.), The Encyclopedia of Philosophy, 1967, 1–2, 343. 105 Which means that these views maintain that “it is sometimes wrong to do what produces the best available outcome overall” as these views incorporate “agent-centred restrictions,” see Samuel Scheffler, The Rejection of Consequentialism, 1994, 2. 106 On “direct” and “act” utilitarianism, see Richard B. Brandt, Facts, Values, and Morality, 1996, 142; for the notion of act-consequentialism and classical utilitarianism see Samuel Scheffler, supra note 105, 2–3; for an overview see John C. Smart, Utilitarianism, in Paul Edwards (ed.), The Encyclopedia of Philosophy, 1967, 7–8, 206. NGO members. States or IOs could then compare the recommendations, discuss the differences, and choose or combine the most convincing one. Secondly, we have to discuss the substantive legitimacy because the OECD recommendations do not stress the responsibility of governments to protect human rights in the area of AI. They include only five recommendations to policymakers (“adherents”, section 2) that shall be implemented in national policies and international cooperation consistent with the principles mentioned above.
These include investing in AI research and development (2.1), fostering a digital ecosystem for AI (2.2), shaping an enabling policy environment for AI (2.3), building human capacity and preparing for labor market transformation (2.4), and international cooperation for trustworthy AI (2.5). 3. Second Conclusion As a conclusion of this second part one could state that the OECD recommendations lower the threshold too far and shift the focus too far away from States as main actors of the international community and as those obliged to protect human rights101 towards private actors. This is a major disadvantage because, although these recommendations exist, it is still unclear what state obligations can be deduced from legally binding human rights – including the relevant human rights treaties and rules of customary law – with regard to the governance of AI. Besides, the recommendations that address private actors and their responsibilities are drafted in a language that is too soft and vague. As a result, I argue that the OECD Recommendations could and should have been more meaningful with regard to standards of due diligence and responsibility in the age of AI for private actors and – even more – with regard to state duties to protect human rights. The latter aspect might even lead to a trend to undermine state duties to protect human rights in times of AI – and this could undermine the relevance of human rights regarding AI regulation as a whole. III. Legitimacy, Human Rights and AI Regulation The question to be answered in this third part is: why are human rights decisive with regard to the regulation of AI, and how can we defend the link between legitimacy and human rights in the field of AI regulation? 1. Legitimacy I start with the notion of legitimacy.
As I have written before, legitimacy should be viewed primarily as a normative, not a descriptive, concept:102 It refers to standards of justification of governance, regulation and obligations. Hence, legitimate governance or regulation means that the guiding norms and standards have to be justifiable in a supra-legal way (i.e. they possess rational acceptability). If we think about international regulation, it seems fruitful to link the notion of “legitimate regulation” to the existing legal order of public international law. Without saying that legality is sufficient for legitimacy, I argue that guiding norms and standards have to be coherent with existing international law insofar as the international law reflects moral (i.e. justified) values.103 2. Ethical Paradigms We have to state that there are different ethical paradigms that can justify regulation in the field of AI in a supra-legal way. One is the human rights-based approach that can be considered a deontological concept,104 as the rightness or wrongness of conduct is derived from the character of the behavior itself.105 Another approach is utilitarianism, which can be described as the doctrine which states that “one should perform that act, among those that on the evidence are available to one, that will most probably maximise benefits”.106 It seems important to note that the different normative ethical theories are based on reasonable grounds (i.e. they possess rational acceptability),107 and one cannot decide whether there is a theory that clearly trumps the others.
Therefore, in looking for standards that are the bases of legitimate regulation of AI systems, it is not fruitful to decide whether one normative ethical theory is in general terms the most convincing one, but rather which ethical paradigm seems to be the most convincing with regard to the specific questions that we have to deal with when framing AI systems. 3. Human Rights-based AI Regulation As I have argued before with regard to the regulation of existential risks,108 I argue that AI regulation and governance should be based on human rights, more precisely on legally binding human rights. Other ethical approaches shall not be ruled out as far as they are compatible with human rights. But I reject views that argue that utilitarian standards should be the primary standard to measure the legitimacy of an AI regulative regime.109 The arguments supporting this claim are the following: To regulate AI is a global challenge. Hence, it would be a major deficit not to rely on human rights. They are part of existing international law. They are not only rooted in the moral discourse as universal values, but they also bind many, or even all, States (as treaty law or customary law), and they can be implemented by courts or other institutional means, laid down in human rights treaties, such as the European Convention on Human Rights (ECHR)110 and the International Covenant on Civil and Political Rights (ICCPR). The latter is a universal human rights treaty that is binding on more than 170 States Parties,111 including major AI-relevant actors, like the USA. What seems to be even more important is that when we turn to a human rights framework, we see that international legal human rights make it possible to spell out the decisive values that must be taken into account for assessing different AI research, development and deployment scenarios.
In the area of AI research, freedom of research is decisive as a legally binding human right, entailed in the rights of freedom of thought and freedom of expression that are laid down in the ICCPR as an international universal human rights treaty. However, this freedom is not absolute: The protection – for instance – of life and health of human beings, of privacy, and against discrimination are legitimate aims that can justify proportional limitations of this right.112 The human rights framework, therefore, stresses that there exists a need to find proportional limitations in the field of AI research if there are dangers or risks113 for human life and health or 107 In order to argue this way we have to answer the question what our criteria of rational acceptability are. My answer is based on the arguments by the philosopher Hilary Putnam that our criteria of rational acceptability are, inter alia, coherence, consistency, and relevance; that “fact (or truth) and rationality are interdependent notions” but that, nevertheless, no neutral understanding of rationality exists as the criteria of “rational acceptability rest on and presuppose our values”, and the “theory of truth presupposes theory of rationality which in turn presupposes our theory of good”. Putnam concluded that the theory of the good is “itself dependent upon assumptions about human nature, about society, about the universe (including theological and metaphysical assumptions).” See Hilary Putnam, Reason, Truth and History, 1981, 198, 201, 215. 108 This and the arguments at III.2. and 3. were published in my paper Human Rights and Legitimate Governance of Existential and Global Catastrophic Risks, in Silja Voeneky/Gerald Neuman (eds.), Human Rights, Democracy, and Legitimacy in Times of Disorder, 2018, 151 et seq.
109 In many cases, neit­her the risks nor the bene­fits of AI rese­arch and deve­lo­p­ment can be quan­ti­fied; the risk of misu­se of AI sys­tems by cri­mi­nals, men­tio­ned abo­ve, can­not be quan­ti­fied; the unclear or unpre­dic­ta­ble bene­fits of basic AI rese­arch can­not be quan­ti­fied eit­her – nevertheless, basic rese­arch may often be the necessa­ry con­di­ti­on in order to achie­ve bene­fits for human bein­gs in the long run. The­se are draw­backs of a uti­li­ta­ri­an risk-bene­fit approach for some of the AI sce­n­a­ri­os descri­bed abo­ve. For the lack of pre­dic­ta­bi­li­ty sur­roun­ding the con­se­quen­ces of AI, cf. Iyad Rahwan/Manuel Cebrian/Nick Obra­do­vich et al., Machi­ne beha­viour, Natu­re 568 (2019), 477. For a gene­ral dis­cus­sion of the human rights approach ver­sus uti­li­ta­ria­nism see Her­bert L. A. Hart, Bet­ween Uti­li­ty and Rights, Colum. L. Rev. 79 (1979), 828. For a dis­cus­sion of a com­bi­na­ti­on of uti­li­ta­ria­nism and other value based approa­ches (auto­no­my, diver­si­ty) and refe­rence to the Uni­ver­sal Decla­ra­ti­on of Human Rights for the codi­fi­ca­ti­on of moral princi­ples app­li­ca­ble to future AI, see Max Teg­mark, Life 3.0, 2017, 271–75. 110 The Inter­na­tio­nal Covenant on Civil and Poli­ti­cal Rights adop­ted by G.A. Res. 2200A (XXI), 16.12.1966, ent­e­red into for­ce 23.03.1976, 999 U.N.T.S. 171, and the Euro­pean Con­ven­ti­on on Human Rights, adop­ted by the Mem­bers of the Coun­cil of Euro­pe, 04.11.1950, avail­ab­le at: 111 Art. 18 ICCPR, 19; art. 9, 10 ECHR. A dif­fe­rent approach is taken, howe­ver, in the Char­ter of Fun­da­men­tal Rights of the Euro­pean Uni­on, art. 13 (Free­dom of the arts and sci­en­ces). The­re it is express­ly laid down that “The arts and sci­en­ti­fic rese­arch shall be free of cons­traint. Aca­de­mic free­dom shall be respec­ted.” Simi­lar norms are inclu­ded in natio­nal con­sti­tu­ti­ons, see e.g. Grund­ge­setz für die Bun­des­re­pu­blik Deutsch­land, art. 
5 (3) (23.05.1949) which sta­tes that “Arts and sci­en­ces, rese­arch and tea­ching shall be free. The free­dom of tea­ching shall not release any per­son from alle­gi­an­ce to the con­sti­tu­ti­on.” 112 The legi­ti­ma­te aims for which the right of free­dom of expres­si­on and the right of free­dom of sci­ence can be limi­ted accord­ing to the Inter­na­tio­nal Covenant on Civil and Poli­ti­cal Rights and the Euro­pean Con­ven­ti­on on Human Rights are even broa­der. See art. 19 (3) ICCPR, art. 10 (2) ECHR. 113 Risk can be defi­ned as a risk is an “unwan­ted event which may or may not occur”, see Sven O. Hans­son, Risk, in Edward N. Zal­ta (ed.), Stan­ford Ency­clo­pe­dia of Phi­lo­so­phy, avail­ab­le at: https:// The­re is no accep­ted defi­ni­ti­on of the term in public inter­na­tio­nal law; it is unclear how—and whether—a “risk” is dif­fe­rent from a “thre­at,” a “dan­ger” and a “hazard,” see Grant Wil­son, Mini­mi­zing Glo­bal Cata­stro­phic and Exis­ten­ti­al Risks from Emer­ging Tech­no­lo­gies through Inter­na­tio­nal Law, Vir­gi­nia Envi­ron­men­tal L.J. 31 (2013), 307, 310. 2 0 O RDNUNG DER WISSENSCHAFT 1 (2020), 9–22 pri­va­cy. What limits to the free­dom of rese­arch are jus­ti­fied depends on the pro­ba­bi­li­ty of the rea­liz­a­ti­on of a risk114 and the seve­ri­ty of the pos­si­ble harm. The­re­fo­re, deman­ds of ratio­nal risk-bene­fit assess­ment can and should be part of the inter­pre­ta­ti­on of human rights, as the­re is the need to avoid dis­pro­por­tio­na­te means in order to mini­mi­ze risks even in low/unknown pro­ba­bi­li­ty cases: What pro­por­tio­na­li­ty means is lin­ked to the risks and bene­fits one can rea­son­ab­ly anti­ci­pa­te in the area of AI. To do a risk-bene­fit assess­ment of the AI sys­tem in ques­ti­on, as far as this is pos­si­ble and ratio­nal, the­re­fo­re is an important ele­ment in imple­men­ting the human rights frame­work. 
Besides, even so-called first-generation human rights, as laid down in the ICCPR, oblige States not only to respect, but also to protect the fundamental rights of individuals.115 States parties are obliged by international human rights treaties to take appropriate (legal) measures to protect, inter alia, the life and health of individuals.116 And although States have wide discretion in how to protect human rights, measures must not be ineffective. Last but not least, a human rights-based approach requires procedural rights for individuals to participate in the making of decisions that affect them in the area of AI developments. To rely on human rights means that we have to spell out in more detail how to enhance procedural legitimacy. These arguments might show that the core of the regulation and governance problem – that AI systems should serve us as human beings and not the other way around – can be expressed best on the basis of a human rights framework. It is correct that human rights law, even the right to life, does not aim to protect humanity, but to protect individuals.117 However, humanity consists of us as individuals. Even if we are not arguing that human rights protect future generations, we may not neglect that individuals born today can have a life expectancy of more than 70 years in many States, and these individuals are protected by human rights law. Hence, it seems consistent with the object and purpose of human rights treaties that we view human rights law, and the duty of States towards human beings because of human rights, over a 70-year period.

IV. Future AI Regulation

In this paper, I spell out the deficiencies of current AI regulations (including international soft law) (Parts I and II), and I argue why international law, and international human rights, are and should be the basis for a legitimate global AI regulation and risk reduction regime (Part III). This approach makes it possible to develop rules with regard to AI systems in coherence with relevant and morally justified values of a humane world order that aims for future scientific and technological advances in a responsible manner, including the human right to life, the right to non-discrimination, the right to privacy, and the right to freedom of science. However, this is only a first step, as current human rights norms and treaties are a basis and a starting point. Therefore, there is the need, as a second step, to specify the general human rights by negotiating a human rights-based UN or UNESCO soft law declaration on "AI Ethics and Human Rights". This new declaration could and should avoid the disadvantages of the 2019 OECD AI recommendations.

114 AI governance means in many cases the governance of risks, as many impacts of AI are unclear and it is even unclear whether there will be something like AGI or a singularity; see above note 42. But human rights can be used as a basis for human-centered risk governance. It was Robert Nozick who showed that an extension of a rights-based moral theory to indeterministic cases is possible, as a duty not to harm other people can be extended to a duty not to perform actions that increase their risk of being harmed. See Silja Voeneky, Human Rights and Legitimate Governance of Existential and Global Catastrophic Risks, in Silja Voeneky/Gerald Neuman (eds.), Human Rights, Democracy, and Legitimacy in Times of Disorder, 2018, 153.

115 It is an obligation to protect, not only an obligation to respect; see U.N. Commission on Human Rights, Res. 2005/69, 29.04.2005, U.N. Doc. E/CN.4/2005/L.10/Add.17; Committee on Economic, Social and Cultural Rights, General Comment No 13, para. 46 (1999), reprinted in U.N. Doc. HRI/GEN/1/Rev.9, 72 (2008).

116 For the right to life, the second sentence of art. 6 (1) ICCPR provides that the right to life "shall be protected by law." In addition, the right to life is the precondition for the exercise of any other human right, part of customary international law, and enshrined in all major general human rights conventions. The European Court of Human Rights has stressed the positive obligation to protect human life in several decisions; for an overview see Niels Petersen, Life, Right to, International Protection, in Rüdiger Wolfrum (ed.), Max Planck Encyclopedia of Public International Law, 2012, Vol. 6, 866. Nevertheless, the U.S. has not accepted that there exists a duty to protect against private interference due to art. 6 ICCPR; see Observations of the United States of America On the Human Rights Committee's Draft General Comment No. 36, On Article 6 – Right to Life, para. 30–38 (06.10.2017), available at: http://www. aspx.

117 An exception – as part of a soft law declaration – is art. 2 (b) of the Cairo Declaration on Human Rights in Islam, 05.08.1990, adopted by Organization of the Islamic Conference Res. No. 49/19-P (1990).

Vöneky · Artificial Intelligence 21
For this, we should identify those areas of AI research, development, and deployment which entail severe risks for core human rights.118 A future universal "AI Ethics and Human Rights"119 declaration should include sector-specific rules based on human rights that protect the most vulnerable rights and human dignity at the international level, for instance by protecting brain data. And this declaration could and should merge principles of "AI ethics",120 such as the principles of fairness, accountability, explainability, and transparency,121 with human rights, as long as those principles of AI ethics are coherent with and specify human rights in the field of AI.122

Silja Vöneky is Professor at the Albert-Ludwigs-Universität Freiburg, Director of the Chair of Public International Law, Comparative Law, and Legal Ethics, and Fellow of the FRIAS Saltus Group Responsible AI.

118 I rely on those human rights that are part of the human rights treaties; whether there is a need for new human rights in the time of AI, such as a right of digital autonomy (digitale Selbstbestimmung), as the German Datenethikkommission argues (cf. Gutachten der Datenethikkommission, 2019, available at: https://www.bmjv.de/SharedDocs/Downloads/DE/Themen/Fokusthemen/Gutachten_DEK_DE.pdf?__blob=publicationFile&v=5), or whether a new human right that could be claimed by corporations would undermine basic human rights of natural persons, is still open to discussion.

119 Similar to the UNESCO Declaration on "Bioethics and Human Rights", 19.10.2005, available at: ev.php-URL_ID=31058&URL_DO=DO_TOPIC&URL_SECTION=201.html.

120 As was shown in Part II, at least some of these principles are already part of AI sector-specific regulation.
121 For the notion of and the need for transparency see Gutachten der Datenethikkommission, 2019, 169 et seq., 175, 185, 215 (Transparenz, Erklärbarkeit und Nachvollziehbarkeit).

122 Besides, with regard to risks related to AI systems there is an urgent need to have proactive preventive regulation in place, backed by meaningful rules for operator incentives to reduce risks beyond pure operator liability; for a proposal see a paper by Thorsten Schmidt/Silja Vöneky on "How to regulate disruptive technologies?" (forthcoming 2020).