
I. Introduction

In mid-September 2023, a group of six academic researchers from Harvard Business School, The Wharton School, Warwick Business School, and MIT Sloan School of Management, together with three management consultants from the Boston Consulting Group, published what has since become the third most-downloaded and most-quoted scholarly paper of 2023. "Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality," in short the "experiment," as some of the authors call it, is a first-of-its-kind randomized controlled trial with more than 750 BCG consultants worldwide as subjects.1 It is the first study to test the use of generative AI in a professional services setting, through tasks that reflect what knowledge workers do every day. "This is important because understanding the implications of LLMs for the work of organizations and individuals has taken on urgency among scholars, workers, companies, and even governments," the authors explain.2

They were correct in that assumption: after only a few weeks, "Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality" had profoundly impacted, for example, the U.K. government's thinking and decision-making.3 Its conclusions reached the "AI Safety Summit" that recently hosted 28 governments and numerous industry and civil society experts at Bletchley Park. The study, led by Karim Lakhani of Harvard Business School, has been discussed by C-suite executives worldwide and quoted numerous times in newspapers.4

1 Dell'Acqua, Fabrizio and McFowland, Edward and Mollick, Ethan R. and Lifshitz-Assaf, Hila and Kellogg, Katherine and Rajendran, Saran and Krayer, Lisa and Candelon, François and Lakhani, Karim R., Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality (September 15, 2023). Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 24-013, see SSRN: https://ssrn.com/abstract=4573321 or http://dx.doi.org/10.2139/ssrn.4573321.

2 Dell'Acqua et al. (fn. 1), p. 2.

When did a German or European scholarly research paper on AI last have this kind of real-world impact? What is more, the report by Lakhani et al. is only the latest example of such impactful work with solid influence on companies and governments: the newest thinking coming out of the Belfer Center at Harvard Kennedy School on biosecurity in the age of AI, by Janet Egan and Eric Rosenbach, published in early November 2023, is set to structure the debate on biological weapons and AI.5 Similarly, the Yale Information Society Project at Yale Law School has owned the discussion on free speech and social media for years now. Especially when it comes to digital policy and digital government, AI policy and regulation, and bio- and cybersecurity, U.S. academic institutions have long cultivated a very different style of research and teaching that has made them global thought leaders and, in fact, agenda-setters for governments and companies on these digital topics. Even on core European topics, like regulating AI, e.g. with the European AI Act, American voices shape the debate almost more than European voices: the letter demanding a six-month moratorium on AI research and strict regulation, signed by 30,000 experts, researchers, industry figures, and other leaders in March 2023, among them Danielle Allen, Elon Musk, Geoffrey Hinton, and many other prominent voices, was published by the Future of Life Institute in California, led by Anthony Aguirre, the Faggin Presidential Professor for the Physics of Information at U.C. Santa Cruz.


3 https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper/future-risks-of-frontier-ai-annex-a.

4 For a short summary of the results, please see https://www.bcg.com/publications/2023/how-people-create-and-destroy-value-with-gen-ai.

5 https://www.belfercenter.org/publication/biosecurity-age-ai-whats-risk.

Kirsten Rulf

Why U.S. Universities have more influence in the global debate on AI Governance and Regulation, and how German Universities can reclaim their seat at the table. A workshop report

Ordnung der Wissenschaft 2024, ISSN 2197–9197

ORDNUNG DER WISSENSCHAFT 1 (2024), 1–6

While this short "Werkstattbericht," or workshop report, does not presume to explore every angle of the differences between U.S. and German, or more broadly European, academic institutions when it comes to teaching and researching digital and technology policy, it nevertheless wants to shed light on some of the reasons why we so often find U.S. academic voices at the helm of these topics, steering the discussion, and not seldom steering governments or international bodies like the European Union and the United Nations. Let us give some concrete examples.

II. Not learning for school but for life

To begin with, three characteristics of the collaboration between Karim Lakhani and others with the Boston Consulting Group make it a prime example to illustrate the enormous differences between U.S. academic institutions and German universities and academic institutions when it comes to researching and teaching the societal and policymaking implications of Artificial Intelligence, in particular Generative Artificial Intelligence, or GenAI for short.

First, "Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality" was conceptualized first and foremost with practical applications and recommendations for corporates and policymakers in mind, and only then for other academic researchers. Lakhani and his colleagues primarily answer an exceedingly timely and relevant question for corporates, namely whether or not it is worthwhile, from a cost-benefit perspective, to invest in a much-hyped but expensive, complex, and potentially even dangerous technology, and if so, how to do it effectively. It is a question asked daily by C-suite executives worldwide: what implications does GenAI have for my strategic workforce planning?

Second, the Boston Consulting Group, a strategy consulting firm that advises C-suite executives, provided not only the perfect study object, as a global company of more than 30,000 employees with varying backgrounds, seniority levels, and abilities, but also an ideal multiplier for the results. The same experiment in a purely academic setting, done with university students, would not have had the same impact or significance, as the authors acknowledge themselves: "A crucial feature of our experiment was the availability of our experimental subjects. Specifically, we tapped into a high human capital population, with participants who were not only highly skilled but also engaged in tasks that closely mirrored part of their professional activities."6

6 Dell'Acqua et al. (fn. 1), p. 17.

Furthermore, the experiment by Lakhani et al. deliberately highlights starting points to help policymakers gauge where they need to focus policy programs that are supposed to help those negatively affected by the technology. The paper first gives fact-based and practical insights into who these people requiring help may be, and who the stakeholders may be that need to be brought to the table to tackle the problem: "An immediate danger emerging from these findings, for instance, is that people will stop delegating work inside the frontier to junior workers, creating long-term training deficits. Navigating the frontier requires expertise, which must be built through formal education, on-the-job training, and employee-driven upskilling."

Only as an afterthought do the authors want to contribute to a purely academic debate. Their first and foremost ambition is to shape the discussion in industry and governments.

These characteristics of "Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality," namely aiming for immediate practical application of the research in companies, picking strong business partners and leveraging them not just for research but also for marketing, and, lastly, very clearly stating the broader utility for governments, highlight and demonstrate the typical approach of the U.S. professional school. We might add a fourth characteristic: working with practitioners, regardless of their academic credentials. While, for example, BCG has its own research unit and internal think tank in the Bruce Henderson Institute, this is not an academic institution, nor does it claim or want to be. Yet its leaders, seasoned practitioners of AI and GenAI implementation in corporates, are equal co-authors of the scholarly paper, which is not something you often see in German academic circles.

III. Characteristics of the U.S. Professional School

All of these characteristics are typical for U.S. professional schools. These schools, like Harvard Business School, but also its more policy-oriented sibling Harvard Kennedy School, or, a bit further south on the U.S. East Coast, Yale Law School, or, to venture to the U.S. West Coast, the Goldman School of Public Policy at the University of California, Berkeley, are not well understood in Germany at all. There is not even a good German translation for the discipline of "Public Policy" – it is certainly not "Politikwissenschaft."

The most significant differences between these U.S. professional schools and German universities – and the secret of why the former have so much of a seat at the table in global discussions – can be found in their syllabi, in their teaching personnel, and, mainly as a consequence of the latter, in their attitudes towards collaboration with private-sector actors and governments.

Take, for example, the syllabus of Harvard Kennedy School, arguably the most famous and respected public policy school in the U.S., which has been the alma mater of presidents of the likes of Barack Obama, Ellen Johnson Sirleaf, and Felipe Calderón, let alone dozens of ministers around the world, U.S. congressmen and congresswomen, senators, and leaders of the World Bank, IMF, and United Nations. Despite its evident success, Harvard Kennedy School's syllabus would hardly get academic approval from a German university president. I have often experienced a slight haughtiness among German academics when it comes to Harvard Kennedy School classes like "policy analysis," "leadership," "negotiations," and "the making of a politician," and their curricula: efficient, with little to no written homework of an academic nature and almost no traditional teacher-centered "chalk-and-talk" teaching. Instead, students are put, for example, through real-time, real-stakes negotiation exercises with peers; they have to found companies or NGOs, write and pitch op-eds that are published in newspapers around the world, and calculate budgets and make trade-offs. In short, students have to put themselves, their visions, and their arguments on the line in real-world situations that prepare them for the careers they aspire to: diplomats, politicians, policymakers, agents of change in civil society organizations. Even lawyers: classical research or time in the library, as German undergraduate or master's students still experience it for the majority of their classes, is not considered appropriate or sufficient to prepare for a career as a judge or attorney at internationally renowned institutions like Yale Law School, Harvard Law School, or Columbia Law School. Any U.S. law school has at least a law clinic for students to act as legal counsel in real life and practice their skills. Classes are highly interactive and rhetorically challenging; they mostly center around the latest news and case studies rather than theoretical frameworks.

7 https://www.hks.harvard.edu/courses/science-and-implications-generative-ai.

This practical approach to a profession is particularly relevant for a fast-moving topic like digital and technology policy. Consider the fact that any book, even any paper or regulation, like the E.U. AI Act, that was written before November 2022, the release of ChatGPT by OpenAI, has almost no relevance anymore for today's debate on AI, its governance, or its societal implications. And this is not the first time that technological progress has outrun policymakers. In the U.S., governments – federal, state, and local – and universities learned during the Cold War and its constant nuclear threat that they need to think and debate in an interdisciplinary way if they want their debate to keep up with technological progress. Furthermore, they need to stay current and not fall back on frameworks that may no longer be applicable.

Consequently, digital policy is taught differently in these schools than in Germany and Europe.

Firstly, in most U.S. universities, digital and emerging technology policy has its home in the professional school, i.e., in a policy or law school; it often has dedicated study tracks and is always taught by an interdisciplinary team and through the case method, i.e., along a practical example of its application. Take, for instance, the new course "The Science and Implications of Generative AI" at Harvard Kennedy School: it is taught by three professors, one economist, one mathematician, and one public policy professor. They promise their students they will learn "through case studies, simulations, and project-based assignments to assess the advantages and risks of deploying generative AI. The curriculum underscores the significance of informed policymaking in this rapidly evolving field, seeking to ensure that HKS graduates can harness AI technology responsibly for the benefit of society."7

By contrast, only a few European universities offer interdisciplinary teaching on AI or case methods. Oxford University, for example, focuses on the social science of the internet and digital technology at the Oxford Internet Institute, but through a very academic lens.

ETH Zurich in Switzerland, interestingly, houses interdisciplinary research on the societal implications of new technologies, including AI, in its Department of Humanities, Social and Political Sciences. But in the European Union itself, despite the E.U. being the first mover on comprehensive AI legislation with the E.U. AI Act, only a handful of universities offer interdisciplinary classes on AI, among them the Technical University of Munich in Germany, the KTH Royal Institute of Technology in Sweden, Delft University of Technology in the Netherlands, and the University of Helsinki in Finland. But we have yet to see any of them have as broad and prominent a seat at the table as Harvard or Yale have regarding AI policy in Washington, or a paper that is more broadly agenda-setting and globally discourse-dominating than the one from Harvard Business School.

Another huge difference is the formal qualification of teaching personnel and faculty: U.S. professional schools often care more about real-world experience than academic accolades. This goes for all disciplines, really:

Jacinda Ardern, former prime minister of New Zealand, is as much a part of the Harvard faculty as was Ban Ki-moon, Secretary-General of the U.N. Emma Sky, the founding director of Yale's International Leadership Center, served as political advisor to the Commanding General of U.S. Forces in Iraq, as development advisor to the Commander of NATO's International Security Assistance Force in Afghanistan, and as political advisor to the U.S. Security Coordinator for the Middle East Peace Process. None of them have a Ph.D. or would qualify for a formal teaching position in Germany. Similarly, the current administrator of USAID, the U.S. Agency for International Development, and former United States Ambassador to the United Nations, Samantha Power, who is on leave from not one but two professorships, as the Anna Lindh Professor of the Practice of Global Leadership and Public Policy at Harvard Kennedy School and the William D. Zabel '61 Professor of Practice in Human Rights at Harvard Law School, was a practicing journalist before she became one of the most popular professors at Harvard. She, too, has no Ph.D., let alone a habilitation.

In digital and emerging technology policy, picking the best person for the job today allows U.S. professional schools to attract the most seasoned practitioners as teachers, who bring their experience directly from the front lines and often still practice while teaching classes. In addition, they can quickly adapt to new topics.

Bruce Schneier, for example, likely the world's most renowned cybersecurity expert and a daily consultant to governments around the world, does not have a doctoral degree, which would probably take him out of the running for a faculty position at any German university or academic institution. But it makes him a highly sought-after teacher at Harvard who always brings the latest insights to his students and to decision-makers in Washington.

8 https://innovategovernment.org/.

Similarly, Nick Sinai joined Harvard in 2014 from the White House, where he was the U.S. Deputy Chief Technology Officer. Sinai led President Obama's Open Data Initiatives, co-led the Open Government Initiative, and helped start the Presidential Innovation Fellows program. Before this, he played a key role in crafting the National Broadband Plan at the FCC. Today, he works as a senior advisor at a venture capital firm. Yet he still teaches a highly practical class called "Tech and Innovation in Government" at Harvard Kennedy School every spring. Students there are paired with governments and public sector entities to solve real-world digital problems, like coding a database or designing a digital government solution.8

Consequently, these professional schools have considerable advantages in contributing meaningful research and in educating tomorrow's leaders, who already have real-life experience when coming out of university. This is a significant benefit for students and professors as well as for companies and societies, especially regarding fast-moving topics like Generative AI. At the same time, research by professors is, in turn, inspired by problems from the real world. The study by Lakhani et al. is the latest, but by far not the only, example of them setting the agenda for governments or companies.

This brings us to the last and likely most controversial difference between U.S. professional schools and their digital policy work compared to German or European programs: the highly contested topic of industry collaboration and sponsoring.

Stanford, Carnegie Mellon, MIT, Harvard, and Yale have a long history of collaborating with big tech companies and corporations. Vice versa, Alphabet, the parent company of Google, collaborates with various universities globally through its subsidiaries like Google and DeepMind on AI research and projects. Meta, Microsoft, Amazon – all the large tech companies have university partnerships in the U.S. and their own research labs. These collaborations might involve joint research projects, academic grants, fellowship programs, and other forms of scholarly engagement to advance the state of the art in AI and promote the responsible use and understanding of AI technology. OpenAI, still partially a non-profit organization, often collaborates with researchers from different institutions and may form partnerships with universities for particular projects or initiatives.

These universities accept money from big tech or enter into other industry collaborations of different forms, e.g., with companies like the Boston Consulting Group. This still sometimes raises eyebrows in the German academic community, and with good reason. Debates around the economic implications of AI regulation, including its impact on innovation, competition, and market dynamics, and discussions of AI's impact on labor markets and how law can address potential job displacement, may be feasible in an ivory-tower setting. But questions around privacy and data protection, e.g., analyzing the sufficiency of existing privacy laws and how they apply to AI, and debating whether new privacy frameworks are needed; issues of security and cybersecurity of LLMs, e.g., the unique security challenges posed by AI and how regulation can mitigate risks such as adversarial attacks; or, indeed, a proper assessment of technical standards, e.g., the role of technical standards in AI regulation and how academic research can contribute to the development of robust, widely accepted standards – these topics cannot be discussed without collaboration with the developer companies themselves.

IV. Conclusion

While it is unlikely that we will see German academic institutions turn into full-on professional schools, besides the few existing initiatives like the Hertie School in Berlin, the Willy Brandt School, or the Bucerius Law School in Hamburg, and while we can even argue whether or not that would be sensible on the whole, I strongly believe that German and European academic research needs more of a seat at the table when it comes to technology and digital policy and the global debates around regulating technologies like Artificial Intelligence. And that this will only come about by opening up more to the practical: to practitioners as teachers and to industry as collaborative partners. Besides, it means becoming faster at publishing well-founded statements in more accessible publications and giving in to more marketing, also through industry partners.

The example of the U.S. professional schools and their approach shows that these organizations often engage with policymakers, academics, technologists, and the public to foster a better understanding of technology's impact on individuals and communities and to advocate for policies that ensure technology serves the broader public interest. They are crucial in informing and shaping the discourse around technology and society in the USA. Through their various programs and initiatives, they each seek to bridge the gap between academic research and policy practice and to foster a well-informed public discourse on critical global issues.

And that, after all, is what we need in Germany and Europe, too, when it comes to critical technologies like Artificial Intelligence. Furthermore, we need the next generation of academics to be better trained to bring their arguments into the public domain. With a technology like GenAI, which has so much potential to cause democratic destabilization and disinformation, we need trusted voices that know how to communicate clearly and give practical advice to industry, society, and governments.

Kirsten Rulf is a core member of the Technology & Digital Advantage practice at the Boston Consulting Group, as well as a leader on the Financial Institutions team for BCG X, the firm's tech build and design business. At BCG, Kirsten focuses on the safe and responsible development and implementation of AI and generative AI business models at scale. Her primary fields of expertise are global AI regulation and governance; data governance; the geopolitics of tech; and crafting and implementing data-driven business models. In addition to her work at the firm, Kirsten teaches AI governance and digital transformation at Yale University and is a UC Berkeley Tech Policy Fellow.

Prior to joining BCG, Kirsten was senior digital policy advisor to German Chancellors Angela Merkel and Olaf Scholz and the Head of the Digital and Data Department at the Federal Chancellery of Germany for more than four years. In that role, she co-negotiated the EU AI Act, the Data Act, and all other European digital regulation, and was responsible for Germany's strategic positioning and global investments in digital technology and infrastructure.

Before her work at the Federal Chancellery, Kirsten taught AI and compliance at Harvard Law School and ran a research group on autonomous vehicles at Harvard Kennedy School. Before that, she was a TV correspondent for the BBC and for the German national broadcaster ARD and its flagship news bulletin Tagesschau.
