{"id":142,"date":"2021-09-13T02:45:10","date_gmt":"2021-09-13T02:45:10","guid":{"rendered":"https:\/\/gen-zai.org\/?post_type=genzaimedia&#038;p=142"},"modified":"2021-09-29T05:56:27","modified_gmt":"2021-09-29T05:56:27","slug":"thinking-yoshua-bengio-01","status":"publish","type":"genzaimedia","link":"https:\/\/gen-zai.org\/en\/media\/thinking-yoshua-bengio-01\/","title":{"rendered":"Interview with Yoshua Bengio, Pioneer of AI &#8211; 01"},"content":{"rendered":"\n<p>On June 7th, 2019 at the MILA (Montreal Institute for Learning Algorithms) in Montreal, Canada, I conducted my interview with Professor Yoshua Bengio, who is one of the pioneers of AI. He is well-known as the \u201cfather of AI\u201d for his great contribution to developing so-called deep learning. He has received the <a rel=\"noreferrer noopener\" href=\"https:\/\/awards.acm.org\/about\/2018-turing\" target=\"_blank\">2018 ACM A.M<\/a>. Turing Award with Geoffrey Hinton and Yann LeCun for major breakthroughs in AI.<\/p>\n\n\n\n<p>In my interview, I asked him about the possibilities of AGI, biased data, people\u2019s concerns about GAFA and China, the opportunities and risks of AI and the future of AI. All these questions are based on my previous experiences in the University of Cambridge as well as many international summits and conferences on AI I have been invited to recently.<\/p>\n\n\n\n<p>Bengio is also noteworthy because he chooses to remain as an academic, staying at the University of Montreal as head of the <a rel=\"noreferrer noopener\" href=\"https:\/\/mila.quebec\/\" target=\"_blank\">MILA<\/a>, while other AI leaders such as Geoffrey Hinton have left academia and now work for Google. Bengio continues to contribute to teaching students as well as engaging with local communities. He believes the education of future generations and people\u2019s engagement with AI is crucial for the creation of a better society including AI. 
This is because he is aware not only of the opportunities but also of the risks of AI. Through his startup, <a rel=\"noreferrer noopener\" href=\"https:\/\/www.elementai.com\/\" target=\"_blank\">Element AI<\/a>, he is also instrumental in building a bridge between academia and the business world.<\/p>\n\n\n\n<p>This is my interview with Yoshua.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">The Road to AGI<\/h4>\n\n\n\n<div class=\"wp-block-group wp-interview\"><div class=\"wp-block-group__inner-container is-layout-flow wp-block-group-is-layout-flow\">\n<p><strong>Yoshua&nbsp;Bengio : <\/strong>Did&nbsp;you&nbsp;have&nbsp;some&nbsp;questions&nbsp;for&nbsp;me?<\/p>\n\n\n\n<p><strong>Toshie&nbsp;Takahashi : <\/strong>Yes,&nbsp;of&nbsp;course.&nbsp;&nbsp;Thank&nbsp;you&nbsp;for&nbsp;taking&nbsp;the&nbsp;time.&nbsp;&nbsp;I&#8217;d&nbsp;like&nbsp;to&nbsp;ask&nbsp;you&nbsp;about&nbsp;AGI.<\/p>\n\n\n\n<p><strong>YB : <\/strong>Okay.<\/p>\n\n\n\n<p><strong>TT : <\/strong>I&nbsp;watched&nbsp;some&nbsp;of&nbsp;your&nbsp;videos&nbsp;and&nbsp;I&nbsp;understand&nbsp;you&nbsp;are&nbsp;very&nbsp;positive&nbsp;about&nbsp;AGI.<\/p>\n\n\n\n<p><strong>YB : <\/strong>No. 
<\/p>\n\n\n\n<p><strong>TT : <\/strong>No?&nbsp;I&nbsp;thought&nbsp;you&nbsp;showed&nbsp;a&#8230;<\/p>\n\n\n\n<p><strong>YB : <\/strong>I&#8217;m&nbsp;positive&nbsp;that&nbsp;we&nbsp;can&nbsp;build&nbsp;machines&nbsp;as&nbsp;intelligent&nbsp;as&nbsp;humans,&nbsp;but&nbsp;completely&nbsp;general&nbsp;intelligence&nbsp;is&nbsp;a&nbsp;different&nbsp;story.&nbsp;I&#8217;m&nbsp;not&nbsp;positive&nbsp;as&nbsp;to&nbsp;how&nbsp;humans&nbsp;might&nbsp;use&nbsp;it&nbsp;because&nbsp;we&#8217;re&nbsp;not&nbsp;very&nbsp;wise.<\/p>\n\n\n\n<p><strong>TT : <\/strong>Okay.&nbsp;So&nbsp;can&nbsp;you&nbsp;show&nbsp;a&nbsp;road&nbsp;map&nbsp;of&nbsp;how&nbsp;you&nbsp;could&nbsp;create&nbsp;AGI?<\/p>\n\n\n\n<p><strong>YB : <\/strong>Yes,&nbsp;the&nbsp;one&nbsp;I&nbsp;have&nbsp;chosen&nbsp;to&nbsp;explore.<\/p>\n\n\n\n<p><strong>TT : <\/strong>I&nbsp;spent&nbsp;some&nbsp;time&nbsp;in&nbsp;Cambridge,&nbsp;and&nbsp;some&nbsp;scholars,&nbsp;for&nbsp;example&nbsp;Professor&nbsp;John&nbsp;Daugman,&nbsp;the&nbsp;head&nbsp;of&nbsp;the&nbsp;Artificial&nbsp;Intelligence&nbsp;Group&nbsp;at&nbsp;the&nbsp;University&nbsp;of&nbsp;Cambridge,&nbsp;said&nbsp;that&nbsp;AGI&nbsp;is&nbsp;an&nbsp;illusion&nbsp;created&nbsp;by&nbsp;science&nbsp;fiction,&nbsp;because&nbsp;we&nbsp;don&#8217;t&nbsp;even&nbsp;understand&nbsp;a&nbsp;single&nbsp;neuron,&nbsp;so&nbsp;how&nbsp;can&nbsp;we&nbsp;create&nbsp;AGI?<\/p>\n\n\n\n<p><strong>YB : <\/strong>Yes,&nbsp;I&nbsp;disagree&nbsp;with&nbsp;him.<\/p>\n\n\n\n<p><strong>TT : <\/strong>Okay,&nbsp;so&nbsp;could&nbsp;you&nbsp;tell&nbsp;me&nbsp;about&nbsp;that?<\/p>\n\n\n\n<p><strong>YB : 
<\/strong>Sure.&nbsp;Having&nbsp;worked&nbsp;for&nbsp;decades&nbsp;on&nbsp;AI&nbsp;and&nbsp;machine&nbsp;learning&nbsp;I&nbsp;feel&nbsp;strongly&nbsp;that&nbsp;we&nbsp;have&nbsp;made&nbsp;very&nbsp;substantial&nbsp;progress,&nbsp;and&nbsp;in&nbsp;particular&nbsp;we&nbsp;have&nbsp;uncovered&nbsp;some&nbsp;principles,&nbsp;which&nbsp;today&nbsp;allow&nbsp;us&nbsp;to&nbsp;build&nbsp;very&nbsp;powerful&nbsp;systems.&nbsp;I&nbsp;also&nbsp;recognise&nbsp;that&nbsp;there&#8217;s&nbsp;a&nbsp;long&nbsp;way&nbsp;towards&nbsp;human&nbsp;level&nbsp;AI,&nbsp;and&nbsp;I&nbsp;don&#8217;t&nbsp;know&nbsp;how&nbsp;long&nbsp;it&#8217;s&nbsp;going&nbsp;to&nbsp;take.&nbsp;So&nbsp;I&nbsp;didn&#8217;t&nbsp;say&nbsp;we&#8217;ll&nbsp;find&nbsp;human&nbsp;level&nbsp;AI&nbsp;in&nbsp;five&nbsp;years,&nbsp;or&nbsp;ten&nbsp;years&nbsp;or&nbsp;50&nbsp;years.&nbsp;I&nbsp;don&#8217;t&nbsp;know&nbsp;how&nbsp;much&nbsp;time&nbsp;it&#8217;s&nbsp;going&nbsp;to&nbsp;take,&nbsp;but&nbsp;the&nbsp;human&nbsp;brain&nbsp;is&nbsp;a&nbsp;machine.&nbsp;It&#8217;s&nbsp;a&nbsp;very&nbsp;complex&nbsp;one&nbsp;and&nbsp;we&nbsp;don\u2019t&nbsp;fully&nbsp;understand&nbsp;it,&nbsp;but&nbsp;there\u2019s&nbsp;no&nbsp;reason&nbsp;to&nbsp;believe&nbsp;that&nbsp;we&nbsp;won\u2019t&nbsp;be&nbsp;able&nbsp;to&nbsp;figure&nbsp;out&nbsp;those&nbsp;principles.<\/p>\n\n\n\n<p><strong>TT : <\/strong>I&nbsp;see.&nbsp;Mr.&nbsp;Tom&nbsp;Everitt&nbsp;at&nbsp;DeepMind&nbsp;said&nbsp;that&nbsp;he&nbsp;could&nbsp;create&nbsp;AGI&nbsp;in&nbsp;a&nbsp;couple&nbsp;of&nbsp;decades,&nbsp;maybe&nbsp;20&nbsp;or&nbsp;30&nbsp;years.&nbsp;&nbsp;Not&nbsp;too&nbsp;far&nbsp;in&nbsp;the&nbsp;future.<\/p>\n\n\n\n<p><strong>YB : <\/strong>How&nbsp;does&nbsp;he&nbsp;know?<\/p>\n\n\n\n<p><strong>TT : <\/strong>I&nbsp;don&#8217;t&nbsp;know.&nbsp;I&nbsp;asked&nbsp;him&nbsp;but&nbsp;he&nbsp;didn&#8217;t&nbsp;answer&nbsp;it.<\/p>\n\n\n\n<p><strong>YB : <\/strong>Nobody&nbsp;knows.<\/p>\n\n\n\n<p><strong>TT : 
<\/strong>Nobody&nbsp;knows.&nbsp;Yes,&nbsp;of&nbsp;course.&nbsp;When&nbsp;I&nbsp;met&nbsp;Professor&nbsp;Sheldon&nbsp;Lee&nbsp;Glashow,&nbsp;a&nbsp;Nobel&nbsp;Prize&nbsp;winning&nbsp;American&nbsp;theoretical&nbsp;physicist,&nbsp;he&nbsp;told&nbsp;us&nbsp;that&nbsp;we&nbsp;won\u2019t&nbsp;have&nbsp;AGI.&nbsp;&nbsp;Or&nbsp;even&nbsp;if&nbsp;we&nbsp;have&nbsp;it,&nbsp;it\u2019d&nbsp;be&nbsp;very&nbsp;far&nbsp;away.<\/p>\n\n\n\n<p><strong>YB : <\/strong>Possibly,&nbsp;so&nbsp;we&nbsp;don&#8217;t&nbsp;know.&nbsp;It&nbsp;could&nbsp;be&nbsp;ten&nbsp;years,&nbsp;it&nbsp;could&nbsp;be&nbsp;100&nbsp;years. <\/p>\n\n\n\n<p><strong>TT : <\/strong>Oh&nbsp;really?<\/p>\n\n\n\n<p><strong>YB : <\/strong>Yes.<\/p>\n\n\n\n<p><strong>TT : <\/strong>Okay.<\/p>\n\n\n\n<p><strong>YB : <\/strong>It&#8217;s&nbsp;impossible&nbsp;to&nbsp;know&nbsp;these&nbsp;things.&nbsp;There\u2019s&nbsp;a&nbsp;beautiful&nbsp;analogy&nbsp;that&nbsp;I&nbsp;heard&nbsp;my&nbsp;friend&nbsp;Yann&nbsp;LeCun&nbsp;mention&nbsp;first.&nbsp;As&nbsp;a&nbsp;researcher,&nbsp;our&nbsp;progress&nbsp;is&nbsp;like&nbsp;climbing&nbsp;a&nbsp;mountain&nbsp;and&nbsp;as&nbsp;we&nbsp;approach&nbsp;the&nbsp;peak&nbsp;of&nbsp;that&nbsp;mountain&nbsp;we&nbsp;realise&nbsp;there&#8217;s&nbsp;some&nbsp;other&nbsp;mountains&nbsp;behind.<\/p>\n\n\n\n<p><strong>TT : <\/strong>Yes,&nbsp;exactly.<\/p>\n\n\n\n<p><strong>YB : <\/strong>And&nbsp;we&nbsp;don\u2019t&nbsp;know&nbsp;what&nbsp;other&nbsp;higher&nbsp;peak&nbsp;is&nbsp;hidden&nbsp;from&nbsp;our&nbsp;view&nbsp;right&nbsp;now.<\/p>\n\n\n\n<p><strong>TT : <\/strong>I&nbsp;see.<\/p>\n\n\n\n<p><strong>YB : 
<\/strong>So&nbsp;it&nbsp;might&nbsp;be&nbsp;that&nbsp;the&nbsp;obstacles&nbsp;that&nbsp;we&#8217;re&nbsp;currently&nbsp;working&nbsp;on&nbsp;are&nbsp;going&nbsp;to&nbsp;be&nbsp;the&nbsp;last&nbsp;ones&nbsp;to&nbsp;reach&nbsp;human&nbsp;level&nbsp;AI,&nbsp;or&nbsp;maybe&nbsp;there&nbsp;will&nbsp;be&nbsp;ten&nbsp;more&nbsp;big&nbsp;challenges&nbsp;that&nbsp;we&nbsp;don\u2019t&nbsp;even&nbsp;perceive&nbsp;right&nbsp;now,&nbsp;so&nbsp;I&nbsp;don&#8217;t&nbsp;think&nbsp;it&#8217;s&nbsp;plausible&nbsp;that&nbsp;we&nbsp;could&nbsp;really&nbsp;know&nbsp;when&nbsp;&#8211;&nbsp;how&nbsp;many&nbsp;years,&nbsp;how&nbsp;many&nbsp;decades&nbsp;&#8211;&nbsp;it&nbsp;will&nbsp;take&nbsp;to&nbsp;reach&nbsp;human&nbsp;level&nbsp;AI.<\/p>\n\n\n\n<p><strong>TT : <\/strong>I&nbsp;see.&nbsp;&nbsp;But&nbsp;some&nbsp;people&nbsp;also&nbsp;say&nbsp;that&nbsp;we&nbsp;need&nbsp;a&nbsp;different&nbsp;breakthrough&nbsp;to&nbsp;create&nbsp;AGI.&nbsp;We&nbsp;need&nbsp;a&nbsp;kind&nbsp;of&nbsp;paradigm&nbsp;shift&nbsp;from&nbsp;our&nbsp;current&nbsp;approach.&nbsp;&nbsp;Do&nbsp;you&nbsp;think&nbsp;that&nbsp;you&nbsp;can&nbsp;see&nbsp;the&nbsp;road&nbsp;to&nbsp;reach&nbsp;AGI&nbsp;if&nbsp;you&nbsp;keep&nbsp;on&nbsp;with&nbsp;deep&nbsp;learning?&nbsp;So&nbsp;is&nbsp;this&nbsp;the&nbsp;right&nbsp;road?<\/p>\n\n\n\n<p><strong>YB : 
<\/strong>We&nbsp;have&nbsp;understood&nbsp;as&nbsp;I&nbsp;said&nbsp;some&nbsp;very&nbsp;important&nbsp;principles&nbsp;through&nbsp;our&nbsp;work&nbsp;on&nbsp;deep&nbsp;learning&nbsp;and&nbsp;I&nbsp;believe&nbsp;those&nbsp;principles&nbsp;are&nbsp;here&nbsp;to&nbsp;stay,&nbsp;but&nbsp;we&nbsp;need&nbsp;additional&nbsp;advances&nbsp;that&nbsp;are&nbsp;going&nbsp;to&nbsp;be&nbsp;combined&nbsp;with&nbsp;things&nbsp;we&nbsp;have&nbsp;already&nbsp;figured&nbsp;out.&nbsp;I&nbsp;think&nbsp;deep&nbsp;learning&nbsp;is&nbsp;here&nbsp;to&nbsp;stay,&nbsp;but&nbsp;as&nbsp;is,&nbsp;it\u2019s&nbsp;obviously&nbsp;not&nbsp;sufficient&nbsp;to&nbsp;do,&nbsp;for&nbsp;example,&nbsp;higher-level&nbsp;cognition&nbsp;that&nbsp;humans&nbsp;are&nbsp;doing.&nbsp;We&#8217;ve&nbsp;made&nbsp;a&nbsp;lot&nbsp;of&nbsp;progress&nbsp;on&nbsp;what&nbsp;psychologists&nbsp;call&nbsp;System&nbsp;1&nbsp;cognition,&nbsp;which&nbsp;is&nbsp;everything&nbsp;to&nbsp;do&nbsp;with&nbsp;intuitive&nbsp;tasks.&nbsp;Here&nbsp;is&nbsp;an&nbsp;example&nbsp;of&nbsp;what&nbsp;we&#8217;ve&nbsp;discovered,&nbsp;in&nbsp;fact&nbsp;one&nbsp;of&nbsp;the&nbsp;central&nbsp;ideas&nbsp;in&nbsp;deep&nbsp;learning:&nbsp;the&nbsp;notion&nbsp;of&nbsp;distributed&nbsp;representation.&nbsp;I\u2019m&nbsp;very,&nbsp;very&nbsp;sure&nbsp;that&nbsp;this&nbsp;notion&nbsp;will&nbsp;stay&nbsp;because&nbsp;it&#8217;s&nbsp;so&nbsp;powerful.<\/p>\n<\/div><\/div>\n\n\n\n<h4 class=\"wp-block-heading\">The Future of AI<\/h4>\n\n\n\n<div class=\"wp-block-group wp-interview\"><div class=\"wp-block-group__inner-container is-layout-flow wp-block-group-is-layout-flow\">\n<p><strong>TT : <\/strong>You&nbsp;are&nbsp;really&nbsp;ahead&nbsp;of&nbsp;this&#8230;So&nbsp;what&nbsp;can&nbsp;you&nbsp;see&nbsp;of&nbsp;the&nbsp;future?<\/p>\n\n\n\n<p><strong>YB : <\/strong>I&nbsp;don\u2019t&nbsp;see&nbsp;the&nbsp;future.<\/p>\n\n\n\n<p><strong>TT : 
<\/strong>Really?&nbsp;I&nbsp;thought&nbsp;that&nbsp;you&nbsp;could&nbsp;see&nbsp;a&nbsp;future&nbsp;we&nbsp;cannot&nbsp;see&nbsp;yet.<\/p>\n\n\n\n<p><strong>YB : <\/strong>No,&nbsp;I&nbsp;have&nbsp;research&nbsp;goals&nbsp;and&nbsp;I&nbsp;have&nbsp;chosen&nbsp;to&nbsp;explore&nbsp;particular&nbsp;directions&nbsp;because&nbsp;I&nbsp;believe&nbsp;in&nbsp;them,&nbsp;so&nbsp;the&nbsp;vision&nbsp;I&nbsp;have&nbsp;is&nbsp;that&nbsp;a&nbsp;big&nbsp;missing&nbsp;piece&nbsp;in&nbsp;our&nbsp;current&nbsp;machine&nbsp;learning&nbsp;systems&nbsp;is&nbsp;what&nbsp;people&nbsp;call&nbsp;common&nbsp;sense.&nbsp;So&nbsp;think&nbsp;about&nbsp;the&nbsp;intelligence&nbsp;of&nbsp;a&nbsp;two-year-old&nbsp;or&nbsp;the&nbsp;intelligence&nbsp;of&nbsp;a&nbsp;cat,&nbsp;we&nbsp;don&#8217;t&nbsp;even&nbsp;have&nbsp;that&nbsp;in&nbsp;machines&nbsp;right&nbsp;now.&nbsp;And&nbsp;that&#8217;s&nbsp;not&nbsp;even&nbsp;starting&nbsp;to&nbsp;think&nbsp;about&nbsp;things&nbsp;like&nbsp;language&nbsp;where&nbsp;we&#8217;re&nbsp;seriously&nbsp;lacking,&nbsp;especially&nbsp;when&nbsp;you&nbsp;look&nbsp;at&nbsp;the&nbsp;mistakes&nbsp;made&nbsp;by&nbsp;current&nbsp;state-of-the-art&nbsp;systems.&nbsp;That&nbsp;sort&nbsp;of&nbsp;common&nbsp;sense&nbsp;understanding&nbsp;includes&nbsp;things&nbsp;that&nbsp;current&nbsp;machine&nbsp;learning&nbsp;doesn&#8217;t&nbsp;do,&nbsp;like&nbsp;understanding&nbsp;cause&nbsp;and&nbsp;effect,&nbsp;and&nbsp;discovering&nbsp;cause&nbsp;and&nbsp;effect.&nbsp;It&nbsp;includes&nbsp;a&nbsp;broad&nbsp;understanding&nbsp;of&nbsp;the&nbsp;world,&nbsp;not&nbsp;just&nbsp;one&nbsp;specialised&nbsp;task.&nbsp;It&nbsp;includes&nbsp;the&nbsp;ability&nbsp;to&nbsp;discover&nbsp;this&nbsp;model&nbsp;of&nbsp;the&nbsp;world&nbsp;through&nbsp;unsupervised&nbsp;exploration.&nbsp;We&nbsp;rely&nbsp;today&nbsp;heavily&nbsp;on&nbsp;supervised&nbsp;learning&nbsp;where&nbsp;all&nbsp;of&nbsp;the&nbsp;high-level&nbsp;concepts&nbsp;have&nbsp;been&nbsp;defined&nbsp;by&nbsp;a&nbsp;human&nbsp;teacher&nbsp
;or&nbsp;human&nbsp;labels.&nbsp;There&nbsp;are&nbsp;lots&nbsp;of&nbsp;aspects&nbsp;of&nbsp;intelligence&nbsp;that&nbsp;are&nbsp;currently&nbsp;at&nbsp;the&nbsp;frontier,&nbsp;and&nbsp;I&#8217;m&nbsp;not&nbsp;the&nbsp;only&nbsp;one&nbsp;exploring&nbsp;these&nbsp;things,&nbsp;which&nbsp;could&nbsp;make&nbsp;a&nbsp;big&nbsp;difference&nbsp;in&nbsp;a&nbsp;few&nbsp;years,&nbsp;but&nbsp;it&#8217;s&nbsp;hard&nbsp;to&nbsp;be&nbsp;sure.<\/p>\n\n\n\n<p><strong>TT :<\/strong> Is&nbsp;your&nbsp;goal&nbsp;to&nbsp;create&nbsp;a&nbsp;human&nbsp;or&nbsp;a&nbsp;superhuman?<\/p>\n\n\n\n<p><strong>YB : <\/strong>No,&nbsp;my&nbsp;goal&nbsp;is&nbsp;to&nbsp;understand&nbsp;general&nbsp;principles&nbsp;of&nbsp;intelligence,&nbsp;how&nbsp;an&nbsp;agent&nbsp;can&nbsp;become&nbsp;intelligent.&nbsp;I&nbsp;and&nbsp;many&nbsp;others&nbsp;would&nbsp;like&nbsp;to&nbsp;discover&nbsp;the&nbsp;equivalent&nbsp;of&nbsp;the&nbsp;laws&nbsp;of&nbsp;physics,&nbsp;but&nbsp;for&nbsp;intelligence.<\/p>\n\n\n\n<p><strong>TT : <\/strong>Yes.<\/p>\n\n\n\n<p><strong>YB : <\/strong>And&nbsp;presumably&nbsp;those&nbsp;principles&nbsp;would&nbsp;apply&nbsp;to&nbsp;humans,&nbsp;to&nbsp;animals,&nbsp;to&nbsp;aliens,&nbsp;who&nbsp;might&nbsp;be&nbsp;intelligent,&nbsp;to&nbsp;machines&nbsp;that&nbsp;we&nbsp;can&nbsp;build,&nbsp;so&nbsp;these&nbsp;would&nbsp;be&nbsp;very&nbsp;general&nbsp;principles&nbsp;and&nbsp;machine&nbsp;learning&nbsp;has&nbsp;already&nbsp;established&nbsp;some&nbsp;of&nbsp;those&nbsp;principles,&nbsp;but&nbsp;we&#8217;re&nbsp;still&nbsp;missing&nbsp;some&nbsp;important&nbsp;ones,&nbsp;I&nbsp;believe.<\/p>\n\n\n\n<p><strong>TT : <\/strong>But&nbsp;you&#8217;ve&nbsp;found&nbsp;a&nbsp;simple&nbsp;principle?<\/p>\n\n\n\n<p><strong>YB : 
<\/strong>Yes,&nbsp;several.&nbsp;But&nbsp;behind&nbsp;this,&nbsp;there&#8217;s&nbsp;a&nbsp;meta-principle,&nbsp;a&nbsp;scientific&nbsp;hypothesis,&nbsp;that&nbsp;intelligence&nbsp;could&nbsp;be&nbsp;explained&nbsp;by&nbsp;a&nbsp;few&nbsp;simple&nbsp;principles.&nbsp;We&nbsp;don&#8217;t&nbsp;know&nbsp;if&nbsp;that&nbsp;hypothesis&nbsp;is&nbsp;true,&nbsp;but&nbsp;the&nbsp;success&nbsp;of&nbsp;deep&nbsp;learning&nbsp;in&nbsp;the&nbsp;last&nbsp;few&nbsp;years&nbsp;is&nbsp;a&nbsp;good&nbsp;validation&nbsp;of&nbsp;that&nbsp;hypothesis.&nbsp;It&#8217;s&nbsp;consistent&nbsp;with&nbsp;that&nbsp;hypothesis&nbsp;because&nbsp;deep&nbsp;learning&nbsp;is&nbsp;built&nbsp;on&nbsp;a&nbsp;few&nbsp;very&nbsp;simple&nbsp;principles.&nbsp;Most&nbsp;of&nbsp;the&nbsp;complexity&nbsp;of&nbsp;the&nbsp;systems&nbsp;that&nbsp;are&nbsp;trained&nbsp;with&nbsp;deep&nbsp;learning&nbsp;is&nbsp;not&nbsp;in&nbsp;the&nbsp;learning&nbsp;mechanisms;&nbsp;it&#8217;s&nbsp;in&nbsp;the&nbsp;data.&nbsp;The&nbsp;data&nbsp;contains&nbsp;the&nbsp;overwhelming&nbsp;share&nbsp;of&nbsp;the&nbsp;information&nbsp;in&nbsp;a&nbsp;current&nbsp;trained&nbsp;AI&nbsp;system,&nbsp;while&nbsp;only&nbsp;a&nbsp;little&nbsp;bit&nbsp;of&nbsp;information,&nbsp;relatively&nbsp;speaking,&nbsp;is&nbsp;in&nbsp;those&nbsp;principles,&nbsp;which&nbsp;are&nbsp;like&nbsp;the&nbsp;learning&nbsp;procedures.<\/p>\n<\/div><\/div>\n\n\n\n<h4 class=\"wp-block-heading\">Biased Data and AI for Humanity<\/h4>\n\n\n\n<div class=\"wp-block-group wp-interview\"><div class=\"wp-block-group__inner-container is-layout-flow wp-block-group-is-layout-flow\">\n<p><strong>TT : <\/strong>You&nbsp;said&nbsp;that&nbsp;intelligence&nbsp;comes&nbsp;from&nbsp;knowledge&nbsp;and&nbsp;knowledge&nbsp;is&nbsp;acquired&nbsp;from&nbsp;data?<\/p>\n\n\n\n<p><strong>YB :<\/strong> That&#8217;s&nbsp;right.<\/p>\n\n\n\n<p><strong>TT : 
<\/strong>But&nbsp;if&nbsp;data&nbsp;is&nbsp;biased,&nbsp;what&nbsp;happens?&nbsp;&nbsp;Some&nbsp;social&nbsp;scientists&nbsp;criticize&nbsp;most&nbsp;data&nbsp;for&nbsp;being&nbsp;based&nbsp;on&nbsp;male,&nbsp;Caucasian,&nbsp;middle-aged&#8230;<\/p>\n\n\n\n<p><strong>YB : <\/strong>That\u2019s&nbsp;right.&nbsp;<\/p>\n\n\n\n<p><strong>TT : <\/strong>So&nbsp;if&nbsp;a&nbsp;young&nbsp;woman&nbsp;of&nbsp;colour&nbsp;applies,&nbsp;for&nbsp;instance,&nbsp;for&nbsp;an&nbsp;insurance&nbsp;policy,&nbsp;AI&nbsp;might&nbsp;say&nbsp;no&nbsp;because&nbsp;they&nbsp;don\u2019t&nbsp;have&nbsp;enough&nbsp;data&nbsp;for&nbsp;those&nbsp;applicants.<\/p>\n\n\n\n<p><strong>YB : <\/strong>Yes,&nbsp;absolutely.&nbsp;I&nbsp;think&nbsp;there&nbsp;are&nbsp;technical&nbsp;solutions&nbsp;and&nbsp;social&nbsp;solutions&nbsp;to&nbsp;this&nbsp;problem.&nbsp;We&nbsp;have&nbsp;to&nbsp;change&nbsp;our&nbsp;social&nbsp;norms,&nbsp;for&nbsp;example,&nbsp;so&nbsp;that&nbsp;companies&nbsp;building&nbsp;products&nbsp;use&nbsp;technological&nbsp;solutions&nbsp;and&nbsp;logistical&nbsp;solutions&nbsp;&#8211;&nbsp;for&nbsp;example,&nbsp;in&nbsp;the&nbsp;way&nbsp;that&nbsp;the&nbsp;data&nbsp;is&nbsp;collected,&nbsp;in&nbsp;the&nbsp;way&nbsp;that&nbsp;it&#8217;s&nbsp;described&nbsp;and&nbsp;managed,&nbsp;and&nbsp;in&nbsp;the&nbsp;particular&nbsp;learning&nbsp;algorithms&nbsp;that&nbsp;are&nbsp;used&nbsp;&#8211;&nbsp;because&nbsp;we&nbsp;know&nbsp;techniques&nbsp;that&nbsp;can&nbsp;mitigate&nbsp;bias&nbsp;and&nbsp;discrimination.&nbsp;So&nbsp;we&nbsp;can&nbsp;probably&nbsp;include&nbsp;those&nbsp;techniques,&nbsp;but&nbsp;more&nbsp;importantly&nbsp;we&nbsp;need&nbsp;to&nbsp;make&nbsp;sure&nbsp;that&nbsp;companies&nbsp;and&nbsp;governments&nbsp;use&nbsp;them.<\/p>\n\n\n\n<p><strong>TT : <\/strong>Is&nbsp;that&nbsp;why&nbsp;you&nbsp;think&nbsp;it&#8217;s&nbsp;important&nbsp;that&nbsp;both&nbsp;social&nbsp;scientists&nbsp;and&nbsp;natural&nbsp;scientists&nbsp;work&nbsp;on&nbsp;AI&nbsp;together?<\/p>\n\n\n\n<p><strong>YB : 
<\/strong>Yes.<\/p>\n\n\n\n<p><strong>TT : <\/strong>I&nbsp;love&nbsp;the&nbsp;idea&nbsp;of&nbsp;\u201cAI&nbsp;for&nbsp;humanity\u201d&nbsp;as&nbsp;you&nbsp;have&nbsp;in&nbsp;the&nbsp;Mila&nbsp;here.<\/p>\n\n\n\n<p><strong>YB : <\/strong>Right,&nbsp;because&nbsp;the&nbsp;AI&nbsp;researcher&nbsp;might&nbsp;not&nbsp;realise&nbsp;some&nbsp;of&nbsp;the&nbsp;social&nbsp;issues&nbsp;that&nbsp;could&nbsp;be&nbsp;involved&nbsp;in&nbsp;the&nbsp;deployment.&nbsp;I&nbsp;think&nbsp;it&#8217;s&nbsp;particularly&nbsp;important&nbsp;for&nbsp;people&nbsp;who&nbsp;are&nbsp;doing&nbsp;research&nbsp;or&nbsp;development&nbsp;of&nbsp;products&nbsp;that&nbsp;is&nbsp;close&nbsp;to&nbsp;something&nbsp;that&nbsp;people&nbsp;will&nbsp;use,&nbsp;in&nbsp;large-scale&nbsp;deployment&nbsp;for&nbsp;example.<\/p>\n<\/div><\/div>\n\n\n\n<figure class=\"wp-block-image size-large boxcenter-vertical\"><img loading=\"lazy\" decoding=\"async\" width=\"500\" height=\"746\" src=\"https:\/\/gen-zai.org\/wp-content\/uploads\/2021\/09\/09.png\" alt=\"\" class=\"wp-image-164\" srcset=\"https:\/\/gen-zai.org\/wp-content\/uploads\/2021\/09\/09.png 500w, https:\/\/gen-zai.org\/wp-content\/uploads\/2021\/09\/09-201x300.png 201w\" sizes=\"auto, (max-width: 500px) 100vw, 500px\" \/><figcaption>Prof. 
Myriam C\u00f4t\u00e9, the Director of AI for Humanity at Montreal Institute for Learning Algorithms<\/figcaption><\/figure>\n\n\n\n<p><a href=\"https:\/\/gen-zai.org\/en\/media\/thinking-yoshua-bengio-02\/\">The 2nd part is here<\/a><\/p>\n","protected":false},"featured_media":359,"template":"","media_category":[2],"media_tag":[5,12,4],"class_list":["post-142","genzaimedia","type-genzaimedia","status-publish","has-post-thumbnail","hentry","media_category-thinking","media_tag-opportunities-and-risks","media_tag-future","media_tag-interviews","en-US"],"_links":{"self":[{"href":"https:\/\/gen-zai.org\/wp-json\/wp\/v2\/genzaimedia\/142","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/gen-zai.org\/wp-json\/wp\/v2\/genzaimedia"}],"about":[{"href":"https:\/\/gen-zai.org\/wp-json\/wp\/v2\/types\/genzaimedia"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/gen-zai.org\/wp-json\/wp\/v2\/media\/359"}],"wp:attachment":[{"href":"https:\/\/gen-zai.org\/wp-json\/wp\/v2\/media?parent=142"}],"wp:term":[{"taxonomy":"media_category","embeddable":true,"href":"https:\/\/gen-zai.org\/wp-json\/wp\/v2\/media_category?post=142"},{"taxonomy":"media_tag","embeddable":true,"href":"https:\/\/gen-zai.org\/wp-json\/wp\/v2\/media_tag?post=142"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}