{"id":227438,"date":"2023-11-01T00:01:45","date_gmt":"2023-10-31T18:31:45","guid":{"rendered":"https:\/\/arunachaltimes.in\/?p=227438"},"modified":"2023-11-01T00:01:45","modified_gmt":"2023-10-31T18:31:45","slug":"cutting-edge-ai-raises-fears-about-risks-to-humanity-are-tech-and-political-leaders-doing-enough","status":"publish","type":"post","link":"https:\/\/arunachaltimes.in\/index.php\/2023\/11\/01\/cutting-edge-ai-raises-fears-about-risks-to-humanity-are-tech-and-political-leaders-doing-enough\/","title":{"rendered":"Cutting-edge AI raises fears about risks to humanity. Are tech and political leaders doing enough?"},"content":{"rendered":"<p style=\"text-align: justify;\">LONDON, 31 Oct (AP) \u2014 Chatbots like ChatGPT wowed the world with their ability to write speeches, plan vacations or hold a conversation as well as or arguably even better than humans do, thanks to cutting-edge artificial intelligence systems. Now, frontier AI has become the latest buzzword as concerns grow that the emerging technology has capabilities that could endanger humanity.<\/p>\n<p style=\"text-align: justify;\">Everyone from the British government to top researchers and even major AI companies themselves is raising the alarm about frontier AI\u2019s as-yet-unknown dangers and calling for safeguards to protect people from its existential threats.<\/p>\n<p style=\"text-align: justify;\">The debate comes to a head Wednesday, when British Prime Minister Rishi Sunak hosts a two-day summit focused on frontier AI. It\u2019s reportedly expected to draw a group of about 100 officials from 28 countries, including U.S. Vice President Kamala Harris, European Commission President Ursula von der Leyen and executives from key U.S.
artificial intelligence companies including OpenAI, Google\u2019s DeepMind and Anthropic.<\/p>\n<p style=\"text-align: justify;\">The venue is Bletchley Park, a former top-secret base for World War II codebreakers led by Alan Turing. The historic estate is seen as the birthplace of modern computing because it is where Turing and others famously cracked Nazi Germany\u2019s codes using the world\u2019s first digital programmable computer.<\/p>\n<p style=\"text-align: justify;\">In a speech last week, Sunak said only governments \u2014 not AI companies \u2014 can keep people safe from the technology\u2019s risks. However, he also noted that the U.K.\u2019s approach \u201cis not to rush to regulate,\u201d even as he outlined a host of scary-sounding threats, such as the use of AI to more easily make chemical or biological weapons.<\/p>\n<p style=\"text-align: justify;\">\u201cWe need to take this seriously, and we need to start focusing on trying to get ahead of the problem,\u201d said Jeff Clune, an associate computer science professor at the University of British Columbia focusing on AI and machine learning.<\/p>\n<p style=\"text-align: justify;\">Clune was among a group of influential researchers who authored a paper last week calling for governments to do more to manage risks from AI. It\u2019s the latest in a series of dire warnings from tech moguls like Elon Musk and OpenAI CEO Sam Altman about the rapidly evolving technology and the disparate ways the industry, political leaders and researchers see the path forward when it comes to reining in the risks and regulation.<\/p>\n<p style=\"text-align: justify;\">It\u2019s far from certain that AI will wipe out mankind, Clune said, \u201cbut it has sufficient risk and chance of occurring.
And we need to mobilize society\u2019s attention to try to solve it now rather than wait for the worst-case scenario to happen.\u201d<\/p>\n<p style=\"text-align: justify;\">One of Sunak\u2019s big goals is to find agreement on a communique about the nature of AI risks. He\u2019s also unveiling plans for an AI Safety Institute that will evaluate and test new types of the technology and proposing creation of a global expert panel, inspired by the U.N. climate change panel, to understand AI and draw up a \u201cState of AI Science\u201d report.<\/p>\n<p style=\"text-align: justify;\">The summit reflects the British government\u2019s eagerness to host international gatherings to show it has not become isolated and can still lead on the world stage after its departure from the European Union three years ago.<\/p>\n<p style=\"text-align: justify;\">The U.K. also wants to stake its claim in a hot-button policy issue where both the U.S. and the 27-nation EU are making moves.<\/p>\n<p style=\"text-align: justify;\">Brussels is putting the final touches on what\u2019s poised to be the world\u2019s first comprehensive AI regulations, while U.S. President Joe Biden signed a sweeping executive order Monday to guide the development of AI, building on voluntary commitments made by tech companies.<\/p>\n<p style=\"text-align: justify;\">China, which along with the U.S. 
is one of the two world AI powers, has been invited to the summit, though Sunak couldn\u2019t say with \u201c100% certainty\u201d that representatives from Beijing will attend.<\/p>\n<p style=\"text-align: justify;\">The paper signed by Clune and more than 20 other experts, including two dubbed the \u201cgodfathers\u201d of AI \u2014 Geoffrey Hinton and Yoshua Bengio \u2014 called for governments and AI companies to take concrete action, such as by spending a third of their research and development resources on ensuring safe and ethical use of advanced autonomous AI.<\/p>\n<p style=\"text-align: justify;\">Frontier AI is shorthand for the latest and most powerful systems that go right up to the edge of AI\u2019s capabilities. They\u2019re based on foundation models, which are algorithms trained on a broad range of information scraped from the internet to provide a general, but not infallible, base of knowledge.<\/p>\n<p style=\"text-align: justify;\">That makes frontier AI systems \u201cdangerous because they\u2019re not perfectly knowledgeable,\u201d Clune said. 
\u201cPeople assume and think that they\u2019re tremendously knowledgeable, and that can get you in trouble.\u201d<\/p>\n<p style=\"text-align: justify;\">The meeting, though, has faced criticism that it\u2019s too preoccupied with far-off dangers.<\/p>\n<p style=\"text-align: justify;\">\u201cThe focus of the summit is actually a bit too narrow,\u201d said Francine Bennett, interim director of the Ada Lovelace Institute, a policy research group in London focusing on AI.<\/p>\n<p style=\"text-align: justify;\">\u201cWe risk just forgetting about the broader set of risk and safety\u201d and the algorithms that are already part of everyday life, she said at a Chatham House panel last week.<\/p>\n<p style=\"text-align: justify;\">Deb Raji, a University of California, Berkeley, researcher who has studied algorithmic bias, pointed to problems with systems already deployed in the U.K., such as police facial recognition systems that had a much higher false detection rate for Black people and an algorithm that botched a high school exam.<\/p>\n<p style=\"text-align: justify;\">The summit is a \u201cmissed opportunity\u201d and marginalizes communities and workers that are most affected by AI, more than 100 civil society groups and experts said in an open letter to Sunak.<\/p>\n<p style=\"text-align: justify;\">Skeptics say the U.K. government has set its summit goals too low, given that regulating AI will not be on the agenda, focusing instead on establishing \u201cguardrails.\u201d<\/p>\n<p style=\"text-align: justify;\">Sunak\u2019s call to not rush into regulation is reminiscent of \u201cthe messaging we hear from a lot of the corporate representatives in the U.S.,\u201d Raji said. \u201cAnd so I\u2019m not surprised that it\u2019s also making its way into what they might be saying to U.K. 
officials.\u201d<\/p>\n<p style=\"text-align: justify;\">Tech companies shouldn\u2019t be involved in drafting regulations because they tend to \u201cunderestimate or downplay\u201d the urgency and full range of harms, Raji said. They also aren\u2019t so open to supporting proposed laws \u201cthat might be necessary but might effectively endanger their bottom line,\u201d she said.<\/p>\n<p style=\"text-align: justify;\">DeepMind and OpenAI didn\u2019t respond to requests for comment. Anthropic said co-founders Dario Amodei and Jack Clark would be attending.<\/p>\n<p style=\"text-align: justify;\">Microsoft said in a blog post that it looked forward \u201cto the U.K.\u2019s next steps in convening the summit, advancing its efforts on AI safety testing, and supporting greater international collaboration on AI governance.\u201d<\/p>\n<p style=\"text-align: justify;\">The government insists it will have the right mix of attendees from government, academia, civil society and business.<\/p>\n<p style=\"text-align: justify;\">The Institute for Public Policy Research, a center-left U.K. think tank, said it would be a \u201chistoric mistake\u201d if the tech industry was left to regulate itself without government supervision.<\/p>\n<p style=\"text-align: justify;\">\u201cRegulators and the public are largely in the dark about how AI is being deployed across the economy,\u201d said Carsten Jung, the group\u2019s senior economist. \u201cBut self-regulation didn\u2019t work for social media companies, it didn\u2019t work for the finance sector, and it won\u2019t work for AI.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>LONDON, 31 Oct (AP) \u2014 Chatbots like ChatGPT wowed the world with their ability to write speeches, plan vacations or hold a conversation as well as or arguably even better than humans do, thanks to cutting-edge artificial intelligence systems.
Now, frontier AI has become the latest buzzword as concerns grow that the emerging technology has [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[15],"tags":[],"class_list":{"0":"post-227438","1":"post","2":"type-post","3":"status-publish","4":"format-standard","6":"category-world"},"_links":{"self":[{"href":"https:\/\/arunachaltimes.in\/index.php\/wp-json\/wp\/v2\/posts\/227438","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/arunachaltimes.in\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/arunachaltimes.in\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/arunachaltimes.in\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/arunachaltimes.in\/index.php\/wp-json\/wp\/v2\/comments?post=227438"}],"version-history":[{"count":0,"href":"https:\/\/arunachaltimes.in\/index.php\/wp-json\/wp\/v2\/posts\/227438\/revisions"}],"wp:attachment":[{"href":"https:\/\/arunachaltimes.in\/index.php\/wp-json\/wp\/v2\/media?parent=227438"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/arunachaltimes.in\/index.php\/wp-json\/wp\/v2\/categories?post=227438"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/arunachaltimes.in\/index.php\/wp-json\/wp\/v2\/tags?post=227438"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}