Science fiction has taught us, over many decades, to fear being destroyed by robots. There was a perfect example of this yesterday, as a rather humdrum news story from June about the development of Facebook’s chatbots suddenly exploded across the media. Breathless reporters told us that “panicking” Facebook engineers shut down the project when they discovered bots talking to each other in a language of their own invention. “This is how it starts,” said one prophet of doom on Twitter, envisaging a scenario where robots agree among themselves to “Annihilate Earth”, but we don’t notice because we have no idea how to speak Robot.

In fact, all the bots had done was to sidestep the niceties of English grammar in order to understand each other better, which is something humans do all the time. “I can can I everything else” was one example of their linguistic invention, which sounds a little clumsy but isn’t as worrisome as something impenetrable like “X&ZPP29 4H27%V5”. Facebook’s engineers, aware that they were meant to be building a tool that enabled bots to communicate with humans, simply tweaked the settings to force them to stick to English sentence structures.

This fear of what might happen when computers become more intelligent than us is a topic that bubbles up with increasing frequency as progress is made in the field of artificial intelligence (AI). Only last week, two billionaires from the world of tech, Facebook’s Mark Zuckerberg and SpaceX’s CEO Elon Musk, had a very public war of words over the dangers of unregulated AI experimentation. Musk has voiced his concerns about this for many years, describing AI as a “fundamental risk to the existence of human civilisation” and donating large sums of money towards developing AI in a way he believes is safe.

Zuckerberg, by contrast, is a shoulder-shrugging optimist; he employs an entire AI research team that’s supposedly focused on making our lives “better in the future”, and he criticised Musk (without mentioning him by name) for irresponsible doom-mongering. “If you’re arguing against AI,” he said, “then you’re arguing against safer cars… against being able to better diagnose people”. Musk chose to respond via Twitter: “His understanding of the subject is limited.”

It was, admittedly, something of a straw man argument from Zuckerberg. Musk isn’t trying to suppress the use of AI as a problem-solving tool; his concerns lie further down the line, and those concerns are shared by a number of scientists and futurologists. In a 2014 piece for The Huffington Post, Stephen Hawking warned of “complacency”, and asked how we might improve our chances of “avoiding the risks” associated with AI.

What are those risks, specifically? Musk talks of AGI, artificial general intelligence, where the intellect of a computer may at some point match or exceed that of human beings, a moment also known as “The Singularity”. There may come a point, he believes, where the machine becomes a superintelligent autonomous agent, able to redesign itself in a way we’re unable to understand. People seem unable to grasp this idea, he says, until they “see robots going down the street killing people”. But there are academics, including Noam Chomsky, who don’t believe computers will ever be able to attain that level of intelligence. “If we’re lucky, they’ll treat us as pets,” says Paul Saffo, a consulting professor at Stanford University, “and if we’re very unlucky, they’ll treat us as food.”