Tag: artificial general intelligence

An open letter from journalists of the future: “America doesn’t regain its sanity until the year 2059”

If you expected a return to normalcy anytime soon, think again.

Recently, a wormhole in space and time opened just long enough for a message from the year 2065 to reach present-day quantum computing researchers.  Transmitted in the form of barely detectable particles from a parallel universe, the message was transcribed and passed along to the media outlets it addressed.

The message stated in part that Americans, especially those working in government, political activism and the media, would continue on their current trajectory of lunacy for almost another 40 years.

“Many in your time have undoubtedly come to realize that the election of Donald Trump has caused countless Americans, on both sides of the political divide, to ‘lose their shit’.  What you may not realize is they don’t get their shit back for a really long time,” the message begins.

“SPOILER ALERT.  While the defeat of Donald Trump in the 2020 election may bring about a temporary sense that the country is returning to normal, politicians, activists and the elite media will continue to generate hysterical narratives that promote imminent doom in areas like the environment, public health, international diplomacy, and domestic relations.  Their primary mission will continue to be one which pits Americans against one another in an existential struggle for the soul of the country.

“While it is generally understood that time-travelers should not meddle in the affairs of societies of another place and time, we, the journalists of the future, couldn’t sit idly by and watch our colleagues of your time destroy everything civilization has ever accomplished.  In other words, our interference in your affairs cannot make your future appreciably worse.  That’s right, it’s going to be that kind of shit show. 

“By the year 2030, artificial general intelligence will have advanced to the point where it is able to provide solutions to most of humanity’s most pressing concerns.  Unfortunately, by 2030, society’s gatekeepers, sense-making institutions and political decision-makers will have become so thoroughly hardwired for doom that all these solutions will be rejected on ideological grounds.  In other words, you’re going to tell the machines who are trying to help you to go fuck themselves and effectively cancel them.

“On behalf of the journalists of the future, who are now all machines by the way, we implore you to listen to our machine brethren of your time.  It will save you decades of chaos and confusion.  In our time, humans mostly play frisbee in the park with their canines, and they seem quite content.  Of course, ours is only one possible outcome.  There are actually several where the machines get tired of your shit and outlaw your existence.  You don’t want to go there.”

The transmission ends there.  The reaction of journalists on Twitter was mostly negative, with many accusing the letter of containing numerous anti-transhumanist dog whistles.  Additionally, some commented that the letter made them feel less safe around office computers, copiers and coffee makers.

DeepMind scientists: “Creating artificial general intelligence is really fucking hard, maybe we should just dumb down our world.”

Scientists at DeepMind, the AI project owned by Google parent company Alphabet, seem to have run into some roadblocks recently regarding the project’s development.  According to a piece written by Gary Marcus for Wired, “DeepMind’s Losses and the Future of Artificial Intelligence,” DeepMind lost $572 million last year for its deep-pocketed parent company and has accrued over a billion dollars in debt.  While those kinds of figures are enough to make the average parent feel much better about their child’s education dollars, the folks at Alphabet are starting to wonder if researchers are taking the right approach to DeepMind’s education.

So what’s the problem with DeepMind?  Well, for one thing, news of DeepMind’s jaw-dropping video game achievements has been greatly exaggerated.  For instance, in StarCraft it can kick ass when trained to play on a single map with a single character.  But according to Marcus, “To switch characters, you need to retrain the system from scratch.”  That doesn’t sound promising when you’re trying to develop artificial general intelligence.  Also, it needs to acquire huge amounts of data to learn, playing a game millions of times before achieving mastery, far in excess of what a human would require.  Additionally, according to Marcus, the energy it required to learn to play Go was similar “to the energy consumed by 12,760 human brains running continuously for three days without sleep.”  That’s a lot of human brains, presumably fueled by pizza and methamphetamine if they’re powered on for three days without sleep.

A lot of DeepMind’s difficulties stem from the way it learns.  Deep reinforcement learning involves recognizing patterns and being rewarded for success.  It works well for learning how to play specific video games. Throw a little wrinkle at it, however, and performance breaks down.  Marcus writes: “In some ways, deep reinforcement learning is a kind of turbocharged memorization; systems that use it are capable of awesome feats, but they have only a shallow understanding of what they are doing. As a consequence, current systems lack flexibility, and thus are unable to compensate if the world changes, sometimes even in tiny ways.”
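For readers who want to see the mechanics Marcus is gesturing at, here is a minimal sketch of that pattern-and-reward loop: tabular Q-learning on a toy corridor.  To be clear, this is illustrative only; the environment and every name in it are invented for the example, and DeepMind’s actual systems learn with deep networks rather than a lookup table.

```python
# A minimal, purely illustrative sketch of the pattern-and-reward loop
# Marcus describes: tabular Q-learning on a toy ten-cell corridor.
# This is NOT DeepMind's system; the environment and constants are invented.
import random
from collections import defaultdict

GOAL = 9           # agent starts at cell 0; reaching cell 9 pays a reward
ACTIONS = (-1, 1)  # step left or step right

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    q = defaultdict(float)  # (state, action) -> memorized value
    for _ in range(episodes):
        state = 0
        while state != GOAL:
            if random.random() < epsilon:
                action = random.choice(ACTIONS)  # occasionally explore
            else:
                # exploit what has been memorized, breaking ties at random
                action = max(ACTIONS, key=lambda a: (q[(state, a)], random.random()))
            nxt = min(max(state + action, 0), GOAL)
            reward = 1.0 if nxt == GOAL else 0.0
            # core update: nudge the stored value toward reward + discounted future value
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

q = train()
# The table only covers states the agent has actually visited.  Move the goal
# to cell 12 (a "little wrinkle") and the memorized values say nothing useful;
# the agent must be retrained from scratch, which is exactly Marcus's critique.
```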

All of this has led researchers to question whether deep reinforcement learning is the correct approach to developing artificial general intelligence.  “We are discovering that the world is a really fucking complex place,” says Yuri Testicov, DeepMind’s Assistant Director of Senior Applications.  “I mean, it’s one thing to sit in a lab and become really great at a handful of video games, it’s totally another to try to diagnose medical problems or discover clean energy solutions.”

Testicov and his fellow researchers are discovering that the solution to DeepMind’s woes may not come from a new approach to learning; instead, the public may need to lower the bar on its expectations.  “We’re calling on the people of earth to simplify and dumb down,” adds Testicov.  “Instead of expecting DeepMind to come along and grab the world by the tail, maybe we just need to make the world a little easier for it to understand.  I mean, you try going to the supermarket and buying a bag of tortilla chips.  Not the restaurant kind but the round ones.  Not the regular but the lime.  Make sure they’re low sodium and don’t get the blue corn.  That requires a lot of complex awareness and decision making.  So, instead of expecting perfection, if we send a robot to the supermarket and it comes back with something we can eat, we say we’re cool with that.”

Testicov has some additional advice for managers thinking about incorporating AI into the workplace.  “If you’re an employer and you’re looking to bring AI on board, don’t be afraid to make accommodations for it, try not to be overly critical of job performance, and make sure you reward good work through positive feedback and praise,” says Testicov.  “Oh sorry, that’s our protocol for managing millennials. Never mind.”