Saturday, October 11, 2014



Why 'Frankenstein' Robots Could Be the Future of Artificial Intelligence



Written by Jordan Pearson

Michael Milford has spent more than a decade obsessing over the inner workings of the rat brain. Milford isn’t a neuroscientist or a zoologist, though. Rather, he’s an engineer and roboticist at Queensland University of Technology, where he’s designing robots that would make Dr. Frankenstein drool if he got his kicks from brain modelling and computer vision instead of reanimating the corpses of townspeople.
Milford’s latest project blends the navigational ability of a rat with the vision of a human. He’s dubbed this a “Frankenstein” approach to robotics in the media, and he believes such patchwork techniques are not only promising but practical ways forward for developing artificial intelligence. One reason is that scientists understand the rat brain fairly well, unlike the human brain.
“Rats represent a really beautiful balance,” Milford told me, “because they have a really sophisticated mammal brain, but we understand more about their brain than probably any other mammal, just because they’ve been studied so extensively.”
Digitally modelling rat brains allowed Milford and his team to design a robust robotic navigation system called RatSLAM. “Rat” is in the name for obvious reasons, and “SLAM” stands for simultaneous localization and mapping. RatSLAM reconstructs, in simplified form, the rat brain’s complex machinery of place cells, head direction cells, and conjunctive grid cells, which lets it tolerate a certain degree of environmental uncertainty during navigation.
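To get a feel for what those cell networks do, here’s a toy sketch in Python (not Milford’s actual code, and with invented parameters) of the attractor dynamics at their core: a bump of activity on a ring of head-direction-style cells encodes the robot’s heading, and turning the robot shifts the bump around the ring.

```python
import numpy as np

# Toy model, not RatSLAM itself: a 1D ring of "head direction" cells.
# A bump of activity encodes the current heading, and turning shifts
# the bump around the ring (path integration). Parameters are invented.
N = 72                    # one cell per 5 degrees of heading
activity = np.zeros(N)
activity[0] = 1.0         # bump starts at heading 0

# Local excitation kernel: each cell excites its neighbours, which
# keeps the bump stable and roughly Gaussian-shaped.
kernel = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2)
kernel /= kernel.sum()

def step(bump, turn_cells):
    """One update: shift the bump by the turn, then smooth and renormalise."""
    shifted = np.roll(bump, turn_cells)   # path integration
    wrapped = np.concatenate([shifted[-3:], shifted, shifted[:3]])
    smoothed = np.convolve(wrapped, kernel, mode="valid")
    return smoothed / smoothed.max()

for _ in range(18):       # turn 5 degrees per step; 18 steps = 90 degrees
    activity = step(activity, 1)

print("bump peak at cell", activity.argmax())   # expect cell 18
```

The same idea extends to more dimensions, which is roughly how RatSLAM tracks position and heading together rather than heading alone.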

Video: “Superhuman robot navigation with a Frankenstein model...and why I think it's a great idea”

While a digital rat brain made for a useful platform to model and navigate space, the catch was that rats have pretty shitty vision. Rats navigate using a number of different physical inputs, such as their whiskers, their sense of smell, and their poor eyesight. But the gold standard for robots, according to Milford, is cameras, which posed a problem for his team.
“Cameras are ubiquitous, they’ve come light years ahead in the last decade, so there’s a disconnect," Milford explained. "If we actually want to exploit the advantages of all of this really cool technology, you have to look for a natural analogue which does have highly capable visual sensors, so that’s why we moved to humans.”
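The camera side of systems like this can be surprisingly lightweight. As an illustrative sketch, rather than Milford’s actual pipeline, here’s the general flavour of appearance-based place recognition: shrink each frame to a tiny grayscale template and match new frames against stored ones by sum of absolute differences. The sizes and threshold below are invented for the example.

```python
import numpy as np

# Illustrative appearance-based place recognition, in the spirit of
# RatSLAM-style "local view" matching: each place is remembered as a
# tiny grayscale template, and new frames are matched by sum of
# absolute differences (SAD). Sizes and threshold below are invented.
TEMPLATE_SHAPE = (8, 16)   # deliberately low resolution: cheap to compare
MATCH_THRESHOLD = 0.1      # mean absolute difference, pixel values in [0, 1]

templates = []             # stored views, one per learned place

def to_template(frame):
    """Downsample a grayscale frame (2D array in [0, 1]) by block averaging."""
    h, w = TEMPLATE_SHAPE
    H, W = frame.shape
    cropped = frame[:H - H % h, :W - W % w]
    return cropped.reshape(h, cropped.shape[0] // h,
                           w, cropped.shape[1] // w).mean(axis=(1, 3))

def recognise(frame):
    """Return the index of the matching stored view, learning it if new."""
    t = to_template(frame)
    for i, stored in enumerate(templates):
        if np.abs(stored - t).mean() < MATCH_THRESHOLD:
            return i       # seen this place before: a loop-closure cue
    templates.append(t)
    return len(templates) - 1

# Two noisy views of the same scene should resolve to the same place.
rng = np.random.default_rng(0)
scene = rng.random((120, 160))
assert recognise(scene) == recognise(scene + rng.normal(0, 0.01, scene.shape))
```

Recognising a previously seen view is the cue that lets a map correct the drift that accumulates from path integration alone.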
Milford’s patchwork approach may not be quite as sensational as the multi-million-dollar projects underway to digitally recreate the human brain, but the advantage his system has over the others, he says, is that it actually works. In 2008, he was able to map and navigate an entire suburb with RatSLAM. Just a couple of years later, a tiny robot outfitted with RatSLAM and a camera autonomously navigated an office building and delivered 1,000 packages over a two-week period, even charging itself by finding its own docking station.
“It’s partly a performance-based thing. We picked the best bits of what we know about nature,” Milford said. “It’s partly a pragmatic thing, in that we know a lot about mapping of space in the rat brain. Conversely, we know a lot about how the visual system in a human works. It’s pragmatic on multiple levels.”
Milford told me that his research is applicable in many areas of science and industry. After all, nearly every living thing on earth, from humans to rats, has to navigate space. It’s a ubiquitous task in nature, and the technology that needs to be developed to replicate it could be just as widely used.
Agriculture and infrastructure monitoring were just two possibilities Milford mentioned, and he’s already in talks with water utility officials in Australia to figure out how his technology could be of use. The most immediate application, however, will be in robotics and artificial intelligence.
“We’re trying to create a system where you can drop it onto any robot that may have different kinds of sensors—it may have a camera, it may have a laser, it might have whiskers, for instance—in any environment and have the robot learn very rapidly, by itself, how to navigate in that environment,” Milford said. “We’re looking to make a black box navigation system for robots, and it could be something to put on your phone, for example.”
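That drop-in black box doesn’t exist yet, but the architecture Milford describes implies something like a common sensor interface feeding a single navigation core. Here’s a purely speculative sketch, with hypothetical names throughout:

```python
from abc import ABC, abstractmethod

# Purely speculative sketch of the "black box" idea Milford describes:
# every sensor, whether camera, laser, or whiskers, reduces its raw
# reading to a flat feature vector, so the navigation core never needs
# to know which sensors a given robot carries. All names hypothetical.
class Sensor(ABC):
    @abstractmethod
    def features(self) -> list[float]:
        """Summarise the current reading as a flat feature vector."""

class Camera(Sensor):
    def features(self) -> list[float]:
        # Placeholder: a real version might return a downsampled
        # intensity profile of the current frame.
        return [0.0] * 16

class Whiskers(Sensor):
    def features(self) -> list[float]:
        # Placeholder: a real version might return per-whisker deflections.
        return [0.0] * 4

class NavigationCore:
    """Learns places from whatever feature vectors it is handed."""
    def __init__(self, sensors: list[Sensor]):
        self.sensors = sensors

    def observe(self) -> list[float]:
        # One flat observation vector, regardless of the sensor mix.
        return [x for s in self.sensors for x in s.features()]

robot = NavigationCore([Camera(), Whiskers()])   # mix and match per robot
print(len(robot.observe()))                      # 20 features
```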
Milford describes these outcomes as “blue sky” thinking, and a portable rat brain on your phone that can help you find the nearest open liquor store on a Friday night really does seem pretty out there. But it’s not implausible, given the progress that Milford and his team have already made with rat-human “Frankenstein” robotics.
Prominent roboticists and thinkers like Selmer Bringsjord, who developed the Lovelace Test for artificial intelligence alongside David Ferrucci, the designer of IBM’s Watson, are pessimistic about the prospects of artificial neural networks and digital human brain modelling for reaching general machine intelligence. According to Bringsjord, robotics researchers should stick to practical approaches short of recreating human biology with computers.
RatSLAM and its associated projects, like human-inspired computer vision, certainly fit within Bringsjord’s conception of where robotics research should be going, and Milford largely agrees with the sentiment.
“I believe that you need a whole portfolio of science going on, all the way from the European Brain Project, all these hundreds of millions of dollars projects that are literally creating the amount of cells that we have in the brain, turning it on, and seeing what happens,” Milford told me. “At the same time, I think we need more focused approaches, like the one we take."
"One of the things I firmly believe in is that we must test and ground whatever these abstracts theories we’re developing are in actual real-world conditions,” he said.

Video: RatSLAM: Using Models of Rodent Hippocampus for Robot Navigation https://www.youtube.com/watch?v=t2w6kYzTbr8

Not everyone is skeptical about the prospects of artificial intelligence on par with that of humans, however. Some people are downright terrified of the idea, like Nick Bostrom of the Future of Humanity Institute at Oxford University, who’s currently campaigning to raise awareness of the world-ending potential of artificial intelligence.
Others are equally scared, but for a more immediate reason: the effect of robots on the labour market. As robots capable of handling low-level tasks like answering phones more efficiently than humans enter the workforce, experts and laypeople alike worry that they’ll put people out of work.


Milford is in a particularly sensitive position when it comes to this aspect of the potential societal fallout of intelligent robots. After all, he’s explicitly trying to create artificial intelligence that works for industry, right now, with the tools and techniques currently available to scientists. This isn’t some far-off possibility for him; it’s already here.
“It’s amazing how much society could change over the next several decades because of artificial intelligence, without any new breakthroughs,” he said. “Just through good engineering, good marketing, and good commercialization of incremental advances in what we already have.”
You might be wondering, as I did, how Milford personally feels about his role in bringing artificial intelligence into the world. In my conversation with him, it was clear that he’s an intelligent guy who isn’t afraid to consider the ethical implications of his work even while aggressively looking for commercial applications.
So, I asked him how he felt. His answer was about as candid as one could imagine coming from someone so engaged in the search for machine intelligence through uncommon means. “I feel like it’s consistent with society and how society’s progressing, at least in Western countries now," he said. "That doesn’t necessarily make it right, but I’m reasonably comfortable with it.”
