Carnegie Mellon Computer Can Teach Itself Common Sense

Susanne Posel
Occupy Corporatism
November 23, 2013

Carnegie Mellon University (CMU) has created the Never Ending Image Learner (NEIL), a program that runs 24 hours a day, scouring the internet for pictures in order to learn to identify and label objects and the associations between them.

Funding for this project was provided by the Office of Naval Research (ONR) and Google, Inc.

NEIL has learned to recognize:

• A bookshelf
• A Chevy Nova
• A sea-spider
• A sportsperson
• A boy
• A piece of paper
• A shark
• A television

The computer understands that cars have wheels and that trading floors can be crowded. However, NEIL is confused by the fact that "pink" can be both a color and the name of a singer.

Astonishingly, NEIL has accumulated this common-sense information by analyzing three million images, "identifying 1,500 types of objects in half a million images and 1,200 types of scenes in hundreds of thousands of images."

CMU proposed that NEIL would become a vast database of objects, scenes, attributes and contextual relationships, so that the computer could learn to understand the information conveyed by an image.
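The article does not describe NEIL's internals, but the idea of mining contextual relationships (such as "cars have wheels") can be illustrated with a toy sketch: count how often labels co-occur across images and score candidate relationships by co-occurrence frequency. The label sets and the `relation_score` helper below are hypothetical, for illustration only, and are not NEIL's actual method.

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-image label sets, standing in for detections
# a system like NEIL might produce on downloaded pictures.
images = [
    {"car", "wheel", "road"},
    {"car", "wheel", "garage"},
    {"trading floor", "crowd"},
    {"car", "road"},
]

label_counts = Counter()   # how many images contain each label
pair_counts = Counter()    # how many images contain each label pair
for labels in images:
    label_counts.update(labels)
    pair_counts.update(frozenset(p) for p in combinations(sorted(labels), 2))

def relation_score(a, b):
    """Fraction of images containing label `a` that also contain label `b`."""
    return pair_counts[frozenset((a, b))] / label_counts[a]

# "car" co-occurs with "wheel" in 2 of the 3 car images.
print(relation_score("car", "wheel"))
```

A high score suggests a candidate common-sense relationship ("cars have wheels"); in a real system such statistics would be gathered over millions of images and filtered by humans, as the researchers describe below.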

Abhinav Gupta, assistant research professor at the Carnegie Mellon Robotics Institute (CMRI), commented: "Images are the best way to learn visual properties. Images also include a lot of common sense information about the world. People learn this by themselves and, with NEIL, we hope that computers will do so as well."

Gupta points out that this experiment will make it easier for humans to teach computers in the future.

Abhinav Shrivastava, a student at CMRI, said that NEIL "can sometimes make erroneous assumptions that compound mistakes, so people need to be part of the process. People don't always know how or what to teach computers. But humans are good at telling computers when they are wrong."

Grady Booch, an IBM Fellow, stated that sentient machines that have "self-awareness, the ability to set goals, and a sense of creativity" were "inevitable".

Booch said: “If we don’t achieve that degree of sentience, I believe we’re very close to achieving the illusion of sentience whereby we are in a place where we’ll, on a large-scale basis, have to interact with these things.”

The IT guru asserts that “we’re building a generation of autonomous devices that kill . . . these systems are equipped with intelligence to distinguish between legitimate targets and what not to target. We are slowly surrendering our intelligence, our choice, our responsibility, to devices such as this.”

Booch ends with the hope that humankind can “co-evolve” with these intelligent machines.
