Google's Hinton Outlines New AI Advance That Requires Less Data

Google's Geoffrey Hinton, an artificial intelligence pioneer, on Thursday outlined an advance in the technology that improves the rate at which computers correctly identify images while relying on less data.

Hinton, an academic whose previous work on artificial neural networks is considered foundational to the commercialisation of machine learning, detailed the approach, known as capsule networks, in two research papers posted anonymously on academic websites last week.

The approach could mean computers learn to identify a face in a photograph taken from an angle different from any in their bank of known images. It could also be applied to speech and video recognition.

"This is a much more robust way of identifying objects," Hinton told attendees at the Go North technology summit hosted by Alphabet Inc's Google, detailing proof of a thesis he had first theorised in 1979.

In the work with Google researchers Sara Sabour and Nicholas Frosst, individual capsules - small groups of virtual neurons - were instructed to identify parts of a larger whole and the fixed relationships between them.

The system then confirmed whether those same features were present in images the system had never seen before.
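
As a rough illustration only, and not the authors' released code: the "routing-by-agreement" mechanism described in the capsule-network papers can be sketched in a few lines of NumPy, in which lower-level "part" capsules each predict the pose of a candidate whole, and the whole is activated only when those predictions agree. All array shapes and values below are invented for the toy example.

    import numpy as np

    def squash(s, eps=1e-8):
        # Shrink a vector so its length lies in [0, 1) while keeping its direction.
        norm_sq = np.sum(s ** 2, axis=-1, keepdims=True)
        return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

    def route_by_agreement(u_hat, num_iters=3):
        # u_hat[i, j] is part-capsule i's predicted pose vector for whole-capsule j,
        # with shape (num_parts, num_wholes, pose_dim). Wholes whose incoming
        # predictions agree end up with long (confident) output vectors.
        b = np.zeros(u_hat.shape[:2])                              # routing logits
        for _ in range(num_iters):
            c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)   # coupling coefficients
            s = np.einsum('ij,ijd->jd', c, u_hat)                  # weighted votes per whole
            v = squash(s)                                           # candidate whole's pose
            b = b + np.einsum('ijd,jd->ij', u_hat, v)               # reward agreement
        return v

    # Toy example: 3 part capsules voting on 2 candidate wholes in a 4-D pose space.
    agreeing = np.tile([1.0, 0.5, -0.5, 2.0], (3, 1))   # all parts agree on whole 0
    conflicting = np.array([
        [ 1.0,  0.0, 0.0, 0.0],
        [-1.0,  0.5, 0.0, 0.0],
        [ 0.0, -0.5, 0.0, 0.0],
    ])                                                   # votes for whole 1 cancel out
    u_hat = np.stack([agreeing, conflicting], axis=1)    # shape (3, 2, 4)
    v = route_by_agreement(u_hat)
    print(np.linalg.norm(v, axis=-1))  # whole 0 gets a long vector; whole 1 stays near zero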

Artificial neural networks mimic the behaviour of neurons to enable computers to operate more like the human brain.

Hinton said early testing of the technique had come up with half the errors of current image recognition techniques.

The bundling of neurons working together to determine both whether a feature is present and its characteristics also means the system should require less data to make its predictions.
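
As a rough sketch of that bundling, not the authors' implementation: in the published capsule formulation each capsule outputs a vector, and a "squashing" nonlinearity keeps the vector's length below 1 so it can be read as the probability that the feature is present, while the vector's direction carries the feature's characteristics. The values below are invented for illustration.

    import numpy as np

    def squash(s, eps=1e-8):
        # Keep the vector's direction but shrink its length into [0, 1).
        norm_sq = np.sum(s ** 2, axis=-1, keepdims=True)
        return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

    # Two capsule outputs before squashing: one strong and one weak activation.
    raw = np.array([
        [3.0, 0.0, 1.0, 2.0],     # strongly activated capsule
        [0.1, 0.05, 0.0, 0.02],   # weakly activated capsule
    ])
    out = squash(raw)
    print(np.linalg.norm(out, axis=-1))                       # presence probabilities, roughly [0.93, 0.01]
    print(out / np.linalg.norm(out, axis=-1, keepdims=True))  # directions: the feature's characteristics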

"The hope is that maybe we might require less data to learn good classifiers of objects, because they have this ability of generalizing to unseen perspectives or configurations of images," said Hugo Larochelle, who heads Google Brain's research efforts in Montreal.

"That's a big problem right now that machine learning and deep learning needs to address, these methods right now require a lot of data to work," he said.

Hinton likened the advance to speech-recognition work two of his students developed in 2009 using neural networks, which improved on existing technology and was incorporated into the Android operating system in 2012.

Still, he cautioned it was early days.

"This is just a theory," he said. "It worked quite impressively on a small dataset" but now needs to be tested on larger datasets, he added.

Peer review of the findings is expected in December.

© Thomson Reuters 2017
