We all know Google has been a leader in machine learning and artificial intelligence research. The clearest evidence is Google Search itself.
Google Search is still the most popular and most powerful search engine ever built, and will probably remain so for years to come. It consistently returns remarkably accurate results, and it feels like magic.
In the latest news, Google has set a new landmark with software that learned to recognize cats, people, and other things simply by watching YouTube videos. The technology is designed on the same fundamentals as the human brain: it mimics the way human brain cells learn. The result is smarter products that benefit from self-learning, with speech recognition being the best example.
Google’s machine learning is pretty amazing at its core. It is based on simulating groups of connected brain cells that communicate and influence one another. When such a neural network is exposed to data, the relationships between different neurons change, and this gives the whole network the capacity to respond to incoming queries. It learns a particular kind of pattern and so prepares itself for similar data when queried.
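That core idea can be sketched with a toy example (entirely hypothetical, and vastly simpler than Google's actual systems): a single artificial neuron whose connection weights change as it is exposed to data, until it has learned a pattern.

```python
# Toy artificial neuron (a perceptron): connection weights are
# adjusted as the "network" is exposed to data, so it learns a pattern.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # connection strengths
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
            err = target - out
            # strengthen or weaken connections based on the error
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b    += lr * err
    return w, b

# Learn a simple pattern: logical AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
```

After training, `predict(1, 1)` returns 1 and the other inputs return 0: the weights have settled into a configuration that responds correctly to the pattern it was shown.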
Neural networks, which have been around for decades, are a specific and important part of machine learning. They help a computer make decisions with the data it has, predicting the future in terms of past events. The best example is chess-playing software, which makes independent decisions about the optimal moves to win the game. Another example is voice recognition: the software learns a user’s voice and, over time, learns to understand that user.
Google’s engineers have found ways to put more computing power behind the approach than was previously possible, creating neural networks that can learn without human assistance. Fortunately, these are robust enough to be used commercially.
To give an idea of what this neural network is capable of, consider an example. The network learns for itself which data to pay attention to and which patterns to match, rather than having humans decide that. It can identify the shapes of objects and distinguish between, say, a cat and a human face, even a face with a lot of beard.
Teaching these skills explicitly would require very complex algorithms. Even something as simple as identifying the shape of an object can get complicated and demand a lot of intricate programming.
Google products are able to learn and improve over time, by themselves. A good example is Google Image search, where machine learning enables the system to identify a contextual description of an image, and this has improved steadily over time.
In June, Google engineers published results of an experiment that threw 10 million images grabbed from YouTube videos at their simulated brain cells, running 16,000 processors across a thousand computers for 10 days without pause. That experiment first brought Google’s machine learning into the public spotlight.
“Most people keep their model in a single machine, but we wanted to experiment with very large neural networks,” says Jeff Dean, a research engineering lead at Google. “If you scale up both the size of the model and the amount of data you train it with, you can learn finer distinctions or more complex features. These models can typically take a lot more context.” If, for example, Google’s system thought it heard someone say “I’m going to eat a lychee,” but the last word was slightly muffled, it could confirm its hunch based on past experience of phrases, because “lychee” is a fruit used in the same contexts as “apple” or “orange.”
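The lychee example can be illustrated with a toy sketch (hypothetical data and code, not Google's model): count which words have followed a given context in past phrases, and use those counts to confirm a muffled word.

```python
# Toy context-based disambiguation: pick the candidate word
# best supported by how often it followed "eat a" in past phrases.
from collections import Counter

past_phrases = [
    "i like to eat a lychee", "she will eat a lychee",
    "eat a lychee every day", "buy a car", "drive a car",
]

# Count how often each word follows the two-word context "eat a"
follows = Counter()
for phrase in past_phrases:
    words = phrase.split()
    for i in range(len(words) - 2):
        if words[i] == "eat" and words[i + 1] == "a":
            follows[words[i + 2]] += 1

def confirm(candidates):
    # Choose the candidate most common in this context
    return max(candidates, key=lambda w: follows[w])
```

Given the muffled alternatives `["lychee", "leash"]`, `confirm` returns `"lychee"`, because past experience shows it is the word that appears in this context. Real systems model far richer context, but the principle is the same.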
Dean says his team is also testing models that understand both images and text together. “You give it ‘porpoise’ and it gives you pictures of porpoises,” he says. “If you give it a picture of a porpoise, it gives you ‘porpoise’ as a word.”
Neuroscientists have discussed the possibility of what they call the “grandmother neuron,” specialized cells in the brain that fire when they are exposed repeatedly, or “trained,” to recognize the face of a particular individual.
“You learn to identify a friend through repetition,” said Gary Bradski, a neuroscientist at Industrial Perception, in Palo Alto, Calif.
Similar techniques could help Google’s self-driving cars understand their surroundings by combining the many streams of data they collect, from laser scans of nearby obstacles to information from the car’s engine.
Similarities to the Human Brain
Google’s neural networks operate in ways similar to what neuroscientists know about the visual cortex in mammals, the part of the brain that processes visual information, says Bengio. “It turns out that the feature learning networks being used by Google are similar to the methods used by the brain that are able to discover objects that exist.”
The human brain is far more complex than we yet understand. Google’s neural networks are much smaller than the brain, and they cannot do many things necessary for intelligence, such as reasoning with information collected from the outside world.
Dean is also careful not to imply that the limited intelligences he’s building are close to matching any biological brain. But he can’t resist pointing out that if you pick the right contest, Google’s neural networks have humans beat.
“We are seeing better than human-level performance in some visual tasks,” he says, giving the example of labeling where house numbers appear in photos taken by Google’s Street View cars, a job that used to be farmed out to many humans.
“They’re starting to use neural nets to decide whether a patch [in an image] is a house number or not,” says Dean, and they turn out to perform better than humans. It’s a small victory, but one that highlights how far artificial neural nets are behind the ones in your head. “It’s probably that it’s not very exciting, and a computer never gets tired,” says Dean. It takes real intelligence to get bored.
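The patch-level decision Dean describes can be caricatured with a tiny nearest-neighbour sketch (entirely hypothetical; the real system uses trained neural nets on real Street View imagery). It compares a small binary patch against known digit-like and background-like examples and decides which it resembles more.

```python
# Toy "is this patch a house number?" decision, using
# nearest-neighbour distance to hand-made 3x3 binary patches.
digit_patches = [
    (1, 1, 1, 1, 0, 1, 1, 1, 1),   # crude "0"
    (0, 1, 0, 0, 1, 0, 0, 1, 0),   # crude "1"
]
background_patches = [
    (0, 0, 0, 0, 0, 0, 0, 0, 0),   # blank wall
    (1, 1, 1, 1, 1, 1, 1, 1, 1),   # solid texture
]

def hamming(a, b):
    # Number of pixels where the two patches differ
    return sum(x != y for x, y in zip(a, b))

def is_house_number(patch):
    d_digit = min(hamming(patch, p) for p in digit_patches)
    d_bg = min(hamming(patch, p) for p in background_patches)
    return d_digit < d_bg
```

A patch close to the crude “0” is classified as a house number, while a blank patch is not. Real neural nets learn far richer features than raw pixel distance, which is exactly why they outperform this kind of hand-built rule, and the bored humans Dean mentions.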
It remains to be seen where all this ultimately leads. So far, we have reached levels only sci-fi movies had imagined. The future will be damn interesting. Are you ready?