Computers Capable Of Human-like Reasoning

Humans have been using computers in their day-to-day lives for decades now. From smartphones to laptops and desktops, they have become a staple in most homes, evolving from the luxuries they once were into everyday necessities. And with the many advancements we now have, computers that can reason like humans are fast becoming a reality rather than just a figment of our imagination. We already use early versions of this technology daily, often without knowing it, although these earlier systems are quite basic and do nothing mind-blowing. Take Amazon’s Alexa and Apple’s Siri, for example. If you own one of these devices, you can talk to it, ask it questions, and it actually answers back. Cool, right?

However, no matter how interactive they may already be, they still lack the critical reasoning that humans are capable of. They are programmed to answer certain questions but can do nothing beyond what has already been built into their systems. The human mind is a complex thing, and attempting to replicate its capacity to think is no easy feat. Still, there are experts in the field willing to take on the challenge of overcoming the many limitations that keep computers from going the extra mile.

Now, researchers at Google’s DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test.

Humans are generally pretty good at relational reasoning, a kind of thinking that uses logic to connect and compare places, sequences, and other entities. But the two main types of AI—statistical and symbolic—have been slow to develop similar capacities. Statistical AI, or machine learning, is great at pattern recognition, but not at using logic. And symbolic AI can reason about relationships using predetermined rules, but it’s not great at learning on the fly.

The new study proposes a way to bridge the gap: an artificial neural network for relational reasoning. Similar to the way neurons are connected in the brain, neural nets stitch together tiny programs that collaboratively find patterns in data. They can have specialized architectures for processing images, parsing language, or even learning games. In this case, the new “relation network” is wired to compare every pair of objects in a scenario individually. “We’re explicitly forcing the network to discover the relationships that exist between the objects,” says Timothy Lillicrap, a computer scientist at DeepMind in London who co-authored the paper.

(Via: http://www.sciencemag.org/news/2017/06/computers-are-starting-reason-humans)
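The pairwise comparison the excerpt describes can be sketched in a few lines. The snippet below is a toy illustration, not DeepMind’s actual code: the object features, layer sizes, and random weights are all made-up placeholders, and a real relation network learns its weights from data. It only shows the core wiring: a small network g applied to every pair of objects, with the summed result passed to a readout network f.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, w2):
    # Tiny two-layer perceptron with a ReLU, used here for both g and f.
    return np.maximum(x @ w1, 0) @ w2

# Hypothetical setup: 5 "objects", each a 4-dimensional feature vector
# (in the paper these come from a CNN or LSTM; here they are random).
objects = rng.normal(size=(5, 4))

# Random placeholder weights for the pairwise function g and the readout f.
g_w1, g_w2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 6))
f_w1, f_w2 = rng.normal(size=(6, 12)), rng.normal(size=(12, 2))

# Core relation-network idea: apply g to EVERY ordered pair of objects,
# sum the results, then pass that sum through f.
pair_sum = sum(
    mlp(np.concatenate([objects[i], objects[j]]), g_w1, g_w2)
    for i in range(len(objects))
    for j in range(len(objects))
)
output = mlp(pair_sum, f_w1, f_w2)
print(output.shape)  # (2,)
```

In the paper, the objects come from a network processing an image or text and the whole stack is trained end to end; the point of the sketch is simply that the architecture forces every pair of objects through the same comparison function, which is what “explicitly forcing the network to discover the relationships” refers to.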

Basically, can you really teach a computer common sense when many people don’t seem to have it themselves? Oh, the irony. But the premise here is to give computers reasoning highly similar to that of humans, a goal that may become achievable by tweaking certain algorithms. Humans have been tinkering with artificial intelligence for some 55 years already, yet experts are still far from reaching their goals.

Facebook uses natural language processing capabilities to make sense of the millions of largely text-based communications that go through its platforms on a daily basis.

In 2016, the social media giant announced DeepText, a deep learning-based algorithm that can understand several thousand posts per second, in more than 20 languages, with “near-human accuracy.” While the algorithm is still in the testing phase, the company says that it will help show users more relevant content, and at the same time filter out spam.

But DeepText can also perform more complex tasks, such as intelligent search within Messenger, Facebook’s chat programme.

Typing “I need a ride” or “Let’s take a ride there” prompts DeepText to offer the option to book a taxi; on the other hand, the algorithm is smart enough not to offer this option in response to phrases such as “I don’t need a ride” or “I like to ride donkeys.”

DeepText also learns from experience — the more posts and conversations it processes, the better it gets at doing so.

This capability, combined with new deep learning techniques such as the ones Dr Singh is developing, should help to move the field of natural language processing forward.

And thus one day, digital agents and robots might converse on the level seen in Hollywood movies.

(Via: https://www.tech.gov.sg/TechNews/Innovation/2017/05/The-Language-of-Deep-Learning)
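To see why the ride-booking examples above are harder than they look, consider a naive keyword approach. The sketch below is a hypothetical rule-based stand-in, nothing like Facebook’s deep-learning model: it hand-codes a negation check and a crude destination check just to show the kinds of distinctions (“I need a ride” vs. “I like to ride donkeys”) that DeepText has to learn from data instead.

```python
import re

def wants_ride(text):
    """Toy intent check: offer a taxi only for affirmative ride requests.

    A naive keyword-and-negation sketch, NOT DeepText; it only illustrates
    why the surface keyword "ride" alone is not enough to detect intent.
    """
    t = text.lower()
    if "ride" not in t:
        return False
    # Negated requests ("I don't need a ride") should not trigger the offer.
    if re.search(r"\b(don't|do not|no)\b", t):
        return False
    # Non-transport senses ("ride donkeys"): the word after "ride" is an
    # object rather than a destination like "there" or "home".
    if re.search(r"ride\s+\w+", t) and not re.search(r"ride\s+(there|home)\b", t):
        return False
    return True

print(wants_ride("I need a ride"))            # True
print(wants_ride("Let's take a ride there"))  # True
print(wants_ride("I don't need a ride"))      # False
print(wants_ride("I like to ride donkeys"))   # False
```

Hand-written rules like these break as soon as the phrasing changes, which is exactly why a model that learns from millions of real conversations, and keeps improving with experience, is the more promising route.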

A takeover of the world by artificial intelligence is still more likely to happen in the movies than in real life, at least for now. We may think of the language barrier as a problem between peoples, but it is much the same problem when trying to teach computers common sense. Computers simply lack the background knowledge humans have about the objects and ideas we associate with language. But that won’t stop experts from developing algorithms, with the help of thought vectors, to overcome this big obstacle.

However, as we aspire for greatness, we should not overlook the dangers ultra-smart computers pose to man’s very existence. Computers that can think for themselves the way humans do may throw off the balance we try so hard, and almost always fail, to maintain. We’ll just have to see in five to ten years whether our nightmares have come true or we are worrying ourselves for no reason at all. Maybe by then we will no longer have to fret over constant headaches like hard drive failure and data recovery, if the computers themselves can take care of these problems on their own.
