
Artificial Intelligence, Machine Learning, and Neural Networks

(MIT Dome, Yu-Chih Ko)

 Innovation in the AI Era


What is Artificial Intelligence (AI)?

Artificial Intelligence: Knowledge Representation, Machine Learning, Data Mining, and Causal Discovery  

What is Artificial Intelligence (AI), exactly? The question may seem basic, but the answer is surprisingly complicated. The definition of AI is constantly evolving: what would have been considered AI in the past may not be considered AI today. In basic terms, AI is a broad area of computer science that makes machines seem as if they have human intelligence - the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.

Essentially, AI is the wider concept of machines being able to carry out tasks in a way that could be considered “smart”. In the broadest sense, AI refers to machines that can learn, reason, and act for themselves. They can make their own decisions when faced with new situations, in the same way that humans and animals can. If a machine can solve problems, complete a task, or exhibit other cognitive functions that humans can, then we refer to it as having artificial intelligence.

Every few decades, a technological development leads us to believe that artificial general intelligence (strong AI), the form of AI that can think and decide like humans, is just around the corner. However, every time we have thought we were closing in on strong AI, we have been disappointed. We are currently in the full heat of one such cycle, thanks to machine learning (and deep learning), the technologies at the heart of AI developments in recent years.

As it currently stands, the vast majority of the AI advancements and applications you hear about involve a category of algorithms known as machine learning. Machine learning - along with deep learning, natural language processing, and cognitive computing - is driving innovations in image recognition, personalized marketing campaigns, genomics, and self-driving car navigation. Machine learning is the basis of many major breakthroughs, including facial recognition, hyper-realistic photo and voice synthesis, and AlphaGo (the program that beat the best human player at the complex game of Go).

AI has exploded over the past few years, especially since 2015. Much of that has to do with the wide availability of GPUs that make parallel processing ever faster, cheaper, and more powerful. It also has to do with the simultaneous one-two punch of practically infinite storage and a flood of data of every stripe (the whole Big Data movement) - images, text, transactions, mapping data, you name it.


The Goal of Artificial Intelligence (AI)


Rather than pursuing increasingly complex calculations, work in the field of AI has concentrated on mimicking human decision-making processes and carrying out tasks in ever more human ways.

"The goal of AI is to understand intelligence by constructing computational models of intelligent behavior. This entails developing and testing falsifiable algorithmic theories of (aspects of) intelligent behavior, including sensing, representation, reasoning, learning, decision-making, communication, coordination, action, and interaction. AI is also concerned with the engineering of systems that exhibit intelligence." -- (The Pennsylvania State University

AI is an interdisciplinary field, and several other disciplines have contributed to its progress, including mathematics, economics, linguistics, neuroscience, control theory and cybernetics, psychology, computer engineering, and philosophy. Given how broad the field is, it is not possible to cover every single contribution; instead, let's go through the main concepts introduced by each of these disciplines.

  • Unsurprisingly, philosophers were the first contributors to AI. They formulated ideas for it, starting with Aristotle in the fourth century BC. They considered the mind a machine, with our physical system operating according to a set of logical rules. Several movements in philosophy shaped this view, including rationalism, dualism, materialism, empiricism, and induction.
  • Mathematicians provided the tools to formalize and manipulate logic. They worked out the details of propositional logic and first-order logic, and laid the groundwork for algorithms of logical deduction that draw valid conclusions. Finally, mathematicians contributed the theory of probability, which is invaluable for dealing with uncertainty in the real world.
  • Economists provided the formal theory of rational decisions: maximizing what they call payoff or utility. They combined decision theory and probability theory for decision making under uncertainty. They also addressed game theory, in which an agent plans to maximize its utility in the presence of an opponent planning against it. Economists also formalized Markov decision processes, a class of sequential decision problems with the Markov property.
  • Neuroscience contributed to AI by addressing how the brain functions and how brains and computers are similar or dissimilar. Good progress has been made in understanding how the brain works, and we can expect neuroscience to play an even larger role in AI in the coming decades.
  • Psychologists care about how we think and act. Cognitive psychology specifically views the brain as an information-processing machine, and it led to the development of the field of cognitive science.
  • Computer engineering cares about how to build machines powerful enough to make AI possible. For example, although the idea of self-driving cars has been around for decades, it has only become feasible today thanks to advances in computer engineering.
  • Control theory and cybernetics aim to design simple optimal agents that receive feedback from the environment. Modern control theory designs systems that maximize an objective function over time, which makes AI and control theory closer disciplines than ever.
  • Linguistics is concerned with how language and thinking are related. Today, modern linguistics and AI together form what we call computational linguistics, or natural language processing, a key component of natural language understanding in AI.
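The Markov decision processes mentioned in the economics bullet above can be made concrete with a short sketch. This is a toy illustration only: the states, actions, rewards, and discount factor are all invented for this example, and value iteration simply repeats the Bellman update until the state values stop changing.

```python
# Value iteration on a toy 2-state Markov decision process (all numbers
# invented for illustration). The Markov property means the next state
# depends only on the current state and the chosen action.

def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-6):
    """Return optimal state values V(s) by iterating the Bellman update."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Best achievable value: immediate reward plus discounted
            # expected value of the next state, maximized over actions.
            best = max(
                R[(s, a)] + gamma * sum(p * V[s2] for s2, p in P[(s, a)].items())
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Toy problem: "work" earns reward (more when already busy), "rest" earns 0.
states = ["idle", "busy"]
actions = ["work", "rest"]
P = {  # transition probabilities over next states
    ("idle", "work"): {"busy": 1.0},
    ("idle", "rest"): {"idle": 1.0},
    ("busy", "work"): {"busy": 1.0},
    ("busy", "rest"): {"idle": 1.0},
}
R = {("idle", "work"): 1.0, ("idle", "rest"): 0.0,
     ("busy", "work"): 2.0, ("busy", "rest"): 0.0}
V = value_iteration(states, actions, P, R)
```

With a discount factor of 0.9, always working from "busy" yields a value of 2 / (1 - 0.9) = 20, and "idle" is worth one step less.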


The Foundation of AI

(AI Algorithm Flowchart, Karen Hao, MIT)


Artificial Intelligences (AI) – devices designed to act intelligently – are often classified into one of two fundamental groups: applied or general. 

  • Applied AI is far more common – systems designed to intelligently trade stocks and shares, or maneuver an autonomous vehicle would fall into this category. 
  • Generalized AIs – systems or devices which can in theory handle any task – are less common, but this is where some of the most exciting advancements are happening today. It is also the area that has led to the development of Machine Learning. Although machine learning is often referred to as a subset of AI, it is really more accurate to think of it as the current state of the art. 

The grand idea is to develop something resembling human intelligence, which is often referred to as “artificial general intelligence,” or “AGI.” Some experts believe that machine learning and deep learning will eventually get us to AGI with enough data, but most would agree there are big missing pieces and it’s still a long way off. AI may have mastered Go, but in other ways it is still much dumber than a toddler.

In that sense, AI is also aspirational, and its definition is constantly evolving. What would have been considered AI in the past may not be considered AI today. Because of this, the boundaries of AI can get confusing, and the term often gets stretched to include any kind of algorithm or computer program. To clear things up, you can use the flowchart above to work out whether something is using AI or not.


The Rise of Machine Learning

Machine Learning is a current application of AI. It is based on the idea that we should be able to give machines access to data and let them learn for themselves. Machine learning is a technique in which we train a software model using data: the model learns from the training cases, and we can then use the trained model to make predictions for new data cases.
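The train-then-predict workflow just described can be sketched in a few lines. This is a minimal illustration, not a production method: it fits a one-variable linear model by ordinary least squares to made-up training cases, then uses the trained model to predict a label for a new, unseen case (the data and variable names are invented for this example).

```python
# Minimal "train on data, then predict new cases" sketch: ordinary
# least-squares fit of a one-variable linear model y = w*x + b.

def train(xs, ys):
    """Learn slope w and intercept b from the training cases."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form least squares: covariance over variance.
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

def predict(model, x):
    """Apply the trained model to a new input."""
    w, b = model
    return w * x + b

# Training cases (made-up numbers): hours studied -> exam score.
xs = [1, 2, 3, 4, 5]
ys = [52, 55, 61, 64, 68]
model = train(xs, ys)
score = predict(model, 6)  # prediction for an unseen case
```

Real systems replace this closed-form fit with iterative optimization over far richer models, but the shape of the workflow - fit on training cases, predict on new ones - is the same.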
Machine learning provides the foundation for Artificial Intelligence (AI). Two important breakthroughs led to the emergence of Machine Learning as the vehicle which is driving AI development forward with the speed it currently has. One of these was the realization that rather than teaching computers everything they need to know about the world and how to carry out tasks, it might be possible to teach them to learn for themselves. The second was the emergence of the Internet, and the huge increase in the amount of digital information being generated, stored, and made available for analysis.

Once these innovations were in place, engineers realized that rather than teaching computers and machines how to do everything, it would be far more efficient to code them to think like human beings, and then plug them into the Internet to give them access to all of the information in the world.
Machine learning is concerned with the scientific study, exploration, design, analysis, and applications of algorithms that learn concepts, predictive models, behaviors, action policies, etc. from observation, inference, and experimentation and the characterization of the precise conditions under which classes of concepts and behaviors are learnable. Learning algorithms can also be used to model aspects of human and animal learning. Machine learning integrates and builds on advances in algorithms and data structures, statistical inference, information theory, signal processing as well as insights drawn from neural, behavioral, and cognitive sciences. 
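One classic example of a "precise condition under which a class of concepts is learnable" is the PAC (probably approximately correct) sample-complexity bound for a finite hypothesis class H: a learner that outputs any hypothesis consistent with the training data needs only

```latex
% PAC sample-complexity bound for a finite hypothesis class H:
% m i.i.d. examples suffice for a consistent learner to output,
% with probability at least 1 - \delta, a hypothesis whose true
% error is at most \epsilon.
m \ge \frac{1}{\epsilon}\left(\ln\lvert H\rvert + \ln\frac{1}{\delta}\right)
```

examples to be probably (with confidence 1 − δ) approximately (to accuracy ε) correct.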

Deep Learning and Deep Neural Networks

Deep Learning is a subset of Machine Learning which deals with deep neural networks. It is based on a set of algorithms that attempt to model high-level abstractions in data by using multiple processing layers, with complex structures or otherwise, composed of multiple non-linear transformations.
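The "multiple processing layers composed of non-linear transformations" can be shown in a tiny forward pass. This is a bare-bones sketch with hand-picked (untrained) weights, purely to illustrate the layered structure: each layer applies a linear transformation followed by a non-linear activation, and stacking several such layers is what makes the network "deep".

```python
# Forward pass through a tiny deep network (weights are illustrative,
# not trained). Each layer: weighted sum + bias, then a non-linearity.

def relu(v):
    """Rectified linear unit: the non-linear transformation."""
    return [max(0.0, x) for x in v]

def layer(weights, bias, inputs):
    """One fully connected layer: linear map followed by ReLU."""
    return relu([
        sum(w * x for w, x in zip(row, inputs)) + b
        for row, b in zip(weights, bias)
    ])

def forward(network, inputs):
    """Pass the inputs through every (weights, bias) layer in turn."""
    for weights, bias in network:
        inputs = layer(weights, bias, inputs)
    return inputs

# Two stacked layers: 2 inputs -> 2 hidden units -> 1 output.
network = [
    ([[0.5, -0.2], [0.1, 0.4]], [0.0, 0.1]),  # hidden layer
    ([[1.0, 1.0]], [0.0]),                    # output layer
]
out = forward(network, [1.0, 2.0])
```

Training a real deep network means adjusting all those weights and biases by gradient descent; the forward pass above is the part that stays the same at prediction time.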



[More to come ...]
