 

AI, Machine Learning, Deep Learning, and Neural Networks

[Image: MIT Ray and Maria Stata Center, Jenny Fowter]

 

Artificial Intelligence: Fueling the Next Wave of the Digital Era

 
 

- Overview

Artificial intelligence (AI) is a broad field of computer science focused on creating machines that can perform tasks that typically require human intelligence, such as learning, problem-solving, and perception. 

It encompasses various subfields like Machine Learning (ML), Natural Language Processing (NLP), and Knowledge Representation & Reasoning. 

Modern AI systems leverage vast amounts of data to learn and solve new problems, often mimicking human-like intelligence. 

AI is rapidly evolving and has the potential to transform various industries by enabling businesses to automate tasks, gain valuable insights from data, and create more personalized experiences.

1. Key Aspects of AI: 

  • Cognitive Tasks: AI aims to replicate cognitive functions like learning, reasoning, and problem-solving.
  • Data-Driven Learning: AI systems learn from data, enabling them to adapt and improve their performance over time.
  • Diverse Applications: AI is used in various applications, including natural language processing, image recognition, and robotics.
  • Sub-fields: AI includes specialized areas like Machine Learning, NLP, Knowledge Representation, and Reasoning.
  • Business Benefits: AI can optimize business processes, improve customer experience, and accelerate innovation.


2. Examples of AI in Action: 

  • Virtual assistants: Siri and Alexa use AI to understand and respond to voice commands.
  • Recommendation systems: Amazon uses AI to suggest products based on user behavior.
  • Customer service chatbots: AI-powered chatbots can handle customer queries and provide support.
  • Fraud detection: AI can analyze data to identify and prevent fraudulent activities.
  • Autonomous vehicles: Self-driving cars utilize AI for navigation and decision-making.

 

Please refer to the following for more information:

 

- The Internet of Sensing (IoS)

The Internet of Sensing (IoS) enhances human senses by extending them beyond the physical body, allowing for experiences through multiple senses like enhanced vision, hearing, touch, and smell. It leverages AI, AR, VR, 5G, and hyperautomation to create digital sensory experiences similar to the physical world. 

In contrast, the Internet of Things (IoT) connects the physical and digital worlds by using sensors to monitor physical objects and actuators to respond to changes. 

1. Key aspects of IoS: 

  • Augmented Senses: IoS aims to augment human senses, providing experiences beyond the limitations of our physical bodies.
  • Digital Sensory Experiences: It creates digital sensory experiences that mimic real-world interactions, enabling users to engage with digital content in new ways.
  • Enabling Technologies: IoS relies on advancements in AI, AR, VR, 5G, and hyperautomation.
  • Potential Applications: IoS has potential applications in various fields, including gaming, medical diagnosis, logistics, autonomous driving, language translation, and interactive personal assistance.


2. Key aspects of IoT: 

  • Physical-Digital Connection: IoT connects the physical world (objects) with the digital world (data and control).
  • Sensor-Actuator Interaction: Sensors monitor environmental changes, and actuators respond to these changes.
  • Examples of IoT Applications: IoT is used in smart homes, transportation systems, smart cities, and wearable devices.
  • Cost Reduction and Optimization: IoT can optimize processes, reduce costs, and improve efficiency in various industries through real-time monitoring and control.
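The sensor-actuator loop described above can be sketched in a few lines of Python. This is a toy illustration, not real device code; the temperature values, thresholds, and function names are all invented for the example.

```python
def read_temperature(sample):
    # Stand-in for a real sensor driver; here it just echoes a test value.
    return sample

def thermostat_step(reading, heater_on, low=18.0, high=22.0):
    # Actuator logic: turn the heater on below `low`, off above `high`,
    # and keep the current state in between (simple hysteresis).
    if reading < low:
        return True
    if reading > high:
        return False
    return heater_on

heater = False
for sample in [17.0, 19.5, 23.0, 20.0]:
    heater = thermostat_step(read_temperature(sample), heater)
    print(sample, "heater on" if heater else "heater off")
```

The same monitor-decide-act pattern underlies most IoT deployments, whatever the sensor or actuator happens to be.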

 

3. Relationship between IoS and IoT: 

  • Complementary Technologies: While IoT focuses on connecting physical objects and gathering data, IoS focuses on enhancing human sensory experiences.
  • Potential for Convergence: IoT can be a foundational technology for IoS, providing the data and connectivity that enables the sensory experiences.

 

- AI: The Science of Making Inanimate Objects Smart

Artificial intelligence (AI) is a technology that enables computers to mimic human behavior, encompassing a group of technologies that build computer models and systems able to perform cognitive functions such as reasoning and learning. 

AI aims to give machines human-like cognitive abilities, and it achieves this through various techniques, with machine learning playing a crucial role in enabling AI systems to learn, adapt, and solve problems based on data and experience.

1. How AI functions: 

  • Learning from Experience: AI software distinguishes itself from traditional pre-programmed software by its ability to learn from experience.
  • Mimicking Human Intelligence: AI doesn't necessarily mean giving machines human-like intelligence or consciousness, but rather enabling them to solve specific problems or classes of problems.
  • Data Analysis and Pattern Recognition: AI relies on computers to collect vast amounts of data about our everyday preferences, purchases, and activities. AI researchers use this data to train machine learning (ML) models that predict what we want or dislike.
  • Problem-Solving Skills: AI helps solve problems by performing tasks involving skills like pattern recognition, prediction, optimization, and recommendation generation based on data like video, images, audio, numbers, and text.


2. Examples of AI in action: 

  • Smartphones and Chatbots: AI is already widely used in our digital lives, powering features in smartphones and enabling chatbots for various tasks.
  • Recommendation Systems: AI drives recommendation systems in streaming services and e-commerce platforms, suggesting content or products based on user preferences and behavior.
  • Autonomous Vehicles: AI is crucial in the development of self-driving cars, enabling them to perceive and react to their environment.
  • Medical Advancements: AI is used in healthcare to improve medical diagnostics, facilitate drug discovery and development, and automate online patient experiences.


3. AI and other related terms:

  • Machine Learning (ML): Machine learning is a branch of AI that uses algorithms to automatically learn from data, identify patterns, and make decisions. It's essentially how a computer system develops its intelligence within the broader field of AI.
  • Deep Learning: A more advanced form of machine learning, deep learning utilizes neural networks (modeled after the human brain) to learn complex patterns from data. It's particularly effective for tasks like image and speech recognition and natural language processing.
  • Generative AI: Generative AI is a type of AI that can create new content like text, images, and music. It is built upon deep learning models and large language models (LLMs).

 

The AI Resurgence

AI and ML principles have been around for decades. The recent popularity of AI is a direct result of two factors. First, AI/ML algorithms are computationally intensive. The availability of cloud computing makes it possible to actually run these algorithms. Second, training AI/ML models requires a lot of data. The availability of big data platforms and digital data increases the effectiveness of AI/ML, making it better than humans for many applications. 

The speed, availability, and sheer scale of the infrastructure enable bolder algorithms to solve more ambitious problems. Not only is the hardware faster and sometimes enhanced with specialized processor arrays such as GPUs, it is also available as a cloud service. What used to run in specialized labs with access to supercomputers can now be deployed to the cloud at little cost and with far less effort. 

This has democratized access to the hardware platforms needed to run AI, allowing startups to proliferate. Additionally, emerging open source technologies, such as Hadoop, allow for faster development of scaled AI techniques applied to large and distributed datasets. 

Larger players are investing heavily in various AI technologies. These investments go beyond simple R&D expansion of existing products and are often strategic. For example, the size of IBM's investment in Watson, or Google's investment in driverless cars, deep learning (aka DeepMind), or even quantum computing, promises to significantly improve the efficiency of machine learning algorithms.

A summary of the AI resurgence:

1. Longevity of AI/ML principles: 

  • AI and ML principles have existed for decades.

 

2. Factors driving recent popularity: 

The recent popularity of AI is attributed to two main factors:

  • Cloud Computing: Cloud computing has made running computationally intensive AI/ML algorithms feasible. It provides readily accessible, scalable and cost-effective computing power for data storage and processing, enabling more rigorous testing and refinement of algorithms. Public cloud providers like Amazon and Microsoft have enhanced their support for specialized processors like GPUs, which excel at the parallel processing required by AI algorithms, further reducing training times for complex models.
  • Big Data: The availability of big data platforms and digital data has significantly improved the effectiveness of AI/ML, allowing them to outperform humans in many applications. Big data analytics, often utilizing AI, helps to process and analyze these vast datasets, uncover trends and patterns, and facilitate decision-making.


3. Impact of these factors:

  • Democratization of Access: Cloud computing and the availability of open-source tools like Hadoop have democratized access to the hardware platforms necessary for running AI, leading to a proliferation of startups.
  • Increased Algorithm Sophistication: The availability of scalable infrastructure and processing power enables the development and deployment of more ambitious and effective AI algorithms.


4. Investments in AI: 

Major corporations are making significant strategic investments in AI technologies, including:

  • IBM: Investment in Watson.
  • Google: Investment in areas like driverless cars, deep learning (DeepMind), and quantum computing.
  • Microsoft: Integrating Copilot, refining AI models, and building a strategic partnership with OpenAI, according to AI Magazine.
  • Amazon: Developing custom AI chips, building numerous AI applications, expanding AWS infrastructure, according to AI Magazine.
  • Nvidia: Developing advanced GPUs and AI accelerators, pioneering agentic AI and enterprise AI solutions, driving demand for robotics and industrial AI, says AI Magazine.
  • Meta: Investing in data center infrastructure, developing and open-sourcing Llama LLMs, integrating AI assistants across platforms, according to AI Magazine.


These investments extend beyond simple R&D and are aimed at leveraging AI to significantly improve the efficiency of machine learning algorithms and drive business value.

 

- The Future of AI

AI technologies are already changing how we communicate, how we work and play, how we shop, and how we manage our health. For businesses, AI has become an absolute necessity to create and maintain a competitive advantage. 

As AI permeates our daily lives and aims to make them easier, it will be interesting to see how quickly it develops and how it enables different industries to evolve. Science fiction is slowly becoming a reality as new technological developments appear every day. Who knows what tomorrow will bring?

AI is expected to have a significant impact on the future, with the potential to improve industries, create new jobs, and increase economic growth:

  • Economic growth: Some estimates suggest AI could increase global GDP by as much as 14% by 2030. It could also create new products, services, and industries.
  • Improved industries: AI could improve healthcare, manufacturing, customer service, and other industries. It could also lead to higher-quality experiences for customers and workers.
  • New jobs: AI-driven automation could change the job market, creating new positions and skills.
  • Augmented human capabilities: AI could help humans thrive in their fields by automating repetitive tasks and streamlining workflows.
  • Personalized learning: AI-powered tutoring systems could tailor instruction to individual learning needs.
  • Scientific discovery: AI could help scientists advance their work by extracting data from imagery and performing other tedious tasks.
  • Video creation: AI could be used to create short-form videos for TikTok, video lessons, and corporate presentations.


However, AI also faces challenges, including increased regulation, data privacy concerns, and worries over job losses. If AI falls into the wrong hands, it could be used to expose people's personal information, spread misinformation, and perpetuate social inequalities.

 

- AI Is Evolving to Process the World Like Humans

AI is beginning to develop on its own. Some researchers have created software that draws on concepts from Darwin's theory of evolution, including "survival of the fittest," to build AI programs that improve from generation to generation without human input. AI offers a wide range of technological capabilities that can be applied across all industries, profoundly changing the world around us. 

As AI researchers work to develop and improve their machine learning and AI algorithms, the ultimate goal is to rebuild the human brain. The most perfect AI imaginable would be able to process the world around us through typical sensory input, while leveraging the storage and computing power of supercomputers. 

With this ultimate goal in mind, it is not hard to see the direction in which artificial intelligence continues to evolve. 

Deep learning AI is able to interpret patterns and draw conclusions. Essentially, it is learning how to mimic the way humans process the world around us. That said, AI typically requires conventional computer input, such as encoded data. Developing AI that can process the world through audio, visual, and other sensory input remains a daunting task.

 

- The Relationship Between AI, ML, DL, and Neural Networks

Both machine learning (ML) and deep learning (DL) are subsets of AI, though the terms are often used interchangeably. ML is the largest component of AI; most AI-based products and services on the market would not be possible without ML or DL. 

Both technologies were introduced decades ago, but only in the past few years have their applications become part of everyday life. Some even argue that AI may be the last invention humans need to make. Understanding these three terms -- AI, ML, and DL -- and how they relate to one another matters to everyone from sales teams explaining the services they provide to data scientists deciding which type of model to use. 

While each of AI, ML, and DL has its own definition, data requirements, level of sophistication, transparency, and limitations, how they are defined and how they relate to each other depends largely on the context in which you view them. 

  • Artificial Intelligence (AI): The imitation of the intelligence or behavioral patterns of humans or other biological entities; machines mimic human intelligence through prediction, classification, learning, planning, reasoning, and/or perception.
  • Machine Learning (ML): A technique in which computers "learn" from data without being explicitly programmed with rules. This approach is primarily based on training a model from a dataset. Machine learning is a subset of artificial intelligence that combines mathematics and statistics in order to learn from the data itself and improve with experience.
  • Deep Learning (DL): A technique for performing machine learning inspired by our "brain's own network of neurons" - networks that can adapt to new data. Deep learning is a subset of ML that uses neural networks to solve increasingly complex challenges such as image, audio, and video classification.
  • Neural Networks: A biology-inspired programming paradigm that enables computers to learn from observational data; deep learning is a set of powerful learning techniques built on neural networks. Together they currently provide the best solutions for many problems in image recognition, speech recognition, and natural language processing.

 

Machine learning, while widely considered a form of AI, aims to let machines learn from data rather than from explicit programming. Its typical use is to predict outcomes, much as we recognize a red octagonal sign with white letters and know it means stop. 

AI, on the other hand, can determine the best course of action for how to stop, when to stop, etc. Simply put, the difference is: machine learning predicts, artificial intelligence acts. 

 

[Figure: AI vs ML vs DL vs Neural Networks]

- The Rise of Machine Learning (ML)

Machine learning (ML) is an interdisciplinary field that uses statistics, probability, and algorithms to learn from data and provide insights that can be used to build intelligent applications. 

ML is the main current application of AI. The technology is based on the idea that we should be able to give machines access to data and let them learn for themselves. 

ML is a technique that uses data to train software models. The model learns from training cases, and we can then use the trained model to make predictions on new data cases. 
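The train-then-predict pattern described above can be illustrated with a minimal example. The sketch below fits a straight line to a handful of invented training cases by least squares, one of the simplest possible "models"; the data and function names are hypothetical, not from any particular library.

```python
def train(xs, ys):
    # Fit y = a*x + b to the training cases by ordinary least squares.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def predict(model, x):
    # Apply the trained model to a new, unseen data case.
    a, b = model
    return a * x + b

# Training cases (invented): the model learns the underlying trend...
model = train([1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1])
# ...and is then used to make a prediction on new input.
print(predict(model, 5))
```

Real ML libraries follow the same two-step shape (fit on training data, then predict on new data), just with far more sophisticated models.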

ML provides the foundation for AI. Two important breakthroughs have led to the emergence of machine learning, which is advancing AI at the current rate. 

One of them is the realization that instead of teaching a computer everything it needs to understand the world and how to perform tasks, it is better to teach it to teach itself. The second is the advent of the Internet and the enormous growth in the amount of digital information that is generated, stored, and available for analysis. 

Once these innovations were in place, engineers realized that instead of teaching computers and machines how to do everything, they could write code to make them think like humans, and then connect them to the internet, giving them access to all the information in the world. 

ML is concerned with the scientific study, design, analysis, and application of algorithms that learn concepts, predictive models, behaviors, and strategies of action from observation, reasoning, and experimentation, as well as the characterization of the precise conditions under which classes of concepts and behaviors are learnable. 

Learning algorithms can also be used to model various aspects of human and animal learning. ML integrates and builds on advances in algorithms and data structures, statistical inference, information theory, signal processing, and insights gained from neural, behavioral, and cognitive sciences.

 

- Deep Learning (DL)

Deep learning (DL) uses artificial neural networks (ANNs) to perform complex computations on large amounts of data. It is a form of machine learning inspired by the structure and function of the human brain. DL algorithms train machines by learning from examples. Industries such as healthcare, e-commerce, entertainment, and advertising commonly use deep learning. 

While deep learning algorithms learn their own representations, they rely on artificial neural networks that mirror the way the brain computes information. During training, the algorithm uses unknown elements in the input distribution to extract features, group objects, and discover useful data patterns. This happens at multiple levels, with each layer building on the representations learned by the one before it. 

DL models use a variety of algorithms. While no network is considered perfect, certain algorithms are better suited to perform specific tasks. To choose the right algorithm, it is best to have a solid understanding of all major algorithms. 

DL is a hot topic these days because it aims to simulate the human mind. It's been getting a lot of attention lately, and for good reason. It is achieving results that were not possible before. In deep learning, computer models learn to perform classification tasks directly from images, text, or sound. 

DL models can achieve state-of-the-art accuracy and sometimes exceed human-level performance. The model is trained by using a large amount of labeled data and a neural network architecture with multiple layers. 
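The phrase "neural network architecture with multiple layers" can be made concrete with a single forward pass. The inputs and weights below are made up purely for illustration, and no training is shown; real deep learning frameworks perform the same computation with optimized tensor operations.

```python
import math

def relu(x):
    return max(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases, activation):
    # One fully connected layer: weighted sum of inputs plus bias,
    # passed through a nonlinear activation.
    return [activation(sum(w * v for w, v in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x = [0.5, -1.0]                    # input features (made up)
w1 = [[1.0, -0.5], [0.3, 0.8]]     # first layer: 2 inputs -> 2 hidden units
b1 = [0.0, 0.1]
w2 = [[0.7, -1.2]]                 # second layer: 2 hidden units -> 1 output
b2 = [0.05]

hidden = layer(x, w1, b1, relu)
output = layer(hidden, w2, b2, sigmoid)
print(hidden, output)
```

Training consists of adjusting the weight matrices so that outputs like this match the labels in the training data; the "deep" in deep learning refers to stacking many such layers.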

DL is basically ML on steroids that allows for more accurate processing of large amounts of data. Since it is more powerful, it also requires more computing power. Algorithms can determine on their own (without engineer intervention) whether predictions are accurate. 

For example, consider feeding an algorithm thousands of images and videos of cats and dogs. It can see if an animal has whiskers, claws or a furry tail, and uses learning to predict whether new data fed into the system is more likely to be a cat or a dog.
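As a simplified sketch of the cat/dog example, the toy classifier below uses hand-coded binary features (whiskers, retractable claws, furry tail) and a classic perceptron update rule. Note the contrast with deep learning, which would learn such features itself from raw pixels; the examples and labels here are invented for illustration.

```python
def predict(weights, bias, features):
    # Linear score over the features; 1 = cat, 0 = dog.
    s = bias + sum(w * f for w, f in zip(weights, features))
    return 1 if s > 0 else 0

def train(examples, epochs=20):
    # Perceptron rule: nudge weights toward each misclassified example.
    weights, bias = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for features, label in examples:
            error = label - predict(weights, bias, features)
            weights = [w + error * f for w, f in zip(weights, features)]
            bias += error
    return weights, bias

# Features: (whiskers, retractable claws, furry tail); label 1 = cat, 0 = dog.
examples = [
    ([1, 1, 1], 1), ([1, 1, 0], 1),
    ([1, 0, 1], 0), ([0, 0, 1], 0),
]
weights, bias = train(examples)
print(predict(weights, bias, [1, 1, 1]))
```

Deep learning replaces both the hand-coded features and the single linear unit with many layers of learned units, which is what lets it work directly on images, audio, or text.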

 

- Neural Networks

Neural networks are a family of algorithms that strive to identify potential relationships in a set of data by simulating the way the human brain works. In this sense, a neural network refers to a system of neurons, whether organic or artificial. Neural networks can adapt to changing inputs; thus the network can produce optimal results without redesigning the output criteria. 

Neural networks are a set of algorithms, loosely modeled on the human brain, designed to recognize patterns. They interpret sensory data through a kind of machine perception, labeling or clustering of raw input. The patterns they recognize are numerical and contained in vectors, and all real-world data, whether images, sounds, text, or time series, must be converted into vectors. 
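The point above, that real-world data must be converted into numeric vectors before a network can process it, can be illustrated with a minimal bag-of-words vectorizer. The texts and function names below are invented for the example; production systems use richer encodings such as learned embeddings.

```python
def build_vocab(texts):
    # Assign each distinct word a fixed position in the vector.
    vocab = sorted({word for text in texts for word in text.lower().split()})
    return {word: i for i, word in enumerate(vocab)}

def vectorize(text, vocab):
    # Count how often each vocabulary word occurs in the text.
    vec = [0] * len(vocab)
    for word in text.lower().split():
        if word in vocab:
            vec[vocab[word]] += 1
    return vec

texts = ["the cat sat", "the dog sat"]
vocab = build_vocab(texts)
print(vectorize("the cat sat", vocab))
```

Images and audio get the same treatment: pixel intensities or waveform samples become the numbers in the input vector.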

Neural networks help us with clustering and classification. You can think of them as layers of clustering and classification on top of the data you store and manage. They help to group unlabeled data based on similarity between example inputs and to classify data when training on labeled datasets. 

Neural networks can also extract features that are provided to other algorithms for clustering and classification; therefore, you can think of deep neural networks as components of larger ML applications involving reinforcement learning, classification, and regression algorithms.

Neural networks and deep learning currently provide the best solutions for many problems in image recognition, speech recognition, and natural language processing.

 

- Is the AI Bubble Bursting?

The AI market's future is debated, with some predicting a potential "bubble burst" due to factors like unsustainable valuations and lack of profitability, while others see strong long-term growth potential. 

While the current AI landscape may exhibit some characteristics of a bubble, including speculative enthusiasm and rapid investment, the underlying technology and its potential for transforming industries remain significant. 

While concerns about a potential AI bubble persist, the technology's underlying potential and continued innovation suggest that the AI market will likely experience a period of maturation and refinement rather than a complete collapse. The future of AI will depend on how effectively companies address challenges related to profitability, regulation, and public perception, while continuing to innovate and develop practical applications.

1. Arguments for a potential AI bubble burst: 

  • Unsustainable valuations: Many AI companies are valued at high levels without clear paths to profitability or revenue generation, potentially leading to a market correction if these expectations aren't met.
  • Lack of profitable revenue streams: Despite significant investment, many AI companies struggle to generate substantial revenue from their AI products and services.
  • Regulatory challenges: The rapidly evolving AI landscape may face increased regulatory scrutiny, potentially impacting market growth and investor confidence.
  • Public distrust: Concerns about data privacy, algorithmic bias, and the potential for job displacement could lead to public distrust and resistance to AI adoption.
  • Difficulty making money: Many businesses are finding it challenging to translate AI investments into tangible profits, which could dampen further investment.
  • Data quality issues: Poor data quality can hinder the performance and reliability of AI systems, leading to project failures and disillusionment.
  • Escalating costs: Developing and deploying AI solutions can be expensive, and if costs become prohibitive, it could stifle innovation and growth.


2. Arguments for continued AI growth: 

  • Underlying technology: AI's core strength lies in its ability to process vast amounts of data and identify patterns, making it a powerful tool for various industries.
  • Transformative potential: AI has the potential to revolutionize various sectors, including healthcare, finance, and transportation, driving long-term growth and innovation.
  • Infrastructure investment: Significant investments in AI infrastructure, such as data centers and specialized hardware, indicate a commitment to long-term growth.
  • Ongoing research and development: Continuous advancements in AI research are addressing current limitations and improving model performance, paving the way for more sophisticated and reliable applications.
  • Growing demand for AI talent: The increasing demand for AI professionals, such as data scientists and machine learning engineers, suggests a strong belief in the long-term prospects of AI.

 

- How Close Is AI to Human-level Intelligence?

AI is getting closer to human-level intelligence in specific tasks, but it's still far from replicating general human intelligence.  

While AI excels at tasks like pattern recognition and processing information, it lags behind in areas like common sense reasoning, creativity, and understanding nuanced human emotions.

While AI is making rapid advancements, it's important to recognize the fundamental differences between AI and human intelligence and to consider the ethical implications of AI development, especially as it relates to the potential for both augmentation and displacement of human capabilities.

Here's a more detailed breakdown: 
  • AI is excelling in Narrow Intelligence: AI systems are currently designed for specific tasks, like playing games (chess, Go), image recognition, and language processing. These systems have demonstrated impressive capabilities within their designated domains.
  • The Challenge of General Intelligence (AGI): Reaching Artificial General Intelligence (AGI), which involves human-level intelligence across diverse tasks, remains a significant hurdle. Many AI experts believe that current approaches, like scaling up neural networks, may not be sufficient to achieve AGI.
  • Human Strengths: Human intelligence is characterized by the ability to learn from limited data, generalize knowledge, and apply it to new situations, along with creativity, empathy, and common sense reasoning. AI systems still struggle with these aspects.
  • The Future is Uncertain: While some experts predict AGI within a few years, others believe it may never happen or will take decades. The path to AGI is not clear, and there's a debate about whether it should even be the primary goal of AI research.
  • AI as a Tool: Instead of aiming to replace human intelligence, AI could be developed as a tool to augment human capabilities, supporting human growth and learning. 
  • Beyond Human Intelligence: If AI does surpass human intelligence, it could lead to Artificial Superintelligence (ASI), a hypothetical state where AI surpasses human intellect in all aspects.
  • Recent Progress: Some AI systems, like OpenAI's o1-preview, have shown impressive performance on tasks like mathematical Olympiad problems, demonstrating progress in reasoning and problem-solving capabilities, according to Nature.
  • Potential for Bias: AI systems can also inherit biases from the data they are trained on, leading to unfair or discriminatory outcomes.

