What Is Narrow Artificial Intelligence (AI)? Definition, Challenges, and Best Practices for 2022


Artificial narrow intelligence (ANI) is defined as a goal-oriented form of AI designed to perform a single task, such as tracking weather updates, generating data science reports from raw data, or playing games such as poker and chess. This article explains the fundamentals of narrow AI, its key advantages and challenges, and the top 10 best practices for narrow AI development.

What Is Narrow Artificial Intelligence?

Artificial narrow intelligence (ANI) refers to a goal-oriented form of AI designed to perform a single task, such as tracking weather updates, generating data science reports from raw data, or playing games such as poker and chess.


Artificial narrow intelligence systems are programmed to attend to one task at a time by pulling in information from a specific dataset. In other words, such systems do not go beyond their assigned tasks.

Unlike general AI, narrow AI lacks self-awareness, consciousness, emotions, and genuine intelligence that can match human intelligence. While such systems may appear sophisticated and intelligent, they operate under a predetermined and predefined set of parameters, constraints, and contexts.

The machine intelligence that surrounds us today is part of this same narrow AI. Examples include Google Assistant, Siri, Google Translate, and other natural language processing tools. Although these tools can interact with us and process and comprehend human language, they are termed weak AI because they lack the fluidity or flexibility to think for themselves as humans do.

Let’s consider Siri. It is not a conscious machine; it is a tool performing tasks. When we converse with Siri, it processes our speech, passes the query to a search engine such as Google, and returns the results.

When someone poses abstract questions, such as how to handle a personal problem or deal with a traumatic experience, to tools like Alexa or Google Assistant, they either give vague responses that lack substance or provide links to articles on the internet that presumably address the issue at hand.

In contrast, when we ask a fundamental question such as “what is the temperature outside”, we tend to get an accurate response from virtual assistants such as Siri. This is because answering such basic questions falls within the scope of tasks Siri is designed for.

Moreover, even something as complex as self-driving cars falls under weak AI, as they are trained to navigate the surrounding area with the help of an annotated driving dataset. A typical self-driving vehicle comprises multiple ANI systems that are critical for its smooth movement in a highly complex urban environment.

See More: What Is Artificial Intelligence (AI) as a Service? Definition, Architecture, and Trends

Advantages and Challenges of Narrow AI

Current AI and intelligent machines come under the ‘weak AI’ category. However, this does not discount the benefits of narrow AI, as it is one of the most significant human innovations and intellectual accomplishments.

First, let’s understand the advantages of narrow AI.

Advantages of Narrow AI

1. Facilitates faster decision making

Artificial narrow intelligence systems facilitate faster decision-making as they process data and complete tasks significantly quicker than humans. As a result, they allow us to boost overall productivity and efficiency, thereby improving quality of life. For example, systems such as IBM’s Watson assist doctors in making quick, data-driven decisions by harnessing the power of AI. This has made healthcare better, faster, and safer than ever before.

2. Relieves humans from mundane tasks

Developments in narrow AI have relieved humans from several dull, routine, and mundane tasks. They have made our day-to-day lives easier, from ordering food online with the help of Siri to automatically analyzing large volumes of data and producing results.

Additionally, technologies such as self-driving cars can relieve us of the stress and burden of long stints in traffic and instead give us more leisure time for activities of our interest.

3. Serves as a building block for the development of more intelligent AI

Artificial narrow intelligence systems serve as the foundation for the eventual development of more intelligent AI versions such as general AI and super AI. Speech recognition allows computers to convert sounds to text with significant accuracy, while computer vision enables the recognition and classification of objects in video streams. Currently, Google is using AI to caption millions of YouTube videos.

Today, AI-powered computer vision is already used to unlock screens and tag friends online. Concurrently, the autonomous vehicle sector is exploring ‘affective AI’, where the system can learn non-verbal nuances (feelings, emotions) and prompt sleepy truck drivers to stay alert and pay attention while driving. All these foundational technologies are paving the way for future self-aware and conscious versions of AI.

4. Performs single tasks better than humans

Narrow AI systems can perform single tasks far better than humans. For example, a narrow AI system programmed to detect cancer from X-ray or ultrasound images might quickly spot a cancerous mass in a set of images with substantially higher accuracy than a trained radiologist.

Another example is a predictive maintenance system used at manufacturing plants. The system collects and analyzes incoming sensor data in real time to predict whether a machine is about to fail. Narrow AI automates this task, and the resulting process is much quicker, and virtually impossible for an individual or group of individuals to match in speed or accuracy.

The overall performance, speed, and accuracy of narrow AI surpass those of human beings at such tasks. That said, the AI community faces several critical challenges in broadening the scope of narrow AI.

Now, let’s go over the challenges that narrow AI faces. 

Challenges of Narrow AI

1. Absence of explainable AI

One of the essential requirements for the progress of artificial intelligence is creating AI that is less of a black box, meaning we must be better positioned to understand what is happening inside neural networks. Today’s AI systems, such as one recommending books to read, are effectively black boxes. The deep learning algorithms used in such cases take millions of data points as input and correlate specific features to produce a result. The underlying process is self-directed and challenging for programmers and domain experts to interpret.

However, when people are making high-stake business decisions that involve huge investments by relying on AI models, such a black-box approach can be detrimental as the inputs and operations of the system are not visible to the concerned parties. Thus, one of the key challenges is creating more explainable AI devoid of the black-box approach.

2. Need for impenetrable security

Neural networks are exploited extensively by narrow AI. However, it is vital to understand that AI is quite fragile: it is possible to inject noise and fool the system. For example, an attacker could hack into the software of an autonomous car and change the AI program code so that the program mistakes a bus on the road for an elephant. This can have serious ramifications. A hacker could also hijack the entire network of autonomous vehicles operating in an area and eventually wipe out a billion-dollar investment.

Moreover, a single intrusion into a neural network can disrupt the operations of several systems reliant on that same network. Additionally, as neural networks are subject to attacks, providing impenetrable security remains a crucial challenge.

3. Need to learn from small data

AI models are trained on data derived from examples, which makes examples the real currency of today’s AI. For AI to evolve further, it must learn more from less data. AI should be able to transfer its learning from one neural network to other networks by leveraging prior knowledge.

AI blends learning and reasoning. Although today’s AI has made significant progress in learning and accumulating knowledge, applying reason to that knowledge remains a challenge. For example, a retailer’s customer service chatbot could answer questions related to store hours, product prices, and the store’s cancellation policies. 

However, a tricky question about why product X is better than a similar product Y may freeze the bot. Although creators can program bots to answer such questions, teaching an AI to apply reasoning by itself remains a problem for most scientists and experts.

4. Prone to bias

Today’s AI systems are prone to bias as they often give incorrect results without a plausible explanation. Complex AI models are continually trained on vast amounts of data that contain biases or inaccurate information. As a result, a model trained on such a biased dataset could consider the incorrect information trustworthy and make skewed predictions.

As AI systems learn from past examples, consider a system responsible for making credit decisions. The system might consider ‘not offering credit to women or minorities’ as appropriate based on previous patterns. Thus, verifying and inspecting that the examples used by the system are free of biases remains a critical challenge.

Moreover, as narrow AI lacks the ‘common sense’ aspect, or a sense of fairness and equity, handling training bias requires substantial planning and design work.
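A minimal audit along these lines can be sketched in code: given a credit model's decisions and a protected attribute, compare approval rates per group. All data below is hypothetical and purely illustrative.

```python
import numpy as np

# Hypothetical model decisions (1 = credit approved) and a protected
# attribute for each applicant.
approved = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0])
group = np.array(["a"] * 6 + ["b"] * 6)

def approval_rates(approved, group):
    """Approval rate per group; a large gap flags potential training bias."""
    return {g: float(approved[group == g].mean()) for g in np.unique(group)}

rates = approval_rates(approved, group)
gap = max(rates.values()) - min(rates.values())
print(rates, round(gap, 3))  # group "b" is approved half as often as "a"
```

A gap like this does not prove the model is unfair on its own, but it pinpoints exactly where the training examples deserve closer inspection.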

5. Subject to human failings

Narrow AI relies largely on humans to put it to task. Hence, it is prone to human failings, such as people setting overly ambitious business targets or prioritizing tasks incorrectly.

Consider a situation where a human wrongly defines a task. In this case, irrespective of how long a machine works or the number of computations it performs, the end result will still be a false conclusion. Therefore, narrow AI’s reliance on fallible humans is a huge challenge for experts in the domain.

See More: Top 10 Machine Learning Algorithms

Top 10 Best Practices for Narrow AI Development in 2022

AI development is contributing immensely to improving people’s lives around the world, from business operations, aviation, and manufacturing to healthcare and education. As AI systems become an integral part of every industry vertical, discussions are opening up on the best ways to incorporate fairness, interpretability, privacy, and security into these systems.

Here are the top 10 best practices for narrow AI development. 

Best Practices for Narrow AI Development

1. Have a human-centric design approach

The true impact of an AI system’s predictions, recommendations, and decisions can be evaluated by factoring in how actual end-users experience the system. The following routines can be considered to keep a check on your narrow AI development:

  • Use augmentation and assistance as needed. An AI system can produce a single answer when that answer is highly likely to satisfy a diverse set of users and use cases; in other cases, it may be more appropriate to suggest a few options to end-users.
  • Before diving into full deployment, incorporate potential adverse feedback into the design process, and follow it with live testing and iteration on a small fraction of traffic.
  • Involve a diverse set of users and consider multiple use-case scenarios that allow you to incorporate feedback throughout the project development cycle. Such practice considers a variety of user perspectives while building an AI project, thereby increasing the number of people who can benefit from the technology.

2. Consider metrics to assess training and monitoring

To understand the tradeoffs between several errors and user experiences of the AI system, one should consider several essential metrics rather than opting for a single one.

  • The metrics can include feedback from user surveys, variables that track overall system performance, factors that keep a check on short- and long-term product health such as users’ click-through rate, and quantities that monitor false positive and false negative rates across different categories of the AI product.
  • Ensure that the metrics selected are based on the context and goals of the AI system. For example, a fire alarm system should have high recall, even if that means tolerating occasional false alarms.
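The false positive and false negative monitoring described above can be sketched directly from raw predictions. The labels below are illustrative, standing in for a fire-alarm-like detector where recall matters most:

```python
def confusion_counts(y_true, y_pred):
    """Count true/false positives and negatives from binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

def recall_precision_fpr(y_true, y_pred):
    tp, fp, fn, tn = confusion_counts(y_true, y_pred)
    return tp / (tp + fn), tp / (tp + fp), fp / (fp + tn)

# 1 = fire, 0 = no fire; one missed fire and one false alarm.
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 1, 0, 0, 0, 1]
recall, precision, fpr = recall_precision_fpr(y_true, y_pred)
print(recall, precision, fpr)  # 0.75 0.75 0.25
```

Tracking these three numbers separately, rather than a single aggregate score, is what lets a team trade occasional false alarms for fewer missed fires.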

3. Ensure periodic examination of raw data

Analyzing raw data can help you better understand how ML models work, as the models reflect the data they are trained on. Where sensitive raw data is concerned, you can instead focus on understanding the input data while respecting privacy.

  • By examining raw data, you can ascertain whether the data contains any missing values or incorrect labels. You can figure out whether the information is sampled in a manner that represents all the users (i.e., users of all ages) of your AI system.
  • Skew between training and serving performance is a persistent challenge. Thus, during the training phase, you should look for possible skews and address them immediately, whether by adjusting the training data or restructuring the objective function.
  • Data bias should be addressed by thoroughly analyzing the raw data going into the AI system.
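Two of the checks above, missing values and sampling balance, can be sketched with the standard library alone. The records below are illustrative:

```python
from collections import Counter

# Illustrative raw records; None marks a missing value.
rows = [
    {"age": 34, "label": "cat"},
    {"age": None, "label": "cat"},
    {"age": 52, "label": "dog"},
    {"age": 41, "label": "cat"},
]

# Missing values per field.
missing = {k: sum(1 for r in rows if r[k] is None) for k in rows[0]}

# Label balance: a heavily skewed distribution suggests sampling problems.
labels = Counter(r["label"] for r in rows)

print(missing)  # {'age': 1, 'label': 0}
print(labels)   # Counter({'cat': 3, 'dog': 1})
```

The same per-field counting idea extends to any demographic column, such as verifying that users of all ages are represented in the training sample.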

4. Consider the limitations of the AI dataset and model

Understanding the limitations of the dataset and AI model is vital to keep track of the loopholes of narrow AI.

  • A correlation-detecting AI model should not be used to make causal inferences. For example, your AI model may learn that people who buy running shoes are mostly overweight. However, this does not mean that a user who buys a pair of running shoes will become overweight.
  • ML models predominantly reflect their training data. Hence, it is important to clarify the scope and coverage of training. For example, a chair detector trained on stock photos may work fine on similar stock photos but falter when tested with cellphone photos taken by users.
  • Limitations should be communicated to end-users. For example, if your app uses ML to recognize specific butterflies, you must communicate that the model was trained on a small set of images taken from a particular region. By informing users, you would increase your chances of receiving better feedback for the feature or application you provided.

5. Ensure the AI system works as intended by performing tests

To ensure that the designed AI system works as intended and can be trusted, you must undertake quality test practices.

  • Unit tests should be conducted to test each component of the AI system in isolation.
  • Conducting integration tests will enable you to learn how individual ML components interact with other system components.
  • Conduct iterative user tests to incorporate users’ needs in the AI product development cycles.
  • Build quality checks into the system to avoid triggering an immediate response in situations of unintended system failures. For example, if an important feature unexpectedly goes down for a predictive AI model, the AI system may refrain from generating an output prediction.
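The unit-test and quality-check bullets can be sketched together. Below, a toy preprocessing step is tested in isolation, and a hypothetical `safe_predict` wrapper refrains from producing a prediction when a required feature is unexpectedly missing:

```python
def normalize(xs):
    """Min-max scale a list of numbers into [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def safe_predict(features, model, required=("temp", "pressure")):
    """Quality gate: refuse to predict on degraded input."""
    if any(features.get(k) is None for k in required):
        return None  # refrain from generating an output prediction
    return model(features)

# Unit tests exercising each component in isolation.
assert normalize([0, 5, 10]) == [0.0, 0.5, 1.0]
assert safe_predict({"temp": None, "pressure": 1.0}, lambda f: 1) is None
```

Integration tests would then combine the real preprocessing, model, and serving pieces the same way, asserting on end-to-end behavior rather than one component.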

6. Regularly monitor and update the AI system post-deployment

Regular monitoring will ensure that the AI model considers real-world performances and incorporates user feedback to update the AI system.

  • The AI product should have a clear roadmap that gives the team time to address and fix issues.
  • Fixing issues for both the short and long term is crucial for AI systems. A short-term fix of blocklisting may offer a quick solution; however, it might not fare well in the long run. Hence, balancing short- and long-term fixes can be better than focusing on just one.
  • Understanding how the update may affect the overall system quality and user experience is essential. Hence, before jumping into the updates, you must analyze and understand the difference between the candidate and deployed models.
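One way to compare the candidate and deployed models before an update is a simple promotion gate: evaluate both on the same fixed dataset and promote only if quality does not regress beyond a tolerance. The models and data below are toy stand-ins:

```python
def accuracy(model, data):
    """data: iterable of (x, y) pairs; model: callable x -> prediction."""
    return sum(model(x) == y for x, y in data) / len(data)

def should_promote(candidate, deployed, data, tolerance=0.01):
    """Promote only if the candidate does not regress beyond `tolerance`."""
    return accuracy(candidate, data) >= accuracy(deployed, data) - tolerance

# Toy example: the deployed model always predicts 1; the candidate learns a rule.
data = [((0,), 0), ((1,), 1), ((2,), 1)]
deployed = lambda x: 1
candidate = lambda x: int(x[0] > 0)
print(should_promote(candidate, deployed, data))  # True
```

In practice the evaluation set and metric would mirror the monitoring metrics chosen earlier, so that offline gating and live monitoring measure the same thing.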

7. Ensure fairness

Today, AI systems are used across industry sectors to perform critical tasks such as predicting the severity of a medical condition and matching profiles to jobs or marriage partners. The risk here is that any unfairness in such computerized decision-making systems can have a wide-scale impact. Hence, as AI penetrates across societies, it is crucial to design a fair and inclusive model for all.

  • Analyze how the technology will impact different users and use cases over time.
  • Define goals that allow your AI system to work equitably across diverse use cases. This can include delivering certain features in multiple languages or tailoring them to different age groups.
  • Structure the objective function and underlying algorithms that reflect the fairness goals of the AI system.
  • Monitor the system regularly to check for unfair biases learned by the ML models or algorithms over time.
  • Evaluate user experiences across use cases, contexts, and real-world scenarios using TensorFlow Model Analysis tools.
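Evaluating experiences across use cases amounts to slicing the evaluation by user segment instead of reporting one aggregate number. A minimal sketch, not tied to TensorFlow Model Analysis, with illustrative records:

```python
def sliced_accuracy(records):
    """records: (segment, y_true, y_pred) triples -> accuracy per segment."""
    by_seg = {}
    for seg, yt, yp in records:
        by_seg.setdefault(seg, []).append(yt == yp)
    return {seg: sum(hits) / len(hits) for seg, hits in by_seg.items()}

records = [
    ("18-25", 1, 1), ("18-25", 0, 0), ("18-25", 1, 0),
    ("65+",   1, 0), ("65+",   0, 0), ("65+",   1, 0),
]
print(sliced_accuracy(records))  # a large gap between segments warrants review
```

A model can look healthy in aggregate while failing one segment badly; slicing like this is what surfaces that before users do.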

8. Consider interpretability

Narrow AI systems have improved our lives as automated predictions and decision-making have become mainstream, from music recommendations to monitoring a patient’s vital signs. Despite the proliferation of narrow AI across fields, interpretability is crucial to understanding and trusting AI systems. The following interpretability practices can be considered before, during, and after designing and training AI models.

  • The AI team should work closely with relevant domain experts (e.g., healthcare, marketing, finance) to determine the required interpretability features.
  • Identify post-training interpretability options. Also, determine if you have access to the internals of ML models (i.e., black-box or white-box).
  • Determine whether you can analyze the training or testing data. For example, when working with sensitive and private data, you may not have access to the input data essential for investigation.
  • Evaluate whether your AI model offers too much transparency, which can potentially open up vectors for external abuse.
  • Provide explanations regarding the interpretability of AI systems to appropriate model users; technical details may be communicated to industry experts and academia, while general users can be offered visualizations (charts, graphs, and statistics) or summary descriptions.
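One common post-training, black-box option is permutation importance: shuffle one feature and measure how much a quality metric drops. A minimal sketch with a toy model (no specific framework assumed):

```python
import random

def accuracy(model, X, y):
    return sum(model(row) == t for row, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx, seed=0):
    """Accuracy drop after shuffling one feature column."""
    rng = random.Random(seed)
    col = [row[feature_idx] for row in X]
    rng.shuffle(col)
    X_shuf = [row[:feature_idx] + [v] + row[feature_idx + 1:]
              for row, v in zip(X, col)]
    return accuracy(model, X, y) - accuracy(model, X_shuf, y)

# Toy model that only uses feature 0; feature 1 is pure noise.
model = lambda row: int(row[0] > 0)
X = [[1, 5], [-1, 3], [2, 8], [-2, 1]]
y = [1, 0, 1, 0]
print(permutation_importance(model, X, y, 0))  # drop when feature 0 is scrambled
print(permutation_importance(model, X, y, 1))  # 0.0: feature 1 is ignored
```

Because it only needs model predictions, this works even when the model internals are inaccessible, and its output (a per-feature score) is easy to present as a chart to non-technical users.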

9. Ensure privacy

ML models are programmed to learn from training data and make subsequent predictions on input data. In some cases, both training and input data can be sensitive. Take the example of a tumor detector trained on biopsy images and deployed on an individual patient’s tumor scans. Here, it is crucial to consider the privacy implications while handling sensitive data. It may include legal and regulatory requirements, social ethics, and the patient’s expectations.

  • Conduct tests using metrics such as exposure measurement or membership inference assessment to determine whether the AI model unintentionally memorizes or exposes sensitive data. Moreover, the metrics can also be used for regression tests later during model maintenance.
  • To understand the tradeoffs and determine the optimal model settings, experimentation with variables and parameters for data minimization (such as aggregation, outlier thresholds, and randomization factors) may be considered.
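The membership inference idea above can be illustrated crudely: if a model's per-example loss on training records is systematically lower than on unseen records, it may be memorizing sensitive data. The confidences below are hypothetical:

```python
import math

def loss(p, y):
    """Binary cross-entropy for one example with predicted confidence p."""
    return -math.log(p if y == 1 else 1 - p)

# Hypothetical per-example model confidences on the true class.
train_losses = [loss(0.99, 1), loss(0.98, 1), loss(0.97, 1)]
heldout_losses = [loss(0.70, 1), loss(0.65, 1), loss(0.60, 1)]

gap = (sum(heldout_losses) / len(heldout_losses)
       - sum(train_losses) / len(train_losses))
print(gap)  # a large gap suggests memorization worth investigating
```

Re-running this comparison as a regression test during model maintenance, as the bullet suggests, catches memorization that creeps in with retraining.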

10. Provide security

Security of AI systems involves determining whether the system is behaving as intended, irrespective of how attackers try to interfere. Addressing the security of an AI system before entirely relying on it is essential for safety-critical applications.

  • Identify all possible attack vectors by building a rigorous threat model. For example, the threat model should be able to identify a bug in the AI system that allows an attacker to change the input to the ML model, which can make it vulnerable.
  • If the system makes a mistake, you need to identify unintended consequences and assess the likelihood and severity of these consequences.
  • Develop methods such as spam filtering to combat adversarial ML. Also, test the system performance in hostile settings using tools such as CleverHans.
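The fragility that adversarial tooling like CleverHans probes can be shown with an FGSM-style sketch against a toy linear classifier; this illustrates only the underlying idea, with made-up weights and input:

```python
def predict(w, b, x):
    """Toy linear classifier: class 1 if w.x + b > 0, else class 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def fgsm_probe(w, b, x, eps):
    """Nudge x by eps in the sign of the gradient, against the current class.
    For a linear score, the gradient with respect to x is just w."""
    y = predict(w, b, x)
    sign = -1 if y == 1 else 1
    x_adv = [xi + sign * eps * (1 if wi > 0 else -1)
             for xi, wi in zip(x, w)]
    return x_adv, predict(w, b, x_adv)

w, b = [0.8, -0.4], 0.0
x = [0.5, 0.2]  # score = 0.32 -> class 1
x_adv, y_adv = fgsm_probe(w, b, x, eps=0.5)
print(y_adv)  # 0: a small, targeted perturbation flipped the decision
```

A threat model should assume attackers can craft exactly such perturbations, which is why hostile-setting testing belongs alongside ordinary evaluation.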

See More: AI Job Roles: How to Become a Data Scientist, AI Developer, or Machine Learning Engineer

Takeaway

Today, almost every industry has embraced narrow AI, as it achieves superhuman accuracy and performance at specific tasks. Factors such as robust IoT connectivity, the proliferation of connected devices, and faster computing have propelled the progression of AI systems. While current AI outperforms humans at narrowly defined tasks, the challenge now is how narrow AI can evolve into broader general and super AI. Only time will tell whether AI will master cross-domain tasks, building new neural networks from scratch as it switches from one domain to another.

Do you think weak AI will ever become strong AI? Comment below or let us know on LinkedIn, Twitter, or Facebook. We’d love to hear from you!
