3 Automation Practices To Follow To Avoid IT ‘Stock Market Crashes’


In this article, Paul Barrett, CTO of Enterprise at NETSCOUT, gives his take on the importance of smart automation for IT professionals and strategies to prevent unexpected crashes.

On October 19, 1987, the Dow Jones Industrial Average dropped 22.6% and the S&P 500 fell 20.4% in a single trading day. By the end of that day, $1.7 trillion had disappeared from worldwide markets, with $500 billion lost in the United States alone.

Most observers agree a financial bubble was due for a correction at the time. But many analysts also believe a second factor contributed to the crash: it was the first time a significant number of automated trading algorithms were in play when a major shift in the markets occurred.

By 1987, both humans and computer programs worked the stock market. IT professionals had developed each algorithm in isolation to meet a specific financial goal. Unfortunately, no one had fully comprehended how these algorithms might behave collectively when pushed outside their expected window of operation. The same approach is in use today, but now with safety nets such as market-wide circuit breakers.

Automation has many benefits, but it is far from perfect. Connecting independently designed systems can produce unexpected outcomes, just as it did in the 1987 stock market crash.


When you create a chain whose end connects back to its start, you have created a feedback loop, whether by design or by accident. The same risk exists in complex IT environments, where it is increasingly common for systems to share data. For example, the owners of System A may decide they would benefit from information produced by System B. It should be easy for one system to ingest and act on data from another, but the owners of System A might not be aware that System B connects to System C, or that System C in turn consumes data from System A, quietly closing the loop.

Therefore, we must recognize that whenever we combine two separate systems, we create a new “supersystem”. We need to understand how all of our systems operate together because the supersystem they collectively represent may be more extensive than expected. It might even create unanticipated feedback loops.
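To make the idea concrete, here is a minimal sketch (in Python, with an entirely hypothetical dependency map) of how an architect might walk the data-sharing links between systems and surface any feedback loop hiding in the resulting supersystem.

```python
# Illustrative sketch: detect feedback loops in a map of which systems
# consume data from which. The dependency map below is hypothetical.
DEPENDENCIES = {
    "system_a": ["system_b"],   # A ingests data from B
    "system_b": ["system_c"],   # B ingests data from C
    "system_c": ["system_a"],   # C ingests data from A -- loop closed
}

def find_feedback_loop(deps):
    """Return the first cycle found as a list of system names, or None."""
    def visit(node, path, visited):
        if node in path:                      # node is on the current path: cycle
            return path[path.index(node):] + [node]
        if node in visited:                   # fully explored before, no cycle here
            return None
        visited.add(node)
        for upstream in deps.get(node, []):
            cycle = visit(upstream, path + [node], visited)
            if cycle:
                return cycle
        return None

    visited = set()
    for start in deps:
        cycle = visit(start, [], visited)
        if cycle:
            return cycle
    return None

if __name__ == "__main__":
    loop = find_feedback_loop(DEPENDENCIES)
    if loop:
        print("Feedback loop detected: " + " -> ".join(loop))
```

Run against the map above, the sketch reports the A-to-B-to-C loop that none of the individual system owners could see on their own.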

IT organizations can prevent their own unexpected crashes by ensuring they have visibility throughout their systems. Practicing observability, understanding the limitations of operator instructions, and leaning on artificial intelligence capabilities all support this process.

1. Observability 

Preventing system crashes begins with optimal visibility and observability. An observable system is one whose internal state can be determined from its external inputs and outputs, for example through instrumentation and monitoring; for IT teams, network traffic is a particularly rich source of this information. The cautionary tale of the 1987 stock market crash reinforces the enormous importance of a high level of observability in computerized systems. Such observability, coupled with continuous monitoring, allows IT organizations to catch problems early.
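As a minimal illustration of the principle (all names and thresholds here are assumptions, not a specific product), the sketch below infers that something may be wrong inside a system purely from one external output, a latency metric, by comparing each new sample against a rolling baseline.

```python
# Minimal observability sketch: judge a system's internal health from
# an external output (a response-time metric). The window size and
# sigma threshold are illustrative choices.
from collections import deque
from statistics import mean, stdev

class LatencyMonitor:
    def __init__(self, window=60, sigma=3.0):
        self.samples = deque(maxlen=window)  # keep only recent observations
        self.sigma = sigma                   # how far from baseline is "abnormal"

    def observe(self, latency_ms):
        """Record one sample; return True if it looks anomalous."""
        anomalous = False
        if len(self.samples) >= 10:          # need a baseline first
            mu, sd = mean(self.samples), stdev(self.samples)
            if sd > 0 and abs(latency_ms - mu) > self.sigma * sd:
                anomalous = True
        self.samples.append(latency_ms)
        return anomalous

monitor = LatencyMonitor()
for observed in [20, 22, 21, 19, 20, 23, 21, 20, 22, 21, 250]:
    if monitor.observe(observed):
        print(f"Alert: latency {observed} ms is far outside the recent baseline")
```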


2. Operator’s Instructions

These instructions ultimately drive most automated systems, whether through a user interface, a configuration file, or a script. Engineers are only human, so errors can creep into any of these mechanisms. Even when automated tools, processes, and software behave exactly as instructed, they may not produce flawless results for enterprise teams. This brings us back to the importance of maintaining observability, which is critical in these scenarios to catch the potentially catastrophic impact of faulty instructions before they spread, whether the mistake lies in the code itself or in the wider fabric of the system.
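One inexpensive defense is to validate operator-supplied input before automation acts on it. The sketch below is illustrative only; the configuration field and its limits are hypothetical.

```python
# Sketch: sanity-check an operator-supplied configuration before any
# automation acts on it. The fields and limits are hypothetical.
import json

LIMITS = {"min_instances": 1, "max_instances": 100}

def validate_config(raw):
    """Return a list of human-readable problems; empty means acceptable."""
    problems = []
    try:
        cfg = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"config is not valid JSON: {exc}"]
    instances = cfg.get("instances")
    if not isinstance(instances, int):
        problems.append("'instances' must be an integer")
    elif not LIMITS["min_instances"] <= instances <= LIMITS["max_instances"]:
        problems.append(f"'instances'={instances} is outside the safe range "
                        f"{LIMITS['min_instances']}..{LIMITS['max_instances']}")
    return problems

# A typo like an extra zero is exactly the human error the section warns about.
for issue in validate_config('{"instances": 5000}'):
    print("Refusing to apply:", issue)
```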

3. Leaning on Artificial Intelligence Capabilities 

When enterprises attempt to turn their digital transformation ambitions into reality, teams need to rely on systems that largely run autonomously, for example under the guidance of artificial intelligence for IT operations (AIOps) algorithms. This kind of automation can reduce the possibility of error and decrease manual responsibilities. However, while AIOps makes daily processes more efficient, it still requires a level of observability, and it should include failsafes, such as automated alerts, that help those responsible for maintaining these systems catch problems before they cascade into larger crashes.
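As a sketch of what such a failsafe might look like (the names and limits are illustrative, not a prescribed design), the snippet below rate-limits an autonomous remediation loop and hands control back to a human when the automation starts acting unusually often, much like the circuit breakers exchanges adopted after 1987.

```python
# Sketch of a failsafe for an autonomous remediation loop: if automation
# starts acting too frequently, stop and alert a human rather than let a
# feedback loop cascade. The action and time limits are illustrative.
import time

class ActionCircuitBreaker:
    def __init__(self, max_actions=5, window_seconds=60):
        self.max_actions = max_actions
        self.window = window_seconds
        self.timestamps = []

    def allow(self):
        """Permit an automated action unless the breaker has tripped."""
        now = time.monotonic()
        self.timestamps = [t for t in self.timestamps if now - t < self.window]
        if len(self.timestamps) >= self.max_actions:
            return False                      # trip: too many actions, too fast
        self.timestamps.append(now)
        return True

breaker = ActionCircuitBreaker(max_actions=5, window_seconds=60)
for attempt in range(8):
    if breaker.allow():
        print(f"action {attempt}: automated remediation applied")
    else:
        print(f"action {attempt}: breaker tripped, paging a human operator")
```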


All enterprises need to make pervasive visibility and observability the focal points of their automated environments. Through this process, architects can understand the interdependencies between their various systems, retain the oversight needed to identify abnormal behavior before it poses a threat, and take control to avoid IT "stock market crashes" of their own.

Did you find this article helpful? Tell us what you think on LinkedIn, Twitter, or Facebook. We'd be thrilled to hear from you.