How to Maximize and Optimize IT System Performance


Many system administrators spend their days doing something that’s likely not in their job descriptions: trying to stay one step ahead of data overload and complaints about slow system performance.

Chances are high that they also lose sleep worrying about system crashes and network shutdowns.

None of this stress or anxiety is necessary, though, because there are cost-effective ways to get maximum performance out of an existing, but overburdened, IT environment.

Primary Cause of I/O Performance Issues

First, it’s important to understand one of the most overlooked aspects of poor performance: I/O.

In physical and virtual data centers and cloud-based environments, performance hinges on interaction between the three basic layers of computing: compute (CPU), network and storage.

  • Performance is dependent on how I/O traverses these three layers.
  • I/O slows as data volume increases, and also when layers are separated in a SAN, the cloud or a data center.
  • Large amounts of I/O are generated for non-application overhead to manage metadata and to keep the compute layers running.
  • I/O performance lags as the amount of data and file system overhead increases.
  • Windows, which runs on roughly 80 percent of systems worldwide, is a valuable platform, but it is not immune to these inefficiencies.

Condusiv’s 2019 I/O Performance Survey reported that 28 percent of all organizations are getting user complaints about slow performance from their applications, especially Microsoft SQL apps.

Random, fragmented I/O generated by the Windows OS (any version, including Windows 10 and Windows Server 2019) can reduce an IT system’s performance by as much as 40 percent.

No matter the storage environment, the Windows OS degrades performance because of inefficiencies in how it transfers data from the server to storage. This is because:

  • Windows only recognizes the virtual disk, the logical disk within the file system. In the “mind” of the OS, the logical disk is the same as the physical disk, but it is actually just a frame of reference.
  • Windows forces the underlying server-to-storage architecture to execute a large number of additional I/O operations.
  • Because no APIs exist between the Windows OS and the storage system, Windows writes a file to the next available space in the logical disk layer rather than to the best location for writing, and later reading, that file.
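The “next available space” behavior described above can be illustrated with a toy allocator simulation. This is a hypothetical sketch, not Windows internals: it contrasts taking the first free blocks in order with preferring a contiguous free run, and counts the resulting extents, since each discontiguous extent costs an extra I/O when the file is read back.

```python
# Toy block-allocation sketch (illustrative only, not actual NTFS behavior).

def next_available(free, blocks_needed):
    """Take the first free blocks in list order, wherever they sit."""
    return free[:blocks_needed]

def contiguous_fit(free, blocks_needed):
    """Prefer a contiguous run of free blocks if one exists."""
    run = []
    for b in free:
        run = run + [b] if run and b == run[-1] + 1 else [b]
        if len(run) == blocks_needed:
            return run
    return free[:blocks_needed]  # no contiguous run: fragment anyway

def count_extents(blocks):
    """Each break in block continuity means a separate read I/O later."""
    return 1 + sum(1 for a, b in zip(blocks, blocks[1:]) if b != a + 1)

# Free list with a hole left by deleted files: blocks 0-3 free,
# 4-9 in use, 10-29 free.
free_blocks = [0, 1, 2, 3] + list(range(10, 30))

frag = next_available(free_blocks, 8)     # [0, 1, 2, 3, 10, 11, 12, 13]
contig = contiguous_fit(free_blocks, 8)   # [10, 11, ..., 17]

print(count_extents(frag))    # 2 extents -> 2 read I/Os for one file
print(count_extents(contig))  # 1 extent  -> 1 read I/O
```

The same file costs twice as many read operations when the allocator grabs whatever space comes next, which is the penalty the bullet points describe, multiplied across every file on a busy server.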

I’m often asked about storage controller optimization as a solution. In enterprise storage, the data’s physical location is managed by underlying storage controllers. Storage devices, however, can’t control how Windows writes (and reads). While many storage controllers can buffer or coalesce files on a dedicated SSD or NVRAM tier, or even move blocks of the same file to line up sequentially, this doesn’t solve the problem of the first penalized write or subsequent penalized reads. That’s because the storage controller must identify a pattern before the file blocks can be moved.

Software as a Solution

We all know how inefficiency can make our jobs difficult, keep us up at night and cripple an organization. Fortunately, there are solutions:

  • Balance workloads across servers.
  • Monitor for specific performance issues, especially during periods of peak usage.
  • Check whether software-only solutions can remove bottlenecks from the major chokepoints: storage, network and CPU.
  • Add relatively inexpensive hardware resources, such as memory, to allow for more caching.
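The last point, adding memory to allow for more caching, comes down to working-set size. A minimal LRU read-cache sketch (block numbers and capacities are made up for illustration) shows how a cache just one block too small for the working set can miss constantly, while a slightly larger one serves almost everything from memory:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used read cache for the sketch."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.data:
            self.data.move_to_end(block)  # mark as most recently used
            self.hits += 1
        else:
            self.misses += 1              # this read would go to storage
            self.data[block] = True
            if len(self.data) > self.capacity:
                self.data.popitem(last=False)  # evict least recently used

# A repeating working set of 4 blocks, read 200 times in total.
workload = [1, 2, 3, 4] * 50

small, big = LRUCache(3), LRUCache(4)
for b in workload:
    small.read(b)
    big.read(b)

print(small.hits)  # 0   - cache thrashes: every read goes to storage
print(big.hits)    # 196 - after 4 cold misses, everything is cached
```

One extra block of capacity turns a 0 percent hit rate into 98 percent, which is why relatively cheap memory can be such an effective lever.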

The default answer to poor performance is typically to buy new hardware with higher performance ratings. Although new hardware can make a difference in the short term, the cost of expensive upgrades and the productivity lost to system downtime are often prohibitive.

When I/O density is low, buying storage rated for more I/Os per second (IOPS) won’t solve anything. If you can instead pack more data into each I/O, performance improves dramatically. For example, improving the I/O density from 512 bytes to 32K bytes on SSDs can increase read and write speeds by more than 50 times.
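The arithmetic behind that claim is simple: at a fixed IOPS budget, throughput scales linearly with the payload carried per I/O. A quick back-of-the-envelope check (the IOPS figure here is hypothetical):

```python
# Throughput = IOPS x bytes per I/O, so at a fixed IOPS budget the
# payload per operation is the whole game.

iops = 10_000           # hypothetical fixed IOPS budget of the device

small_io = 512          # bytes per I/O (low density)
large_io = 32 * 1024    # bytes per I/O (32K, high density)

throughput_small = iops * small_io  # bytes/second
throughput_large = iops * large_io  # bytes/second

print(throughput_large // throughput_small)  # 64x more data per second
```

A 64-fold gain in data moved per second is consistent with the “more than 50 times” figure above, before accounting for any per-operation overhead saved.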

Third-party software utilities are available that specifically address these performance issues. These tools are often less costly, easier to implement and more effective than upgrading hardware to the latest and greatest. Most vendors offer trial versions, so you don’t need to commit up front to the money, employee resources and downtime that major hardware or primary-application changes require.

The bottom line is that there is a way to stay one step ahead in the performance race.