Tag Archives: workload
When many of us started working on computers, memory was very expensive and severely limited both system performance and capacity. As costs came down, hardware vendors developed new techniques for managing memory effectively, including virtual memory. The most recent of these is the use of multiple levels of cache: small amounts of memory located close to the CPU and designed to hold frequently accessed data. We can’t do a lot about the CPU cache itself, but it is helpful to understand how it works.
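The value of cache comes from locality of reference: data that is touched in order makes full use of each cache line that is fetched. As a rough sketch (the sizes here are arbitrary, and in a lower-level language the gap is far more dramatic than in Python), comparing a sequential walk with a large-stride walk over the same data hints at the effect:

```python
import time

N = 1 << 20        # ~1M integers
STRIDE = 4096      # jump far enough to defeat cache-line reuse
data = list(range(N))

def sequential_sum(xs):
    # Walks memory in order; each fetched cache line is fully used.
    total = 0
    for x in xs:
        total += x
    return total

def strided_sum(xs, stride):
    # Visits every element, but in a cache-unfriendly order.
    total = 0
    for start in range(stride):
        for i in range(start, len(xs), stride):
            total += xs[i]
    return total

t0 = time.perf_counter(); s1 = sequential_sum(data); t1 = time.perf_counter()
s2 = strided_sum(data, STRIDE); t2 = time.perf_counter()
assert s1 == s2  # identical work, different memory-access pattern
print(f"sequential: {t1 - t0:.3f}s  strided: {t2 - t1:.3f}s")
```

Both loops do exactly the same arithmetic; any timing difference comes purely from how the access pattern interacts with the cache.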
Since 1965, when Gordon Moore first made his famous prediction, we’ve benefited from exponential growth in CPU performance. We’ve now come to expect that, with each upgrade, our CPUs will experience a healthy performance boost. Unfortunately, we may have to start readjusting our expectations.
Many believe CPU performance growth is slowing. To continue meeting our workload performance needs, we’re going to have to increase capacity by growing “wide”—or increasing the number of CPUs in our environments. For some applications, this makes sense: like a road suffering from gridlock, adding CPUs is like adding a few more lanes. But for single-threaded applications like batch—which, in this analogy, are only capable of using one lane—things get a little more complicated.
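The limit of “growing wide” can be made concrete with Amdahl’s law, which bounds the speedup from adding processors by the fraction of the work that can actually run in parallel. The fractions below are purely illustrative:

```python
def amdahl_speedup(parallel_fraction, n_cpus):
    """Overall speedup when only part of the work can use extra CPUs."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cpus)

# A fully parallel workload scales with the number of "lanes"...
print(amdahl_speedup(1.0, 8))    # 8.0
# ...but a single-threaded batch job gains nothing from extra CPUs.
print(amdahl_speedup(0.0, 8))    # 1.0
# Even a 50%-parallel mix tops out below 2x, no matter how wide you go.
print(amdahl_speedup(0.5, 1024))
```

This is why adding CPUs helps the “gridlocked road” workloads but does little for single-threaded batch: the serial portion sets a hard ceiling on the benefit.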
We’ve already discussed the advantages of using automated capacity management as a way to get the most out of soft capping, but we thought it would be best if you saw the benefits for yourself. The following chart is from an actual z/OS environment that sought to maximize throughput and performance, limit only non-critical workloads, and manage a system’s overall demand within predetermined capacity levels by tracking the variables we mentioned in our previous post.
As you can see, after the techniques were implemented, the slope of the Rolling 4-Hour Average (R4HA) gradually tapers off to skim the cap level. This is because the lowest-priority workloads gradually slowed down as the R4HA peaked. As utilization continued to rise and increase the level of constraint, more low-to-medium priority workloads slowed as well.
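For readers less familiar with the metric, the R4HA is simply the average of the most recent four hours of utilization samples. A minimal sketch (assuming 5-minute MSU samples, so 48 intervals per window) shows why the average responds to a demand spike gradually rather than instantly:

```python
from collections import deque

INTERVALS_PER_4H = 48  # four hours of 5-minute samples

def r4ha_series(msu_samples):
    """Rolling 4-hour average over a stream of 5-minute MSU samples."""
    window = deque(maxlen=INTERVALS_PER_4H)
    averages = []
    for msu in msu_samples:
        window.append(msu)
        averages.append(sum(window) / len(window))
    return averages

# Four flat hours at 100 MSU, then a two-hour burst at 200 MSU:
samples = [100] * 48 + [200] * 24
series = r4ha_series(samples)
print(series[47], series[-1])  # 100.0, then only partway toward 200
```

Because the window smooths demand over four hours, slowing low-priority work as the average climbs gives the automation time to bend the curve before it crosses the cap.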
While there are definitely some advantages to manual capacity management, as we discussed last week, the most effective method of meeting both the capacity and financial needs of an organization is automated capacity management—provided it’s done effectively.
The goal of any automated system should be to maximize throughput and performance, limit only non-critical workloads, and manage the overall demand within the chosen capacity levels. Ideally, you want to avoid, or at least limit, situations in which an entire LPAR (or group of LPARs) is capped.
Strict caps, as we’ve mentioned in previous posts, can be harmful to application performance. On the flip side, raising a cap to meet the needs of workload conditions can increase the Rolling 4-Hour Average (R4HA) and, as a result, cancel out the benefits of soft capping. So what is an organization to do?
Well, believe it or not, it is possible to take advantage of the financial benefits of soft capping and meet the needs of your organization at the same time. One technique is to lower the multi-programming level (MPL) of a system by controlling the available batch initiators.
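As a hypothetical sketch of that technique (the thresholds and the function name below are invented for illustration, not taken from any product), an automation loop might shed batch initiators as the R4HA approaches the cap and restore them as it falls:

```python
def target_initiators(r4ha, cap, max_inits=10, min_inits=2):
    """Scale batch initiators down as the R4HA approaches the cap.

    Below 80% of the cap, batch runs wide open; between 80% and 100%,
    initiators are shed roughly linearly; at or above the cap, only a
    minimum remains so critical work keeps its share of the capacity.
    """
    headroom = (cap - r4ha) / cap          # fraction of cap still free
    if headroom >= 0.20:
        return max_inits
    if headroom <= 0.0:
        return min_inits
    scale = headroom / 0.20                # 0.0 .. 1.0 within the band
    return min_inits + round(scale * (max_inits - min_inits))

for r4ha in (300, 420, 460, 500):
    print(r4ha, target_initiators(r4ha, cap=500))
```

Fewer initiators mean fewer concurrent batch jobs (a lower MPL), which trims demand from the least critical work first while online workloads continue untouched.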