Reduce Peak 4HRA and Software MLC Costs

ThruPut Manager manages workload demand to reduce capacity utilization, based on the 4HRA, when sub-capacity pricing is used with or without capping.

Automate z/OS Batch

ThruPut Manager balances workload arrival, importance, and resource availability.

Make the most of scarce resources

Because money doesn’t grow on trees, let us lower your MSU consumption and MLC costs.

Make Way for Mobile

As mobile applications take up more CPU at unpredictable times, let ThruPut Manager take low-importance batch out of the equation and make room for your high-priority workload.

Country Multiplex Pricing is here

Use ThruPut Manager automation to lower your MSU baseline today and find software license savings, with or without capping, when you move to CMP.

Automate production control

Manage z/OS execution according to your CA 7 schedule and due-out times, ensuring automated on-time completion with minimal intervention and freeing you for other valuable tasks.

Our Customers

ThruPut Manager installations range from individual corporate datacenters to global outsourcing providers in the major industry sectors, including banking, insurance, and government.


Tag Archives: CPU utilization

CPU Busy: It’s not a linear function

CPU busy, as plotted on a graph, is not a linear function. Yet many of us assume it is, and use that assumption for capacity planning. While linear regression may work for CPU busy between 20% and 80% utilization, it doesn’t help you much at the lower or higher ends of the utilization curve. And those extremes are exactly where you want to know the impact of changes in transaction volume or batch workload.
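A toy sketch makes the pitfall concrete. The saturation curve below is hypothetical (your system’s shape will differ), but it shows the mechanism: fit a straight line through the comfortable 20–80% region and it extrapolates to impossible numbers under heavy load.

```python
import math

# Toy illustration (hypothetical curve, not measured data): CPU busy
# flattens as it approaches 100%, so a linear fit made on the 20-80%
# range over-predicts what happens at the high end.

def cpu_busy(load):
    """Hypothetical CPU-busy curve that saturates near 100%."""
    return 100.0 * (1.0 - math.exp(-load / 60.0))

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Fit only on the "comfortable" 20-80% busy region...
loads = [l for l in range(5, 200, 5) if 20 <= cpu_busy(l) <= 80]
slope, intercept = linear_fit(loads, [cpu_busy(l) for l in loads])

# ...then extrapolate to a heavy load: the line sails past 100% busy,
# which is physically impossible, while the curve tops out below it.
heavy = 180
predicted = slope * heavy + intercept
actual = cpu_busy(heavy)
print(f"linear model predicts {predicted:.0f}% busy, curve says {actual:.0f}%")
```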

Overinitiation: When more isn’t better


Even knowledgeable performance experts make the mistake of throwing more resources at work. If 10 buffers are good, 15 would be even better, right? If we have more batch work, throw initiators at it; that will get things moving faster. But it turns out that overinitiation is a lot like adding more toll collectors at a bridge. Immediately after the toll is paid, the lanes have to shrink to the number that fits on the bridge. With too many toll takers, the merge after the toll plaza turns chaotic: accidents happen, and no one moves quickly.

SHARE Webinar – Staying in Tune: Practical considerations of key performance areas


There is no shortage of available metrics to provide detailed insight into system performance. The problem is sorting through them all and recognizing what’s important. Utilization, for example, comes in many forms. HiperDispatch can make or break performance, and processor cache… well, a faster CPU won’t help you with a cache miss. This session will discuss these and other key performance areas to watch, what to look for, what to ignore, and what… well, as they say – it depends.

Slowing CPU performance growth and its impact on workload

Since 1965, when Gordon Moore first made his famous prediction, we’ve benefited from exponential growth in CPU performance. We’ve now come to expect that, with each upgrade, our CPUs will experience a healthy performance boost. Unfortunately, we may have to start readjusting our expectations.

Many believe CPU performance growth is slowing. To continue meeting our workload performance needs, we’re going to have to increase capacity by growing “wide,” that is, by increasing the number of CPUs in our environments. For some applications, this makes sense: adding CPUs is like adding lanes to a gridlocked road. But for single-threaded applications like batch, which in this analogy can use only one lane, things get more complicated.
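The one-lane limit is Amdahl’s law in miniature (the law itself isn’t named in the post, but it formalizes the same argument). A short sketch with illustrative parallel fractions shows why “wide” growth helps some workloads and leaves single-threaded batch standing still:

```python
# Illustrative sketch of Amdahl's law: the serial part of a workload
# caps the speedup you can get from adding CPUs. The parallel
# fractions below are assumptions chosen for illustration.

def speedup(parallel_fraction, cpus):
    """Amdahl's law: 1 / (serial + parallel/cpus)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cpus)

# A fully single-threaded batch job gains nothing from extra CPUs...
print(f"batch (0% parallel), 8 CPUs: {speedup(0.0, 8):.1f}x")

# ...while a mostly parallel transaction workload scales much better.
print(f"OLTP (90% parallel), 8 CPUs: {speedup(0.9, 8):.1f}x")
```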

Automated capacity management in action

We’ve already discussed the advantages of using automated capacity management as a way to get the most out of soft capping, but we thought it would be best if you saw the benefits for yourself. The following chart is from an actual z/OS environment that sought to maximize throughput and performance, limit only non-critical workloads, and manage a system’s overall demand within predetermined capacity levels by tracking the variables we mentioned in our previous post.

As you can see, after the techniques were implemented, the slope of the Rolling 4-Hour Average (R4HA) gradually tapers off to skim the cap level. This is because the lowest-priority workloads gradually slowed down as the R4HA peaked. As utilization continued to rise and increase the level of constraint, more low-to-medium priority workloads slowed as well.
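The smoothing the chart shows falls out of how the R4HA is defined: it is simply the mean MSU consumption over the trailing four hours of measurement intervals, so a spike in any one interval moves the average only gradually. A minimal sketch, using assumed five-minute samples and made-up MSU numbers:

```python
from collections import deque

# Minimal sketch (illustrative numbers, not product code): the Rolling
# 4-Hour Average is the mean MSU consumption over the trailing 4 hours
# of measurement intervals -- here, 48 five-minute samples.

INTERVALS_PER_4H = 4 * 60 // 5  # 48 five-minute samples

def r4ha_series(msu_samples):
    """Yield the R4HA after each 5-minute MSU sample arrives."""
    window = deque(maxlen=INTERVALS_PER_4H)  # old samples fall out
    for msu in msu_samples:
        window.append(msu)
        yield sum(window) / len(window)

# A one-hour spike raises interval consumption from 400 to 700 MSUs,
# but the R4HA -- the figure sub-capacity pricing charges against --
# climbs only partway toward the spike.
samples = [400] * 48 + [700] * 12  # 4h steady, then a 1h spike
series = list(r4ha_series(samples))
print(f"R4HA before spike: {series[47]:.0f} MSU, after: {series[-1]:.0f} MSU")
```

This inertia is what workload management exploits: slowing low-priority work as the R4HA nears the cap keeps the average skimming the cap level instead of punching through it.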

Automated capacity management: The best of all worlds?

While there are definitely some advantages to manual capacity management, as we discussed last week, the most effective method of meeting both the capacity and financial needs of an organization is automated capacity management—provided it’s done effectively.

The goal of any automated system should be to maximize throughput and performance, limit only non-critical workloads, and manage overall demand within the chosen capacity levels. Ideally, you want to avoid, or at least limit, capping of the entire LPAR (or LPARs).

Pros and cons of manual capacity management

Strict caps, as we’ve mentioned in previous posts, can be harmful to application performance. On the flip side, raising a cap to meet the needs of workload conditions can increase the Rolling 4-Hour Average (R4HA) and, as a result, cancel out the benefits of soft capping. So what is an organization to do?

Well, believe it or not, it is possible to take advantage of the financial benefits of soft capping and meet the needs of your organization at the same time. One technique is to lower the multi-programming level (MPL) of a system by controlling the available batch initiators.
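The idea generalizes beyond z/OS: capping the number of initiators is a bounded-concurrency pattern. A hedged sketch (generic Python threads standing in for batch jobs and initiators, not ThruPut Manager code) shows how limiting the slots lowers the multi-programming level without rejecting any work:

```python
import threading
import time

# Illustrative sketch: a semaphore models a fixed pool of batch
# initiators. Jobs beyond the pool size wait rather than run, so the
# multi-programming level (concurrent jobs) never exceeds the cap.

MAX_INITIATORS = 2              # the lowered MPL: 2 jobs at once
initiators = threading.BoundedSemaphore(MAX_INITIATORS)
lock = threading.Lock()
running = 0                     # jobs currently executing
peak = 0                        # highest concurrency observed

def batch_job(name):
    global running, peak
    with initiators:            # job waits here if no initiator is free
        with lock:
            running += 1
            peak = max(peak, running)
        time.sleep(0.05)        # stand-in for the job's CPU work
        with lock:
            running -= 1

# Submit 8 jobs; only MAX_INITIATORS ever run concurrently.
threads = [threading.Thread(target=batch_job, args=(f"JOB{i}",))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"8 jobs submitted, peak concurrency: {peak}")
```

All eight jobs still complete; they are spread out in time, which is exactly the demand-smoothing effect that keeps the R4HA down.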