Reduce Peak 4HRA and Software MLC Costs

ThruPut Manager manages workload demand to reduce capacity utilization, based on the 4HRA, when sub-capacity pricing is used with or without capping. More »

Automate z/OS Batch

ThruPut Manager balances workload arrival, importance, and resource availability. More »

Make the most of scarce resources

Because money doesn’t grow on trees, let us lower your MSU consumption and MLC costs. More »

Make Way for Mobile

As mobile applications take up more CPU at unpredictable times, let ThruPut Manager take low importance batch out of the equation and make room for your high priority workload. More »

Country Multiplex Pricing is here

Use ThruPut Manager automation to lower your MSU baseline today and find software license savings, with or without capping, when you move to CMP. More »

Automate production control

Manage z/OS execution according to your CA 7 schedule and due-out times, ensuring on-time completion with minimal intervention and freeing you for other valuable tasks. More »

Our Customers

ThruPut Manager installations range from individual corporate datacenters to global outsourcing providers in the major industry sectors, including banking, insurance, and government. More »


Tag Archives: LPAR

Introducing LPAR sets: Save your most expensive MSUs – soft capping optional (webinar)


Many datacenters are enjoying the software savings provided by ThruPut Manager’s Automated Capacity Management (ACM) component, a safe and selective method of reducing MSU consumption and the resulting MLC costs. Now, ACM introduces a significant enhancement: LPAR Sets.

Monthly License Charges are calculated on a CPC basis, but each LPAR can contribute to the total differently, depending on its software stack, its business requirements, and its MSU cost. We are now introducing LPAR Sets to give you more granular control over your batch workload across those differences.

Cache me if you can


When many of us started working on computers, memory was VERY expensive, and it sharply limited system performance and capacity. As costs came down, hardware vendors came up with new ways to manage memory effectively, including virtual memory. The most recent idea is offering multiple levels of cache: memory located closer to the CPU and designed to hold frequently accessed items. We can’t do a lot about the CPU cache, but it is helpful to understand how it works.

Sub-capacity pricing updates

Over the years, IBM software pricing has changed to reflect shifting industry dynamics. That was true when IBM first introduced sub-capacity pricing. As physical machines were increasingly being split into several LPARs, single installations weren’t consistently using the entire capacity of a CPC.

Sub-capacity pricing took that change into account by offering customers more flexibility. Today’s pricing model is tied to the Rolling 4-Hour Average (R4HA), which reflects the peak usage of the LPARs over the month.
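The mechanics of the R4HA are easy to see with a few lines of code. This is a minimal sketch, not IBM's actual SCRT calculation: it assumes MSU consumption is sampled at a fixed interval (five minutes here) and simply averages the most recent four hours of samples.

```python
from collections import deque

def rolling_4hr_average(msu_samples, interval_minutes=5):
    """Compute a rolling 4-hour average over per-interval MSU samples.

    msu_samples: MSU consumption recorded once per interval.
    Returns one R4HA value per sample (the average of up to the
    last four hours' worth of samples).
    """
    window_size = (4 * 60) // interval_minutes  # samples in 4 hours
    window = deque(maxlen=window_size)          # old samples fall out
    averages = []
    for sample in msu_samples:
        window.append(sample)
        averages.append(sum(window) / len(window))
    return averages

# Illustrative data: steady load, a 2-hour burst, then steady load again.
samples = [100] * 48 + [400] * 24 + [100] * 48  # 5-minute samples
r4ha = rolling_4hr_average(samples)
peak = max(r4ha)  # the month's peak R4HA is what drives the MLC bill
```

Note how the averaging smooths the burst: even though instantaneous demand hits 400 MSU, the R4HA peaks well below that, which is exactly why shifting or slowing deferrable work during a peak can lower the billable figure.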

Automated capacity management in action

We’ve already discussed the advantages of using automated capacity management as a way to get the most out of soft capping, but we thought it would be best if you saw the benefits for yourself. The following chart is from an actual z/OS environment that sought to maximize throughput and performance, limit only non-critical workloads, and manage a system’s overall demand within predetermined capacity levels by tracking the variables we mentioned in our previous post.

As you can see, after the techniques were implemented, the slope of the Rolling 4-Hour Average (R4HA) gradually tapers off to skim the cap level. This is because the lowest-priority workloads gradually slowed down as the R4HA peaked. As utilization continued to rise and increase the level of constraint, more low-to-medium priority workloads slowed as well.

Automated capacity management: The best of all worlds?

While there are definitely some advantages to manual capacity management, as we discussed last week, the most effective method of meeting both the capacity and financial needs of an organization is automated capacity management—provided it’s done effectively.

The goal of any automated system should be to maximize throughput and performance, limit only non-critical workloads, and manage the overall demand within the chosen capacity levels. You ideally want to avoid, or at least limit, the entire LPAR (or LPARs) being capped.

Pros and cons of manual capacity management

Strict caps, as we’ve mentioned in previous posts, can be harmful to application performance. On the flip side, raising a cap to meet the needs of workload conditions can increase the Rolling 4-Hour Average (R4HA) and, as a result, cancel out the benefits of soft capping. So what is an organization to do?

Well, believe it or not, it is possible to take advantage of the financial benefits of soft capping and meet the needs of your organization at the same time. One technique is to lower the multi-programming level (MPL) of a system by controlling the available batch initiators.
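As a rough illustration of that technique, here is a hedged sketch of how a policy might choose an initiator count from the current R4HA. The function name, thresholds, and the linear ramp are all illustrative assumptions, not ThruPut Manager's actual algorithm; a real implementation would act on JES2 initiators and consider workload importance.

```python
def target_initiators(r4ha_msu, cap_msu, max_inits, min_inits=1):
    """Pick how many batch initiators to leave active, given how close
    the rolling 4-hour average is to the soft-cap target.

    Below 80% of the cap, run all initiators; between 80% and 100%,
    drain them proportionally; at or above the cap, keep only the
    minimum so critical work still flows. (Illustrative policy only.)
    """
    headroom = 1.0 - (r4ha_msu / cap_msu)
    if headroom >= 0.20:           # comfortably below the cap
        return max_inits
    if headroom <= 0.0:            # at or over the cap
        return min_inits
    # Scale linearly between min and max inside the 80-100% band.
    fraction = headroom / 0.20
    return max(min_inits, round(min_inits + fraction * (max_inits - min_inits)))
```

The point of the ramp is that lowering the MPL gradually, rather than capping the whole LPAR, slows only the deferrable batch while online and critical work keeps running.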

What happens if your R4HA exceeds your soft cap?

As system utilization grows, applications feel the effects gradually, sometimes starting to slow down at as little as 70% CPU utilization and worsening until saturation and timeouts set in. When your Rolling 4-Hour Average (R4HA) exceeds a system’s cap, however, the effects are instantaneous, and they come without warning. Your utilization can climb from 80% to 90% or even 99.9% of a predetermined cap level, a level that may be far below your machine’s full capacity, without any performance interruption; but once you exceed that threshold, watch out. Some organizations are exploring creative ways to better exploit soft capping and avoid these impacts.