Reduce Peak 4HRA and Software MLC Costs

ThruPut Manager manages workload demand to reduce capacity utilization, as measured by the 4HRA, when sub-capacity pricing is used, with or without capping. More »

Automate z/OS Batch

ThruPut Manager balances workload arrival, importance, and resource availability. More »

Make the most of scarce resources

Because money doesn’t grow on trees, let us lower your MSU consumption and MLC costs. More »

Make Way for Mobile

As mobile applications take up more CPU at unpredictable times, let ThruPut Manager take low-importance batch out of the equation and make room for your high-priority workload. More »

Country Multiplex Pricing is here

Use ThruPut Manager automation to lower your MSU baseline today and find software license savings, with or without capping, when you move to CMP. More »

Automate production control

Manage z/OS execution according to your CA 7 schedule and due-out times, ensuring automated on-time completion with minimal intervention and freeing you for other valuable tasks. More »

Our Customers

ThruPut Manager installations range from individual corporate datacenters to global outsourcing providers in the major industry sectors, including banking, insurance, and government. More »


Tag Archives: Workload Manager

Overinitiation: When more isn’t better


Even knowledgeable performance experts make the mistake of throwing more resources at a problem. If 10 buffers are good, 15 would be even better, right? If we have more batch work, throw initiators at it; that should get things moving faster. But it turns out that overinitiation is a lot like adding more toll booths at a bridge. Immediately after the toll is paid, the lanes have to merge back down to the number that fits on the bridge. With too many toll takers, the merge after the toll plaza turns chaotic, accidents happen, and no one moves fast.
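
The bridge analogy can be made concrete with a back-of-the-envelope model. The sketch below is purely illustrative (our own toy numbers, not a ThruPut Manager algorithm): each job is assumed to need 10 CPU-seconds on a 4-engine system, and once the number of concurrent initiators exceeds the engines, throughput simply stops improving.

```python
# Illustrative only: a toy model of overinitiation, not vendor code.
# Assumes each job needs 10 CPU-seconds and the system has 4 engines.

def batch_elapsed(jobs: int, initiators: int, engines: int = 4,
                  cpu_per_job: float = 10.0) -> float:
    """Rough elapsed time to drain `jobs` with `initiators` running concurrently."""
    active = min(jobs, initiators)
    # Total CPU delivered per second is capped by the engines, no matter
    # how many initiators are started.
    cpu_rate = min(active, engines)
    return jobs * cpu_per_job / cpu_rate

for inits in (2, 4, 8, 16, 32):
    print(f"{inits:>2} initiators -> {batch_elapsed(100, inits):6.0f} s to drain 100 jobs")
# Elapsed time flattens once initiators exceed the engines (4 here); in real
# systems the extra concurrency also adds paging, contention, and delays.
```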

Is it time to revisit your WLM policies?

When you initially set up your workload manager (WLM) policies, it was a LOT of work—something you’re likely not eager to go through again. As long as performance seems to be okay, it’s easy to forget about it—shifting your focus to the myriad other challenges on your plate. But while it’s tempting to ignore your WLM policies, there are plenty of reasons why you shouldn’t—particularly during hardware upgrades.

When you upgrade hardware, you're adding capacity, if not speed, which can translate into better performance initially. However, over time, as increased demand sucks up the cycles you've added, performance may degrade. That's why it's essential to revisit your WLM policies upon every hardware upgrade. Ideally, you want to set realistic goals that deliver performance at, or just inside, your agreed-upon service levels.

In search of the batch sweet spot

Advertisers constantly stress the value of more. More data, more minutes, more channels, more choices: it all makes it easy to believe that more is truly better. In some situations it probably is, but as John Baker recently discussed in our CMG webinar, What's in your nest? More than ever – Less is More!, for performance experts the idea that less is more can seem counter-intuitive.

Don’t get us wrong—we would all love more processor capacity and more memory. But while over-configured datacenters were common in the past, today that’s no longer the case. With more mainframers finding themselves challenged to reduce costs, we rarely get more—except more workloads, more challenges and more demand.

Optimizing batch workload with dynamic initiators


When you hear the term dynamic initiators, you probably think of an initiator that simply starts and stops automatically. The thing is, when these initiators become part of a complete automation solution for z/OS batch processing, like ThruPut Manager, they’re capable of much, much more. ThruPut Manager’s automation algorithms optimize resource utilization and throughput of the workload as a whole by deciding to add or remove initiators based on current system load and datacenter policies.
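
As a rough illustration of the idea, consider the hypothetical policy loop below. It is not ThruPut Manager's actual algorithm; the thresholds, names, and inputs are assumptions we made up to show how a controller could weigh queued work and CPU headroom against datacenter limits before starting or draining initiators.

```python
# Hypothetical sketch of a dynamic-initiator policy loop; thresholds,
# names, and inputs are illustrative assumptions, not product behaviour.
from dataclasses import dataclass

@dataclass
class Policy:
    max_initiators: int = 30       # datacenter ceiling
    min_initiators: int = 4        # always keep a floor for trickle work
    target_cpu_busy: float = 0.85  # leave headroom for online work

def adjust_initiators(active: int, queued_jobs: int,
                      cpu_busy: float, policy: Policy) -> int:
    """Return the initiator count to use for the next sampling interval."""
    if queued_jobs > 0 and cpu_busy < policy.target_cpu_busy:
        # Work is waiting and the box has headroom: add capacity gradually.
        return min(active + 1, policy.max_initiators)
    if cpu_busy > policy.target_cpu_busy and active > policy.min_initiators:
        # The system is saturated: drain an initiator instead of overinitiating.
        return active - 1
    return active

print(adjust_initiators(active=10, queued_jobs=25, cpu_busy=0.70, policy=Policy()))  # 11
print(adjust_initiators(active=10, queued_jobs=25, cpu_busy=0.95, policy=Policy()))  # 9
```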

Pros and cons of manual capacity management

Strict caps, as we’ve mentioned in previous posts, can be harmful to application performance. On the flip side, raising a cap to meet the needs of workload conditions can increase the Rolling 4-Hour Average (R4HA) and, as a result, cancel out the benefits of soft capping. So what is an organization to do?

Well, believe it or not, it is possible to take advantage of the financial benefits of soft capping and meet the needs of your organization at the same time. One technique is to lower the multi-programming level (MPL) of a system by controlling the available batch initiators.
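
To make the technique concrete, here is a minimal sketch, again our own illustration with assumed 5-minute MSU samples and made-up thresholds, of how a rolling 4-hour average is computed and how the batch initiator count could be reduced as the average approaches a soft cap.

```python
# Illustrative sketch only: assumed 5-minute MSU samples and made-up thresholds.
from collections import deque

SAMPLES_PER_4H = 48  # 48 five-minute intervals = 4 hours

def rolling_4h_average(msu_samples: deque) -> float:
    """Average MSU consumption over the most recent 4-hour window."""
    window = list(msu_samples)[-SAMPLES_PER_4H:]
    return sum(window) / len(window)

def batch_initiator_limit(r4ha: float, softcap_msu: float,
                          normal_inits: int = 20, reduced_inits: int = 5) -> int:
    """Lower the batch MPL as the R4HA approaches the soft cap."""
    if r4ha >= 0.9 * softcap_msu:
        return reduced_inits   # keep only critical batch running
    return normal_inits

samples = deque([400.0] * 40 + [520.0] * 8)   # a late spike in demand
r4ha = rolling_4h_average(samples)
print(f"R4HA = {r4ha:.0f} MSU, initiator limit = {batch_initiator_limit(r4ha, softcap_msu=450)}")
```

Reducing initiators this way lowers the MPL, and with it the batch contribution to the R4HA, without resorting to a strict cap that would also punish high-importance work.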