Tag Archives: Workload Manager
Even knowledgeable performance experts make the mistake of throwing more resources at a workload. If 10 buffers are good, 15 would be even better, right? If we have more batch work, throw initiators at it; it will get things moving faster. But it turns out that overinitiation is very similar to adding more toll booths at the entrance to a bridge. Immediately after the toll is paid, the lanes have to shrink back to the number that fits on the bridge. If you have too many toll takers, the merge following the toll becomes chaotic, accidents happen, and no one moves quickly.
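The toll-booth effect can be shown with a toy model. This is purely illustrative (the capacity and overhead numbers are invented, not measured): each extra concurrent initiator adds a little contention overhead, so past a sweet spot total throughput actually falls.

```python
# Toy model of overinitiation (illustrative assumptions only): jobs share
# a fixed CPU capacity, and each extra concurrent initiator adds a small
# contention overhead (dispatch queuing, memory pressure).

def throughput(initiators, cpu_capacity=8.0, overhead_per_init=0.03):
    """Jobs completed per unit time for a given number of initiators."""
    # Usable capacity shrinks as contention overhead grows with concurrency.
    usable = cpu_capacity * max(0.0, 1.0 - overhead_per_init * initiators)
    # Throughput can never exceed one job stream per initiator.
    return min(initiators, usable)

for n in (4, 8, 12, 20, 30):
    print(f"{n:2d} initiators -> throughput {throughput(n):.2f}")
```

Running it shows throughput climbing up to a handful of initiators, then declining as the "merge after the toll" eats the added capacity.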
When you initially set up your workload manager (WLM) policies, it was a LOT of work—something you’re likely not eager to go through again. As long as performance seems to be okay, it’s easy to forget about it—shifting your focus to the myriad other challenges on your plate. But while it’s tempting to ignore your WLM policies, there are plenty of reasons why you shouldn’t—particularly during hardware upgrades.
When you upgrade hardware, you’re adding capacity—if not speed—which can translate into better performance initially. However, over time, as increased demand sucks up the cycles you’ve added, performance may degrade. That’s why it’s essential to revisit your WLM policies upon every hardware upgrade. Ideally, you want to provide at or just below agreed-upon service levels and set realistic goals.
Advertisers constantly stress the value of more. More data, more minutes, more channels, more choices—making it easy to believe that more is truly better. For some situations, it probably is, but as John Baker recently discussed in our CMG webinar, What’s in your nest? More than ever – Less is More!, for performance experts this can seem counter-intuitive.
Don’t get us wrong—we would all love more processor capacity and more memory. But while over-configured datacenters were common in the past, today that’s no longer the case. With more mainframers finding themselves challenged to reduce costs, we rarely get more—except more workloads, more challenges and more demand.
When you hear the term dynamic initiators, you probably think of an initiator that simply starts and stops automatically. The thing is, when these initiators become part of a complete automation solution for z/OS batch processing, like ThruPut Manager, they’re capable of much, much more. ThruPut Manager’s automation algorithms optimize resource utilization and throughput of the workload as a whole by deciding to add or remove initiators based on current system load and datacenter policies.
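The general shape of that decision loop can be sketched in a few lines. To be clear, the thresholds, names, and logic below are illustrative assumptions, not ThruPut Manager's actual algorithm or policy language:

```python
# Hedged sketch of dynamic initiator management: adjust the active
# initiator count from current load and a datacenter policy. All
# thresholds and field names here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Policy:
    min_inits: int = 2
    max_inits: int = 20
    cpu_high: float = 0.90   # back off above this CPU utilization
    cpu_low: float = 0.70    # add capacity below this, if work is queued

def adjust_initiators(active, queued_jobs, cpu_util, policy):
    """Return the initiator count for the next interval."""
    if cpu_util > policy.cpu_high:
        # System is saturated: draining an initiator reduces contention.
        return max(policy.min_inits, active - 1)
    if queued_jobs > 0 and cpu_util < policy.cpu_low:
        # Spare capacity and waiting work: start another initiator.
        return min(policy.max_inits, active + 1)
    return active  # steady state

p = Policy()
print(adjust_initiators(10, queued_jobs=5, cpu_util=0.95, policy=p))  # 9
print(adjust_initiators(10, queued_jobs=5, cpu_util=0.60, policy=p))  # 11
```

The point is that initiators stop being a static number someone set years ago and become a control variable driven by live conditions.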
Strict caps, as we’ve mentioned in previous posts, can be harmful to application performance. On the flip side, raising a cap to meet the needs of workload conditions can increase the Rolling 4-Hour Average (R4HA) and, as a result, cancel out the benefits of soft capping. So what is an organization to do?
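To see why a brief cap raise is so costly, it helps to look at how a trailing 4-hour average behaves. The sketch below uses made-up MSU samples at 5-minute intervals (48 samples per 4-hour window); real values would come from SMF/RMF data, and the exact measurement rules are IBM's, not this code's:

```python
# Illustrative rolling 4-hour average (R4HA) over 5-minute MSU samples.
# 48 samples = 4 hours. Sample values are invented for demonstration.

def rolling_4h_average(msu_samples, window=48):
    """Trailing-window averages; early entries use whatever samples exist."""
    averages = []
    for i in range(len(msu_samples)):
        start = max(0, i - window + 1)
        window_vals = msu_samples[start:i + 1]
        averages.append(sum(window_vals) / len(window_vals))
    return averages

# A one-hour spike to 300 MSUs in an otherwise steady 100-MSU workload.
samples = [100] * 48 + [300] * 12 + [100] * 48
r4ha = rolling_4h_average(samples)
print(max(r4ha))  # 150.0 -- the peak R4HA, well below the 300 MSU spike
```

Note that the R4HA stays elevated for up to four hours after the spike ends. That inertia is exactly why raising a cap "just for a while" can carry the R4HA (and the software bill keyed to it) upward long after the workload burst is over.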
Well, believe it or not, it is possible to take advantage of the financial benefits of soft capping and meet the needs of your organization at the same time. One technique is to lower the multi-programming level (MPL) of a system by controlling the available batch initiators.
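A back-of-the-envelope version of that technique: size the batch initiator count to the headroom between the current R4HA and the soft-cap target. The MSU-per-initiator figure and function name below are hypothetical placeholders; a real implementation would derive consumption per initiator from measurement:

```python
# Hedged sketch: keep projected usage under a soft-cap target by trimming
# batch initiators (lowering the MPL) rather than hard-capping the system.
# All numbers are illustrative.

def initiators_for_cap(r4ha, softcap_msu, msu_per_initiator=15, min_inits=1):
    """Rough batch initiator count that fits under the soft-cap target."""
    headroom = softcap_msu - r4ha          # MSUs left before the cap bites
    allowed = int(headroom // msu_per_initiator)
    return max(min_inits, allowed)

print(initiators_for_cap(r4ha=350, softcap_msu=500))  # 10: plenty of headroom
print(initiators_for_cap(r4ha=480, softcap_msu=500))  # 1: near the cap, throttle batch
```

Because batch is usually the most deferrable work on the system, shedding initiators shaves MSU consumption (and the R4HA) without starving online workloads the way a hard cap would.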