Tag Archives: MSU
Were you burned over the years by recommended capacity controls, such as hard capping or memory fencing? If you’re a long-time mainframe capacity planner, you have quite possibly experienced the cost of implementing such ‘recommendations’: getting paged as key workloads were throttled and performance suffered. As IBM gets practical feedback from the field, it continues to offer better iterations of these tuning concepts. Once an idea has been well field-tested and enhanced, it’s worth taking another look at it. Such is the case with soft capping.
Many people view outsourcing with fear and dread, believing it to be a term synonymous with ‘job loss’ (or, more specifically, ‘losing your job to someone in another country who will do it just as well for one fifth of the price’). But what if I told you outsourcing could actually be a good thing—and potentially allow you to secure your job?
There are plenty of ways employees can make outsourcing actually work in their favor. One U.S. software developer, for example, decided to hire a Chinese developer to do his job, freeing up his time to enjoy his days shopping on eBay, surfing Reddit and watching cat videos. This entrepreneurial soul found the process so easy and rewarding that he repeated it with several other jobs, cashing in on all of them—until he got caught.
By now, if you’ve been following this blog, you’re probably well aware of the many benefits of ThruPut Manager and are likely wondering how it can impact your organization’s Rolling 4-Hour Average (R4HA). Enter the MSU Analyzer.
The MSU Analyzer is used to determine how ThruPut Manager can benefit your business. It analyzes your installation’s SMF data and calculates the 50 highest R4HA peaks of the month, along with the batch contribution to each of those peaks. It then reduces each peak by 25% of its batch MSUs, resulting in 50 new, lower R4HA peaks.
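To make the arithmetic concrete, here is a minimal sketch of that style of calculation. This is not the actual MSU Analyzer; the data class, field names, and sample numbers are all assumptions standing in for values that would really come from SMF records.

```python
# Illustrative sketch only (not the MSU Analyzer itself): given one
# R4HA sample per SMF interval, find the month's highest R4HA peaks
# and model a 25% cut in each peak's batch contribution.
from dataclasses import dataclass

@dataclass
class Interval:
    timestamp: str      # SMF interval end, e.g. "2024-03-01T10:15" (hypothetical)
    r4ha_msus: float    # rolling 4-hour average MSU consumption
    batch_msus: float   # batch contribution to that R4HA value

def reduced_peaks(intervals, top_n=50, batch_cut=0.25):
    """Return (timestamp, peak, modeled peak) for the top_n R4HA peaks,
    where the modeled peak removes batch_cut of the batch MSUs."""
    peaks = sorted(intervals, key=lambda i: i.r4ha_msus, reverse=True)[:top_n]
    return [(p.timestamp, p.r4ha_msus, p.r4ha_msus - batch_cut * p.batch_msus)
            for p in peaks]

# Made-up samples: three intervals with differing batch contributions.
samples = [
    Interval("T1", 820.0, 300.0),
    Interval("T2", 790.0, 120.0),
    Interval("T3", 905.0, 400.0),
]
for ts, before, after in reduced_peaks(samples, top_n=2):
    print(ts, before, after)   # T3 905.0 805.0 / T1 820.0 745.0
```

Note how a peak with heavy batch content (T3) drops much further than one driven mostly by online work, which is the effect the analysis is designed to surface.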
Strict caps, as we’ve mentioned in previous posts, can be harmful to application performance. On the flip side, raising a cap to meet the needs of workload conditions can increase the Rolling 4-Hour Average (R4HA) and, as a result, cancel out the benefits of soft capping. So what is an organization to do?
Well, believe it or not, it is possible to take advantage of the financial benefits of soft capping and meet the needs of your organization at the same time. One technique is to lower the multi-programming level (MPL) of a system by controlling the available batch initiators.
As system utilization grows, applications feel the effects gradually, sometimes starting to slow down as early as 70% CPU utilization and worsening steadily until saturation and timeouts are reached. When your Rolling 4-Hour Average (R4HA) exceeds a system’s cap utilization, however, the effects are instantaneous, and they come without warning. Your cap utilization can move from 80% to 90% or even 99.9% of a predetermined cap level, a level that may be far below your machine’s full capacity, without any performance interruptions; but once you exceed that threshold, watch out. Some organizations are exploring creative means to better exploit soft capping and avoid the potential impacts described above.
Soft capping—the act of artificially constraining a system so the MLC bill cannot exceed the MSU level of the cap—offers excellent financial benefits. So why isn’t everyone doing it? The answer lies, for the most part, in how IBM penalizes R4HA overages.
Basically, an installation will be billed based on the peak R4HA or the peak Defined Capacity (DC)/Group Capacity (GC) limit, whichever is lower. This means the R4HA may occasionally exceed the soft cap limit without charge—but not without inconvenience.
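The billing rule above reduces to taking the lower of two numbers. A minimal sketch, with made-up MSU figures and a function name of my own choosing (this is not IBM’s pricing tooling):

```python
# Sketch of the billing rule described above: billable MSUs are the
# lower of the month's peak R4HA and the soft cap (DC/GC limit).
def billable_msus(peak_r4ha, capacity_limit):
    """Billed MSUs: min(peak R4HA, Defined/Group Capacity limit)."""
    return min(peak_r4ha, capacity_limit)

# A 600-MSU soft cap with a 650-MSU R4HA excursion still bills at 600,
# though workloads feel the cap while the R4HA remains above it.
print(billable_msus(650, 600))  # 600
print(billable_msus(540, 600))  # 540
```

This is exactly why an R4HA overage is free on the invoice but not free operationally: the system is being throttled for the whole time the R4HA sits above the cap.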
If you had to guess which workloads were driving up your organization’s Rolling 4-Hour Average (R4HA)—and, consequently, your software costs—what would they be? While it makes sense that spikes in online systems would be the most likely culprits of R4HA peaks, data center analyses show that batch processing plays a greater role than most would expect. Why? Because CPCs usually have multiple LPARs, and batch on a development LPAR can make a surprisingly large contribution to the peak R4HA.
To add to the confusion, the standard tools installations rely on detect raw workload peaks, which are often different from R4HA peaks. In many analyses, the R4HA peaks occur at completely different times than the raw workload peaks and, as a result, usually go undetected.
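A small example makes the timing mismatch visible. The sketch below computes a rolling 4-hour average over hourly MSU samples; the data is invented, and the interval granularity is a simplifying assumption (real R4HA is derived from SMF intervals).

```python
# Why R4HA peaks need not coincide with raw utilization peaks: the
# rolling 4-hour average can crest well after a momentary spike.
from collections import deque

def rolling_4h_average(msus_per_interval, intervals_per_4h):
    """R4HA series: mean MSUs over a sliding 4-hour window."""
    window, out = deque(), []
    for m in msus_per_interval:
        window.append(m)
        if len(window) > intervals_per_4h:
            window.popleft()
        out.append(sum(window) / len(window))
    return out

# Hourly samples (4 intervals = 4 hours): one sharp online spike at
# hour 1, then a sustained batch run. The R4HA peaks during the batch.
hourly = [100, 900, 100, 100, 600, 600, 600, 600]
r4ha = rolling_4h_average(hourly, intervals_per_4h=4)
spike_hour = hourly.index(max(hourly))
r4ha_peak_hour = r4ha.index(max(r4ha))
print(spike_hour, r4ha_peak_hour)  # 1 7
```

The instantaneous peak (900 MSUs at hour 1) is what a standard monitor flags, yet the R4HA tops out at 600 MSUs six hours later, sustained by batch. A tool watching only raw peaks would miss the interval that actually drives the bill.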