Tag Archives: GC
WLM Resource Groups have been around since the introduction of Goal Mode. Their use is often discouraged as their static nature can work against the dynamic nature of Goal Mode.
What if Resource Groups could be automated? What if you could dynamically move selected work in and out of Resource Groups with varying maximums? What if this automation was sensitive to the R4HA as well as application business priorities?
Rather than just limiting consumption of out-of-control workloads, intelligent automation can harness the power of Resource Groups to reduce software costs without capping or impacting critical workloads. Learn how Resource Groups really work under the covers, and how groundbreaking automation with ThruPut Manager delivers new value to an old function.
We all have our go-to tools. In z/OS, products associated with capping and automation are becoming more and more common — capping because it offers the most effective method to control software costs, and automation because even your best analyst can’t balance workloads at machine speeds. With z/OS, IBM includes a number of features and tools to assist in these areas, such as Defined and Group Capacity (DC/GC) and Capacity Provisioning Manager (CPM). The question is, what more can be done?
This presentation will explore these free capabilities from IBM and provide details on their use, functionality, and limitations. We’ll explain how ThruPut Manager integrates seamlessly and automatically addresses the inherent limitations of soft capping. By reducing demand, ThruPut Manager allows you to safely reduce your soft caps even further, while its automation capabilities ensure optimal system loading.
While there are definitely some advantages to manual capacity management, as we discussed last week, the most effective method of meeting both the capacity and financial needs of an organization is automated capacity management—provided it’s done effectively.
The goal of any automated system should be to maximize throughput and performance, limit only non-critical workloads, and manage the overall demand within the chosen capacity levels. You ideally want to avoid, or at least limit, the entire LPAR (or LPARs) being capped.
As system utilization grows, applications feel the effects gradually, sometimes slowing down as early as 70% CPU utilization and degrading progressively until saturation and timeouts are reached. When your Rolling 4-Hour Average (R4HA) exceeds a system's cap, however, the effects are instantaneous, and they come without warning. Utilization can climb from 80% to 90% or even 99.9% of a predetermined cap level (a level that may be far below your machine's full capacity) without any performance interruption, but once you exceed that threshold, watch out. Some organizations are exploring creative ways to better exploit soft capping while avoiding the potential impacts described above.
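To make the R4HA behavior concrete, here is a minimal sketch of how a rolling 4-hour average of MSU consumption can be computed from periodic utilization samples. This is purely illustrative: the function name, the 5-minute sample interval, and the sample values are assumptions, and in practice the R4HA is reported by z/OS itself (e.g. via RMF/SMF data), not computed by user code.

```python
from collections import deque

def rolling_4hr_average(samples_msu, interval_min=5):
    """Illustrative sketch: compute a Rolling 4-Hour Average (R4HA)
    over a stream of MSU utilization samples taken every
    `interval_min` minutes. Returns the R4HA after each sample."""
    window_len = (4 * 60) // interval_min  # samples in a 4-hour window
    window = deque(maxlen=window_len)      # old samples fall out automatically
    r4ha = []
    for msu in samples_msu:
        window.append(msu)
        r4ha.append(sum(window) / len(window))
    return r4ha
```

Because the average spans four hours, a sudden burst of work raises the R4HA only gradually, which is why a system can run close to its cap for some time before the cap actually takes effect.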
Soft capping—the act of artificially constraining a system so the MLC bill cannot exceed the MSU level of the cap—offers excellent financial benefits. So why isn’t everyone doing it? The answer lies, for the most part, in how IBM penalizes R4HA overages.
Basically, an installation is billed based on the peak R4HA or the peak Defined Capacity (DC)/Group Capacity (GC) limit, whichever is lower. This means the R4HA may occasionally exceed the soft cap limit without charge—but not without inconvenience.
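The billing rule above reduces to a simple min/max calculation. The sketch below is a hypothetical helper for illustration only; actual sub-capacity billing is determined from SCRT reports submitted to IBM, not from a formula like this.

```python
def billable_msus(hourly_r4ha_peaks, capacity_limit):
    """Illustrative sketch of the sub-capacity billing rule:
    the bill is based on the peak R4HA observed in the period,
    but never more than the Defined/Group Capacity limit."""
    peak_r4ha = max(hourly_r4ha_peaks)
    return min(peak_r4ha, capacity_limit)
```

So if the R4HA briefly spikes to 520 MSUs against a 500 MSU soft cap, the bill reflects 500 MSUs; the overage is not charged, but the workload was capped (and possibly delayed) while the R4HA stayed above the limit.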
Datacenter management and their staff are grappling with the realities of constraint-driven service and performance. This webinar discusses the challenges and solutions that z/OS customers are deploying to manage in today's environment of capacity constraints. We look at the impacts of high utilization, hard and soft capping, managing the Rolling 4-Hour Average (R4HA), and how to effectively manage IBM's Sub-Capacity pricing.
Conventional techniques and products focus only on system capacity limits and do not address the impacts on workloads. We look at a novel approach that automatically controls the demand of low-priority work and ensures peak performance of critical applications in capped environments. This unique technique allows the datacenter to further lower capacity levels and save MSUs.