Tag Archives: automation
If you’re in performance or capacity planning, your value lies in interpreting data and creating plans of action for automation to carry out. Machines can only do this with a complete body of information; they don’t have your intuition, your experience, or your ‘sense of the system’ – that intangible feel we develop as we understand our hardware and software at a deep level. This is the good stuff, the work that makes life interesting.
As a child, I was always impressed by how Tom Sawyer got other people to do his work for him. Instead of offering a trade, Sawyer flipped the challenge on its head: he made whitewashing a fence seem so pleasurable that people paid him for the privilege. I can only imagine what a speaker and salesman he would have grown up to be. What a gift! I think we all feel overworked much of the time. Between layoffs and retirements, most of us are juggling more than one job, with too many tasks not to our liking. Even in the rarefied waters of complex IT projects, there are still tasks that can feel as unrewarding and uninteresting as whitewashing a fence: either the task challenged you many years ago and no longer does, or it is simply uninteresting to you personally.
WLM Resource Groups have been around since the introduction of Goal Mode. Their use is often discouraged as their static nature can work against the dynamic nature of Goal Mode.
What if Resource Groups could be automated? What if you could dynamically move selected work in and out of Resource Groups with varying maximums? What if this automation was sensitive to the R4HA as well as application business priorities?
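To make the idea concrete, here is a minimal sketch of what R4HA-sensitive automation of a Resource Group maximum might look like. All names (`ResourceGroup`, `adjust_group`, the thresholds) are illustrative assumptions for this post, not ThruPut Manager's actual algorithm or WLM's API:

```python
from dataclasses import dataclass

@dataclass
class ResourceGroup:
    name: str
    max_su_per_sec: int  # current capacity maximum assigned to the group

def adjust_group(group: ResourceGroup, r4ha: float, defined_capacity: float,
                 floor: int, ceiling: int) -> ResourceGroup:
    """Tighten the group's maximum as the R4HA approaches the defined
    capacity, and relax it again when headroom returns.

    The 5% / 20% thresholds are arbitrary illustrative values."""
    headroom = (defined_capacity - r4ha) / defined_capacity
    if headroom < 0.05:
        # Nearly at the cap: throttle low-priority work hard
        group.max_su_per_sec = floor
    elif headroom < 0.20:
        # Approaching the cap: scale the maximum between floor and ceiling
        group.max_su_per_sec = int(floor + (ceiling - floor) * (headroom / 0.20))
    else:
        # Plenty of headroom: let the group run unconstrained
        group.max_su_per_sec = ceiling
    return group
```

The point of the sketch is the feedback loop: the maximum is not a static number set once in the WLM policy, but a value that tracks the R4HA, throttling discretionary work only when the cap is actually threatened.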
Rather than just limiting consumption of out-of-control workloads, intelligent automation can harness the power of Resource Groups to reduce software costs without capping or impacting critical workloads. Learn how Resource Groups really work under the covers, and how groundbreaking automation with ThruPut Manager delivers new value to an old function.
We all have our go-to tools. In z/OS, products associated with capping and automation are becoming more and more common — capping because it offers the most effective method to control software costs, and automation because even your best analyst can’t balance workloads at machine speeds. With z/OS, IBM includes a number of features and tools to assist in these areas, such as Defined and Group Capacity (DC/GC) and Capacity Provisioning Manager (CPM). The question is, what more can be done?
This presentation will explore these free capabilities from IBM and provide details on their use, functionality, and limitations. We’ll explain how ThruPut Manager integrates seamlessly with these tools and automatically addresses the inherent limitations of soft capping. By reducing demand, ThruPut Manager allows you to safely reduce your soft caps even further, while its automation capabilities ensure optimal system loading.
We take it for granted that technology will continue to get faster. In enterprise computing, this means that we have counted on faster CPUs to come along to help us cope with ever-growing workloads. IBM has stated that the current CMOS-based processors are reaching their design limits. In short, mainframe engines will not get much faster, and single-threaded workloads are at risk!
A direct consequence of this fact is that datacenters need to look at more automation to improve throughput and optimize system resources. Add the financial pressures that have led to widespread use of resource capping, and datacenter staff are overwhelmed trying to react at machine speeds. They simply cannot keep up with the rapidly changing environment of systems under constraints.
Datacenter management and their staff are grappling with the realities of constraint-driven service and performance. This webinar discusses the challenges and solutions that z/OS customers are deploying to manage in today’s environment of capacity constraints. We look at the impacts of high utilization, hard and soft capping, managing the Rolling 4-Hour Average (R4HA), and making effective use of IBM’s Sub-Capacity pricing.
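For readers newer to Sub-Capacity pricing: the R4HA is the rolling average of an LPAR’s MSU consumption over the preceding four hours, built from 5-minute interval samples (48 samples per window). A minimal sketch of the calculation, with illustrative sample values:

```python
from collections import deque

WINDOW = 48  # 4 hours of 5-minute intervals

def r4ha_stream(samples):
    """Yield the rolling 4-hour average after each new MSU sample.

    Until 48 samples have arrived, the average covers only the
    samples seen so far."""
    window = deque(maxlen=WINDOW)  # oldest sample drops off automatically
    for msu in samples:
        window.append(msu)
        yield sum(window) / len(window)
```

This averaging is why the R4HA behaves so differently from instantaneous utilization: once the window is full, a single 5-minute spike moves the average by only 1/48th of its size, while sustained load raises it steadily, and it is the peak of that average that drives the monthly software bill.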
Conventional techniques and products focus only on system capacity limits and do not address the impacts on workloads. We look at a novel approach that automatically controls the demand of low-priority work and ensures peak performance of critical applications in capped environments. This unique technique allows the datacenter to lower capacity levels further and save MSUs.