A recent study demonstrated that when you factor in all the costs of mainframes, UNIX boxes, and Wintel hardware (hardware, support, software, people, environmentals), mainframes were cheaper relative to the work they produce. Mainframes are fast and reliable, aren’t subject to security breaches the way other platforms are, and just a few people can manage thousands of applications on a single box.
A guest post by Denise P. Kalm – When BMC Software releases the results of its latest survey showing that 90% of the participants are confident in a long-term future for mainframes, you have to listen. More importantly, the management teams who keep trying to move off the platform need to read the report. While security and availability are frequently cited as important factors (who has hacked a mainframe lately?), too often forgotten is the unequalled ability to manage costs on this platform.
Very often, with distributed systems, the cost is the cost; you pay for seat licenses or for the total capacity of the box or some other immutable metric. And let’s not forget the lower availability statistics, nor the fact that Wintel boxes are the biggest targets for hackers. But back to cost, because every systems programmer has had to become an active participant in managing and reducing costs. Which platform is the most flexible in terms of cost?
Over the years, IBM software pricing has changed to reflect shifting industry dynamics. That was true when IBM first introduced sub-capacity pricing. As physical machines were increasingly being split into several LPARs, single installations weren’t consistently using the entire capacity of a CPC.
Sub-capacity pricing took that change into account by offering customers more flexibility. Today’s pricing model is tied to the Rolling 4-Hour Average (R4HA): the monthly bill is driven by the peak rolling four-hour average utilization of the LPARs, not by the machine’s total capacity.
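The mechanics behind the R4HA are easy to illustrate. Below is a minimal Python sketch (not IBM’s actual SCRT tooling, which works from SMF records; the five-minute sampling interval and the MSU figures are assumptions made for this example) showing how a rolling four-hour average smooths short spikes, so the billable peak can sit well below the highest instantaneous usage:

```python
from collections import deque

def rolling_4hr_average(msu_samples, interval_min=5):
    """Rolling 4-hour average (R4HA) over MSU utilization samples
    taken every `interval_min` minutes."""
    window = deque(maxlen=(4 * 60) // interval_min)  # holds one 4-hour window
    averages = []
    for msu in msu_samples:
        window.append(msu)                 # oldest sample drops off once full
        averages.append(sum(window) / len(window))
    return averages

# A one-hour spike to 900 MSUs on a 300-MSU baseline bills well below 900:
samples = [300] * 36 + [900] * 12 + [300] * 24
print(f"peak R4HA: {max(rolling_4hr_average(samples)):.0f} MSUs")  # ~450, not 900
```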
We all have our go-to tools. In z/OS, products associated with capping and automation are becoming more and more common — capping because it offers the most effective method to control software costs, and automation because even your best analyst can’t balance workloads at machine speeds. With z/OS, IBM includes a number of features and tools to assist in these areas, such as Defined and Group Capacity (DC/GC) and Capacity Provisioning Manager (CPM). The question is, what more can be done?
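For intuition, defined capacity behaves as a soft cap keyed to the R4HA rather than to instantaneous usage. The fragment below is a deliberately simplified model (the real WLM algorithm phases capping in gradually; the function name and MSU values here are illustrative assumptions):

```python
def softcap_active(r4ha_msus, defined_capacity_msus):
    """Simplified defined-capacity (soft cap) test: WLM begins capping an
    LPAR only when its rolling 4-hour average exceeds the defined capacity,
    so brief bursts above the cap are tolerated."""
    return r4ha_msus > defined_capacity_msus

# Instantaneous usage may be 600 MSUs, but no capping occurs while the R4HA is 420:
print(softcap_active(r4ha_msus=420, defined_capacity_msus=500))  # False
```

Because the trigger is the four-hour average, short bursts above the defined capacity go unpenalized as long as the average stays under it; that tolerance is exactly what demand-side automation can exploit.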
This presentation will explore these free capabilities from IBM and provide details on their use, functionality, and limitations. We’ll explain how ThruPut Manager integrates seamlessly and automatically addresses the inherent limitations of soft capping. By reducing demand, ThruPut Manager allows you to safely reduce your soft caps even further, while its automation capabilities ensure optimal system loading.
We take it for granted that technology will continue to get faster. In enterprise computing, this means that we have counted on faster CPUs to come along to help us cope with ever-growing workloads. IBM has stated that the current CMOS-based processors are reaching their design limits. In short, mainframe engines will not get much faster, and single-threaded workloads are at risk!
A direct consequence is that datacenters need to look at more automation to improve throughput and optimize system resources. Add the financial pressures that have led to widespread use of resource capping, and datacenter staff are overwhelmed: no human can make adjustments at machine speed, and they simply cannot keep up with the rapidly changing behavior of systems under constraints.
Datacenter management and their staff are grappling with the realities of constraint-driven service and performance. This webinar discusses the challenges z/OS customers face and the solutions they are deploying in today’s environment of capacity constraints. We look at the impacts of high utilization, hard and soft capping, and managing the Rolling 4-Hour Average (R4HA), and explain how to manage IBM’s sub-capacity pricing effectively.
Conventional techniques and products focus only on system capacity limits and do not address the impacts on workloads. We look at a novel approach that automatically controls the demand of low-priority work and ensures peak performance of critical applications in capped environments. This unique technique allows the datacenter to lower capacity levels further and save MSUs.
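As a rough sketch of that approach (the thresholds, priority names, and job-queue interface below are invented for illustration and are not ThruPut Manager’s actual API), demand control means holding back discretionary work before the R4HA ever reaches the cap:

```python
def manage_demand(r4ha, softcap, job_queue, headroom=0.90):
    """Illustrative demand control: as the R4HA approaches the soft cap,
    progressively hold low-priority batch so critical work keeps running."""
    utilization = r4ha / softcap
    if utilization >= 1.0:
        job_queue.hold(priority_at_or_below="MEDIUM")  # at the cap: shed more work
    elif utilization >= headroom:
        job_queue.hold(priority_at_or_below="LOW")     # nearing the cap: defer batch
    else:
        job_queue.release_all()                        # plenty of headroom: run everything
```

The point of acting at the headroom threshold, rather than at the cap itself, is that critical work never sees the cap at all.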
Back in the day, IBM software pricing was pretty straightforward: it was tied to the capacity of the machine to which it was licensed. Once it became possible to split a physical machine into multiple LPARs, however, things became much more complicated. That’s not to say the added complexity didn’t come with its share of benefits.