Reduce Peak 4HRA and Software MLC Costs

ThruPut Manager manages workload demand to reduce capacity utilization, as measured by the rolling 4-hour average (R4HA), when sub-capacity pricing is used with or without capping.

Automate z/OS Batch

ThruPut Manager balances workload arrival, importance, and resource availability.

Make the most of scarce resources

Because money doesn’t grow on trees, let us lower your MSU consumption and MLC costs.

Make Way for Mobile

As mobile applications take up more CPU at unpredictable times, let ThruPut Manager take low-importance batch out of the equation and make room for your high-priority workload.

Country Multiplex Pricing is here

Use ThruPut Manager automation to lower your MSU baseline today and find software license savings, with or without capping, when you move to CMP.

Automate production control

Manage z/OS execution according to your CA 7 schedule and due-out times, ensuring automated on-time completion with minimal intervention and freeing you for other valuable tasks.

Our Customers

ThruPut Manager installations range from individual corporate datacenters to global outsourcing providers in the major industry sectors, including banking, insurance, and government.

Tag Archives: IBM

Quality matters, sometimes more than cost

A recent study demonstrated that when you add up all the costs of mainframes, UNIX boxes, and Wintel hardware (hardware, support, software, people, environmentals), mainframes were cheaper based on what they could produce. Mainframes work well and fast, aren’t subject to security breaches the way other platforms are, and just a few people can manage thousands of applications on a single box.

Who believes the mainframe is dead?

A guest post by Denise P. Kalm – When BMC Software releases the results of its latest survey showing that 90% of participants are confident in a long-term future for mainframes, you have to listen. More importantly, the management teams who keep trying to move off the platform need to read the report. While security and availability are frequently cited as important factors – who has hacked a mainframe lately? – too often forgotten is the unequalled ability to manage costs on this platform.

Very often, with distributed systems, the cost is the cost; you pay for seat licenses or for the total capacity of the box or some other immutable metric. And let’s not forget the lower availability statistics, nor the fact that Wintel boxes are the biggest targets for hackers. But back to cost, because every systems programmer has had to become an active participant in managing and reducing costs. Which platform is the most flexible in terms of cost?

Sub-capacity pricing updates

Over the years, IBM software pricing has changed to reflect shifting industry dynamics. That was true when IBM first introduced sub-capacity pricing. As physical machines were increasingly being split into several LPARs, single installations weren’t consistently using the entire capacity of a CPC.

Sub-capacity pricing took that change into account by offering customers more flexibility. Today’s pricing model is tied to the Rolling 4-Hour Average (R4HA): each month’s charges reflect the peak R4HA of the LPARs where a product runs, rather than the full capacity of the machine.
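To make the R4HA concrete, here is a small Python sketch (an illustration only; actual sub-capacity reporting is done with IBM’s Sub-Capacity Reporting Tool against SMF data) that computes the peak R4HA from five-minute utilization samples:

```python
from collections import deque

def peak_r4ha(msu_samples, interval_minutes=5):
    """Return the peak Rolling 4-Hour Average (R4HA) over the samples.

    msu_samples: average MSU consumption per recording interval,
    in chronological order (e.g. one value per 5-minute interval).
    """
    window = deque(maxlen=(4 * 60) // interval_minutes)  # 4 hours of samples
    peak = 0.0
    for sample in msu_samples:
        window.append(sample)
        peak = max(peak, sum(window) / len(window))  # average of the window
    return peak

# A day that runs flat at 400 MSUs except for a 2-hour spike to 700:
samples = [400] * 264 + [700] * 24
print(round(peak_r4ha(samples)))  # ~550: the spike, averaged over 4 hours
```

In this example the bill would be driven by the 550-MSU peak R4HA, not by the 700-MSU instantaneous spike and not by the machine’s full capacity.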

z/OS capping and automation: What’s in your tool box? (webinar)

We all have our go-to tools. In z/OS, products associated with capping and automation are becoming more and more common — capping because it offers the most effective method to control software costs, and automation because even your best analyst can’t balance workloads at machine speeds. With z/OS, IBM includes a number of features and tools to assist in these areas, such as Defined and Group Capacity (DC/GC) and Capacity Provisioning Manager (CPM). The question is, what more can be done?

This presentation will explore these free capabilities from IBM and provide details on their use, functionality, and limitations. We’ll explain how ThruPut Manager integrates seamlessly with these tools and automatically addresses the inherent limitations of soft capping. By reducing demand, ThruPut Manager allows you to safely lower your soft caps even further, while its automation capabilities ensure optimal system loading.
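To see why soft capping alone has limits, consider this simplified sketch of the capacity side (a conceptual illustration, not IBM’s actual WLM algorithm): with Defined Capacity, once an LPAR’s R4HA reaches the defined capacity, WLM caps the LPAR, and every workload on it is squeezed, critical or not:

```python
DEFINED_CAPACITY = 450  # MSUs (hypothetical soft cap)

def consumable_msu(demand_msu, r4ha):
    """MSUs the LPAR may consume this interval under a soft cap.

    Simplified: once the R4HA reaches the defined capacity, WLM caps
    the LPAR at that capacity, for critical and discretionary work alike.
    """
    if r4ha >= DEFINED_CAPACITY:
        return min(demand_msu, DEFINED_CAPACITY)  # capped, indiscriminately
    return demand_msu  # not yet capped: run at full demand

print(consumable_msu(demand_msu=600, r4ha=460))  # -> 450: capped
print(consumable_msu(demand_msu=600, r4ha=430))  # -> 600: not capped yet
```

The limitation is plain: the cap is indiscriminate, which is why demand-side automation matters.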

Breaking the speed limit: Automation is the engine (webinar)

We take it for granted that technology will continue to get faster. In enterprise computing, this means that we have counted on faster CPUs to come along to help us cope with ever-growing workloads. IBM has stated that current CMOS-based processors are reaching their design limits. In short, mainframe engines will not get much faster, and single-threaded workloads are at risk!

A direct consequence is that datacenters need to look at more automation to improve throughput and optimize system resources. Add the financial pressures that have led to widespread use of resource capping, and datacenter staff are overwhelmed trying to adjust at machine speeds. They simply cannot keep up with the rapidly changing environment of systems under constraints.

Capping capacity vs. capping demand (webinar)

Datacenter management and their staff are grappling with the realities of constraint-driven service and performance. This webinar discusses the challenges and solutions that z/OS customers are deploying to manage in today’s environment of capacity constraints. We look at the impacts of high utilization, hard and soft capping, and managing the Rolling 4-Hour Average (R4HA), and at how to manage IBM’s sub-capacity pricing effectively.

Conventional techniques and products focus only on system capacity limits and do not address the impacts on workloads. We look at a novel approach that automatically controls the demand from low-priority work and ensures peak performance of critical applications in capped environments. This unique technique allows the datacenter to lower capacity levels further and save MSUs.
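Here is a minimal, hypothetical sketch of that demand-side idea (an illustration of the concept, not ThruPut Manager’s actual algorithm): discretionary batch is held back before the cap is reached, so critical work keeps its headroom and the cap itself can sit lower:

```python
# Capping demand rather than capacity: hold low-importance batch
# *before* the soft cap is reached, instead of letting the cap
# squeeze every workload once the R4HA hits it.

CAP_MSU = 450          # soft cap (defined capacity), in MSUs
RELEASE_MARGIN = 0.90  # hold discretionary work above 90% of the cap

def select_jobs(queue, current_r4ha):
    """Decide which queued jobs to release this interval.

    queue: list of (job_name, importance, estimated_msu) tuples,
           where importance 1 is the most important.
    """
    released = []
    projected = current_r4ha  # crude projection, for illustration only
    for name, importance, msu in sorted(queue, key=lambda j: j[1]):
        discretionary = importance >= 4
        # Discretionary work waits once we near the cap; critical
        # work is released regardless, up to the cap itself.
        limit = CAP_MSU * RELEASE_MARGIN if discretionary else CAP_MSU
        if projected + msu <= limit:
            released.append(name)
            projected += msu
    return released

queue = [("PAYROLL", 1, 60), ("REPORTS", 4, 80), ("ARCHIVE", 5, 50)]
print(select_jobs(queue, current_r4ha=350))
# -> ['PAYROLL']: discretionary jobs wait; critical work runs.
```

Because low-importance work is deferred ahead of the cap, the cap can be set lower without critical applications ever feeling it.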

The Benefits of Sub-Capacity Pricing

Back in the day, IBM software pricing was pretty straightforward: it was tied to the capacity of the machine to which it was licensed. Once it became possible to split physical machines into multiple LPARs, however, things became much more complicated. That’s not to say the added complexity didn’t come with its share of benefits.