
Tag Archives: z13

Designed for scalability, how does PCS keep up?

Key to the scalability of ThruPut Manager Production Control Services (PCS) is its use of proprietary, high-performance algorithms. It is rarely called out, but choosing the right algorithms can make all the difference when volumes increase. Another important capability is automated recovery; when recovery isn't possible, PCS notifies operations so they can select an action. And any product that interacts with z/OS needs to keep pace with IBM releases, or its scalability will be in doubt.

Can your solutions scale with demand?

ThruPut Manager users will see key improvements in the batch workloads managed by CA 7, thanks to an add-on component for ThruPut Manager called Production Control Services (PCS). We've designed PCS to ensure scalability as your batch workloads increase, and they will certainly increase.

BATCH…BETTER – Help users set SLAs

With the focus on compliance, most companies have SLAs for online work, but many don't have them for batch. We all know only too well, though, that users have expectations every bit as stringent as SLAs. Everyone is tracking how well you manage batch, but because the batch SLAs aren't documented, they are whatever the users think they should be. How can you possibly manage that?

Cache me if you can

When many of us started working on computers, memory was very expensive and sharply limited system performance and capacity. As costs came down, hardware vendors found new ways to manage memory effectively, including virtual memory. The most recent is offering multiple levels of cache: memory located closer to the CPU and designed to hold frequently accessed data. We can't do much about the CPU cache itself, but it is helpful to understand how it works.
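
We can't program the hardware cache directly, but the principle it implements, keeping recently used items in a small fast store in front of a slower one, is easy to illustrate in software. Here is a minimal Python sketch; it is purely illustrative, the names, sizes, and access pattern are made up, and it models the idea rather than the z13 cache hierarchy:

    from collections import OrderedDict

    class TinyCache:
        """Keep the most recently used items in a small, fast store (the 'cache')
        in front of a slow lookup (the 'memory'). Illustrative only."""

        def __init__(self, capacity, slow_lookup):
            self.capacity = capacity
            self.slow_lookup = slow_lookup      # called only on a cache miss
            self.store = OrderedDict()
            self.hits = self.misses = 0

        def get(self, key):
            if key in self.store:               # hit: the data is already close at hand
                self.hits += 1
                self.store.move_to_end(key)     # mark as most recently used
                return self.store[key]
            self.misses += 1                    # miss: go out to the slow store
            value = self.slow_lookup(key)
            self.store[key] = value
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)  # evict the least recently used item
            return value

    # A workload that keeps re-reading a few hot records mostly hits in cache.
    cache = TinyCache(capacity=4, slow_lookup=lambda k: k * k)
    for key in [1, 2, 3, 1, 2, 3, 1, 2, 3, 9]:
        cache.get(key)
    print(f"hits={cache.hits} misses={cache.misses}")   # hits=6 misses=4

A workload that keeps coming back to the same few items is served almost entirely from the fast store, which is exactly why locality matters to the hardware cache as well.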

Switching to Country Multiplex Pricing? Minimize your baseline now

When moving batch workloads around to lower your R4HA (rolling four-hour average), duplicate product peaks are a common challenge, and they can cause their fair share of headaches. To remedy this, IBM recently announced a new pricing model, Country Multiplex Pricing (CMP). The new model is designed to give you greater flexibility to move and run workloads across your data centers in a single country, with less financial impact than you would experience by staying with your present VWLC model.
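
Behind the terminology, the R4HA is simple arithmetic: at each reporting interval, the average MSU consumption over the most recent four hours, with sub-capacity charges keyed to the monthly peak of that rolling average. A hedged sketch of the calculation in Python (the five-minute interval and the sample values are assumptions for illustration; the real input comes from SMF data processed by SCRT):

    from collections import deque

    SAMPLES_PER_4H = 48          # assuming one MSU sample every 5 minutes

    def rolling_4h_average(msu_samples):
        """Yield the rolling four-hour average MSU after each sample arrives."""
        window = deque(maxlen=SAMPLES_PER_4H)
        for msu in msu_samples:
            window.append(msu)
            yield sum(window) / len(window)

    # Hypothetical day: steady online work at 300 MSU plus a one-hour batch spike at 900 MSU.
    samples = [300] * 48 + [900] * 12 + [300] * 36
    peak = max(rolling_4h_average(samples))
    print(f"peak R4HA: {peak:.0f} MSU")   # 450 MSU: the value sub-capacity charges key off

In this made-up example, a one-hour 900 MSU batch spike lifts the peak R4HA to 450 MSU even though the base load is only 300 MSU, and that peak is exactly what deferring or spreading low-importance batch can shave.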

Forecasting the future

For a performance or capacity planner, knowing how to read between the lines of vendor announcements, particularly those from IBM, is an essential skill. Many of these announcements contain a roadmap to your future challenges, and reading them early may even give you time to plan for them.

Take IBM’s most recent end-of-May press release, for example, which reports that the mainframe holds the leading worldwide revenue share for $250K servers, with revenue growing 118 percent year to year. To the uninitiated, this announcement simply heralds the growth of mainframe sales, particularly of the new z13 announced in January. To the veteran mainframer, however, it brings both peace of mind (reinforcing that mainframes, and our jobs, are alive and well) and a little bit of fear, knowing that our jobs just got a whole lot more complex.

What’s in your nest? More than ever – less is more! (webinar)

In an RNI (Relative Nest Intensity) context, IBM says the number of concurrent tasks is the primary factor in workload performance, particularly with today’s high-frequency processors. The z13 architecture has a vast capacity to serve multiple concurrent applications, and regardless of how much work you throw at it, z/OS will do its best to provide some service to all workloads. As a result, we generally throw all the work at the machine at once.

Are you running too much or too little? Let’s look at some measurements and techniques to control how much concurrent work you are running. Cycles per instruction, initiators, LPARs, logical processors, and weights are a few of the topics we’ll explore. You will find techniques to improve the performance, throughput, and efficiency of your mainframe. Come and see just how much less you should be doing.
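
Cycles per instruction, for instance, comes down to a ratio of two counters. On z Systems the real values come from the CPU Measurement Facility, surfaced in SMF type 113 records, but a hedged sketch with made-up numbers shows the idea:

    def cycles_per_instruction(cycles, instructions):
        """CPI = cycles consumed / instructions completed.
        A rising CPI for the same work usually means more time spent waiting on
        the memory 'nest' (caches, contention) rather than executing."""
        return cycles / instructions

    # Made-up counters for the same job run at low and at high concurrency.
    low_concurrency  = cycles_per_instruction(cycles=5.0e12, instructions=2.5e12)
    high_concurrency = cycles_per_instruction(cycles=7.5e12, instructions=2.5e12)
    print(f"CPI low={low_concurrency:.1f}  high={high_concurrency:.1f}")  # 2.0 vs 3.0

Same instruction count, 50 percent more cycles: the extra cycles are the price of running more concurrent work than the memory nest can comfortably feed.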