Reduce Peak 4HRA and Software MLC Costs

ThruPut Manager manages workload demand to reduce capacity utilization, based on the four-hour rolling average (4HRA), when sub-capacity pricing is used with or without capping.

Automate z/OS Batch

ThruPut Manager balances workload arrival, importance, and resource availability.

Make the most of scarce resources

Because money doesn’t grow on trees, let us lower your MSU consumption and MLC costs.

Make Way for Mobile

As mobile applications consume more CPU at unpredictable times, let ThruPut Manager take low-importance batch out of the equation and make room for your high-priority workload.

Country Multiplex Pricing is here

Use ThruPut Manager automation to lower your MSU baseline today and find software license savings, with or without capping, when you move to CMP.

Automate production control

Manage z/OS execution according to your CA 7 schedule and due-out times, ensuring automated on-time completion with minimal intervention and freeing you for other valuable tasks.

Our Customers

ThruPut Manager installations range from individual corporate datacenters to global outsourcing providers in the major industry sectors, including banking, insurance, and government.

Tag Archives: IBM mainframe

Time-tested and proven – DJC and JEC compared

As part of z/OS V2.2, IBM has included Job Execution Control (JEC), a way to group jobs so you can define dependencies within a group. This concept offers a lot of scheduling flexibility. It sounds new and great, until you realize that ThruPut Manager has offered this capability with Dependent Job Control (DJC) for 20 years. Time-tested and proven by many satisfied companies, the capabilities of DJC haven’t changed since 1995 and work as well as ever.
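To picture what grouping jobs with dependencies means, here is a minimal Python sketch of the underlying idea: a set of named jobs with predecessor relationships resolved into a valid run order. The job names and the use of Python’s standard graphlib module are purely illustrative assumptions; this models the concept only and is not DJC or JEC syntax.

```python
# Conceptual model of a job group with dependencies (illustrative only;
# not DJC or JEC syntax, just the scheduling idea behind both).
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical nightly group: both reports wait on PAYROLL, and the
# summary job waits on both reports.
group = {
    "RPT_TAX": {"PAYROLL"},
    "RPT_HR":  {"PAYROLL"},
    "SUMMARY": {"RPT_TAX", "RPT_HR"},
}

# Produce one execution order that honors every dependency in the group.
print(list(TopologicalSorter(group).static_order()))
# e.g. ['PAYROLL', 'RPT_TAX', 'RPT_HR', 'SUMMARY']
```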

Designed for scalability, how does PCS keep up?

Key to the scalability of ThruPut Manager Production Control Services (PCS) is its use of proprietary, high-performance algorithms. Though rarely discussed, choosing the right algorithms can make all the difference when volumes increase. Another important capability is automated recovery; when recovery isn’t possible, PCS ensures that operations staff are notified so they can select an action. Finally, any product that interacts with z/OS must keep pace with IBM releases, or its scalability will be in doubt.

Why did my job run so long? Speeding performance by understanding the cause (webinar)

Delays are part of our daily lives. We wait in line at the grocery store, at the drive-through, and on the road in our cars. While it may not seem like it, delays are a fact of life in your mainframe as well. As amazingly fast as these machines are, there are inevitably some measurable delays in application response times and throughput. The question is: should you care about these delays?

Performance analysis is about identifying the distribution of response times and dealing with each component. For example, if a job runs for two hours but uses only ten minutes of CPU time, there is little to be gained by running it on a faster CPU. The best “bang for the buck” is to look at where the other 110 minutes are being spent and try to reduce those delays.
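To make the arithmetic concrete, here is a small Python sketch using the hypothetical figures from the example above; the numbers are illustrative, not taken from any real measurement data.

```python
# Break a job's elapsed time into CPU time and everything else (delay).
# Figures match the hypothetical two-hour job described above.
elapsed_min = 120   # wall-clock run time
cpu_min = 10        # CPU time actually consumed

delay_min = elapsed_min - cpu_min   # 110 minutes of waiting
cpu_share = cpu_min / elapsed_min   # ~8.3% of elapsed time

print(f"Delay time: {delay_min} min ({1 - cpu_share:.1%} of the run)")

# Even a CPU twice as fast removes only half of the CPU component:
print(f"Best case on a 2x-faster CPU: {delay_min + cpu_min / 2:.0f} min")
# Best case on a 2x-faster CPU: 115 min
```

Attacking the 110 minutes of delay offers far more leverage than any affordable CPU upgrade could.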

BATCH…BETTER – Help users set SLAs

With the focus on compliance, most companies have SLAs for online work, but many don’t have SLAs for batch. We all know only too well that users have expectations every bit as stringent as SLAs. Everyone is tracking how well you manage batch, but because the batch SLAs aren’t documented, they are whatever the users think they should be. How can you possibly manage that?!

The high cost of making no decision

Does this sound familiar? Software vendors drop by to talk about their products, adding another long meeting to your packed schedule. When you get to a conference, vendors try to entice you into the exhibit hall for another meeting. You’re constantly faced with new software options. Who has the time to look at them?

The simple answer is that you have to find the time. What many of us don’t realize is that when you ignore the new offerings and capabilities, you have made a decision. No decision, or the failure to consider new options, is actually a decision to stick with the solutions you already have. You’re making a decision NOT to make a decision.

When CPU isn’t the problem

Remember those days when budgets weren’t so tight? Back then, when you had a performance problem, it was easy to say, ‘Let’s throw hardware at the problem,’ which usually meant more processors or a faster CPU. And it almost always worked. But was CPU really the problem?

Performance is a challenging area now because you must provide the best possible service at the lowest price. To acquire more hardware of any kind, you need to make a case, which means proving which resource is really constrained. You can’t just focus on CPU as the problem; it often isn’t the primary driver of your service levels. I/O rates, network, and memory can all be factors, depending on the workload.
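As a toy illustration of proving which resource is really constrained, the sketch below compares measured utilization across resources and flags the busiest one. The resource names and numbers are invented for illustration; a real analysis would draw on your monitor’s actual data rather than a flat dictionary.

```python
# Illustrative only: rank hypothetical resource utilizations to find
# the most likely bottleneck. Numbers are invented, not from a monitor.
utilization = {
    "CPU":     0.55,
    "I/O":     0.92,
    "Memory":  0.70,
    "Network": 0.30,
}

bottleneck = max(utilization, key=utilization.get)
print(f"Most constrained resource: {bottleneck} "
      f"({utilization[bottleneck]:.0%} utilized)")
# Most constrained resource: I/O (92% utilized)
```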

Make a New Year’s resolution that will stick

A guest post by Denise P. Kalm – Each year, with the best of intentions, we set out to make at least one resolution that will help make us healthier, happier or more successful. Only too often, by February, we’ve stopped trying and gone back to our old ways. Isn’t working smarter, not harder, the kind of thing your manager is always telling you to do as he hands you more work than one person could possibly manage?

Aren’t we all working too hard already? The phrase has been overused and misused, as though ‘working smarter’ meant we were just dogging it instead of finding better ways to work. In IT, though, there are genuinely better options. What if you could have a virtual assistant, one that doesn’t require a salary or benefits? That might work well. Working smarter should mean spending your time on smart, valuable tasks.