CPU busy, as plotted on a graph, is not a linear function. And yet, many of us assume it is, and use that assumption for capacity planning. While linear regression may work reasonably well between roughly 20% and 80% busy, it doesn’t help you much at the lower or higher ends of the utilization curve. And that’s exactly where you want to know the impact of changes in transaction volume or batch workload.
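To see why the extrapolation fails, consider the classic single-server queueing approximation, where response time is service time divided by (1 − utilization). This is a minimal sketch, and the model choice (M/M/1) and unit service time are illustrative assumptions, not figures from any particular capacity study:

```python
# Illustrative only: a standard M/M/1 queueing approximation showing why
# response time is nonlinear in CPU utilization. A linear fit through the
# 20-80% range badly underestimates what happens above 90% busy.

def response_time(utilization, service_time=1.0):
    """Approximate response time for a single-server queue: S / (1 - U)."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1.0 - utilization)

for u in (0.20, 0.50, 0.80, 0.90, 0.95):
    print(f"{u:.0%} busy -> response time {response_time(u):.2f}x service time")
```

Going from 80% to 95% busy quadruples response time in this model, which is exactly the region a straight-line fit cannot capture.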
Even knowledgeable performance experts make the mistake of simply throwing more resources at the work. If 10 buffers are good, 15 must be even better, right? If we have more batch work, throw initiators at it; that will get things moving faster. But it turns out that over-initiation is a lot like adding more toll booths at a bridge: immediately after the toll is paid, the lanes have to merge back down to the number that fits on the bridge. With too many toll takers, the merge after the toll turns chaotic, accidents happen, and no one moves fast.
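The toll-booth effect can be sketched with a toy throughput model. Everything here is an illustrative assumption, not a measured system: the bottleneck capacity (the "lanes on the bridge") and the contention penalty are made-up parameters chosen only to show the shape of the curve:

```python
# A toy model of over-initiation (illustrative assumptions throughout):
# up to the bottleneck's capacity, more initiators mean more throughput;
# beyond it, each extra concurrent job only adds contention -- the
# "merge after the toll booth".

def throughput(initiators, capacity=4, contention_penalty=0.15):
    """Jobs completed per unit time for a given number of active initiators."""
    useful = min(initiators, capacity)      # only `capacity` lanes exist
    excess = max(0, initiators - capacity)  # extra jobs just queue and thrash
    return useful / (1.0 + contention_penalty * excess)

for n in (2, 4, 6, 10):
    print(f"{n:2d} initiators -> throughput {throughput(n):.2f}")
```

In this sketch, throughput peaks when the number of initiators matches the bottleneck capacity and actually falls as you over-initiate past it.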
In the 21st century, we’ve given up our tool belts, relinquished the tape robots, and enjoyed the ease of keying in parameters, code fixes, and more on a PC dedicated to our needs. So perhaps it’s time to give up things like hand-managing batch performance. Sometimes we stick with things because they’ve become habit. We know there is probably a better way to do them, but we don’t want to fight the battle for new software and face a learning curve.
Even without Automated Capacity Management (ACM), ThruPut Manager’s automation engine – Service Level Manager – can really speed up your batch workload and, in most cases, shrink your batch window. But it can also save you CPU cycles, and anything you can do to put off an upgrade or reduce your MSU numbers means you’ve saved your company money.
Companies are more cost-focused than ever before. While some industries have always had narrow margins, every company is looking for cost-savings wherever possible. Soft-capping can be scary, but you still need to save money. So what do you do? The solution is LPAR sets.
A recent study demonstrated that when you add up all the costs of mainframes, UNIX boxes, and Wintel hardware (hardware, support, software, people, environmentals), mainframes were cheaper based on what they could produce. Mainframes work well and fast, aren’t subject to security breaches the way other platforms are, and just a few people can manage thousands of applications on a single box.
Some sages, particularly in the distributed systems space, like to say that capacity planning isn’t necessary anymore. Hardware is cheaper and virtualization makes better use of resources. Besides, no one seems to know how to do it these days. But the sages are wrong!