Category Archives: webinars
Following a hockey puck can be challenging. It moves fast, and it’s not always obvious what the next player is going to do. Wayne Gretzky famously said: “Skate to where the puck is going, not where it’s been.” The same analogy applies in most sports: football quarterbacks throw to where the receiver is going to be, not where they are. Your monthly 4-Hour Rolling Average (4HRA) — and the resulting Monthly License Charge (MLC) — is also a moving target.
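To make the moving target concrete, here is a minimal sketch of how a 4-hour rolling average can be computed. It assumes MSU consumption is sampled in 5-minute intervals (so a 4-hour window is 48 samples); the function name and sample values are illustrative, not taken from any IBM tool.

```python
from collections import deque

def rolling_4hra(msu_samples, window=48):
    """Rolling average of MSU samples over a sliding window.

    With 5-minute samples, window=48 approximates a 4-hour
    rolling average. Illustrative sketch only.
    """
    recent = deque(maxlen=window)   # oldest sample drops off automatically
    averages = []
    for msu in msu_samples:
        recent.append(msu)
        averages.append(sum(recent) / len(recent))
    return averages

# Hypothetical MSU samples; the *peak* of the rolling average is what
# drives the monthly software bill in this simplified picture.
samples = [100, 120, 150, 140, 130, 110]
print(max(rolling_4hra(samples, window=4)))  # 135.0
```

Note how the peak of the average lags the peak of the raw samples — a short spike is diluted across the window, which is exactly why capping strategies target the rolling average rather than instantaneous utilization.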
By now, many shops are using soft caps and/or taking advantage of special pricing offerings from IBM such as Mobile and zNALC. These are excellent ideas — but, as all performance and capacity professionals know, reducing one 4HRA bottleneck only creates another.
A properly tuned and conducted orchestra can produce amazing sounds. One out-of-tune instrument can ruin the whole performance. Your mainframe is no different. A well-architected and tuned system can support thousands of users and applications with sub-second response. But it takes only one exceeded parameter or threshold for the whole system to feel like it’s grinding to a halt.
There is no shortage of available metrics to provide detailed insight into system performance. The problem is sorting through them all and recognizing what’s important. Utilization, for example, comes in many forms. Hiperdispatch can make or break performance, and processor cache… well, a faster CPU won’t help you with a cache miss. This session will discuss these and other key performance areas to watch, what to look for, what to ignore, and what… well, as they say – it depends.
IBM’s Country Multiplex Pricing (CMP) became available last October. This is arguably the most significant software pricing announcement from IBM in ten years. Virtually every mainframe shop with more than one CPC/CEC should be interested in this announcement.
But don’t think you can just move to CMP and immediately see lower software bills – if you don’t do it right, your annual costs could actually be higher. Whether you’re in finance, capacity planning or performance, you don’t need to be a ThruPut Manager user to get significant value from this webinar.
Imagine sitting in traffic in a taxi. The engine – and the meter – continues to run, but you’re not going anywhere. This is very much what happens inside your computer when the CPU takes a cache miss. Today’s high-frequency processors have the capacity to process instructions at an incredible rate.
Virtually all CPUs use a multi-stage cache hierarchy to optimize this process. Small but fast Level 1 (L1) caches reside very close to the CPU core, with larger L2, L3, and so on further out. If the required data and instructions are resident in the local L1 cache, the fetch latency is very small, perhaps 1 or 2 clock cycles. If, on the other hand, the data is out in main memory, the wait can be hundreds of clock cycles. During that wait, the CPU is spinning away, doing no useful work.
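The cost of that hierarchy can be sketched with a simple expected-value calculation. The latencies and hit rates below are purely illustrative assumptions, not figures for any particular processor; the point is how quickly a small miss rate inflates the average access time.

```python
def effective_access_cycles(hit_rates, latencies, memory_latency):
    """Average cycles per access for a multi-level cache.

    hit_rates[i]  = probability an access hits cache level i,
                    given it missed all earlier levels.
    latencies[i]  = cycles to fetch from cache level i.
    memory_latency = cycles to fetch from main memory on a full miss.
    All numbers here are hypothetical, for illustration only.
    """
    expected = 0.0
    miss_prob = 1.0          # probability we got this far down the hierarchy
    for hit, lat in zip(hit_rates, latencies):
        expected += miss_prob * hit * lat
        miss_prob *= (1.0 - hit)
    expected += miss_prob * memory_latency   # the trip all the way out
    return expected

# Assumed: L1 hits 95% at 2 cycles, L2 80% at 10, L3 70% at 40, memory 300.
print(effective_access_cycles([0.95, 0.80, 0.70], [2, 10, 40], 300))
```

Even with a 95% L1 hit rate, the rare trips to main memory dominate the tail of the average — which is why a faster clock alone does nothing for a cache-miss-bound workload.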
Many datacenters are enjoying the software savings provided by ThruPut Manager’s Automated Capacity Management (ACM) component, a safe and selective method to reduce MSU consumption and the resulting MLC costs. Now, ACM introduces a significant enhancement: LPAR Sets.

Monthly License Charges are implemented on a CPC basis, but each LPAR may contribute to the total in different ways, with different software stacks, varying business requirements, or different MSU costs per LPAR.
Many customers have asked for more granular control over reducing MLC costs than simply running ACM with a single Group Capacity limit across a CPC. Installations can now group LPARs into sets, each with their own assigned limits. Managing batch workload at this granular level provides a flexible means to ensure capacity is delivered when and where it is needed, all while controlling CPU consumption where it provides the most financial benefit, with or without capping.
Delays are part of our daily lives. We wait in line at the grocery store, at the drive-through, and on the road in our cars. While it may not seem like it, delays are a fact of life in your mainframe as well. As amazingly fast as these machines are, there are inevitably some measurable delays in application response times and throughput. The question is: should you care about these delays?
Performance Analysis is about identifying the distribution of response times and dealing with each component. For example, if a job runs for two hours but only uses ten minutes of CPU time, there is little to be gained by running the job on a faster CPU. The best “bang for the buck” is to look at where the other 110 minutes are being spent and try to reduce those delays.
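The arithmetic behind that example is worth making explicit. This is a minimal Amdahl-style sketch (function name and scenario are my own, for illustration): only the CPU component of elapsed time shrinks when the processor gets faster, while I/O, queuing, and other delays are untouched.

```python
def elapsed_after_cpu_speedup(cpu_minutes, other_minutes, speedup):
    """Elapsed time after speeding up only the CPU-bound portion.

    Amdahl-style view: the non-CPU delays (I/O wait, queuing,
    contention) do not benefit from a faster processor.
    """
    return cpu_minutes / speedup + other_minutes

# The two-hour job from the text: 10 CPU minutes, 110 minutes of other delays.
print(elapsed_after_cpu_speedup(10, 110, 2))   # 115.0 — doubling CPU speed saves only 5 minutes
```

Doubling CPU speed turns a 120-minute job into a 115-minute job; attacking the 110 minutes of delay is where the real opportunity lies.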
As modern mainframers, our careers rest on a foundation of automation—business applications are improved versions of manual labor, after all—and we all love it when automation takes on chores that we’d rather not do.
When considering additional automation, the natural questions are “How will this really make my job easier?” and “What will it mean for my career?”
By listening to this webinar, you will:
- Learn to assess automation tools by how well they eliminate tedious or uninteresting but complex work, freeing you to take on interesting, career-advancing challenges.
- Leave with a better approach to software acquisition and an introduction to ThruPut Manager, a solution that will both eliminate the mundane and accelerate your career.