Tag Archives: z13
ThruPut Manager users will see key improvements in the batch workloads managed by CA 7, thanks to an add-on component for ThruPut Manager called Production Control Services (PCS). We designed PCS to ensure scalability as your batch workloads increase, and they will certainly increase.
With the focus on compliance, most companies have SLAs for online work, but many don’t have them for batch. We all know only too well that users’ expectations are as stringent as any SLA. Everyone is tracking how well you manage batch, but because the batch SLAs aren’t documented, they become whatever the users think they should be. How can you possibly manage that?
When many of us started working with computers, memory was VERY expensive and severely limited system performance and capacity. As costs came down, hardware vendors came up with new ways to manage memory effectively (including virtual memory). The most recent is offering multiple levels of cache: memory located closer to the CPU and designed to hold frequently accessed items. We can’t do a lot about the CPU cache, but it is helpful to understand how it works.
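To make the locality idea concrete, here is a minimal, illustrative Python sketch (not from the original post): it sums the same buffer twice, once sequentially and once with a large stride. Every element is touched exactly once in both passes; only the visit order differs, so any timing gap reflects how well each pattern exploits cache lines. The buffer size and stride are arbitrary choices for illustration.

```python
import array
import time

N = 1 << 22            # 4M doubles (~32 MB): larger than typical CPU caches
buf = array.array('d', [1.0]) * N

def timed_sum(step):
    """Sum every element of buf once, visiting it in strides of `step`."""
    start = time.perf_counter()
    total = 0.0
    for offset in range(step):
        for i in range(offset, N, step):
            total += buf[i]
    return total, time.perf_counter() - start

seq_total, seq_t = timed_sum(1)      # sequential: consecutive cache lines
str_total, str_t = timed_sum(4096)   # large stride: a new cache line per access
print(f"sequential: {seq_t:.3f}s, strided: {str_t:.3f}s")
```

In CPython the interpreter overhead masks much of the effect; the same experiment written in a compiled language usually shows a dramatic gap, which is exactly the behavior the cache hierarchy is designed to exploit.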
When moving batch workloads around to lower your R4HA, duplicate product peaks are a common challenge—and can cause their fair share of headaches. To remedy this issue, IBM recently announced a new pricing model, Country Multiplex Pricing (CMP). The new model is designed to give you greater flexibility to move and run workloads across your data centers in a single country with less financial impact than you’d experience by staying with your present VWLC model.
As a performance or capacity planner, knowing how to read between the lines of vendor announcements—particularly those from IBM—is an essential skill. In many of these announcements, you will find a roadmap to your future challenges and potentially even have the time to plan for them.
Take IBM’s most recent end-of-May press release, for example, reporting that the mainframe leads worldwide revenue share among $250K servers, growing 118 percent year over year. To the uninitiated, this announcement simply heralds the growth of mainframe sales, particularly the new z13 announced in January. To the veteran mainframer, however, it brings both peace of mind (reinforcing that mainframes, and our jobs, are alive and well) and a little bit of fear, knowing that our jobs just got a whole lot more complex.
In the context of RNI (Relative Nest Intensity), IBM says the number of concurrent tasks is the primary factor in workload performance, particularly with today’s high-frequency processors. The z13 architecture has a vast capacity to serve multiple concurrent applications, and regardless of how much work you throw at it, z/OS will do its best to provide some service to every workload. As a result, we generally throw all the work at the machine at once.
Are you running too much or too little? Let’s look at some measurements and techniques to control how much concurrent work you run. Cycles per instruction, Initiators, LPARs, Logical Processors, and Weights are a few of the topics we’ll explore, along with techniques to improve the performance, throughput, and efficiency of your mainframe. Come and see just how much less you should be doing.
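Cycles per instruction (CPI) is the simplest of the measurements mentioned above: the CPU cycles consumed divided by the instructions completed over an interval. The sketch below is illustrative only; the numeric counter values are hypothetical, and on z Systems the real counts would come from the CPU Measurement Facility (SMF type 113 records), whose actual field layout is not shown here.

```python
def cycles_per_instruction(cycles: float, instructions: float) -> float:
    """CPI = CPU cycles consumed / instructions completed.

    On z Systems these counts come from the CPU Measurement Facility
    (SMF 113); the sample values below are hypothetical.
    """
    if instructions <= 0:
        raise ValueError("instruction count must be positive")
    return cycles / instructions

# Hypothetical interval counters for two workload mixes:
light = cycles_per_instruction(cycles=2.0e9, instructions=1.0e9)  # 2.0
heavy = cycles_per_instruction(cycles=9.0e9, instructions=2.0e9)  # 4.5

print(f"light mix CPI: {light}, heavy mix CPI: {heavy}")
```

A rising CPI at a similar instruction mix suggests more cycles are being lost to cache misses and memory (nest) delays, which is one signal that the level of concurrent work has climbed past the point of efficiency.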