DTS Webinar Recap: An Introduction to IBM® z/OS® Utilities

Whether you’re new to the mainframe space or a seasoned veteran, spending an hour with someone who is a walking knowledge base is never a bad idea. That’s exactly what happens each month in DTS Software’s educational webinar series. DTS CTO and z/OS storage management expert Steve Pryor covers topics aimed at giving z/OS users and programmers, old and new, tools to help them get the job done more efficiently and more effectively.

Users new to an operating system such as z/OS face a learning curve when trying to perform even the most basic functions: moving, copying, and comparing files, creating test data, and editing file contents. In the mainframe environment, these functions often aren’t performed interactively but by specialized z/OS “utility” programs, and these utilities were the topic of our January webinar.

These utilities have a long history, a sometimes-complex syntax, uses that depend on how datasets are organized, and names that are meaningless to anyone but z/OS veterans. As a result, new z/OS users can face additional learning requirements for seemingly simple tasks.

In this webinar, Pryor discusses some of the basic z/OS utilities, such as IEBGENER, IEBCOPY, IEHLIST, and a few others, as well as more general utilities such as IDCAMS, and how to accomplish some of the same tasks interactively under ISPF. While the utility you choose depends on the job you need to perform, you can often use one of several utilities to accomplish the same task.
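
For readers who have never seen one of these utilities in action, here is a minimal sketch, using assumed dataset names, of an IEBGENER step that copies one sequential dataset to another (the JOB statement and installation-specific allocation parameters are omitted):

  //COPYSEQ  EXEC PGM=IEBGENER
  //SYSPRINT DD  SYSOUT=*            UTILITY MESSAGES
  //SYSIN    DD  DUMMY               NO CONTROL STMTS = STRAIGHT COPY
  //SYSUT1   DD  DSN=USERID.INPUT.DATA,DISP=SHR
  //SYSUT2   DD  DSN=USERID.OUTPUT.DATA,DISP=(NEW,CATLG,DELETE),
  //             LIKE=USERID.INPUT.DATA

SYSUT1 and SYSUT2 are IEBGENER’s fixed ddnames for the input and output datasets; the same copy could also be done interactively under ISPF (option 3.3), but the batch version can be scheduled and repeated.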

Utility by Dataset Type or Purpose
Utilities are often divided by the type of dataset they’re used on, e.g., a partitioned dataset vs. a sequential dataset. In addition to the well-known VSAM utilities, the OMVS utilities are particularly useful when moving data between UNIX System Services and traditional z/OS datasets or when moving files within the z/OS UNIX file system.
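
As a sketch of a partitioned-dataset utility, assuming both libraries already exist and using placeholder names, an IEBCOPY step that copies members from one PDS to another might look like this:

  //COPYPDS  EXEC PGM=IEBCOPY
  //SYSPRINT DD  SYSOUT=*
  //SRCPDS   DD  DSN=USERID.SOURCE.PDS,DISP=SHR
  //TGTPDS   DD  DSN=USERID.TARGET.PDS,DISP=OLD
  //SYSIN    DD  *
    COPY OUTDD=TGTPDS,INDD=SRCPDS
  /*

A SELECT MEMBER statement could follow the COPY statement to restrict the operation to specific members.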

Tape utilities are specific to tape management, and VTOC and Load Library utilities are used for viewing a volume table of contents or mapping a load library.
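
For the VTOC side, here is a minimal sketch of an IEHLIST step that prints the formatted VTOC of one volume; the volume serial VOL001 and the 3390 unit are assumptions to be replaced with real values:

  //LISTVTOC EXEC PGM=IEHLIST
  //SYSPRINT DD  SYSOUT=*
  //VOLDD    DD  UNIT=3390,VOL=SER=VOL001,DISP=OLD
  //SYSIN    DD  *
    LISTVTOC FORMAT,VOL=3390=VOL001
  /*

The VOLDD statement simply makes the volume available to the step; the ddname itself is arbitrary.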

Utilities are also either system-oriented or dataset-oriented, and Pryor gives examples of each of these important categories as the January webinar progresses.

Why are Utilities Needed in z/OS?

One reason is that z/OS was originally a batch system, and a large part of the work it performs is still batch-oriented rather than interactive. As data is created during the course of a day, it must still be updated, reconciled, and processed outside of the online system.

Batch Utilities in z/OS
Batch utilities are often considered the backbone of z/OS, and while there are many interactive functions in z/OS, Pryor focuses on the advantages of the batch-processing approach (a sample batch job illustrating these points follows the list):

  • Standard, well-defined input, output, and control statements
  • Included in the base operating system, so they are always available for execution
  • No special authorization requirements
  • Coverage of almost all types of data manipulation
  • A well-defined set of functions in each utility, performed for the most part one at a time
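
To make those points concrete, here is a minimal sketch of a step running the general-purpose IDCAMS utility with placeholder dataset names; it lists a dataset’s catalog entry and then copies the dataset, one function at a time, using the standard SYSPRINT output and SYSIN control-statement interfaces:

  //AMSSTEP  EXEC PGM=IDCAMS
  //SYSPRINT DD  SYSOUT=*
  //BACKUP   DD  DSN=USERID.BACKUP.DATA,DISP=(NEW,CATLG,DELETE),
  //             LIKE=USERID.INPUT.DATA
  //SYSIN    DD  *
    LISTCAT ENTRIES(USERID.INPUT.DATA) ALL
    REPRO INDATASET(USERID.INPUT.DATA) OUTFILE(BACKUP)
  /*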

Learn More in our Webinar
“An Introduction to z/OS® Utilities” is a 60-minute informative and educational look at an important topic in the mainframe space. It includes numerous examples, how-tos, and references on where to find more information should you need it. If you weren’t able to attend, you can view it on-demand, download a copy of the slide deck, and view examples shared during the presentation by using this link.

SHARE Dallas 2022

March 27-30, 2022
Dallas, TX – Hyatt Regency Dallas | MORE INFO

DTS is proud to sponsor and exhibit at SHARE Dallas 2022! Visit us at booth 205 during the Technology Exchange and stop by one of our presentations to learn more about mainframe storage management.

To add these educational-track sessions to your SHARE agenda, please visit the SHARE.org speaker list here and search for “Steve Pryor.” You will find both presentations there, and within each presentation link you’ll be able to add the session to your calendar.

DTS Software Speaking Sessions at SHARE Dallas


Class Struggle? The DFSMS Data Class and Your Datasets
Mar 28, 2022 | 2:15 PM  –  3:15 PM CT
Speaker: Steve Pryor, CTO and Lead Developer of DTS Software
Tracks: Storage, z/OS Systems Programming, MVS Storage
Topic: MVS
Room: Pegasus A

At its inception in 1989, the DFSMS Data Class construct consisted of a simple list of attributes that could be used to describe the physical characteristics of a dataset. Now, more than three decades later, the Data Class includes more than 50 different values that can interact in complex ways with other SMS constructs such as Storage Class and with JCL and IDCAMS control statements. Some user SMS configurations are home to dozens (or more) overlapping or conflicting data classes. This session aims to provide an understanding of which data class values are important (and which are not), their history and purpose, and their relationship to z/OS dataset allocation and use. Armed with this understanding, you can simplify your SMS environment, eliminate redundancies, and take advantage of new SMS features.
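
As a hedged illustration of the kind of interaction the session covers, the DD statement below allocates a new dataset using a hypothetical data class named DCSTD; an attribute coded explicitly in the JCL, such as LRECL here, overrides the corresponding data class value, which is one way overlapping definitions produce surprises:

  //NEWDS    DD  DSN=USERID.TEST.DATA,DISP=(NEW,CATLG,DELETE),
  //             DATACLAS=DCSTD,LRECL=133

Keep in mind that the installation’s ACS routines can also assign or override the data class, so the value coded in JCL is a request rather than a guarantee.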


The z/OS Storage Administrator’s View of Pervasive Encryption
Mar 29, 2022 | 2:15 PM  –  3:15 PM CT
Speaker: Steve Pryor, CTO and Lead Developer of DTS Software
Tracks: Storage, z/OS Systems Programming, MVS Storage, Network Management & Security
Topic: MVS
Room: Cumberland AB (live-stream)

IBM’s Pervasive Encryption initiative aims to encrypt all the data, all the time. The Security Administrator controls access. The End User employs the data. But it’s the Storage Administrator who’s responsible for where, when, and how data is created, placed, backed up, and restored. In this session, we’ll take a practical look at just how z/OS dataset-level encryption is implemented, its impact on storage administration, the interactions between DFSMS and RACF, and the interlocking roles of Storage Administration, Security, and the End User in providing data protection.


About our Presenter

Steve Pryor has more than 30 years of experience in storage management, design, and support, and speaks frequently at industry events. He has been with DTS Software since the early 2000s.

DTS Webinar Recap: We’ve Got a Problem — An Introduction to z/OS® Dumps and Debugging Tools

In large enterprises, the most complex and mission-critical business applications are entrusted to z/OS because of its unrivaled security and reliability. In any complex environment, however, unexpected errors and unplanned failures are bound to occur. When they do, there is an immediate need to understand the problem, find and fix the root cause, and prevent future errors. If you can apply automation to the remediation, you might qualify as a Mainframe Champion, as defined in the latest BMC Mainframe Survey.

Fortunately, z/OS programmers have access to a large set of debugging tools, including dumps, traces, log records, and more. The ability to leverage these tools, particularly system dumps, is an important part of a programmer’s job description and daily workflow.

Like every operating system, z/OS is unique in both its debugging facilities and the way debugging is performed. This was the focus of our November educational webinar, presented by DTS CTO Steve Pryor.

Dumps and debugging is a vast subject, but there are aspects exclusive to z/OS, and that is where Pryor spends most of the hour-long session. In-depth examples can be found in the slide deck from the presentation, available for download here.

What can an ABEND Tell You?

ABENDs (abnormal terminations) are of two types: user abends, generated from an application or utility, and system abends, caused by an error performing a system-related function. Debugging user abends requires an understanding of what the program is trying to do and what condition is indicated by the user abend, as specified in the program or utility documentation. System abends occur when a system function, such as obtaining virtual storage or other resources, fails, or when an instruction cannot be executed correctly. Typically, application programmers are called upon to resolve user abends, while system abends are addressed by the system programmer or storage administrator.

Most abends will be accompanied by a formatted dump, placed on either a SYSUDUMP or SYSABEND dataset or, for a larger system-related problem, a SYSMDUMP dataset. Many abends are related to supervisor call (SVC) instructions; in these cases, the last two digits of the abend code identify the SVC. This can be a useful clue as to which type of system function failed and how to attack the problem.
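
As a minimal sketch, assuming a hypothetical program named MYPROG, a dump is typically produced only when the abending step includes one of the dump DD statements:

  //STEP1    EXEC PGM=MYPROG
  //SYSUDUMP DD  SYSOUT=*
  //* SYSABEND ADDS SYSTEM AREAS TO THE FORMATTED DUMP; SYSMDUMP WRITES
  //* AN UNFORMATTED, MACHINE-READABLE DUMP TO A DATASET FOR LATER
  //* ANALYSIS. NORMALLY ONLY ONE OF THE THREE IS CODED PER STEP.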

In addition to SVCs, Program Exceptions indicate that the CPU cannot continue operation due to a problem with the instruction being executed. Program exceptions can be identified in many different ways, which are also covered in the webinar.

Common Error Types
What are the most common error types encountered in z/OS? From addressing and data errors to instruction errors and others, such as timing, loop, and wait errors, each is identifiable if you know what you are looking for.

While an ABEND is a “hard” error, errors such as “incorrect output” are logic errors, which are more difficult to debug and require more in-depth knowledge of the application.

You’ve Identified the Error – Now What?

Once the source of the error is known, what tools are available? The most common tools are discussed in the webinar: what they do and what to expect from them. This includes a brief conversation about sending the dump to IBM when necessary.

In closing, Pryor recommends a number of available reference materials and describes the types of issues covered in each.

More about Dumps and Debugging Tools in our webinar: “We’ve Got a Problem: An Introduction to z/OS® Dumps and Debugging Tools” is a 60-minute informative and educational look at an important topic in the mainframe space. If you weren’t able to attend, you can view it on-demand and download a copy of the slide deck used in the presentation by using this link.

DTS Webinar Recap: Understanding IBM® z/OS® Disk Metadata: Catalogs, VTOCs, VVDSs, Indexes, and More!

The way that DASD (direct access storage device) data is stored on z/OS is very different from distributed/open systems. An understanding of just how z/OS datasets are created, located, accessed, and deleted is as important to end-users as it is to storage administrators and system programmers.

In our October 2021 educational webinar, DTS CTO Steve Pryor discussed this important topic, and how the concepts and structures invented in the 1960s have evolved into today’s high-availability, high-performance, high-reliability disk storage subsystems.

To the more seasoned Big Iron mainframers (and by seasoned we mean 20 years or more), the information covered might seem familiar. But much of this information is no longer widely disseminated, so while it may be a good review for some, for many the concepts of z/OS disk metadata and catalogs, presented by an authority in storage management, are high-value information.

The z/OS Storage Hierarchy
At the top of the food chain is an extensive catalog that keeps track of every dataset on every volume in the system. Each volume contains a volume table of contents (VTOC), just like the table of contents in the front of a book. The VTOC contains information about the locations of all the datasets on the volume, as well as which space is used and which is free.

The SMS (system-managed storage) subsystem handles all the information about the allocation needs of the dataset: physical characteristics, logical record length, performance needs, etc., and decides, given all of its characteristics, on which DASD volume to place the dataset. SMS is important to every aspect of the dataset life cycle, beginning with allocation and volume placement and extending to dataset usage, archiving, recall, and eventual expiration.
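
A brief sketch of what this looks like from the user’s side, with hypothetical class names: the DD statement below creates a new SMS-managed dataset without naming any volume at all. In practice the classes are usually assigned by the installation’s ACS routines rather than coded explicitly:

  //NEWPROD  DD  DSN=USERID.PROD.REPORT,DISP=(NEW,CATLG,DELETE),
  //             STORCLAS=SCSTD,DATACLAS=DCSTD,MGMTCLAS=MCSTD

Because a storage class is assigned, SMS selects the target volume from an eligible storage group, and the catalog and the volume’s VTOC are updated to record where the dataset actually landed.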

Naming conventions, which are very specific in z/OS and are critical for managing the large volumes of data common in z/OS systems, are covered in-depth during the webinar. Slides with examples of Mapping, Dataset Extents, and much more were included in the presentation.

The Evolution of z/OS Storage Systems
How did all of this evolve from the systems of old? With the development of larger, faster CPUs in the late 1970s / early 1980s came the need for expanded storage capacity.

Traditional disk drives, a stack of platters whose tracks are grouped into cylinders (56,664 bytes per track and 15 tracks per cylinder), existed well into the 1990s. Traditional drives were eventually replaced by RAID (redundant array of independent disks) storage, described in a famous 1988 SIGMOD paper by David Patterson, Garth Gibson, and Randy Katz. RAID simply spreads storage blocks across many different drives and lets a controller do the work of knitting them together. Not only does this make data access faster, but parity blocks also keep the data accessible if a disk within the array fails. Even with the advent of more modern storage systems, z/OS still operates its storage mapping as it did pre-1980, ensuring the absolute compatibility and continuing value of software investments that are the hallmark of mainframe systems.

As Carl Sagan said in 1980, “You have to know the past to understand the present.” As installations grow ever larger and more complex, an understanding of how these critical legacy systems developed, and how they still operate, can inform today’s extensive data modernization initiatives.

“Understanding z/OS Disk Metadata: Catalogs, VTOCs, VVDSs, Indexes, and More!” is a 60-minute informative, educational look at a historic topic in the mainframe space. If you weren’t able to attend, you can view it on-demand and download a copy of the slide deck used in the presentation by using this link.

DTS Webinar Recap: Secure Data for Everyone – Pervasive Encryption and z/OS® Storage Management

Data security is making the news these days far too often, and for all the wrong reasons. Because the data in your IBM® z/OS® system is the most valuable and useful data in the enterprise, it is also the data cybercriminals want most. So while it has to remain available for your business to run, it must also be completely secure.

In our September 2021 webinar, DTS Software CTO Steve Pryor discussed, from a storage management perspective, some of the practical steps involved in making Pervasive Encryption a reality in your z/OS environment. IBM’s Pervasive Encryption initiative aims to achieve these availability and security goals by making encryption so seamless for the user that it’s implemented by default.

But as seamless as IBM intends encryption to be, you still must consider the following:

  1. Are you sure you’re taking the proper steps when encrypting datasets?
  2. How are you ensuring (and verifying) that the data is actually encrypted?

A few highlights of Pryor’s overview of encryption and z/OS storage management:

Why Encrypt?
Some of the reasons are obvious, such as regulations and data breaches. But there are other, less obvious reasons. Accidental (or intentional) exposure of sensitive data and insider attacks are two very real threats that must be considered.

Who Encrypts (or Decrypts)?
Pryor identifies three primary personas typically involved with encryption: the security administrator (most often the RACF security administrator), who’s responsible for system security policies; the storage administrator, who’s responsible for managing the datasets, including their creation and the devices on which they’re allocated; and the end user, who ultimately uses encryption (and decryption) to read and write the data.

Encryption in z/OS
The concept of “pervasive encryption” is simply that everything is encrypted. This includes at-rest, in-use, and in-flight data. While this may seem like overkill, the upside is that by encrypting everything, regulatory requirements are met 100% of the time. Furthermore, existing security policy mechanisms are used to provide dataset-level encryption and to control access according to user privileges for added security.
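
As a hedged sketch of what this looks like at allocation time, the DD statement below creates a new extended-format dataset and names its encryption key label; the dataset name, key label, and the data class DCEXTFMT (assumed here to request extended format) are all placeholders. The key label can also come from the data class or from the DFP segment of the RACF dataset profile instead of the JCL:

  //ENCDS    DD  DSN=USERID.SECURE.DATA,DISP=(NEW,CATLG,DELETE),
  //             DATACLAS=DCEXTFMT,DSKEYLBL='PLACEHOLDER.KEY.LABEL',
  //             SPACE=(CYL,(10,10)),RECFM=FB,LRECL=80

Whether a user can then read the data depends both on access to the dataset itself and on access to the key label, which is how the existing security policy mechanisms come into play.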

Crypto Hardware
Crypto hardware for z/OS consists of two possible components, as well as the zPDT-emulated adjunct processor. How does each function, and what are the features and benefits? Pryor clears up any questions with a quick overview.

Deep Dive in a Live Demo
Once the baseline is set, Pryor dives into the nuts and bolts of dataset encryption with a live demonstration and on-screen explanation of options and elements. He then addresses one of the most important aspects of encryption: key distribution. How do you distribute the keys to the people who need them and control their use? How do you rotate keys and avoid compromised keys? How do you audit the system? All are crucial questions that must be considered carefully.

“Secure Data for Everyone – Pervasive Encryption and z/OS Storage Management” is an informative, educational look at a timely topic in the mainframe space. If you weren’t able to attend, you can view it on-demand and download a copy of the slide deck used in the presentation by using this link.

Enabling Event Analysis to Spot Unusual Access Patterns With DTS Software’s ACC Monarch

The Direct Access Device Space Manager (DADSM) handles key functions in z/OS that dictate much of what happens to a dataset during its lifecycle. Creation and deletion are the most obvious, but this component can also extend a dataset to a new volume, release unused space using the partial release function, rename the dataset, and more. Just as on any other platform, datasets on z/OS have a largely predictable use pattern, which is why it’s a good idea to investigate when usage defies expectations. With the right solution in place, anomalies in the typical pattern of events can provide valuable insights to system administrators.

All DADSM events pass through system exit points (control points) such as IGGPRE00 and IGGPOST0, where DTS Software’s ACC Monarch product can take control with automation and perform an array of actions. Using a policy rules language, ACC Monarch relies on IF statements to take action based on user-defined dataset characteristics. If the specified condition is met, the Dynamic Install Facility (DIF) started task performs the action.

A basic example of an action might be updating system control blocks, but actions could also include analysis operations such as writing records to DASD, writing records to a log file, or writing reports. These resources can be created using an arbitrary, user-defined record that isn’t necessarily an SMF record, and they can also be written directly to TCP/IP for analysis by Splunk or any other SIEM system. By enabling this kind of thorough analysis during the dataset lifecycle, organizations can spot unusual access patterns that could indicate a threat — and they can do it without the need to know assembler coding.

For more information about how storage event awareness can contribute to security, we encourage you to view our recent webinar on TechChannel, “Aggregation without Aggravation: When Putting More Log Data in Your SIEM is a Good Thing.” DTS Software CTO Steve Pryor and veteran mainframe expert Reg Harbeck offer insights into how you can leverage DFSMS events in conjunction with your existing SIEM data to build a more complete picture of the threats facing your organization.

Whitepaper Download: A Data Center Without Enforceable Standards Risks Much More Than Mere Storage Mismanagement.

ACC Whitepaper