DTS Webinar Recap: Secure Data for Everyone – Pervasive Encryption and z/OS® Storage Management

Data security is making the news these days far too often, and for all the wrong reasons. Because the data in your IBM® z/OS® system is the most valuable and useful data in the enterprise, it is also the data cybercriminals want most. So while it has to remain available for your business to run, it must also be completely secure.

In our September 2021 Webinar, DTS Software CTO Steve Pryor discussed, from a storage management perspective, some of the practical steps involved in making Pervasive Encryption a reality in your z/OS environment. IBM’s Pervasive Encryption initiative aims to achieve these availability and security goals by making encryption so seamless for the user that it’s implemented by default.

But however seamless IBM intends encryption to be, you still must consider the following:

  1. Are you sure you’re taking the proper steps when encrypting datasets?
  2. How are you ensuring (and verifying) that they’re encrypted? (See the JCL sketch below.)
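
To make those steps concrete, here’s a minimal JCL sketch, assuming an SMS-managed system on a recent z/OS release; the data set name, key label, and space values are hypothetical placeholders for values your site defines. The DSKEYLBL parameter supplies an encryption key label at allocation (data set encryption requires extended format), and an IDCAMS LISTCAT ALL of an encrypted data set reports its encryption attributes, including the key label:

//ENCRTEST JOB (ACCT),'DS ENCRYPTION',CLASS=A,MSGCLASS=X
//* Allocate a new extended-format data set with a (hypothetical)
//* encryption key label.
//ALLOC    EXEC PGM=IEFBR14
//NEWDS    DD DSN=PROD.PAYROLL.DATA,
//            DISP=(NEW,CATLG),
//            DSNTYPE=EXTREQ,
//            DSKEYLBL='DATASET.PROD.PAYROLL.KEY00001',
//            SPACE=(CYL,(10,5)),RECFM=FB,LRECL=80
//* Verify the allocation: LISTCAT ALL shows the key label and
//* encryption status for an encrypted data set.
//VERIFY   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  LISTCAT ENTRIES('PROD.PAYROLL.DATA') ALL
/*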

A few highlights of Pryor’s overview of encryption and z/OS storage management:

Why Encrypt?
Some of the reasons are obvious, such as regulations and data breaches. But there are other, less obvious reasons. Accidental (or intentional) exposure of sensitive data and insider attacks are two very real threats that must be considered.

Who Encrypts (or Decrypts)?
Pryor identifies three primary personas typically involved with encryption: the security administrator (most often the RACF security administrator), who is responsible for system security policies; the storage administrator, who is responsible for managing the data sets, including their creation and the devices on which they’re allocated; and, ultimately, the end user, who uses encryption (or decryption) to read and write data.

Encryption in z/OS
The concept of “pervasive encryption” is simply that everything is encrypted. This includes at-rest, in-use, and in-flight data. While this may seem like overkill, the upside is that by encrypting everything, regulatory requirements are met 100% of the time. Furthermore, existing security policy mechanisms provide dataset-level encryption and control access according to user privileges for added security.

Crypto Hardware
Crypto hardware for z/OS consists of two possible components, the CPACF processor feature and Crypto Express adapters, as well as the zPDT emulated adjunct processor. How does each function, and what are the features and benefits? Pryor clears up any questions with a quick overview.

Deep Dive in a Live Demo
Once the baseline is set, Pryor dives into the nuts and bolts of dataset encryption with a live demonstration and on-screen explanation of options and elements. He then addresses one of the most important aspects of encryption: key distribution. How do you distribute the keys for those people who need them and control the use of the keys? How do you rotate the keys and avoid compromised keys? How do you audit the system? All are crucial questions that must be considered carefully.
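
To give one concrete flavor of key control in a RACF shop: the key label itself can be protected as a profile in the CSFKEYS class, with access granted only to the users who need it. The sketch below is illustrative, using hypothetical profile, user, and job names; your site’s labels, ICSF options, and access lists will differ:

//KEYPERM  JOB (ACCT),'KEY ACCESS',CLASS=A,MSGCLASS=X
//* Protect the (hypothetical) key label, permit one user, and
//* restrict use of the key to SMS data set encryption.
//RACF     EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
  RDEFINE CSFKEYS DATASET.PROD.PAYROLL.KEY00001 UACC(NONE) +
      ICSF(SYMCPACFWRAP(YES) SYMCPACFRET(YES))
  PERMIT DATASET.PROD.PAYROLL.KEY00001 CLASS(CSFKEYS) +
      ID(PAYUSER) ACCESS(READ) +
      WHEN(CRITERIA(SMS(DSENCRYPTION)))
  SETROPTS RACLIST(CSFKEYS) REFRESH
/*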

Secure Data for Everyone – Pervasive Encryption and z/OS Storage Management is an informative, educational look at a timely topic in the mainframe space. If you weren’t able to attend, you can view it on-demand and download a copy of the slide deck used in the presentation by using this link.

Enabling Event Analysis to Spot Unusual Access Patterns With DTS Software’s ACC Monarch

The Direct Access Device Space Manager (DADSM) handles key functions in z/OS that dictate much of what happens to a dataset during its lifecycle. Creation and deletion are the most obvious, but this component can also extend a dataset to a new volume, release unused space using the partial release function, rename the dataset, and more. Just as on any other platform, datasets on z/OS have a largely predictable use pattern, which is why it’s a good idea to investigate when usage defies expectations. With the right solution in place, anomalies in the typical pattern of events can provide valuable insights to system administrators.

All DADSM events pass through a system exit or control point such as IGGPRE00 and IGGPOST0, at which point DTS Software’s ACC Monarch product can take control with automation and perform an array of actions. Using a policy rules language, ACC Monarch relies on IF statements to take action based on user-defined dataset characteristics. If the specified condition is met, the Dynamic Install Facility (DIF) started task performs the action.

A basic example of an action might be updating system control blocks, but actions could also include analysis operations such as writing records to DASD, writing records to a log file, or writing reports. These resources can be created using an arbitrary, user-defined record that isn’t necessarily an SMF record, and they can also be written directly to TCP/IP for analysis by Splunk or any other SIEM system. By enabling this kind of thorough analysis during the dataset lifecycle, organizations can spot unusual access patterns that could indicate a threat — and they can do it without the need to know assembler coding.

For more information about how storage event awareness can contribute to security, we encourage you to view our recent webinar on TechChannel, “Aggregation without Aggravation: When Putting More Log Data in Your SIEM is a Good Thing.” DTS Software CTO Steve Pryor and veteran mainframe expert Reg Harbeck offer insights into how you can leverage dfSMS events in conjunction with your existing SIEM data to build a more complete picture of the threats facing your organization.

Whitepaper Download: A Data Center Without Enforceable Standards Risks Much More Than Mere Storage Mismanagement

Breaking Down Sequential Data Sets and Their Limitations on z/OS®

Unlike partitioned data sets, which are useful when you want to keep many different pieces of data together in a single data set, or VSAM data sets, which support more sophisticated access by key or by relative record, sequential data sets are the simplest form of data set. Not surprisingly, they’re useful when you want to read the data in sequence. When we specify the data set organization in the JCL or define the data set under ISPF, we code a data set organization, or DSORG, equal to PS (or occasionally, PSU or DA).

Sequential data sets can be one of three types: Basic, Large, or Extended Format. If you don’t specify anything out of the ordinary, you get a Basic format data set, which is simply a collection of extents on disk, pointed to by the Volume Table of Contents (VTOC) and the VSAM Volume Data Set (VVDS). Because it’s ordinary sequential data, you can use the Queued or Basic Sequential Access Methods (QSAM or BSAM) to write to it, but it has to be written sequentially: before you write record two, you must write record one.
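
For illustration, here’s what that looks like in JCL (the data set name and space values are arbitrary). With no DSNTYPE keyword at all, the allocation defaults to a Basic format data set, and IEBGENER writes the records to it in sequence:

//BASICSEQ JOB (ACCT),'BASIC SEQ',CLASS=A,MSGCLASS=X
//* No DSNTYPE keyword, so this creates a basic format
//* physical-sequential (DSORG=PS) data set.
//COPY     EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD *
RECORD ONE
RECORD TWO
/*
//SYSUT2   DD DSN=MY.BASIC.SEQDS,
//            DISP=(NEW,CATLG),
//            SPACE=(TRK,(5,5)),
//            DSORG=PS,RECFM=FB,LRECL=80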

Sequential data sets are original to z/OS (or MVS, back in the 60s), and thus have some limitations. Basic data sets can have no more than 16 extents per volume and a maximum of 65,535 tracks on any one volume. If you need to exceed those limits, you can create a large sequential data set by specifying the DSNTYPE=LARGE parameter. A large format dataset is still limited to 16 extents per volume, but it can hold 16,777,215 tracks per volume.
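
In JCL, that’s an ordinary allocation with one extra keyword (the data set name and space values below are illustrative):

//LARGESEQ JOB (ACCT),'LARGE SEQ',CLASS=A,MSGCLASS=X
//* DSNTYPE=LARGE lifts the 65,535-track-per-volume ceiling;
//* the 16-extents-per-volume limit still applies.
//ALLOC    EXEC PGM=IEFBR14
//BIGDS    DD DSN=MY.LARGE.SEQDS,
//            DISP=(NEW,CATLG),
//            DSNTYPE=LARGE,
//            SPACE=(CYL,(5000,500)),
//            RECFM=FB,LRECL=80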

Because the limit of 16 extents is somewhat restrictive, you can use an extended-format sequential data set instead. Extended-format datasets are implicitly LARGE and can exceed 65,535 tracks, but more importantly, you can get up to 123 extents per volume, offering a few clear advantages. For one, an extended-format data set is far less likely to fail with an out-of-space error, but it can also be striped. With a single stripe, it’s just an ordinary sequential data set, but if it’s multi-striped, each volume can be read and written in parallel. For an application such as SMF data, where there’s a large amount of sequential data, it’s useful to stripe the data across multiple volumes so it can be read much more quickly.
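
Requesting extended format from JCL can look like the sketch below. Note that the stripe count comes from your SMS data class and storage class definitions rather than from a JCL keyword, so the storage class named here is purely a placeholder for one your storage administrator defines:

//EXTSEQ   JOB (ACCT),'EXT FORMAT',CLASS=A,MSGCLASS=X
//* DSNTYPE=EXTREQ insists on extended format (EXTPREF merely
//* prefers it); the data set must be SMS-managed.
//ALLOC    EXEC PGM=IEFBR14
//EXTDS    DD DSN=MY.EXTFMT.SEQDS,
//            DISP=(NEW,CATLG),
//            DSNTYPE=EXTREQ,
//            STORCLAS=SCSTRIPE,
//            SPACE=(CYL,(100,50)),
//            RECFM=FB,LRECL=80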

Wealth of z/OS Webinar Training on DTSSoftware.com

This is a simple breakdown of sequential data sets, but DTS Software has a wealth of additional information on partitioned datasets, PDSEs, generation data groups, hierarchical file structures, and UNIX services on z/OS. To learn more, click here to view our August 2021 webinar on demand: “PDS, PDSE, GDG, HFS, USS and Other Strange z/OS Animals.” In this presentation, DTS Software CTO Steve Pryor takes a deep dive into the peculiarities of storage elements on z/OS to help you make sense of these often confounding creatures.

Cyber Forensics — How Storage Plays a Critical Role in Security and Regulatory Compliance

Mainframe storage has changed a lot over the years, and the amount of it available in terms of both memory and disk/tape has grown substantially. Although this gradual progression has over time liberated mainframers from many long-standing limitations, careful storage management has remained a central tenet of mainframe culture and a major differentiator on the platform.

As General and eventual President Eisenhower once said, “Plans are useless. Planning is indispensable.” By understanding current storage availability and outlining future needs, mainframers are able to make heavy and advanced use of the resources and features available to them, and they can do so without interruption. According to Simson Garfinkel and Gene Spafford, Ph.D., in their book, Practical Unix and Internet Security, a computing system is secure when you can count on it to behave in the expected manner. Storage is clearly important because it impacts availability on the mainframe, but it can also offer insights from a more conventional InfoSec standpoint.

Expanding the Event Horizon to Mainframe Storage

On most platforms, external security managers (ESMs) or internal security handlers monitor user accesses, failed logins, permission changes, and other mainstream threat indicators. What they aren’t watching are certain events on the mainframe that could impact security from a storage management perspective, such as allocation, open/close, I/O, extents, and more.

At the same time, storage elements including generation data groups (GDGs), archives, and hierarchical storage management (HSM) products play a major role in overall functionality, which is why it’s crucial to have the System Management Facilities (SMF) component keeping records of salient happenings on the mainframe. Some of this record-keeping is automatic, but in some cases you also get to decide what’s salient. Those events might include logins, access to a secured resource, or something happening outside of the normal, everyday activity. Capturing relevant events in both SMF and other mainframe facilities allows you to view security issues (hopefully in real time) and send alerts for remediation.

Storage is critical to security because it offers another vantage point from which to verify that the mainframe is operating in its “expected manner.” When storage events are given the care and attention they deserve, they can help inform security and reliability improvements that protect your organization’s most valuable IT asset.

For more information about how storage impacts security on the mainframe, check out DTS Software’s webinar Aggregation without Aggravation: When Putting More Log Data in Your SIEM Is a Good Thing. The presentation, which features DTS Software CTO Steve Pryor and Mainframe Analytics Ltd. Founder Reg Harbeck, is available on demand.

Dive Into DTS Software’s “Mysteries of SMS” Webinar Series

In Spring 2021, DTS Software CTO Steve Pryor hosted four different webinars designed to walk attendees through the ins and outs of the four major SMS constructs: the Data Class, Storage Class, Management Class, and Storage Group.

Pryor has been designing and developing z/OS software for more than 30 years, and he’s a frequent speaker at SHARE and other major industry conferences. In the “Mysteries of SMS Series,” he brought his considerable experience to bear discussing the different values and attributes specified in each SMS construct, along with a host of performance and availability requirements that can directly impact the reliability of z/OS. For the Data Class, which is probably the most commonly used construct, Pryor paid particular attention to the values that interact with JCL and IDCAMS control statements, while the Storage Class webinar pointed out which attributes have become obsolete before turning to relevant topics including Guaranteed Space and Data Striping.

The Management Class installment addressed the eight pages of attributes detailing the backup, recovery, and availability needs of a dataset, along with other storage management components like OAM, disk storage products such as FDRABR and DFSMShsm, and DFSMSrmm tape management. “Mysteries of the Storage Group” concluded the four-part “Mysteries of SMS” series by examining the seven different types of Storage Groups, how each one impacts the availability of free space in the system, and best practices for defining and using them in DFSMS to maximize reliability.

Each webinar runs approximately an hour, and all are available on-demand by clicking here. In addition, we encourage you to spend time browsing our other educational offerings and please reach out to us if you have a topic you’d like to see covered.

USE CASE: How ACC Monarch Can Help As You Migrate Away From Obsolete Programs

Allocation Control Center (now ACC Monarch) was one of DTS Software’s very first products, and it’s difficult to describe exactly what it does because the potential use cases are so varied. In essence, ACC Monarch enforces a set of user-defined rules, or policies, somewhat analogous to ACS Routines. The Policy Rules Engine, which acts as the enforcer, runs under the Dynamic Install Facility (DIF) Started Task. Any time a job starts, or a dataset is CREATEd, OPENed, or DELETEd, the user can examine each dataset down to the attribute level and change attributes to ensure they meet broad, systemwide standards.

Over the years, it’s been interesting to hear about the many different ways users have taken advantage of ACC Monarch. In one recent example, a customer found ACC Monarch invaluable during a migration away from two obsolete programs: LISTUTIL and IEHPROGM.

First, the customer created a rule for the two programs that would identify which jobs and users were using them. By defining a new SMF record, PGM_STANDARD, the system would record information about the job, user, step, program, and more each time LISTUTIL or IEHPROGM was used. In addition to writing SMF records, ACC Monarch enabled the customer to send messages warning users that the two programs were obsolete and telling them which programs to use instead. Because IEHPROGM would still be needed for some jobs, the customer used ACC Monarch to allow for exceptions where the messages wouldn’t be sent.

In this case, all the variables within the customer’s SMF record were filled in at EXECUTE, offering a wealth of information about the jobs, users, and programs attempting to use the obsolete programs. Instead of sending a questionably effective company-wide email or requiring someone to pore over large quantities of JCL, ACC Monarch allowed the customer to send automated messages and pinpoint exactly where to focus its migration efforts moving forward.

At its core, ACC Monarch prevents the incorrect use of resources. It ensures that production resources such as datasets, logstreams, and volumes get used on the production system while test resources get used on the test system, and it can restrict the dates and times at which resources can be used. Perhaps most importantly, when allocations or usages are incorrect, ACC Monarch can intervene and correct them, saving customers the time and expense associated with manual efforts. For more information about the many ways DTS Software customers are using ACC Monarch, view our webinar on-demand or download our whitepaper today.