Enabling Event Analysis to Spot Unusual Access Patterns With DTS Software’s ACC Monarch

The Direct Access Device Space Manager (DADSM) handles key functions in z/OS that dictate much of what happens to a dataset during its lifecycle. Creation and deletion are the most obvious, but this component can also extend a dataset to a new volume, release unused space using the partial release function, rename the dataset, and more. Just as on any other platform, datasets on z/OS have a largely predictable use pattern, which is why it’s a good idea to investigate when usage defies expectations. With the right solution in place, anomalies in the typical pattern of events can provide valuable insights to system administrators.

All DADSM events pass through system exit points, or control points, such as IGGPRE00 and IGGPOST0, where DTS Software’s ACC Monarch product can take control and perform an array of automated actions. Using a policy rules language, ACC Monarch evaluates IF statements against user-defined dataset characteristics. If the specified condition is met, the Dynamic Install Facility (DIF) started task performs the action.
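
To make the IF-style policy idea concrete, here is a minimal sketch in Python of how a rules engine of this kind evaluates user-defined conditions against dataset attributes at an exit point. The event fields, rule structure, and attribute names are hypothetical illustrations of the concept, not ACC Monarch’s actual policy rules language.

    # Conceptual sketch of IF-style policy evaluation at a DADSM exit point.
    # Field names and rule structure are hypothetical illustrations, not
    # ACC Monarch's actual policy rules language.

    def make_rule(condition, action):
        """Pair a predicate over dataset attributes with an action."""
        return {"condition": condition, "action": action}

    rules = [
        # IF a production dataset is being deleted, flag it.
        make_rule(
            condition=lambda ds: ds["event"] == "DELETE" and ds["dsn"].startswith("PROD."),
            action=lambda ds: print(f"ALERT: {ds['dsn']} deleted by {ds['user']}"),
        ),
    ]

    def on_dadsm_event(ds):
        """Called once per DADSM event; fires every rule whose condition holds."""
        for rule in rules:
            if rule["condition"](ds):
                rule["action"](ds)

    on_dadsm_event({"event": "DELETE", "dsn": "PROD.PAYROLL.MASTER", "user": "TSOUSR1"})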

A basic example of an action might be updating system control blocks, but actions can also include analysis operations such as writing records to DASD, writing records to a log file, or producing reports. These records can use an arbitrary, user-defined layout that isn’t necessarily an SMF record, and they can also be written directly to TCP/IP for analysis by Splunk or any other SIEM system. By enabling this kind of thorough analysis throughout the dataset lifecycle, organizations can spot unusual access patterns that could indicate a threat, and they can do it without needing to write assembler code.
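
To illustrate the write-directly-to-TCP/IP idea, the sketch below sends one user-defined event record as JSON to a SIEM’s raw TCP listener (Splunk, for instance, can be configured with a TCP data input). The host name, port number, and record fields are assumptions made for the example.

    import json
    import socket

    def forward_event(record, host="siem.example.com", port=5514):
        """Send one user-defined event record to a SIEM's raw TCP input.

        The host/port and record layout are illustrative assumptions; a real
        deployment would target whatever TCP input the SIEM exposes.
        """
        payload = (json.dumps(record) + "\n").encode("utf-8")
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall(payload)

    forward_event({
        "event": "RENAME",
        "dsn": "PROD.PAYROLL.MASTER",
        "user": "TSOUSR1",
        "jobname": "PAYJOB1",
    })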

For more information about how storage event awareness can contribute to security, we encourage you to view our recent webinar on TechChannel, “Aggregation without Aggravation: When Putting More Log Data in Your SIEM is a Good Thing.” DTS Software CTO Steve Pryor and veteran mainframe expert Reg Harbeck offer insights into how you can leverage DFSMS events in conjunction with your existing SIEM data to build a more complete picture of the threats facing your organization.

Whitepaper Download: A Data Center Without Enforceable Standards Risks Much More Than Mere Storage Mismanagement.


Breaking Down Sequential Data Sets and Their Limitations on z/OS®

Sequential data sets are the simplest form of data set, in contrast to partitioned data sets, which are useful when you want to keep several different kinds of data together in a single data set, and VSAM data sets, which support more sophisticated access by key or by relative record. Not surprisingly, sequential data sets are useful when you want to read the data in sequence, and when we specify the data set organization in the JCL or define the data set under ISPF, we code DSORG=PS (or occasionally PSU or DA).

Sequential data sets can be one of three formats: basic, large, or extended. If you don’t specify anything out of the ordinary, you get a basic format data set, which is simply a collection of extents on disk pointed to by the Volume Table of Contents (VTOC) and, for SMS-managed data sets, the VSAM Volume Data Set (VVDS). Because it’s ordinary sequential data, you can use the Queued or Basic Sequential Access Methods (QSAM or BSAM) to write it, but it has to be written sequentially: before you write record two, you must write record one.
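
The write-in-order constraint is easy to picture with a fixed-length-record analogy in Python: records go out strictly one after another. This is a rough stand-in for QSAM-style processing with RECFM=FB, not actual access-method code.

    # Rough analogy for QSAM-style sequential writing: fixed-length records,
    # written strictly in order. Ordinary Python I/O, not access-method code.
    LRECL = 80  # fixed record length, as in RECFM=FB LRECL=80

    def write_sequential(path, records):
        with open(path, "wb") as f:
            for record in records:  # record one always precedes record two
                f.write(record.ljust(LRECL).encode("ascii"))

    write_sequential("seq.dat", ["FIRST RECORD", "SECOND RECORD"])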

Sequential data sets date back to the earliest days of the operating system (MVS and, before it, OS/360 in the 1960s), and they carry some limitations from that era. A basic format data set can have no more than 16 extents per volume and a maximum of 65,535 tracks per volume. If you need to exceed that limit, you can create a large format sequential data set by specifying the DSNTYPE=LARGE parameter. A large format data set is still limited to 16 extents per volume, but it can hold up to 16,777,215 tracks per volume.
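
A quick way to internalize those limits is a small helper that, given an estimated size in tracks, reports which format can hold the data on a single volume. The constants are the per-volume limits just described; the function itself is purely illustrative.

    # Per-volume track limits for sequential data set formats (as described above).
    BASIC_MAX_TRACKS = 65_535        # basic format
    LARGE_MAX_TRACKS = 16_777_215    # DSNTYPE=LARGE

    def suggest_format(tracks_per_volume):
        """Illustrative helper: pick the smallest format that fits on one volume."""
        if tracks_per_volume <= BASIC_MAX_TRACKS:
            return "basic (default)"
        if tracks_per_volume <= LARGE_MAX_TRACKS:
            return "large (DSNTYPE=LARGE)"
        return "extended format and/or additional volumes"

    print(suggest_format(100_000))   # -> large (DSNTYPE=LARGE)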

Because the limit of 16 extents is somewhat restrictive, you can use an extended format sequential data set instead. Extended format data sets can exceed 65,535 tracks per volume without being declared LARGE, but more importantly, you can get up to 123 extents per volume, which offers a few clear advantages. For one, an extended format data set is far less likely to fail with an out-of-space error, and it can also be striped. With a single stripe, it’s just an ordinary sequential data set, but if it’s multi-striped, the stripes can be read and written in parallel across volumes. For an application such as SMF data, where there’s a large amount of sequential data, it’s useful to stripe the data across multiple volumes so it can be read much more quickly.
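
Striping is easiest to picture as round-robin placement: record n lands on stripe n mod k, so k volumes can be driven concurrently. The Python sketch below mimics that with threads over in-memory “stripes”; it is a conceptual model of why striped reads are faster, not how DFSMS actually performs striped I/O.

    from concurrent.futures import ThreadPoolExecutor

    # Conceptual model of a striped read: records are distributed round-robin
    # across stripes (volumes), so the stripes can be read in parallel and
    # the results interleaved back into logical order. Not actual DFSMS I/O.
    stripes = [
        ["rec0", "rec3", "rec6"],   # stripe 0 (volume 1)
        ["rec1", "rec4", "rec7"],   # stripe 1 (volume 2)
        ["rec2", "rec5", "rec8"],   # stripe 2 (volume 3)
    ]

    def read_stripe(stripe):
        return list(stripe)  # stands in for reading one volume sequentially

    with ThreadPoolExecutor(max_workers=len(stripes)) as pool:
        results = list(pool.map(read_stripe, stripes))

    # Re-interleave into logical record order: rec0, rec1, rec2, ...
    records = [rec for group in zip(*results) for rec in group]
    print(records)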

Wealth of z/OS Webinar Training on DTSSoftware.com

This is a simple breakdown of sequential data sets, but DTS Software has a wealth of additional information on partitioned data sets, PDSEs, generation data groups, the hierarchical file system, and UNIX System Services on z/OS. To learn more, click here to view our August 2021 webinar on demand: “PDS, PDSE, GDG, HFS, USS and Other Strange z/OS Animals.” In this presentation, DTS Software CTO Steve Pryor takes a deep dive into the peculiarities of storage elements on z/OS to help you make sense of these often confounding creatures.

Cyber Forensics — How Storage Plays a Critical Role in Security and Regulatory Compliance

Mainframe storage has changed a lot over the years, and the amount available, in terms of both memory and disk/tape, has grown substantially. Although this progression has gradually liberated mainframers from many long-standing limitations, careful storage management remains a central tenet of mainframe culture and a major differentiator on the platform.

As General and eventual President Eisenhower once said, “Plans are useless. Planning is indispensable.” By understanding current storage availability and outlining future needs, mainframers are able to make heavy and advanced use of the resources and features available to them, and they can do so without interruption. According to Simson Garfinkel and Gene Spafford, Ph.D., in their book, Practical Unix and Internet Security, a computing system is secure when you can count on it to behave in the expected manner. Storage is clearly important because it impacts availability on the mainframe, but it can also offer insights from a more conventional InfoSec standpoint.

Expanding the Event Horizon to Mainframe Storage

On most platforms, external security managers (ESMs) or internal security handlers monitor user accesses, failed logins, permission changes, and other mainstream threat indicators. What they aren’t watching are certain events on the mainframe that can impact security from a storage management perspective, such as allocation, open/close, I/O, extent processing, and more.

At the same time, storage elements including generation data groups (GDGs), archives, and hierarchical storage management (HSM) products play a major role in overall functionality, which is why it’s crucial to have the System Management Facilities (SMF) component keeping records of salient happenings on the mainframe. Some of this record-keeping is automatic, but in some cases you also get to decide what counts as salient. Those events might include logins, access to a secured resource, or activity that falls outside normal, everyday patterns. Capturing relevant events in SMF and other mainframe facilities allows you to surface security issues (ideally in real time) and send alerts for remediation.
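
In practice, “deciding what’s salient” amounts to a filter sitting between the event stream and the alert channel. Here is a minimal sketch of that pattern; the event types and the salience test are invented for the example rather than drawn from any SMF record inventory.

    # Minimal sketch of "decide what's salient, then alert." Event types and
    # the salience test are invented examples, not an SMF record inventory.
    SALIENT_EVENTS = {"LOGON_FAILURE", "SECURED_RESOURCE_ACCESS", "DATASET_DELETE"}

    def is_salient(event):
        return event["type"] in SALIENT_EVENTS or event.get("off_hours", False)

    def process(event_stream, alert):
        for event in event_stream:
            if is_salient(event):
                alert(event)  # e.g., forward to the SIEM in real time

    process(
        [{"type": "DATASET_OPEN"}, {"type": "DATASET_DELETE", "dsn": "PROD.GDG.G0001V00"}],
        alert=lambda e: print("ALERT:", e),
    )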

Storage is critical to security because it’s another vantage point from which to verify that the mainframe is operating in its “expected manner.” When storage events are given the care and attention they deserve, they can help inform security and reliability improvements that protect your organization’s most valuable IT asset.

For more information about how storage impacts security on the mainframe, check out DTS Software’s webinar Aggregation without Aggravation: When Putting More Log Data in Your SIEM Is a Good Thing. The presentation, which features DTS Software CTO Steve Pryor and Mainframe Analytics Ltd. Founder Reg Harbeck, is available on demand.

Dive Into DTS Software’s “Mysteries of SMS” Webinar Series

In Spring 2021, DTS Software CTO Steve Pryor hosted four different webinars designed to walk attendees through the ins and outs of the four major SMS constructs: the Data Class, Storage Class, Management Class, and Storage Group.

Pryor has been designing and developing z/OS software for more than 30 years, and he’s a frequent speaker at SHARE and other major industry conferences. In the “Mysteries of SMS” series, he brought his considerable experience to bear, discussing the values and attributes specified in each SMS construct along with a host of performance and availability requirements that can directly impact the reliability of z/OS. For the Data Class, which is probably the most commonly used construct, Pryor paid particular attention to the values that interact with JCL and IDCAMS control statements, while the Storage Class webinar pointed out which attributes have become obsolete before turning to relevant topics including Guaranteed Space and Data Striping.

The Management Class installment addressed the eight pages of attributes detailing the backup, recovery, and availability needs of a dataset, along with related storage management components: OAM, backup and migration products such as FDRABR and DFSMShsm, and DFSMSrmm tape management. “Mysteries of the Storage Group” concluded the four-part series by examining the seven different types of Storage Groups, how each one impacts the availability of free space in the system, and best practices for defining and using them in DFSMS to maximize reliability.

Each webinar runs approximately an hour, and all are available on demand by clicking here. In addition, we encourage you to spend time browsing our other educational offerings, and please reach out to us if you have a topic you’d like to see covered.


USE CASE: How ACC Monarch Can Help As You Migrate Away From Obsolete Programs

Allocation Control Center (now ACC Monarch) was one of DTS Software’s very first products, and it’s difficult to describe exactly what it does because the potential use cases are so varied. In essence, ACC Monarch enforces a set of user-defined rules, or policies, somewhat analogous to ACS routines. The Policy Rules Engine, which acts as the enforcer, runs under the Dynamic Install Facility (DIF) started task. Any time a job starts or a dataset is CREATEd, OPENed, or DELETEd, the rules can examine the dataset down to the attribute level and change attributes to ensure they meet broad, systemwide standards.

Over the years, it’s been interesting to hear about the many different ways users have taken advantage of ACC Monarch. In one recent example, a customer found ACC Monarch invaluable during a migration away from two obsolete programs: LISTUTIL and IEHPROGM.

First, the customer created a rule for the two programs to identify which jobs and users were using them. By defining a new SMF record, PGM_STANDARD, the system would record information about the job, user, step, program, and more each time LISTUTIL or IEHPROGM was used. In addition to writing SMF records, ACC Monarch enabled the customer to write messages warning users that the two programs were obsolete and telling them which programs to use instead. Because IEHPROGM would still be needed for some jobs, the customer used ACC Monarch to allow exceptions where the messages wouldn’t be sent.
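
The shape of that rule translates naturally into pseudologic. Below is a hedged Python rendering of the idea; the PGM_STANDARD field names, the exception list, and the callback functions are stand-ins for what the customer actually defined, not ACC Monarch syntax.

    # Sketch of the customer's rule: record usage of obsolete programs, then
    # warn the user unless the job is on the exception list. All names here
    # are stand-ins, not ACC Monarch syntax.
    OBSOLETE_PROGRAMS = {"LISTUTIL", "IEHPROGM"}
    EXCEPTION_JOBS = {"NIGHTLY1"}  # hypothetical jobs still allowed to run IEHPROGM

    def on_program_execute(ctx, write_smf, send_message):
        if ctx["program"] not in OBSOLETE_PROGRAMS:
            return
        # Always record usage in the user-defined PGM_STANDARD record.
        write_smf({"record": "PGM_STANDARD", "job": ctx["job"], "user": ctx["user"],
                   "step": ctx["step"], "program": ctx["program"]})
        # Warn unless this job is an allowed IEHPROGM exception.
        if not (ctx["program"] == "IEHPROGM" and ctx["job"] in EXCEPTION_JOBS):
            send_message(ctx["user"], f"{ctx['program']} is obsolete; please migrate this job.")

    on_program_execute(
        {"job": "TESTJOB1", "user": "TSOUSR1", "step": "STEP010", "program": "LISTUTIL"},
        write_smf=print,
        send_message=lambda user, msg: print(f"TO {user}: {msg}"),
    )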

In this case, all the variables within the customer’s SMF record were filled in at EXECUTE time, offering a wealth of information about the jobs, users, and programs attempting to use the obsolete utilities. Instead of sending a questionably effective company-wide email or requiring someone to pore over large quantities of JCL, ACC Monarch allowed the customer to send automated messages and pinpoint exactly where to focus its migration efforts.


Download Storage Allocation Whitepaper

At its core, ACC Monarch prevents the incorrect use of resources. It ensures that production resources such as datasets, logstreams, and volumes get used on the production system while test resources get used on the test system, and it can restrict the dates and times at which resources can be used. Perhaps most importantly, when allocations or usages are incorrect, ACC Monarch can intervene and correct them, saving customers the time and expense of manual efforts (the short sketch below illustrates the idea). For more information about the many ways DTS Software customers are using ACC Monarch, view our webinar on demand or download our whitepaper today.
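
“Intervene and correct” is simply a rule whose action rewrites the request instead of merely reporting it. This Python sketch redirects a test dataset allocated against production to the test storage pool; the pool names and attributes are hypothetical illustrations.

    # Sketch of "intervene and correct": a rule action that rewrites an
    # allocation request instead of just reporting it. Pool names and
    # attributes are hypothetical illustrations.
    def correct_allocation(request):
        if request["dsn"].startswith("TEST.") and request["pool"] == "PRODPOOL":
            request["pool"] = "TESTPOOL"   # redirect to the test storage pool
            request["corrected"] = True
        return request

    fixed = correct_allocation({"dsn": "TEST.UNIT.DATA", "pool": "PRODPOOL"})
    print(fixed)   # -> {'dsn': 'TEST.UNIT.DATA', 'pool': 'TESTPOOL', 'corrected': True}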

Shoring Up Security with SIEM

In 2005, Gartner’s Mark Nicolett and Amrit Williams described a new breed of security tool combining Security Information Management (SIM) with Security Event Management (SEM). Today, the Security Information and Event Management (SIEM) market exceeds $4 billion — a figure that’s expected to grow to $5.5 billion by 2025 — and these capable tools are helping organizations shore up security, meet their regulatory compliance goals, and preserve and generate business value.

SIEM is about gleaning insights from all kinds of machine data, which is generated by networks, databases, servers, mainframes, and applications. Once this data is aggregated, correlation techniques can be applied to produce meaningful information about everything from attempted cyberattacks on your systems to detailed reports about the number of devices or users operating on the network.
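
Correlation, at its simplest, means joining events from different sources on a shared key and applying a threshold. The toy Python rule below counts failed logins per user across feeds; the sources, field names, and threshold are invented for the example.

    from collections import Counter

    # Toy correlation rule: flag any user whose failed logins, across all
    # aggregated sources, reach a threshold. The threshold is deliberately
    # low for this toy data; real rules use higher limits and time windows.
    THRESHOLD = 2

    events = [
        {"source": "vpn",       "type": "LOGIN_FAIL", "user": "alice"},
        {"source": "mainframe", "type": "LOGIN_FAIL", "user": "alice"},
        {"source": "database",  "type": "LOGIN_OK",   "user": "bob"},
    ]

    failures = Counter(e["user"] for e in events if e["type"] == "LOGIN_FAIL")
    for user, count in failures.items():
        if count >= THRESHOLD:
            print(f"Possible credential attack against {user}: {count} failures")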

Since the idea’s inception, SIEM products have worked their way up the security tool hierarchy, and they’re now one of the more integral parts of a complete enterprise security posture. Here are the three main benefits a SIEM solution can bring to your organization.

  1. Security
    It’s in the name, so security is obviously the top priority of any SIEM tool. By correlating logs and event data from your networks, devices, other security systems, and more, a SIEM enables you to spot threats before they land and respond to incidents promptly and effectively. As Simson Garfinkel and Gene Spafford, Ph.D., explain in Practical Unix and Internet Security, a computer system is secure when you can rely on it to behave in the manner expected, and SIEM is all about preventing surprises.
  2. Regulatory Compliance
    SIEM adoption was originally spurred by regulations such as the Payment Card Industry Data Security Standard (PCI DSS) implemented in 2004. Today, the regulatory landscape has never been more crowded, and businesses hoping to achieve compliance with GDPR, CCPA, HIPAA, FISMA, and more will need to rely on all the tools at their disposal. While most regulations have no specific mention of SIEM tools, the capabilities offered by these solutions are essential to gather intelligence, perform log management, conduct analysis, and generate reports and visualizations.
  3. Business Value
    SIEM systems might have security as their top priority, but they can also help your team identify suboptimal network configurations, troubleshoot workflows, and so much more. Splunk, for example, can collect information from almost any source and use machine learning to identify patterns and conduct insightful analysis, turning any type of data into a source of value. With so much information generated in the modern enterprise, SIEM tools are increasingly becoming a means to sift through it all. In the coming years, they’ll be capable of improving all aspects of a business instead of just security.

SIEM tools are a fundamental part of security in today’s enterprises, but they don’t always include all the necessary information. To help get a better picture of your organization’s cyberthreat readiness, it’s a good idea to collect data on mainframe storage management events, including those not necessarily tracked by RACF, CA ACF2, CA Top Secret, and other External Security Managers on z/OS.

For more information, we encourage you to watch our July 14 webinar on demand. Hosted by TechChannel, it featured DTS Software CTO Steve Pryor and veteran mainframe analyst Reg Harbeck, who offered insights into how you can leverage DFSMS events in conjunction with your existing SIEM data to get a more accurate picture of the threats facing your most valuable IT asset.