Logstash for Mainframes

Is there a version of Logstash for z/OS UNIX on mainframes?

I’m sorry, I do not know the answer to your question.

However, for what it’s worth (@welcome2017, you might also be interested in this), I do have some experience forwarding a wide variety of z/OS-based “logs” (hundreds of different record types, including SMF records, IMS logs, and DB2 logs) off z/OS to an instance of Logstash running on Linux (for example, in a Docker container).

Specifically, I run z/OS batch jobs that use IBM Transaction Analysis Workbench for z/OS (“Workbench”) to extract and transform logs from their original—typically, proprietary binary—format, and then stream the data in JSON Lines format over a TCP network to a listening instance of Logstash on a remote Linux system.

Disclaimer: I am the author of the Workbench product documentation. The current published edition of that documentation describes forwarding to Elastic using a different method.

Can you offer any more details of your specific use case? For example, what type of logs do you want to forward?

Workbench JCL

The JCL for Workbench log forwarding batch jobs is concise and self-contained; it doesn’t refer to configuration files that you must first create with a separate configuration tool.

The following example forwards CICS monitoring facility (CMF) performance class (SMF 110) records from the dumped SMF data set 'SMF.MVS1(-1)' to TCP port 5044 on a remote Linux host named “elastic”:

//FUW2LS JOB NOTIFY=&SYSUID
//FUWBATCH EXEC PGM=FUWBATCH
//STEPLIB DD  DISP=SHR,
//            DSN=FUW.SFUWLINK
//SYSPRINT DD SYSOUT=*
//SMFIN001 DD DISP=SHR,
//            DSN=SMF.MVS1(-1)
//SYSIN DD *
STREAM NAME(LOGSTASH) TRANSPORT(TCP) +
       HOST(elastic) PORT(5044) +
       LINES FLAT OMITNULL NOTITLE FIELDCASE(LOWER) ASCII LF
JSON CODE(CMF) STREAM(LOGSTASH)
FIELDS(
* Insert the list of CMF field names you want to forward in each event
)
/*
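
For illustration, with those STREAM options each record arrives at Logstash as a single flat JSON object per line, with lowercase field names and null-valued fields omitted. The field names below are hypothetical, not actual CMF field names (apart from time and type, which the Logstash config later in this post expects); this is just to show the shape of an event:

{"time":"2017-01-05T10:15:30.123456","type":"cmf","tran":"PAYR","response_time":0.123}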

In addition to specifying which fields to forward, you can optionally specify filters that select which records to forward. Filters consist of one or more conditions based on field values: for example, you can select only records for transactions with a response time longer than 2 seconds, or records that match a particular transaction code or application ID pattern.

Logstash config

Here’s the corresponding Logstash config:

input {
  tcp {
    # Listen for the JSON Lines events streamed by the Workbench batch job
    port => 5044
    codec => json_lines
  }
}
filter {
  date {
    # Set @timestamp, if you want to; feel free to ask about the ellipsis
    match => ["time", "..."]
  }
}
output {
  elasticsearch {
    # Defaults to localhost:9200; add hosts => [...] for a remote Elasticsearch
    document_type => "%{type}"
    index => "fuw-%{+YYYY.MM.dd}-%{type}"
    manage_template => false
  }
}

If you run Logstash in a Docker container, then you’ll be using port forwarding, so the host port that Workbench sends to doesn’t have to match the port on which Logstash, inside the container, is listening.
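
For example, here’s a minimal sketch of running the official Logstash image with the host’s port 5044 published to the container’s port 5044 (the image tag and the pipeline bind mount are illustrative; adjust them to your environment):

docker run -d -p 5044:5044 \
  -v "$PWD/pipeline":/usr/share/logstash/pipeline \
  docker.elastic.co/logstash/logstash:5.6.16

In -p host:container, the first number is the port that Workbench sends to; the second is the port on which Logstash, inside the container, listens.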

(I haven’t included the corresponding Elasticsearch index template here. Setting aside a detail relating to that ellipsis for the time format, all it does is stop Elasticsearch analyzing string fields.)
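
For completeness, here’s a minimal sketch of such a template, assuming Elasticsearch 5.x and ignoring that time-format detail; the fuw-* pattern matches the index name in the Logstash output above, and the dynamic template maps every string field to the non-analyzed keyword type:

PUT _template/fuw
{
  "template": "fuw-*",
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "strings_as_keyword": {
            "match_mapping_type": "string",
            "mapping": { "type": "keyword" }
          }
        }
      ]
    }
  }
}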

Elasticsearch bulk API

Depending on your specific use case, you could choose to bypass Logstash and forward events directly to the Elasticsearch HTTP-based bulk API (which is what the Logstash elasticsearch output plugin uses). If you know how to programmatically parse the logs you want to forward, or you’re responsible for creating those logs in the first place, then you have many options for sending those logs over HTTP to Elasticsearch, including writing a little Java, or even using the z/OS port of cURL.
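
For example, here’s a minimal sketch of using cURL to send events to the bulk API on that same “elastic” host (the index, type, and field names are illustrative):

curl -XPOST 'http://elastic:9200/_bulk' \
     -H 'Content-Type: application/x-ndjson' \
     --data-binary @events.ndjson

where events.ndjson pairs an action line with each event body and ends with a trailing newline:

{"index":{"_index":"fuw-2017.01.05-cmf","_type":"cmf"}}
{"time":"2017-01-05T10:15:30.123456","tran":"PAYR","response_time":0.123}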
