Configuring Logstash

You need to configure Logstash to ingest data from ODP.

The starter dashboards require a Logstash pipeline config with the following characteristics:

  • Maps the value of the write_time field in the incoming JSON Lines to the @timestamp field.

    The starter dashboards use @timestamp as the event time stamp.

  • Writes to indices whose names match Kibana index patterns of the form omegamon-%{product_code}-%{table_name}-*

    The starter dashboards use index patterns such as omegamon-km5-ascpuutil-*.

  • Creates data streams (create action) rather than time-based indices.

Here is a starter Logstash pipeline config:

input {
  tcp {
    id => "odp_tcp_input"
    port => 15046
    codec => json_lines
  }
}
filter {
  date {
    match => ["write_time", "ISO8601"]
  }
  if [product_code] in ["kc5", "kd5", "kgw", "ki5", "kjj", "km5", "kmq", "kn3", "kqi", "ks3"] {
    mutate { add_field => { "[@metadata][index_namespace]" => "omegamon" } }
  } else {
    mutate { add_field => { "[@metadata][index_namespace]" => "odp" } }
  }
}
output {
  elasticsearch {
    id => "elasticsearch"
    hosts => ["elasticsearch:9200"]
    index => "%{[@metadata][index_namespace]}-%{product_code}-%{table_name}-ds"
    action => "create"
    manage_template => false
  }
}

This starter config assumes that you have configured the OMEGAMON Data Connect component of ODP to forward data over TCP in JSON Lines format.
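
For example, a single incoming JSON Lines event might look like the following. The field values and the cpu_percent attribute are illustrative assumptions, not taken from a real ODP feed:

{"write_time":"2024-01-01T00:00:00.000Z","product_code":"km5","table_name":"ascpuutil","cpu_percent":42}

With the starter config, the date filter maps write_time to @timestamp, and the output section uses product_code and table_name to build the data stream name: here, omegamon-km5-ascpuutil-ds.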

This starter config assumes unsecured TCP: no Transport Layer Security (SSL/TLS).

In input.tcp.port, specify the port on which to listen for data from ODP.

TIP

If you deploy Elastic Stack in Docker containers, then you need to understand the difference between port numbers exposed by the Docker host and port numbers used inside the Docker containers.
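
For example, assuming a Docker Compose service definition for Logstash similar to the following sketch (the host port 15047 is an assumption), ODP connects to port 15047 on the Docker host, while input.tcp.port refers to port 15046 inside the container:

services:
  logstash:
    ports:
      - "15047:15046"  # host port : container port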

In output.elasticsearch.hosts, specify the host name of the computer that is running Elasticsearch.

The supplied starter Docker Compose file automatically deploys a Logstash pipeline config that is similar to this starter config.

One data stream per table

The combination of the create action in this starter Logstash config and the data_stream object in the corresponding sample Elasticsearch index template causes Elasticsearch to store ODP data in data streams.

These data streams are table-specific: each combination of product code and table name has its own data stream.
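
As a sketch of the index template side: an index template that declares an empty data_stream object causes Elasticsearch to create a data stream, rather than a plain index, for create requests whose target matches the template's index patterns. The template name, pattern, and priority below are assumptions; use the sample index template that accompanies the starter dashboards:

PUT _index_template/omegamon-data-streams
{
  "index_patterns": ["omegamon-*"],
  "data_stream": {},
  "priority": 200
}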

Table-specific data streams (and backing indices) avoid the following issues:

  • Mapping conflicts: fields with the same names but different data types in different attribute tables
  • Mapping explosion: defining too many fields in an index

Single or multiple Logstash pipelines?

You need to know whether your instance of Logstash is for use only with these starter dashboards or is also used for other purposes, with other inputs. Specifically, you need to know whether your use of Logstash involves a single pipeline or multiple pipelines.

If you have installed a new instance of Elastic Stack as a sandbox environment for testing these starter dashboards, then you can use a single Logstash pipeline.

However, if you are using these starter dashboards in an existing instance of Elastic Stack that already has other inputs, then it is more likely that you will need to use multiple pipelines.

Single pipeline

If your instance of Logstash is for use only with these starter dashboards, then you can delete the contents of the default Logstash config directory, and then copy the supplied starter config file into that directory.

For example:

  1. Delete the contents of the default Logstash pipeline directory, /etc/logstash/conf.d/.

    The default Logstash pipeline directory path depends on your platform.

  2. Copy the starter config provided here to the file 10-omegamon-tcp-to-local-elasticsearch.conf in the default Logstash pipeline directory.
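
On a Linux system, assuming /etc/logstash/conf.d/ is the pipeline directory and the starter config file has been saved in the current directory, these steps might look like this:

sudo rm /etc/logstash/conf.d/*
sudo cp 10-omegamon-tcp-to-local-elasticsearch.conf /etc/logstash/conf.d/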

Multiple pipelines

For information about configuring multiple pipelines, see the Logstash documentation.
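
As a sketch, a pipelines.yml entry for this starter config might look like the following; the pipeline ID and config path are assumptions. Other pipelines keep their own entries in the same file:

- pipeline.id: omegamon
  path.config: "/etc/logstash/conf.d/10-omegamon-tcp-to-local-elasticsearch.conf"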

Refresh the Logstash config

Unless you have configured Logstash to automatically detect new pipeline configurations, stop and then restart Logstash.

For example, in the command shell of a Linux distribution that supports the service init system command wrapper, enter:

service logstash stop

Logstash can take a while to respond to that command (the signal to stop). If the response from that command ends with:

logstash stop failed; still running.

wait for several seconds, and then enter:

service logstash status

You want to see:

logstash is not running

Enter:

service logstash start
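
Alternatively, if you want Logstash to detect new and changed pipeline configurations without a restart, you can enable automatic config reloading in logstash.yml. For example:

config.reload.automatic: true
config.reload.interval: 3s  # how often to check for changes; 3s is the default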

Alternative input: Apache Kafka

If you have configured ODP to publish data in JSON format to Apache Kafka, then you can use the Kafka input plugin for Logstash to subscribe to that topic (or those topics).

Here is a rudimentary sample Kafka input for Logstash:

kafka {
  id => "omegamon_kafka_input"
  bootstrap_servers => "kafkaserver.example.com:9092"
  topics => ["omegamon_json"]
  codec => json
}
1
2
3
4
5
6

You can use this as a replacement for the TCP input in the previous starter config.
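
For example, the input section of the starter config would then become the following; the filter and output sections remain unchanged:

input {
  kafka {
    id => "omegamon_kafka_input"
    bootstrap_servers => "kafkaserver.example.com:9092"
    topics => ["omegamon_json"]
    codec => json
  }
}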
