My logstash conf file is not creating a test field

I am using ELK 7.6.1.
My configuration file /etc/logstash/conf.d/logstash-cowrie.conf does not appear to be taken into account. The logs show, however, that the file is being read (see below). So it is probably just not doing what I'd expect it to do.

[2020-03-16T09:09:04,203][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/etc/logstash/conf.d/logstash-cowrie.conf"], :thread=>"#<Thread:0x2dfe0b84 run>"}

In my configuration file, for instance, I ask it to add a field (just a test - see axelle-test). And it doesn't do it :frowning:

input {
    beats {
        port => 5044
        type => "cowrie"
    }
}

filter {
#    if [type] == "cowrie" {

    json {
        source => message
        add_field => { "axelle-test" => "blah blah" }
    }

At first, as you can see, I was only doing that for documents of type cowrie. In case that was not being set correctly, I decided to do it in all cases by removing the if. But still no axelle-test field in my logs :frowning:

Hi there,

apart from the fact that you're not closing the filter section with a closing curly bracket, can you post here what that pipeline writes to standard output, simply by adding the section

output {
  stdout{}
}

Thanks

Hi,
I am sorry, I am new to logstash: how should I see the output of the pipeline? I launched logstash manually but, as you can see below, did not get any output for the logs:

$ sudo /usr/share/logstash/bin/logstash -f ./logstash-cowrie.conf
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.9.0.jar) to method sun.nio.ch.NativeThread.signal(long)
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2020-03-16 10:24:03.229 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2020-03-16 10:24:03.251 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"7.6.1"}
[INFO ] 2020-03-16 10:24:06.916 [Converge PipelineAction::Create<main>] Reflections - Reflections took 49 ms to scan 1 urls, producing 20 keys and 40 values 
[INFO ] 2020-03-16 10:24:07.952 [[main]-pipeline-manager] geoip - Using geoip database {:path=>"/opt/logstash/vendor/geoip/GeoLite2-City.mmdb"}
[WARN ] 2020-03-16 10:24:08.241 [[main]-pipeline-manager] LazyDelegatingGauge - A gauge metric of an unknown type (org.jruby.RubyArray) has been create for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[INFO ] 2020-03-16 10:24:08.245 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/etc/logstash/conf.d/logstash-cowrie.conf"], :thread=>"#<Thread:0x5b9d692c run>"}
[INFO ] 2020-03-16 10:24:10.075 [[main]-pipeline-manager] beats - Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[INFO ] 2020-03-16 10:24:10.121 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2020-03-16 10:24:10.224 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2020-03-16 10:24:10.311 [[main]<beats] Server - Starting server on port: 5044
[INFO ] 2020-03-16 10:24:10.707 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}

Hi,

no problem. Two questions:

1 - Is there any beats agent actually writing on the 5044 port?

2 - Can you share here your whole conf file (properly indented and formatted)?
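(For question 1, a quick sketch of how one could check from the shell; this assumes a Linux host with `ss` available, and the default 5044 port used in this thread:)

# Is Logstash actually listening on 5044?
sudo ss -tlnp | grep 5044

# Is any beat currently connected to that port?
sudo ss -tnp | grep 5044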

Thanks

Hi,

1 - Yes, I have beats currently running and they are sending to port 5044. It works: in the end I see them in Kibana :slight_smile:

2- My full logstash configuration file:

input {
    beats {
        port => 5044
        # type => "cowrie"
    }
}

filter {
#    if [fields][type] == "cowrie" {
    json {
        source => message
        add_field => { "axelle-test" => "blah blah" }
    }
    date {
        match => [ "timestamp", "ISO8601" ]
    }
    if [src_ip] {
        mutate {
            add_field => { "src_host" => "%{src_ip}" }
        }

        dns {
            reverse => [ "src_host" ]
            nameserver => [ "8.8.8.8", "8.8.4.4" ]
            action => "replace"
            hit_cache_size => 4096
            hit_cache_ttl => 900
            failed_cache_size => 512
            failed_cache_ttl => 900
        }

        geoip {
            source => "src_ip"
            target => "geoip"
            database => "/opt/logstash/vendor/geoip/GeoLite2-City.mmdb"
        }
    }

    mutate {
        remove_tag => [ "beats_input_codec_plain_applied" ]
        remove_field => [ "log.file.path", "log.offset" ]
    }
#   } -- commented if [fields][type]
}

output {
       stdout{}
    # if [type] == "cowrie" {
    #     elasticsearch {
    #         hosts => ["localhost:9200"]
    #         # index is configured by ILM
    #     }
    #     file {
    #         path => "/tmp/cowrie-logstash.json"
    #         codec => json
    #     }
    #     stdout {
    #         codec => rubydebug { metadata => true }
    #     }
    # }
}

Ok so, can you post your beats configuration here too? Because if you don't see anything in stdout, it means your Logstash is not receiving anything.

Kibana receives the beats for sure, because I see them.

My filebeat.yml config. Most of it is default, except

  • log input: I read from cowrie (honeypot)
  • added 2 fields (for testing)
  • output to elasticsearch localhost 9200 (is that wrong and should I just be outputting to 5044?)
  • logging.level to debug (for troubleshooting) and to /var/log/filebeat

Those are the only modifications I have to the default filebeat.yml, all the other lines are commented:

filebeat.modules:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/axelle/cowrie/var/log/cowrie/cowrie.json*
  encoding: plain
  fields:
     type: cowrie
     axelle: test
- type: google-pubsub
  enabled: false
  project_id: my-gcp-project-id
  topic: my-gcp-pubsub-topic-name
  subscription.name: my-gcp-pubsub-subscription-name
  credentials_file: ${path.config}/my-pubsub-subscriber-credentials.json
output.elasticsearch:
  hosts: ["localhost:9200"]
setup.template.settings:
setup.kibana:
logging.level: debug
logging.selectors: [ "*" ]
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  rotateeverybytes: 10485760 # = 10MB
  keepfiles: 7
  permissions: 0600
  interval: 24h

I shouldn't have that google-pubsub... I wonder why it is there!

output to elasticsearch localhost 9200 (is that wrong and should I just be outputting to 5044?)

This is exactly the point. You're not outputting anything to the poor Logstash, who's doomed to listen on port 5044 all alone, without receiving anything at all.

Of course you see the logs in Kibana, since you're firing them directly to elasticsearch. That way though, you'll never see any changes made in Logstash.

You should enable the output.logstash section in your filebeat.yml (which outputs events to 5044 by default, if I'm not wrong) and disable the output to elasticsearch.

Then, in Logstash, you'll output your events to elasticsearch.

1 Like

That makes sense indeed.
So, I modified filebeat to send to logstash. I do see the logs being processed by logstash, with, for instance, my added field, so that's good :slight_smile:

But now Kibana does not receive the data any longer :frowning:

This is my new filebeat configuration. I turned off output to elasticsearch and am sending to logstash on localhost:5044

filebeat.modules:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/user/cowrie/var/log/cowrie/cowrie.json*
  encoding: plain
  fields:
     type: cowrie
     axelle: test
output.elasticsearch:
  enabled: false
output.logstash:
  enabled: true
  hosts: ["localhost:5044"]
  worker: 1
setup.template.settings:
setup.kibana:
logging.level: debug
logging.selectors: [ "*" ]
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  rotateeverybytes: 10485760 # = 10MB
  keepfiles: 7
  permissions: 0600
  interval: 24h

This is my logstash conf. To my understanding, it is sending to elasticsearch on localhost:9200

input {
    beats {
        port => 5044
        type => "cowrie"
    }
}

filter {
    if [type] == "cowrie" {
        json {
            source => message
            add_field => { "axelle-test" => "blah blah" }
        }
        date {
            match => [ "timestamp", "ISO8601" ]
        }

        if [src_ip] {
            mutate {
                add_field => { "src_host" => "%{src_ip}" }
            }

            dns {
                reverse => [ "src_host" ]
                nameserver => [ "8.8.8.8", "8.8.4.4" ]
                action => "replace"
                hit_cache_size => 4096
                hit_cache_ttl => 900
                failed_cache_size => 512
                failed_cache_ttl => 900
            }

            geoip {
                source => "src_ip"
                target => "geoip"
                database => "/opt/logstash/vendor/geoip/GeoLite2-City.mmdb"
                #add_field => [ "[geoip][location]", "%{[geoip][longitude]}" ]
                #add_field => [ "[geoip][location]", "%{[geoip][latitude]}" ]
            }
        }

        mutate {
            # convert => [ "[geoip][coordinates]", "float" ]
            remove_tag => [ "beats_input_codec_plain_applied" ]
            remove_field => [ "log.file.path", "log.offset" ]
        }
    }
}

output {
    if [type] == "cowrie" {
        elasticsearch {
            hosts => ["localhost:9200"]
            # index is configured by ILM
        }
        file {
            path => "/tmp/cowrie-logstash.json"
            codec => json
        }
        stdout {
            codec => rubydebug { metadata => true }
        }
    }
}

Hi there,

output.elasticsearch:
  enabled: false

This is not needed. Just comment out the section.

# index is configured by ILM

What does this mean? Have you tried setting an index here?

This is not needed. Just comment out the section.

Sure. It was kind of a way to remember this was intentional :slight_smile:

What does this mean? Have you tried setting an index here?

Yes, concerning another issue I had, somebody made me realize that setting the index in the logstash configuration file was overridden by Filebeat's ILM feature, which is enabled by default. The default index name is filebeat-version-date-number. See the setup.ilm.* entries in filebeat.yml.
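(For reference, a sketch of the relevant filebeat.yml knobs if one wanted Filebeat itself to stop managing the index name when shipping straight to elasticsearch; the cowrie-* names here are just examples, not from this thread, and the setup.template.* entries are required once ILM is disabled:)

# filebeat.yml -- only relevant when filebeat outputs directly to elasticsearch
setup.ilm.enabled: false
setup.template.name: "cowrie"
setup.template.pattern: "cowrie-*"
output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "cowrie-%{[agent.version]}-%{+yyyy.MM.dd}"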

Before you start Logstash in production, test your configuration file. If you run Logstash from the command line, you can specify parameters that will verify your configuration for you. This will run through your configuration, verify the configuration syntax, and then exit.
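(Concretely, a sketch assuming the paths used earlier in this thread; the flag is --config.test_and_exit:)

# Parse and validate the config, then exit; reports "Configuration OK" on success
sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-cowrie.conf --config.test_and_exit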

The logstash configuration file looks correct at this stage, and I get no error when running it manually.

OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.9.0.jar) to method sun.nio.ch.NativeThread.signal(long)
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2020-03-17T09:15:11,373][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-03-17T09:15:11,620][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.6.1"}
[2020-03-17T09:15:14,861][INFO ][org.reflections.Reflections] Reflections took 67 ms to scan 1 urls, producing 20 keys and 40 values 
[2020-03-17T09:15:16,777][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2020-03-17T09:15:17,123][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2020-03-17T09:15:17,257][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2020-03-17T09:15:17,263][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2020-03-17T09:15:17,381][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2020-03-17T09:15:17,465][INFO ][logstash.filters.geoip   ][main] Using geoip database {:path=>"/opt/logstash/vendor/geoip/GeoLite2-City.mmdb"}
[2020-03-17T09:15:17,468][INFO ][logstash.outputs.elasticsearch][main] Using default mapping template
[2020-03-17T09:15:17,650][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1, "index.lifecycle.name"=>"logstash-policy", "index.lifecycle.rollover_alias"=>"logstash"}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2020-03-17T09:15:17,749][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][main] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2020-03-17T09:15:17,755][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/etc/logstash/conf.d/logstash-cowrie.conf"], :thread=>"#<Thread:0x116ac182 run>"}
[2020-03-17T09:15:19,601][INFO ][logstash.inputs.beats    ][main] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2020-03-17T09:15:19,616][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2020-03-17T09:15:19,746][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-03-17T09:15:19,895][INFO ][org.logstash.beats.Server][main] Starting server on port: 5044
[2020-03-17T09:15:20,287][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

I even see my events being processed. But I don't get anything on Kibana :frowning:

Mar 17 09:20:08 instance-39 logstash[10685]:                 "log" => {
Mar 17 09:20:08 instance-39 logstash[10685]:           "file" => {
Mar 17 09:20:08 instance-39 logstash[10685]:             "path" => "/home/user/cowrie/var/log/cowrie/cowrie.json"
Mar 17 09:20:08 instance-39 logstash[10685]:         },
Mar 17 09:20:08 instance-39 logstash[10685]:         "offset" => 3407571
Mar 17 09:20:08 instance-39 logstash[10685]:     },
Mar 17 09:20:08 instance-39 logstash[10685]:             "session" => "21831af1e664",
Mar 17 09:20:08 instance-39 logstash[10685]:               "hassh" => "d7f0b97eb79d6533c492545cc6a207e1",
Mar 17 09:20:08 instance-39 logstash[10685]:               "encCS" => [
Mar 17 09:20:08 instance-39 logstash[10685]:         [0] "aes256-ctr",
Mar 17 09:20:08 instance-39 logstash[10685]:         [1] "aes192-ctr",
Mar 17 09:20:08 instance-39 logstash[10685]:         [2] "aes128-ctr",
Mar 17 09:20:08 instance-39 logstash[10685]:         [3] "aes256-cbc",
Mar 17 09:20:08 instance-39 logstash[10685]:         [4] "aes192-cbc",
Mar 17 09:20:08 instance-39 logstash[10685]:         [5] "aes128-cbc",
Mar 17 09:20:08 instance-39 logstash[10685]:         [6] "blowfish-cbc",
Mar 17 09:20:08 instance-39 logstash[10685]:         [7] "3des-cbc"
Mar 17 09:20:08 instance-39 logstash[10685]:     ],
Mar 17 09:20:08 instance-39 logstash[10685]:             "keyAlgs" => [
Mar 17 09:20:08 instance-39 logstash[10685]:         [0] "ssh-ed25519",
Mar 17 09:20:08 instance-39 logstash[10685]:         [1] "ecdsa-sha2-nistp256",
Mar 17 09:20:08 instance-39 logstash[10685]:         [2] "ecdsa-sha2-nistp384",
Mar 17 09:20:08 instance-39 logstash[10685]:         [3] "ecdsa-sha2-nistp521",
Mar 17 09:20:08 instance-39 logstash[10685]:         [4] "ssh-rsa",
Mar 17 09:20:08 instance-39 logstash[10685]:         [5] "ssh-dss"
Mar 17 09:20:08 instance-39 logstash[10685]:     ],
Mar 17 09:20:08 instance-39 logstash[10685]:              "langCS" => [
Mar 17 09:20:08 instance-39 logstash[10685]:         [0] ""
Mar 17 09:20:08 instance-39 logstash[10685]:     ],
Mar 17 09:20:08 instance-39 logstash[10685]:                "type" => "cowrie",
Mar 17 09:20:08 instance-39 logstash[10685]:              "src_ip" => "XXXXX",
Mar 17 09:20:08 instance-39 logstash[10685]:               "macCS" => [
Mar 17 09:20:08 instance-39 logstash[10685]:         [0] "hmac-sha2-256",
Mar 17 09:20:08 instance-39 logstash[10685]:         [1] "hmac-sha2-512",
Mar 17 09:20:08 instance-39 logstash[10685]:         [2] "hmac-sha1"
Mar 17 09:20:08 instance-39 logstash[10685]:     ],
Mar 17 09:20:08 instance-39 logstash[10685]:                 "ecs" => {
Mar 17 09:20:08 instance-39 logstash[10685]:         "version" => "1.4.0"
Mar 17 09:20:08 instance-39 logstash[10685]:     },
Mar 17 09:20:08 instance-39 logstash[10685]:            "@version" => "1",
Mar 17 09:20:08 instance-39 logstash[10685]:                "host" => {
Mar 17 09:20:08 instance-39 logstash[10685]:         "name" => "instance-39"
Mar 17 09:20:08 instance-39 logstash[10685]:     },
Mar 17 09:20:08 instance-39 logstash[10685]:             "kexAlgs" => [
Mar 17 09:20:08 instance-39 logstash[10685]:         [0] "curve25519-sha256",
Mar 17 09:20:08 instance-39 logstash[10685]:         [1] "curve25519-sha256@libssh.org",
Mar 17 09:20:08 instance-39 logstash[10685]:         [2] "ecdh-sha2-nistp256",
Mar 17 09:20:08 instance-39 logstash[10685]:         [3] "ecdh-sha2-nistp384",
Mar 17 09:20:08 instance-39 logstash[10685]:         [4] "ecdh-sha2-nistp521",
Mar 17 09:20:08 instance-39 logstash[10685]:         [5] "diffie-hellman-group14-sha1",
Mar 17 09:20:08 instance-39 logstash[10685]:         [6] "diffie-hellman-group1-sha1"
Mar 17 09:20:08 instance-39 logstash[10685]:     ],
Mar 17 09:20:08 instance-39 logstash[10685]:           "timestamp" => "2020-03-17T08:55:57.561664Z",
Mar 17 09:20:08 instance-39 logstash[10685]:               "geoip" => {
Mar 17 09:20:08 instance-39 logstash[10685]:              "city_name" => "Shanghai",
Mar 17 09:20:08 instance-39 logstash[10685]:               "timezone" => "Asia/Shanghai",
Mar 17 09:20:08 instance-39 logstash[10685]:               "latitude" => 31.0449,
Mar 17 09:20:08 instance-39 logstash[10685]:                     "ip" => "XXXXXXXXXXX",
Mar 17 09:20:08 instance-39 logstash[10685]:           "country_name" => "China",
Mar 17 09:20:08 instance-39 logstash[10685]:          "country_code2" => "CN",
Mar 17 09:20:08 instance-39 logstash[10685]:         "continent_code" => "AS",
Mar 17 09:20:08 instance-39 logstash[10685]:          "country_code3" => "CN",
Mar 17 09:20:08 instance-39 logstash[10685]:            "region_name" => "Shanghai",
Mar 17 09:20:08 instance-39 logstash[10685]:               "location" => {
Mar 17 09:20:08 instance-39 logstash[10685]:             "lon" => 121.4012,
Mar 17 09:20:08 instance-39 logstash[10685]:             "lat" => 31.0449
Mar 17 09:20:08 instance-39 logstash[10685]:         },
Mar 17 09:20:08 instance-39 logstash[10685]:            "region_code" => "SH",
Mar 17 09:20:08 instance-39 logstash[10685]:              "longitude" => 121.4012
Mar 17 09:20:08 instance-39 logstash[10685]:     },
Mar 17 09:20:08 instance-39 logstash[10685]:           "@metadata" => {
Mar 17 09:20:08 instance-39 logstash[10685]:         "ip_address" => "0:0:0:0:0:0:0:1",
Mar 17 09:20:08 instance-39 logstash[10685]:               "beat" => "filebeat",
Mar 17 09:20:08 instance-39 logstash[10685]:               "type" => "_doc",
Mar 17 09:20:08 instance-39 logstash[10685]:            "version" => "7.6.1"
Mar 17 09:20:08 instance-39 logstash[10685]:     },
Mar 17 09:20:38 instance-39 systemd-journald[234]: Suppressed 17354 messages from logstash.service
Mar 17 09:20:38 instance-39 logstash[10685]: [2020-03-17T09:20:38,266][INFO ][logstash.outputs.file    ][main] Closing file /tmp/cowrie-logstash.json

If you send events from filebeat to elastic, that happens for sure.

Not so sure about it if your events go through logstash first. If you don't specify an index in logstash it should set it to something like logstash-%{timestamp} if I remember well.

Have you tried looking for that pattern?

You are right. If filebeat sends to logstash, then it does not use the filebeat ILM index, and I need to specify the index in logstash.
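(For anyone landing here later, a minimal sketch of the elasticsearch output with an explicit index; the cowrie-* index name is just an example, not something from this thread:)

output {
    if [type] == "cowrie" {
        elasticsearch {
            hosts => ["localhost:9200"]
            index => "cowrie-%{+YYYY.MM.dd}"
        }
    }
}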

Thanks for your help!

No problem! :slight_smile: