Failed to execute action

I recently upgraded Logstash to 7.8.1 and am now unable to start it. Here is the error message I am getting:

[2020-08-18T22:56:00,488][ERROR][logstash.agent           ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
[2020-08-18T22:56:00,619][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2020-08-18T22:56:05,667][INFO ][logstash.runner          ] Logstash shut down.
[2020-08-18T22:56:18,927][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.8.1", "jruby.version"=>"jruby 9.2.11.1 (2.5.7) 2020-03-25 b1f55b1a40 OpenJDK 64-Bit Server VM 11.0.8+10-post-Ubuntu-0ubuntu118.04.1 on 11.0.8+10-post-Ubuntu-0ubuntu118.04.1 +indy +jit [linux-x86_64]"}
[2020-08-18T22:56:21,981][INFO ][org.reflections.Reflections] Reflections took 27 ms to scan 1 urls, producing 21 keys and 41 values
[2020-08-18T22:59:13,114][ERROR][logstash.agent           ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
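One quick way to narrow down an error like this is Logstash's built-in configuration test, which parses the pipeline config and exits without starting any inputs. The paths below assume the standard Debian/Ubuntu package layout; adjust them if your install differs:

sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit

If the config is syntactically valid it prints "Configuration OK"; otherwise it reports the file and position of the problem, which is usually more specific than the ConvergeResult error above.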

Here are my configs:
01_input.conf:

input {
        beats {
                port => 5044
                ssl => true
                ssl_certificate => "/etc/logstash/conf.d/certs/logstash-forwarder.crt"
                ssl_key => "/etc/logstash/conf.d/certs/logstash-forwarder.key"
        }
}

23_outputs.conf:

output {
        if "%ASA-" in [message] {
                s3 {
                        access_key_id => "Removed"
                        secret_access_key => "Removed"
                        region => "us-east-1"
                        bucket => "logs"
                        prefix => "logs/cisco-asa/%{+YYYY}/%{+MM}/%{+dd}"
                        size_file => "500000000"
                        time_file => "5"
                        codec => "json_lines"
                        storage_class => "STANDARD"
                }
        }
        if [pan_type] == "TRAFFIC" {
                s3 {
                        access_key_id => "Removed"
                        secret_access_key => "Removed"
                        region => "us-east-1"
                        bucket => "logs"
                        prefix => "logs/palo-alto/%{+YYYY}/%{+MM}/%{+dd}"
                        size_file => "500000000"
                        time_file => "5"
                        codec => "json_lines"
                        storage_class => "STANDARD"
                }
        }
        else if [pan_type] == "THREAT" {
                s3 {
                        access_key_id => "Removed"
                        secret_access_key => "Removed"
                        region => "us-east-1"
                        bucket => "logs"
                        prefix => "logs/palo-alto/%{+YYYY}/%{+MM}/%{+dd}"
                        size_file => "500000000"
                        time_file => "5"
                        codec => "json_lines"
                        storage_class => "STANDARD"
                }
        }
        else if [pan_type] == "SYSTEM" {
                s3 {
                        access_key_id => "Removed"
                        secret_access_key => "Removed"
                        region => "us-east-1"
                        bucket => "logs"
                        prefix => "logs/palo-alto/%{+YYYY}/%{+MM}/%{+dd}"
                        size_file => "500000000"
                        time_file => "5"
                        codec => "json_lines"
                        storage_class => "STANDARD"
                }
        }
}

logstash.yml:

node.name: logstash
path.data: /var/lib/logstash
log.level: info
path.logs: /var/log/logstash

FWIW, I am using an AWS EC2 instance running Ubuntu 18.04.

The error message seems to point to the pipelines file, but I don't see any issues with it.
pipelines.yml:

- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"
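As a quick sanity check (not a definitive diagnosis), you can confirm which files that glob actually matches:

ls -l /etc/logstash/conf.d/*.conf

Every file listed here will be concatenated into the main pipeline, so a broken stray .conf file in that directory would also cause the pipeline creation to fail.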

Could this be a permissions issue? From the error message, it appears Logstash is failing to create the main pipeline.
What should the permissions be for the Logstash directory?
Here is what I currently have set:

-rw-r--r--  1 logstash logstash 10706 Aug 19 16:23 logstash.yml
-rw-r--r--  1 logstash logstash   285 Aug 11 23:57 pipelines.yml
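Those file modes look fine, but the parent directories also need to be traversable by the logstash user. A simple way to check both (assuming the service runs as user logstash, which is the package default) is:

ls -ld /etc/logstash /etc/logstash/conf.d
sudo -u logstash head -n 1 /etc/logstash/conf.d/01_input.conf

If the head command fails with "Permission denied", a directory in the path is blocking access even though the files themselves are world-readable.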

@aaron-nimocks @Badger

I created a new conf file titled ifo.conf, tried the following config, and it works:

input {
  beats {
    port => 5044
  }
}

output {
  stdout { }
}

So I believe it's a permissions issue, but I'm not sure what is actually wrong with the permissions I have set. The permissions are the same for the new conf file I just created and the conf files I want to use; see below:

-rw-r--r-- 1 logstash logstash  221 Aug 19 18:17 01_input.conf
-rw-r--r-- 1 logstash logstash 8469 Aug 19 15:48 02_cisco-asa.conf
-rw-r--r-- 1 logstash logstash 6719 Aug 19 15:48 03_palo-alto.conf
-rw-r--r-- 1 logstash logstash 2620 Aug 19 15:49 10_output.conf
-rw-r--r-- 1 logstash logstash   96 Aug 19 18:31 ifo.conf

I'd just start with step-by-step testing.

input {
        beats {
                port => 5044
                ssl => true
                ssl_certificate => "/etc/logstash/conf.d/certs/logstash-forwarder.crt"
                ssl_key => "/etc/logstash/conf.d/certs/logstash-forwarder.key"
        }
}
output {
  stdout { }
}

Run this and see if it works. If it does, you've verified the input. Then add in the filter, if you have one. Then add the outputs back one by one until you hit the error.
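For the output step, one hedged approach is to keep the stdout output in place and re-enable a single conditional block at a time, so the first run after the failing block is added identifies the culprit. For example, a first iteration might look like this (fields abbreviated; fill in your real s3 settings):

output {
        stdout { }
        if "%ASA-" in [message] {
                s3 {
                        access_key_id => "Removed"
                        secret_access_key => "Removed"
                        region => "us-east-1"
                        bucket => "logs"
                        prefix => "logs/cisco-asa/%{+YYYY}/%{+MM}/%{+dd}"
                        codec => "json_lines"
                }
        }
}

If that starts cleanly, add the next conditional, restart, and repeat.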

I know it doesn't sound fun but nothing is really sticking out to me at a quick look.

Thanks for the suggestion. I went through each filter conf file I want to use, and the culprit was the Cisco ASA conf file. Nothing was actually wrong with that conf file itself, but I remembered that I had added additional Cisco firewall tags to my grok pattern, which meant I also had to update the firewalls pattern file.

For anybody else who comes across this issue: make sure you double-check the firewalls file located at /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-patterns-core-4.1.2/patterns to verify that each ciscotag you use in your filter is listed in that file.
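A quick way to check for a given tag is to grep the patterns file directly. The exact path is version-dependent (the gem version in the path changes between Logstash releases), and CISCOFW106023 below is just an illustrative pattern name; substitute the ciscotag your filter references:

grep 'CISCOFW106023' /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-patterns-core-4.1.2/patterns/firewalls

No output means the pattern is missing and grok will fail to compile, which can surface as exactly this kind of pipeline-creation error.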

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.