Invalid index name error - index name must be lowercase

I am getting an error in the Logstash logs saying that the index is invalid because the index name must be lowercase. However, the index name in the log is different from the index I declared in the logstash.config file: the log shows Tapas-sys, but in the config file I declared tsys. Tapas-sys is an event name I tried to use two months back; after seeing the message I changed it, but I am still seeing this error. Please help with this topic. This is Logstash version 7.2.0.

Please share your Logstash configuration; it is not clear what you are doing.

input {
  tcp {
    port => 12345
    codec => json {}
  }

  stdin {
    codec => json {}
  }
}

filter {
  date {
    match => [ "event_start_datetime", "MM/dd/yyyy HH-mm-ss.SSS" ]
    target => "event_start_datetime"
  }
  date {
    match => [ "event_end_datetime", "MM/dd/yyyy HH-mm-ss.SSS" ]
    target => "event_end_datetime"
  }
  date {
    match => [ "monitor_datetime", "MM/dd/yyyy HH-mm-ss" ]
    target => "monitor_datetime"
  }
}

output {
  if [type] == "syslog" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "ts091222"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "te091222"
    }
  }
}

This is the logstash.config file content.

Well, there is nothing wrong with this config.

What is the log you are receiving? Please share it using the preformatted text button, the </> button.

Paste your text, select it, and click that button.

[2022-09-13T03:00:29,403][ERROR][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"Tapas-Event", :_type=>"_doc", :routing=>nil}, #<LogStash::Event:0x42ebd4cf>], :response=>{"index"=>{"_index"=>"Tapas-Event", "_type"=>"_doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"invalid_index_name_exception", "reason"=>"Invalid index name [Tapas-Event], must be lowercase", "index_uuid"=>"_na_", "index"=>"Tapas-Event"}}}}
[2022-09-13T03:00:29,409][ERROR][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"Tapas-Event", :_type=>"_doc", :routing=>nil}, #<LogStash::Event:0x65e34690>], :response=>{"index"=>{"_index"=>"Tapas-Event", "_type"=>"_doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"invalid_index_name_exception", "reason"=>"Invalid index name [Tapas-Event], must be lowercase", "index_uuid"=>"_na_", "index"=>"Tapas-Event"}}}}
[2022-09-13T03:00:29,409][ERROR][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"Tapas-Event", :_type=>"_doc", :routing=>nil}, #<LogStash::Event:0x397ae3e8>], :response=>{"index"=>{"_index"=>"Tapas-Event", "_type"=>"_doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"invalid_index_name_exception", "reason"=>"Invalid index name [Tapas-Event], must be lowercase", "index_uuid"=>"_na_", "index"=>"Tapas-Event"}}}}

There is nothing in your configuration that would do that; you must have other pipelines running.

How are you running Logstash? What does your pipelines.yml look like?

nohup ./logstash -f logstash.config >> nohup.out 2>&1 &

This is the command used to start Logstash.

# - pipeline.id: test
#   pipeline.workers: 1
#   pipeline.batch.size: 1
#   config.string: "input { generator {} } filter { sleep { time => 1 } } output { stdout { codec => dots } }"
# - pipeline.id: another_test
#   queue.type: persisted
#   path.config: "/tmp/logstash/*.config"
#
# Available options:
#
#   # name of the pipeline
#   pipeline.id: mylogs
#
#   # The configuration string to be used by this pipeline
#   config.string: "input { generator {} } filter { sleep { time => 1 } } output { stdout { codec => dots } }"
#
#   # The path from where to read the configuration text
#   path.config: "/etc/conf.d/logstash/myconfig.cfg"
#
#   # How many worker threads execute the Filters+Outputs stage of the pipeline
#   pipeline.workers: 1 (actually defaults to number of CPUs)
#
#   # How many events to retrieve from inputs before sending to filters+workers
#   pipeline.batch.size: 125
#
#   # How long to wait in milliseconds while polling for the next event
#   # before dispatching an undersized batch to filters+outputs
#   pipeline.batch.delay: 50
#
#   # Internal queuing model, "memory" for legacy in-memory based queuing and
#   # "persisted" for disk-based acked queueing. Defaults is memory
#   queue.type: memory
#
#   # If using queue.type: persisted, the page data files size. The queue data consists of
#   # append-only data files separated into pages. Default is 64mb
#   queue.page_capacity: 64mb
#
#   # If using queue.type: persisted, the maximum number of unread events in the queue.
#   # Default is 0 (unlimited)
#   queue.max_events: 0
#
#   # If using queue.type: persisted, the total capacity of the queue in number of bytes.
#   # Default is 1024mb or 1gb
#   queue.max_bytes: 1024mb
#
#   # If using queue.type: persisted, the maximum number of acked events before forcing a checkpoint
#   # Default is 1024, 0 for unlimited
#   queue.checkpoint.acks: 1024
#
#   # If using queue.type: persisted, the maximum number of written events before forcing a checkpoint
#   # Default is 1024, 0 for unlimited
#   queue.checkpoint.writes: 1024
#
#   # If using queue.type: persisted, the interval in milliseconds when a checkpoint is forced on the head page
#   # Default is 1000, 0 for no periodic checkpoint.
#   queue.checkpoint.interval: 1000
#
#   # Enable Dead Letter Queueing for this pipeline.
#   dead_letter_queue.enable: false
#
#   If using dead_letter_queue.enable: true, the maximum size of dead letter queue for this pipeline. Entries
#   will be dropped if they would increase the size of the dead letter queue beyond this setting.
#   Default is 1024mb
#   dead_letter_queue.max_bytes: 1024mb
#
#   If using dead_letter_queue.enable: true, the directory path where the data files will be stored.
#   Default is path.data/dead_letter_queue
#
#   path.dead_letter_queue:

If you are starting Logstash from the command line with the -f parameter, then pipelines.yml is being ignored. That would not make any difference here anyway, as everything in it seems to be commented out.

What is the content of this logstash.config file? It may not be the same as the one you shared before.

Your issue is that you have an output trying to index into an index named Tapas-Event, which has uppercase letters, and those are not allowed. You need to find that output and edit the index name to lowercase.
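One way to locate the offending output is to search all your configuration files for the index name. A minimal sketch; the paths below are assumptions based on the install location mentioned later in this thread, so adjust them to wherever your configs actually live:

```shell
# Recursively search the likely Logstash config directories for the
# uppercase index name; -rn prints the file and line number of each match.
grep -rn "Tapas-Event" /etc/logstash/ /apps/eagle/ELK/logstash-7.2.0/config/ 2>/dev/null \
  || echo "not found in these paths"
```

Whichever file the match lands in is the configuration that is actually being loaded.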

I am trying to find where I am using this index name Tapas-Event. I am actually using ts091222 and te091222 as my two index names.

You will need to check your configurations and how you are running Logstash; maybe you are not running what you think you are.

The error is pretty clear: something is trying to create an index with the name Tapas-Event. The configuration you shared does not have any output with that index, so you have another configuration running.

Where can I find the other config files? Will they be in the logstash/config folder only, or could they be anywhere else?

Only you can know that. You need to check how you are running Logstash, whether you have another Logstash instance on your system, things like that.

Check the logstash process with ps for some hints.
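As a sketch of that check, listing the running processes shows the full command line, including which -f config file each Logstash instance was started with:

```shell
# List any running Logstash processes; the bracketed pattern keeps the
# grep command itself out of the results.
ps -ef | grep -i '[l]ogstash' || echo "no logstash process found"
```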

I killed the Elasticsearch instance first, which automatically killed the Kibana and Logstash instances.
Then I started the Elasticsearch, Kibana, and Logstash instances with the commands below, each from its respective bin folder. Then I ran the Python script to load the event and syslog files from a file.
nohup ./elasticsearch >> nohup.out 2>&1 &
nohup ./kibana >> nohup.out 2>&1 &
nohup ./logstash -f /apps/eagle/ELK/logstash-7.2.0/config/logstash.config >> nohup.out 2>&1 &
nohup python events_to_elk_2.py --mode dir --ls_host prf-strdata01-a01.eagleinvsys.com --ls_port 12345 --dir_path /apps/eagle/data/Engine-Logs/Engine-Logs/r42lp40/C4A/logs >> nohup.out 2>&1 &

Then after this I checked the log in the Logstash logs folder, and now I am seeing a different error:

 Pipeline_id:main
  Plugin: <LogStash::Inputs::Stdin codec=><LogStash::Codecs::JSON id=>"de311fb9-5222-4b8f-a9e4-fb4cfdcee32b", enable_metric=>true, charset=>"UTF-8">, id=>"daaabc0b607edb4ad36728e9ed3b570b67d18a5d0492129180e07863eee7062d", enable_metric=>true>
  Error: Bad file descriptor - Bad file descriptor
  Exception: Errno::EBADF
  Stack: com/jrubystdinchannel/StdinChannelLibrary.java:101:in `read'
/apps/eagle/ELK/logstash-7.2.0/vendor/bundle/jruby/2.5.0/gems/logstash-input-stdin-3.2.6/lib/logstash/inputs/stdin.rb:77:in `channel_read'
/apps/eagle/ELK/logstash-7.2.0/vendor/bundle/jruby/2.5.0/gems/logstash-input-stdin-3.2.6/lib/logstash/inputs/stdin.rb:37:in `run'
/apps/eagle/ELK/logstash-7.2.0/logstash-core/lib/logstash/java_pipeline.rb:309:in `inputworker'
/apps/eagle/ELK/logstash-7.2.0/logstash-core/lib/logstash/java_pipeline.rb:302:in `block in start_input'
[2022-09-15T04:47:31,854][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2022-09-15T04:47:32,105][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2022-09-15T04:47:32,823][ERROR][logstash.javapipeline    ] A plugin had an unrecoverable error. Will restart this plugin.
  Pipeline_id:main
  Plugin: <LogStash::Inputs::Stdin codec=><LogStash::Codecs::JSON id=>"de311fb9-5222-4b8f-a9e4-fb4cfdcee32b", enable_metric=>true, charset=>"UTF-8">, id=>"daaabc0b607edb4ad36728e9ed3b570b67d18a5d0492129180e07863eee7062d", enable_metric=>true>
  Error: Bad file descriptor - Bad file descriptor

I have never seen this error, but it is something related to your file system.

Do you need the stdin input? If you do not need it, I recommend removing it. The stdin input makes sense when you are testing things; running Logstash in the background with the stdin input does not make any sense.
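If you only receive events over TCP, a minimal sketch of the input block with the stdin input removed, based on the configuration shared earlier in this thread:

```
input {
  tcp {
    port => 12345
    codec => json {}
  }
}
```

With stdin gone, the Bad file descriptor error from the stdin plugin should also disappear, since nohup detaches the process from a usable stdin.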

I killed the elastic instance first so it automatically killed the kibana and logstash instances.

Killing Elasticsearch does not kill Logstash or Kibana; if this happened, then there is something weird in your system.

Any reason to use nohup instead of running as a service with systemd?

How do I stop Logstash (is there a command)? If I start Logstash, how long will it keep running?
Is there a command to find out how many instances of Logstash are running?

Usually you run Logstash as a service, so on distros that use systemd you can use these commands:

systemctl start logstash
systemctl restart logstash
systemctl stop logstash

It is not possible to run more than one. Maybe you are thinking of pipelines/listeners?
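Since Logstash here was started with nohup rather than as a systemd service, systemctl will not manage it. A sketch of checking and stopping it by process instead, assuming a Linux host with pgrep available:

```shell
# List every running Logstash process with its full command line (-a)
# matching against the whole command line (-f), or report none found.
pgrep -af logstash || echo "no Logstash instances running"

# Stop a specific instance gracefully by its PID from the listing above
# (<PID> is a placeholder; substitute the real process id):
# kill <PID>
```

A plain kill sends SIGTERM, which lets Logstash shut down its pipelines cleanly; a process started with nohup otherwise keeps running until it is killed or the machine reboots.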