Problem: Couldn't find any Elasticsearch data

Hi my friends, I installed Elasticsearch, Logstash, Kibana, and Filebeat on the same server, but my problem is that I can't visualize my data in Kibana. When trying to create an index, it gives me the error "Couldn't find any Elasticsearch data". When I check http://localhost:9200/_cat/indices?v I can't see any Logstash indices.
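For reference, the same check can be run from the command line (assuming curl is available and Elasticsearch is on its default port):

curl -X GET "http://localhost:9200/_cat/indices?v"

On a working setup this lists one row per index, including health, index name, and document count; if only the .kibana index shows up, nothing has been indexed yet.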

You are not giving anyone anything to work with: no information on your environment or anything. Please include as much information as possible: OS, versions, logs, etc.
Have you even set up Logstash to ingest any files? Or Filebeat?

I followed this tutorial (https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-ubuntu-16-04/) and did the configuration; I just changed elk-master to localhost.

You still haven't given us any logs or anything. Please read the instructions on how to format logs/code using format codes, and post them.

I have no errors; Kibana just shows the message mentioned in the first post. I checked: Logstash is running, and the same goes for Elasticsearch, Nginx, and Filebeat.

It looks to me like what you're trying to create is an index pattern, not an index, and as explained in the tutorial:

When you define an index pattern, the indices that match that pattern must exist in Elasticsearch and they must contain data.

So you will first have to start up Logstash and/or Filebeat in order to create and populate logstash-YYYY.MM.DD and filebeat-YYYY.MM.DD indices in your Elasticsearch instance. After that you can create index patterns for these indices in Kibana.
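To verify this before creating the index pattern, you can ask the _cat API for just the expected indices (a quick check, assuming the default index names and a local Elasticsearch on port 9200):

curl "http://localhost:9200/_cat/indices/logstash-*,filebeat-*?v"

If this comes back empty, the indices have not been created yet and the index pattern cannot match anything.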

Good luck!

Before connecting to Kibana I had already started Logstash, Filebeat, and Elasticsearch, but I have the same problem: I can't see a Filebeat index, I only find the Kibana indices.

Then I would guess that there is a problem with the configuration of Logstash and Metricbeat, or that there is a firewall stopping them from indexing data into Elasticsearch.

Perhaps you could try to run Metricbeat manually to see what kind of errors you get? Something like this:

metricbeat -e -c metricbeat.yml

When I run it manually I get information about host and port number that Metricbeat connects to, and if a connection was established:

2019-04-29T16:42:34.667+0200 INFO pipeline/output.go:95 Connecting to backoff(async(tcp://es-host:5044))
2019-04-29T16:42:34.696+0200 INFO pipeline/output.go:105 Connection to backoff(async(tcp://es-host:5044)) established

I hope you can get some useful error messages this way.

But in my configuration I use Filebeat, not Metricbeat. So can I use Metricbeat even if I didn't install it? And where is the metricbeat.yml file located?

Sorry, I forgot that. I have not used Filebeat but I assume you can run that manually too and get better error messages.
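A sketch of the equivalent Filebeat invocation, assuming the Debian/Ubuntu package layout with the configuration under /etc/filebeat:

filebeat -e -c /etc/filebeat/filebeat.yml

The -e flag makes Filebeat log to stderr instead of its log files, so connection attempts and publish errors show up directly in the terminal.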

This is what I obtain when I check the status and the content of filebeat.yml, logstash.yml, elasticsearch.yml, and kibana.yml.

This is kibana.yml:

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "localhost"
elasticsearch.url: "http://localhost:9200"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the server.rewriteBasePath setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# server.basePath or require that they are rewritten by your reverse proxy.
# This setting was effectively always false before Kibana 6.3 and will
# default to true starting in Kibana 7.0.
#server.rewriteBasePath: false

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"

# The URLs of the Elasticsearch instances to use for all your queries.
#elasticsearch.hosts: ["http://localhost:9200"]

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "user"
#elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send no client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
#i18n.locale: "en"
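With this configuration Kibana listens on localhost:5601 and talks to Elasticsearch at http://localhost:9200. A quick sanity check that both ends respond (assuming curl is installed):

curl http://localhost:9200
curl http://localhost:5601/api/status

The first should return a small JSON document with the cluster name and version; the second returns Kibana's own status report.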

This is what I have done in the first section of filebeat.yml:
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.

# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/auth.log
    - /var/log/syslog
  document_type: syslog
    #- /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  document_type: syslog
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  #multiline.match: after
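One detail worth pointing out in the section above: the input still has enabled: false, and in Filebeat 6.x an input is ignored until that flag is set to true, so no log files would be read or shipped. A minimal sketch of the same input with the flag turned on (assuming the goal is to ship auth.log and syslog):

filebeat.inputs:
- type: log
  # The input only runs when this is set to true.
  enabled: true
  paths:
    - /var/log/auth.log
    - /var/log/syslog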

logstash.yml is:
path.data: /var/lib/logstash
path.logs: /var/log/logstash

Here are the different files: input, filter and output:
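Those pipeline files can be validated without restarting the service (a sketch, assuming the standard package install paths):

sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit

This parses the pipeline configuration under /etc/logstash and exits, printing any syntax errors instead of failing silently in the background.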

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.