The Filebeat dashboard: No results found

So if I use Filebeat modules with Elasticsearch directly, without Logstash, the dashboards will pick up my data without any custom configuration, right?

Yes, as long as the log files are at the default locations and the log format has not been changed in the services' configuration.
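
If your logs are not at the default locations, you can usually override the module's path variables when starting Filebeat. A minimal sketch, assuming the Filebeat 5.4 module CLI flags (-modules, -M and -setup); the custom path is just an illustration:

# Run Filebeat with the mysql module enabled, load dashboards/templates,
# and point the error fileset at a non-default log path.
filebeat -e -modules=mysql -setup \
  -M "mysql.error.var.paths=[/custom/path/mysql-error.log]"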

If yes, could you please recommend which Beats, Elasticsearch, and Kibana versions I can use, and whether the tar or rpm installation is recommended?

Modules are still a very new feature (still in beta) and are still being improved upon. I'd use the most recent versions of the complete stack, currently version 5.4.1.
Personally I'd prefer rpm over tar, so I can use the system's package management tools. Just for testing/playing with Beats, tar files are OK though.
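
A minimal rpm-based install might look like the sketch below. The URLs follow the usual artifacts.elastic.co naming pattern for 5.4.1; double-check them against the Elastic downloads page for your platform:

# Download and install Elasticsearch, Kibana and Filebeat 5.4.1 from rpm packages
curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.4.1.rpm
curl -L -O https://artifacts.elastic.co/downloads/kibana/kibana-5.4.1-x86_64.rpm
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.4.1-x86_64.rpm
sudo rpm -ivh elasticsearch-5.4.1.rpm kibana-5.4.1-x86_64.rpm filebeat-5.4.1-x86_64.rpm

# Enable and start the services via the system's service manager (systemd here)
sudo systemctl enable elasticsearch kibana filebeat
sudo systemctl start elasticsearch kibana filebeat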

2017-06-18T00:02:30+01:00 ERR Failed to publish events caused by: write tcp ELK_IP:49410->ELK_IP:5044: write: connection reset by peer

Logstash is closing connections it considers idle; that is, here it is Logstash closing the connection. Depending on when this happens, it can be harmless (before Filebeat sends) or problematic (if it happens while Filebeat is waiting for an ACK). Filebeat will automatically reconnect and send again. Updating Logstash to the most recent version and increasing client_inactivity_timeout in the beats input normally helps. A bug in older Logstash versions sometimes closed connections while Filebeat was waiting for an ACK. With that fixed, the error is tolerable, as Filebeat will reconnect and send new events (no data loss).
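
For example, a beats input with a longer idle timeout could look like this (300 seconds is just an illustration; pick a value larger than your Filebeat publish interval):

input {
  beats {
    port => 5044
    # keep idle client connections open longer before Logstash closes them
    client_inactivity_timeout => 300
  }
}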

Actually, I tried to read the mentioned links to create a custom Logstash filter, but I can't understand them.

The link points to a script in the Logstash development branch. You will need a development environment with Java (and maybe NodeJS) to build and run this script. Even then, the script cannot translate all filters in the pipeline configuration, and it's a non-trivial task. I'd recommend using the ingest node pipeline instead. If you really need to use Logstash, but want to use the ingest pipelines from ES as well (it's somewhat inefficient, as it duplicates some effort), here is another trick you can try. In filebeat.yml:

filebeat.prospectors:
- type: log
  fields:
    logtype: "mysqlerror"
    pipeline: "mysqlerror"
  paths:
    - /var/log/mysql/error.log*
    - /var/log/mysqld.log*
  exclude_files: [".gz$"]
- type: log
  fields:
    logtype: "mysqlslow"
    pipeline: "mysqlslow"
  paths:
    - /var/log/mysql/mysql-slow.log*
    - /var/lib/mysql/{{.builtin.hostname}}-slow.log
  exclude_files: ['.gz$']
  multiline:
    pattern: '^# User@Host: '
    negate: true
    match: after
  exclude_lines: ['^[\/\w\.]+, Version: .* started with:.*']   # Exclude the header

In Logstash:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
    pipeline => "%{[fields][pipeline]}"
  }
}

Here we set the fields.pipeline event field in Filebeat and use that field in the Logstash Elasticsearch output to select the ingest pipeline configured in Filebeat. You will have to install the pipeline yourself, using curl on the module's pipeline definition (shipped with Filebeat). Look for the ingest directories in the module files to find the JSON file defining the pipeline. With curl, the file can be installed into ES via the ingest API as is. Again, this is a not-so-nice workaround. Better to connect Filebeat directly to Elasticsearch if you want to use modules and the dashboards.
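
A sketch of installing the MySQL pipelines with curl; the paths below assume an rpm install of Filebeat (adjust for tar installs), and the pipeline ids must match the fields.pipeline values configured above:

# Install the mysql error and slowlog ingest pipelines into Elasticsearch
curl -XPUT 'http://localhost:9200/_ingest/pipeline/mysqlerror' \
  -H 'Content-Type: application/json' \
  --data-binary @/usr/share/filebeat/module/mysql/error/ingest/pipeline.json

curl -XPUT 'http://localhost:9200/_ingest/pipeline/mysqlslow' \
  -H 'Content-Type: application/json' \
  --data-binary @/usr/share/filebeat/module/mysql/slowlog/ingest/pipeline.json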