Problem starting Filebeat

Hello,
I set up an ELK server (172.16.223.19) and installed Filebeat (172.16.28.153) as the shipper for Apache logs.

When I run Filebeat, it transfers the access log normally for a few seconds, then I get this error:

2015/12/13 07:28:39.251785 publish.go:98: DBG Publish: {
"@timestamp": "2015-12-13T07:28:36.536Z",
"beat": {
"hostname": "HSNCEOWS1P",
"name": "HSNCEOWS1P"
},
"count": 1,
"fields": null,
"input_type": "log",
"message": "- 172.16.8.123 - GET - [11/Dec/2015:16:59:08 +0900] /neo/js/ewt/jquery/jquery-ui.min.1.11.4.js"GET /neo/js/ewt/jquery/jquery-ui.min.1.11.4.js HTTP/1.1" 304 -",
"offset": 170262,
"source": "/data1/apache/apache229/logs/access_log",
"type": "log"
}
2015/12/13 07:28:39.251793 preprocess.go:94: DBG Forward preprocessed events
2015/12/13 07:28:39.251824 output.go:103: DBG output worker: publish 1024 events
2015/12/13 07:30:09.256843 single.go:75: INFO Error publishing events (retrying): read tcp 172.16.28.153:55386->172.16.223.19:11010: i/o timeout
2015/12/13 07:30:09.256860 single.go:143: INFO send fail
2015/12/13 07:30:09.256866 single.go:150: INFO backoff retry: 1s
^C2015/12/13 07:34:24.302072 registrar.go:129: INFO Stopping Registrar
2015/12/13 07:34:24.302084 registrar.go:93: INFO Ending Registrar
2015/12/13 07:34:24.302174 registrar.go:157: INFO Registry file updated. 0 states written.
2015/12/13 07:34:24.302189 beat.go:181: INFO Cleaning up filebeat before shutting down.

This is my Filebeat config:
filebeat:
  # List of prospectors to fetch data.
  prospectors:
    # Each - is a prospector. Below are the prospector specific configurations
    -
      # Paths that should be crawled and fetched. Glob based paths.
      # To fetch all ".log" files from a specific level of subdirectories
      # /var/log/*/*.log can be used.
      # For each file found under this path, a harvester is started.
      # Make sure no file is defined twice as this can lead to unexpected behaviour.
      paths:
        # - /var/log/*.log
        - /data1/apache/apache229/logs/access_log
        # - c:\programdata\elasticsearch\logs\*
      ...

      # Type of the files. Based on this the way the file is read is decided.
      # The different types cannot be mixed in one prospector
      #
      # Possible options are:
      # * log: Reads every line of the log file (default)
      # * stdin: Reads the standard in
      input_type: log

  ...

output:
  ### Elasticsearch as output
  #elasticsearch:
    # Array of hosts to connect to.
    # Scheme and port can be left out and will be set to the default (http and 9200)
    # In case you specify an additional path, the scheme is required: http://localhost:9200/path
    # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
    #hosts: ["localhost:9200"]

  ...

  ### Logstash as output
  logstash:
    # The Logstash hosts
    hosts: ["172.16.223.19:11010"]

    # Number of workers per Logstash host.
    #worker: 1

    # Optional load balance the events between the Logstash hosts
    #loadbalance: true

    # Optional index name. The default index name depends on each beat.
    # For Packetbeat, the default is set to packetbeat, for Topbeat
    # to topbeat and for Filebeat to filebeat.
    index: filebeat

How can I fix it?
Please help me ASAP.

What does your Logstash configuration look like? Have you verified that you're able to connect to 172.16.223.19:11010 from the Filebeat machine?

Yes, I checked the connection between the Apache server and the Logstash server. It is fine.

This is my Logstash configuration:

input {
  tcp {
    port => "11030"
  }
}

output {
  elasticsearch {
    host => "eplog01"
    index => "sharepoint-%{+YYYY.MM.dd}"
  }
}

Use the beats input with Filebeat, not tcp.
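A sketch of what that change could look like, assuming the logstash-input-beats plugin is installed. Also note that your Filebeat config sends to port 11010 while the tcp input above listens on 11030, so whichever port you choose has to match on both sides:

```
input {
  beats {
    # Must match the port in Filebeat's output.logstash section
    # (11010 in the Filebeat config shown above).
    port => 11010
  }
}
```

The beats input understands the protocol Filebeat speaks; a plain tcp input does not, which is consistent with the publish timeouts in your log.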