Elasticsearch throws exceptions when used with logstash and kibana

Hello Experts,

I am new to elasticsearch and am running it in embedded mode with
logstash.

I recently started exploring logstash after hearing about it from peers
who attended PuppetConf 2012.

We have deployed logstash 1.1.9 in standalone mode with embedded
elasticsearch on an EC2 instance.
The elasticsearch version is 0.20.2.
We have also installed kibana as a frontend for logstash.
Since we are using the embedded version of elasticsearch, I assume there
are no compatibility issues with the logstash version.
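For what it's worth, the embedded node's version can be confirmed on the
root HTTP endpoint (assuming it is listening on the default port 9200):

curl -XGET 'http://127.0.0.1:9200/'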

{:message=>"Failed to index an event, will retry",
:exception=>org.elasticsearch.discovery.MasterNotDiscoveredException:
waited for [1m],
:event=>{"@source"=>"file://ip-10-xx-xx-xx.ec2.internal/var/log/messages",
"@tags"=>[], "@fields"=>{}, "@timestamp"=>"2013-03-01T09:08:39.998Z",
"@source_host"=>"ip-10-157-xx-xx.ec2.internal",
"@source_path"=>"/var/log/messages", "@message"=>"Mar 1 00:14:52
ip-10-122-70-221@type"=>"linuxsyslog"}, :level=>:warn}

I am also seeing the following exceptions:

{:message=>"Failed to index an event, will retry",
:exception=>org.elasticsearch.transport.RemoteTransportException: [Imperial
Hydra][inet[/10.157.38.34:9300]][index],
:event=>{"@source"=>"file://ip-10-xx-xx-xx.ec2.internal/var/log/",
"@tags"=>[], "@fields"=>{}, "@timestamp"=>"2013-02-27T13:49:13.515Z",
"@source_host"=>"ip-10-xx-xx-xx.ec2.internal", "@source_path"=>"/var/log/",

{:message=>"Failed to index an event, will retry",
:exception=>org.elasticsearch.transport.RemoteTransportException: [Ariann]
{:message=>"Failed to index an event, will retry",
:exception=>org.elasticsearch.transport.RemoteTransportException: [Gorgon]
{:message=>"Failed to index an event, will retry",
:exception=>org.elasticsearch.transport.RemoteTransportException:
[Forgotten One]

The following processes are running on the host:

root 22155 1 1 00:11 pts/0 00:01:52 /usr/bin/java -jar
/opt/logstash/bin/logstash.jar agent --config
/opt/logstash/conf/logstash.conf --log /opt/logstash/log/logstash.log
--grok-patterns-path /opt/logstash/patterns
root 22039 1 0 00:08 ? 00:00:01 ruby kibana.rb

root 28932 13009 0 02:05 pts/0 00:00:00 grep -i java
tcp        0      0 0.0.0.0:9200    0.0.0.0:*    LISTEN    22155/java
tcp        0      0 0.0.0.0:80      0.0.0.0:*    LISTEN    22039/ruby
tcp        0      0 0.0.0.0:9300    0.0.0.0:*    LISTEN    22155/java
tcp        0      0 0.0.0.0:9301    0.0.0.0:*    LISTEN    22155/java

logstash.conf

input {
  file {
    type => "linuxsyslog"

    # Wildcards work, here :)
    path => [ "/var/log/messages" ]
    sincedb_path => "/opt/logstash"
  }

  file {
    type => "merrors"
    path => [ "/var/log/m.errors*" ]
    sincedb_path => "/opt/logstash"
  }

  file {
    type => "mall"
    path => [ "/var/log/m*" ]
    exclude => [ "/var/log/m.errors*" ]
    sincedb_path => "/opt/logstash"
  }
}

filter {
  date {
    type => "syslog"
    syslog_timestamp => [ "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
  }
}

output {
  elasticsearch {
    embedded => true
    host => "10.157.3x.3x"
  }
}

If I restart logstash and use the search frontend, it works and returns
results for a while before it starts throwing the errors above.

curl -XGET 'http://127.0.0.1:9200/_cluster/health?pretty=true'

{
"cluster_name" : "elasticsearch",
"status" : "red",
"timed_out" : false,
"number_of_nodes" : 2,
"number_of_data_nodes" : 1,
"active_primary_shards" : 26,
"active_shards" : 26,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 34
}

Also, I am not sure why the output above shows "number_of_nodes" : 2,
since I am using logstash and elasticsearch in standalone mode.
The output above was taken while everything was still running smoothly.
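If it helps narrow things down, I believe the nodes that have joined the
cluster can be listed with the nodes-info API (assuming the endpoint path
is the same on 0.20.x):

curl -XGET 'http://127.0.0.1:9200/_cluster/nodes?pretty=true'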

I am running all of this on a small EC2 instance (1.7 GB RAM) with
RHEL 5.4, and the Java version is 1.6.

Please suggest how to fix these errors.

Thanks


Here are the exceptions I am seeing:

:exception=>org.elasticsearch.action.UnavailableShardsException:
:exception=>org.elasticsearch.discovery.MasterNotDiscoveredException:
:exception=>org.elasticsearch.transport.RemoteTransportException:


When elasticsearch hangs, the curl request does not return any output:

curl -XGET 'http://127.0.0.1:9200/_cluster/health?pretty=true'

and I keep seeing these exceptions:

:exception=>org.elasticsearch.discovery.MasterNotDiscoveredException:

However, connections to port 9200 are still getting through:

telnet 127.0.0.1 9200

Trying 127.0.0.1...
Connected to localhost.localdomain (127.0.0.1).
Escape character is '^]'.
^]
telnet> quit
Connection closed.
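In case it matters, the same health request can be retried with a
client-side timeout (curl's -m flag, maximum time in seconds) so that it
fails instead of hanging:

curl -m 10 -XGET 'http://127.0.0.1:9200/_cluster/health?pretty=true'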
