No index created - Question from a total noob


(Cristian) #1

Hi all!
I'm trying to create an index from forex data that I get through an API. Logstash seems to pick up the data, but no index is created. Since I've never used ES before, I don't really know what to look for to understand why the index isn't created.
I'm using ES 5.5.1.

This is my Logstash config:

input {
    http_poller {
        urls => {
            myurl => {
                url => "https://api-fxpractice.oanda.com/v1/candles?instrument=EUR_USD&granularity=D&count=1&candleFormat=midpoint"
                headers => {
                    "Authorization" => "Bearer0c6dXXXXXXXXXXX46198169XXXXXXXXXXX883160101"
                }
            }
        }
        schedule => { cron => "*/1 * * * * UTC" }
        request_timeout => 60
        codec => "json"
    }
}
output {
    stdout { codec => rubydebug }
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "Forex1"
    }
}

Watching the terminal where I run Logstash, I can see that I get a response:

[2017-08-02T10:53:16,294][INFO ][logstash.pipeline        ] Pipeline main started
[2017-08-02T10:53:16,413][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
{
     "@timestamp" => 2017-08-02T08:54:00.981Z,
    "granularity" => "D",
       "@version" => "1",
     "instrument" => "EUR_USD",
        "candles" => [
        [0] {
              "volume" => 21830,
            "closeMid" => 1.185715,
             "highMid" => 1.18688,
             "openMid" => 1.180195,
              "lowMid" => 1.17942,
                "time" => "2017-08-01T21:00:00.000000Z",
            "complete" => false
        }
    ]
}

I'm expecting an index named "Forex1" containing the fields from the JSON response, but no index is created. When I check ES, only the .kibana index exists.

 curl 'localhost:9200/_cat/indices?v'
health status index   uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana 4pKGMNBJQzWJzmw35z27tw   1   1          1            0      3.2kb          3.2kb

I'm probably doing something wrong here. Could any of you point me in the right direction? Any suggestion would be helpful.
Thanks in advance.
Br
Cristian


(Cristian) #2

If I remove the index => "Forex1" setting from the output, it creates an index with the default name "logstash-...". Why can't I give my index a name?


(Mark Walkom) #3

You can. Is there anything in the Elasticsearch logs?


(Cristian) #4

Nothing strange (I think). Some warnings about the disk watermark, but nothing else.


(Mark Walkom) #5

Posting them may be helpful 🙂


(Cristian) #6

OK. I don't know how to upload the file, so I'm pasting some parts of the log here:

[2017-08-02T10:06:16,196][INFO ][o.e.n.Node               ] [] initializing ...
[2017-08-02T10:06:16,275][INFO ][o.e.e.NodeEnvironment    ] [AdjU5O0] using [1] data paths, mounts [[/ (/dev/disk1)]], net usable_space [26gb], net total_space [464.7gb], spins? [unknown], types [hfs]
[2017-08-02T10:06:16,276][INFO ][o.e.e.NodeEnvironment    ] [AdjU5O0] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-08-02T10:06:16,277][INFO ][o.e.n.Node               ] node name [AdjU5O0] derived from node ID [AdjU5O05SzK5NwG0diXNWw]; set [node.name] to override
[2017-08-02T10:06:16,277][INFO ][o.e.n.Node               ] version[5.5.1], pid[1000], build[19c13d0/2017-07-18T20:44:24.823Z], OS[Mac OS X/10.12.6/x86_64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_111/25.111-b14]
[2017-08-02T10:06:16,277][INFO ][o.e.n.Node               ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/Users/cristian/documents/elastic.co/ES_5_5_1/elasticsearch-5.5.1]
[2017-08-02T10:06:17,181][INFO ][o.e.p.PluginsService     ] [AdjU5O0] loaded module [aggs-matrix-stats]
[2017-08-02T10:06:17,181][INFO ][o.e.p.PluginsService     ] [AdjU5O0] loaded module [ingest-common]
[2017-08-02T10:06:17,181][INFO ][o.e.p.PluginsService     ] [AdjU5O0] loaded module [lang-expression]
[2017-08-02T10:06:17,181][INFO ][o.e.p.PluginsService     ] [AdjU5O0] loaded module [lang-groovy]
[2017-08-02T10:06:17,182][INFO ][o.e.p.PluginsService     ] [AdjU5O0] loaded module [lang-mustache]
[2017-08-02T10:06:17,182][INFO ][o.e.p.PluginsService     ] [AdjU5O0] loaded module [lang-painless]
[2017-08-02T10:06:17,182][INFO ][o.e.p.PluginsService     ] [AdjU5O0] loaded module [parent-join]
[2017-08-02T10:06:17,182][INFO ][o.e.p.PluginsService     ] [AdjU5O0] loaded module [percolator]
[2017-08-02T10:06:17,182][INFO ][o.e.p.PluginsService     ] [AdjU5O0] loaded module [reindex]
[2017-08-02T10:06:17,182][INFO ][o.e.p.PluginsService     ] [AdjU5O0] loaded module [transport-netty3]
[2017-08-02T10:06:17,182][INFO ][o.e.p.PluginsService     ] [AdjU5O0] loaded module [transport-netty4]
[2017-08-02T10:06:17,183][INFO ][o.e.p.PluginsService     ] [AdjU5O0] no plugins loaded
[2017-08-02T10:06:18,408][INFO ][o.e.d.DiscoveryModule    ] [AdjU5O0] using discovery type [zen]
[2017-08-02T10:06:19,003][INFO ][o.e.n.Node               ] initialized
[2017-08-02T10:06:19,003][INFO ][o.e.n.Node               ] [AdjU5O0] starting ...
[2017-08-02T10:06:19,225][INFO ][o.e.t.TransportService   ] [AdjU5O0] publish_address {127.0.0.1:9300}, bound_addresses {[fe80::1]:9300}, {[::1]:9300}, {127.0.0.1:9300}
[2017-08-02T10:06:22,371][INFO ][o.e.c.s.ClusterService   ] [AdjU5O0] new_master {AdjU5O0}{AdjU5O05SzK5NwG0diXNWw}{sNd5y4r0TFqM4EuKOdAQbw}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-08-02T10:06:22,393][INFO ][o.e.g.GatewayService     ] [AdjU5O0] recovered [0] indices into cluster_state
[2017-08-02T10:06:22,395][INFO ][o.e.h.n.Netty4HttpServerTransport] [AdjU5O0] publish_address {127.0.0.1:9200}, bound_addresses {[fe80::1]:9200}, {[::1]:9200}, {127.0.0.1:9200}
[2017-08-02T10:06:22,395][INFO ][o.e.n.Node               ] [AdjU5O0] started
[2017-08-02T10:06:52,389][WARN ][o.e.c.r.a.DiskThresholdMonitor] [AdjU5O0] high disk watermark [90%] exceeded on [AdjU5O05SzK5NwG0diXNWw][AdjU5O0][/Users/cristian/documents/elastic.co/ES_5_5_1/elasticsearch-5.5.1/data/nodes/0] free: 26gb[5.6%], shards will be relocated away from this node

And the last lines:

[2017-08-02T11:53:03,469][INFO ][o.e.c.r.a.DiskThresholdMonitor] [AdjU5O0] rerouting shards: [high disk watermark exceeded on one or more nodes]
[2017-08-02T11:53:33,477][WARN ][o.e.c.r.a.DiskThresholdMonitor] [AdjU5O0] high disk watermark [90%] exceeded on [AdjU5O05SzK5NwG0diXNWw][AdjU5O0][/Users/cristian/documents/elastic.co/ES_5_5_1/elasticsearch-5.5.1/data/nodes/0] free: 25.9gb[5.5%], shards will be relocated away from this node
[2017-08-02T11:54:03,481][WARN ][o.e.c.r.a.DiskThresholdMonitor] [AdjU5O0] high disk watermark [90%] exceeded on [AdjU5O05SzK5NwG0diXNWw][AdjU5O0][/Users/cristian/documents/elastic.co/ES_5_5_1/elasticsearch-5.5.1/data/nodes/0] free: 25.9gb[5.5%], shards will be relocated away from this node
[2017-08-02T11:54:03,482][INFO ][o.e.c.r.a.DiskThresholdMonitor] [AdjU5O0] rerouting shards: [high disk watermark exceeded on one or more nodes]
[2017-08-02T11:54:33,485][WARN ][o.e.c.r.a.DiskThresholdMonitor] [AdjU5O0] high disk watermark [90%] exceeded on [AdjU5O05SzK5NwG0diXNWw][AdjU5O0][/Users/cristian/documents/elastic.co/ES_5_5_1/elasticsearch-5.5.1/data/nodes/0] free: 25.9gb[5.5%], shards will be relocated away from this node
[2017-08-02T11:55:03,489][WARN ][o.e.c.r.a.DiskThresholdMonitor] [AdjU5O0] high disk watermark [90%] exceeded on [AdjU5O05SzK5NwG0diXNWw][AdjU5O0][/Users/cristian/documents/elastic.co/ES_5_5_1/elasticsearch-5.5.1/data/nodes/0] free: 25.9gb[5.5%], shards will be relocated away from this node
[2017-08-02T11:55:03,489][INFO ][o.e.c.r.a.DiskThresholdMonitor] [AdjU5O0] rerouting shards: [high disk watermark exceeded on one or more nodes]
[2017-08-02T11:55:33,492][WARN ][o.e.c.r.a.DiskThresholdMonitor] [AdjU5O0] high disk watermark [90%] exceeded on [AdjU5O05SzK5NwG0diXNWw][AdjU5O0][/Users/cristian/documents/elastic.co/ES_5_5_1/elasticsearch-5.5.1/data/nodes/0] free: 25.9gb[5.5%], shards will be relocated away from this node
[2017-08-02T11:56:03,498][WARN ][o.e.c.r.a.DiskThresholdMonitor] [AdjU5O0] high disk watermark [90%] exceeded on [AdjU5O05SzK5NwG0diXNWw][AdjU5O0][/Users/cristian/documents/elastic.co/ES_5_5_1/elasticsearch-5.5.1/data/nodes/0] free: 25.9gb[5.5%], shards will be relocated away from this node
[2017-08-02T11:56:03,498][INFO ][o.e.c.r.a.DiskThresholdMonitor] [AdjU5O0] rerouting shards: [high disk watermark exceeded on one or more nodes]

(Mark Walkom) #7

There are no uploads; just post the text as you have, or use gist/pastebin/etc. and link 🙂

Other than you needing more disk space, that shouldn't stop index creation. Is there nothing else in the Logstash log?
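(As an aside: if those disk watermark warnings get noisy while you free up space, the thresholds can be raised temporarily through the cluster settings API. A minimal sketch, assuming the default localhost:9200 endpoint; the 95%/97% values are illustrative, not a recommendation:)

curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "95%",
    "cluster.routing.allocation.disk.watermark.high": "97%"
  }
}'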


(Cristian) #8

This is the logstash log:

[2017-08-02T11:39:34,079][INFO ][logstash.pipeline        ] Pipeline main started
[2017-08-02T11:39:34,154][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2017-08-02T11:40:12,196][WARN ][logstash.runner          ] SIGINT received. Shutting down the agent.
[2017-08-02T11:40:12,201][WARN ][logstash.agent           ] stopping pipeline {:id=>"main"}
[2017-08-02T11:48:51,121][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2017-08-02T11:48:51,125][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2017-08-02T11:48:51,266][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<Java::JavaNet::URI:0x3802db36>}
[2017-08-02T11:48:51,267][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-08-02T11:48:51,329][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-08-02T11:48:51,341][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<Java::JavaNet::URI:0x185a8610>]}
[2017-08-02T11:48:51,343][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2017-08-02T11:48:51,347][INFO ][logstash.inputs.http_poller] Registering http_poller Input {:type=>nil, :urls=>{"myurl"=>{"url"=>"https://api-fxpractice.oanda.com/v1/candles?instrument=EUR_USD&granularity=D&count=1&candleFormat=midpoint", "headers"=>{"Authorization"=>"Bearer0c6d181xxxxxxxxxxxxxxxxec2f0064xxxxxxxx60101"}}}, :interval=>nil, :schedule=>{"cron"=>"*/1 * * * * UTC"}, :timeout=>nil}
[2017-08-02T11:48:51,351][INFO ][logstash.pipeline        ] Pipeline main started
[2017-08-02T11:48:51,461][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

(Magnus Bäck) #9

Not sure you're even getting that far, but you can't have uppercase letters in the index name. I suggest you use a stdout { codec => rubydebug } output to debug your http_poller input until you know that part is working.
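A minimal sketch of the output section with the index name lowercased (the name "forex1" here just mirrors the one used above):

output {
    stdout { codec => rubydebug }
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "forex1"
    }
}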


(Cristian) #10

Well, the index is created as expected when I use the default name. Now that I tried again (index name in lowercase), I got another strange error that I have to fix. I'll update the thread when I've solved it.

ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
Sending Logstash's logs to /Users/cristian/.Trash/logstash-5.5.1/logs which is now configured via log4j2.properties
[2017-08-02T16:01:23,451][FATAL][logstash.runner          ] Logstash could not be started because there is already another instance using the configured data directory.  If you wish to run multiple instances, you must change the "path.data" setting.
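(For reference: this error just means another Logstash instance is already holding the same data directory. Either stop the other instance, or give the second one its own path.data. A minimal sketch; the config file name and the /tmp path are illustrative:)

bin/logstash -f forex.conf --path.data /tmp/logstash-second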

(Cristian) #11

OK, now I have created the index. It seems it was a problem with the uppercase letters in the name I gave it. Now I have another problem: the index was not created exactly as expected. The data I want as "rows" are the forex values within "candles". How can I change this? Now I get this:
{
    "_index": "forex1",
    "_type": "logs",
    "_id": "AV2jdBpw4NJPBVY_fU_M",
    "_version": 1,
    "_score": null,
    "_source": {
        "@timestamp": "2017-08-02T14:58:01.346Z",
        "granularity": "D",
        "@version": "1",
        "instrument": "EUR_USD",
        "candles": [
            {
                "volume": 19976,
                "closeMid": 1.13581,
                "highMid": 1.13805,
                "openMid": 1.13552,
                "lowMid": 1.13388,
                "time": "2016-06-06T21:00:00.000000Z",
                "complete": true
            },
            {
                "volume": 19186,
                "closeMid": 1.1395,
                "highMid": 1.14106,
                "openMid": 1.13584,
                "lowMid": 1.13545,
                "time": "2016-06-07T21:00:00.000000Z",
                "complete": true
            },


(Cristian) #12

Easy... I made a filter with a split (see the sketch below). Case may be closed. Thanks for all the help.
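A minimal sketch of such a filter (the split filter emits one event per element of the "candles" array, so each candle becomes its own document):

filter {
    split {
        field => "candles"
    }
}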


(system) #13

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.