ElasticSearch and save the information into a file! Urgent... Thank you!

Hello everyone!

I am working on my final project and I have to use ElasticSearch, but I am new
to it and don't know it very well, and I am running out of time...

I have installed the Twitter river for Elasticsearch and I collect information
with it based on some search terms. What I want to know is how I can dump the
collected information to a file, for example a .txt, to process it later.

What I have done is collect all the information from the river (with the code
below, put into a .sh script) and save it to a file by redirecting the output
with >> file.txt; a rough sketch of the full script follows the query below.

The code that I use to get the information is this:

curl -XPOST 'http://localhost:9200/twitter/_search?pretty' -d '
{
  "query": {
    "query_string": {
      "query": "*"
    }
  }
}'

(the "*" query matches all the tweets; the host, port and index name above are
just placeholders, adjust them to your setup)
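
Roughly, the .sh wrapper looks like this (just a sketch; the host, port and
index name are placeholders, and file.txt is the output file I mentioned):

#!/bin/sh
# run the match-all query and append the raw JSON response to file.txt
# for later processing (adjust host, port and index name to your setup)
curl -s -XPOST 'http://localhost:9200/twitter/_search?pretty' -d '
{
  "query": {
    "query_string": {
      "query": "*"
    }
  }
}' >> file.txt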

The output is the set of tweets with a lot of information such as post id,
name, text, language... all mixed together.

I wonder if you could tell me an easier option, or one whose output is clearer.

Regards, and thank you very much!

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/a13a2205-27e5-442b-b200-022b780fe74c%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

You should use the scan and scroll API, because by default the query returns only the 10 most relevant docs, not the whole result set.
It won't format your result, though. You need to parse the JSON on your client and render it as you need.
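
Roughly, something like this (just a sketch, untested; the host, the twitter index name, the scroll window and the jq extraction of the text field are assumptions you will need to adapt):

# open a scan+scroll search (size is per shard with search_type=scan)
SCROLL_ID=$(curl -s -XGET 'http://localhost:9200/twitter/_search?search_type=scan&scroll=1m' -d '
{ "query": { "match_all": {} }, "size": 100 }' | jq -r '._scroll_id')

# keep fetching batches until one comes back empty, appending only the
# tweet text of every hit to tweets.txt
while true; do
  RESPONSE=$(curl -s -XGET 'http://localhost:9200/_search/scroll?scroll=1m' -d "$SCROLL_ID")
  HITS=$(echo "$RESPONSE" | jq '.hits.hits | length')
  [ "$HITS" -eq 0 ] && break
  echo "$RESPONSE" | jq -r '.hits.hits[]._source.text' >> tweets.txt
  SCROLL_ID=$(echo "$RESPONSE" | jq -r '._scroll_id')
done

jq is just one way of doing the client-side JSON parsing; any JSON library in your language of choice will do.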

--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs


Thank you David,

I will try it now. I'm worried because I don't know this program very well,
but I will try to use that API and I will keep you informed.

Thank you!!! :-)

