File system _river in scrutmydocs won't read any files

Hi

I was using a couchdb _river but switched to using scrutmydocs using a file
system river.

Configured the river as in the attached pic, but it just won't find
anything.
If I manually add docs using the app button, it works, but it won't index
anything automatically from the river.

I reviewed the config three times; everything looks all right.
Is there anything to look at, or another way to check if it is working?

Thanks

--

Fátima Castiglione Maldonado
castiglionemaldonado@gmail.com


--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.


It should work.
Does your elasticsearch account have read access to the dir?

How do you know that ES did not index anything? Does searching for nothing not show documents from that dir?

Maybe you could modify the log level and set the org.scrutmydocs package to DEBUG.
Did you only download scrutmydocs.war 0.2.0 and put it in tomcat?

What do you get when running:
curl -XGET http://localhost:8080/scrutmydocs/api/1/settings/rivers

David

David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr | @scrutmydocs

On June 11, 2013 at 16:36, Fatima Castiglione Maldonado castiglionemaldonado@gmail.com wrote:


Hi David,
thanks for your kind response, and for your clear and focused questions.
Answers below.

  1. Does your elasticsearch account have read access to the dir?
     Yes, now it looks as if it has access...

a. running processes (I do not know why there are so many)
[image: Inline image 1]

b. folder permissions
[image: Inline image 2]
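The permission check in the screenshot can also be done from a shell. Here is a self-contained sketch of what the river process needs: read and execute on the directory, read on each file. It uses a temp dir for demonstration; in this thread the real path was /usr/local/Tiger/text_files, which you would check as the elasticsearch user (e.g. with sudo -u elasticsearch).

```shell
# Demonstrate the permission bits the fs river needs, on a throwaway dir.
dir=$(mktemp -d)
touch "$dir/sample.txt"
chmod 755 "$dir"            # rwxr-xr-x: others can list and enter the dir
chmod 644 "$dir/sample.txt" # rw-r--r--: others can read the file
if [ -r "$dir" ] && [ -x "$dir" ] && [ -r "$dir/sample.txt" ]; then
  result="readable"
else
  result="not readable"
fi
echo "$result"
rm -rf "$dir"
```

Note that read permission on the directory alone is not enough: without the execute bit the river can list names but cannot open the files inside.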

  2. How do you know that ES did not index anything?

a. I created this fs river
[image: Inline image 1]

b. I put this .txt file in the river path, to test if elasticsearch would
see it
[image: Inline image 2]

c. it does not
[image: Inline image 3]

  3. Does searching for nothing not show documents from that dir?

If I search for nothing, I only find .htm and .html docs, the ones that I
uploaded manually
(yes, I did check all 16 pages of results).
[image: Inline image 1]

  4. Maybe you could modify the log level and set the org.scrutmydocs
     package to DEBUG.

Yes, I definitely should, but I am new to elasticsearch (about 2 weeks) and
have never seriously used log4j. While this week I have started to figure
out a lot of elasticsearch things, I am still not able to find the logs.

Would you be so kind as to provide me with an example log config file?

  5. Did you only download scrutmydocs.war 0.2.0 and put it in tomcat?

I just downloaded scrutmydocs.war 0.2.0 and put it in GlassFish.

[image: Inline image 4]

[image: Inline image 5]

[image: Inline image 6]

  6. What do you get when running:
     curl -XGET http://localhost:8080/scrutmydocs/api/1/settings/rivers

Well...

a. elasticsearch says it is running ok, or so it looks
[image: Inline image 7]

b. after a refresh, I see no indexes in _head
[image: Inline image 8]

c. when I run http://localhost:8080/scrutmydocs-0.2.0/api/1/settings/rivers
the river is there, but it looks strange to me that the river type is dummy
(shouldn't it be fs?)
[image: Inline image 9]

Thanks in advance,
Fatima


Hey

I can't see images 1 and 2.
BTW, you should only copy and paste text when possible.

c. when I run http://localhost:8080/scrutmydocs-0.2.0/api/1/settings/rivers
the river is there, but it looks strange to me that the river type is dummy
(shouldn't it be fs?)
This is really weird. I can't understand how it created a dummy river.

As far as I remember the source code, that can't happen. Do you see anything interesting in your GlassFish logs?

--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr | @scrutmydocs

Oh, excuse me, I did not really take into account that pasting images could
be a problem.

  1. missing images

Image 1 is a screenshot of my running processes seen using htop; there are
a dozen or so processes for user "elasticsearch".
Image 2 is a screenshot of the folder permissions; the folder belongs to
root and "everybody else" has read permission, so I understand it should be
readable by the elasticsearch user processes.

  2. GlassFish logs

There is not much in the jvm log (30 KB), but way too many things happening
in the server log (1.2 MB).
Should I attach them?

  3. elasticsearch logs
     I do not understand where to find them.
     Do you have an example logging.yml file that you could spare?
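For reference, a minimal sketch in the format that elasticsearch 0.90-era releases shipped in config/logging.yml; the org.scrutmydocs entry is the DEBUG setting David suggested, and the appender details are an assumption about a default install:

```yaml
# config/logging.yml — minimal sketch, not a verified copy of any release
rootLogger: INFO, console

logger:
  # raise scrutmydocs logging to DEBUG, as suggested above
  org.scrutmydocs: DEBUG

appender:
  console:
    type: console
    layout:
      type: consolePattern
      conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"
```

Note that scrutmydocs deployed as a .war also logs through the servlet container, so the GlassFish server log may be where these DEBUG lines actually land.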

Thanks a lot

2013/6/14 David Pilato david@pilato.fr


If I restart the app, the example fs rivers are created again...

Interestingly enough, all of them get created as "dummy".

{"object":[{"name":"myfirstriver","id":"myfirstriver","type":"dummy","start":false,"indexname":"docs","typename":"doc"},{"name":"TigerRiver","id":"TigerRiver","type":"dummy","start":true,"indexname":"TigerDocs","typename":"TigerDoc"},{"name":"mysecondriver","id":"mysecondriver","type":"dummy","start":false,"indexname":"docs","typename":"doc"}],"ok":true,"errors":null}
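A quick way to spot the anomaly programmatically: parse the response above and flag any river whose type is not fs. This sketch uses only the JSON pasted in this message:

```python
import json

# The exact rivers response pasted above; a file system river should report
# type "fs", so "dummy" is the anomaly being discussed in this thread.
response = (
    '{"object":[{"name":"myfirstriver","id":"myfirstriver","type":"dummy",'
    '"start":false,"indexname":"docs","typename":"doc"},'
    '{"name":"TigerRiver","id":"TigerRiver","type":"dummy","start":true,'
    '"indexname":"TigerDocs","typename":"TigerDoc"},'
    '{"name":"mysecondriver","id":"mysecondriver","type":"dummy",'
    '"start":false,"indexname":"docs","typename":"doc"}],"ok":true,"errors":null}'
)

rivers = json.loads(response)["object"]
suspicious = [r["name"] for r in rivers if r["type"] != "fs"]
print(suspicious)  # all three rivers report type "dummy"
```

Running this against a healthy install should print an empty list; here it flags all three rivers.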

I was thinking that it was a problem with Java 7, because it does not
include the "jar" command (I guess that was an install error that I did
not notice).

But I removed Java 7 and, working with Java 6, everything is quite the
same... dummy.

Then I found that the individual docs had no permissions for anyone
else... so I gave them the permission.

But nothing. It will not index my docs.

Then I checked the GlassFish logs once again, and found an error
telling me that the name of the river should be all lowercase.

So I deleted the river and created it again...

Now the log says it is ok (see below) but it still gets created as dummy.
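The lowercase constraint from that error can be captured as a tiny check (inferred from the error message in this thread, not from any scrutmydocs documentation):

```python
# River names apparently must be all lowercase (inferred from the
# GlassFish error mentioned above).
def is_valid_river_name(name: str) -> bool:
    return name == name.lower()

print(is_valid_river_name("TigerRiver"))  # False: this triggered the error
print(is_valid_river_name("tiger"))       # True: the renamed river
```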

[#|2013-06-15T06:43:24.435-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:24,435
DEBUG [RiverService]
createIndexIfNeeded(FSRiver=[id="tiger",start=true,excludes="*.mp4",updateRate=1,typename="tigerdoc",name="tiger",analyzer="standard",type="fs",includes="*.txt, *.html, *.pdf, *.doc",url="/usr/local/Tiger/text_files",indexname="tigerdocs"])
|#]

[#|2013-06-15T06:43:24.437-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:24,435
DEBUG [ESHelper] createIndexIfNeeded(tigerdocs, tigerdoc, standard)
|#]

[#|2013-06-15T06:43:24.437-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:24,437
DEBUG [ESHelper] Index tigerdocs doesn't exist. Creating it.
|#]

[#|2013-06-15T06:43:24.688-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:24,688
DEBUG [ESHelper] Mapping [tigerdocs]/[folder] doesn't exist. Creating
it.
|#]

[#|2013-06-15T06:43:24.749-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:24,748
DEBUG [ESHelper] Mapping definition for [tigerdocs]/[folder]
succesfully created.
|#]

[#|2013-06-15T06:43:24.751-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:24,751
DEBUG [ESHelper] Mapping [tigerdocs]/[tigerdoc] doesn't exist.
Creating it.
|#]

[#|2013-06-15T06:43:24.765-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:24,765
DEBUG [ESHelper] Mapping definition for [tigerdocs]/[tigerdoc]
succesfully created.
|#]

[#|2013-06-15T06:43:24.765-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:24,765
DEBUG [ESHelper] /createIndexIfNeeded()
|#]

[#|2013-06-15T06:43:24.767-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:24,767
DEBUG [RiverService]
/createIndexIfNeeded(FSRiver=[id="tiger",start=true,excludes="*.mp4",updateRate=1,typename="tigerdoc",name="tiger",analyzer="standard",type="fs",includes="*.txt, *.html, *.pdf, *.doc",url="/usr/local/Tiger/text_files",indexname="tigerdocs"])
|#]

[#|2013-06-15T06:43:24.768-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:24,768
DEBUG [RiverService]
checkState(FSRiver=[id="tiger",start=true,excludes="*.mp4",updateRate=1,typename="tigerdoc",name="tiger",analyzer="standard",type="fs",includes="*.txt, *.html, *.pdf, *.doc",url="/usr/local/Tiger/text_files",indexname="tigerdocs"])
|#]

[#|2013-06-15T06:43:25.771-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:25,771
DEBUG [RiverService]
checkState(FSRiver=[id="tiger",start=true,excludes="*.mp4",updateRate=1,typename="tigerdoc",name="tiger",analyzer="standard",type="fs",includes="*.txt, *.html, *.pdf, *.doc",url="/usr/local/Tiger/text_files",indexname="tigerdocs"])
|#]

[#|2013-06-15T06:43:26.774-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:26,774
DEBUG [RiverService]
/add(FSRiver=[id="tiger",start=true,excludes="*.mp4",updateRate=1,typename="tigerdoc",name="tiger",analyzer="standard",type="fs",includes="*.txt, *.html, *.pdf, *.doc",url="/usr/local/Tiger/text_files",indexname="tigerdocs"])
|#]

[#|2013-06-15T06:43:26.776-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:26,776
DEBUG [AdminFSRiverService]
/start(FSRiver=[id="tiger",start=true,excludes="*.mp4",updateRate=1,typename="tigerdoc",name="tiger",analyzer="standard",type="fs",includes="*.txt, *.html, *.pdf, *.doc",url="/usr/local/Tiger/text_files",indexname="tigerdocs"])
|#]

[#|2013-06-15T06:43:35.579-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:35,579
DEBUG [SearchService] google('globish', 0, 10)
|#]

[#|2013-06-15T06:43:35.656-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:35,656
DEBUG [SearchService] /google(globish) : 0
|#]

[#|2013-06-15T06:43:37.296-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=24;_ThreadName=Thread-1;|06:43:37,296
DEBUG [SearchService] google('globish', 0, 10)
|#]

[#|2013-06-15T06:43:37.338-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=24;_ThreadName=Thread-1;|06:43:37,338
DEBUG [SearchService] /google(globish) : 0
|#]

[#|2013-06-15T06:43:37.678-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:37,678
DEBUG [SearchService] google('globish', 0, 10)
|#]

[#|2013-06-15T06:43:37.694-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:37,694
DEBUG [SearchService] /google(globish) : 0
|#]

[#|2013-06-15T06:43:37.931-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=24;_ThreadName=Thread-1;|06:43:37,931
DEBUG [SearchService] google('globish', 0, 10)
|#]

[#|2013-06-15T06:43:37.960-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=24;_ThreadName=Thread-1;|06:43:37,960
DEBUG [SearchService] /google(globish) : 0
|#]

[#|2013-06-15T06:43:38.087-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:38,087
DEBUG [SearchService] google('globish', 0, 10)
|#]

[#|2013-06-15T06:43:38.099-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:38,099
DEBUG [SearchService] /google(globish) : 0
|#]

[#|2013-06-15T06:43:38.205-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=24;_ThreadName=Thread-1;|06:43:38,205
DEBUG [SearchService] google('globish', 0, 10)
|#]

[#|2013-06-15T06:43:38.218-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=24;_ThreadName=Thread-1;|06:43:38,218
DEBUG [SearchService] /google(globish) : 0
|#]

[#|2013-06-15T06:43:48.016-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:48,016
DEBUG [AdminRiverService] get()
|#]

[#|2013-06-15T06:43:48.019-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:48,019
DEBUG [RiverService]
checkState(org.scrutmydocs.webapp.api.settings.rivers.basic.data.BasicRiver@7c6589ea)
|#]

[#|2013-06-15T06:43:48.021-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:48,021
DEBUG [AdminRiverService]
/get()=[org.scrutmydocs.webapp.api.settings.rivers.basic.data.BasicRiver@7c6589ea]
|#]

[#|2013-06-15T06:43:48.021-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:48,021
DEBUG [RiverService]
checkState(org.scrutmydocs.webapp.api.settings.rivers.basic.data.BasicRiver@7c6589ea)
|#]

I do not really know what else to check.

Thanks in advance.

2013/6/14 Fatima Castiglione Maldonado castiglionemaldonado@gmail.com


  1. I changed the folder name from:

/usr/local/Tiger/text_files

to:

/usr/local/tiger/textfiles

(just in case the uppercase was a problem there too)

but when I run
http://localhost:8080/scrutmydocs-0.2.0/api/1/settings/rivers

I again got:

{"object":[{"name":"tiger","id":"tiger","type":"dummy","start":true,"indexname":"tigerdocs","typename":"tigerdoc"}],"ok":true,"errors":null}

  2. I checked the full GlassFish log, searching for exceptions

This is the only thing I found, from a few days ago. I guess it was raised
while trying to create an index on a folder that does not exist in my system.

[#|2013-06-10T07:02:35.556-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=34;_ThreadName=Thread-1;|07:02:35,556
WARN [RiverService]
checkState(FSRiver=[id="myfirstriver",start=false,updateRate=30,typename="doc",name="myfirstriver",analyzer="standard",type="fs",url="/tmp_es",indexname="docs"])
: Exception raised : class org.elasticsearch.indices.IndexMissingException
|#]

[#|2013-06-10T07:02:35.558-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=34;_ThreadName=Thread-1;|07:02:35,557
DEBUG [RiverService] - Exception stacktrace :
org.elasticsearch.indices.IndexMissingException: [_river] missing
    at org.elasticsearch.cluster.metadata.MetaData.concreteIndex(MetaData.java:538)
    at org.elasticsearch.action.get.TransportGetAction.resolveRequest(TransportGetAction.java:90)
    at org.elasticsearch.action.get.TransportGetAction.resolveRequest(TransportGetAction.java:42)
    at org.elasticsearch.action.support.single.shard.TransportShardSingleOperationAction$AsyncSingleAction.<init>(TransportShardSingleOperationAction.java:115)
    at org.elasticsearch.action.support.single.shard.TransportShardSingleOperationAction$AsyncSingleAction.<init>(TransportShardSingleOperationAction.java:95)
    at org.elasticsearch.action.support.single.shard.TransportShardSingleOperationAction.doExecute(TransportShardSingleOperationAction.java:72)
    at org.elasticsearch.action.support.single.shard.TransportShardSingleOperationAction.doExecute(TransportShardSingleOperationAction.java:47)
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:61)
    at org.elasticsearch.client.node.NodeClient.execute(NodeClient.java:90)
    at org.elasticsearch.client.support.AbstractClient.get(AbstractClient.java:175)
    at org.elasticsearch.action.get.GetRequestBuilder.doExecute(GetRequestBuilder.java:135)
    at org.elasticsearch.action.support.BaseRequestBuilder.execute(BaseRequestBuilder.java:53)
    at org.elasticsearch.action.support.BaseRequestBuilder.execute(BaseRequestBuilder.java:47)
    at org.scrutmydocs.webapp.service.settings.rivers.RiverService.checkState(RiverService.java:60)
    at org.scrutmydocs.webapp.api.settings.rivers.fs.facade.FSRiversApi.get(FSRiversApi.java:103)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.springframework.web.method.support.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:213)
    at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:126)
    at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:96)
    at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:617)
    at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:578)
    at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:80)
    at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:923)
    at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:852)
    at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:882)
    at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:778)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:734)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
    at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1523)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:279)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:188)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:641)
    at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:97)
    at com.sun.enterprise.web.PESessionLockingStandardPipeline.invoke(PESessionLockingStandardPipeline.java:85)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:185)
    at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:325)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:226)
    at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:165)
    at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791)
    at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693)
    at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954)
    at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170)
    at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135)
    at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102)
    at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88)
    at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76)
    at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53)
    at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57)
    at com.sun.grizzly.ContextTask.run(ContextTask.java:69)
    at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330)
    at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309)
    at java.lang.Thread.run(Thread.java:679)

(pasted that here just in case)

2013/6/15 Fatima Castiglione Maldonado castiglionemaldonado@gmail.com

If I restart the app, the example fs rivers are created again...

Interestingly enough, all of them get created as "dummy".

{"object":[{"name":"myfirstriver","id":"myfirstriver","type":"dummy","start":false,"indexname":"docs","typename":"doc"},{"name":"TigerRiver","id":"TigerRiver","type":"dummy","start":true,"indexname":"TigerDocs","typename":"TigerDoc"},{"name":"mysecondriver","id":"mysecondriver","type":"dummy","start":false,"indexname":"docs","typename":"doc"}],"ok":true,"errors":null}

I was thinking that it was a problem with Java 7, because it did not include the "jar" command (I guess it was an install error that I did not notice).

But I removed Java 7 and, working with Java 6, everything is quite the same... dummy.

Then I found that the individual docs had no permissions for anyone else... so I granted read permission.

But nothing. It still will not index my docs.

Then I checked the GlassFish logs once again and found an error telling me that the name of the river should be all lowercase.

So I deleted the river and created it again...

Now the log says it is OK (see below), but the river still gets created as dummy.

[#|2013-06-15T06:43:24.435-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:24,435 DEBUG [RiverService] createIndexIfNeeded(FSRiver=[id="tiger",start=true,excludes=".mp4",updateRate=1,typename="tigerdoc",name="tiger",analyzer="standard",type="fs",includes=".txt, *.html, *.pdf, *.doc",url="/usr/local/Tiger/text_files",indexname="tigerdocs"])
|#]
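One thing worth double-checking in that FSRiver line: the includes value is logged as ".txt, *.html, *.pdf, *.doc" and the excludes as ".mp4". If the leading "*" really is missing from those first patterns (and was not just eaten by the mailing-list formatting), a pattern like ".txt" matches nothing useful. A quick sanity check with Python's fnmatch, which implements the same glob-style semantics most include/exclude filters use (whether river-fs matches exactly this way is an assumption):

```python
from fnmatch import fnmatch

filename = "chapter1.txt"

# ".txt" only matches a file literally named ".txt";
# "*.txt" matches any file ending in .txt.
print(fnmatch(filename, ".txt"))   # False
print(fnmatch(filename, "*.txt"))  # True
```

So if the patterns were really saved without the asterisk, none of the .txt files would ever be picked up.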

[#|2013-06-15T06:43:24.437-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:24,435 DEBUG [ESHelper] createIndexIfNeeded(tigerdocs, tigerdoc, standard)
|#]

[#|2013-06-15T06:43:24.437-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:24,437 DEBUG [ESHelper] Index tigerdocs doesn't exist. Creating it.
|#]

[#|2013-06-15T06:43:24.688-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:24,688 DEBUG [ESHelper] Mapping [tigerdocs]/[folder] doesn't exist. Creating it.
|#]

[#|2013-06-15T06:43:24.749-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:24,748 DEBUG [ESHelper] Mapping definition for [tigerdocs]/[folder] succesfully created.
|#]

[#|2013-06-15T06:43:24.751-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:24,751 DEBUG [ESHelper] Mapping [tigerdocs]/[tigerdoc] doesn't exist. Creating it.
|#]

[#|2013-06-15T06:43:24.765-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:24,765 DEBUG [ESHelper] Mapping definition for [tigerdocs]/[tigerdoc] succesfully created.
|#]

[#|2013-06-15T06:43:24.765-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:24,765 DEBUG [ESHelper] /createIndexIfNeeded()
|#]

[#|2013-06-15T06:43:24.767-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:24,767 DEBUG [RiverService] /createIndexIfNeeded(FSRiver=[id="tiger",start=true,excludes=".mp4",updateRate=1,typename="tigerdoc",name="tiger",analyzer="standard",type="fs",includes=".txt, *.html, *.pdf, *.doc",url="/usr/local/Tiger/text_files",indexname="tigerdocs"])
|#]

[#|2013-06-15T06:43:24.768-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:24,768 DEBUG [RiverService] checkState(FSRiver=[id="tiger",start=true,excludes=".mp4",updateRate=1,typename="tigerdoc",name="tiger",analyzer="standard",type="fs",includes=".txt, *.html, *.pdf, *.doc",url="/usr/local/Tiger/text_files",indexname="tigerdocs"])
|#]

[#|2013-06-15T06:43:25.771-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:25,771 DEBUG [RiverService] checkState(FSRiver=[id="tiger",start=true,excludes=".mp4",updateRate=1,typename="tigerdoc",name="tiger",analyzer="standard",type="fs",includes=".txt, *.html, *.pdf, *.doc",url="/usr/local/Tiger/text_files",indexname="tigerdocs"])
|#]

[#|2013-06-15T06:43:26.774-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:26,774 DEBUG [RiverService] /add(FSRiver=[id="tiger",start=true,excludes=".mp4",updateRate=1,typename="tigerdoc",name="tiger",analyzer="standard",type="fs",includes=".txt, *.html, *.pdf, *.doc",url="/usr/local/Tiger/text_files",indexname="tigerdocs"])
|#]

[#|2013-06-15T06:43:26.776-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:26,776 DEBUG [AdminFSRiverService] /start(FSRiver=[id="tiger",start=true,excludes=".mp4",updateRate=1,typename="tigerdoc",name="tiger",analyzer="standard",type="fs",includes=".txt, *.html, *.pdf, *.doc",url="/usr/local/Tiger/text_files",indexname="tigerdocs"])
|#]

[#|2013-06-15T06:43:35.579-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:35,579 DEBUG [SearchService] google('globish', 0, 10)
|#]

[#|2013-06-15T06:43:35.656-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:35,656 DEBUG [SearchService] /google(globish) : 0
|#]

[#|2013-06-15T06:43:37.296-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=24;_ThreadName=Thread-1;|06:43:37,296 DEBUG [SearchService] google('globish', 0, 10)
|#]

[#|2013-06-15T06:43:37.338-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=24;_ThreadName=Thread-1;|06:43:37,338 DEBUG [SearchService] /google(globish) : 0
|#]

[#|2013-06-15T06:43:37.678-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:37,678 DEBUG [SearchService] google('globish', 0, 10)
|#]

[#|2013-06-15T06:43:37.694-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:37,694 DEBUG [SearchService] /google(globish) : 0
|#]

[#|2013-06-15T06:43:37.931-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=24;_ThreadName=Thread-1;|06:43:37,931 DEBUG [SearchService] google('globish', 0, 10)
|#]

[#|2013-06-15T06:43:37.960-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=24;_ThreadName=Thread-1;|06:43:37,960 DEBUG [SearchService] /google(globish) : 0
|#]

[#|2013-06-15T06:43:38.087-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:38,087 DEBUG [SearchService] google('globish', 0, 10)
|#]

[#|2013-06-15T06:43:38.099-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:38,099 DEBUG [SearchService] /google(globish) : 0
|#]

[#|2013-06-15T06:43:38.205-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=24;_ThreadName=Thread-1;|06:43:38,205 DEBUG [SearchService] google('globish', 0, 10)
|#]

[#|2013-06-15T06:43:38.218-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=24;_ThreadName=Thread-1;|06:43:38,218 DEBUG [SearchService] /google(globish) : 0
|#]

[#|2013-06-15T06:43:48.016-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:48,016 DEBUG [AdminRiverService] get()
|#]

[#|2013-06-15T06:43:48.019-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:48,019 DEBUG [RiverService] checkState(org.scrutmydocs.webapp.api.settings.rivers.basic.data.BasicRiver@7c6589ea)
|#]

[#|2013-06-15T06:43:48.021-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:48,021 DEBUG [AdminRiverService] /get()=[org.scrutmydocs.webapp.api.settings.rivers.basic.data.BasicRiver@7c6589ea]
|#]

[#|2013-06-15T06:43:48.021-0300|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=28;_ThreadName=Thread-1;|06:43:48,021 DEBUG [RiverService] checkState(org.scrutmydocs.webapp.api.settings.rivers.basic.data.BasicRiver@7c6589ea)
|#]

I do not really know what else to check.

Thanks in advance.

2013/6/14 Fatima Castiglione Maldonado castiglionemaldonado@gmail.com

Oh, excuse me, I did not really take into account that pasting images could
be a problem.

  1. missing images

Image 1 is a print of my running processes seen using htop; there are a
dozen or so processes for user "elasticsearch".
Image 2 is a print of the folder permissions: the folder belongs to root and
"everybody else" has read permission, so I understand it should be OK for
the elasticsearch user processes to read it.

  2. glassfish logs

There is not much in the JVM log (30 KB), but way too many things are
happening in the server log (1.2 MB).
Should I attach them?

  3. elasticsearch logs
    I do not understand where to find them.
    Do you have a logging.yml example file that you could spare?
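For reference, a logging.yml in the spirit of the 0.90.x default looks roughly like this (a sketch from memory, not an authoritative copy; check the one shipped in the Elasticsearch tarball):

```yaml
rootLogger: INFO, console, file

appender:
  console:
    type: console
    layout:
      type: consolePattern
      conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

  file:
    type: dailyRollingFile
    file: ${path.logs}/${cluster.name}.log
    datePattern: "'.'yyyy-MM-dd"
    layout:
      type: pattern
      conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"
```

Note that the file appender writes to ${path.logs}/${cluster.name}.log, so with cluster.name set to "scrutmydocs" the file to look for is scrutmydocs.log, not elasticsearch.log.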

Thanks a lot

2013/6/14 David Pilato david@pilato.fr

Hey

I can't see images 1 and 2.
BTW, you should only copy and paste text when possible.

c. when I run
http://localhost:8080/scrutmydocs-0.2.0/api/1/settings/rivers
the river is there, but it looks strange to me that the river type is
dummy
(shouldn't it be fs?)

This is really weird. I can't understand how it created a dummy river.

As far as I remember the source code, that can't happen. Do you see
anything interesting in your GlassFish logs?

--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet https://twitter.com/dadoonet | @elasticsearchfr https://twitter.com/elasticsearchfr
| @scrutmydocs https://twitter.com/scrutmydocs

--

Fátima Castiglione Maldonado
castiglionemaldonado@gmail.com




--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

  1. this is my /etc/elasticsearch/elasticsearch.yml

# Mandatory cluster Name. You should be able to modify it in a future release.
cluster.name: "scrutmydocs"

# If you want to check plugins before starting
plugin.mandatory: mapper-attachments, river-fs

# If you want to disable multicast
discovery.zen.ping.multicast.enabled: false

replaced a sample logging.yml that I was using with this one:
/etc/elasticsearch/logging.yml

path:
  logs: /usr/local/tiger/

no logs at all in:

a.
/usr/local/tiger/textfiles

b.
/usr/share/elasticsearch/logs/

(service elasticsearch stop)
(service elasticsearch start)

c.
but here they are:

/var/log/elasticsearch/elasticsearch.log <--- is from Monday!!!
(where is the new one??? I cannot find it)

/var/log/elasticsearch/scrutmydocs.log <--- is from today
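To find which log file Elasticsearch is actually writing to right now, sorting by modification time helps (in plain shell, `ls -lt /var/log/elasticsearch/` does the same; here is the idea in Python, with the directory name being an assumption you should adjust to your own path.logs setting):

```python
import os

log_dir = "/var/log/elasticsearch"  # assumption: adjust to your path.logs

def logs_newest_first(directory):
    """Return the .log files in `directory`, most recently written first."""
    paths = [os.path.join(directory, name)
             for name in os.listdir(directory)
             if name.endswith(".log")]
    return sorted(paths, key=os.path.getmtime, reverse=True)

if os.path.isdir(log_dir):
    for path in logs_newest_first(log_dir):
        print(path)
```

The first file printed is the live one; any stale elasticsearch.log from before the cluster rename will sort below it.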

[2013-06-15 04:30:44,949][INFO ][node ] [Agony]
{0.90.1}[1372]: stopping ...
[2013-06-15 04:30:45,352][INFO ][node ] [Agony]
{0.90.1}[1372]: stopped
[2013-06-15 04:30:45,353][INFO ][node ] [Agony]
{0.90.1}[1372]: closing ...
[2013-06-15 04:30:45,463][INFO ][node ] [Agony]
{0.90.1}[1372]: closed
[2013-06-15 04:30:51,771][INFO ][node ] [Scarlet
Spider] {0.90.1}[11475]: initializing ...
[2013-06-15 04:30:52,314][INFO ][plugins ] [Scarlet
Spider] loaded [mapper-attachments, river-fs, river-couchdb], sites [head]
[2013-06-15 04:30:58,463][INFO ][node ] [Scarlet
Spider] {0.90.1}[11475]: initialized
[2013-06-15 04:30:58,463][INFO ][node ] [Scarlet
Spider] {0.90.1}[11475]: starting ...
[2013-06-15 04:30:58,735][INFO ][transport ] [Scarlet
Spider] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/
192.168.1.100:9300]}
[2013-06-15 04:31:01,800][INFO ][cluster.service ] [Scarlet
Spider] new_master [Scarlet
Spider][kSl76l1qTcibZEWWvZxuGA][inet[/192.168.1.100:9300]], reason:
zen-disco-join (elected_as_master)
[2013-06-15 04:31:01,864][INFO ][discovery ] [Scarlet
Spider] scrutmydocs/kSl76l1qTcibZEWWvZxuGA
[2013-06-15 04:31:01,894][INFO ][http ] [Scarlet
Spider] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/
192.168.1.100:9200]}
[2013-06-15 04:31:01,895][INFO ][node ] [Scarlet
Spider] {0.90.1}[11475]: started
[2013-06-15 04:31:02,024][INFO ][gateway ] [Scarlet
Spider] recovered [0] indices into cluster_state
[2013-06-15 05:56:04,008][INFO ][node ] [Scarlet
Spider] {0.90.1}[11475]: stopping ...
[2013-06-15 05:56:04,229][INFO ][node ] [Scarlet
Spider] {0.90.1}[11475]: stopped
[2013-06-15 05:56:04,230][INFO ][node ] [Scarlet
Spider] {0.90.1}[11475]: closing ...
[2013-06-15 05:56:04,351][INFO ][node ] [Scarlet
Spider] {0.90.1}[11475]: closed
[2013-06-15 05:56:09,812][INFO ][node ] [Century,
Turner] {0.90.1}[12759]: initializing ...
[2013-06-15 05:56:10,354][INFO ][plugins ] [Century,
Turner] loaded [mapper-attachments, river-fs, river-couchdb], sites [head]
[2013-06-15 05:56:16,030][INFO ][node ] [Century,
Turner] {0.90.1}[12759]: initialized
[2013-06-15 05:56:16,031][INFO ][node ] [Century,
Turner] {0.90.1}[12759]: starting ...
[2013-06-15 05:56:16,279][INFO ][transport ] [Century,
Turner] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/
192.168.1.100:9300]}
[2013-06-15 05:56:19,343][INFO ][cluster.service ] [Century,
Turner] new_master [Century,
Turner][kGloCf6DQ5WLKY7ECZ-y_Q][inet[/192.168.1.100:9300]], reason:
zen-disco-join (elected_as_master)
[2013-06-15 05:56:19,422][INFO ][discovery ] [Century,
Turner] scrutmydocs/kGloCf6DQ5WLKY7ECZ-y_Q
[2013-06-15 05:56:19,457][INFO ][http ] [Century,
Turner] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/
192.168.1.100:9200]}
[2013-06-15 05:56:19,457][INFO ][node ] [Century,
Turner] {0.90.1}[12759]: started
[2013-06-15 05:56:19,576][INFO ][gateway ] [Century,
Turner] recovered [0] indices into cluster_state
[2013-06-15 06:01:47,246][INFO ][node ] [Century,
Turner] {0.90.1}[12759]: stopping ...
[2013-06-15 06:01:47,310][INFO ][node ] [Century,
Turner] {0.90.1}[12759]: stopped
[2013-06-15 06:01:47,310][INFO ][node ] [Century,
Turner] {0.90.1}[12759]: closing ...
[2013-06-15 06:01:47,331][INFO ][node ] [Century,
Turner] {0.90.1}[12759]: closed
[2013-06-15 06:07:10,166][INFO ][node ] [Golden Oldie]
{0.90.1}[13614]: initializing ...
[2013-06-15 06:07:10,249][INFO ][plugins ] [Golden Oldie]
loaded [mapper-attachments, river-fs, river-couchdb], sites [head]
[2013-06-15 06:07:14,763][INFO ][node ] [Golden Oldie]
{0.90.1}[13614]: initialized
[2013-06-15 06:07:14,764][INFO ][node ] [Golden Oldie]
{0.90.1}[13614]: starting ...
[2013-06-15 06:07:14,967][INFO ][transport ] [Golden Oldie]
bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/
192.168.1.100:9300]}
[2013-06-15 06:07:18,012][INFO ][cluster.service ] [Golden Oldie]
new_master [Golden Oldie][sFW4r1-_SvifIaBHlwww_g][inet[/192.168.1.100:9300]],
reason: zen-disco-join (elected_as_master)
[2013-06-15 06:07:18,095][INFO ][discovery ] [Golden Oldie]
scrutmydocs/sFW4r1-_SvifIaBHlwww_g
[2013-06-15 06:07:18,129][INFO ][http ] [Golden Oldie]
bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/
192.168.1.100:9200]}
[2013-06-15 06:07:18,130][INFO ][node ] [Golden Oldie]
{0.90.1}[13614]: started
[2013-06-15 06:07:18,219][INFO ][gateway ] [Golden Oldie]
recovered [0] indices into cluster_state
[2013-06-15 06:21:00,357][INFO ][node ] [Golden Oldie]
{0.90.1}[13614]: stopping ...
[2013-06-15 06:21:00,422][INFO ][node ] [Golden Oldie]
{0.90.1}[13614]: stopped
[2013-06-15 06:21:00,422][INFO ][node ] [Golden Oldie]
{0.90.1}[13614]: closing ...
[2013-06-15 06:21:00,445][INFO ][node ] [Golden Oldie]
{0.90.1}[13614]: closed
[2013-06-15 06:21:03,547][INFO ][node ] [Mr. Fish]
{0.90.1}[13980]: initializing ...
[2013-06-15 06:21:03,652][INFO ][plugins ] [Mr. Fish]
loaded [mapper-attachments, river-fs, river-couchdb], sites [head]
[2013-06-15 06:21:08,056][INFO ][node ] [Mr. Fish]
{0.90.1}[13980]: initialized
[2013-06-15 06:21:08,057][INFO ][node ] [Mr. Fish]
{0.90.1}[13980]: starting ...
[2013-06-15 06:21:08,382][INFO ][transport ] [Mr. Fish]
bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/
192.168.1.100:9300]}
[2013-06-15 06:21:11,427][INFO ][cluster.service ] [Mr. Fish]
new_master [Mr. Fish][9cLc8fRhQ6C3kSgKs4TtYw][inet[/192.168.1.100:9300]],
reason: zen-disco-join (elected_as_master)
[2013-06-15 06:21:11,587][INFO ][discovery ] [Mr. Fish]
scrutmydocs/9cLc8fRhQ6C3kSgKs4TtYw
[2013-06-15 06:21:11,715][INFO ][http ] [Mr. Fish]
bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/
192.168.1.100:9200]}
[2013-06-15 06:21:11,717][INFO ][node ] [Mr. Fish]
{0.90.1}[13980]: started
[2013-06-15 06:21:12,014][INFO ][gateway ] [Mr. Fish]
recovered [0] indices into cluster_state
[2013-06-15 06:21:21,056][INFO ][node ] [Mr. Fish]
{0.90.1}[13980]: stopping ...
[2013-06-15 06:21:21,122][INFO ][node ] [Mr. Fish]
{0.90.1}[13980]: stopped
[2013-06-15 06:21:21,123][INFO ][node ] [Mr. Fish]
{0.90.1}[13980]: closing ...
[2013-06-15 06:21:21,150][INFO ][node ] [Mr. Fish]
{0.90.1}[13980]: closed
[2013-06-15 06:21:25,877][INFO ][node ] [Slipstream]
{0.90.1}[14076]: initializing ...
[2013-06-15 06:21:25,963][INFO ][plugins ] [Slipstream]
loaded [mapper-attachments, river-fs, river-couchdb], sites [head]
[2013-06-15 06:21:30,466][INFO ][node ] [Slipstream]
{0.90.1}[14076]: initialized
[2013-06-15 06:21:30,467][INFO ][node ] [Slipstream]
{0.90.1}[14076]: starting ...
[2013-06-15 06:21:30,921][INFO ][transport ] [Slipstream]
bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/
192.168.1.100:9300]}
[2013-06-15 06:21:33,965][INFO ][cluster.service ] [Slipstream]
new_master [Slipstream][01b1e3odSDC5AumjVsiWFw][inet[/192.168.1.100:9300]],
reason: zen-disco-join (elected_as_master)
[2013-06-15 06:21:34,058][INFO ][discovery ] [Slipstream]
scrutmydocs/01b1e3odSDC5AumjVsiWFw
[2013-06-15 06:21:34,092][INFO ][http ] [Slipstream]
bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/
192.168.1.100:9200]}
[2013-06-15 06:21:34,092][INFO ][node ] [Slipstream]
{0.90.1}[14076]: started
[2013-06-15 06:21:34,193][INFO ][gateway ] [Slipstream]
recovered [0] indices into cluster_state
[2013-06-15 06:53:02,605][INFO ][node ] [Slipstream]
{0.90.1}[14076]: stopping ...
[2013-06-15 06:53:02,761][INFO ][node ] [Slipstream]
{0.90.1}[14076]: stopped
[2013-06-15 06:53:02,761][INFO ][node ] [Slipstream]
{0.90.1}[14076]: closing ...
[2013-06-15 06:53:02,859][INFO ][node ] [Slipstream]
{0.90.1}[14076]: closed
[2013-06-15 06:53:09,634][INFO ][node ] [Khaos]
{0.90.1}[14658]: initializing ...
[2013-06-15 06:53:10,353][INFO ][plugins ] [Khaos] loaded
[mapper-attachments, river-fs, river-couchdb], sites [head]
[2013-06-15 06:53:16,002][INFO ][node ] [Khaos]
{0.90.1}[14658]: initialized
[2013-06-15 06:53:16,003][INFO ][node ] [Khaos]
{0.90.1}[14658]: starting ...
[2013-06-15 06:53:16,215][INFO ][transport ] [Khaos]
bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/
192.168.1.100:9300]}
[2013-06-15 06:53:19,297][INFO ][cluster.service ] [Khaos]
new_master [Khaos][kq03c5NsRrOcb0Tuj1LxlA][inet[/192.168.1.100:9300]],
reason: zen-disco-join (elected_as_master)
[2013-06-15 06:53:19,421][INFO ][discovery ] [Khaos]
scrutmydocs/kq03c5NsRrOcb0Tuj1LxlA
[2013-06-15 06:53:19,498][INFO ][http ] [Khaos]
bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/
192.168.1.100:9200]}
[2013-06-15 06:53:19,499][INFO ][node ] [Khaos]
{0.90.1}[14658]: started
[2013-06-15 06:53:19,710][INFO ][gateway ] [Khaos]
recovered [0] indices into cluster_state
[2013-06-15 06:53:50,209][INFO ][node ] [Khaos]
{0.90.1}[14658]: stopping ...
[2013-06-15 06:53:50,279][INFO ][node ] [Khaos]
{0.90.1}[14658]: stopped
[2013-06-15 06:53:50,279][INFO ][node ] [Khaos]
{0.90.1}[14658]: closing ...
[2013-06-15 06:53:50,300][INFO ][node ] [Khaos]
{0.90.1}[14658]: closed
[2013-06-15 06:53:52,829][INFO ][node ] [Alexander,
Caleb] {0.90.1}[14914]: initializing ...
[2013-06-15 06:53:52,921][INFO ][plugins ] [Alexander,
Caleb] loaded [mapper-attachments, river-fs, river-couchdb], sites [head]
[2013-06-15 06:53:57,503][INFO ][node ] [Alexander,
Caleb] {0.90.1}[14914]: initialized
[2013-06-15 06:53:57,504][INFO ][node ] [Alexander,
Caleb] {0.90.1}[14914]: starting ...
[2013-06-15 06:53:57,719][INFO ][transport ] [Alexander,
Caleb] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/
192.168.1.100:9300]}
[2013-06-15 06:54:00,766][INFO ][cluster.service ] [Alexander,
Caleb] new_master [Alexander,
Caleb][BUrKsf6OR9uylkQe9_yedQ][inet[/192.168.1.100:9300]], reason:
zen-disco-join (elected_as_master)
[2013-06-15 06:54:00,851][INFO ][discovery ] [Alexander,
Caleb] scrutmydocs/BUrKsf6OR9uylkQe9_yedQ
[2013-06-15 06:54:00,889][INFO ][http ] [Alexander,
Caleb] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/
192.168.1.100:9200]}
[2013-06-15 06:54:00,890][INFO ][node ] [Alexander,
Caleb] {0.90.1}[14914]: started
[2013-06-15 06:54:00,984][INFO ][gateway ] [Alexander,
Caleb] recovered [0] indices into cluster_state
[2013-06-15 07:29:00,424][INFO ][node ] [Alexander,
Caleb] {0.90.1}[14914]: stopping ...
[2013-06-15 07:29:00,479][INFO ][node ] [Alexander,
Caleb] {0.90.1}[14914]: stopped
[2013-06-15 07:29:00,479][INFO ][node ] [Alexander,
Caleb] {0.90.1}[14914]: closing ...
[2013-06-15 07:29:00,509][INFO ][node ] [Alexander,
Caleb] {0.90.1}[14914]: closed


I see, there is no more elasticsearch.log; now the log file has the name
of the cluster.
Anyway, it goes on creating the river as dummy.

2013/6/15 Fatima Castiglione Maldonado castiglionemaldonado@gmail.com

{0.90.1}[14658]: starting ...
[2013-06-15 06:53:16,215][INFO ][transport ] [Khaos]
bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/
192.168.1.100:9300]}
[2013-06-15 06:53:19,297][INFO ][cluster.service ] [Khaos]
new_master [Khaos][kq03c5NsRrOcb0Tuj1LxlA][inet[/192.168.1.100:9300]],
reason: zen-disco-join (elected_as_master)
[2013-06-15 06:53:19,421][INFO ][discovery ] [Khaos]
scrutmydocs/kq03c5NsRrOcb0Tuj1LxlA
[2013-06-15 06:53:19,498][INFO ][http ] [Khaos]
bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/
192.168.1.100:9200]}
[2013-06-15 06:53:19,499][INFO ][node ] [Khaos]
{0.90.1}[14658]: started
[2013-06-15 06:53:19,710][INFO ][gateway ] [Khaos]
recovered [0] indices into cluster_state
[2013-06-15 06:53:50,209][INFO ][node ] [Khaos]
{0.90.1}[14658]: stopping ...
[2013-06-15 06:53:50,279][INFO ][node ] [Khaos]
{0.90.1}[14658]: stopped
[2013-06-15 06:53:50,279][INFO ][node ] [Khaos]
{0.90.1}[14658]: closing ...
[2013-06-15 06:53:50,300][INFO ][node ] [Khaos]
{0.90.1}[14658]: closed
[2013-06-15 06:53:52,829][INFO ][node ] [Alexander,
Caleb] {0.90.1}[14914]: initializing ...
[2013-06-15 06:53:52,921][INFO ][plugins ] [Alexander,
Caleb] loaded [mapper-attachments, river-fs, river-couchdb], sites [head]
[2013-06-15 06:53:57,503][INFO ][node ] [Alexander,
Caleb] {0.90.1}[14914]: initialized
[2013-06-15 06:53:57,504][INFO ][node ] [Alexander,
Caleb] {0.90.1}[14914]: starting ...
[2013-06-15 06:53:57,719][INFO ][transport ] [Alexander,
Caleb] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/
192.168.1.100:9300]}
[2013-06-15 06:54:00,766][INFO ][cluster.service ] [Alexander,
Caleb] new_master [Alexander,
Caleb][BUrKsf6OR9uylkQe9_yedQ][inet[/192.168.1.100:9300]], reason:
zen-disco-join (elected_as_master)
[2013-06-15 06:54:00,851][INFO ][discovery ] [Alexander,
Caleb] scrutmydocs/BUrKsf6OR9uylkQe9_yedQ
[2013-06-15 06:54:00,889][INFO ][http ] [Alexander,
Caleb] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/
192.168.1.100:9200]}
[2013-06-15 06:54:00,890][INFO ][node ] [Alexander,
Caleb] {0.90.1}[14914]: started
[2013-06-15 06:54:00,984][INFO ][gateway ] [Alexander,
Caleb] recovered [0] indices into cluster_state
[2013-06-15 07:29:00,424][INFO ][node ] [Alexander,
Caleb] {0.90.1}[14914]: stopping ...
[2013-06-15 07:29:00,479][INFO ][node ] [Alexander,
Caleb] {0.90.1}[14914]: stopped
[2013-06-15 07:29:00,479][INFO ][node ] [Alexander,
Caleb] {0.90.1}[14914]: closing ...
[2013-06-15 07:29:00,509][INFO ][node ] [Alexander,
Caleb] {0.90.1}[14914]: closed

--

Fátima Castiglione Maldonado
castiglionemaldonado@gmail.com


--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

Ok. Sounds like you are trying to connect scrutmydocs 0.2.0 to Elasticsearch 0.90.1.
Have a look at the README: https://github.com/scrutmydocs/scrutmydocs

0.2.0 works only with Elasticsearch 0.19.x.

We need to release a new version as soon as possible.

That said, you can build your own version of scrutmydocs on master and it should work fine with 0.90.1.

Does it help?
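One quick way to double-check which Elasticsearch version the application is actually talking to (a sketch, assuming a node listening on the default HTTP port 9200):

```shell
# The root endpoint reports the node's version; for this thread it shows 0.90.1
curl -s http://localhost:9200/
```

If the "number" reported there is not in the 0.19.x range, the 0.2.0 war is running against an unsupported server.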

--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr | @scrutmydocs

On 15 June 2013 at 12:53, Fatima Castiglione Maldonado <castiglionemaldonado@gmail.com> wrote:

[2013-06-15 04:30:44,949][INFO ][node ] [Agony] {0.90.1}[1372]: stopping ...
[2013-06-15 04:30:45,352][INFO ][node ] [Agony] {0.90.1}[1372]: stopped
[2013-06-15 04:30:45,353][INFO ][node ] [Agony] {0.90.1}[1372]: closing ...
[2013-06-15 04:30:45,463][INFO ][node ] [Agony] {0.90.1}[1372]: closed
[2013-06-15 04:30:51,771][INFO ][node ] [Scarlet Spider] {0.90.1}[11475]: initializing ...

Yes, that is my configuration...
And yes, that helps.

(In a previous message, while I was still using CouchDB, I explained my
config... but did not repeat it when switching to ScrutMyDocs, sorry.)

Ubuntu 13.04
(CouchDB 1.2.0) <--- now it is still installed but unrelated
ElasticSearch 0.90.1
_head plug-in (downloaded a few days ago)
_river plug-in (downloaded a few days ago)
ScrutMyDocs 0.2.0 (downloaded a few days ago)

Switching back to 0.19.x... Mmmm... I do not really like the idea.
So yes, I will build a new version.
Are there any instructions / docs / anything about how to do that?
(I will try with my standard development config, using NetBeans, for
starters)

Merci monsieur.


--

Fátima Castiglione Maldonado
castiglionemaldonado@gmail.com


Speak French? You may be interested to know that there is also a French-speaking community.

I will try to release a new version of scrutmydocs soon.

Basically, to build it yourself, install Maven and run

mvn install

That's it.
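Put together, a from-source build might look like this (a sketch; it assumes git and Maven are installed, and uses the repository URL quoted elsewhere in the thread):

```shell
git clone https://github.com/scrutmydocs/scrutmydocs.git
cd scrutmydocs
# Standard Maven build; with war packaging the artifact ends up under target/
mvn install
```

The resulting .war can then be deployed the same way as the released scrutmydocs-0.2.0.war.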

HTH

--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

On 15 June 2013 at 22:04, Fatima Castiglione Maldonado <castiglionemaldonado@gmail.com> wrote:


Thanks David

I do read French but I do not really write or speak it much.
Thanks anyway...
Maybe I should join.
It may be a good way to practice.


I am now trying to keep moving forward with ScrutMyDocs.

A couple of questions:

  1. There are two URLs on GitHub; which one should I use?
    (the first, in my humble opinion, but..)

a.

(last updated a month ago)

b.

(last updated 9 months ago)

  2. Please excuse me if this is a stupid question.
    I have read the git guide and tried this many times but always get the
    same result.
    There must be something wrong in my commands, but I just cannot find it.

using the first URL, this is what I get:

fatima@FatiLinux:~$ sudo su
[sudo] password for fatima:
root@FatiLinux:/home/fatima# cd /media/fatima/Elements/Tiger/elasticSearch/
root@FatiLinux:/media/fatima/Elements/Tiger/elasticSearch# git clone

Cloning into 'scrutmydocs'...
remote: Counting objects: 4218, done.
remote: Compressing objects: 100% (1980/1980), done
remote: Total 4218 (delta 1367), reused 4123 (delta 1277)
Receiving objects: 100% (4218/4218), 1.34 MiB | 311 KiB/s, done.
Resolving deltas: 100% (1367/1367), done.
root@FatiLinux:/media/fatima/Elements/Tiger/elasticSearch# cd scrutmydocs
root@FatiLinux:/media/fatima/Elements/Tiger/elasticSearch/scrutmydocs# ls
-ralh
total 8,5K
-rw------- 1 fatima fatima 3,0K jun 17 06:55 index.html
-rw------- 1 fatima fatima 10 jun 17 06:55 .gitignore
drwx------ 1 fatima fatima 440 jun 17 06:55 .git
drwx------ 1 fatima fatima 4,0K jun 17 06:55 ..
drwx------ 1 fatima fatima 352 jun 17 06:55 .
root@FatiLinux:/media/fatima/Elements/Tiger/elasticSearch/scrutmydocs#

(Of course I downloaded the .zip, but I will surely need to work with the
repository sooner or later.)

Thanks in advance

2013/6/15 David Pilato david@pilato.fr


--

Fátima Castiglione Maldonado
castiglionemaldonado@gmail.com


And it is exactly the same no matter which URL I use:

root@FatiLinux:/media/fatima/Elements/Tiger/elasticSearch# git clone

Cloning into 'scrutmydocs'...
remote: Counting objects: 4218, done.
remote: Compressing objects: 100% (1980/1980), done.
remote: Total 4218 (delta 1367), reused 4123 (delta 1277)
Receiving objects: 100% (4218/4218), 1.34 MiB | 278 KiB/s, done.
Resolving deltas: 100% (1367/1367), done.
root@FatiLinux:/media/fatima/Elements/Tiger/elasticSearch# ls -ralh
total 41M
-rw------- 1 fatima fatima 711K jun 17 06:59 scrutmydocs-master.zip
drwx------ 1 fatima fatima 4,0K jun 17 07:21 scrutmydocs-master
-rw------- 1 fatima fatima 40M jun 10 02:22 scrutmydocs-0.2.0.war
drwx------ 1 fatima fatima 352 jun 17 07:24 scrutmydocs
drwx------ 1 fatima fatima 4,0K jun 16 21:27 ..
drwx------ 1 fatima fatima 4,0K jun 17 07:23 .
root@FatiLinux:/media/fatima/Elements/Tiger/elasticSearch# cd scrutmydocs
root@FatiLinux:/media/fatima/Elements/Tiger/elasticSearch/scrutmydocs# ls
-ralh
total 8,5K
-rw------- 1 fatima fatima 3,0K jun 17 07:24 index.html
-rw------- 1 fatima fatima 10 jun 17 07:24 .gitignore
drwx------ 1 fatima fatima 440 jun 17 07:24 .git
drwx------ 1 fatima fatima 4,0K jun 17 07:23 ..
drwx------ 1 fatima fatima 352 jun 17 07:24 .
root@FatiLinux:/media/fatima/Elements/Tiger/elasticSearch/scrutmydocs#


--

Fátima Castiglione Maldonado
castiglionemaldonado@gmail.com


You probably downloaded the gh-pages branch instead of master.
Try: git checkout master
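To see which branch a clone ended up on, and to switch, git itself can tell you (a sketch, run inside the cloned directory):

```shell
git branch -a         # the current branch is marked with an asterisk
git checkout master   # switch from gh-pages to the source branch
ls                    # pom.xml and the sources should now be present
```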

The right repository is this one: https://github.com/scrutmydocs/scrutmydocs

--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr | @scrutmydocs

On 17 June 2013 at 12:25, Fatima Castiglione Maldonado <castiglionemaldonado@gmail.com> wrote:


Yes... That was the problem.
Thank you very much.
Now I can download it from the repository and compile it without problems.

Anyway, I still have two problems left:

  1. Did you configure something special in scrutmydocs about the location of
    the logs?

they are not in the usual places:

/usr/share/elasticsearch/logs/
/var/log/elasticsearch/elasticsearch.log

  2. when I re-create the river, all is the same as before

a. it won't read my files, it just shows the ones which were manually uploaded

b. the index is not shown in http://localhost:9200/_plugin/head/

c. when I do
http://localhost:8080/scrutmydocs-0.3.1-SNAPSHOT-test/api/1/settings/rivers
I get:
{"ok":true,"errors":null,"object":[{"id":"tiger","name":"tiger","indexname":"docstiger","typename":"doctiger","start":true,"type":"dummy"}]}

...so it is still a dummy river.

  3. after this works, I am planning to add Twitter, Wikipedia and RSS
    capabilities to it. At least that is what my client wants, so sometime in
    the future you will get a nice version, as a way to thank you for all your
    help.
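(Editor's aside: the `"type":"dummy"` in the settings response above is the key symptom. A minimal shell sketch of the check, using the exact JSON shown in the message rather than a live call:)

```shell
# Sketch only: the /api/1/settings/rivers response quoted above, checked for
# rivers that Scrutmydocs still reports as type "dummy" (i.e. the fsriver
# was never actually registered).
response='{"ok":true,"errors":null,"object":[{"id":"tiger","name":"tiger","indexname":"docstiger","typename":"doctiger","start":true,"type":"dummy"}]}'

case "$response" in
  *'"type":"dummy"'*) echo "still a dummy river" ;;
  *)                  echo "real fsriver registered" ;;
esac
# prints: still a dummy river
```

Against a live deployment the same grep could be run on the output of `curl` for that URL.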

Thanks in advance,
Fatima

2013/6/20 David Pilato david@pilato.fr

You probably downloaded gh-pages branch instead of master.
Try git checkout master

The right repository is this one:
https://github.com/scrutmydocs/scrutmydocs

--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr | @scrutmydocs


--

Fátima Castiglione Maldonado
castiglionemaldonado@gmail.com



By default, Scrutmydocs runs an embedded node, so the elasticsearch logs appear within your container logs.
If you run an external elasticsearch node, you should know where you put the logs.

I still don't understand how you get this. Logs would help a lot here I think.

Could you describe each step, one by one, of what you are doing? And do it from start?
Also remove ~/.scrutmydocs dir
Perhaps you have strange data/config here???
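(Editor's aside: a minimal sketch of that clean-up. `SMD_HOME` is an assumed variable, not part of the thread; point it at the home of whichever user actually runs GlassFish. Started via `su`, the state ends up under /root/.scrutmydocs rather than the regular user's home.)

```shell
# Sketch only: inspect and then wipe the Scrutmydocs state directory before
# restarting the container. SMD_HOME is an assumption; here it was
# /root/.scrutmydocs because GlassFish ran as root.
SMD_HOME="${SMD_HOME:-$HOME/.scrutmydocs}"
ls -la "$SMD_HOME" 2>/dev/null   # see what config/data is there first
rm -rf "$SMD_HOME"               # then clear it and restart GlassFish
```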

--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr | @scrutmydocs


Thank you very much for your support.
I will redo everything from the beginning, so you can see if I am doing
something weird or wrong. I followed the instructions from the web and did
not do anything special.

  1. folders and files

a. I do not have a ~/.scrutmydocs folder, nor a ~/.elasticsearch folder
but there is a /root/.scrutmydocs/config folder with a scrutmydocs.properties
file

(this happens because I start up GlassFish as root, via su... maybe this is
the root (no pun intended) of my problems?)

scrutmydocs.properties file contents:

################################################################
# Licensed to scrutmydocs.org (the "Author") under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. Author licenses this
# file to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
################################################################

################################################################
# Scrutmydocs configuration file
#
# This file should be in ~/.scrutmydocs/config/ directory
# under the name scrutsmydocs.properties
#
# If not present, it will be created the first time you start
# the web application...
################################################################

# Set to false if you want to connect your webapp to an existing
# Elasticsearch cluster, default to true
node.embedded=false

# If false, you have to define your node(s) address(es), default to:
# localhost:9300,localhost:9301
node.addresses=localhost:9300,localhost:9301

# Define the cluster name, default to: scrutmydocs
cluster.name=scrutmydocs

# Define the Elasticsearch data dir, default to ~/.scrutmydocs/esdata,
# where ~ is the user home dir
path.data=/home/user/.scrutmydocs/esdata

b. I do have a /usr/share/elasticsearch/conf/elasticsearch.yml, with the
following contents:

# Mandatory cluster Name. You should be able to modify it in a future
# release.
cluster.name: scrutmydocs

# If you want to check plugins before starting
plugin.mandatory: mapper-attachments, river-fs

# If you want to disable multicast
discovery.zen.ping.multicast.enabled: false

#cluster:
#  name: TigerCluster

#network:
#  host: 127.0.0.1

#discovery:
#  zen:
#    multicast.enabled: false

#http:
#  max_content_length: 100000

#index:
#  number_of_shards: 1
#  analysis:
#    analyzer:
#      default:
#        type: standard
#      lowercase_analyzer:
#        type: custom
#        tokenizer: standard
#        filter: [standard, lowercase]

  2. I will un-install Elasticsearch, undeploy ScrutMyDocs, reboot,
    re-install Elasticsearch and re-deploy ScrutMyDocs to be sure nothing is
    wrong and so you can follow the procedure

  3. un-install Elasticsearch (in bash)

fatima@FatiLinux:~$ sudo su
[sudo] password for fatima:
root@FatiLinux:/home/fatima# apt-get purge elasticsearch
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer
required:
antlr3 cup default-jdk-doc javahelp2 junit4-doc libbeansbinding-java
libbetter-appframework-java libbindex-java
libbytelist-java libcommons-compress-java libcommons-net1-java libdb-java
libdb-je-java libdb5.1-java libdb5.1-java-jni
libfelix-framework-java libfelix-main-java libflute-java libfsplib0
libhamcrest-java-doc libicu4j-java libini4j-java
libjcodings-java libjemmy2-java libjna-java libjoda-convert-java
libjoda-time-java libjvyamlb-java libjzlib-java
liblucene2-java libmysql-java libnb-absolutelayout-java
libnb-apisupport3-java libnb-ide14-java libnb-java5-java
libnb-javaparser-java libnb-org-openide-modules-java
libnb-org-openide-util-java libnb-org-openide-util-lookup-java
libnb-platform-devel-java libnb-platform13-java libnetx-java
libpostgresql-jdbc-java libsac-java libsac-java-gcj
libsequence-library-java libserf1 libsimple-validation-java
libsqljet-java libstringtemplate-java libsvn-java libsvn1
libsvnclientadapter-java libsvnkit-java libswing-layout-java
libswingx1-java libswt-cairo-gtk-3-jni libswt-gnome-gtk-3-jni
libswt-gtk-3-java libswt-gtk-3-jni libswt-webkit-gtk-3-jni libtre5
libtrilead-ssh2-java libxz-java openjdk-7-doc weka
Use 'apt-get autoremove' to remove them.
The following packages will be REMOVED:
elasticsearch*
0 upgraded, 0 newly installed, 1 to remove and 82 not upgraded.
After this operation, 19,9 MB disk space will be freed.
Do you want to continue [Y/n]? y
(Reading database ... 305350 files and directories currently installed.)
Removing elasticsearch ...

 * Stopping Elasticsearch Server                               [ OK ]
Purging configuration files for elasticsearch ...
Removing user `elasticsearch' ...
Warning: group `elasticsearch' has no more members.
Done.
The group `elasticsearch' does not exist.
dpkg: warning: while removing elasticsearch, directory '/etc/elasticsearch' not empty so not removed
dpkg: warning: while removing elasticsearch, directory '/usr/share/elasticsearch' not empty so not removed
Processing triggers for ureadahead ...
ureadahead will be reprofiled on next reboot
root@FatiLinux:/home/fatima#
  4. went to GlassFish, undeployed ScrutMyDocs

  5. reboot

  6. re-install Elasticsearch (from bash)... I skipped this step, as
    ScrutMyDocs uses an embedded server, so it would be useless

  7. re-deploy ScrutMyDocs
    did it in GlassFish again.

When it starts, I found four things:

a. it still has all the docs that were manually uploaded

b. it still has the river, pointing to the same path as before

c. it still won't read my docs from the file system

d. when I do
http://localhost:8080/scrutmydocs-0.3.1-SNAPSHOT-test/api/1/settings/rivers
I get:
{"ok":true,"errors":null,"object":[{"id":"tiger","name":"tiger","indexname":"docstiger","typename":"doctiger","start":true,"type":"dummy"}]}

(the change in the name of the app is because I downloaded the code from
GitHub and compiled it locally with Maven)

Thanks in advance.


Clean everything in /root/.scrutmydocs
And restart glassfish.

Your old docs/rivers should disappear.
If not, could you list the running processes with ps -ef?
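(Editor's aside: a hedged one-liner for that check. The grep pattern is only a guess at the relevant process names in this setup:)

```shell
# Sketch only: narrow `ps -ef` down to the JVMs that could still be holding
# the old Scrutmydocs/Elasticsearch state (the GlassFish container or a
# stray standalone Elasticsearch node).
ps -ef | grep -Ei 'glassfish|elasticsearch' | grep -v grep || true
```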

Note: when sharing content like this on the mailing list, please use Gist instead of pasting your code here.

--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

Le 22 juin 2013 à 07:32, Fatima Castiglione Maldonado 发 castiglionemaldonado@gmail.com a écrit :

Thank you very much for your support.
I will redo all from the beginning, so you can see if I am doing something weird or wrong. I followed the instructions from the web, and did not do anything special.

  1. folders and files

a. I do not have a ~/.scrutmydocs folder, nor a ~/.elasticsearch folder
but there is a /root/.scrutmydocs/config folder with a scrutmydocs.properties file

(this happens because I start-up GlassFish while su... maybe this is the root (no pun intended) of my problems?)

scrutmydocs.properties file contents:

################################################################

Licensed to scrutmydocs.org (the "Author") under one

or more contributor license agreements. See the NOTICE file

distributed with this work for additional information

regarding copyright ownership. Author licenses this

file to you under the Apache License, Version 2.0 (the

"License"); you may not use this file except in compliance

with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,

software distributed under the License is distributed on an

"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY

KIND, either express or implied. See the License for the

specific language governing permissions and limitations

under the License.

################################################################

################################################################

Scrutmydocs configuration file

This file should be in ~/.scrutmydocs/config/ directory

under the name scrutsmydocs.properties

If not present, it will be created the first time you start

the web application...

################################################################

Set to false if you want to connect your webapp to an existing Elasticsearch cluster, default to true

node.embedded=false

If false, you have to define your node(s) address(es), default to : localhost:9300,localhost:9301

node.addresses=localhost:9300,localhost:9301

Define the cluster name, default to : scrutmydocs

cluster.name=scrutmydocs

Define the Elasticsearch data dir, default to ~/.scrutmydocs/esdata, where ~ is the user home dir

path.data=/home/user/.scrutmydocs/esdata

b. I do have a /usr/share/elasticsearch/conf/elasticsearch.yml, with the following contents:

Mandatory cluster Name. You should be able to modify it in a future release.

cluster.name: scrutmydocs

If you want to check plugins before starting

plugin.mandatory: mapper-attachments, river-fs

If you want to disable multicast

discovery.zen.ping.multicast.enabled: false

#cluster:

name: TigerCluster

#network:

host: 127.0.0.1

#discovery:

zen:

multicast.enabled: false

#http:

max_content_length: 100000

#index:

number_of_shards: 1

analysis:

analyzer:

default:

type: standard

lowercase_analyzer:

type: custom

tokenizer: standard

filter: [standard, lowercase]

  1. I will un-install Elasticsearch, undeploy ScrutMyDocs, reboot, re-install Elasticsearch and re-deploy ScrutMyDocs to be sure nothing is wrong and so you can follow the procedure

  2. un-install Elasticsearch (in bash)

fatima@FatiLinux:~$ sudo su
[sudo] password for fatima:
root@FatiLinux:/home/fatima# apt-get purge elasticsearch
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
antlr3 cup default-jdk-doc javahelp2 junit4-doc libbeansbinding-java libbetter-appframework-java libbindex-java
libbytelist-java libcommons-compress-java libcommons-net1-java libdb-java libdb-je-java libdb5.1-java libdb5.1-java-jni
libfelix-framework-java libfelix-main-java libflute-java libfsplib0 libhamcrest-java-doc libicu4j-java libini4j-java
libjcodings-java libjemmy2-java libjna-java libjoda-convert-java libjoda-time-java libjvyamlb-java libjzlib-java
liblucene2-java libmysql-java libnb-absolutelayout-java libnb-apisupport3-java libnb-ide14-java libnb-java5-java
libnb-javaparser-java libnb-org-openide-modules-java libnb-org-openide-util-java libnb-org-openide-util-lookup-java
libnb-platform-devel-java libnb-platform13-java libnetx-java libpostgresql-jdbc-java libsac-java libsac-java-gcj
libsequence-library-java libserf1 libsimple-validation-java libsqljet-java libstringtemplate-java libsvn-java libsvn1
libsvnclientadapter-java libsvnkit-java libswing-layout-java libswingx1-java libswt-cairo-gtk-3-jni libswt-gnome-gtk-3-jni
libswt-gtk-3-java libswt-gtk-3-jni libswt-webkit-gtk-3-jni libtre5 libtrilead-ssh2-java libxz-java openjdk-7-doc weka
Use 'apt-get autoremove' to remove them.
The following packages will be REMOVED:
elasticsearch*
0 upgraded, 0 newly installed, 1 to remove and 82 not upgraded.
After this operation, 19,9 MB disk space will be freed.
Do you want to continue [Y/n]? y
(Reading database ... 305350 files and directories currently installed.)
Removing elasticsearch ...

  • Stopping Elasticsearch Server [ OK ]
    Purging configuration files for elasticsearch ...
    Removing user elasticsearch' ... Warning: group elasticsearch' has no more members.
    Done.
    The group `elasticsearch' does not exist.
    dpkg: warning: while removing elasticsearch, directory '/etc/elasticsearch' not empty so not removed
    dpkg: warning: while removing elasticsearch, directory '/usr/share/elasticsearch' not empty so not removed
    Processing triggers for ureadahead ...
    ureadahead will be reprofiled on next reboot
    root@FatiLinux:/home/fatima#
  1. went to GlassFish, undeployed ScrutMyDocs

  2. reboot

  3. re-install Elasticsearch (from bash)... skip this step, as ScrutMyDocs uses and embedded server, so it will be useless

  4. re-deploy ScrutMyDocs
    did it in GlassFish again.

When it starts, I found four things:

a. it still has all the docs that were manually uploaded

b. it still has the river, pointing to the same path as before

c. it still won't read my docs from the file system

d. when I do http://localhost:8080/scrutmydocs-0.3.1-SNAPSHOT-test/api/1/settings/rivers
{"ok":true,"errors":null,"object":[{"id":"tiger","name":"tiger","indexname":"docstiger","typename":"doctiger","start":true,"type":"dummy"}]}
(the change in the name of the app is because I downloaded the code from GitHub and compiled it locally with Maven)

Thanks in advance.

2013/6/21 David Pilato david@pilato.fr

By default, Scrutmydocs runs an embedded node. So elasticsearch logs appears within your container logs.
If you run an external elasticsearch node, you should know where you put logs.

I still don't understand how you get this. Logs would help a lot here I think.

Could you describe each step, one by one, of what you are doing? And do it from start?
Also remove ~/.scrutmydocs dir
Perhaps you have strange data/config here???

--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr | @scrutmydocs

Le 21 juin 2013 à 05:15, Fatima Castiglione Maldonado 发 castiglionemaldonado@gmail.com a écrit :

Yes... That was the problem.
Thank you very much.
Now I can download it from the repository and compile it without problems.

Anyway, I got still two problems left:

  1. did you configure someone special in scrutmydocs about the location for the logs?

they are not in the usual places:

/usr/share/elasticsearch/logs/
/var/log/elasticsearch/elasticsearch.log

  1. when I re-create the river, all is the same as before

a. it won't read my files, just shows the one which were manually uploaded

b. the index is not shown in http://localhost:9200/_plugin/head/

c. when I do http://localhost:8080/scrutmydocs-0.3.1-SNAPSHOT-test/api/1/settings/rivers
I get:
{"ok":true,"errors":null,"object":[{"id":"tiger","name":"tiger","indexname":"docstiger","typename":"doctiger","start":true,"type":"dummy"}]}

...so it is yet a dummy river still.

  1. after this works, I am planning to add Twitter, Wikipedia and RSS capabilities to it. At least that is what my client wants, so sometime in the future you will get a nice version, as a way to thank you for all your help.

Thanks in advance,
Fatima

2013/6/20 David Pilato david@pilato.fr

You probably downloaded gh-pages branch instead of master.
Try git checkout master

The right repository is this one: GitHub - scrutmydocs/scrutmydocs: Search Web Application for hard drive documents

--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr | @scrutmydocs

Le 17 juin 2013 à 12:25, Fatima Castiglione Maldonado 发 castiglionemaldonado@gmail.com a écrit :

and it is exactly the same no matter what URL I use:

root@FatiLinux:/media/fatima/Elements/Tiger/elasticSearch# git clone GitHub - scrutmydocs/scrutmydocs: Search Web Application for hard drive documents
Cloning into 'scrutmydocs'...
remote: Counting objects: 4218, done.
remote: Compressing objects: 100% (1980/1980), done.
remote: Total 4218 (delta 1367), reused 4123 (delta 1277)
Receiving objects: 100% (4218/4218), 1.34 MiB | 278 KiB/s, done.
Resolving deltas: 100% (1367/1367), done.
root@FatiLinux:/media/fatima/Elements/Tiger/elasticSearch# ls -ralh
total 41M
-rw------- 1 fatima fatima 711K jun 17 06:59 scrutmydocs-master.zip
drwx------ 1 fatima fatima 4,0K jun 17 07:21 scrutmydocs-master
-rw------- 1 fatima fatima 40M jun 10 02:22 scrutmydocs-0.2.0.war
drwx------ 1 fatima fatima 352 jun 17 07:24 scrutmydocs
drwx------ 1 fatima fatima 4,0K jun 16 21:27 ..
drwx------ 1 fatima fatima 4,0K jun 17 07:23 .
root@FatiLinux:/media/fatima/Elements/Tiger/elasticSearch# cd scrutmydocs
root@FatiLinux:/media/fatima/Elements/Tiger/elasticSearch/scrutmydocs# ls -ralh
total 8,5K
-rw------- 1 fatima fatima 3,0K jun 17 07:24 index.html
-rw------- 1 fatima fatima 10 jun 17 07:24 .gitignore
drwx------ 1 fatima fatima 440 jun 17 07:24 .git
drwx------ 1 fatima fatima 4,0K jun 17 07:23 ..
drwx------ 1 fatima fatima 352 jun 17 07:24 .
root@FatiLinux:/media/fatima/Elements/Tiger/elasticSearch/scrutmydocs#

--

Fátima Castiglione Maldonado
castiglionemaldonado@gmail.com


--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
