I get the error "No processor type exists with name [attachment]"

I get:
{
  "error": {
    "root_cause": [
      {
        "type": "parse_exception",
        "reason": "No processor type exists with name [attachment]",
        "processor_type": "attachment"
      }
    ],
    "type": "parse_exception",
    "reason": "No processor type exists with name [attachment]",
    "processor_type": "attachment"
  },
  "status": 400
}

when I try to create the pipeline from the docs (https://www.elastic.co/guide/en/elasticsearch/plugins/7.2/using-ingest-attachment.html):
PUT _ingest/pipeline/attachment
{
  "description" : "Extract attachment information",
  "processors" : [
    {
      "attachment" : {
        "field" : "data",
        "indexed_chars" : 11
      }
    }
  ]
}
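For context, once the pipeline exists I want to index documents through it, something like this (just a sketch; the base64 value is a small sample string and my-index/my_id are placeholder names, not my real data):

PUT my-index/_doc/my_id?pipeline=attachment
{
  "data": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0="
}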
Can anyone help me? I'm new to ES; I just installed it, everything is working, and I followed the docs. I can index and search fine.
I installed the plugin with: sudo bin/elasticsearch-plugin install ingest-attachment
and when I run GET _cat/plugins?v I get:
name component version
ubuntu-TUF-Gaming-FX505GE-FX505GE ingest-attachment 7.2.0

Did you restart the node before running the PUT _ingest/pipeline/attachment?

Could you share the full logs? (formatted please with </> icon).

If by restarting you mean sudo systemctl restart elasticsearch.service, then yes, I did it.
Can you please tell me how to get the log you want?

It depends on which package you installed from. See https://www.elastic.co/guide/en/elasticsearch/reference/current/install-elasticsearch.html

For RPM, for example (https://www.elastic.co/guide/en/elasticsearch/reference/current/rpm.html), the log dir is /var/log/elasticsearch by default.
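For example, something like this should show the relevant log (the tar.gz path is an assumption based on where the archive was extracted, and it assumes the default cluster name):

# deb/rpm install: logs go to /var/log/elasticsearch by default
sudo tail -n 200 /var/log/elasticsearch/elasticsearch.log

# tar.gz (archive) install: logs live under the extraction directory
tail -n 200 ~/Downloads/elasticsearch-7.2.0/logs/elasticsearch.log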

OK, I downloaded the archive package from the download page. I chose the Linux download, then extracted it and started Elasticsearch with ./bin/elasticsearch from the terminal.
Which log should I show? [image]

elasticsearch.log

Here it is: [image]

I saw you talk about this in old issues, and you said that the plugin has to be installed on every node. First, I don't understand why I have 2 nodes; I only installed Elasticsearch 5 days ago and just followed the docs on how to index and do simple stuff!

If the problem is that I didn't install the plugin on the two nodes, how can I do that?

You can't have 2 nodes running on the same machine unless you changed some settings.
Kill the node you don't need, or just kill them all and restart only one.

OK, but can you please tell me how to do that? I don't want to do it wrong. I don't remember starting 2 nodes.

What does the following give?

ps -ef | grep elasticsearch

It gives this: [image]

Please don't post images of text; they are hardly readable and not searchable.

Instead, paste the text and format it with the </> icon. Check the preview window.

You have started one Elasticsearch as a service and another one manually.
If you don't really care about either of them, just kill -9 the process.
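For example, something along these lines (the PID is a placeholder; take the real one from the second column of the ps output):

# kill the manually started (tar.gz) node; replace 12345 with its actual PID
kill -9 12345

# stop or restart the node managed by systemd
sudo systemctl stop elasticsearch.service
sudo systemctl restart elasticsearch.service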

Sorry:

ubuntu   10512  6875  0 16:43 pts/0 00:02:09 /home/ubuntu/Downloads/elasticsearch-7.2.0/jdk/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch-4654280176806404706 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=data -XX:ErrorFile=logs/hs_err_pid%p.log -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m -Djava.locale.providers=COMPAT -Dio.netty.allocator.type=unpooled -XX:MaxDirectMemorySize=536870912 -Des.path.home=/home/ubuntu/Downloads/elasticsearch-7.2.0 -Des.path.conf=/home/ubuntu/Downloads/elasticsearch-7.2.0/config -Des.distribution.flavor=default -Des.distribution.type=tar -Des.bundled_jdk=true -cp /home/ubuntu/Downloads/elasticsearch-7.2.0/lib/* org.elasticsearch.bootstrap.Elasticsearch
ubuntu   10634 10512  0 16:43 pts/0 00:00:00 /home/ubuntu/Downloads/elasticsearch-7.2.0/modules/x-pack-ml/platform/linux-x86_64/bin/controller
elastic+ 15282     1  1 17:24 ?     00:02:54 /usr/share/elasticsearch/jdk/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch-17667327696129157755 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m -Djava.locale.providers=COMPAT -Dio.netty.allocator.type=unpooled -XX:MaxDirectMemorySize=536870912 -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -Des.distribution.flavor=default -Des.distribution.type=deb -Des.bundled_jdk=true -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
elastic+ 15426 15282  0 17:24 ?     00:00:00 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
ubuntu   26003 23027  0 20:27 pts/2 00:00:00 grep --color=auto elasticsearch

OK, I will kill them, but then how can I start one?

Which one do you want to start?

I killed the second one with:
kill -10512
so I need to restart the first one, I think.

So I killed one, did sudo systemctl restart elasticsearch.service, and now when I run GET /_cat/nodes?v I get:

ip        heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
127.0.0.1           15          83   8    0.67    1.20     0.98 mdi       *      ubuntu-TUF-Gaming-FX505GE-FX505GE

but I still get:

{
  "error": {
    "root_cause": [
      {
        "type": "parse_exception",
        "reason": "No processor type exists with name [attachment]",
        "processor_type": "attachment"
      }
    ],
    "type": "parse_exception",
    "reason": "No processor type exists with name [attachment]",
    "processor_type": "attachment"
  },
  "status": 400
}

when I do:

PUT _ingest/pipeline/attachment
{
  "description" : "Extract attachment information",
  "processors" : [
    {
      "attachment" : {
        "field" : "data",
        "indexed_chars" : 11
      }
    }
  ]
}

PS: I removed and reinstalled the Ingest Attachment Processor plugin!

What does the cat plugins API return?

I uninstalled Elasticsearch and reinstalled it, and everything works fine now. I understand now that at first I had installed and run it twice: as a service and with ./bin/elasticsearch. That is why I had two nodes. Thanks!
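For anyone landing on this thread with the same error, the short version (a sketch, assuming the deb/systemd installation is the node you keep): the plugin has to be installed with the elasticsearch-plugin binary that belongs to the node that is actually running, and that node has to be restarted afterwards.

# install the plugin into the packaged installation
sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-attachment

# restart the node so the plugin is loaded
sudo systemctl restart elasticsearch.service

# confirm the running node now reports the plugin
curl 'localhost:9200/_cat/plugins?v'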