Multiple Logstash instances

Hi,
I am trying to run multiple instances of Logstash using multiple pipelines.
- pipeline.id: product_info
  path.config: "/home/ubuntu/dev/V4QaShop/logstash/products.conf"
  #pipeline.workers: 3
  path.data: "/home/ubuntu/dev/V4QaShop/logstash/product"
- pipeline.id: product_detail
  path.config: "/home/ubuntu/dev/V4QaShop/logstash/products_detail.conf"
  #queue.type: persisted
  path.data: "/home/ubuntu/dev/V4QaShop/logstash/productDetail"

After running Logstash, I get the following error:

Logstash could not be started because there is already another instance using the configured data directory. If you wish to run multiple instances, you must change the "path.data" setting.
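For context, the path.data in this message is the instance-level data directory, normally set in logstash.yml or with --path.data. A minimal sketch for a hypothetical second instance, using a made-up directory:

# logstash.yml for a hypothetical second Logstash instance (directory is illustrative)
path.data: /var/lib/logstash-instance2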

No other instance is running. Can anyone help me?

thanks

You should not be setting path.data in pipelines.yml. The configuration options supported in pipelines.yml are (see the sketch after this list):

  • config.debug
  • config.support_escapes
  • config.reload.automatic
  • config.reload.interval
  • config.string
  • dead_letter_queue.enable
  • dead_letter_queue.max_bytes
  • metric.collect
  • pipeline.java_execution
  • pipeline.plugin_classloaders
  • path.config
  • path.dead_letter_queue
  • path.queue
  • pipeline.batch.delay
  • pipeline.batch.size
  • pipeline.id
  • pipeline.reloadable
  • pipeline.system
  • pipeline.workers
  • queue.checkpoint.acks
  • queue.checkpoint.interval
  • queue.checkpoint.writes
  • queue.checkpoint.retry
  • queue.drain
  • queue.max_bytes
  • queue.max_events
  • queue.page_capacity
  • queue.type
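Based on that list, a minimal sketch of the pipelines.yml from the original post with the unsupported path.data lines removed (paths copied from the post above; an illustration, not a tested configuration):

- pipeline.id: product_info
  path.config: "/home/ubuntu/dev/V4QaShop/logstash/products.conf"
  #pipeline.workers: 3
- pipeline.id: product_detail
  path.config: "/home/ubuntu/dev/V4QaShop/logstash/products_detail.conf"
  #queue.type: persisted

path.data remains an instance-level setting that belongs in logstash.yml (or on the command line), not in pipelines.yml.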

Rob


That is really helpful. Even better if you can share what those options mean; any link would be helpful.

Haris

Just a google search away... https://www.elastic.co/guide/en/logstash/7.6/logstash-settings-file.html

The following is my config file:

input {
  jdbc {
    jdbc_driver_library => "/usr/share/java/mysql.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3309/v4qadb"
    jdbc_user => "v4qashop"
    jdbc_password => "V4admin09&#!"
    jdbc_paging_enabled => true
    jdbc_page_size => 10000
    tracking_column => "pro_updated_date"
    use_column_value => true
    # statement => "select * from es_product_detail_info where pro_updated_date >= :sql_last_value"
    schedule => "*/10 * * * *"
    statement => "SELECT
      item_id, product_id, item_color_id, item_size_id, item_sku, item_image, item_price, brand_id, control_number, description,
      display_product_color_id, item_details, item_name, gender, pro_updated_date, shopify_product_id, image_product_name, image_sort_order,
      image_title, image_id, updated_at, document_id, item_color_image, item_v4_size_title, cat_name, cat_shopify_collection_id
      from es_product_info where pro_updated_date >= :sql_last_value"
  }
}
output {
  elasticsearch {
    document_id => "%{document_id}"
    document_type => "doc"
    index => "product_detail_info"
    hosts => ["http://localhost:9200"]
  }
  stdout {
    codec => rubydebug
  }
}

Is there a limit on the free version of ELK? The total record count is 547242, but it stops at 270034. Also, I am running multiple pipelines; the YAML file is:

- pipeline.id: morph_styles
  path.config: "/home/ubuntu/dev/V4QaShop/logstash/morph_styles.conf"
  pipeline.workers: 6
  #batch:
  size: 50000
  delay: ${BATCH_DELAY:50}
- pipeline.id: demo_morph_styles
  path.config: "/home/ubuntu/dev/V4QaShop/logstash/demo_morph_styles.conf"
  pipeline.workers: 6
  #batch:
  size: 50000
  delay: ${BATCH_DELAY:50}
- pipeline.id: product_detail_info
  #batch:
  size: 20000
  delay: ${BATCH_DELAY:50}
  path.config: "/home/ubuntu/dev/V4QaShop/logstash/products_detail.conf"
  queue.type: persisted
  #pipline.workers: 5

The other pipelines never get a chance to run. Why?
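For reference, a minimal sketch of the batch settings written with the flat pipeline.batch.* keys from the supported-options list earlier in the thread (values are illustrative, not a confirmed fix for the stalled pipelines):

- pipeline.id: morph_styles
  path.config: "/home/ubuntu/dev/V4QaShop/logstash/morph_styles.conf"
  pipeline.workers: 6
  pipeline.batch.size: 125    # illustrative value
  pipeline.batch.delay: 50    # illustrative value
- pipeline.id: product_detail_info
  path.config: "/home/ubuntu/dev/V4QaShop/logstash/products_detail.conf"
  queue.type: persisted
  pipeline.batch.size: 125    # illustrative value
  pipeline.batch.delay: 50    # illustrative value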

Haris

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.