Parse JSON from inside message

I'm trying to parse JSON from inside the message field. I used this in Filebeat:

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

and I used these labels in docker-compose:

labels:
  co.elastic.logs/json.keys_under_root: true
  co.elastic.logs/json.overwrite_keys: true
  co.elastic.logs/json.add_error_key: true
  co.elastic.logs/json.expand_keys: true

but the message still isn't being parsed:

  "message": [
      "{\"@timestamp\":\"2021-03-15T13:08:43.613Z\",\"body\":{\"actionid\":\"1989\",\"cameraid\":\"2\",\"cartid\":\"5e14b2e301c4a4130000004e\",\"data\":{\"images\":[\"/usr/data/cart-data/5e14b2e301c4a4130000004e/1234567890/1989/2/pipeline/1615813722248.jpg\",\"/usr/data/cart-data/5e14b2e301c4a4130000004e/1234567890/1989/2/pipeline/1615813722279.jpg\",\"/usr/data/cart-data/5e14b2e301c4a4130000004e/1234567890/1989/2/pipeline/1615813722312.jpg\",\"/usr/data/cart-data/5e14b2e301c4a4130000004e/1234567890/1989/2/pipeline/1615813722345.jpg\",\"/usr/data/cart-data/5e14b2e301c4a4130000004e/1234567890/1989/2/pipeline/1615813722381.jpg\",\"/usr/data/cart-data/5e14b2e301c4a4130000004e/1234567890/1989/2/pipeline/1615813722416.jpg\",\"/usr/data/cart-data/5e14b2e301c4a4130000004e/1234567890/1989/2/pipeline/1615813722449.jpg\",\"/usr/data/cart-data/5e14b2e301c4a4130000004e/1234567890/1989/2/pipeline/1615813722479.jpg\",\"/usr/data/cart-data/5e14b2e301c4a4130000004e/1234567890/1989/2/pipeline/1615813722515.jpg\",\"/usr/data/cart-data/5e14b2e301c4a4130000004e/1234567890/1989/2/pipeline/1615813722553.jpg\"]},\"@kibana-highlighted-field@event@/kibana-highlighted-field@\":\"/gpu_server.pipeline/send_data\",\"eventtype\":\"pipeline\",\"handle\":3704418958442821600,\"journeyid\":\"1234567890\"},\"ecs.version\":\"1.6.0\",\"log-@kibana-highlighted-field@event@/kibana-highlighted-field@-type\":\"@kibana-highlighted-field@aic@/kibana-highlighted-field@-@kibana-highlighted-field@event@/kibana-highlighted-field@\",\"log.level\":\"info\",\"message\":\"\"}"

I have a Docker container that uses the ECS logging library for Go and dumps logs to the container's stdout. I want Filebeat to pick these messages up and ingest them into ELK, but I can't manage to get them parsed as JSON instead of a string.

Hmmm, what you have should work.

Could you try enclosing the label keys in double quotes, like so:

labels:
  "co.elastic.logs/json.keys_under_root": true
  "co.elastic.logs/json.overwrite_keys": true
  "co.elastic.logs/json.add_error_key": true
  "co.elastic.logs/json.expand_keys": true

If that doesn't work, could you try using the array form for the labels like so:

labels:
  - "co.elastic.logs/json.keys_under_root=true"
  - "co.elastic.logs/json.overwrite_keys=true"
  - "co.elastic.logs/json.add_error_key=true"
  - "co.elastic.logs/json.expand_keys=true"

Thanks,

Shaunak

Fixed it! The problem was that the logs were partly JSON and partly not; I talked to the developer to make all the logs JSON.
Now I have two problems.

As you can see, there is only one field that doesn't get parsed:


 "body": {
      "data": {
        "bar_codes": [
          {
            "bar_code_information": [
              {
                "score": 0,
                "bar_code": "NoAction"
              }
            ]
          },
          {
            "bar_code_information": [
              {
                "bar_code": "ProductInsertion",
                "score": 0
              }
            ],
            "frame_id": 1
          }
        ],
        "message": "OK"
      },

Is there a way to make Filebeat parse this data as fields as well?

The second issue is that I'm getting duplicate entries: each log line is ingested once parsed as JSON and a second time as a plain string.


{
  "_index": "filebeat-7.11.1-2021.03.16",
  "_type": "_doc",
  "_id": "GcbwOXgBJMHPPsfR4NM1",
  "_version": 1,
  "_score": 0,
  "_source": {
    "@timestamp": "2021-03-16T07:28:58.577Z",
    "fields": {
      "env": "dev"
    },
    "container": {
      "id": "ae0c9e7a707cb7df5da7c972404dc748072bf9ae47caaea4550969a206a1ca68",
      "image": {
        "name": "tracx/grpc-proxy:2.0.15.9fc4db8"
      },
      "name": "grpc_proxy_pr",
      "labels": {
        "com_docker_compose_project_working_dir": "/home/avraham/grpc-proxy",
        "co_elastic_logs/json_overwrite_keys": "True",
        "com_docker_compose_oneoff": "False",
        "co_elastic_logs/json_expand_keys": "True",
        "com_docker_compose_container-number": "1",
        "com_docker_compose_service": "grpc_proxy_pr",
        "com_docker_compose_version": "1.26.2",
        "co_elastic_logs/json_keys_under_root": "True",
        "com_docker_compose_config-hash": "a0bc966fca142f4dced4b88c4a0a403a3b3f5932b4bc78d6260286c249163202",
        "com_docker_compose_project": "grpc-proxy",
        "com_docker_compose_project_config_files": "docker-compose.yml",
        "co_elastic_logs/json_add_error_key": "True"
      }
    },
    "stream": "stdout",
    "message": "{\"@timestamp\":\"2021-03-16T07:28:58.577Z\",\"body\":{\"actionid\":\"2274\",\"cameraid\":\"3\",\"cartid\":\"5e14b2e301c4a4130000004e\",\"data\":{\"images\":[\"/usr/data/cart-data/pipeline/5e14b2e301c4a4130000004e/123456789/2274/3/1615879737797.jpg\",\"/usr/data/cart-data/pipeline/5e14b2e301c4a4130000004e/123456789/2274/3/1615879737847.jpg\",\"/usr/data/cart-data/pipeline/5e14b2e301c4a4130000004e/123456789/2274/3/1615879737897.jpg\",\"/usr/data/cart-data/pipeline/5e14b2e301c4a4130000004e/123456789/2274/3/1615879737948.jpg\",\"/usr/data/cart-data/pipeline/5e14b2e301c4a4130000004e/123456789/2274/3/1615879737997.jpg\"]},\"event\":\"/gpu_server.pipeline/send_data\",\"eventtype\":\"pipeline\",\"handle\":16839657125260286000,\"journeyid\":\"123456789\"},\"ecs.version\":\"1.6.0\",\"log-event-type\":\"aic-event\",\"log.level\":\"info\",\"message\":\"\"}",
    "input": {
      "type": "docker"
    },
    "ecs": {
      "version": "1.6.0"
    },
    "host": {
      "architecture": "x86_64",
      "os": {
        "name": "Ubuntu",
        "kernel": "4.15.0-112-generic",
        "codename": "bionic",
        "platform": "ubuntu",
        "version": "18.04.4 LTS (Bionic Beaver)",
        "family": "debian"
      },
      "id": "fa5d375897824427adaa775265d7e126",
      "containerized": false,
      "name": "opdev",
      "ip": [
        "192.168.4.2",
        "fe80::b67a:f1ff:fe33:5fee",
        "16.1.15.2",
        "fe80::e84c:b6ff:fe3e:a48d",
        "192.168.5.51",
        "192.168.122.1",
        "172.21.0.1",
        "172.19.0.1",
        "172.23.0.1",
        "172.18.0.1",
        "fe80::42:a6ff:fec9:dc10",
        "172.17.0.1",
        "fe80::42:dff:fef0:940d",
        "172.31.0.1",
        "fe80::3cc8:fcff:fefa:dce4",
        "fe80::602d:fdff:febe:8da8",
        "fe80::6019:b1ff:feb8:8345",
        "fe80::8874:fbff:feaf:fb74",
        "fe80::3866:51ff:fec7:5a89",
        "fe80::3c27:cff:fe7f:8b25",
        "fe80::40ac:4cff:feac:2e9d",
        "fe80::d086:91ff:feb3:bd40",
        "fe80::e883:f6ff:fe15:911e",
        "fe80::c4e6:2bff:fec2:dab1",
        "fe80::10c1:66ff:fecf:ff53",
        "fe80::a837:59ff:fe2a:be40",
        "fe80::64ef:3fff:fe38:adbd",
        "fe80::34c9:7dff:fe27:7a35",
        "fe80::605b:54ff:fe36:3457",
        "fe80::9c4c:80ff:fe92:7dce",
        "fe80::18a7:96ff:fe80:de2d",
        "fe80::c87:1dff:fe9a:aa0d",
        "fe80::bc32:92ff:fe92:faab",
        "fe80::e8e2:aeff:fef4:defb",
        "fe80::98b9:feff:fe49:dcff",
        "192.168.80.1",
        "fe80::42:8eff:fe9f:14ac",
        "fe80::8ba:9fff:fe9f:9cc5",
        "fe80::143a:8dff:fe69:2675",
        "fe80::e0db:20ff:fe2a:6fe",
        "fe80::a821:5bff:fe59:5364",
        "fe80::b002:8eff:fe66:11a0",
        "fe80::947b:1ff:fe53:cf19",
        "fe80::8ea:e2ff:fe6b:4a46",
        "fe80::58d3:7eff:fed1:b04d",
        "fe80::7897:7bff:fe08:546d",
        "fe80::5c0c:c6ff:fe56:59f2",
        "fe80::588f:a4ff:fe7a:9571",
        "fe80::7cf2:6bff:fe11:7561",
        "fe80::5c4d:a8ff:feb7:8317",
        "fe80::20bd:51ff:fe96:f74d",
        "fe80::906b:6dff:fea1:8b8d",
        "fe80::dc01:44ff:fea6:a57e",
        "fe80::ecf1:d2ff:fe87:f1d2",
        "fe80::a844:11ff:fe3d:33fd",
        "fe80::a82d:8bff:fe0f:f78c",
        "fe80::8099:77ff:fe07:f65b"
      ],
      "mac": [
        "b4:7a:f1:33:5f:ee",
        "b4:7a:f1:33:5f:ef",
        "b4:7a:f1:33:5f:f0",
        "b4:7a:f1:33:5f:f1",
        "ea:4c:b6:3e:a4:8d",
        "4e:b2:e3:e2:ff:35",
        "52:54:00:22:5e:cf",
        "52:54:00:22:5e:cf",
        "02:42:f5:97:23:39",
        "02:42:c5:69:06:7a",
        "02:42:23:62:63:6e",
        "02:42:a6:c9:dc:10",
        "02:42:0d:f0:94:0d",
        "02:42:6e:bb:3d:ee",
        "3e:c8:fc:fa:dc:e4",
        "62:2d:fd:be:8d:a8",
        "62:19:b1:b8:83:45",
        "8a:74:fb:af:fb:74",
        "3a:66:51:c7:5a:89",
        "3e:27:0c:7f:8b:25",
        "42:ac:4c:ac:2e:9d",
        "d2:86:91:b3:bd:40",
        "ea:83:f6:15:91:1e",
        "c6:e6:2b:c2:da:b1",
        "12:c1:66:cf:ff:53",
        "aa:37:59:2a:be:40",
        "66:ef:3f:38:ad:bd",
        "36:c9:7d:27:7a:35",
        "62:5b:54:36:34:57",
        "9e:4c:80:92:7d:ce",
        "1a:a7:96:80:de:2d",
        "0e:87:1d:9a:aa:0d",
        "be:32:92:92:fa:ab",
        "ea:e2:ae:f4:de:fb",
        "9a:b9:fe:49:dc:ff",
        "02:42:8e:9f:14:ac",
        "0a:ba:9f:9f:9c:c5",
        "16:3a:8d:69:26:75",
        "e2:db:20:2a:06:fe",
        "aa:21:5b:59:53:64",
        "b2:02:8e:66:11:a0",
        "96:7b:01:53:cf:19",
        "0a:ea:e2:6b:4a:46",
        "5a:d3:7e:d1:b0:4d",
        "7a:97:7b:08:54:6d",
        "5e:0c:c6:56:59:f2",
        "5a:8f:a4:7a:95:71",
        "7e:f2:6b:11:75:61",
        "5e:4d:a8:b7:83:17",
        "22:bd:51:96:f7:4d",
        "92:6b:6d:a1:8b:8d",
        "de:01:44:a6:a5:7e",
        "ee:f1:d2:87:f1:d2",
        "aa:44:11:3d:33:fd",
        "aa:2d:8b:0f:f7:8c",
        "82:99:77:07:f6:5b"
      ],
      "hostname": "opdev"
    },
    "agent": {
      "ephemeral_id": "7e8dcd0e-d8e6-449d-9e50-e03f9537792b",
      "id": "548383b1-c613-433d-ace3-ce750bd3af07",
      "name": "opdev",
      "type": "filebeat",
      "version": "7.11.1",
      "hostname": "opdev"
    },
    "log": {
      "offset": 123863,
      "file": {
        "path": "/var/lib/docker/containers/ae0c9e7a707cb7df5da7c972404dc748072bf9ae47caaea4550969a206a1ca68/ae0c9e7a707cb7df5da7c972404dc748072bf9ae47caaea4550969a206a1ca68-json.log"
      }
    },
    "tags": [
      "opdev",
      "haifa_office"
    ]
  },
  "fields": {
    "@timestamp": [
      "2021-03-16T07:28:58.577Z"
    ]
  },
  "highlight": {
    "agent.name": [
      "@kibana-highlighted-field@opdev@/kibana-highlighted-field@"
    ],
    "message": [
      "{\"@timestamp\":\"2021-03-16T07:28:58.577Z\",\"body\":{\"actionid\":\"2274\",\"cameraid\":\"3\",\"cartid\":\"5e14b2e301c4a4130000004e\",\"data\":{\"images\":[\"/usr/data/cart-data/pipeline/5e14b2e301c4a4130000004e/123456789/2274/3/1615879737797.jpg\",\"/usr/data/cart-data/pipeline/5e14b2e301c4a4130000004e/123456789/2274/3/1615879737847.jpg\",\"/usr/data/cart-data/pipeline/5e14b2e301c4a4130000004e/123456789/2274/3/1615879737897.jpg\",\"/usr/data/cart-data/pipeline/5e14b2e301c4a4130000004e/123456789/2274/3/1615879737948.jpg\",\"/usr/data/cart-data/pipeline/5e14b2e301c4a4130000004e/123456789/2274/3/1615879737997.jpg\"]},\"@kibana-highlighted-field@event@/kibana-highlighted-field@\":\"/gpu_server.pipeline/send_data\",\"eventtype\":\"pipeline\",\"handle\":16839657125260286000,\"journeyid\":\"123456789\"},\"ecs.version\":\"1.6.0\",\"log-@kibana-highlighted-field@event@/kibana-highlighted-field@-type\":\"@kibana-highlighted-field@aic@/kibana-highlighted-field@-@kibana-highlighted-field@event@/kibana-highlighted-field@\",\"log.level\":\"info\",\"message\":\"\"}"
    ]
  },
  "sort": [
    0,
    1615879738577
  ]
}

As you can see, everything is duplicated now. Is there a way to automatically exclude this container from the Docker input that scans all containers, i.e. the one that uses:

containers.ids:
  - '*'

Maybe by adding another label to that specific container?

For the body.data.bar_codes parsing issue, you may want to look into using the decode_json_fields processor with the process_array setting set to true.
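
For reference, a minimal sketch of what that processor block could look like in filebeat.yml (the field name below is just an example for your case; point it at wherever the JSON string ends up):

processors:
  - decode_json_fields:
      fields: ["message"]       # field(s) that contain the JSON string
      process_array: true       # also decode JSON found inside arrays
      max_depth: 2
      target: ""                # merge the decoded keys into the event root
      overwrite_keys: true
      add_error_key: true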

For the duplicate string entry, I assume you are referring to the message field? If you want you can remove this field using the drop_fields processor.
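
A minimal sketch, assuming the duplicate you want to get rid of is the raw message string (note this drops the field from every event the processor sees):

processors:
  - drop_fields:
      fields: ["message"]       # remove the raw JSON string once it has been decoded
      ignore_missing: true      # don't error on events that no longer carry the field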

I didn't quite understand your question about excluding the container. Maybe post your complete filebeat.yml configuration (masking any sensitive information)?

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      templates:
        - condition:
            contains:
              docker.container.image: grpc_proxy
          config:
              log:
                input:
                  type: log
                  paths:
                    - /var/lib/docker/containers/${data.docker.container.id}/*.log
                  exclude_lines: ["^\\s+[\\-`('.|_]"]  # drop asciiart lines


filebeat.inputs:


#docker logs
- type: docker

  containers.ids:
    - '*'
  

#   exclude_files: ['\.gz$']



# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log


  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- /var/log/tracxpoint/grpc-proxy/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after




# filestream is an experimental input. It is going to replace log input in the future.
- type: filestream

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/tracxpoint/grpc-proxy/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1
json:
#-   message_key: "grpc_proxy"
- keys_under_root: true
- overwrite_keys: true
- add_error_key: false
- expand_keys: true



#    json.keys_under_root: true
#    json.overwrite_keys: true
#    json.add_error_key: true
#    json.expand_keys: true


# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

harvester_buffer_size: 32768

max_bytes: 31457280


# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
name: opdev

# The tags of the shipper are included in their own field with each
# transaction published.
tags: ["opdev", "haifa_office"]

# Optional fields that you can specify to add additional information to the
# output.
fields:
  env: dev

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "https://elk.txp.link:443/_plugin/kibana"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: 

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: 
  password:

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

setup.ilm.enabled: false



Found the solution! I added a unique ID to the JSON and added exclude_lines matching it.
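
Roughly like this, as a sketch of the idea only (the marker field and value below are made up, not my real ones): the application writes a fixed marker into every JSON log line, and the catch-all docker input skips those lines, so only the hints-based autodiscover config (which parses the JSON) ingests them.

#docker logs
- type: docker
  containers.ids:
    - '*'
  # example marker only; skip the JSON lines here to avoid the string duplicates
  exclude_lines: ['"log_uid":"grpc-proxy"']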