Elastic Stack certificate issues and "Kibana is not ready yet"

Hello everyone, I am writing this topic because I have an issue with my Elastic Stack. It started the day the certificates reached their expiration date. The following message appeared in Kibana's interface:

"Kibana is not ready yet"

So, I investigated on the 3 servers (Logstash, Elasticsearch, Kibana) and found that the certificates had expired. I decided to replace them, but ELK is still down.

This is what I did; afterwards I will show you the syslog errors that are still present and what I tried for them.
The Elastic Stack was implemented by another employee who no longer works here, and I have only been working on ELK for 3 months, so I probably don't know everything about it.

First of all, I noticed that there are 3 certificates and 3 different keys. That makes sense: the communication between servers has to be encrypted using the X-Pack security features, I think ...
But something about the formats bothered me.

SSL Configuration:

LOGSTASH --> CA.crt + logstash.crt + logstash.p8 (a PKCS8 conversion of the .key file, I think)
ELASTICSEARCH --> CA.cer + elasticsearch.pem + elasticsearch.key
KIBANA --> CA.cer + kibana.pem + kibana.key

Why are there both .crt and .pem certificates? From what I read online there is no difference between them, and my internal CA gives me only .cer and .pem files. So, from this point I decided to start the certificate renewal.

I used the commands below:

DOMAIN = my Domain Name

openssl req -key elasticsearch.key -new -out elasticsearch.csr -subj /CN=elasticsearch.DOMAIN -reqexts SAN -extensions SAN -config <(cat /usr/lib/ssl/openssl.cnf <(printf '[SAN]\nsubjectAltName=DNS:elasticsearch.DOMAIN')) -sha256
openssl req -key kibana.key -new -out kibana.csr -subj /CN=kibana.DOMAIN -reqexts SAN -extensions SAN -config <(cat /usr/lib/ssl/openssl.cnf <(printf '[SAN]\nsubjectAltName=DNS:kibana.DOMAIN')) -sha256
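Before sending the CSRs to the CA, it may be worth checking that the SAN actually made it into the request; these are standard openssl options, so something like this should work:

openssl req -in elasticsearch.csr -noout -text | grep -A1 "Subject Alternative Name"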

For Logstash, it asked me for a password, I think because of the PKCS8 format. I didn't have the password, so I generated a new private key:

openssl genrsa -aes256 -out logstash.key 4096
openssl req -key logstash.key -new -out logstash.csr -subj /CN=logstash.DOMAIN -reqexts SAN -extensions SAN -config <(cat /usr/lib/ssl/openssl.cnf <(printf '[SAN]\nsubjectAltName=DNS:logstash.DOMAIN')) -sha256

Now I have my three .csr files to request new certificates from the internal CA.
After that, I got 3 certificates from the CA and replaced the old ones with the new ones. The new certificates are in .pem format because I don't know how to get a .crt certificate. Also, I can't read the .cer format to do a check, so I picked the certificates in .pem format.
Obviously, I updated all the paths in the configuration files with the new certificate names.

Then I used the following command to convert Logstash's private key into the required PKCS8 format:

openssl pkcs8 -in logstash.key -topk8 -out logstash.pkcs8.key
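As a sanity check (just reading the PEM banner, nothing more), the first line of the output file tells you whether the converted key ended up encrypted:

head -1 logstash.pkcs8.key
# -----BEGIN PRIVATE KEY-----            --> PKCS8, unencrypted
# -----BEGIN ENCRYPTED PRIVATE KEY-----  --> PKCS8, encrypted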

I restarted the services, but it's still not working.

In fact, I think there are 2 issues.

When I go to https://kibana.DOMAIN:5601, I still get "Kibana is not ready yet". However, the certificate is valid. The CA uses a weak signature algorithm (SHA-1), but I think I should still be able to reach the Kibana interface despite this.
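To double-check what the CA signed with, the signature algorithm can be read straight out of the certificate (standard openssl x509 usage, with my file name):

openssl x509 -in kibana.pem -noout -text | grep "Signature Algorithm"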

Now, I will show you what I get in /var/log/syslog for the 3 servers.

FROM LOGSTASH /var/log/syslog:

Feb 17 15:31:38 logstash logstash[22339]: [2021-02-17T15:31:38,646][ERROR][logstash.inputs.beats    ] Looks like you either have a bad certificate, an invalid key or your private key was not in PKCS8 format.
Feb 17 15:31:38 logstash logstash[22339]: [2021-02-17T15:31:38,646][WARN ][io.netty.channel.ChannelInitializer] Failed to initialize a channel. Closing: [id: 0xfea594e7, L:/10.56.245.132:5044 - R:/10.56.245.14:20534]
Feb 17 15:31:38 logstash logstash[22339]: java.lang.IllegalArgumentException: File does not contain valid private key: /etc/logstash/certs/logstash.pkcs8.key
ecurity.InvalidKeyException: IOException : DER input, Integer tag error
Feb 17 15:31:38 logstash logstash[22339]: #011at sun.security.pkcs.PKCS8Key.decode(PKCS8Key.java:351) ~[?:1.8.0_212]
Feb 17 15:31:38 logstash logstash[22339]: #011at java.security.KeyFactory.generatePrivate(KeyFactory.java:372) ~[?:1.8.0_212]

FROM ELASTICSEARCH /var/log/syslog:

Feb 17 16:19:51 elasticsearch python3[32232]: INFO:elastalert:Queried rule hid from 2021-02-17 16:19 CET to 2021-02-17 16:19 CET: 0 / 0 hits
Feb 17 16:19:51 elasticsearch python3[32232]: /usr/local/lib/python3.5/dist-packages/urllib3/connectionpool.py:851: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
Feb 17 16:19:51 elasticsearch python3[32232]:   InsecureRequestWarning)
Feb 17 16:19:51 elasticsearch python3[32232]: /usr/local/lib/python3.5/dist-packages/urllib3/connectionpool.py:851: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
Feb 17 16:19:51 elasticsearch python3[32232]:   InsecureRequestWarning)

FROM KIBANA /var/log/syslog:

Feb 17 16:27:27 kibana kibana[5792]: {"type":"log","@timestamp":"2021-02-17T15:27:27Z","tags":["warning","elasticsearch","admin"],"pid":5792,"message":"No living connections"}
Feb 17 16:27:27 kibana kibana[5792]: {"type":"log","@timestamp":"2021-02-17T15:27:27Z","tags":["warning","task_manager"],"pid":5792,"message":"PollError No Living connections"}
Feb 17 16:27:29 kibana systemd-timesyncd[408]: Timed out waiting for reply from 212.83.158.83:123 (3.debian.pool.ntp.org).
Feb 17 16:27:30 kibana kibana[5792]: {"type":"log","@timestamp":"2021-02-17T15:27:30Z","tags":["warning","elasticsearch","admin"],"pid":5792,"message":"Unable to revive connection: https://10.56.245.133:9200/"}
Feb 17 16:27:34 kibana kibana[5792]: {"type":"log","@timestamp":"2021-02-17T15:27:34Z","tags":["license","warning","xpack"],"pid":5792,"message":"License information from the X-Pack plugin could not be obtained from Elasticsearch for the [data] cluster. Error: No Living connections"}

About the Logstash investigation:

Feb 18 10:37:36 logstash logstash[22339]: [2021-02-18T10:37:36,910][ERROR][logstash.inputs.beats    ] Looks like you either have a bad certificate, an invalid key or your private key was not in PKCS8 format.

I followed this ERROR to find the issue, but I really don't know why it appears, because I did the same thing for the two other servers (Kibana and Elasticsearch) and there are no errors like this one for them.

About the Elasticsearch investigation:

Feb 17 16:19:55 elasticsearch python3[32232]: /usr/local/lib/python3.5/dist-packages/urllib3/connectionpool.py:851: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
Feb 17 16:19:55 elasticsearch python3[32232]:   InsecureRequestWarning)

This is interesting. I looked at the following file: /usr/local/lib/python3.5/dist-packages/urllib3/connectionpool.py; the code around line 851 is shown below.

def _validate_conn(self, conn):
    """
    Called right before a request is made, after the socket is created.
    """
    super(HTTPSConnectionPool, self)._validate_conn(conn)

    # Force connect early to allow us to validate the connection.
    if not getattr(conn, 'sock', None):  # AppEngine might not have  `.sock`
        conn.connect()

    if not conn.is_verified:
        warnings.warn((
            'Unverified HTTPS request is being made. '
            'Adding certificate verification is strongly advised. See: '
            'https://urllib3.readthedocs.io/en/latest/advanced-usage.html'
            '#ssl-warnings'),
            InsecureRequestWarning)

The relevant guides:
https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
https://urllib3.readthedocs.io/en/latest/user-guide.html#ssl

According to these guides, I can silence the warning by disabling it or by commenting out the function, but neither solution is recommended. I think that maybe there is an issue with my internal CA certificate ...
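Rather than disabling the warning, a cleaner option (assuming these requests really come from ElastAlert, as the process name suggests) would be to point the client at the internal CA so urllib3 can verify the connection; if I remember correctly, ElastAlert's config.yaml accepts verify_certs and ca_certs settings for this. To check that the CA file itself can validate the endpoint, a manual request works (the path is an assumption based on my elasticsearch.yml further down):

curl --cacert /etc/elasticsearch/certs/CA.cer https://elasticsearch.DOMAIN:9200/ -u elastic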

About the Kibana investigation:

I followed this: https://discuss.elastic.co/t/kibana-server-inot-ready-yet/241217

root@kibana:/home/adminsys # curl -X GET elasticsearch:9200/
curl: (52) Empty reply from server
root@kibana:/home/adminsys # curl -X GET "https://elasticsearch:9200/" --key elasticsearch.pem  -k -u elastic
Enter host password for user 'elastic':
{
  "name" : "prod-data-1",
  "cluster_name" : "MY CLUSTER NAME",
  "cluster_uuid" : "_4AKCuu-SVSM4z-t6GZQ",
  "version" : {
    "number" : "7.2.0",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "508c38a",
    "build_date" : "2019-06-20T15:54:18.811730Z",
    "build_snapshot" : false,
    "lucene_version" : "8.0.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"

So, Elasticsearch is up and ready to accept connections.

root@kibana:/home/adminsys # curl -v https://elasticsearch:9200/
*   Trying 10.56.245.133...
* TCP_NODELAY set
* Connected to elasticsearch (10.56.245.133) port 9200 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS alert, Server hello (2):
* SSL certificate problem: self signed certificate in certificate chain
* Curl_http_done: called premature == 1
* stopped the pause stream!
* Closing connection 0
curl: (60) SSL certificate problem: self signed certificate in certificate chain
More details here: https://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle"
 of Certificate Authority (CA) public keys (CA certs). If the default
 bundle file isn't adequate, you can specify an alternate file
 using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
 the bundle, the certificate verification probably failed due to a
 problem with the certificate (it might be expired, or the name might
 not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
 the -k (or --insecure) option.
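That failure is expected, since curl only knows the system CA bundle by default. Pointing it at the internal CA (path assumed from my Kibana config further down) should let verification succeed without -k:

curl --cacert /etc/kibana/certs/private/CA.cer https://elasticsearch.DOMAIN:9200/ -u elastic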

The following configurations may help; I'm showing only the uncommented lines:

LOGSTASH.yml

node.name: logstash
path.data: /var/lib/logstash
config.reload.automatic: true
config.reload.interval: 60s
queue.type: persisted
queue.page_capacity: 64mb
queue.max_events: 0
queue.max_bytes: 6gb
queue.checkpoint.writes: 1024

path.logs: /var/log/logstash

ELASTICSEARCH.yml

cluster.name: MY CLUSTER NAME
node.name: prod-data-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 10.56.245.133
discovery.seed_hosts: ["10.56.245.133"]
cluster.initial_master_nodes: ["prod-data-1"]
xpack.security.enabled: true
xpack.security.authc.accept_default_password: false
xpack.security.authc.password_hashing.algorithm: pbkdf2

xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: none
xpack.security.transport.ssl.key: certs/elasticsearch.key
xpack.security.transport.ssl.certificate: certs/elasticsearch.pem
xpack.security.transport.ssl.certificate_authorities: [ "certs/CA.cer" ]

xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key:  certs/elasticsearch.key
xpack.security.http.ssl.certificate: certs/elasticsearch.pem
xpack.security.http.ssl.certificate_authorities: [ "certs/CA.cer" ]

KIBANA.yml

server.port: 5601

server.host: "10.56.245.134"

server.name: "kibana"

elasticsearch.hosts: ["https://10.56.245.133:9200"]

elasticsearch.username: "kibana"

server.ssl.enabled: true
server.ssl.certificate: /etc/kibana/certs/public/kibana.pem
server.ssl.key: /etc/kibana/certs/private/kibana.key

elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/certs/private/CA.cer" ]
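(A side note while pasting this: kibana.yml sets no elasticsearch.ssl.verificationMode, so Kibana uses the default, full, which verifies the certificate chain and also checks the hostname/IP in elasticsearch.hosts against the certificate's SANs. The setting exists in 7.x if it ever needs to be relaxed; shown here only for reference:)

#elasticsearch.ssl.verificationMode: full   # full | certificate | none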

LOGSTASH BEATS INPUT CONFIGURATION:

input {
  beats {
    port => 5044
    client_inactivity_timeout => 6000
    ssl => true
    ssl_certificate_authorities => ["/etc/logstash/certs/CA.cer"]
    ssl_certificate => "/etc/logstash/certs/logstash.pem"
    ssl_key => "/etc/logstash/certs/logstash.pkcs8.key"
    ssl_verify_mode => "peer"
  }
}

LOGSTASH OUTPUT FILE FOR EXCHANGE FILEBEAT AGENT:

output {
 if [agent][hostname] == "EXCHANGE1" or [agent][hostname] == "EXCHANGE2" {
        elasticsearch {
                hosts => ["https://elasticsearch.DOMAIN:9200"]
                index => "filebeat-exchange-%{+dd.MM.YYYY}"
                user => "${ES_USER}"
                password => "${ES_PASS}"
                ssl => true
                cacert => '/etc/logstash/certs/CA.cer'
                ilm_enabled => true
                ilm_rollover_alias => "filebeat-exchange"
                ilm_pattern => "000001"
                ilm_policy => "index_filebeat"
        }
  }
}

Sorry, I know I wrote a lot, but I've been stuck for a while and I really don't know why this happened.
Thanks for reading !

Certificate and key formats are fairly confusing.

PKCS#8 is a private key encoding. It's a way of describing a private key as a stream of bytes. It is technically not a file type, because it doesn't describe how to store that key in a file - just how to store it as bytes.

PEM is a file format. It's a way of writing a cryptographic object, in a particular encoding, into a file on disk.

It is possible (and common) to have a private key that is encoded as PKCS#8 and then written to a PEM file.
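For example, the first line of a PEM file tells you what is inside (these are the standard PEM banners):

-----BEGIN RSA PRIVATE KEY-----         (PKCS#1, "traditional" OpenSSL RSA key)
-----BEGIN PRIVATE KEY-----             (PKCS#8, unencrypted)
-----BEGIN ENCRYPTED PRIVATE KEY-----   (PKCS#8, encrypted)
-----BEGIN CERTIFICATE-----             (a certificate)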

The PEM file format can store a variety of different cryptographic objects. Among other object types, it can store both certificates and keys.
So, when you have elasticsearch.pem + elasticsearch.key, technically those are both PEM files. You can assume that elasticsearch.pem is a certificate written in PEM format and elasticsearch.key is a key (using some encoding), also written in PEM format.

Sometimes people use the .pem extension because they are PEM files, which is fair enough.
Other people use .cer or .crt because they are certificates, written as PEM files, which is also a fair choice.
The Elasticsearch team prefers to use the .crt and .key style of naming (because that emphasizes the main difference between the 2 files) but it doesn't matter.

It is highly likely that your .cer, .crt and .pem files all use the same encoding and format.
If you want to be consistent you can just rename the files.

PKCS#8 files can have a password, but don't always. In this case, it's really just that there is a password on that key (which is a good idea) and there isn't one on the Elasticsearch & Kibana keys.

Logstash has a copy of the password, so it's possible you could get it from there, but generating a new one is fine.

This error message is a little bit misleading.
It simply means that Logstash failed to read the key from the file. It can be triggered by a number of reasons that don't necessarily mean that the file is invalid.

In this case:

InvalidKeyException: IOException : DER input, Integer tag error

The most likely cause is that your private key has a password (that is, it is encrypted) and you didn't provide that password to Logstash. In that case the code that reads the private key in Logstash will assume it is not encrypted, and then fail because it's not encoded correctly.
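If you want to keep the password on the key, I believe the beats input also accepts an ssl_key_passphrase option (worth double-checking the docs for your version), roughly like this:

input {
  beats {
    ...
    ssl_key => "/etc/logstash/certs/logstash.pkcs8.key"
    ssl_key_passphrase => "${LOGSTASH_KEY_PASS}"   # hypothetical variable name
  }
}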

This is not Elasticsearch. This is coming from ElastAlert, and I really don't know enough about it to be able to help you with that.

Are there more messages above that? The most useful information appears to be missing.

First, thank you very much for your explanations about certificates. I was confused about the PKCS8 format and the encryption of the private key, but now it's clear.

The most likely cause is that your private key has a password (that is, it is encrypted) and you didn't provide that password to Logstash. In that case the code that reads the private key in Logstash will assume it is not encrypted, and then fail because it's not encoded correctly.

Thanks to you, I realized that I had used the following command:

openssl pkcs8 -in logstash.key -topk8 -out logstash.pkcs8.key

As you said, I didn't provide the password to Logstash, and in addition I had encrypted the private key, so the server couldn't read the file.

In the next command I specified the -nocrypt option, and it generated a new file with the private key in PKCS8 format without encryption:

openssl pkcs8 -in logstash.key -topk8 -nocrypt -out logstash.pkcs8.key
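To be sure the conversion worked, it can't hurt to verify that the new file is unencrypted and that the key still matches the certificate; comparing public key digests is a classic openssl check (logstash.pem is the new certificate):

head -1 logstash.pkcs8.key   # should now read -----BEGIN PRIVATE KEY-----
openssl x509 -in logstash.pem -noout -pubkey | openssl md5
openssl pkey -in logstash.pkcs8.key -pubout | openssl md5   # the two digests must match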

Note: thanks to this message too: Settings SSL/TLS setup with PKCS8 keys - #2 by ikakavas

Now the certificate error has disappeared, but there is a new one:

Feb 19 14:37:23 logstash logstash[25980]: [2021-02-19T14:37:23,317][INFO ][org.logstash.beats.BeatsHandler] [local: 0.0.0.0:5044, remote: 10.56.244.177:45616] Handling exception: javax.net.ssl.SSLHandshakeException: error:10000412:SSL routines:OPENSSL_internal:SSLV3_ALERT_BAD_CERTIFICATE

For this one I found a topic with a response from you --> Logstash 7.5 with SSL giving SSLV3_ALERT_BAD_CERTIFICATE - #3 by TimV

And I saw that 10.56.244.177 is my Suricata IDS server, whose certificate had expired too. So, Logstash was telling Suricata "I don't trust your certificate". I replaced that certificate, and there are no more SSL errors on Logstash.

About Elasticsearch and Kibana:

The few log lines shown above are repeated thousands of times; there are no other logs that could help. I will continue to investigate; maybe I will find something more explicit about the "Kibana is not ready yet" error!

Anyway, thank you a lot for the help with Logstash and the certificates.

I suspect that if you restart Kibana it will provide an explanation.
I think it is going to complain about a certificate (because that's what you changed), but it's hard to guess what the actual problem is without a specific error message.

Hello,

I restarted Kibana, but there are no more logs than those I have already shown.

I may have new leads. Currently I'm looking into the following errors:

root@elasticsearch:/home/adminsys # /usr/share/elasticsearch/bin/elasticsearch --version
Exception in thread "main" java.lang.RuntimeException: starting java failed with [1]
output:
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 8241020928 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /var/log/elasticsearch/hs_err_pid27340.log
error:
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000614cc0000, 8241020928, 0) failed; error='Not enough space' (errno=12)
        at org.elasticsearch.tools.launchers.JvmErgonomics.flagsFinal(JvmErgonomics.java:111)
        at org.elasticsearch.tools.launchers.JvmErgonomics.finalJvmOptions(JvmErgonomics.java:79)
        at org.elasticsearch.tools.launchers.JvmErgonomics.choose(JvmErgonomics.java:57)
        at org.elasticsearch.tools.launchers.JvmOptionsParser.main(JvmOptionsParser.java:89)

I was checking versions, and for Elasticsearch this error appears. By the way, my Elastic Stack components are all version 7.2.0.
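(This memory error looks unrelated to TLS: running the elasticsearch binary by hand spawns a second JVM that tries to reserve the same heap as the already-running service, about 8 GB judging by the mmap size, and the allocation fails. Assuming the default deb paths, this can be checked quickly:)

grep -E "^-Xm[sx]" /etc/elasticsearch/jvm.options   # configured heap size
free -h                                             # memory actually available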

Otherwise, the curl commands could be interesting:

I don't know why I get this when I don't specify the --key and -u options:

root@kibana:/etc/kibana/certs # curl -XGET https://s-elasticsearch:9200/
curl: (60) SSL certificate problem: self signed certificate in certificate chain
More details here: https://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle"
 of Certificate Authority (CA) public keys (CA certs). If the default
 bundle file isn't adequate, you can specify an alternate file
 using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
 the bundle, the certificate verification probably failed due to a
 problem with the certificate (it might be expired, or the name might
 not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
 the -k (or --insecure) option.

And if I specify the options, everything is OK:

root@kibana:/etc/kibana/certs # curl -X GET "https://elasticsearch:9200/" --key s-elasticsearch.pem  -k -u elastic
Enter host password for user 'elastic':
{
  "name" : "Prod",
  "cluster_name" : "DOMAIN",
  "cluster_uuid" : "_4AKCuu-SVSM4z-S-t6GZQ",
  "version" : {
    "number" : "7.2.0",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "508c38a",
    "build_date" : "2019-06-20T15:54:18.811730Z",
    "build_snapshot" : false,
    "lucene_version" : "8.0.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

I have some good news! I fixed the issue with the help of another person.
First, I want to thank you again for your help, Tim!

Now, this is how I fixed the issue:

First, I enabled verbose logging for Kibana in kibana.yml:

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
logging.verbose: true

Don't forget to restart the service:

systemctl restart kibana

Now there are more logs in my /var/log/syslog file.

Remember, I previously had only these two repeating lines:

Feb 22 13:10:19 kibana kibana[1341]: {"type":"log","@timestamp":"2021-02-22T12:10:19Z","tags":["warning","elasticsearch","data"],"pid":1341,"message":"No living connections"}
Feb 22 13:10:19 kibana kibana[1341]: {"type":"log","@timestamp":"2021-02-22T12:10:19Z","tags":["warning","elasticsearch","admin"],"pid":1341,"message":"Unable to revive connection: https://10.56.245.133:9200/"}

But with verbose mode I saw more information, and the most important line was the following:

Feb 22 13:10:18 kibana kibana[1341]: {"type":"log","@timestamp":"2021-02-22T12:10:18Z","tags":["error","elasticsearch","data"],"pid":1341,"message":"Request error, retrying\nGET https://10.56.245.133:9200/_xpack => Hostname/IP does not match certificate's altnames: IP: 10.56.245.133 is not in the cert's list: "}

So, this error is explicit: my hostname/IP doesn't match the certificate's altnames.
The person who helped me advised me to look at the information contained in Elasticsearch's certificate. We can do that with the following command:

openssl x509 -in elasticsearch.pem -text -noout

We can see that in the new certificate, only a "DNS" entry is provided in the X509v3 Subject Alternative Name field.
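You can pull just that block instead of reading the whole dump; the output below is illustrative, not my real certificate:

openssl x509 -in elasticsearch.pem -noout -text | grep -A1 "Subject Alternative Name"
#    X509v3 Subject Alternative Name:
#        DNS:elasticsearch.DOMAIN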

In the old certificate, by contrast, there was an "IP" entry in addition to the "DNS" one.

So, in the Kibana configuration file /etc/kibana/kibana.yml, the elasticsearch.hosts field was specified as 10.56.245.133, the IP address. This is why Kibana gave me a matching error.

To fix this, I put the DNS name in my kibana.yml file instead of the Elasticsearch host's IP address.
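Concretely, that means the hosts entry in kibana.yml now reads (with my real domain in place of DOMAIN):

elasticsearch.hosts: ["https://elasticsearch.DOMAIN:9200"]

An alternative would have been to re-issue the certificate with an IP entry in the SAN, e.g. subjectAltName=DNS:elasticsearch.DOMAIN,IP:10.56.245.133 in the CSR, but pointing Kibana at the DNS name was simpler.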

Thank you everyone, and see you soon!

IMO there's room for improvement here: a bad certificate is worth telling the user about even at the default logging levels. @baddack would you raise a Kibana issue on this topic?

Well, you put me in doubt: at first I hadn't run the test to understand why I hadn't seen the log about the hostname/IP not matching the certificate's altnames.

I have done it now, and it shows that at the default logging level the error is raised at startup, right after you restart Kibana. But the logs are quickly flooded with the following lines:

Feb 22 13:10:19 kibana kibana[1341]: {"type":"log","@timestamp":"2021-02-22T12:10:19Z","tags":["warning","elasticsearch","data"],"pid":1341,"message":"No living connections"}
Feb 22 13:10:19 kibana kibana[1341]: {"type":"log","@timestamp":"2021-02-22T12:10:19Z","tags":["warning","elasticsearch","admin"],"pid":1341,"message":"Unable to revive connection: https://10.56.245.133:9200/"}

In verbose mode the certificate error is repeated many more times; that's why I spotted it easily.
My bad, it's human error again! I should have analyzed the first lines of the logs more carefully in the default logging mode.

Thank you @DavidTurner for clarifying this point, but I don't think it will be necessary to follow up on this one, because the problem came from me.

Thanks for the clarification; "human error" is never a valid root cause IMO, so this still sounds to me like there's room for improvement in how these things are logged.
