Grok condition for two different types of files

Hi,
I need to apply two grok patterns to two different log files, such that when the pattern matches one of the error logs in the first file, it triggers grokking of another pattern against the second file. Is that possible?

filter {
	if [type] == "serverfile" {
		grok {
			patterns_dir => "D:/Logstash/patterns"
			match => [ "message", "%{JAVA_PATTERN}" ]
			add_field => { "log" => "javaErrorlogs" }
		}
	}

*** Now I want to grok the pidfile, i.e. only those serverfile logs that matched the error pattern should cause a PIDlog field to be added. The if condition below is not helping because it compares two different scenarios. How do I modify the code below to achieve this?

	if [log] == "javaErrorlogs" and [type] == "pidfile" {
		grok {
			match => [ "message", "%{NUMBER:Pid}" ]
			add_field => { "PIDlog" => "PIDlog" }
		}
	}
}

I don't think what you want to do is possible. Logstash filters process one event at a time, so a conditional can't compare fields from two different events: `[log]` is only set on serverfile events, so `[log] == "javaErrorlogs" and [type] == "pidfile"` can never both be true for a single event. But perhaps you can give an example? Based on your two configuration snippets it's not clear to me what you're trying to do.

Suppose I have two log types in a log file. Whenever I encounter a log matching the error pattern, an ID is generated in some other directory, in a file containing the error ID that caused the error in those logs.
Now I want to extract those generated IDs, which follow a specific pattern, and index them into Elasticsearch.

Is this possible?

Can't you just read and parse all files that end up in the other directory? I don't see why the sequence has to be "when something happens in this file, read that file from that place".
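That approach could look something like the sketch below: a second file input that tails everything in the ID directory and a conditional grok on those events. The path and sincedb location are assumptions on my part, adjust them to your setup.

```
input {
  file {
    path => "D:/errorids/*.log"      # hypothetical directory for the generated ID files
    start_position => "beginning"    # also pick up files that already exist
    sincedb_path => "D:/Logstash/sincedb_errorids"
    type => "pidfile"
  }
}

filter {
  if [type] == "pidfile" {
    grok {
      match => [ "message", "%{NUMBER:Pid}" ]
      add_field => { "PIDlog" => "PIDlog" }
    }
  }
}
```

This reads every ID file unconditionally, so it doesn't depend on what happened in the serverfile log.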

There might be a possibility that some error IDs are already present in the folder, but we want to read only the ones currently being generated by the error logs. That is why we were specifying the sequence.

Okay. In that case no, it's not possible to do it as you've described it. What you can do is read all files but afterwards delete what's not interesting. Or just ignore it. Is it actually a problem that you're reading extra error information?
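To sketch the "ignore what's not interesting" idea: let grok tag events that don't look like error IDs and drop them before they reach the output. The tag name here is made up; depending on your Logstash version, the file input's `ignore_older` option can additionally skip files that were last modified before Logstash started.

```
filter {
  if [type] == "pidfile" {
    grok {
      match => [ "message", "%{NUMBER:Pid}" ]
      tag_on_failure => ["_not_an_errorid"]   # hypothetical tag name
    }
    # Discard lines that don't match the ID pattern instead of indexing them.
    if "_not_an_errorid" in [tags] {
      drop { }
    }
  }
}
```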

Yet another option could be to publish information about errors found in the first file to a broker and have another service (not Logstash) pull that information asynchronously and selectively read files from the other location. That service could emit messages to a single file that Logstash monitors. That way this custom service wouldn't have to reimplement stuff that Logstash does natively.
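On the Logstash side, the first half of that pipeline might look like the sketch below, publishing matched error events to a Redis list (the broker choice and key name are just examples). The custom consumer service would be separate and isn't shown here.

```
output {
  if [log] == "javaErrorlogs" {
    # Publish the error event to a broker; a separate service consumes it,
    # reads the corresponding ID file, and appends it to a file Logstash tails.
    redis {
      host => "localhost"
      data_type => "list"
      key => "java_errors"   # hypothetical queue name
    }
  }
}
```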