Error: not enough master nodes

My cluster has two nodes; both are master and data nodes. When I run some queries in Kibana, it shows this error:

```
[2016-07-18 17:55:00,451][WARN ][monitor.jvm ] [ravi-2] [gc][young][458531][55917] duration [7.1s], collections [1]/[12.1s], total [7.1s]/[21.3m], memory [2.7gb]->[1.7gb]/[3.8gb], all_pools {[young] [1gb]->[12.5mb]/[1gb]}{[survivor] [136.5mb]->[0b]/[136.5mb]}{[old] [1.5gb]->[1.6gb]/[2.6gb]}
[2016-07-18 17:55:24,593][INFO ][monitor.jvm ] [ravi-2] [gc][young][458555][55920] duration [941ms], collections [1]/[1.1s], total [941ms]/[21.3m], memory [2.3gb]->[1.7gb]/[3.8gb], all_pools {[young] [1gb]->[11.5mb]/[1gb]}{[survivor] [136.5mb]->[136.5mb]/[136.5mb]}{[old] [1.1gb]->[1.5gb]/[2.6gb]}
[2016-07-18 17:55:28,532][WARN ][monitor.jvm ] [ravi-2] [gc][young][458558][55921] duration [1s], collections [1]/[1.9s], total [1s]/[21.4m], memory [2.4gb]->[2.1gb]/[3.8gb], all_pools {[young] [766.6mb]->[15.1mb]/[1gb]}{[survivor] [136.5mb]->[136.5mb]/[136.5mb]}{[old] [1.5gb]->[2gb]/[2.6gb]}
[2016-07-18 17:56:37,713][WARN ][discovery.zen ] [ravi-2] not enough master nodes, current nodes: {{ravi-2}{Mi5bQOPuTneHjuUpCCDisA}{}{}{master=true},}
[2016-07-18 17:56:37,715][INFO ][cluster.service ] [ravi-2] removed {{ravi-1}{0mWrxLr0TM6KvO3KF9B3aw}{}{}{master=true},}, reason: zen-disco-node_failed({ravi-1}{0mWrxLr0TM6KvO3KF9B3aw}{}{}{master=true}), reason failed to ping, tried [3] times, each with maximum [30s] timeout
[2016-07-18 17:56:37,785][DEBUG][] [ravi-2] failed to execute on node [0mWrxLr0TM6KvO3KF9B3aw]
NodeDisconnectedException[[ravi-1][][cluster:monitor/nodes/info[n]] disconnected]
[2016-07-18 17:56:37,786][DEBUG][action.admin.indices.stats] [ravi-2] failed to execute [indices:monitor/stats] on node [0mWrxLr0TM6KvO3KF9B3aw]
NodeDisconnectedException[[ravi-1][][indices:monitor/stats[n]] disconnected]
```
What do I have to do to fix this?

It looks like that node is having trouble talking to your other node, or the other node went down. Make sure your network setup allows the two nodes to communicate, e.g. that no firewall is blocking port 9300.
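Also worth checking: the "not enough master nodes" warning comes from zen discovery when fewer master-eligible nodes are visible than `discovery.zen.minimum_master_nodes` requires. With only two master-eligible nodes there is no safe value: 2 means the surviving node halts whenever its peer drops (which is what your log shows), while 1 risks split-brain. Here is a sketch of the relevant `elasticsearch.yml` lines; the cluster name and hostnames are assumptions based on your log, not your actual config:

```yaml
# elasticsearch.yml on each node (names/hosts assumed for illustration)
cluster.name: ravi-cluster
discovery.zen.ping.unicast.hosts: ["ravi-1", "ravi-2"]

# With 2 master-eligible nodes, quorum (n/2 + 1) is 2, so losing either
# node stops the cluster from electing a master. Setting this to 1
# avoids that but allows split-brain. The usual fix is a third
# master-eligible (possibly data-less) node, keeping this at 2.
discovery.zen.minimum_master_nodes: 2
```

A small third node with `node.data: false` is a common way to get a real quorum without much extra hardware.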

I have been maintaining that cluster for a month, and it worked fine until Kibana came into the picture. The errors only show up when I access ES through Kibana; otherwise everything is fine.
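That fits the GC lines in the log above: the old generation is nearly full ([old] 2gb of 2.6gb) and one young collection paused the JVM for 7.1s. Kibana aggregations are heap-hungry, and a node stuck in long GC pauses fails pings and gets dropped (the `zen-disco-node_failed ... failed to ping` line). If the machines have spare RAM, raising the heap may help. A minimal sketch for ES 2.x, assuming you start Elasticsearch from a shell or init script that passes the environment through; the `8g` value is an example, not a recommendation for your hardware:

```shell
# ES 2.x reads its heap size from the ES_HEAP_SIZE environment variable
# (your log's [3.8gb] total suggests roughly a 4g heap today).
# Rules of thumb: at most ~50% of physical RAM, and below ~31g so
# compressed object pointers stay enabled.
export ES_HEAP_SIZE=8g

# Restart the node afterward so the new heap takes effect.
echo "heap set to $ES_HEAP_SIZE"
```

If the hardware can't give more heap, reducing the time range or bucket counts in the Kibana visualizations that trigger the pauses is the other lever.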