Yeah, don't do it that way. Multiple data paths are kind of like "index-structure-aware RAID", but they don't work well in 1.x because shard files aren't always spread across the paths in ways that make recovery behave well. Personally I prefer to just use software RAID under a single data path instead of multiple data paths, though there are other Elastic employees who disagree with me there.
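To make the two approaches concrete, here's a sketch of what each looks like in `elasticsearch.yml`. The mount points are hypothetical; adjust them to your own disks:

```yaml
# Option A: multiple data paths -- Elasticsearch spreads shard
# files across these directories itself.
path.data:
  - /mnt/disk1/elasticsearch
  - /mnt/disk2/elasticsearch

# Option B (what I'd do): build a software RAID 0 device over the
# disks (e.g. with mdadm) and point Elasticsearch at the one mount:
# path.data: /mnt/raid0/elasticsearch
```

With option B the striping happens below Elasticsearch, so recovery never has to reason about which files live on which disk.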
As Mike says, the usual way to have two different types of storage is to use two different nodes and use shard allocation filtering to keep the indexes that you are writing to on your "hot" nodes and move other indexes to "cold" nodes.
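A sketch of how that filtering looks in practice. The attribute name `box_type` and the index name are just illustrative; any custom attribute name works:

```shell
# On the SSD nodes' elasticsearch.yml:       node.box_type: hot
# On the spinning-disk nodes' config:        node.box_type: cold

# Pin the index you're actively writing to onto the hot nodes:
curl -XPUT 'localhost:9200/logs-write/_settings' -d '{
  "index.routing.allocation.require.box_type": "hot"
}'

# Later, when you stop writing to it, flip the setting and
# Elasticsearch relocates the shards to the cold nodes on its own:
curl -XPUT 'localhost:9200/logs-write/_settings' -d '{
  "index.routing.allocation.require.box_type": "cold"
}'
```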
One crucial thing here: you can run two Elasticsearch nodes on the same physical machine; they just run as separate processes. The deb and rpm packages aren't rigged out for it, but you can do the file copies manually if you're comfortable with that kind of thing. Still, it's probably simpler to get three whole new machines, each with two SSDs in RAID 0, and call those the "hot" nodes rather than trying to share the same machines. That way the nodes won't have to share a page cache, and it lets you bring even more RAM to bear on the problem, which is almost always a good thing.
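If you do go the two-nodes-per-box route, the second process mostly needs its own directories and ports. A sketch of the second node's config, with hypothetical names and paths:

```yaml
# Second node's elasticsearch.yml (e.g. /etc/elasticsearch-hot/)
node.name: box1-hot
path.data: /mnt/ssd/elasticsearch        # its own data directory
path.logs: /var/log/elasticsearch-hot    # its own logs
http.port: 9201                          # don't collide with the first
transport.tcp.port: 9301                 # node's 9200/9300
```

Remember to size the two JVM heaps so that both of them plus the shared page cache still fit comfortably in the machine's RAM.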
When you do upgrade to 2.x, watch out for synchronous translog commits being on by default. You'll notice a performance hit, significant if you're using small bulk sizes. It's around 7% for large-ish bulk sizes, which was deemed worth the safety it provides.
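In 2.x that default is `index.translog.durability: request`, i.e. fsync the translog before acknowledging each request. If you measure the hit and decide the old behavior is acceptable for a given index, you can opt back out per index (index name here is hypothetical):

```shell
# Revert to async translog fsyncs on a timer. Trade-off: on a crash
# you can lose up to sync_interval worth of acknowledged writes.
curl -XPUT 'localhost:9200/my-index/_settings' -d '{
  "index.translog.durability": "async",
  "index.translog.sync_interval": "5s"
}'
```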