Being a computational physicist, I tested this by simply plugging in all my own numbers (extracted by brute force from WoS) to see what happened. This was a delightful little exercise: many papers I consider my better works immediately scored higher than under other measures, generating a list that corresponds reasonably closely to their importance, at least as that importance is perceived by me. I would thus be very happy to see this index (or something similar) in use.
But one problem with using the median immediately became obvious: what do you do with all the zeroes you need to divide by? A median of zero will probably be quite common; for instance, the median number of citations for anything published in Science this year (2016) is zero, and no doubt there are journals where the median can remain zero for years. With the mean, the cure is intuitively obvious: the mean is identically zero only when no paper in the journal from that year has been cited at all, including the one you are interested in, so you can simply define the quantity to be zero in these circumstances. But I don't see an equally obvious convention for the median. Just setting the denominator to 1 (which in low-citation circumstances will always be the most likely citation count, barring 0) somehow does not seem quite satisfactory to me. So: how do you solve this? Obviously papers must have a well-defined index from day one.
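To make the zero-denominator problem concrete, here is a minimal sketch. The citation counts are invented, and `score_clamped` is only my reading of the ad-hoc "set the denominator to 1" fallback, not anything the index's authors propose:

```python
from statistics import mean, median

# Invented citation counts for one journal-year cohort; most papers are
# still uncited, as is typical shortly after publication.
cohort = [0, 0, 0, 0, 1, 3, 11]
my_cites = 11

m = median(cohort)   # 0 here, so a median-based score is undefined
mu = mean(cohort)    # ~2.14: nonzero as soon as anything at all is cited

# Ad-hoc fallback: clamp the median denominator to 1.
score_clamped = my_cites / max(m, 1)

# Mean-based variant: the mean is zero only when nothing has been cited,
# so defining the score as 0.0 in that degenerate case is natural.
score_mean = my_cites / mu if mu > 0 else 0.0
```

The awkward part is visible immediately: the clamped median score (11.0) depends entirely on an arbitrary floor, while the mean-based score needs no special case as long as even one paper in the cohort is cited.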
This is really a consequence of an inherent jumpiness in the measure that you get when using the median. To return to the case mentioned above in detail: I have one paper from this year in Science, currently cited 11 times. If you filter out all the Rubbish (here defined as "anything that is not an original research article" [can you believe that Rubbish actually accounts for about two-thirds of the contents of Science as listed in WoS!?! Of course you can, silly me...]), the median citation count is 1, which gives that paper a score of 11 (nice!). But the median citation count will of course at some point reach 2, and at that (completely arbitrary) moment the score suddenly drops by half, even though nothing interesting has really happened. The mean would behave much more smoothly.
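The discontinuity is easy to demonstrate numerically. Again the cohorts are invented, and `score` is just my reading of the proposed index (paper citations over a central measure of the journal-year cohort's citations), with an arbitrary zero-denominator fallback:

```python
from statistics import mean, median

def score(my_cites, cohort, central=median):
    """Citations of one paper divided by a central measure of its
    journal-year cohort's citations; 0.0 in the degenerate all-zero case."""
    m = central(cohort)
    return my_cites / m if m > 0 else 0.0

my_cites = 11
before = [0, 1, 1, 1, 2, 2, 5]   # cohort median is 1
after  = [0, 1, 1, 2, 2, 2, 5]   # one extra citation pushes the median to 2

print(score(my_cites, before))         # 11.0
print(score(my_cites, after))          # 5.5  -> halved by a single citation
print(score(my_cites, before, mean))   # ~6.42
print(score(my_cites, after, mean))    # ~5.92 -> the mean moves smoothly
```

A single new citation somewhere in the cohort halves the median-based score, while the mean-based score shifts by only a few percent.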
So much for the mathematical properties. I also worry a little that the index may not discriminate well between journals at the "lower end" of the spectrum (i.e. those with low impact factors and citation counts). Very bad journals generally have citation counts much lower than the perceived "upper echelon" of the impact-factor universe. So far so good, but a lot of perfectly good journals also have very low citation counts. The basic premise of the index is that citation counts are a meaningful measure of the quality both of individual papers and of the journals themselves. I am reasonably convinced of the former, but I'm less sure about the latter, especially with the median measure suggested here. I suspect that the quality span among journals with a median citation count of 0-1 two years back is very large indeed.
As I said, I like the index a lot, but some of its consequences look a little risky when it is applied across larger datasets. I have the feeling that this index runs the risk of becoming something that is applied only after you have thrown out everyone who has published only in journals with low impact factors.