In method papers and conference talks it is easy to encounter a picture like this:
The ‘amount of good stuff’ may be sensitivity, the number of clusters guessed correctly, or something else. The ‘resources’ may be runtime, the number of samples used by the algorithm, and so on. I call these figures ‘comparative efficacy figures’, and I find them very boring. And not just because of the usual visual noise produced by unnecessarily huge, colourful symbols.
Let's measure how much information this figure carries. The only valuable thing it tells us is that the presented method works better than its alternative. That is one bit of information. And given that the paper got accepted, or that the authors are giving a talk at a conference, it is not news either. So, effectively, zero bits.
These pictures remind me of the figures students produce in their homework. The goal there is not to tell the reader something new (the reader, aka the teacher, already knows the correct solution!) but to demonstrate the amount of work done.
However, if a comparative efficacy figure looks like this, it suddenly becomes much more interesting:
Now the figure does not just praise the presented method; it delineates the region where the method is optimal. Suddenly, all those numbers on the X and Y axes become relevant and interesting. If only there were less visual noise…
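To make the contrast concrete, here is a minimal matplotlib sketch of such a crossover figure. All curves and numbers are invented for illustration; the only point is that the informative quantity is the crossover, the resource level past which the alternative starts to win:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# Made-up efficacy curves: the presented method saturates quickly,
# the alternative keeps improving as resources grow.
resources = np.linspace(1, 100, 200)
ours = 0.9 * (1 - np.exp(-resources / 3))
baseline = 0.1 * np.sqrt(resources)

# The one genuinely interesting number in the figure:
# the first resource level where the alternative overtakes us.
cross_idx = np.argmax(baseline > ours)
crossover = resources[cross_idx]

fig, ax = plt.subplots()
ax.plot(resources, ours, linewidth=1, label="presented method")
ax.plot(resources, baseline, linewidth=1, label="alternative")
ax.axvline(crossover, linestyle=":", linewidth=1)
ax.set_xlabel("resources")
ax.set_ylabel("amount of good stuff")
ax.legend(frameon=False)
fig.savefig("crossover.png")
plt.close(fig)
```

Thin lines, no markers, one dotted vertical line at the crossover: the figure now answers "when should I use this method?" rather than merely "is this method good?".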