Yes, interesting. Database performance certainly varies with the application - or sometimes, in large, sprawling applications that span the whole enterprise, it varies between different functions of the same application.
We initially tried remedying this by optimizing queries and the database schema. It's a big trade-off - sometimes you get a big performance gain, sometimes not so much. What happens consistently, though, is that changing the schema leads to unhappy programmers, so it rarely happens unless the new requirement/report/analysis tool really crawls and would get a 10x speedup from the schema change.
Our latest remedy has been to deal with that data in memory, manipulating it through arrays. In effect, a hard-coded, application-specific NoSQL implementation. Unhappy programmers again. In fact, only a few programmers are really comfortable pulling all the data from a table and just manipulating it in a couple of multidimensional arrays. It eats memory like there's no tomorrow, but it turns out to be an order of magnitude faster in a fair number of cases.
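To make the idea concrete, here's a minimal sketch of that "pull the whole table and index it in memory" pattern. The table name, columns, and the sqlite3 source are hypothetical stand-ins, not our actual schema:

```python
# Minimal sketch: one bulk read, then all lookups/aggregations
# happen against plain in-memory structures (hypothetical data).
import sqlite3
from collections import defaultdict

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 10.0), (1, 5.0), (2, 7.5)])

# One bulk read instead of many per-request queries.
rows = conn.execute("SELECT customer_id, amount FROM orders").fetchall()

# Index the rows into dict-of-lists structures; subsequent
# lookups and aggregations never touch the database again.
by_customer = defaultdict(list)
for customer_id, amount in rows:
    by_customer[customer_id].append(amount)

totals = {cid: sum(amounts) for cid, amounts in by_customer.items()}
print(totals)  # {1: 15.0, 2: 7.5}
```

The win comes from paying the query cost once; the cost is holding everything resident, which is exactly the memory trade-off described above.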
Our next step is to try real NoSQL solutions. We held off for a while out of reluctance to invest in yet another technology (even when a technology is free in money terms, it is never free in programmers' time).