The problem (or rather, one of the problems) with standard deviation is that it understates the importance of outliers, when (as we're learning all too well right now) outliers are precisely what matters most to the majority of investors.
The problem, I think, is a disconnect between academic/institutional finance and real-world investing. In academic finance, beating a benchmark is the goal, and achieving a maximally efficient portfolio is nirvana. In the real world, being able to pay your mortgage is the goal, and taking a vacation when you retire is nirvana (or as close as we can expect given the current economic climate).
I've been hearing recently from quite a few investors who have lost a lot of money. People's portfolios are down 30%, 40%, even 50% or more, and it's having a real impact on their lives: They can't pay to send their kids to university; they can't afford to fix up the house; they are worried about retirement.
These are folks who had decent asset allocation plans in place—perhaps a bit on the risky side, but nothing outrageous. But they've gotten caught in the "unthinkable" event ... the outlier ... the one scenario that wasn't factored into their mean-variance optimization engines.
Craig Israelsen, a professor at Brigham Young University in the United States, agrees that standard deviation is an imperfect measure of risk. He suggests, and I've been incorporating this into my writing and analysis more frequently, that a better tool is the worst 1-, 3- and 5-year drawdown. By using worst drawdown in place of, or in addition to, standard deviation, you can get a feel for what the "worst case scenario" would be, and whether you can stomach it.
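To make the idea concrete, here is a minimal sketch of how you might compute worst rolling 1-, 3- and 5-year cumulative returns from a series of annual returns. The `worst_drawdown` function and the sample numbers are my own illustration, not Israelsen's actual methodology or data:

```python
# Illustrative sketch: worst cumulative return over rolling N-year spans.
# The function name, logic, and sample returns are hypothetical examples.

def worst_drawdown(annual_returns, window):
    """Worst cumulative return over any rolling `window`-year span."""
    worst = None
    for start in range(len(annual_returns) - window + 1):
        growth = 1.0
        for r in annual_returns[start:start + window]:
            growth *= 1.0 + r          # compound the span's annual returns
        cumulative = growth - 1.0      # convert back to a total return
        if worst is None or cumulative < worst:
            worst = cumulative
    return worst

# Ten years of made-up annual returns, including one bad outlier year
returns = [0.12, -0.05, 0.08, -0.38, 0.26, 0.15, -0.02, 0.10, 0.05, -0.10]

for w in (1, 3, 5):
    print(f"Worst {w}-year span: {worst_drawdown(returns, w):.1%}")
```

Run against a fund's or portfolio's historical annual returns, numbers like these give you a far more visceral answer to "how bad could it get?" than a standard deviation figure does.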
Because the truth, I think, is that most investors—if pressed—would give up a little bit of potential upside for a significant reduction in the worst drawdown scenario. Being up 12% vs. 8% is great, but being down 20% instead of 40% is better.