The second article mentions PC inefficiencies, but fails to mention that the PS4 will also have bottlenecks at the hard drive, and that its GPU is already behind the curve.
There's also the definition to consider. Being slower than X does not, by itself, make a bottleneck. A bottleneck is when excess resources in one place sit waiting on inadequate resources somewhere else. A system can be uniformly slow without having any bottleneck at all.
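As a minimal sketch of the distinction (all numbers made up for illustration), consider a pipelined renderer where the steady-state frame time is set by the slowest stage:

    #include <algorithm>
    #include <cstdio>

    // Hypothetical per-frame stage costs in milliseconds (made-up numbers).
    struct FrameCosts { double cpu_ms; double gpu_ms; };

    // In a pipelined renderer, steady-state frame time is set by the slowest stage.
    // The faster stage spends the difference idle -- that idle time is the bottleneck.
    void report(FrameCosts c) {
        double frame_ms = std::max(c.cpu_ms, c.gpu_ms);
        double idle_ms  = frame_ms - std::min(c.cpu_ms, c.gpu_ms);
        std::printf("frame: %.1f ms, faster stage idle: %.1f ms\n", frame_ms, idle_ms);
    }

    int main() {
        report({16.0, 8.0});   // CPU-bound: the GPU sits idle 8 ms every frame (a bottleneck).
        report({16.0, 16.0});  // Uniformly slow: same frame time, but nothing is waiting.
        return 0;
    }

Same frame time in both cases; only the first one has hardware sitting around doing nothing.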
Then, some bottlenecks can be designed around, and some can't. The ones that can't be worked around by some means of caching or multiprocessing will be the worst (the HDD case is sketched below).
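For the HDD, the usual workaround is to overlap the slow reads with other work instead of waiting on them. A rough sketch (the file name and loader are made up, not from any real engine):

    #include <fstream>
    #include <future>
    #include <iterator>
    #include <string>
    #include <vector>

    // Hypothetical asset loader -- the file name and format are illustrative only.
    std::vector<char> load_blob(const std::string& path) {
        std::ifstream f(path, std::ios::binary);
        return std::vector<char>(std::istreambuf_iterator<char>(f),
                                 std::istreambuf_iterator<char>());
    }

    int main() {
        // Start reading the *next* area's data while the current one is being played,
        // so the slow HDD streams in the background instead of stalling the game loop.
        auto next_area = std::async(std::launch::async, load_blob, std::string("next_area.pak"));

        // ... update/render the current area here ...

        std::vector<char> data = next_area.get();  // ideally finished long before this
        (void)data;
        return 0;
    }

That hides HDD latency behind gameplay; it does nothing for the cases below, where there is no slack to hide things behind.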
RAM capacity, for instance, can't be worked around with an HDD (the last console to work around it, IIRC, was the N64, by shifting the cost onto the cartridge). The PS3 had 256MB/256MB, and the XB360 512MB, while PCs of the day had 1024MB of system RAM plus 128-256MB of video RAM, and rising. RAM was expensive at the time, but 256MB of system RAM, even without a desktop OS, was just sad, and 512MB was only marginally better. Sure, we'll be up higher in a few years, but for 1080p, 8GB total should remain plenty to work with.
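For a rough sense of scale (purely illustrative numbers; real engines vary widely), the render targets themselves are a tiny slice of that 8GB even at 1080p; the bulk goes to assets:

    #include <cstdio>

    int main() {
        // Rough 1080p render-target math (illustrative; real engines vary widely).
        const double px = 1920.0 * 1080.0;
        const double mb = 1024.0 * 1024.0;
        double color   = px * 4.0 / mb;        // 32-bit color target: ~7.9 MB
        double depth   = px * 4.0 / mb;        // 32-bit depth/stencil: ~7.9 MB
        double gbuffer = px * 4.0 * 4.0 / mb;  // four 32-bit G-buffer targets: ~31.6 MB
        std::printf("color %.1f MB, depth %.1f MB, G-buffer %.1f MB\n",
                    color, depth, gbuffer);
        // Even a fat deferred setup is only tens of MB; the bulk of 8 GB goes to
        // textures, geometry, audio, and code, which is why 8 GB looks comfortable at 1080p.
        return 0;
    }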
Likewise, a CPU that can't complete short functions quickly can't be worked around. No amount of small arrays with prefetch hints (or explicit prefetch instructions -- on x86 CPUs that line is blurred, since most prefetches are ignored, and some are used only as cache hints), no amount of spatial trie tricks, no Judy arrays, etc., will save you if the CPU is stalled most of the time due to L1 misses, branch mispredicts, unpipelinable instructions, multicycle basic ALU instructions, and so on. Last gen's CPUs were SIMD beasts, but that's the best that could be said for them. Too much practical performance was sacrificed to make an embedded, real-time-capable HPC chip.
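For what it's worth, here's what that kind of prefetching looks like (a sketch; the lookahead distance of 8 is a guess, not a tuned value). It can hide memory latency when chasing pointers, but it does nothing for stalls inside the core itself:

    #include <cstddef>
    #include <xmmintrin.h>  // _mm_prefetch / _MM_HINT_T0

    // Gather through an array of pointers, prefetching a few elements ahead so the
    // loads overlap with the adds. On x86 this is only a hint -- the CPU is free to
    // drop it -- and it helps only with memory latency, not with stalls inside the
    // core (mispredicted branches, multicycle ALU ops, and the like).
    long gather_sum(int* const* ptrs, std::size_t n) {
        const std::size_t lookahead = 8;  // guessed distance, not a tuned value
        long total = 0;
        for (std::size_t i = 0; i < n; ++i) {
            if (i + lookahead < n) {
                _mm_prefetch(reinterpret_cast<const char*>(ptrs[i + lookahead]), _MM_HINT_T0);
            }
            total += *ptrs[i];
        }
        return total;
    }

If the core is the problem rather than the memory, tricks like this just rearrange the deck chairs.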
There comes a point where you're just screwed and have to say, "no, we can't do that, because it takes too much CPU time," even when the CPU looks like it should be really good on paper. Now, some of Jaguar is still unknown, but it's safe to assume it will be faster than Bobcat on average (maybe not by much, but...). If the vector units can do 2x what Bobcat's could (one of Bobcat's weaker points, so 1.5-2x would not be unreasonable), it should be quite a nice little CPU at <=2GHz. Most importantly, decisions about what can and can't be done should be predictable with high confidence well ahead of time, on top of it being a fairly nice CPU, assuming there aren't any performance regressions from Bobcat.
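To put the vector-unit point in concrete terms (a sketch, not benchmark code): the same 128-bit SSE loop runs on both cores; the difference is how many of those 128-bit ops each core can retire per cycle, which is where a 1.5-2x throughput gap would show up.

    #include <cstddef>
    #include <xmmintrin.h>  // 128-bit SSE

    // Plain 128-bit SSE multiply-accumulate over float arrays. The code is the same
    // on Bobcat and Jaguar; what differs is how quickly the core chews through the
    // 128-bit vector ops, which is where a 1.5-2x vector-throughput gap would appear.
    void muladd(float* dst, const float* a, const float* b, std::size_t n) {
        std::size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);
            __m128 vb = _mm_loadu_ps(b + i);
            __m128 vd = _mm_loadu_ps(dst + i);
            _mm_storeu_ps(dst + i, _mm_add_ps(vd, _mm_mul_ps(va, vb)));
        }
        for (; i < n; ++i) dst[i] += a[i] * b[i];  // scalar tail
    }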
Take from the above not that it's going to blow PCs away (with no mouse and keyboard, that's moot anyway for swaths of games, no matter what the giant SoC can do), but that no dev studio should have to reel in its expectations, nor the resulting expressiveness, due to the hardware not living up to its hopes. They should more or less know what they're getting into, and won't have to pray for an SPE routine to pull off a miracle, or deal with high-GHz, low-efficiency ALUs.
The GPU, honestly, will be what it will be. Raw GPU power was often wasted last time, and making it too good now would increase cost and power consumption. That actual game developers had some say in which compromises were made matters far more than the resulting number of processing units in the GPU. If they could have gotten a quad-core CPU and more GPU performance, for instance, but by and large preferred more CPU cores to work with, then more CPU cores with a bit less GPU performance is the better trade.