Well, what upgrades could be made? They could push for higher memory clocks: the maximum GDDR5 operating speed is 7 Gbps, and they're currently at 6 Gbps. Assuming Nvidia doesn't go out of spec, the best-case improvement you'll see from memory clocks is 16.67%. I think the primary driving force behind any clock bumps will be the maturing 28nm process; I wouldn't expect big leaps here, but there is potential for improvement.
Let's take a look at memory clock bumps of the past:
5870 vs 4890: 23.08% increase
6970 vs 5870: 14.58% increase
7970 vs 6970: 0% increase
7970 GE vs 6970/7970: 9.09% increase
Looking at this, AMD's memory gains have stagnated.
580 vs 480: 8.44% increase
680 vs 580: 49.9% increase
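For what it's worth, those percentages are just ratios of effective memory data rates. Here's the arithmetic as a quick Python sketch; the specific Gbps figures below are my own numbers for the stock effective data rates, not something from the list above:

```python
def pct_increase(new_gbps, old_gbps):
    """Percent increase in effective memory data rate (Gbps)."""
    return (new_gbps / old_gbps - 1) * 100

# GTX 680 (6 Gbps) up to the GDDR5 spec ceiling of 7 Gbps:
print(round(pct_increase(7.0, 6.0), 2))    # 16.67 -- the best case from the opening paragraph

# 7970 GHz Edition (6 Gbps) vs 7970 (5.5 Gbps):
print(round(pct_increase(6.0, 5.5), 2))    # 9.09
```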
And if we take a look at the GTX 680 review here at AnandTech, it looks like memory clock improvements will likely be minimal, if they exist at all.
It's hard to fix what isn't broken, but memory bandwidth does look like one of the bigger potential performance gains left on the table for Nvidia.
GK104 doesn't appear to be shader-bound, and since TMUs are tied to the shaders, I doubt it's texture-bound either.
For big gains, all fingers point at more ROPs, a wider memory bus, and more bandwidth. Perhaps they could avoid adding more ROPs by decoupling them from the memory controllers with a crossbar, like AMD (unlikely, but worth throwing out there). As mentioned earlier, it's unlikely that Nvidia can make meaningful improvements to memory speeds, though it's technically possible. A wider bus is possible, but if we're looking at a GK114, I think it'd be difficult to position a 384-bit GK114 next to the 384-bit GK110. There'd be some significant differences (clock speeds, less die area "wasted" on FP64, fewer shaders), but would the differences be enough to justify the jump?
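To put the bus-width option in perspective, peak memory bandwidth is just bus width times data rate. A quick sketch; the 384-bit GK114 configuration is purely hypothetical, per the above:

```python
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: bus width (bits) * data rate (Gbps) / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

# GTX 680's stock configuration: 256-bit bus at 6 Gbps
print(bandwidth_gbs(256, 6.0))   # 192.0 GB/s

# A hypothetical 384-bit GK114 at the same 6 Gbps
print(bandwidth_gbs(384, 6.0))   # 288.0 GB/s
```

A 50% wider bus at unchanged memory clocks buys 50% more bandwidth, which is why bus width looks like the bigger lever than squeezing out the last 16.67% of GDDR5 speed.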
There's one final possibility here: HBM, or high bandwidth memory, using stacked ICs and TSVs. AMD has been pushing this. There's potential for it to show up with HD 8000, and given that it's a third-party (Hynix) solution, Nvidia could use it as well; Nvidia would probably see the most gain out of it. I haven't the damnedest idea whether Nvidia will use it, but it does exist. Whether it ends up being cost-effective, or produced in high enough volume by the time GTX 700 or HD 8000 launches, is very much up in the air.
Then there are the obligatory tweaks and tunings that will probably add a few small percentage points. No one outside of Nvidia can really give numbers here.
These are just my observations... take them with a grain of salt.