If you could be a little more specific, this might actually be believable. Generally, for vectorizable code, there isn't a "wall" of vector lane count beyond which vectorization becomes hard. If you can do SIMD, you can usually do it in 4, 8, 16 or more lanes.
And SIMD and object-oriented code will always clash, anyway. OO is a lot of pointer chasing, and SIMD is completely useless there.
However, for physics calculations, higher lane counts may be a problem. You'll usually use 4-way SIMD for doing 3D vector calculations (using only three lanes, ignoring the fourth). So it won't be trivial to use 8-way SIMD on data structures not designed for it. However, this won't be a problem for all engines, as some engines will have been written by non-stupid people who could see that we'd get wider SIMD execution over time. And changing the engine code to accommodate more flexible SIMD lane widths will pay off in the long run, as we'll get wider and wider SIMD engines. Intel's Knights family already uses 16 lanes, and AVX is designed to be easily extendable, up to 1024 bits / 32 lanes IIRC.
However, it'll take a long time until AVX is widely used, simply because you can't depend on it being there. There are still so many C2D, K10, NHM, etc. systems in the market that it doesn't pay off to support AVX. Once AVX-capable systems pass roughly 50% market saturation, supporting it becomes attractive, but we're still a long way off from that point - especially considering the longer PC replacement cycles we have these days.