So you even understood what I was trying to say, but you still point fingers claiming I implied a single-threaded application should be made multithreaded?

I never said my English skills are perfect, but you seriously need to read what the other person is saying before you point fingers.
On the contrary, I obviously didn't understand what you were trying to say, otherwise we wouldn't be here and we wouldn't be having this little lesson on English grammar now, would we?
You obviously know it's not quad-core optimized even though Crytek says it is. I suppose you'd believe it if I told you I'm the Queen of England. :?
*sigh* Really? Now who has the reading comprehension issues?
You call this needing optimizations? They need to rewrite the whole engine when threads aren't maxing out concurrently on dual cores. That is a huge problem! It's not something that can be fixed overnight.
I'm going to put something out there for you, just so that we get this clear. From reading the thread since yesterday, it's blatantly obvious that you don't fully grasp what multithreading and optimization really mean.
For starters, an application does not need to max out CPU time in order to be considered optimized.
In fact, the opposite is exactly what you want as a developer. Maxing out processing time means your code is using the maximum amount of CPU time the operating system is giving it per scheduler slot. Maxing it out implies that your application/process/thread (we'll simply call it a thread from here) actually needs more time and processing power than the system is capable of providing. Execution of the thread is interrupted as the operating system says, "Sorry, your turn is up - it's time for the next program to run." Yes my friend, multiple programs are still running all the while your little game is running too - they need processing time as well. So your game here, or whatever app, ends up stalling while waiting for the next time slice to be allocated by the operating system. Is stalling efficient? No, it means you're CPU (or, in some cases, resource) limited.
On the other hand, if your application isn't maxing out, that means your thread is more than capable of processing all the data it needs in a given time slice, and it therefore finishes before the allocated slice is up. At that point, the application is free to do further processing and/or hand the balance of the slice back so the OS can schedule the next thread. This means your code is doing what it's supposed to do efficiently - it isn't waiting across multiple time slices to handle what it's supposed to handle.
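To make that concrete, here's a minimal Python sketch of a loop that finishes its per-frame work early and sleeps away the rest of its frame budget, handing the balance back to the OS scheduler instead of spinning at 100% CPU. The 60 Hz budget and the function names are my own illustration, not anything from a real engine:

```python
import time

FRAME_BUDGET = 1.0 / 60.0  # hypothetical target: 60 updates per second

def update_game_state():
    # Placeholder for the real per-frame work; assumed to finish quickly.
    pass

def run_frames(n):
    """Run n frames; returns the total time voluntarily given back to the OS."""
    slept = 0.0
    for _ in range(n):
        start = time.perf_counter()
        update_game_state()
        elapsed = time.perf_counter() - start
        remaining = FRAME_BUDGET - elapsed
        if remaining > 0:
            # Finished early: yield the balance of the slice back to the
            # scheduler instead of busy-waiting at full CPU load.
            time.sleep(remaining)
            slept += remaining
    return slept
```

Note the point: low CPU usage here doesn't mean the loop is unoptimized - it means the work fits comfortably inside the budget.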
Now there's a little give and take here too. Something else may be inefficient if your thread isn't eating up the entire slice - for example, your video card may not be able to keep up. Your thread processes all the data, then hands it off to the video card to put it on the screen - but if the frame takes longer to render than the slice allows, then all of a sudden your application has to wait before processing the next frame (VSYNC). There are ways to avoid this issue, such as dropping frames in time-dependent environments, and these sorts of solutions are usually implemented in some manner.
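One common way to drop frames in a time-dependent environment is a fixed-timestep loop: the simulation always advances in fixed steps, and when a render takes longer than one step, the loop runs several updates before the next render so game time stays correct even though some rendered frames are skipped. A sketch under my own assumptions (the 60 Hz step and names are illustrative, not from any particular engine):

```python
TIMESTEP = 1.0 / 60.0  # fixed simulation step (hypothetical 60 Hz)

def fixed_timestep(frame_times):
    """Given the wall-clock cost of each rendered frame, return how many
    simulation updates ran before each render. A slow frame accumulates
    more than one TIMESTEP, so the simulation catches up with extra steps
    instead of slowing game time down."""
    updates_per_frame = []
    accumulator = 0.0
    for cost in frame_times:
        accumulator += cost
        steps = 0
        while accumulator >= TIMESTEP:
            accumulator -= TIMESTEP
            steps += 1  # advance the game state by one fixed step
        updates_per_frame.append(steps)
    return updates_per_frame
```

So a frame that costs two timesteps simply triggers two updates - the player sees fewer rendered frames, but the game doesn't fall behind the clock.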
The long and the short of it, though - it is thoroughly stupid to think that something isn't actually optimized just because it's not maxing out one (or more) cores on a CPU. That's the last thing you want as an end user, because it means your system isn't optimal for the application.