On one hand, Fermi is treated as a trash GPU with no appeal (on hardware review sites, where the focus is on gaming performance). On the other hand, Fermi is seen as an amazing revolution (by the video editing guys).
Though Adobe CS5 is not officially out yet, I saw a brief benchmark done on the beta version. With its native CUDA support, encoding an H.264 clip that took 45 minutes without CUDA took only 4 minutes, roughly an 11x speedup, and unlike before, it came without visual penalties or limitations. Needless to say, it is a dramatic difference. Even better, the test was done on a GTX 285. Considering Fermi is designed to perform much better than the GTX 285 in terms of CUDA performance, I can only wonder what Fermi can do.
Keep in mind that, unlike gamers, video guys haven't seen this kind of revolution. A modern 4-6 core i7 CPU that is blazingly fast in most situations is still painfully slow for video editing. I know a guy who shot 5400 minutes of documentary footage on a 5D Mark II (H.264, 40 Mbps video), and it literally took him a month just to transcode it all into an editable codec (he used two Mac Pros along with two of http://www.bhphotovideo.com/c/produc...320_Array.html).
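A quick back-of-envelope on why it takes that long (assuming his 40 Mbps figure as the average): 5400 minutes × 60 s × 40 Mb/s ÷ 8 ≈ 1.6 TB of source footage, every frame of which has to be decoded and re-encoded on the CPU. And since editable intermediate codecs typically run at much higher bitrates than 40 Mbps, the transcoded output ends up several times larger still.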
With Fermi and CS5, that need for transcoding will be gone, and real-time editing of high-bitrate H.264 video will finally become a reality. Encoding the final work will also take far less time.
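For anyone wondering why a GPU helps so much here: most per-frame editing work (color correction, brightness, scaling) is the same tiny operation repeated across roughly two million pixels, which maps perfectly onto the hundreds of parallel cores in a GPU. Here is a minimal CUDA sketch of that idea; it is purely illustrative and has nothing to do with Adobe's actual Mercury engine code:

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// One thread per byte of the frame: add `delta` to every channel,
// clamping the result to the 8-bit range.
__global__ void brighten(unsigned char *frame, int n, int delta)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        int v = frame[i] + delta;
        frame[i] = (unsigned char)(v > 255 ? 255 : v);
    }
}

int main(void)
{
    const int n = 1920 * 1080 * 3;   // one 1080p RGB frame, ~6 MB
    unsigned char *d_frame;
    cudaMalloc(&d_frame, n);
    cudaMemset(d_frame, 128, n);     // stand-in for a decoded frame

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    brighten<<<blocks, threads>>>(d_frame, n, 20);  // ~6.2M threads, one per byte
    cudaDeviceSynchronize();

    cudaFree(d_frame);
    printf("brightened one frame on the GPU\n");
    return 0;
}
```

A CPU has to walk those six million bytes with a handful of cores; the GPU throws thousands of threads at them at once, and that is before you even count its dedicated video hardware.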
Given that video guys haven't seen this sort of revolution, and that they are willing to drop big bucks, I have no doubt Fermi will be very successful in that market.
The notion and usage of the GPU is changing, yet all we talk about is games and how many fps it can push. That, I think, is something we need to reconsider when talking about Fermi and nVidia.
--------------------------------------------------------------------------------------------------------------------------------------
What I wrote in the Digital and Video Cameras section:
Too bad Fermi is getting trashed among hardware sites and gamers. Since its appeal in the gaming market is so poor, I think nVidia will try to cash in on the video market. After all, nVidia is the pioneer in the video market, and it's a blue ocean where competition does not exist.
So far, only the cards listed below are supported:
* GeForce GTX 285 (Windows and Mac OS)
* Quadro FX 3800 (Windows)
* Quadro FX 4800 (Windows and Mac OS)
* Quadro FX 5800 (Windows)
* Quadro CX
However, Adobe mentioned that they are "planning to support additional cards in the future, including some of the new NVIDIA solutions based on the upcoming Fermi parallel computing architecture."
Keep in mind that there's a limitation set on the GTX 285: only 3 tracks on the timeline get CUDA acceleration. Fermi, I bet, will be supported fully. Given the limitation set on the GTX 285, the outlook for other cheap GPUs being supported is somewhat gloomy. If they want to cash in with Fermi, I foresee they will keep limiting the other cards.
Nonetheless, this is a revolution and a great step toward a great future. I have high hopes!