What's your opinion of real-time ray tracing?

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Myself, I can't wait for RtRt. The rendering is almost perfect. Here is a good article on it, with links that give a good basic understanding.

The real question is when we will see RtRt.

http://softwarecommunity.intel.../articles/eng/1343.htm.


Here are a few highlights I find interesting, and I'll comment on why.


No need for comment on this one.

Recently, it has been shown that ray tracing can be done in real-time on consumer PCs. This is an interesting development; ray tracing solves many of the problems that rasterization has by taking into account global effects (shadows, reflections, refractions) in an intuitive way, it is able to create more realistic graphics, and does so in a more elegant way. At the same time, ray tracing is a very resource-intensive algorithm; to make it real-time requires optimal use of modern hardware.

It's pretty easy to see why I find this interesting, with Nehalem coming with 8 cores.

To maximize ray tracing performance, both thread level and instruction level parallelism should be used. As mentioned in section 2, the ray tracing algorithm is parallel by nature; rays can be traced independently and in any order. Creating a multithreaded ray tracer that uses all available cores is therefore straightforward. In principle, each ray can be rendered in its own thread. In practice, it is more efficient to assign tiles of pixels to each rendering thread, to reduce threading overhead.
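The tile-per-thread idea in that passage can be sketched in a few lines. This is my own toy Python illustration (the stand-in `shade` function, image size, and tile size are all made up), not code from the article:

```python
import threading
from queue import Queue, Empty

WIDTH, HEIGHT, TILE = 64, 64, 16  # tiny image, 16x16-pixel tiles

def shade(x, y):
    # Stand-in for tracing one primary ray through pixel (x, y).
    return (x ^ y) & 0xFF

def worker(tiles, framebuffer):
    # Each thread pulls whole tiles, not single pixels, which keeps
    # threading overhead low and memory access coherent.
    while True:
        try:
            tx, ty = tiles.get_nowait()
        except Empty:
            return
        for y in range(ty, min(ty + TILE, HEIGHT)):
            for x in range(tx, min(tx + TILE, WIDTH)):
                framebuffer[y * WIDTH + x] = shade(x, y)

framebuffer = [0] * (WIDTH * HEIGHT)
tiles = Queue()
for ty in range(0, HEIGHT, TILE):
    for tx in range(0, WIDTH, TILE):
        tiles.put((tx, ty))

threads = [threading.Thread(target=worker, args=(tiles, framebuffer))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because rays are independent, no locking is needed on the framebuffer itself; each tile touches a disjoint set of pixels.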


This one isn't so transparent, but Apple could be the big gainer here.

The worker thread code that waits for the signal from the master thread, gets the next task, renders assigned tiles, and signals the master when there are no more tiles to be rendered is shown below. This code does not rely on the OS for synchronization during the actual processing of tasks. Minimizing calls to system routines considerably decreases threading overhead.
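The code the article refers to isn't reproduced in this excerpt, but the pattern it describes, workers claiming the next tile with an atomic counter instead of calling into the OS for every task, can be sketched roughly like this. This is my own illustration; a real implementation would use a hardware atomic increment, not a Python lock:

```python
import threading

class AtomicCounter:
    # Emulates an atomic fetch-and-add; in C/C++ this would be a single
    # lock-free hardware instruction rather than a mutex.
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def fetch_add(self):
        with self._lock:
            value = self._value
            self._value += 1
            return value

NUM_TILES = 32
rendered = [False] * NUM_TILES
counter = AtomicCounter()

def worker():
    while True:
        tile = counter.fetch_add()   # claim the next tile; no OS queue involved
        if tile >= NUM_TILES:
            return                   # no tiles left; would signal the master here
        rendered[tile] = True        # "render" the claimed tile (stubbed out)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The system-call traffic is confined to thread startup and shutdown; the inner loop is just an increment and a compare.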

Again, this one explains itself.

A ray tracer that uses thread-level parallelism (as described in section 3) scales almost linearly in performance as the number of available cores increases. A well-written implementation, using a high-quality kd-tree and efficient traversal and shading code, should be able to achieve millions of rays per second, per core. This number can be increased significantly by using instruction-level parallelism. For this, SIMD instructions are used to operate on multiple data streams in parallel. SIMD instructions are implemented by the SSE instruction sets on modern processors.

This one I find interesting mostly because most of us know Intel is really big on improving vectorization: SSE4, and what SSE4.1 brings with Nehalem.

Applying parallelism to the ray tracing algorithm can greatly improve the performance of a single-threaded, non-vectorized implementation. Dividing work over multiple threads is relatively simple for this rendering algorithm, since rays are independent. Splitting work by assigning tiles of pixels to rendering threads allows us to control the granularity of the multithreading, while at the same time increasing coherency of memory access. Using an efficient master/worker model, ray tracing has the potential to scale almost linearly with the number of available cores, which makes it the perfect test case for today's multi-core processors.

Vectorization using SIMD instructions is used to improve the performance of individual rendering threads. By tracing four rays simultaneously, all stages of a ray tracer can be sped up considerably: In this first article, it was shown that normalization of ray directions is about eighteen times faster. By working with packets of rays throughout the stages of ray tracing, data conversion is minimized.
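A structure-of-arrays view of a four-ray packet makes this concrete. The sketch below is plain Python standing in for SSE: each list plays the role of one 4-wide register holding the same component of four different rays, so each "operation" updates all four rays in lockstep (with real SSE, each line of four multiplies would be a single instruction, and the reciprocal square root would use the fast `rsqrtps` approximation). The ray directions are made-up values:

```python
import math

# One "register" per coordinate, four rays per register (SoA layout).
xs = [1.0, 0.0, 0.0, 1.0]
ys = [0.0, 2.0, 0.0, 1.0]
zs = [0.0, 0.0, 3.0, 1.0]

# Normalize all four ray directions at once.
inv_len = [1.0 / math.sqrt(x * x + y * y + z * z)
           for x, y, z in zip(xs, ys, zs)]
xs = [x * s for x, s in zip(xs, inv_len)]
ys = [y * s for y, s in zip(ys, inv_len)]
zs = [z * s for z, s in zip(zs, inv_len)]
```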


How hard will it be to port games to RtRt? Intel is working hard on it.

From what I understand, it's not that hard. We just need processors with the parallelism and vectorization to speed up the process. Think Nehalem.

Sure would be nice to do away with the GPU. The problem is when we will see this.

A long time away, or the near future?

I hope for the near future, but we just have to wait and see.

 

Nanobaud

Member
Dec 9, 2004
144
0
0
My impression of the rtrt demos so far (admittedly I have not studied all of them, or any that closely) is that they come close to interactive frame rates at VGA-type resolutions for objects modeling diffuse materials. I can easily see the frame rates and resolutions increasing quickly, but I think it will be quite a while before that can include more realistic materials (translucent, specular reflective, ...), complicated lighting (finite-sized sources, ambients, ...), and advanced effects (more sophisticated AA, filtering, ...), and these are the things that really make a ray-traced image spectacular. Meanwhile, scan-line imaging (not sure if that is the right term for video-card rendering techniques) continues to make tremendous advances, is already parallelized pretty well, and can do continually more amazing things with textures. Certainly rtrt will eventually surpass the image quality for video that we are getting today, but you have to consider what you would get by applying the same hardware power to the scan-line rendering technology of that time.

I'm not sure ray tracing will ever take over for video, but I will take it hands-down for a still image anytime.

nBd
 

speckedhoncho

Member
Aug 3, 2007
156
0
0
My Computer Graphics class, which dealt with the 3D rendering pipeline, involved a lot of ray-tracing algorithms. The premise was to use ray tracing, as the article discussed, to see if any polygon surface intersected the ray. The implication of each intersection depended, of course, on what the ray tracing was interpolating, but it was used in the surface-elimination algorithm pushed by the professor.

Considering that ray tracing needs a linear equation (even when quads, cubics, etc. are involved) and parametric values, shader cores can easily use the ADD, MUL, and MAD operations just as they do with other matrix transformations.
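To make the ADD/MUL point concrete: a ray is the parametric line P(t) = O + tD, and intersecting it with a plane reduces to exactly those operations. A minimal sketch (my own toy example, not from the class in question):

```python
def ray_plane_t(origin, direction, normal, d):
    # Plane: dot(normal, P) + d = 0.  Substituting P = O + t*D gives
    # t = -(dot(normal, O) + d) / dot(normal, D) -- nothing but MULs and
    # ADDs plus one divide, which maps well onto shader ALUs.
    denom = sum(n * dc for n, dc in zip(normal, direction))
    if abs(denom) < 1e-9:
        return None  # ray runs parallel to the plane
    num = sum(n * oc for n, oc in zip(normal, origin)) + d
    return -num / denom

# A ray fired down the z axis hits the plane z = 5 at t = 5.
t = ray_plane_t((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0), -5.0)
```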

As with mathematical breakthroughs in algorithms, developers making the engines will need a couple of years to build an engine where specular lights, diffusion, and clipping make up the latter part of the rendering pipeline. Since clipping can now be done early with the early-Z buffer, I wonder whether engines use that today, and whether there would be a benefit to using the ray-tracing algorithm instead of early-Z. Either way, if ray tracing has benefits, then when such an engine comes out, games will no doubt use it, provided prototypes with higher-than-expected lighting and polygon counts perform well.

 

speckedhoncho

Member
Aug 3, 2007
156
0
0
The worker thread code that waits for the signal from the master thread, gets the next task, renders assigned tiles, and signals the master when there are no more tiles to be rendered is shown below. This code does not rely on the OS for synchronization during the actual processing of tasks. Minimizing calls to system routines considerably decreases threading overhead.

The threading will be done on the GPU, so I doubt OS thread syncing will be an issue.
 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
Reminds me of PowerVR.
They had an interesting idea, I just don't think the industry was up to the task at the time.
Maybe now they are.
 

speckedhoncho

Member
Aug 3, 2007
156
0
0
Was PowerVR using ray tracing? I know they used tiles, but I forget what was involved in those algorithms. They touted it as a performance enhancer for early surface elimination so the clipped surfaces wouldn't be textured and shaded.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Not sure why PowerVR is being discussed here, other than the fact that Intel licensed some of its tech. RtRt, after it has finished rendering a scene, still needs to send that scene to a GPU, which displays the rendered picture on a monitor. Read the link above.

If you actually read it, things become a little clearer:

http://www.pcper.com/article.php?aid=455
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Here is a short description of the system Intel was using at IDF.

The demo system that was on display at IDF was running a dual-quad core (total 8 cores) system as you can see from the 8-threads being processed in the task manager on screen. The image here is from a map on Quake 4 and is being rendered completely on the Intel CPUs while the GPUs are only taking the final image and sending it to the monitor.

Now, the Nehalem processor will have 8 cores with HT, so that will equal 16 threads. Then you add in the SSE4 speed-up of 4x, and things become very interesting, fast.

It's still unknown what vectorization improvements exist with Nehalem, so that's just a further plus.

So Intel's claim in the above link of RtRt in two years isn't looking undoable, from my perspective.

Couple that with the fact that creating games for RtRt is easier than for the present-day GPU, and things are looking very good.

Nehalem C is the CPU in the Nehalem family I want.

I believe this is why AMD won't use all of the SSE4 and SSE4.1 instruction sets, and why Intel won't use SSE5.

Intel and AMD are going in different directions. Who wins this race is unknown at this time. My money is on Intel.

The clear loser here is NV.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Just adding a bit more to this puzzle. I love jigsaw puzzles.

From the Newsdesk
Intel picks up gaming physics engine for forthcoming GPU product
By Jon Stokes | Published: September 17, 2007 - 10:44AM CT

Late last week, Intel announced the purchase of gaming development tools maker Havok, authors of the famous Havok physics engine used in a whole raft of top-shelf games like BioShock, Oblivion, Half-Life 2, and Halo 2. According to a statement from Intel, "Havok will operate its business as usual, which will allow them to continue developing products that are offered across all platforms in the industry."

So if Havok is going about business as usual, then why did Intel pick them up? Indeed, why would Intel buy a gaming dev tools maker in the first place?

The answer to both questions, of course, is Larrabee.

Larrabee's legacy problem
In late 2008 or early 2009, when Intel launches its forthcoming GPU product (codenamed Larrabee) on its new 45nm process, the company will almost certainly face one major hurdle: DirectX 10. Larrabee is a different sort of beast than the traditional GPU, and I've described before how its many-core x86 design is particularly suited for physics and real-time ray tracing. I've also talked about how Intel has crammed some specialized hardware onto the chip in order to make it better at the kind of raster graphics that traditional GPUs do.

Because of its unique architecture, when Larrabee debuts it'll be able to do some fantastic ray-tracing and physics tricks that other GPUs won't be able to match, but it's not at all clear the card will be any good at DirectX 10 games. If it does turn out that Larrabee's DX10 performance trails the GPU pack by a significant margin, then Intel will need some way to get users turned on to the idea of settling for fewer frames per second than they're used to with parts from NVIDIA and AMD/ATI. After all, games in late 2008 won't be ray-traced, so it won't matter that Larrabee's ray-tracing prowess makes for some great visual effects.

One way to get support for Larrabee built into games right off the bat is to buy a software company with a technology that's already widely used in AAA titles and that will benefit powerfully from Larrabee's unique architecture. Havok has both of these ingredients, which makes it a great fit.

Intel can make Havok's physics engine and other software scream on Larrabee, so that when the new GPU launches the company can advertise "good enough DX10 performance" coupled with "but check out what we can do with physics and AI." If Intel can entice enough consumers with a combination of Havok performance and the promise of real-time ray tracing (RTRT) goodies to come, then the company can deliver a large enough installed base to developers to make the effort of putting out a RTRT version of their games worthwhile.

You can also look for Intel to bend over backwards to help game developers find ways to put RTRT into their engines. If you're a game developer working on a hotly anticipated title, then Intel would be silly not to throw free engineering resources at you to get you to offer as much support as possible for Larrabee's unique features.

Guys, don't be afraid to jump in here. This is wide open; no need to think in the traditional manner, and don't limit yourself to what you think you know. The guys at B3D are scratching their collective heads over this one. These guys are smart, yet none have a real handle on this. Be brave: express your ideas and foresight.
AT does some of the best reviews around. Let's make these forums the best place to discuss this stuff, without the threat of trolls or the bad behaviour of the past. The AT forum staff have taken a bold step; now let us posters use this forum for the betterment of all.
Have fun with this subject!
 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
I would be more impressed with radiosity in real time than ray tracing.
The effect radiosity can have on a scene is, to me, a lot greater than that of ray tracing.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
A link on radiosity would be helpful to me, as I am very much into the final rendered scene as seen on our monitors. Rest assured, I will read any link posted in its entirety more than once. It is an interesting view you have.

Keep in mind, however, that movie studios prefer RtRt. There is good reason for this.

A small paragraph on why you feel this way would also be beneficial to all forum members, so they can see how you came to this conclusion.

Something like this would have been helpful. From Wikipedia:

The inclusion of radiosity calculations in the rendering process often lends an added element of realism to the finished scene, because of the way it mimics real-world phenomena. Consider a simple room scene.

The image on the left was rendered with a typical direct illumination renderer. There are three types of lighting in this scene, chosen and placed by the artist in an attempt to create realistic lighting: Spot Lighting with Shadows (to create the light shining on the floor), Ambient Lighting (without which the rest of the room would be totally dark), and Omnidirectional lighting without shadows (to reduce the flatness of the ambient lighting).

The image on the right was rendered using a radiosity algorithm. There is only one source of light, an image of the sky placed outside the window. The difference is marked. The room glows with light. Soft shadows are visible on the floor, and subtle lighting effects are noticeable around the room. Furthermore, the red color from the carpet has bled onto the grey walls, giving them a slightly warm appearance. None of these effects were specifically chosen or designed by the artist.

It's interesting, but is it better than RtRt? Then one must take game developers into account: is radiosity easier or more difficult to use than RtRt?

As I understand it, RtRt is easier for programmers, which saves game developers both time and complexity, which equals higher profits.

As has been pointed out already, RtRt has been held back by the sheer number of rays it takes to render a complex scene. With Nehalem's 8 cores with HT (16 threads), in conjunction with Larrabee, Intel may have the answer to that problem far sooner than anyone expected.



 

Lord Banshee

Golden Member
Sep 8, 2004
1,495
0
0
From Google:
www.cs.rpi.edu/~cutler/classes/advancedgraphics/S07/lectures/13_radiosity.pdf

I don't know too much about rendering tech, but it looks like ray tracing is a precise way to do direct lighting, where radiosity is an algorithmic-shortcut way to do global illumination. But personally, when I used to render objects for whatever reason a couple of years back, I loved the results of the "skylight" and global illumination settings. Maybe Modelworks can clear these things up for us.
 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
The best explanation I can give is from the 3ds Max manual on the differences.
I think the reason I would prefer radiosity is that scenes done with radiosity appear more natural and soft, where ray-traced images can appear cold and harsh. As the manual itself says, ideal would be both in the same renderer.

Compare the two images below, top is raytrace, bottom is radiosity.
http://www.modworks.net/ray.jpg


One of the first global illumination algorithms developed is known as ray-tracing. The ray-tracing algorithm recognizes that although billions of photons may be traveling about the room, the photons we primarily care about are the ones that enter the eye. The algorithm works by tracing rays backward, from each pixel on the screen into the 3D model. In this way, we compute only the information needed to construct the image. To create an image using ray-tracing, the following procedure is performed for each pixel on the computer screen.

1. A ray is traced back from the eye position, through the pixel on the monitor, until it intersects with a surface. We know the reflectivity of the surface from the material description, but we do not yet know the amount of light reaching that surface.

2. To determine the total illumination, we trace a ray from the point of intersection to each light source in the environment (shadow ray). If the ray to a light source is not blocked by another object, the light contribution from that source is used to calculate the color of the surface.

3. If an intersected surface is shiny or transparent, we also have to determine what is seen in or through the surface being processed. Steps 1 and 2 are repeated in the reflected (and, in the case of transparency, transmitted) direction until another surface is encountered. The color at the subsequent intersection point is calculated and factored into the original point.

4. If the second surface is also reflective or transparent, the ray-tracing process repeats, and so on until a maximum number of iterations is reached or until no more surfaces are intersected.

A significant disadvantage of both ray-tracing and scanline rendering is that these techniques do not account for one very important characteristic of global illumination, diffuse inter-reflections. With traditional ray-tracing and scanline rendering, only the light arriving directly from the light sources themselves is accurately accounted for. But, as shown in the room example, not only does light arrive at a surface from the light sources (direct lighting), it also arrives from other surfaces (indirect lighting). If we were to ray-trace an image of the kitchen, for example, the areas in shadow would appear black because they receive no direct light from the light sources. We know from experience, however, that these areas would not be completely dark because of the light they would receive from the surrounding walls and floor.
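The procedure quoted above fits in a page of code. The sketch below is my own minimal Python toy (two hard-coded spheres, one point light, diffuse shading only, no reflection recursion), not anything from 3ds Max. Note how the shadow ray makes the occluded point come back pure black, which is exactly the missing-indirect-light problem described in the last paragraph:

```python
import math

def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def add(a, b): return (a[0]+b[0], a[1]+b[1], a[2]+b[2])
def scale(a, s): return (a[0]*s, a[1]*s, a[2]*s)
def norm(a): return scale(a, 1.0 / math.sqrt(dot(a, a)))

def hit_sphere(center, radius, origin, direction):
    # Nearest positive root of |O + t*D - C|^2 = r^2 (D assumed unit length).
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

SPHERES = [((0.0, 0.0, 5.0), 1.0),   # sphere in front of the camera
           ((0.0, 5.0, 2.0), 0.5)]   # small occluder between it and the light
LIGHT = (0.0, 10.0, 0.0)

def nearest_hit(origin, direction):
    best = None
    for center, radius in SPHERES:
        t = hit_sphere(center, radius, origin, direction)
        if t is not None and (best is None or t < best[0]):
            best = (t, center)
    return best

def trace(origin, direction):
    hit = nearest_hit(origin, direction)            # step 1: find a surface
    if hit is None:
        return 0.0                                  # background
    t, center = hit
    point = add(origin, scale(direction, t))
    normal = norm(sub(point, center))
    to_light = norm(sub(LIGHT, point))
    if nearest_hit(point, to_light) is not None:    # step 2: shadow ray
        return 0.0                                  # blocked: pure black shadow
    return max(0.0, dot(normal, to_light))          # simple diffuse term

brightness = trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
```

With the small occluder removed from SPHERES, the same primary ray returns a positive diffuse value instead of black; with it in place, the shadowed point receives nothing at all, since no indirect light is modeled.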

---------------------

Radiosity

To address this issue, researchers began investigating alternative techniques for calculating global illumination, drawing on thermal engineering research. In the early 1960s, engineers developed methods for simulating the radiative heat transfer between surfaces to determine how their designs would perform in applications such as furnaces and engines. In the mid-1980s, computer graphics researchers began investigating the application of these techniques for simulating light propagation.

Radiosity, as this technique is called in the computer graphics world, differs fundamentally from ray-tracing. Rather than determining the color for each pixel on a screen, radiosity calculates the intensity for all surfaces in the environment. This is accomplished by first dividing the original surfaces into a mesh of smaller surfaces known as elements. The radiosity algorithm calculates the amount of light distributed from each mesh element to every other mesh element. The final radiosity values are stored for each element of the mesh.

In early versions of the radiosity algorithm, the distribution of light among mesh elements had to be completely calculated before any useful results could be displayed on the screen. Even though the result was view-independent, the preprocessing took a considerable amount of time. In 1988, progressive refinement was invented. This technique displays immediate visual results that can progressively improve in accuracy and visual quality. In 1999, the technique called stochastic relaxation radiosity (SRR) was invented. The SRR algorithm forms the basis of the commercial radiosity systems provided by Autodesk.

Neither radiosity nor ray-tracing offers a complete solution for simulating all global illumination effects. Radiosity excels at rendering diffuse-to-diffuse inter-reflections, and ray-tracing excels at rendering specular reflections. By integrating both techniques with a production quality scanline rendering system, 3ds Max offers the best of both worlds. After you create a radiosity solution, you can render a two-dimensional view of it. In your 3ds Max scene, ray-tracing adds effects in addition to those that radiosity provides: lights can provide ray-traced shadows, and materials can provide ray-traced reflections and refractions. The rendered scene combines both techniques, and appears more realistic than either technique alone could provide.
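The element-to-element bookkeeping described above can be shown with a deliberately tiny example. This is my own toy: three patches, made-up form factors and reflectivities, and a plain Jacobi-style iteration of the classic radiosity equation B_i = E_i + rho_i * sum_j F_ij * B_j (real systems use thousands of mesh elements and smarter solvers such as the progressive refinement mentioned above):

```python
EMISSION = [1.0, 0.0, 0.0]   # patch 0 is the only light emitter
REFLECT  = [0.0, 0.5, 0.5]   # diffuse reflectivity (albedo) of each patch
FORM = [[0.0, 0.5, 0.5],     # FORM[i][j]: form factor from patch i
        [0.5, 0.0, 0.5],     # to patch j (each row sums to <= 1)
        [0.5, 0.5, 0.0]]

radiosity = EMISSION[:]
for _ in range(50):          # iterate B_i = E_i + rho_i * sum_j F_ij * B_j
    radiosity = [EMISSION[i] +
                 REFLECT[i] * sum(FORM[i][j] * radiosity[j] for j in range(3))
                 for i in range(3)]
```

After convergence, patches 1 and 2 each settle at 1/3, more than the 0.25 they receive directly from the emitter; the extra light is the diffuse inter-reflection between them, the very effect pure ray tracing misses.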

 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
That's an interesting still picture you chose. First off, it shows ray tracing at its worst possible. Notice the lighting above the sculpture: it isn't used as a light source at all. After looking at the scene again, the light source above the sculpture is present, but the primary light source is absent. If you follow the shadow, that light source should also be in the scene; if you look at the shadows again, you can determine where the light source is. The sculpture's face should have been lit by the primary light source, and it isn't. That is a poor example at best. There seems to be only one source in that scene in use at any one instance, and a reflection source also seems to be absent.

Now watch this video: at no time does it show what you're trying to show as a finished ray-traced scene. Here's the video. I find it pretty accurate as far as real-world lighting goes.

http://www.idfun.de/q3rt/20040509_egoshooters_q3rt.avi


Could you show a video of radiosity so we can make a comparison, in real-world gameplay terms?

 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
My opinion of ray tracing is that it's currently far too slow to be used in real time. They can only just render Quake 3 decently, and that game is almost ten years old.

Having said that, I believe Doom 3 uses some elements of ray tracing, so a hybrid approach might be more viable.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
I would say you're 100% correct at this time, but that's not what we're discussing. We're talking about the attributes of RtRt, along with Intel's assertion that RtRt will be possible in a short two-year span. I have no reason to doubt Intel on this, given what we presently know: Nehalem C in 2009, 32nm, probably high-k with 3D gates, 8 cores in a modular design, meaning one of those cores could be Larrabee. In that case it would be on QuickPath, operating at 5-6 GHz in both directions. Or you could look at it as 8 cores with HT (16 threads), used in conjunction with Larrabee and Geneseo, using PCIe 2. I must not forget SSE4.1; it has much to do with all of this.

I wouldn't bet against Intel, as they are a company hitting on all cylinders and bringing us leading-edge tech, today and tomorrow. All must agree with this.

I could also have linked to Quake 4, which Daniel has also rendered using RtRt and his new engine. Now that isn't 10 years old. Besides, the age of the game has little to do with this discussion; it just shows what Daniel and Intel are working on.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
But that's not what we're discussing.
The thread title is "What's your opinion of real-time ray tracing", so I gave you my opinion.

Along with Intel's assertion that RtRt will be possible in a short two-year span. I have no reason to doubt Intel on this.
Intel also claims GMA is a gaming solution, so I'll believe it when I see it. That, and don't forget the likes of AA.

Also, throwing multiple cores at the problem isn't necessarily a silver bullet, given we already have quad-core now but ray tracing is still not viable in gaming.

And finally, if you're waiting two years for Intel, think about what traditional GPU rendering will have achieved by then, as they certainly won't be standing still for Intel.
 

DrMrLordX

Lifer
Apr 27, 2000
22,931
13,014
136
Originally posted by: BFG10K


Also, throwing multiple cores at the problem isn't necessarily a silver bullet, given we already have quad-core now but ray tracing is still not viable in gaming.

And finally, if you're waiting two years for Intel, think about what traditional GPU rendering will have achieved by then, as they certainly won't be standing still for Intel.

Certainly GPU development won't be standing still for Intel (or anyone else), but keep in mind that today we have a good, overclockable quad-core available for what, $250-$300? In two years' time, we might be able to get an octal-core CPU in a single socket for the same price, running at a higher frequency with better IPC and faster system RAM. If, by that point, then-current-generation games can run well and look good using real-time ray tracing on the CPU alone, you could put together a viable gaming PC for the cost of a CPU, motherboard, and RAM. No more $400+ video cards, no more broken/buggy video card drivers... hell, you'd be able to run games like that in your VM of choice if you care to boot Linux and run a Windows VM of some kind. Or it would just be a lot easier to port games to other operating systems like OS X and/or Linux.

If we could go back to the days of gaming performance being primarily (if not solely) based on CPU performance, I'd sooner save up money and blow it on a decent phase-change system I could use to OC the hell out of numerous CPUs through multiple upgrade cycles, instead of burning it up buying video cards that go obsolete every year or so.

 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
Originally posted by: Nemesis 1
That's an interesting still picture you chose. First off, it shows ray tracing at its worst possible. Notice the lighting above the sculpture: it isn't used as a light source at all. After looking at the scene again, the light source above the sculpture is present, but the primary light source is absent. If you follow the shadow, that light source should also be in the scene; if you look at the shadows again, you can determine where the light source is. The sculpture's face should have been lit by the primary light source, and it isn't. That is a poor example at best. There seems to be only one source in that scene in use at any one instance, and a reflection source also seems to be absent.

I didn't choose that picture; it's the one the 3ds Max manual uses for the quoted text. They both use the exact same light sources.

That's the point of combining the two methods. I've done thousands of renders using both ray tracing and radiosity, and the benefit of radiosity is that you do not have to have multiple light sources to illuminate a scene.

Often, when using ray tracing alone, you have to add lights for ambient light. Radiosity does away with that and allows you to light a room just as it would be in the real world. As the quoted text states, ray tracing does not handle diffuse inter-reflections, which is why the first scene appears dark.

I cannot show you a real-time video of radiosity in "game play" because there is no game engine or hardware that can do it in real time. Radiosity requires processing each frame of animation before the actual render can even take place.

Something else that isn't being discussed is that in professional ray tracing you're going to use multiple rays for each pixel, and often at least 10 ray bounces per pixel, sometimes as many as 30. I don't see processing power being able to do that in real time anytime in the near future.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
OK, first let's look at Nehalem C, not Nehalem, as Nehalem C will be out in 2009. Intel has already said Nehalem C will be much higher performance than Nehalem.

Now, if you read all the links in this thread, nowhere does it say Intel will be using, or has even talked about, multiple rays. The fact is, all sources seem to say single ray; even Daniel doesn't discuss multiple rays.

Back to Nehalem C. This is a modular-design CPU, which means that of the 8 cores present on Nehalem, 2 could be for graphics. One core could be a vertex unit; again, if you have read this thread, you will know that's a big deal.

The other could be a Larrabee core. What is Larrabee? Larrabee is Intel's first tera-scale chip, having 16 mini-cores, each capable of 2 threads, for a combined 32 threads. Add in the vertex unit and the 12 threads of the remaining 6 cores, and also factor in SSE4.1: what we have is a highly parallel CPU capable of 44 threads. This would give Intel the processing power to produce the billions of single rays required to render a complex scene.

I think this is more than enough to do modern games at acceptable frame rates for good gameplay. Will it do more FPS than NV or ATI? No. Would I choose this over NV or ATI? If it renders scenes like the one I linked, in a heartbeat. If I could get 30fps in the most complex scenes on a 30" monitor with those kinds of renders, I would be in heaven and completely immersed in gameplay. Add in whatever Havok is bringing to the table, and this looks very promising.
It would allow game developers to bring games to market much faster.

Everything about RtRt is a win-win for us. Intel believes they will have the processing power to do this in two short years. If nothing else, think about where processing power will be in two years. God, this is an enthusiast's dream: no more GPU upgrades, and new software for RtRt will come out all the time, further improving performance.

Daniel, you are the man, like the little train going up the mountain: "I think I can, I think I can." And after reaching the peak: "I knew I could, I knew I could."