Panino Manino
Golden Member
- Jan 28, 2017
Hum... maybe Samsung wants to do more and better things offline in their Galaxies.
They are already the best phone "AI".
Imo server-side AI is pretty pointless. It would be much better if on-device processing increased.
Thanks, but I think on-device usefulness peaked at low-light photo noise reduction.
Not in Samsung's case.
No, that was my opinion.
I meant that Samsung's images are extremely noisy. They aren't really doing anything meaningful on-device.
That's me stating I really don't care for the rest of it, especially if it's in semi-regular use when I'm not even using the phone.
Ah k, I haven't really made much of an effort to check myself.
It's mostly just gimmicks, like how they use machine learning to detect the moon in pics and introduce details in photos that do not come from the sensor. (more testing on reddit)
Sony sensors are good. What manufacturers do with 'em, though, is extremely variable.
I know they make a big noise about their ISOCELL sensors, but I think Sony's sensors usually have the upper hand in quality, though you have to go out of your way to find out which one any given phone is using.
They'll be using the same sensors for the base S26 as the S23 (the main camera isn't exactly the same as the S22's, but in practice it is). Those cameras were never great hardware-wise. The UW doesn't have AF, the tele is a garbage pin-sized 10MP camera that shoots upscaled images by default, and the main camera has a poor lens. They could probably achieve better results if they completely redid their image processing, but currently they are far behind everyone.
Even Sony cameras sometimes produced crap.
Hoho, let's decode the above statement regarding the Exynos 2600's iGPU: Xclipse 960, aka AMD's Juno RDNA IP. It is based on a new architecture, definitely referring to a new generation of RDNA. Before I proceed, let's assume the story between Samsung and AMD: Samsung wants to show off more selling features of the X960, but AMD does not want to reveal too many details about the new RDNA IP, because the same IP will be used for all future APUs (N3P). AMD definitely has the final word on the statement; thus, the claims should be correct.
Check my table regarding the relationship between STX (N4P) and the E2500 (SF3): the performance delta is caused by clock speed; the features should be similar... The Exynos 2500 uses RDNA3+/3.5 IP, so it is safe to assume the Xclipse 960 uses RDNA4+/4.5 IP. However, RDNA4 (N4P) is designed for desktop dGPUs; some leakers keep saying RDNA4 is not for mobile SoCs, so what IP are Samsung/AMD using? Let's check the features below:
- "The computing performance of the Exynos Xclipse 960 GPU is twice as high as that of its predecessor." This is a bold claim: Samsung says computing performance is double that of the Xclipse 950. Hoho, even RDNA4's improvement is only about 20% in rasterization IPC. Then where does the 100% performance claim come from? Hoho, if you read my speculation about doubling SPs per CU (Compute Unit), then at the same clock speed, I could claim double the performance per CU as well.
It is not the full picture, because the number of CUs has been cut in half; that's why total FP32 throughput still remains the same.
- If the above feature is considered overselling, then the second statement is underselling: "This drives a ray tracing performance improvement of up to 50%." With 8 CUs, the X960 still manages to offer 50% better RT performance than the X950's 16 CUs/RT units. Hoho, anyone want to estimate how powerful each Radiance core in the X960 is?
As I speculated before, Soundwave (N3P) is like a bigger brother of the E2600. The E2600 has 8 CUs at 4.1 TF, so how many CUs would Soundwave have? Don't believe what AMD marketing people tell you; I'd rather believe MLID's story regarding Senior Management about SWV. And if anyone still believes Medusa Point has an RDNA3.5+ GPU, hoho...
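The "double per-CU compute, same total FP32" arithmetic above can be sanity-checked with back-of-envelope math. Only the 8 CU count and the ~4.1 TF figure come from the post; the 128/256 ALUs-per-CU layouts, the 1.0 GHz clock, and the 2 FLOPs-per-FMA factor are illustrative assumptions in the usual RDNA style, not confirmed specs.

```python
# Back-of-envelope peak FP32 throughput:
# TFLOPS = CUs * shader ALUs per CU * 2 FLOPs per FMA * clock (GHz) / 1000
def peak_fp32_tflops(cus, sps_per_cu, clock_ghz):
    return cus * sps_per_cu * 2 * clock_ghz / 1000

# Hypothetical X950-style layout: 16 CUs with 128 ALUs each.
old = peak_fp32_tflops(16, 128, 1.0)
# Speculated X960 layout: half the CUs, double the ALUs per CU, same clock.
new = peak_fp32_tflops(8, 256, 1.0)
print(old, new)  # both ~4.1 TF: per-CU compute doubles, chip total stays flat
```

At an assumed 1.0 GHz, both layouts land at 4.096 TF, close to the 4.1 TF figure quoted for the E2600, which is consistent with the "twice the compute per CU, half the CUs" reading.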

do you really believe AMD is incapable of updating iGPU architecture?

no idea. but I remember Vega being present in iGPUs for a very, very long time

Do you really think AMD will bundle such old technology in an APU which will be available from 2027 through 2029? Hoho, use your brain for once, please.

sir, just one example
if you are talking specifically of handhelds, that should be the Z3 Extreme, which I believe is the Medusa Premium (CPU = 4 + 8 + 2) (GPU = 24 CU / 12 WGP AT4)
