Originally posted by: Kakkoii
Originally posted by: Idontcare
Originally posted by: scooterlibby
Guys guys, let's not fight. MIMD relies on instruction sets that are massively parallel, yet infinitely unitary. Think of a MIMD as a random walk with drift and time trend model, the command function kaleidoscopes DIMM and SATA units for in-order reprieves, demanding superscalar multithreading on GPGPU + SOI processes. It's similar to the Hessian matrix, except you take the first third derivative to maximize core output. Now, I am an industry insider, and you may be thinking "This is all BS, what do SATA and DIMM have to do with anything ever?" Well, I assure you that my inside knowledge may render the explanation useless, but take comfort in its indisputable truisms.
Based on insider info, I know this to be true firsthand. The value of the jacobian transformation cannot be overstated when it comes to leveraging MIMD to optimal effect.
Think you could explain MIMD in a simplified/dumbed down way for the rest of us?
Sorry Kakkoii, we were just being sarcastic. Scooterlibby's post and mine are a bunch of true terms (hessian and jacobian are actual mathematical entities, not fictional) but it's all pieced together to just be a steaming pile of nonsense for humor purposes.
Nothing in those two posts has anything to do with the reality of MIMD, or GPUs, or insider info. It's all tongue in cheek.
As for actually explaining MIMD... yeah, if I ever figure it out myself then I'd be happy to explain it. At this time though, anything I would have to say on the matter would be misinformation at best, or just more first third derivatives of the adjunct to the jacobian matrix at worst.
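For what it's worth, the one-line version is just the acronym: MIMD = Multiple Instruction, Multiple Data, i.e. several processors each running their *own* instruction stream on their *own* data, as opposed to SIMD where one instruction is applied across many data elements at once. A minimal sketch of that idea, using Python threads purely as a stand-in for independent processors (the function names and data here are made up for illustration):

```python
import threading

# MIMD in miniature: two threads run *different* code on *different* data.
# (Contrast with SIMD, where one instruction operates on many data elements.)
results = {}

def summer(data):
    # instruction stream 1: numeric reduction
    results["sum"] = sum(data)

def joiner(data):
    # instruction stream 2: string concatenation
    results["joined"] = "-".join(data)

t1 = threading.Thread(target=summer, args=([1, 2, 3],))
t2 = threading.Thread(target=joiner, args=(["a", "b", "c"],))
t1.start(); t2.start()
t1.join(); t2.join()

print(results["sum"])     # 6
print(results["joined"])  # a-b-c
```

Multicore CPUs are the everyday example of MIMD hardware; classic GPU shader cores are closer to the SIMD side of the taxonomy.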
