EDRAM size was, what, 128 MB? That is a rather modest size.
1080p = 1920 x 1080 = 2073600 pixels
Each pixel needs 32 bits of data to display True Color
2073600 x 32-bit = 66355200 bits
Bits to bytes conversion
66355200 / 8 = 8294400 Bytes
8294400 / 1024 = 8100 Kilobytes
8100 / 1024 = 7.91 Megabytes
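The chain of conversions above can be sanity-checked with a few lines of Python (a quick sketch, nothing more):

```python
# Verify the 1080p frame-buffer arithmetic above.
width, height = 1920, 1080      # 1080p
bits_per_pixel = 32             # True Color (8 bits per RGBA channel)

total_bits = width * height * bits_per_pixel
total_bytes = total_bits // 8
total_mb = total_bytes / 1024 / 1024

print(total_bits)            # 66355200
print(total_bytes)           # 8294400
print(round(total_mb, 2))    # 7.91
```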
Therefore a 1080p image is approx. 8 MB in size. Now, for flicker-free screen display, double-buffering is mandatory (8 x 2 = 16 MB), and many apps call for triple-buffering to avoid tearing (8 x 3 = 24 MB).
Add 4x FSAA for OS X's desktop composition, which is necessary for sub-pixel / font rendering, etc. (24 x 4 = 96 MB).
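Taking those multipliers at face value, the running totals work out like this (a sketch using the exact ~7.91 MB per frame; the 96 MB figure in the post comes from rounding each frame up to 8 MB):

```python
# Buffer totals from the post's multipliers, unrounded.
frame_mb = 1920 * 1080 * 4 / 1024 / 1024   # 32-bit 1080p frame, ~7.91 MB

double_mb = 2 * frame_mb                   # double-buffering, ~15.8 MB
triple_mb = 3 * frame_mb                   # triple-buffering, ~23.7 MB
fsaa4x_mb = 4 * triple_mb                  # 4 stored samples per pixel, ~94.9 MB

print(round(fsaa4x_mb, 1))                 # 94.9
```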
That is before we get into Z-buffers and texture storage for 3D, which varies with each app's demands. Modern APIs also call for extra frames to "render ahead." For 3D, 128 MB quickly becomes a bottleneck unless other techniques are employed to reduce the frame buffer footprint (memory compression? Tile-based rendering?). In any case, Intel could not have soldiered on with 128 MB of EDRAM as a high-end option even for 1080p, and even then it would have had to license the graphics IP to make it work. Furthermore, monitor resolutions have been on a relentless climb since the introduction of EDRAM, and there does not seem to be an end to it just yet. Add more demanding use cases for professionals/enthusiasts (e.g. dual-monitor setups), and the amount of EDRAM required for high-end SKUs would have to grow larger and larger, with no end in sight.
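To illustrate how fast the budget blows up at higher resolutions, here is a rough sketch using the same multipliers as above (triple-buffering plus 4 stored samples per pixel, ignoring Z-buffers, textures, compression, and tiling entirely; the resolutions are just illustrative):

```python
# Back-of-the-envelope frame-buffer budget at common resolutions.
def framebuffer_mb(w, h, buffers=3, samples=4, bytes_per_pixel=4):
    """Total MB for `buffers` frames with `samples` stored samples/pixel."""
    return w * h * bytes_per_pixel * buffers * samples / 1024 / 1024

resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
    "5K":    (5120, 2880),
}

for name, (w, h) in resolutions.items():
    print(name, round(framebuffer_mb(w, h), 1), "MB")
# 1080p is already ~94.9 MB; everything above it sails past 128 MB.
```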
I guess this is where HBM comes into play. Doesn't AMD hold some key technology enabling HBM?