There must be a 2+1 as well; the Celeron and Pentium parts use GT1 graphics.
The GT1 (and non ULT dual-core) products were made the same way they always are, by blowing fuses and salvaging dies.
Anyway, the point was that even as far back as then they had more configurations, despite the potentially better yield. They did the same with Atom, which had six or so separate dies.
If they were able to do that back then, why did they regress in that department with Tiger Lake/Alder Lake supposedly having "horrible yield", as some like to think?
Intel may have had 8 designs queued up and ready to go, but they only ever produced 4 of them. That's the exact same number of client dies as they are currently producing for Alder Lake. There has to be sufficient volume to justify taping out, qualifying, and proceeding with a volume ramp of a new layout.

Intel had to do 4+3e and 2+3 ULT because they had customers (primarily Apple) that wanted them. 4+2 and 2+2 ULT were the mainstream parts that everyone wanted and therefore clearly justified. Non-ULT 2+3 didn't have enough takers to bother with. The GT1 and non-ULT 2+2 dies would only get green-lit at the point where yields were good enough and demand strong enough that Intel was leaving money on the table by partially disabling a significant percentage of perfectly good dies just to fill customer orders. Intel never got to that point on 22nm, even with the insertion of Haswell Refresh.
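The "leaving money on the table" trade-off above can be sketched with a back-of-envelope calculation. All of the numbers below are hypothetical, picked only to show the shape of the decision; Intel's actual tapeout costs, volumes, and bin pricing are not public.

```python
# Back-of-envelope sketch of the salvage-vs-new-layout trade-off.
# Every figure here is made up for illustration; none come from Intel.

def money_left_on_table(good_dies_per_month, fraction_fused_down,
                        price_delta_per_die):
    """Monthly revenue lost by fusing down perfectly good dies to a lower bin."""
    return good_dies_per_month * fraction_fused_down * price_delta_per_die

def months_to_recoup(tapeout_cost, monthly_loss):
    """How long that lost revenue takes to pay back a dedicated tapeout."""
    return tapeout_cost / monthly_loss

# Hypothetical: 2M good dies/month, 10% fused down to fill lower-bin orders,
# $15 average selling-price delta per die, $50M to tape out and qualify a
# new cut-down layout.
loss = money_left_on_table(2_000_000, 0.10, 15)
print(loss)                                 # 3,000,000 per month
print(months_to_recoup(50_000_000, loss))   # ~16.7 months to break even
```

The point of the sketch: a dedicated die only makes sense once yields and demand push the monthly loss high enough that the tapeout pays for itself within the product's remaining lifetime, which per the argument above never happened on 22nm.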
Intel hasn't shared how many designs they might have had in the pipeline for Alder Lake, but from what they have shared regarding the 22nm, 14nm, and 10nm ramps, they would have been delusional to think that wasting time on additional layouts was merited.
The heavy use of multi-patterning and SAQP with 10nm necessitates more manufacturing steps than 14nm, which in turn makes it effectively impossible for them to ever match the defect densities or cycle times of 14nm. We're looking at a process that technically first achieved PRQ in Q4'17, but didn't exceed 14nm in terms of WSPM until 3.5 years later in Q2'21.

Intel hasn't attempted to copy 10nm to one of their four major leading-edge manufacturing sites, and continues to maintain significant 14nm capacity at the other three. Intel has also been capacity constrained for the better part of the last 3.5 years, much of the capital investment for 10nm has been fully depreciated at this point, and they've had enough wafer starts to climb the yield curve. Even considering all that, Intel isn't in a hurry to convert the remainder of their 14nm lines to Intel 7, which means it probably doesn't make economic sense for them to do so.

Yet Intel 7 offers up to a 2.7x density increase and ~26% better perf/W compared to the latest version of 14nm. This would all seem to point to 10nm yields not being awesome and cycle times being brutal.
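The yield side of this argument can be made concrete with the standard Poisson die-yield model, yield = exp(-D0 * A), where D0 is defect density and A is die area. The defect densities below are illustrative placeholders, not Intel's actual numbers; the sketch only shows why a process with more patterning steps (and thus, plausibly, a higher D0) struggles to match a simpler one.

```python
import math

# Standard Poisson die-yield model: yield = exp(-D0 * A), with D0 the
# defect density in defects/cm^2 and A the die area in cm^2. Every extra
# multi-patterning/SAQP step is another opportunity to introduce defects,
# which pushes D0 up relative to a process with fewer steps.
def die_yield(defect_density_per_cm2, die_area_cm2):
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Illustrative only: a mature process at D0 = 0.1 vs a newer, more complex
# one at D0 = 0.4, for a ~1.2 cm^2 client die.
area = 1.2
print(die_yield(0.1, area))  # ~0.89 -> ~89% of dies good
print(die_yield(0.4, area))  # ~0.62 -> ~62% of dies good
```

The same model also shows why "climbing the yield curve" matters so much: every reduction in D0 compounds across the whole die area, and larger dies are punished disproportionately.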