-12V has been gone for ages. A supply I bought in 2005 didn't have it, and I'm pretty sure PCI didn't need it either. The only real "use" for it was RS-232 serial ports, and those have had charge-pump-based MAX232s or similar available for the past 20 years. BJT TTL logic required bipolar supplies, but computers have been nearly all unipolar CMOS for at least 20 years now.
> move up to 48V to align better with the DC power distribution you see in the server space.
48V is a standard that came about on account of telecom "central offices" needing 48V for the POTS standard. The amount of that gear installed is waning, and nearly everyone has moved to 120V and even 240V distribution for datacenter-like applications.
Personally, I suspect the next generation of the ATX spec will define a way for the PSU to communicate digitally with the host motherboard through the ATX connector, via something resembling USB- or I2C-based signaling. That would let PSU vendors incorporate voltage and current instrumentation into their PSUs, along with read-back of internal fan speeds and some host-side control of those fans.
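To make that concrete, here's a toy sketch of what telemetry read-back over an I2C-style register interface could look like. Everything here is invented for illustration (register addresses, scaling factors, the `FakePsuBus` stand-in); no such ATX register map exists yet.

```python
# Hypothetical sketch: decoding telemetry from a "smart" ATX PSU over an
# I2C-style bus. The register map and scalings below are made up for
# illustration; a real standard would define its own.

REG_VOLTAGE_12V = 0x10   # 12V rail voltage, millivolts, 16-bit
REG_CURRENT_12V = 0x12   # 12V rail current, milliamps, 16-bit
REG_FAN_RPM     = 0x20   # internal fan speed, RPM, 16-bit
REG_FAN_TARGET  = 0x21   # writable: host-requested fan duty, 0-100 %

class FakePsuBus:
    """Stand-in for a real SMBus/I2C driver, so the sketch runs anywhere."""
    def __init__(self):
        self.regs = {REG_VOLTAGE_12V: 12040, REG_CURRENT_12V: 8300,
                     REG_FAN_RPM: 1450, REG_FAN_TARGET: 40}
    def read_word(self, reg):
        return self.regs[reg]
    def write_word(self, reg, value):
        self.regs[reg] = value

def read_rail_power(bus):
    """Return (volts, amps, watts) for the 12V rail."""
    volts = bus.read_word(REG_VOLTAGE_12V) / 1000.0
    amps = bus.read_word(REG_CURRENT_12V) / 1000.0
    return volts, amps, volts * amps

bus = FakePsuBus()
v, a, w = read_rail_power(bus)
print(f"12V rail: {v:.3f} V, {a:.3f} A, {w:.1f} W")
bus.write_word(REG_FAN_TARGET, 60)  # host asks for more cooling
```

The server world already does something similar over SMBus; the speculation here is just that commodity ATX would pick up an equivalent.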
Going a step further, I expect future PSUs to incorporate some rudimentary capability for receiving power-management signals over the power lines. Those signals would ultimately reach the mobo, where host-based firmware could ratchet overall system power consumption up or down by manipulating the appropriate features on the CPU, PCH, and GPU, and even OS-level features (ie: software might slow down or pause the execution of lower-priority jobs when power budgets are tight!). The signaling could be power-line-carrier based, or could simply use highly accurate, high-resolution frequency measurement hardware: as power systems engineers know, when the frequency dips below 50Hz/60Hz there is a shortfall of energy in the power system, and AGC in a distributed power system relies on feedback control through frequency measurement to regulate generator output.
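The frequency-droop idea fits in a few lines: when measured mains frequency sags below nominal, trim the system's power budget proportionally, never going below some floor needed to stay up. This is a sketch of the control law only; the droop gain, budget, and floor values are made-up numbers, not from any spec.

```python
def power_budget(freq_hz, nominal_hz=60.0, max_watts=500.0,
                 droop_watts_per_hz=1000.0, floor_watts=100.0):
    """Map measured grid frequency to a host power budget.

    Below nominal frequency the grid is short on generation, so shed
    load proportionally to the sag; clamp between a survival floor and
    the full budget. All constants are illustrative.
    """
    shortfall = max(0.0, nominal_hz - freq_hz)
    budget = max_watts - droop_watts_per_hz * shortfall
    return max(floor_watts, min(max_watts, budget))

print(power_budget(60.00))  # at nominal: full budget
print(power_budget(59.90))  # mild sag: budget trimmed
print(power_budget(59.50))  # deep sag: clamped at the floor
```

This mirrors the droop control generators already use, just applied on the load side: the host firmware would feed the resulting budget into CPU/GPU power limits.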
So basically in a nutshell:
* Future PSUs will be 'smart' with respect to the power grid they're attached to.
* Future PSUs will be able to talk digitally with the host to manage noise, temperature, etc., with users deciding whether they want more fan noise or would prefer component de-rating.
* Future PSUs will be "configuration-aware," so they don't keep more hardware "online" than is truly necessary for the peak loads a system will actually experience. For example, a 1000W supply might have 3-4 "trains" (some might use the term 'phase', but that's really not an accurate use of the term in electrical engineering!), and if plugged into a typical consumer PC that peaks at 100W, it would shut down all but one train. A "mini supply" might even be incorporated to feed standby loads at extremely high efficiency, with the big supply only switched on through a relay when the machine is turned on. Microprocessor control and coordination with the host will let PSUs adjust their electronics to the current (and near-term predicted) operating state of the machine for maximum efficiency.
* Future PSUs will be able to monitor and report their health to the host for maintenance and trending purposes through firmware-based self-tests -- kind of like "S.M.A.R.T." for hard drives and SSDs. If you buy a used PSU, you'll be able to find out how many hours the previous owner ran it and how many kWh it converted cumulatively. Hosting providers and datacenters will be able to extract and use this data for billing and planning purposes!
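The SMART-style odometer from the last bullet can be sketched as a toy class. Lifetime hours and cumulative kWh come straight from the bullet; the field names and the rest of the structure are invented, and a real PSU would keep these counters in its own nonvolatile memory, exposed read-only to the host.

```python
class PsuHealthLog:
    """Toy SMART-style odometer for a PSU: lifetime hours and kWh.

    A real implementation would accumulate in the PSU's nonvolatile
    storage; this sketch just accumulates in RAM.
    """
    def __init__(self):
        self.power_on_hours = 0.0
        self.kwh_converted = 0.0
        self.self_test_failures = 0

    def log_interval(self, hours, avg_watts):
        """Record one sampling interval of operation."""
        self.power_on_hours += hours
        self.kwh_converted += avg_watts * hours / 1000.0

    def report(self):
        """Snapshot of the counters, as a host tool might display them."""
        return {"power_on_hours": round(self.power_on_hours, 1),
                "kwh_converted": round(self.kwh_converted, 2),
                "self_test_failures": self.self_test_failures}

# Simulate a year of 8-hour days at a 150 W average draw.
log = PsuHealthLog()
for _ in range(365):
    log.log_interval(8, 150)
print(log.report())
```

A datacenter could scrape exactly this kind of report over the management bus for the billing and planning uses mentioned above.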
Why is all this going to happen? Because power is almost as expensive now as actual hardware over its service life. "Consumers" largely don't even use ATX anymore -- they're on tablets, laptops, NUCs. So the ATX people, to some extent, can run a bit wild with adding very powerful (no pun intended) features to the ATX standard to keep it modern.