What is advised and what is allowed are, well, not the same thing.
That's not the point, is it?
The compiler just needs to emit a 'pop'. Whether the instruction is actually called 'pop reg' or 'move.l (sp)+, reg' is irrelevant to a compiler. The difference is only in the mnemonic representation for humans.
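To illustrate: here's a minimal sketch in C of what "emit a pop" boils down to inside a backend. The function name and buffer handling are made up for this example; the opcode bytes are the actual encodings for x86 'pop eax' and 68k 'move.l (sp)+,d0'.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical backend sketch: "pop the top of the stack into register 0".
 * The compiler only picks an encoding per target; the mnemonic exists
 * purely for humans. */
typedef enum { TARGET_X86, TARGET_68K } target_t;

static size_t emit_pop_reg0(target_t target, uint8_t *buf)
{
    switch (target) {
    case TARGET_X86:
        buf[0] = 0x58;                  /* x86: pop eax (32-bit mode) */
        return 1;
    case TARGET_68K:
        buf[0] = 0x20; buf[1] = 0x1F;   /* 68k: move.l (sp)+,d0       */
        return 2;
    }
    return 0;
}

int main(void)
{
    uint8_t buf[4];
    size_t n = emit_pop_reg0(TARGET_68K, buf);
    for (size_t i = 0; i < n; i++)
        printf("%02X ", buf[i]);        /* prints: 20 1F */
    printf("\n");
    return 0;
}
```

The mnemonic never appears anywhere in that process; it only exists for humans reading an assembly listing or a disassembly.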
As for what push/pop is on the x86: the opcodes generated are not general-purpose but very specific to certain operations. I believe on the 68k you can actually move memory to memory (e.g. move.l (sp)+, (reg)+ ). There is no way to do that on the x86. I don't know offhand, but I would bet the encoding of this is probably larger on average on the 68k than on the x86.
You can do memory-to-memory on x86; that's what movs is for. The beauty of the 68k addressing modes is that you can do push/pop and lods/stos/movs all with the same instruction and two addressing modes (and no need for a direction flag either).
Actually, it does not really have an accumulator; in a sort of balancing act, it has an opcode-optimized register. Operations on the A register can be done with smaller opcodes than the same operations on other registers. It does have some operations that require use of the A register, BCD for example, but for general operations you are not required to use it.
Depends on what you are talking about. 16-bit mode is more restricted in its use of registers than 32-bit mode. Things like mul/div are fixed to the accumulator, as are the above-mentioned lods and stos, to name but a few.
Despite all its legacy instructions, the x86 produces a small code footprint relative to many other current architectures.
That was never debated. The argument was that this somehow made a compiler's job easier. I think that's completely unrelated.
That would be like saying that a compiler has a harder job optimizing for size in x64 mode than in x86, simply because the average instruction size is slightly larger in x64 mode.
No, you can use the exact same algorithms; the results will just be slightly larger... Then again, even hand-tuned assembly will be slightly larger. That's just a side-effect of the instruction set. It has nothing to do with how easy or difficult it would be to optimize for size.
Did I say that? I don't think so! I said it was a trade-off.
You said CISC. The rest was implied.
The 68k did have a very small code footprint; I would say, though, that this may have led to its fall from favor. Its lack of virtual addressing and opcode extensibility slowed its development, keeping it behind the curve. The movement of its base to RISC architectures didn't help either.
Lack of virtual addressing? Que?
The 68k has had an MMU on board since the 030, if I'm not mistaken.
As for opcode extensibility... Motorola did plenty of that. The 68k used a variable-length instruction encoding scheme, much like x86. The difference was that Motorola stuck to 16-bit words, so each instruction was a multiple of 2 bytes.
As you yourself said, it still had a very small code footprint. Not very different from x86, even though x86 has plenty of one-byte opcodes; those don't necessarily guarantee smaller code. The 68k got a lot of benefit from its larger register file and its clever addressing modes. It often just needed fewer instructions to do the same job as its x86 competitor. The 68k also generally had higher IPC at the same clock speed than its x86 competitors.
It's easy to argue that there are unused x86 instructions that could easily be dropped. However, the ISA can equally be extended further, as shown by the adoption of x64, MMX, SSE, etc.
I think you'll notice, though, that extensions such as x64 and SSE are not all that similar to most of the classic x86 instruction set. They're somewhere between 68k and RISC. A lot of the concepts of the classic x86 instruction set are not applied in these extensions.
I don't think the use of a stack/accumulator has anything to do with memory speeds, but rather with the complexity of the internal units. It's a lot easier to latch to a known unit/location than to latch to one of many units/locations. It's really a way of limiting where operations take place.
I don't think that has anything to do with it. Why not? Because you can (and will) always use an internal register for that. There's no need to explicitly expose it to the programmer, though, and make him use two instructions (a load and a store) rather than one.
Nothing is really backwards when you relate software to the hardware it executes on.
That is my point, but perhaps you didn't fully understand it yet.
x86 was not a high-end CPU. It was meant for microcomputers.
Back in those days, there was a huge difference between what you had on your desktop and where the cutting-edge software and hardware development took place: mainframes, minicomputers... that sort of thing. The so-called Big Iron.
*THAT* is where compiler development took place. Not on simple PCs. Those PCs weren't powerful enough. In those days you didn't just open up a GUI with an IDE on your PC and compile some code while you waited. As I said, it wasn't until the early 90s that compilers and simple (text-based) IDEs became commonplace on PCs.
Heck, if you look at the most popular computer of the 80s, the C64... it didn't even *have* any compilers, because it was just physically incapable of running any. With only 64 KB of memory and just short of 1 MHz, it simply wasn't possible to try and write some C/C++ code. Heck, the C64 didn't even have mul or div instructions in hardware. You had to implement those yourself, and optimize them for each purpose. Don't process more bits than you have to! Always a nice pop-quiz for the younger generation of programmers: how do you implement a mul or div with just adds, subs, shifts and compares?
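For the curious: one possible answer, sketched in C rather than 6502 assembly (the routine names and the 8-bit widths are just my choice for the example). Shift-and-add for multiply, shift-and-subtract for divide; nothing but adds, subs, shifts and compares:

```c
#include <stdint.h>
#include <stdio.h>

/* Shift-and-add multiply: walk the bits of b, adding a shifted copy of a
 * for every set bit. */
static uint16_t mul8(uint8_t a, uint8_t b)
{
    uint16_t result = 0;
    uint16_t addend = a;
    while (b) {
        if (b & 1)
            result += addend;
        addend <<= 1;
        b >>= 1;
    }
    return result;
}

/* Shift-and-subtract (restoring) divide: build the quotient bit by bit.
 * Assumes d != 0. */
static uint8_t div8(uint8_t n, uint8_t d, uint8_t *rem)
{
    uint8_t q = 0;
    uint16_t r = 0;                       /* 9 bits are enough; 16 is handy */
    for (int i = 7; i >= 0; i--) {
        r = (uint16_t)((r << 1) | ((n >> i) & 1));
        if (r >= d) {                     /* compare ...                    */
            r -= d;                       /* ... then subtract              */
            q |= (uint8_t)(1u << i);
        }
    }
    if (rem)
        *rem = (uint8_t)r;
    return q;
}

int main(void)
{
    uint8_t r;
    printf("13 * 11 = %u\n", mul8(13, 11));                  /* 143      */
    printf("143 / 11 = %u rem %u\n", div8(143, 11, &r), r);  /* 13 rem 0 */
    return 0;
}
```

And on a real C64 you'd then specialize and unroll these per use, because every cycle and every bit counted.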
At one point I found a simple Pascal compiler for the C64... but it could only handle a few hundred lines of code at best. It just wasn't possible to write anything meaningful. That's why pretty much everything on a C64 was written completely in assembly (even stuff like Geos).
So no, compilers were developed on stuff like PDP-8, PDP-11 and all that. The same place where unix came from, among other things. x86 didn't have a whole lot to do with any of that.