Microcoding is a very old idea which was inherently RISC. I guess you’re really talking about the amount of chip real estate dedicated to RISCesque architecture (as we know it in the latter day): big buffers, big pipelines, and so on.
I’m also talking about what is responsible for the performance delivered to your application. Without the new micro-architecture, Intel’s “Pentium” line of CPUs would have hit a wall years ago for no other reason than heat dissipation. There are limits on how fast you can clock a CISC CPU and still be able to remove the heat, and CISC CPUs (which used multiple clocks per instruction) needed ever-faster clocking to achieve throughput.
With the RISC approach, sure, clock speed still mattered, but the performance gain came from getting instructions down to one clock per issue and going deep in a pipeline. Later instructions were “pre-executed” using functional units that weren’t busy with the current instruction, in effect getting more work out of the CPU per clock cycle.
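To see why one-clock-per-issue pipelining wins even without a faster clock, here’s a back-of-the-envelope sketch (the function names and cycle counts are illustrative assumptions, not measurements of any real CPU): a multi-cycle core occupies the machine for several clocks per instruction, while a pipelined single-issue core pays a one-time fill cost and then retires one instruction every clock.

```python
# Toy cycle-count model (illustrative numbers, not real hardware specs).

def multicycle_cisc_cycles(n_instructions, cycles_per_instruction=4):
    # A multi-cycle core: each instruction ties up the machine
    # for several clocks before the next one can start.
    return n_instructions * cycles_per_instruction

def pipelined_risc_cycles(n_instructions, pipeline_depth=5):
    # A single-issue pipeline: after the pipe fills (depth - 1 clocks),
    # one instruction completes on every subsequent clock.
    return pipeline_depth + n_instructions - 1

n = 1000
print(multicycle_cisc_cycles(n))   # 4000 cycles at the same clock rate
print(pipelined_risc_cycles(n))    # 1004 cycles at the same clock rate
```

At the same clock frequency the pipelined core finishes the same work in roughly a quarter of the cycles here, which is the “more work per clock” point; real pipelines lose some of that to stalls and branch mispredictions, which this sketch ignores.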