And THERE is the invalid assumption that I'm driving at. Many optimizing compilers can get to that "almost as fast as assembler" level; in fact, Ada has achieved that as well (source), and with Ada you additionally get things like array range-checking and compile-time error-checking. (And no, not all range checks need to propagate to runtime: consider indexing into an array indexed by Character with a variable of a subtype ranged 'a'..'z'.)
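A minimal Ada sketch of that last point (names are mine, chosen for illustration): because the loop variable is constrained to the subtype 'a' .. 'z', and the array is indexed by that same subtype, the compiler can prove every index is in range and omit the runtime check entirely.

```ada
procedure Demo is
   --  Subtype constraint: values of Lower are provably in 'a' .. 'z'.
   subtype Lower is Character range 'a' .. 'z';
   Table : array (Lower) of Natural := (others => 0);
begin
   for C in Lower loop
      --  C is of subtype Lower, so no index check is needed here;
      --  the "range check" was discharged at compile time.
      Table (C) := Table (C) + 1;
   end loop;
end Demo;
```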
If an app is handling extremely high volume and throughput, C is the best.
C makes you dependent on far too many assumptions and limitations. The reason "parallel computing" is such a big deal is that C and C++ don't handle it at all without some sort of library support; that support is derived from the OS, which [in many cases] is developed to POSIX compliance (a UNIX-ism, and itself heavily dependent on the C philosophies). Ada, by contrast, has the task construct built into the language, so the problem has been solved there for over 30 years. The whole mess in the industry is because of that very mentality.
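To illustrate the contrast, here is a minimal sketch of the Ada task construct (names are illustrative): concurrency is expressed in the language itself, with no OS-specific threading library in the source text.

```ada
with Ada.Text_IO;

procedure Tasks_Demo is
   --  A task is declared like any other program unit; no library calls,
   --  no thread handles, no explicit create/join API.
   task Worker;

   task body Worker is
   begin
      Ada.Text_IO.Put_Line ("hello from a task");
   end Worker;

begin
   --  Worker starts running concurrently as soon as Tasks_Demo begins.
   Ada.Text_IO.Put_Line ("hello from the main program");
end Tasks_Demo;
--  The main procedure is the task's master: it does not complete
--  until Worker has finished, so no explicit join is written.
```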
Furthermore, it must be asked why the insurance and banking industries (especially banking, given the volume of electronic-transaction throughput) still use COBOL (link1, link2) if C is so great for throughput. The answer is partly "if it ain't broke, don't fix it"; the other part is that C lacks one of the abilities that COBOL has: a 30-year-old non-trivial program text can compile correctly on a vastly different system.