
To: stainlessbanner
Thanks for the heads up.

I actually have one more thing to add to this discussion -- OO programming also can increase code size tremendously, in return for adding flexibility, stability and code reusability.

Tweaking a program for optimal speed always means doing serious damage to your OO architecture. Until about two or three years ago, good OO design was typically non-existent. The new languages, Java and C#, are forcing developers to begin to understand and use solid OO design, which produces slightly slower code than purely optimized code. But the benefits are through the roof!

So I would say that, in and of itself, code size isn't a very useful way to measure software quality.

Features, ease of use, stability, flexibility, scalability, solid componentized architecture . . . these are the 'measurements' of software quality.

If you just apply these to MS software, I think you'll find the *real* proof that their software is low quality.

But just being 'big' doesn't necessarily mean 'bad'. In good software, 'bigger' should mean more functional.

33 posted on 12/17/2001 8:28:45 AM PST by Dominic Harr


To: Dominic Harr
I actually have one more thing to add to this discussion -- OO programming also can increase code size tremendously, in return for adding flexibility, stability and code reusability.

Unfortunately, for some reason, development systems today have gotten terrible at controlling 'dead code' bloat. It used to be normal for development systems to include only code which could actually be called; now it seems they throw in everything.
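
For what it's worth, the machinery to strip uncalled functions does exist if you ask for it -- here's a rough sketch of the idea using the GNU toolchain (exact flags and behavior depend on your toolchain version):

// deadcode.cpp -- illustration only
#include <cstdio>

void UsedFunction()   { std::printf("called\n"); }
void UnusedFunction() { std::printf("never called\n"); }  // dead code

int main() {
    UsedFunction();
    return 0;
}

// With GCC/GNU ld, putting each function in its own section lets the
// linker throw away the ones nothing references:
//
//   g++ -ffunction-sections -fdata-sections -c deadcode.cpp
//   g++ -Wl,--gc-sections deadcode.o -o deadcode
//
// Without those flags, UnusedFunction() typically stays in the binary
// even though no call site for it exists.

The point being that the capability is there; it just isn't turned on by default in most setups.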

Part of this, I think, can be blamed on the design of C++, which does not allow the necessary interaction between the compiler and linker to determine what code is actually needed. As a simple example, many virtual methods are in practice never overridden; a static analysis of the whole program could detect this and replace all of the virtual-method calls with plain direct CALLs. Unfortunately, this condition can't be detected until link time, after all of the code has already been generated.
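
To make that concrete, here is a rough sketch (my own illustration -- the class and function names are made up):

// 'Render' is virtual, but suppose no class anywhere in the program
// ever overrides it.  Every call through a Widget* still goes through
// the vtable, because the compiler only sees one translation unit at a
// time and can't prove there are no other overriders.

#include <cstdio>

class Widget {
public:
    virtual ~Widget() {}
    virtual void Render() { std::printf("plain widget\n"); }
};

void DrawAll(Widget** items, int count) {
    for (int i = 0; i < count; ++i)
        items[i]->Render();   // indirect call through the vtable, even
                              // though Render is never overridden
}

int main() {
    Widget w;
    Widget* one[1] = { &w };
    DrawAll(one, 1);
    return 0;
}

// Only a whole-program view -- i.e. at link time -- could prove that
// Widget::Render has no overriders and turn that call into a plain
// direct CALL (or inline it outright).  But by link time the compiler
// has already generated the indirect-call code.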

It would be interesting to do a static analysis of some modern software and determine what portion of the code can in fact ever be executed. I would not be surprised if almost half the code in today's bloatware can never be executed at all.
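
The analysis itself isn't exotic: build a call graph (which functions each function may call), walk it from the entry point, and anything never visited can never execute. A toy sketch with made-up function names (a real tool would extract the graph from the object code, and virtual calls make the "may call" edges much harder to pin down -- which is exactly the C++ problem above):

#include <cstdio>
#include <map>
#include <set>
#include <string>
#include <vector>

typedef std::map<std::string, std::vector<std::string> > CallGraph;

// Depth-first walk from 'entry'; returns every function we can reach.
std::set<std::string> Reachable(const CallGraph& g, const std::string& entry) {
    std::set<std::string> seen;
    std::vector<std::string> work(1, entry);
    while (!work.empty()) {
        std::string fn = work.back();
        work.pop_back();
        if (!seen.insert(fn).second) continue;    // already visited
        CallGraph::const_iterator it = g.find(fn);
        if (it == g.end()) continue;              // leaf / external function
        for (size_t i = 0; i < it->second.size(); ++i)
            work.push_back(it->second[i]);
    }
    return seen;
}

int main() {
    CallGraph g;
    g["main"].push_back("ParseArgs");
    g["ParseArgs"].push_back("PrintUsage");
    g["LegacyImport"].push_back("OldHelper");   // nothing ever calls LegacyImport

    std::set<std::string> live = Reachable(g, "main");
    for (CallGraph::const_iterator it = g.begin(); it != g.end(); ++it)
        if (!live.count(it->first))
            std::printf("unreachable: %s\n", it->first.c_str());
    return 0;
}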

Another thing which should be noted: many of the applications which 'should' require fast CPUs do seem to benefit from them. Quake, for example, runs much more nicely on a fast machine than on a slow one. Many other applications, however, seem to have random 'snooze times' [where the application stops responding for a few seconds], and these seem independent of CPU speed. It would be interesting to know, during the times that someone is actually waiting for the CPU to do something, at what efficiency the CPU itself is running [as opposed to waiting on cache misses, etc.].
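
One crude way to get at that last question: wrap the operation that 'snoozes' and compare wall-clock time against CPU time. If the wall clock advances but the CPU time barely moves, the program is waiting on something (disk, network, a lock) rather than computing, and a faster CPU won't help. This won't separate out cache-miss stalls -- those still get charged as CPU time, and measuring them takes hardware performance counters -- but it does split actual computation from plain waiting. A sketch (SlowOperation is just a stand-in for whatever part of the program hangs):

#include <cstdio>
#include <ctime>

// Stand-in for the code being measured -- here it just burns some CPU
// so the example is self-contained.
void SlowOperation() {
    volatile double x = 0;
    for (long i = 0; i < 50000000L; ++i)
        x += i * 0.5;
}

int main() {
    std::clock_t cpu_start  = std::clock();   // CPU time used by this process
    std::time_t  wall_start = std::time(0);   // wall-clock time (1 s resolution)

    SlowOperation();

    double cpu_secs  = double(std::clock() - cpu_start) / CLOCKS_PER_SEC;
    double wall_secs = std::difftime(std::time(0), wall_start);

    // Large wall_secs with tiny cpu_secs means the 'snooze' was spent
    // waiting, not computing.
    std::printf("wall: %.0f s   cpu: %.2f s\n", wall_secs, cpu_secs);
    return 0;
}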

99 posted on 12/17/2001 7:51:28 PM PST by supercat
