High Level Logic: Rethinking Software Reuse in the 21st Century
Posted on 09/20/2010 8:52:32 AM PDT by RogerFGay
An application programmer spends six months perfecting a set of components commonly needed in the large company that employs him. Some of the components were particularly tricky, and key pieces required highly reliable, complex exception handling. It has all been tuned to run quickly and efficiently. Thorough testing has demonstrated his success. Part of his idea of perfection was to build so that the software, even many of the individual components, could easily be reused. But it is surprisingly likely that no one outside of a small group within the project will ever hear of it.
Tens of thousands of programmers building thousands of applications, repeatedly building the same functionality over and over again. A collective nightmare for CFOs. There are those who believe that the problem has been addressed with best practice object oriented programming techniques, enterprise frameworks, and strategic development of specialized open source systems. Certainly they are right up to a point.
While many available tools and good programming technique offer opportunities to reuse software, and most definitely reduce the need to rebuild many lower level (relatively speaking) functions, they also facilitate development of much more complex systems and provide a plethora of gizmos for doing old things in new ways, producing a never-ending stream of reasons to update what has already been done. Out there on the edge, where application programming actually takes place, the software reuse problem is a moving target.
Generally speaking, the benefits of software reuse far outweigh the costs. But in the messy world of real-world application development, the challenge can be complex. Many managers value rapid prototypers over best-practice engineers, not understanding that building on somewhat sloppy early results will typically and dramatically increase project time and cost and reduce quality. In larger organizations with shared interests, managers wonder: which project should bear the increased cost of building reusable software components? Who pays the cost of searching the (sometimes huge, distributed, and insufficiently documented) code base for possible matches? Should a software project spend time and money on special packaging, documentation, and marketing material to promote reuse of the components it builds?
I believe it is possible to realign the software development process in a way that will make everyone happy; from the executives who will see measurable improvements in productivity, to project managers pushing for rapid results, to the programmers who fantasize about widespread use of their best work, to the CFOs who see the company mission fulfilled on a smaller software maintenance budget.
Such a dramatic statement needs a theatrical follow-up. In the spirit of The Graduate, I just want to say one word to you. Just one word. Are you listening?
Exactly what do I mean by that? There is a great future in configuration. Think about it. Will you think about it? Shh! Enough said. That's a deal.
OK, it's not actually enough said in this context. I'll get back to configuration below. What I want you to think about first, really think about, is that this is the age of software components.
In the distant past, it was easy to see that it would be useful to represent often-repeated binary sequences in hexadecimal code, and then an obvious step to package sections of code into an assembly language to handle common operations at a higher level, and then to combine those into commonly used functions. It has been almost a half century since we got a little excited about structured programming. We built functions and libraries, and once again noticed that program structure and flow, as well as development tasks, often had commonalities across applications. Special tools emerged. It has been thirty-five years since the first IDE was created.
Believe it or not, it has been nearly a half century since an object oriented programming language with classes and instances of objects, as well as subclasses, virtual methods, co-routines, and discrete event simulation, emerged from research in computer simulation (Simula 67). C with Classes was renamed C++ in 1983, and developers quickly replaced their C compilers with C/C++ compilers. The Java Language Project was initiated in 1991; Sun Microsystems offered the first Write Once, Run Anywhere public implementation in 1995 and the first release of the Java Enterprise Edition in 1999. This is the age of software components. But even one decade is a very long time in the software world. One might almost expect that something new is about to happen.
One word - Configuration.
If you're entrepreneurial, perhaps you have already realized that you could package sets of useful components as finished product suites (components = products). If you are an independent consultant or operate a specialized software development company, you can offer high quality services based on proven technology with your own product suite(s). (Shame on you if you don't already.)
But let's say that you want to build a complete system, for some purpose, that does something besides impress people with its quality and reusability during development. Adaptation by configuration is pervasive. Here are some examples.
Word processing software serves a very specialized purpose. It is adaptable by configuration. You can install specialized language support, adjust the characteristics of various text blocks, add templates for complete (reusable) document layouts, and even adapt it for use by the visually impaired. Some word processing systems are also extensible.
Lotus Notes has a history that reaches back into the 1970s (PLATO Notes). It is familiar to many software developers (and others) as an interactive, networked system that is adaptable (by configuration) to the specifics of a project or other activity, and also extensible. This is a bit more general than a word processor, providing a suite of services, but still somewhat specialized. IBM offers both extensions and tools. Custom code can in fact be added to extend the capabilities of the out-of-the-box system. Extending the concept, Lotus Sametime is offered as middleware for building custom networked applications.
"WordPress is web software you can use to create a beautiful website or blog," says the WordPress website. "We like to say that WordPress is both free and priceless at the same time. The core software is built by hundreds of community volunteers, and when you're ready for more there are thousands of plug-ins and themes available to transform your site into almost anything you can imagine. Over 25 million people have chosen WordPress to power the place on the web they call home." People all over the world build and post their own components. It doesn't take a software professional or layers of bureaucracy to select and add powerful new interactive features (beyond visitor comments and smiley faces) to customize websites. Welcome to the 21st century (and pervasive CMS)!
The Brave New World
What if you could do that with all software development? And what if a major portion of the reusable software components in a company, starting with their design, were treated seriously as independent internal products rather than vaguely outlined portions of a large pile of 1s and 0s? The idea might be much more practical than you think.
The shift to object oriented programming changed the way programmers think about creating systems. Components are what systems are made of these days. This technological paradigm shift has also had a major impact on project process, which now leans toward lean, discrete, and agile.
Some of the most complex and potentially expensive aspects of software reuse involve getting all the software organized, identified, documented, and searchable. But consider what is already inherent in the tools and processes commonly used by modern software engineers. Software objects are arranged in packages. In best practice, naming of packages and components is systematic and aims to make functional purpose easy to recognize. Package identifiers can also serve to direct programs to the physical location of each component. Documentation on individual objects, arranged in packages, can be automatically generated (easy to organize and keep up-to-date).
Best software practices encourage reusability. If I'm creating an application that reads and interprets 25 XML files, it only makes sense to write one (only one) general purpose read and parse component for XML files, so long as that is possible, rather than new code for each file. Only that part which must be treated uniquely requires additional code.
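The one-general-component idea can be sketched in a few lines of Java, using only the standard DOM parser. The class and method names here are hypothetical, chosen for illustration; the point is that a single reusable parser handles all 25 files, and only extraction of file-specific fields needs per-file code.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class XmlReader {
    // One general purpose read-and-parse component: works for any
    // well-formed XML document, so it is written exactly once.
    public static Document parse(String xml) throws Exception {
        return DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
    }

    // A small generic helper: collect the text of every element with a tag.
    // File-specific interpretation is the only part each caller writes itself.
    public static List<String> textOf(Document doc, String tag) {
        NodeList nodes = doc.getElementsByTagName(tag);
        List<String> out = new ArrayList<>();
        for (int i = 0; i < nodes.getLength(); i++)
            out.add(nodes.item(i).getTextContent());
        return out;
    }

    public static void main(String[] args) throws Exception {
        Document doc = parse("<orders><id>1</id><id>2</id></orders>");
        System.out.println(textOf(doc, "id")); // [1, 2]
    }
}
```

Each of the 25 file formats then reduces to a handful of `textOf` (or similar) calls, rather than 25 copies of parsing boilerplate.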
In my experience, much of the time in common practice, building nifty general purpose code is less expensive than building sloppy spaghetti code. Building good code from the start dramatically decreases project time and cost. There will be fewer avoidable complexities in downstream development, fewer bugs, and consistently higher quality. Consider also that experience matters. Developers who spend their careers building good code not only prefer doing the job right, but become extremely efficient at it. When they design, what comes to mind is good code, not junk. When they implement the design, they are quite familiar with the techniques and details needed to build good code.
From a variety of perspectives, developing reusable components in the spirit of discrete products is beneficial, and the time is right. What more is needed then, to maximize the benefits of software reuse?
Regular Stuff + Artificial Intelligence = Something
The Java language and frameworks like Java EE continue the development path that started with binary sequences in the first section of this article. They differ in that one does not generally innovate on the concept of adding two integers, for example. Initially, getting good fast versions of common functionality for a variety of machines was the point. Both Java SE and Java EE (and others) provide support for higher level functionality supporting, for example, a variety of ways to move data around on a network for display and processing.
In the world of artificial intelligence research however, it seems people enjoy branching off in new directions, moving things around (so to speak) to change the character of computing. The old research definition for AI was simply to get computers to do things that at present, humans do better. From the start, people thought about moving the human role (part of it anyway) into the computer.
In the mid to late 1980s, complex rule-processing software came on the market. New companies emerged marketing expert systems tools, large companies invested, and more powerful rule-processing capabilities were added to existing commercial products like database systems. A slightly deeper look yields something more interesting than a packaged way to process lots of if-then statements. AI researchers wanted to move logic out of application programs and into a general processing engine, with application logic treated as data. I'm going to cast that into the perspective I offer now, with a description that the researchers and developers at that time may never have used. Expert systems applications were built by configuring the processing engine with a rule base (and other information).
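The expert-system idea above — a generic engine configured with a rule base, application logic treated as data — can be illustrated with a minimal forward-chaining sketch in Java. This is not any particular 1980s product's API; the names and structure are illustrative only.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Predicate;

public class RuleEngine {
    // A rule is data: a condition over the known facts plus a conclusion.
    public record Rule(Predicate<Set<String>> when, String then) {}

    // The generic engine. Note that no application logic lives here:
    // it simply fires rules until no new facts can be derived.
    public static Set<String> run(List<Rule> rules, Set<String> facts) {
        Set<String> known = new HashSet<>(facts);
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Rule r : rules)
                if (r.when().test(known) && known.add(r.then()))
                    changed = true;
        }
        return known;
    }

    public static void main(String[] args) {
        // "Configuring" the engine: the application is the rule base.
        List<Rule> rules = List.of(
                new Rule(f -> f.contains("rain"), "wet-ground"),
                new Rule(f -> f.contains("wet-ground"), "slippery"));
        System.out.println(run(rules, Set.of("rain")).contains("slippery")); // true
    }
}
```

Swapping in a different rule base yields a different application, with no change to the engine — which is exactly the reuse argument the expert-systems researchers were making.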
More powerful systems like KEE became commercially available in the same decade, incorporating a new and powerful programming component - objects - into the mix. The object oriented programming concept itself repackaged some of the common elements of complete applications into individual components; not just by definition, but by encouraged design practice. Its introduction was disruptive, setting vast numbers of working engineers to the task of rethinking how software systems should be written. An object you say? Sounds classy!
My agent is on the phone.
"A software agent is a piece of software that acts for a user or other program in a relationship of agency," says Wikipedia (citing two sources). Agent technology also has a history; the concept can be traced back to 1973. An actor is "a self-contained, interactive and concurrently-executing object, possessing internal state and communication capability." One might call it the ultimate object.
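That 1973 actor definition maps directly onto a few lines of modern Java: internal state, a mailbox for communication, and concurrent execution on its own thread. This is a toy sketch of the concept, not the Hewitt formalism or any agent framework's API.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class Actor {
    private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>(); // communication capability
    private final StringBuilder state = new StringBuilder();                   // internal state
    private final Thread worker;                                               // concurrent execution

    public Actor() {
        worker = new Thread(() -> {
            try {
                String msg;
                // Self-contained behavior: react to each message in arrival order.
                while (!(msg = mailbox.take()).equals("STOP"))
                    state.append(msg);
            } catch (InterruptedException ignored) { }
        });
        worker.start();
    }

    public void send(String msg) { mailbox.add(msg); }

    // Shut the actor down and read its accumulated state.
    public String stopAndRead() throws InterruptedException {
        mailbox.add("STOP");
        worker.join();
        return state.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        Actor a = new Actor();
        a.send("hello ");
        a.send("world");
        System.out.println(a.stopAndRead()); // hello world
    }
}
```

The only way in or out is the mailbox; no other code touches the actor's state, which is what makes it "the ultimate object."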
Agent technology has already emerged from artificial intelligence laboratories. Modern agents extend the Write Once, Run Anywhere idea, even to the extent that there are what might be called door-to-door salesman varieties; traveling agents (also called robots, bots, web spiders and crawlers and even viruses) that move around the Internet (sometimes duplicating themselves) to perform tasks.
The telecommunications industry recognizes the importance of a new technology that screams to be used as a central processing system for a wide range of applications serving the wide range of networked devices available today. JADE (Java Agent DEvelopment Framework) is a free software framework, distributed by Telecom Italia, that simplifies the implementation of multi-agent systems.
It changes the way you think about software development. Don't worry about the wide range of interfaces needed for so many devices. They're supported. Don't worry about the complexities of communication. The code is written and maintained by someone else. This goes beyond the relatively low level programming components available in IDEs and support offered by higher level development frameworks like Java EE. Much of the system already exists. Just focus on the very specialized components needed for your particular application that can be cast into the agent framework. Only that part which must be treated uniquely requires additional code.
You then let the framework know when your agents are needed. When they are, they get the call, automatically. And, by the way, intelligent agents can sense when they are needed and respond appropriately, even learn and adapt to new circumstances.
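The "they get the call, automatically" idea can be sketched without any framework dependency: each agent registers a trigger saying when it is needed, and the framework, not the application, decides who gets called. All names here are hypothetical; a real system like JADE uses its own agent and behaviour classes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

public class AgentFramework {
    // An agent declares when it is needed (trigger) and what it does (action).
    public record Agent(String name, Predicate<String> trigger, Consumer<String> action) {}

    private final List<Agent> registry = new ArrayList<>();

    public void register(Agent a) { registry.add(a); }

    // The framework routes each event; application code never calls agents directly.
    public List<String> dispatch(String event) {
        List<String> called = new ArrayList<>();
        for (Agent a : registry)
            if (a.trigger().test(event)) {
                a.action().accept(event);
                called.add(a.name());
            }
        return called;
    }

    public static void main(String[] args) {
        AgentFramework fw = new AgentFramework();
        fw.register(new Agent("logger", e -> true, e -> System.out.println("log: " + e)));
        fw.register(new Agent("alarm", e -> e.startsWith("ERROR"),
                e -> System.out.println("alert! " + e)));
        System.out.println(fw.dispatch("ERROR disk full")); // [logger, alarm]
    }
}
```

The application developer writes only the agents' unique actions and triggers; dispatch, like communication and device interfaces in a real framework, is written and maintained by someone else.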
Sometimes one is not enough. A multi-agent system (MAS) is a system composed of multiple interacting intelligent agents. Multi-agent systems can be used to solve problems which are difficult or impossible for an individual agent or monolithic system to solve. Examples of problems which are appropriate to multi-agent systems research include online trading, disaster response, and modeling social structures.
High Level Logic
OK, quick! Think of a way to go beyond what's been discussed so far.
How about this?
One more thing, while we're on the subject of code reuse.
- Design what is basically an agent-type system that easily interacts with others of its own kind installed anywhere in the world and may also communicate with other agent systems facilitated by use of standard message structures. (Note that the FIPA standard message offers SOAP level power.)
- Identify high level logical processes that you expect to be common to thousands of applications (maybe every conceivable application?) and provide generic engines for doing that part of the work; both in the spirit of the expert systems above (moving additional, common, higher level application logic into a generic engine for the first time, and embellishing on the new generic processor) and in support of more basic processes like intelligent XML, encryption, authentication, rule-processing, and so on.
- Provide a general problem-solving engine (a high level process: what do you want, what are the variables, etc.).
- Include the ability to carry out more complex processes by implementing custom plans.
- Devise an even higher level generic processor that ties all of the above together in a top-level process.
- Allow application developers to extend and customize if they wish, without destroying the ability of one system to interact, cooperate, and share with the others.
- Provide a simple way to tie in a GUI as needed for a great variety of devices.
- Provide a mechanism to call upon any level of reusable code, from large systems to discrete components, integrated through plans, rule-processors and other high level structures. Support options for utilizing common code remotely (if, for example, the process utilizes other remote resources), fetching it for temporary use, or fetching and storing it locally.
Now what you have is an outline for a system known as High Level Logic (HLL). The High Level Logic Open Source Project stems from a history that goes back into the 1980s, as a concept for unleashing the power of expert systems. Prototype software was more recently built as part of a larger intelligent robotics project. The commitment to open-source development was made in July 2010.
Although the development project is now (September 2010) at an early stage, there is sufficient software available to create applications. Plans to complete the first lightweight version, using only Java SE, with the full set of characteristics described above are quite concrete, already committed to specific technical issues logged in the project's issue tracker. An outline for a somewhat heavier version using Java EE components is given on the project description page.
Yet Another Path to Code Reuse
Subtly perhaps, four different approaches to code reuse have been mentioned and illustrated in this article.
First, the development of higher-level languages involved assigning names to commonly used bits of logic in lower level coding, and the development of tools (interpreters, compilers) to translate back into lower level code. One common implementation of the lower level code was then used by everyone using the same higher level language.
Second, object-oriented computing involved re-organizing some of the processes common to applications from the complete application level down to the component level.
Third, more open mass use of certain web-based technologies led to common application cores and shared extensions. (Proprietary: Lotus Notes → Open: WordPress; and, on the more extreme techie side, consider the history of Sun Java development.)
Fourth, highly innovative research identified distributed application logic that could be extracted into generic processing engines.
At least one more exists, which will be the subject of later articles. Learning and adaptive software has already reached well into the stage of commercial use. Developers write code explaining the results they want, and the learning system automatically creates the code. There are many circumstances in which the same specification code written by developers can be reused on various platforms (physically different robots, for example), in different environments, and for different purposes (identifying and moving different objects in new environments, for example). Even existing application code can be reused and automatically adapted to differences.
The direction of HLL incorporates all of the above, providing a useful general purpose application platform rather than a specialized one (like a CMS, for example). It will be part of the purpose of the HLL Blog to provide further focus on these characteristics of the HLL system.
Given the current state of HLL, there is at least one characteristic that should be emphasized. Application developers focus their coding work only on those components that are unique to their application. There is a great deal of flexibility in what can be done in the application, simply because there are no restrictions. Java developers, for example, can write any kind of Java components they wish. The HLL engine can use components from any installed HLL system (component sharing). Components that builders wish to be accessed by HLL are named (as in a higher level language) and made accessible to HLL systems through their configurations.
This aspect of HLL is worth emphasizing. It is the intent that, especially as an organization builds its library of reusable functionality, application development will become largely a matter of configuration; and that's a very good reason to push reusable code development.
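What "application development as configuration" might look like can be sketched in plain Java: a catalog maps component names to factories, and the engine invokes components by name. This is an illustrative sketch, not the actual HLL API; all names here are invented, and in a real system the catalog would be read from a configuration file and could point at components on remote HLL installations.

```java
import java.util.Map;
import java.util.function.Supplier;

public class Configurator {
    // Hypothetical component interface: anything invocable by name.
    public interface Component { String execute(String input); }

    // The "configuration": names mapped to component factories.
    private final Map<String, Supplier<Component>> catalog;

    public Configurator(Map<String, Supplier<Component>> catalog) {
        this.catalog = catalog;
    }

    // The engine knows nothing about the components except their names.
    public String run(String componentName, String input) {
        Supplier<Component> s = catalog.get(componentName);
        if (s == null)
            throw new IllegalArgumentException("not configured: " + componentName);
        return s.get().execute(input);
    }

    public static void main(String[] args) {
        // "Building" an application = choosing which named components to wire in.
        Configurator app = new Configurator(Map.of(
                "upper", () -> in -> in.toUpperCase(),
                "reverse", () -> in -> new StringBuilder(in).reverse().toString()));
        System.out.println(app.run("upper", "hll")); // HLL
    }
}
```

Adding functionality to the application then means adding an entry to the catalog, not modifying the engine — the configuration-over-coding shift the article argues for.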
- Johan Margono and Thomas E. Rhoads, "Software Reuse Economics: Cost-Benefit Analysis on a Large-Scale Ada Project," Computer Sciences Corporation, System Sciences Division, ACM, 1992. (I believe this article, which I found freely available on the Internet, is still quite relevant. Margono and Rhoads, however, did not say benefits far outweigh the costs. They actually said, "benefits have so far outweighed the costs. We believe that this will continue to be the case." Eighteen years later, with the great variety of new advantages mentioned in this current article, it is surely even more true due to long-term technical focus on the issue; and this article recasts the issue in that light. We've come a long way.)
- Nwana, H.S. 1996. Software Agents: An Overview. Knowledge Engineering Review, Vol.11, No.3, 205-244, Cambridge University Press
- Schermer, B.W., Software agents, surveillance, and the right to privacy: A legislative framework for agent-enabled surveillance. Leiden University Press, 2007, p.140.
- Carl Hewitt; Peter Bishop and Richard Steiger (1973). A Universal Modular Actor Formalism for Artificial Intelligence. IJCAI.
- Pekka Abrahamsson, Outi Salo, Jussi Ronkainen, and Juhani Warsta, Agile Software Development Methods: Review and Analysis, VTT Publications 478, Espoo, 2002, 109 p.
Real programmers work at assembly level. Most kids don’t even know what a register is these days.
I spent a decade administering a version control database depot for an IT dept of a billion $ software company. Despite numerous efforts by lower level staffers including myself, no code reuse project ever got off the ground. Redundant code is being written to this day, while the company upper management whines about the costs of the IT department, forces layoffs and exports jobs to Bangalore. Why?
Rush, artificial deadlines that are never met anyway, emphasis on projects that benefit directly and immediately customers outside of IT, the yes-man corporate culture. For the same reasons the code is written undocumented using unstandardized variable names. I tried to promote code reviews? Are you kidding?
At the same time, believe it or not, the engineering department which produces the company’s products adheres to strict standards of coding, code reviewing, release management. Go figure.
The “advertising” might work. The hard part is of course that the developer needs to know there’s an in house solution to Problem X before he realizes he has Problem X. Because developers tend to be natural problem solvers once they know they have Problem X their natural tendency is to tackle it, not to step back and wonder if somebody in the building has already solved it. And of course the other part of the problem is people tend to have tunnel vision, until they have to tackle Problem X most people don’t give a crap if anybody has a solution. So it all combines to put you in a position where you need to tell everybody you have that solution, knowing they don’t actually care, and hoping they remember when it matters.
I totally agree with you on the basics. Compare though with high level languages and frameworks. Every programmer knows that you refer to the API to get the most knowledge about modern programming - to use the language or framework that they’re using. Projects are defined all the time to include such tools. Engineers are interviewed to determine whether they can use them, or learn them. When a system comes along like HLL, that is designed to support sharing and reuse, then a similar situation exists for specialized application components.
In the circles I travel in APIs are almost treated like languages, people know C, and C++, and WinAPI, and .Net. So if you can teach your developers to think of the in house stuff as an API then it could work, of course then you have to really package it that way. At my first company, where we did the hardcore code reviews, we did that, we approached these common tools as our own version of the standard libraries that came with VC (that’s how long ago this was). We still ran into problem of people not knowing all the stuff that was there, but it did help. Since then nobody I’ve been with has been that ambitious.
[Real programmers work at assembly level. Most kids don't even know what a register is these days.]
Oh yeah? Well, real programmers use bit toggles on the front of the machine to even boot 'er up!
That's been my experience. Ada, the language put out by the DOD for bid and designed & built by multiple committees was supposed to address that. "A camel is a horse designed by a committee". And a camel is what they got. The earliest implementations, with perhaps the exception of Verdix Ada (after it matured a bit), were mostly pretty bad. A few iterations (and a lot of in-the-field experience) later, Ada has gotten pretty good.
Of course, the proliferation of different software licenses on code hasn't helped either. Basic example - you cannot incorporate GPL code into BSD licensed code (the GPL is a viral license that requires all of the code it is linked with become GPL), however you can incorporate LGPL code into BSD licensed code. Some Open Source projects, notably from the FSF are even stricter, they not only require GPLed submissions, but the copyright must also be assigned to the FSF.
Certain software projects like XEmacs (which was a fork of GNU Emacs in the early 1990s by the now defunct Lucid) require GPL licensed contributions. Others like Emacs require not only GPL licensed contributions, but they must also be copyright assigned to the FSF. When Lucid went bankrupt and orphaned XEmacs as Open Source in its bankruptcy, it was in the position of being GPL, but unable for the most part to share code back with the parent.
There is a time and place for lawyers, but in the field of software development their involvement has been little short of disastrous.
Been there, done that, tried to change the world and it didn't quite work out, although I'm quite proud of how XEmacs has turned out. It's still my favorite text editor. (grep for "altair" in XEmacs ChangeLog files, that's me).
I use Apache Wicket for my Java-based web development, it's awesome, mark up pages with plain-old HTML, no tags, scriptlets or any of that garbage, any control can be easily AJAX-capable.
I code in 0s and 1s, and sometimes we don’t even have the 1s.
(Stolen from a Dilbert cartoon)
Hmmm. Smells like avionics or process control.
I was around when "software reuse" became the big buzzword and Ada was introduced as a measure (in part) to reduce software development cost in increasingly computerized military systems. It didn't really work at the time, but it felt good to a lot of manager types.
The problem is identifying what is the difference between hard real time requirements (and I only define "hard" as in if you miss taking action and someone will get killed or injured or something is damaged or destroyed, but that's not the standard definition) and just keeping the system usable.
Technology cannot eliminate the difference in requirements for different applications.
I have to agree, much as I am in love with the idea of software reuse. However, I would like to s/cannot eliminate/has not yet eliminated/. I'm an optimist. I may not live to solve it, but perhaps one of my sons will. One thing that my reading in history has convinced me of is "never misunderestimate the power of the human mind".
I’m in aerospace software.
I have to admit to being an Ada-phile. I like the language a lot more than C++, and find it very, very powerful, without carrying the inherent danger of blowing one’s rhetorical foot off! ;-P
Y’all have a great evening.
I have no doubt about the power of the human mind, but my comment has more to do with the plain fact that hard real-time embedded systems are driven by their requirements, which are expression of the functions performed by the system. If the systems didn't necessarily perform different functions, then one would likely avoid the cost of developing a separate, dissimilar system unless there were some other compelling reason to do so. Most businessmen don't purposefully waste money, so...
The power of reuse is embodied by the essence of object-oriented programming, where hierarchical construction allows for overlays on top of basic functionality, lending the ability to specialize a generic object. Were it not for the inherent dead and deactivated code (not to mention the compiler-invented subroutines outside the view of mortal men), this would be a perfect solution to most applications - once embedded computers get fast enough to absorb the bloat.
Of course, in aero the life span of a given system is measured in decades, meaning some of the processors still out there (and being upgraded) may be pre-1980 technology! It kind of makes the "processing power" tsunami into a bathtub wave - not much impact! LOL
Dilbert is right on top of it, as usual:
Stay well ................. FRegards
What is even funnier than your cartoon is the fact that, that particular one used to grace the inside wall of my cubicle where I could only see it.
Until, that is, my manager issued the diktat that all cartoons and posters needed to be removed from all cubicles.
And Yes, there is a Dilbert cartoon on that too.
Art reflecting life.
I guess Java wasn’t sufficient...
old fashioned ping
>>The only method I've ever seen work to get people reusing code is full group code reviews.<<
Same here. But my experience is from the 80’s and 90’s, when the group was all English speaking Americans sitting in the same room with a whiteboard.
It is a challenge today.
Another one of the problems of out sourcing, makes it really hard to get the dev team together for information sharing.
Not so much the case now with video conferencing, bridge calls and instant messaging.
Video conferencing and bridge calls are overrated. I’ve dialed into a phone in a meeting room and it’s basically worthless. If you have everybody sitting at their desk using a webex thing half of them are screwing around on the web ignoring the meeting. And of course for a full run code review you’re talking about a week or two of these meetings that are really boring but you really want people paying attention, there’s no replacement for everybody being in the same room for something like that.
I suppose as with most things, YMMV.
Big deal. Android applications are still Java applications.
“Please note that the NDK does not enable you to develop native-only applications. Android’s primary runtime remains the Dalvik virtual machine.”
“The NDK will not benefit most applications.”
“...using native code does not result in an automatic performance increase, but does always increase application complexity.”
I’d have a really big problem with using learning/adaptive software (did someone say AI?) in safety-critical aerospace applications because such software deliberately fudges the boundaries of what it can and cannot do. That’s where learning occurs - at the edges of the envelope.
But, the edges of the envelope in aero software gets people killed.
I’m very skeptical, but not for the “crazy” reason. I distrust the “unexpected” solutions, too!
I’m in the certification side of aero software, BTW.
I was booked to give a talk at AUVSI’s Unmanned Systems Europe 2009 on that very subject; engineering with learning systems for applications that require rigid quality standards. We had critical events at the company then, so I couldn’t make it. I may yet write the paper though. It’s not so mysterious and difficult - but I’m hiding the reason ... for now. (Drum rolls, marketing hype, raising expectations, music plays, curtain slowly opens ...)
... This paper describes the new software architecture and discusses its potential applications. The current implementation can be installed on a wide range of autonomous systems, automatically locates sensors and actuators and builds its own system specific control programs. Local environment simulations are constructed from sensor input and used as a robot's imagination to adapt and solve problems. System behaviour can be extended by training in new environments and providing new challenges, by installing new fitness functions to drive learning, and by integrating application specific components developed by hand. Adaptation and field creation of new behaviour can be limited to accommodate various requirement levels for testing before use, from exploratory research and experimental development to the most rigid field-ready product quality control standards. We also expect the system to reduce development time and cost.
The issue comes down to meeting regulations, where the system or equipment is required to (1) be designed to properly perform its intended function under all foreseeable operating conditions and (2) be safe to a specified probability, based on the potential effects of failures that the system can contribute to.
Because of the autonomous learning of the AI system, the specific response to a given set of operating conditions cannot be identified beforehand.
The current (and immediate future) answer to the second part of the regs is the application of a process-based discipline for developing software. Allowing autonomous self-modification by the software itself does not support this approach. The cert authorities will allow alternative methods, but are (very) unlikely to allow autonomous self-modification of software programs in the near future.
Part of the reason is the outcome of an earlier AI experiment, in which a neural net was used to analyze satellite photos for hidden armor (tanks, etc.). The neural net was very successful on its training data - mostly from Germany.
But the net’s success plummeted when it was shown satellite photos of the desert.
They found out that the neural net had settled on counting the number of leaves/leafy trees it could see as a predictor of the presence of camouflaged armor - which doesn’t work in the desert.
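That failure mode is easy to reproduce with any learner, not just a neural net. Here’s a minimal sketch (all numbers fabricated for illustration) of a trivial classifier that latches onto leaf count exactly the way the story describes:

```python
# Hedged sketch: how a learner can latch onto a spurious feature.
# Samples are (leaf_count, armor_present) pairs; all data is made up.

def train_threshold(samples):
    """Pick the leaf-count threshold that best separates armor/no-armor
    in the training set (a stand-in for what the neural net learned)."""
    best_t, best_acc = 0, 0.0
    for t in range(0, 101):
        acc = sum((leaves > t) == armor for leaves, armor in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Training set (forest imagery): camouflaged armor happens to sit
# near heavy foliage, so leaf count correlates with the label.
forest = [(80, True), (75, True), (90, True), (20, False), (15, False), (10, False)]
t = train_threshold(forest)
train_acc = sum((leaves > t) == armor for leaves, armor in forest) / len(forest)

# Desert imagery: no foliage anywhere, so the learned rule predicts
# "no armor" for every photo -- including the ones with tanks.
desert = [(0, True), (2, True), (1, False), (0, False)]
desert_acc = sum((leaves > t) == armor for leaves, armor in desert) / len(desert)

print(train_acc)   # 1.0 on the data it was trained on
print(desert_acc)  # 0.5 -- no better than a coin flip
```

The point isn’t the toy classifier; it’s that nothing in the training procedure ever tells the learner that leaves are the wrong thing to look at.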
The process of decision making, when civilian lives are on the line, is not yet ready to be delegated to leaf-counting programs! :)
Have a great day!
Part of my standard talk is that “it’s not yet time to fire all the engineers.” Who turned the sucker loose in the desert when it was only trained in the Black Forest? I’d a seen that one coming a thousand miles away.
I guess I’d have to ask - why wasn’t the experiment carried out by experts?
BTW: I could never get all that excited about neural nets.
The moral of the story is that the margin of error for software running some of these systems is ten to the minus ninth power - extreme improbability.
Allowing a computer to teach itself can’t reach that level of certitude.
Even the smallest glitch would result in industry-killing liability lawsuits the likes of which we’ve never seen.
The risk is not worth the reward.
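To put the ten-to-the-minus-ninth figure in perspective, here’s a back-of-the-envelope calculation. The fleet size, utilization, and service life below are my own illustrative assumptions, not figures from any regulation:

```python
# Back-of-the-envelope: what a 1e-9 per-flight-hour failure
# probability means in practice. Fleet figures are assumptions.
prob_per_hour = 1e-9    # "extreme improbability" target
fleet_size = 5000       # assumed aircraft in service
hours_per_year = 3000   # assumed utilization per aircraft
service_years = 30      # assumed service life

total_hours = fleet_size * hours_per_year * service_years
expected_events = prob_per_hour * total_hours
print(total_hours)      # 450000000 total fleet flight hours
print(expected_events)  # roughly 0.45 expected events, fleet-wide, ever
```

In other words, the target is less than one such failure across the entire fleet over its entire service life. Self-taught behavior that can’t be exhaustively characterized has no obvious way to demonstrate numbers like that.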
If I get the paper written, I’m sure I’ll blog about it. The issues you raise are terribly obvious; I can’t think of any reason someone in the field wouldn’t understand them.
That’s because the “field” was a sandbox back when this experiment was performed. It was one of the first successful neural nets, and it had to be handled by people who knew the field it was dabbling in - satellite photo reconnaissance.
Hindsight is 20-20 in most instances, but sometimes the people in charge aren’t looking at the history one would like them to.
For many applications, I think it’s fine to use self-teaching software. At this point, I don’t agree that the aero field is ready to take the risk.
Have a great evening - NCIS is coming on, so I’m going offline. (A man has to have a FEW vices, right?!? ;-P)
There is a great deal of benefit to using automated tests: they help ensure that positive flows through the code's logic work and that common error handling routines are sufficient. I have over a decade of OO automated tool experience, so I am not panning well-constructed automation, but I guarantee you I can break any code tested only in this manner.
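A quick illustration of that last point. The function and tests below are my own made-up example, not anyone’s real code: the positive-flow assertions all pass, yet the function falls over on inputs no happy-path suite would send it.

```python
# Hedged sketch: a "positive flow" test suite that passes while the
# code is still trivially breakable. Everything here is illustrative.

def parse_version(s):
    """Parse a 'major.minor' string into a tuple of ints."""
    major, minor = s.split(".")
    return int(major), int(minor)

# Happy-path automation: exercises the intended flow and passes.
assert parse_version("3.1") == (3, 1)
assert parse_version("10.0") == (10, 0)

# Inputs a positive-flow suite never sends -- every one of these raises:
for bad in ["3", "3.1.4", "three.one", ""]:
    try:
        parse_version(bad)
        print("no error for", bad)
    except ValueError:
        pass  # "3" and "3.1.4" blow up on tuple unpacking,
              # "three.one" and "" blow up inside int()
```

A green automation run tells you the intended flows still work; it says nothing about the flows nobody thought to automate.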
In my humble opinion, the path to achieving what you have outlined is:
- Components need to be simplified, documented, thoroughly tested, and rock solid.
- Developers need to be penalized for reinventing the wheel, unless it is truly a better wheel, and then the old wheel should be thrown out.
- QA needs to be brought into the process at inception, not after development has already started.
- QA cycles should not be compressed in order to make up for development overruns (yes I know that I'm dreaming here).
- Project management needs to play a much more active role in projects while they are in flight and learn to close them out when complete to the original scope.
- All major development methodologies are valid and sound if the rules are followed, however most of the time the rules are bent or broken.
NCIS has been one of my favorites. Can’t keep my interest up in re-runs forever though (which is what we get where I am).
There are certain basic scientific rules that aren’t violated when you switch the form of implementation. Pattern recognition is one of those things I focused on as a student; and even though I did different things in the real world, there were plenty of bits of scientific wisdom from that part of my education that served me well.
Just to bring you up-to-date, there are now techniques using genetic programming, sometimes combined with neural networks, that do quite well.
Also, consider the fact that you - as a human - automatically notice things in your peripheral vision that grab your attention: a highly effective sort of early warning system. You don’t have to be staring right at something that should get your attention and thinking consciously about it. You don’t have to look at it clearly and for a long time, studying the many subtleties to have your early warning (sort of, it doesn’t only apply to bad things) alarm tripped.
This has led to some interesting experiments in improving recognition by actually reducing - filtering - the information being processed. Counter-intuitive; but it’s improved reliability - and it’s faster of course.
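Here’s a toy version of that counter-intuitive result. It’s my own illustration, not one of the experiments I mentioned: classifying a noisy 1-D signal from a single raw sample versus from a crude averaging filter. The averaged version throws information away, yet it classifies better.

```python
# Hedged sketch of "less information helps": classify made-up 1-D
# signals by one raw sample vs. by a smoothed (averaged) window.

def classify(value, threshold=0.5):
    return "target" if value > threshold else "background"

def smooth(signal):
    return sum(signal) / len(signal)  # crude low-pass filter

# Each entry is (samples, true_label). Spikes at the center sample
# make the raw reading misleading; the average is robust to them.
signals = [
    ([0.9, 1.0, 0.2, 1.0, 0.9], "target"),      # dropout spike at center
    ([0.1, 0.0, 0.8, 0.0, 0.1], "background"),  # glare spike at center
    ([0.8, 0.9, 1.0, 0.9, 0.8], "target"),
    ([0.2, 0.1, 0.0, 0.1, 0.2], "background"),
]

raw_hits = sum(classify(s[len(s) // 2]) == label for s, label in signals)
filtered_hits = sum(classify(smooth(s)) == label for s, label in signals)
print(raw_hits, "of", len(signals))       # 2 of 4 from the raw sample
print(filtered_hits, "of", len(signals))  # 4 of 4 after smoothing
```

The filter discards the spiky detail that was fooling the raw reading, and there’s less data to process, which is where the speed-up comes from.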
Yes, I agree with much of what you say. I don’t see why that would be a problem with the suggestions I’ve made. In fact, I’ve pointed out that, keeping up with modern technology, project processes have become more agile. This means that project participants should be involved in the flow of work - much less like the olden days, when project emphasis would shift from one group to another in large phases.
The one thing about your comments that leaves me a little cold is the way you want QA people to monopolize testing. If engineers do no testing, then they’ll end up shipping a lot of stuff that doesn’t work to QA. No point in that. And software systems need to end up doing what they’re intended to do - and for best results that involves experts and specialists in the application area (and often end-use customers).
Each has a particular role within the quality assurance process.
I want a gold star for this comment: Quality is everyone’s concern!
Last night was the season premiere of NCIS.
I understand that I am not on the cutting edge of self-teaching software, but I’m afraid I will remain skeptical on its utility in aerospace.
The very concept of “develop (ad hoc), decouple, and thoroughly test” is well below the minimum requirements for process as they are currently written. I don’t see that changing any time soon - and I am on the committee working on the next generation of guidance on the subject.
To my knowledge, there have been very few aircraft incidents caused by software. I think I heard of a single one, but human memory is frail. To radically change what works - what makes for a high degree of safety in the fielded systems - doesn’t make all that much sense, especially when doing so requires adoption of unproven technology.
And, before you protest my choice of the word “unproven”, consider that the aerospace industry is just now coming to grips with OOT. We’re behind the times, but people stay alive...
Have a great day, Roger.
Actually, they now have systems designed to detect wind shear.
But, one cannot consider the lack of a system (including software) to perform a function as having the software contribute to a failure caused by the lack of that system.
That’s kind of like saying “I turned into the wrong driveway and hit a tree I didn’t have”.