
High Level Logic: Rethinking Software Reuse in the 21st Century
High Level Logic (HLL) Open Source Project ^ | September 20, 2010 | Roger F. Gay

Posted on 09/20/2010 8:52:32 AM PDT by RogerFGay

To: Pessimist

Big deal. Android applications are still Java applications.

“Please note that the NDK does not enable you to develop native-only applications. Android’s primary runtime remains the Dalvik virtual machine.”

“The NDK will not benefit most applications.”

“...using native code does not result in an automatic performance increase, but does always increase application complexity.”
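For illustration only - this is a generic sketch, not code from the NDK docs or from HLL, and the class/library names are made up - this is the shape of an NDK-based app: the entry point and everything around it stay in Java on the Dalvik VM, and native code is reached only through a JNI call.

// Hypothetical example: even with the NDK, the app is still a Java app.
public class ImageFilter {
    static {
        // Loads libimagefilter.so, built separately with the NDK toolchain.
        System.loadLibrary("imagefilter");
    }

    // Implemented in C/C++; declared here so the Dalvik VM can dispatch to it.
    public native int[] sharpen(int[] pixels, int width, int height);

    // Lifecycle, UI, and I/O all remain on the Java side.
    public int[] process(int[] pixels, int width, int height) {
        return sharpen(pixels, width, height);
    }
}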


81 posted on 09/21/2010 9:43:37 AM PDT by DigitalVideoDude (It's amazing what you can accomplish when you don't care who gets the credit. -Ronald Reagan)
[ Post Reply | Private Reply | To 70 | View Replies]

To: RogerFGay

I’d have a really big problem with using learning/adaptive software (did someone say AI?) in safety-critical aerospace applications because such software deliberately fudges the boundaries of what it can and cannot do. That’s where learning occurs - at the edges of the envelope.

But the edges of the envelope in aero software get people killed.

I’m very skeptical, but not for the “crazy” reason. I distrust the “unexpected” solutions, too!

I’m in the certification side of aero software, BTW.


82 posted on 09/21/2010 10:36:02 AM PDT by MortMan (Obama's response to the Gulf oil spill: a four-putt.)
[ Post Reply | Private Reply | To 73 | View Replies]

To: MortMan

I was booked to give a talk at AUVSI’s Unmanned Systems Europe 2009 on that very subject; engineering with learning systems for applications that require rigid quality standards. We had critical events at the company then, so I couldn’t make it. I may yet write the paper though. It’s not so mysterious and difficult - but I’m hiding the reason ... for now. (Drum rolls, marketing hype, raising expectations, music plays, curtain slowly opens ...)


83 posted on 09/21/2010 11:54:38 AM PDT by RogerFGay
[ Post Reply | Private Reply | To 82 | View Replies]

To: MortMan
... This paper describes the new software architecture and discusses its potential for application. The current implementation can be installed on a wide range of autonomous systems; it automatically locates sensors and actuators and builds its own system-specific control programs. Local environment simulations are constructed from sensor input and used as a “robot's imagination” to adapt and solve problems. System behaviour can be extended by training in new environments and providing new challenges, by installing new “fitness functions” to drive learning, and by integrating application-specific components developed “by hand.” Adaptation and field creation of new behaviour can be limited to accommodate various requirement levels for testing before use, from exploratory research and experimental development to the most rigid field-ready product quality control standards. We also expect the system to reduce development time and cost.
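To make the “robot's imagination” idea a bit more concrete, here is a minimal sketch (all names are illustrative; this is not HLL code): build a crude model of the local environment from sensor input, rehearse candidate actions against that model, and only then act.

import java.util.List;

// Illustrative sketch only - not HLL code; the interface names are invented.
interface WorldModel {
    double predictCost(Action a);   // simulated outcome of taking action `a`
}

interface Action {
    void execute();                 // real actuation, performed only once chosen
}

class ImaginativeController {
    // Rehearse every candidate in the simulated world, then execute the cheapest.
    static void decideAndAct(WorldModel imagination, List<Action> candidates) {
        Action best = null;
        double bestCost = Double.POSITIVE_INFINITY;
        for (Action a : candidates) {
            double cost = imagination.predictCost(a);   // no real-world side effects
            if (cost < bestCost) {
                bestCost = cost;
                best = a;
            }
        }
        if (best != null) {
            best.execute();   // act only after the rehearsal
        }
    }
}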

84 posted on 09/21/2010 12:30:16 PM PDT by RogerFGay
[ Post Reply | Private Reply | To 82 | View Replies]

To: MortMan
Based on this: RoboBusiness: Robots that Dream of Being Better
85 posted on 09/21/2010 12:32:55 PM PDT by RogerFGay
[ Post Reply | Private Reply | To 82 | View Replies]

To: RogerFGay

The issue comes down to meeting regulations, where the system or equipment is required to (1) be designed to properly perform its intended function under all foreseeable operating conditions and (2) be safe to a specified probability, based on the potential effects of failures that the system can contribute to.

Because of the autonomous learning of the AI system, the specific response to a given set of operating conditions cannot be identified beforehand.

The current (and immediate future) answer to the second part of the regs is the application of a process-based discipline for developing software. Allowing autonomous self-modification by the software itself does not support this approach. The cert authorities will allow alternative methods, but are (very) unlikely to allow autonomous self-modification of software programs in the near future.

Part of the reason is the outcome of a previous experiment in AI, where a neural net was used to analyze satellite photos for hidden armor (tanks, etc.). The neural net was very successful on its training data - mostly from Germany.

But the net’s success plummeted when it was shown satellite photos of the desert.

They found out that the neural net had settled on counting the number of leaves/leafy trees it could see as a predictor of the presence of camouflaged armor - which doesn’t work in the desert.

The process of decision making, when civilian lives are on the line, is not yet ready to be delegated to leaf-counting programs! :)

Have a great day!


86 posted on 09/21/2010 12:41:58 PM PDT by MortMan (Obama's response to the Gulf oil spill: a four-putt.)
[ Post Reply | Private Reply | To 84 | View Replies]

To: MortMan

Part of my standard talk is that “it’s not yet time to fire all the engineers.” Who turned the sucker loose in the desert when it was only trained in the Black Forest? I’d have seen that one coming a thousand miles away.


87 posted on 09/21/2010 3:36:49 PM PDT by RogerFGay
[ Post Reply | Private Reply | To 86 | View Replies]

To: MortMan

I guess I’d have to ask - why wasn’t the experiment carried out by experts?


88 posted on 09/21/2010 3:48:21 PM PDT by RogerFGay
[ Post Reply | Private Reply | To 86 | View Replies]

To: MortMan

BTW: I could never get all that excited about neural nets.


89 posted on 09/21/2010 3:49:26 PM PDT by RogerFGay
[ Post Reply | Private Reply | To 86 | View Replies]

To: RogerFGay

It was.

The moral of the story is that the allowable failure probability for software running some of these systems is ten to the minus ninth power - extreme improbability.

Allowing a computer to teach itself can’t reach that level of certitude.

Even the smallest glitch would result in industry-killing liability lawsuits the likes of which we’ve never seen.

The risk is not worth the reward.


90 posted on 09/21/2010 4:13:36 PM PDT by MortMan (Obama's response to the Gulf oil spill: a four-putt.)
[ Post Reply | Private Reply | To 88 | View Replies]

To: MortMan

If I get the paper written, I’m sure I’ll blog about it. The issues you raise are terribly obvious. I can’t think of any reason someone in the field wouldn’t understand them.


91 posted on 09/21/2010 4:21:20 PM PDT by RogerFGay
[ Post Reply | Private Reply | To 90 | View Replies]

To: RogerFGay

That’s because the “field” was a sandbox back when this experiment was performed. It was one of the first successful neural nets, and it had to be handled by people who knew the field it was dabbling in - satellite photo reconnaissance.

Hindsight is 20-20 in most instances, but sometimes the people in charge aren’t looking at the history one would like them to.

For many applications, I think it’s fine to use self-teaching software. At this point, I don’t agree that the aero field is ready to take the risk.

Have a great evening - NCIS is coming on, so I’m going offline. (A man has to have a FEW vices, right?!? ;-P)


92 posted on 09/21/2010 5:09:59 PM PDT by MortMan (Obama's response to the Gulf oil spill: a four-putt.)
[ Post Reply | Private Reply | To 91 | View Replies]

To: RogerFGay; BuckeyeTexan
The worst projects I have ever worked on were tested by either engineers or business experts. I appreciate everything you have said, but as a quality assurance professional with many years of experience I see a few problems. The biggest downfall of extensive reuse of code in large, complex assemblies is lack of documentation and late defect/issue discovery.

There is a great deal of benefit to using automated tests: they help ensure that positive flows through the code's logic work and that common error handling routines are sufficient. I have over a decade of OO automated tool experience, so I am not panning well-constructed automation, but I guarantee you I can break any code tested only in this manner.
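As a deliberately trivial illustration of that limit (the example is mine, not from any project in this thread, and it assumes JUnit 4.13 or later): tests like these pin down the positive flow and one common error path, and nothing more.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertThrows;
import org.junit.Test;

public class ParsingTest {

    @Test
    public void positiveFlowParsesAWellFormedNumber() {
        assertEquals(42, Integer.parseInt("42"));
    }

    @Test
    public void commonErrorPathRejectsGarbage() {
        assertThrows(NumberFormatException.class, () -> Integer.parseInt("forty-two"));
    }

    // What tests like these do NOT exercise: malformed encodings, huge inputs,
    // timing, resource exhaustion - the cases an experienced tester still finds.
}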

In my humble opinion, the path to achieving what you have outlined is:
- Components need to be simplified, documented, thoroughly tested, and rock solid.
- Developers need to be penalized for reinventing the wheel, unless it is truly a better wheel, and then the old wheel should be thrown out.
- QA needs to be brought into the process at inception, not after development has already started.
- QA cycles should not be compressed in order to make up for development overruns (yes I know that I'm dreaming here).
- Project management needs to play a much more active role in projects while they are in flight and learn to close them out when complete to the original scope.
- All major development methodologies are valid and sound if the rules are followed; however, most of the time the rules are bent or broken.

93 posted on 09/21/2010 6:22:01 PM PDT by Woodman
[ Post Reply | Private Reply | To 40 | View Replies]

To: MortMan

NCIS has been one of my favorites. Can’t keep my interest up in re-runs forever though (which is what we get where I am).

There are certain basic scientific rules that aren’t violated when you switch the form of implementation. Pattern recognition is one of those things I focused on as a student; and even though I did different things in the real world, there were plenty of bits of scientific wisdom from that part of my education that served me well.

Just to bring you up to date, there are now techniques using genetic programming, sometimes combined with neural networks, that do quite well.

Also, consider the fact that you - as a human - automatically notice things in your peripheral vision that grab your attention: a highly effective sort of early warning system. You don’t have to be staring right at something and thinking consciously about it, or study its many subtleties at length, to have your early-warning alarm tripped (and it doesn’t only apply to bad things).

This has led to some interesting experiments in improving recognition by actually reducing - filtering - the information being processed. Counter-intuitive, but it has improved reliability - and it’s faster, of course.
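If it helps, here is a toy sketch of that filtering idea (entirely illustrative - not from any of the experiments I mentioned): block-average a full-resolution grayscale frame down to a coarse, thresholded grid before the recognizer ever sees it.

// Toy illustration of "recognize better by processing less" - not real project code.
class PeripheralFilter {
    // Downsample by block-averaging, then threshold to a binary "attention map".
    static boolean[][] reduce(int[][] gray, int block, int threshold) {
        int rows = gray.length / block;
        int cols = gray[0].length / block;
        boolean[][] map = new boolean[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                long sum = 0;
                for (int i = 0; i < block; i++) {
                    for (int j = 0; j < block; j++) {
                        sum += gray[r * block + i][c * block + j];
                    }
                }
                // Far less data reaches the recognizer, which is faster and,
                // in the experiments referred to above, often more reliable.
                map[r][c] = (sum / (long) (block * block)) > threshold;
            }
        }
        return map;
    }
}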


94 posted on 09/22/2010 4:46:48 AM PDT by RogerFGay
[ Post Reply | Private Reply | To 92 | View Replies]

To: MortMan
On the more general topic of learning systems producing untested behavior on the fly - I really had planned a longer more detailed presentation with a variety of options. Let me just name the one that's easiest to accept. Use learning systems only in development. Take the result, test it thoroughly, adjust and supplement if needed; and release only the result - without running the learning engine in the final product.

What you get:

Rapid development of complex algorithms, some of which are potentially beyond the current capabilities of human analysts and programmers. Improved results with dramatic reductions in time and cost of development.
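A rough sketch of the option described above - all the names here (LearningEngine, Controller, and so on) are invented for illustration and are not part of HLL: the learning engine exists only in the development build, and what ships is a frozen controller that cannot modify itself.

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Fixed, reviewable behaviour - no learning at runtime.
interface Controller {
    double actuate(double[] sensorReadings);
}

// Used only in the lab/development environment.
interface LearningEngine {
    Controller evolve(List<double[]> trainingScenarios);
}

class DevelopmentTimeBuild {
    // Evolve a candidate, test/certify it, then freeze it into an artifact.
    static void buildAndFreeze(LearningEngine engine, List<double[]> scenarios,
                               Path artifact) throws Exception {
        Controller candidate = engine.evolve(scenarios);
        // ... exhaustive testing and review of `candidate` happen here ...
        Files.write(artifact, serialize(candidate));
    }

    static byte[] serialize(Controller c) {
        return new byte[0];   // placeholder; real serialization is out of scope
    }
}

class FieldedSystem {
    // The shipped product only loads the frozen controller; the LearningEngine
    // is not present in this build, so behaviour cannot change in the field.
    private final Controller controller;

    FieldedSystem(Controller frozen) { this.controller = frozen; }

    double step(double[] sensors) {
        return controller.actuate(sensors);
    }
}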
95 posted on 09/22/2010 4:51:30 AM PDT by RogerFGay
[ Post Reply | Private Reply | To 92 | View Replies]

To: Woodman

Yes, I agree with much of what you say. I don’t see why that would be a problem with the suggestions I’ve made. In fact, I’ve pointed out that, in keeping up with modern technology, modern project processes have become more agile. This means that project participants should be involved in the flow of work - much less like the olden days, when project emphasis would shift from one group to another in large phases.

The one thing about your comments that leaves me a little cold is the way you want QA people to monopolize testing. If engineers do no testing, then they’ll end up shipping a lot of stuff to QA that doesn’t work. No point in that. And software systems need to end up doing what they’re intended to do - which, in the best efforts, involves experts and specialists in the application area (and often end-use customers).

Each has a particular role within the quality assurance process.

I want a gold star for this comment: Quality is everyone’s concern!


96 posted on 09/22/2010 4:59:25 AM PDT by RogerFGay
[ Post Reply | Private Reply | To 93 | View Replies]

To: RogerFGay

Last night was the season premiere of NCIS.

I understand that I am not on the cutting edge of self-teaching software, but I’m afraid I will remain skeptical on its utility in aerospace.

The very concept of “develop (ad hoc), decouple, and thoroughly test” is well below the minimum requirements for process as they are currently written. I don’t see that changing any time soon - and I am on the committee working on the next generation of guidance on the subject.

To my knowledge, there have been very few aircraft incidents caused by software. I think I heard of a single one, but human memory is frail. To radically change what works - what makes for a high degree of safety in the fielded systems - doesn’t make all that much sense, especially when doing so requires adoption of unproven technology.

And, before you protest my choice of the word “unproven”, consider that the aerospace industry is just now coming to grips with OOT. We’re behind the times, but people stay alive...

Have a great day, Roger.


97 posted on 09/22/2010 5:24:42 AM PDT by MortMan (Obama's response to the Gulf oil spill: a four-putt.)
[ Post Reply | Private Reply | To 94 | View Replies]

To: MortMan
The very concept of “develop (ad hoc), decouple, and thoroughly test” is well below the minimum requirements for process as they are currently written.

I realize that there's quite a bit of work to do to disseminate information. Providing an overview of engineering approaches to meet a variety of situations was just my idea of a way to get started; kind of like the introductory chapter.

I've been working with people more involved with genetic programming. "Fitness functions" are used to define what the finished algorithm is supposed to do. There is no reason I can think of why the process of developing and maintaining the fitness functions should not be the same as in traditional development. The "fitness functions" are themselves programs - designed to work with a learning engine to produce a result. Input - program - output.

The fitness functions themselves can also include constraints; i.e., you can also specify what results are not allowed to do. So far, it's still quite traditional - except that reviewed and approved detailed requirements go directly into a program that engineers what the requirements tell it to produce.
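To make that concrete, here is a minimal sketch of a fitness function with a constraint folded in (interface and class names are illustrative; this isn't HLL code or any particular GP library's API):

// The evolved behaviour under evaluation.
interface CandidateProgram {
    double run(double input);
}

// "Fitness functions are themselves programs": input -> program -> output (a score).
interface FitnessFunction {
    double score(CandidateProgram candidate);   // higher is better
}

class TrackingFitness implements FitnessFunction {
    private final double[] inputs;
    private final double[] expected;
    private final double hardLimit;   // constraint: outputs beyond this are forbidden

    TrackingFitness(double[] inputs, double[] expected, double hardLimit) {
        this.inputs = inputs;
        this.expected = expected;
        this.hardLimit = hardLimit;
    }

    @Override
    public double score(CandidateProgram candidate) {
        double error = 0.0;
        for (int i = 0; i < inputs.length; i++) {
            double out = candidate.run(inputs[i]);
            if (Math.abs(out) > hardLimit) {
                return Double.NEGATIVE_INFINITY;   // constraint violated: reject outright
            }
            error += (out - expected[i]) * (out - expected[i]);
        }
        return -error;   // lower error means higher fitness
    }
}

The point is that the requirements live in reviewable code like this, which can be version-controlled, reviewed, and tested like any other program.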

Results are not just a black box filled with 1s and 0s. They can be produced in the form of more traditional high-level language structures, and you can pick apart and analyze results any way that you wish - testing at functional through systems level.

A particular area of interest to me and to HLL is the blending of evolution and design - i.e., automated machine learning combined with traditional fixed processes for structure and "high level" control. Safety assurance is one of the fundamental motivations for this interest.

But I'm getting ahead of the current state of the HLL project, in which machine learning hasn't even been mentioned yet and isn't on the current issues list. I do expect to blog on the subject sometime during the next 3 months or so, but I'll be (pleasantly) surprised - and probably happy to the point of excitement - if any demonstration of such emerges during that time. On the other hand, if other things are done in good time, I have been considering throwing in a simple demonstration using a relatively simple open-source GP engine - at least enough to show the mechanism of blending the two.
98 posted on 09/22/2010 6:00:21 AM PDT by RogerFGay
[ Post Reply | Private Reply | To 97 | View Replies]

To: MortMan
To my knowledge, there have been very few aircraft incidents caused by software.

Unless you consider the lack of it - both in aircraft and on the ground in development processes. Just one example to drive the thought: wind shear.
99 posted on 09/22/2010 6:41:14 AM PDT by RogerFGay
[ Post Reply | Private Reply | To 97 | View Replies]

To: RogerFGay

Actually, they now have systems designed to detect wind shear.

But one cannot count the absence of a system (including its software) to perform a function as the software contributing to a failure; the failure is caused by the lack of that system.

That’s kind of like saying “I turned into the wrong driveway and hit a tree I didn’t have”.


100 posted on 09/22/2010 7:00:18 AM PDT by MortMan (Obama's response to the Gulf oil spill: a four-putt.)
[ Post Reply | Private Reply | To 99 | View Replies]

