Jim Choate writes:
Also there is the potential to use neural networks at these levels (which are not necessarily reducible to Turing models; the premise has never been proven).
Uhh, gee; given that I've seen neural networks implemented on conventional computer systems, and as far as I know those were perfectly functional (if slow) neural networks, I think that pretty much proves it (as if it needed to be).
I'd say that the burden of proof is to demonstrate that there are algorithms implementable on a neural network which are unimplementable on a Turing machine. That'd be a pretty significant breakthrough.
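As a minimal sketch of the kind of thing being claimed here (my own illustration, not from either post): a single artificial neuron evaluated on a conventional machine. The weights, inputs, and step activation below are made up; the point is only that the update is finite arithmetic, which is exactly what a Turing-equivalent computer carries out.

#include <stdio.h>

#define N_INPUTS 3

/* one neuron: weighted sum plus bias, then a hard threshold */
static double neuron(const double w[], const double x[], double bias)
{
    double sum = bias;
    int i;
    for (i = 0; i < N_INPUTS; i++)
        sum += w[i] * x[i];
    return (sum > 0.0) ? 1.0 : 0.0;
}

int main(void)
{
    double w[N_INPUTS] = { 0.5, -0.25, 0.75 };  /* illustrative weights */
    double x[N_INPUTS] = { 1.0,  1.0,  0.0 };   /* illustrative inputs  */
    printf("output = %g\n", neuron(w, x, -0.1));
    return 0;
}

A whole network is just many of these evaluated in sequence, which is why software simulations of neural networks on ordinary hardware are slow but functional.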
The bottom line is that this whole area is an unknown, and if we persist in carrying unproven assumptions from the macro-world over into the QM model we WILL be in for a nasty surprise.
Complexity theory doesn't have anything to do with any world, macro- or micro- or mega- or whatever. It's mathematics.
-- 
| GOOD TIME FOR MOVIE - GOING |||  Mike McNally <m5@tivoli.com>        |
| TAKE TWA TO CAIRO.          |||  Tivoli Systems, Austin, TX:         |
| (actual fortune cookie)     |||  "Like A Little Bit of Semi-Heaven"  |
I use both digital and analog circuits in some of my designs, and they are not necessarily reducible to one another. Just because you can implement a neural network on a conventional-architecture machine to solve a problem does not a priori prove anything about the reducibility of the technology. I would have to say that 'spin glass' model neural networks might be such a model. However, either way you approach it (yours or mine) it has not been done, and assuming it is the same will lead to some problems.

Complexity theory is mathematics, so I would have to say your last assertion is total drivel.
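For reference, a 'spin glass' model network of the kind mentioned above is usually taken to mean something Hopfield-like: +/-1 spins coupled by a symmetric matrix. A minimal sketch of one asynchronous update as it is typically simulated digitally follows (couplings and initial states are made up for illustration); whether such a simulation captures everything about a physical analog device is exactly the point in dispute here.

#include <stdio.h>

#define N 4

int main(void)
{
    /* symmetric couplings J[i][j] with zero diagonal; values are made up */
    double J[N][N] = {
        { 0.0,  1.0, -0.5,  0.2 },
        { 1.0,  0.0,  0.3, -0.7 },
        {-0.5,  0.3,  0.0,  0.4 },
        { 0.2, -0.7,  0.4,  0.0 }
    };
    int s[N] = { 1, -1, 1, -1 };    /* initial +/-1 spin states */
    int i, j;

    /* one asynchronous sweep: each spin aligns with its local field */
    for (i = 0; i < N; i++) {
        double field = 0.0;
        for (j = 0; j < N; j++)
            field += J[i][j] * s[j];
        s[i] = (field >= 0.0) ? 1 : -1;
    }

    for (i = 0; i < N; i++)
        printf("s[%d] = %d\n", i, s[i]);
    return 0;
}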