While I came up with such a design, this was purely accidental, and I had no idea, at first, that I was doing anything questionable. I was therefore very familiar with the frequent problems of "known unknowns" and the less common, but more difficult, problems when "unknown unknowns" revealed themselves. To me it was laughable that anyone faced with such complex information-processing problems would attempt to construct a "do it all" pre-defined application program. My "naive novice" solution to the commercial task mentioned earlier was no more than micro-managing the task as I would have done if I had been doing it manually. The problem was that what I was doing was philosophically incompatible with the "religion" of the Stored Program Computer, and through the 1970s and 1980s I found it impossible to get a university grant, or to get many peer-reviewed papers past the deeply entrenched computer science "priesthood".
For all I know there may have been other researchers starting along similar lines who were similarly jumped on for daring to suggest that the foundations that underlie the stored program computer approach may have some cracks in them. [No. 2, 1990, pp. 155-163. Due to the delays in the peer review system this was published over two years after I had decided to throw in the towel!]
The last difference: a computer can't "feel". Imagine the amount of work our brain goes through for us to "feel" something. And of course, intuition too.
#1 and #7 are not true. I am pretty sure that McCulloch, Pitts, Minsky and Papert mathematically demonstrated that the computer is an excellent analogy for the brain. Just go read their papers and you'll see. Computer science is a very powerful tool for neuroscience and psychology, because CS provides a mathematically rigorous framework for describing and solving problems in those fields. Without these formal methods, psychology wouldn't even be a science. Btw, I'm not a computer scientist, I'm a molecular biologist.
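To make the McCulloch-Pitts point concrete, here is a minimal Python sketch (my own illustration, not code from their papers) of a threshold unit of the kind they analysed: binary inputs, fixed weights, and a hard threshold, which is already enough to realise Boolean logic gates.

    # A McCulloch-Pitts style threshold unit: binary inputs, fixed weights,
    # and a hard threshold. Weights and thresholds here are illustrative only.
    def mp_neuron(inputs, weights, threshold):
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # With suitable weights and thresholds the same unit computes AND or OR.
    def AND(a, b): return mp_neuron([a, b], [1, 1], 2)
    def OR(a, b):  return mp_neuron([a, b], [1, 1], 1)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))

This is only the formal skeleton, of course; the interesting argument is about how far such units go as a model of real neurons.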
Similarly, one could imagine there being a "language module" in the brain, as there might be in computers with natural language processing programs. Cognitive psychologists even claimed to have found this module, based on patients with damage to a region of the brain known as Broca's area. More recent evidence has shown that language too is computed by widely distributed and domain-general neural circuits, and Broca's area may also be involved in other computations (see here for more on this).
The problem comes from assuming that this is where the processing of that particular thing is done.
This point follows naturally from the previous point: experience profoundly and directly shapes the nature of neural information processing in a way that simply does not happen in traditional microprocessors. For example, the brain is a self-repairing circuit; something known as "trauma-induced plasticity" kicks in after injury. This can lead to a variety of interesting changes, including some that seem to unlock unused potential in the brain (known as acquired savantism), and others that can result in profound cognitive dysfunction (as is unfortunately far more typical in traumatic brain injury and developmental disorders).
Each of your points is either not true, or it merely names an area where the brain currently operates at a high level of complexity.
The modularity issue I’m intrigued by. Clearly the areas of the brain are not as discrete as those in our computers, but don’t we still refer to experience occurring in the neocortex? Although I really don’t know enough about this (and I want to know more!) there must be some level of modularity occurring in the brain. My gut instinct is telling me here that a brain based completely on spaghetti wiring just wouldn’t work very well.
Before reading this article I would have argued it is primarily a software problem. Now I have to agree that it is also a hardware problem. But who's to say we can't simulate (if not develop) hardware that will work sufficiently similarly to our organic hardware? Then it will still come down to a software problem. And we just don't have a good model of how human "brain software" works. And I don't think we will for a very long time.
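As a very rough illustration of what "simulating the hardware" could look like in software, here is a toy Python sketch (parameters entirely mine, purely illustrative) of a single leaky integrate-and-fire neuron. Real organic hardware is vastly more complicated, but the point is that its dynamics can, in principle, be modelled numerically.

    # Toy leaky integrate-and-fire neuron: integrate an input current,
    # leak toward rest, and emit a spike whenever a threshold is crossed.
    # All constants are arbitrary and purely illustrative.
    def simulate_lif(current=1.5, threshold=1.0, leak=0.1, dt=0.01, steps=500):
        v = 0.0                  # membrane potential (arbitrary units)
        spike_times = []
        for step in range(steps):
            v += dt * (current - leak * v)   # integration plus leak
            if v >= threshold:               # fire and reset
                spike_times.append(round(step * dt, 2))
                v = 0.0
        return spike_times

    print(simulate_lif())        # regular spike times for a constant input

Whether such a simulation is "sufficiently similar" to the real thing is exactly the open question.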
But like I said before, you can model a raisin with a housefly.
Before I annoy anyone, here's something for Jonathan: As far as modularity goes, there is some. You can predict what kind of deficits a person will have based on where an injury occurs. To use computers as an analogy (I can't help it), if we cut the power cord on a computer it stops adding numbers together. Thus addition takes place in the power cord.
Lisa and Richard both make the very interesting point that metaphorical reasoning appears to be a necessary component for understanding complex things, in particular the brain.
I'm arguing that the way the RAM works is important, because one doesn't need to have the same limitations in RAM behaviour that one has in the brain. You *can* model the limitations on a sufficiently powerful computer.
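To make that concrete, here is a crude Python sketch (the capacity and decay figures are purely illustrative, not claims about the brain) of deliberately giving ordinary RAM-backed storage brain-like limitations: a small-capacity store whose items fade unless they are rehearsed.

    import time

    # A small-capacity, decaying store: items are lost after `lifetime`
    # seconds unless recall() "rehearses" them; when full, the least
    # recently rehearsed item is evicted. Parameters are illustrative only.
    class DecayingMemory:
        def __init__(self, capacity=7, lifetime=5.0):
            self.capacity = capacity
            self.lifetime = lifetime
            self.items = {}          # item -> time of last rehearsal

        def _forget_expired(self):
            now = time.time()
            self.items = {k: t for k, t in self.items.items()
                          if now - t < self.lifetime}

        def store(self, item):
            self._forget_expired()
            if len(self.items) >= self.capacity:
                oldest = min(self.items, key=self.items.get)
                del self.items[oldest]
            self.items[item] = time.time()

        def recall(self, item):
            self._forget_expired()
            if item in self.items:
                self.items[item] = time.time()   # rehearsal refreshes the trace
                return True
            return False

The point is only that the limitation lives in the model, not in the RAM itself.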
Another thing to consider is that maybe the solution isn't so much in recreating the brain in a computer. I like to use flight, another technology, as an analogy: we've created rockets and jets and shuttles, etc., which achieve flight, but not in the same way that birds do. A simple example, perhaps, but I think if you spoke to scientists 150 or 200 years ago, maybe they would be focused on all the reasons why flight might be impossible, pointing to all the details of why we can't make a machine that is like a bird.
Well, if that is true, then would a machine processing somewhat less than massive amounts of information fall short of being conscious? It is going too far, logically, to assert that a set of electrical impulses becomes consciousness simply because a similar set of impulses is observed to coincide with awareness in a human brain. At most, one could say it is a necessary condition that some such process be present for the associated experience of consciousness to occur in a living brain.
It may be true that their critique had the effect of making neural networks in general such an unpopular topic that the modern analysis of three-layer networks was delayed by a few years, but one can hardly blame Minsky and Papert for that, or for not developing or presaging an entire new field of research, one that they were both early to acknowledge to be of immense importance. Science advances by bad, but influential, ideas being conclusively refuted, as well as by great ideas coming to the fore.