Wednesday, August 27, 2008

philosophers of mind = lazy?

Are philosophers of mind just lazy? Two points of evidence:

1) The excessive dependence of the literature on thought experiments which exhibit obvious inadequacies for addressing the subtle issues at stake.

Often, these thought experiments depend upon the "argument from lack of imagination": arguments of the form "consider bizarre situation X; you wouldn't say X exhibits [explanandum Y], would you? Therefore your theory of [explanandum Y] is inadequate." Usually Y = meaning, consciousness, intelligence, etc. The Chinese Room is a famous example here.

Unfortunately, this argument form is absurdly weak and can always be countered by an opponent who simply claims to have the requisite amount of imagination. ("Yes, I would say X exhibits [explanandum Y].") This strategy is pursued by Dennett in Consciousness Explained and is implicit in the likes of Hofstadter.

Another deficiency of these thought experiments is their flagrant physical impossibility. Why our philosophy of mind should be based primarily on the consideration of physically impossible situations is beyond me.

Consider, for example, Mary the Cognitive Scientist, who has supposedly been raised in a situation where she will never "see" (ie experience the qualia associated with) red (while nevertheless learning everything cognitive science has to say about color vision, etc., etc.). We are then asked to consider her reaction upon leaving her isolation chamber and seeing red for the first time.

But how would she be isolated from seeing red? Red surfaces are not needed for this experience; white light, appropriately manipulated, is all that's necessary (we've known this at least since Newton, which is an absurdly conservative estimate). In fact, no special apparatus is needed: simply pressing on one's closed eyelids while turned toward a light is enough to experience all the colors of the rainbow.

Suppose, however, that one could somehow prevent any redness triggers from reaching Mary's brain: would she then experience the same red qualia as the rest of us when finally exposed to the appropriate stimulus? This seems highly unlikely given what we know about brain development. Faculties not used during the appropriate developmental period (say, language in a wild child, or horizontal-line detectors in a cat's visual system) simply atrophy. If the experiment could be performed, most likely Mary could never experience red qualia, no matter what stimulus she was presented with.

So, thought experiments = lazy with research, lazy with argumentation.

2) If we rank theories of brain dynamics w/r/t popularity amongst philosophers and w/r/t the ease of the associated mathematics, the rankings match up precisely: the more popular the theory, the easier its math.

Consider three theories about the appropriate formalism for understanding intelligent behavior:

i) GOFAI (good old-fashioned AI), ie rule-based manipulation of logical formulae

ii) Connectionism, ie models based on simple "neuron-like" nodes arranged in an interconnected network

iii) Dynamical systems, ie dynamic models involving differential equations where arbitrarily small changes in initial conditions can result in drastically different behavior (see the sketch just after this list)
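To make (iii) less abstract, here is a minimal sketch in Python of the sensitivity in question, using the logistic map as a stand-in for a full continuous-time system. The parameter value, initial conditions, and function name are just illustrative choices, not anything drawn from the literature.

```python
# Sensitive dependence on initial conditions in the logistic map,
# x_{n+1} = r * x_n * (1 - x_n), a standard toy dynamical system.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two initial conditions differing by one part in a billion.
a = logistic_trajectory(0.400000000)
b = logistic_trajectory(0.400000001)

for n in (10, 20, 30, 40, 50):
    print(f"step {n:2d}: |difference| = {abs(a[n] - b[n]):.6f}")

# By around step 30 the two trajectories bear no resemblance to one
# another, even though they started a billionth apart.
```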

Despite the tendency on the part of both GOFAI and Connectionism to feign an embattled, minority status, I think it's fair to characterize some loose form of the logic-based picture as maintaining dominance, even if not in the extreme "GOFAI" form of its most famous defender, Fodor. Even supposing Connectionism is only a distant second, it is still far ahead of the dynamical systems approach.

Yet, the dynamical systems approach has a lot going for it:

a) It is the same formalism we use to model other complex systems in nature (weather, population dynamics, fluid flow). Why think the human brain is intrinsically simpler than these other natural systems?

b) The mathematics subsumes that of Connectionism (neural networks are a special case of a dynamical system; see the sketch after this list).

c) In the abstract, the formalism is indifferent between several levels of description (we could use it to describe interactions between neurons, or interactions within the electrical field generated by the brain, or something more abstract even).
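As a hedged illustration of point (b): a tiny recurrent network, written out explicitly, is nothing but an iterated map, ie a discrete-time dynamical system. The weights, bias, and choice of tanh activation below are arbitrary illustrative values, not a model of any actual neural circuit.

```python
import math

# A tiny recurrent network with state update x_{t+1} = tanh(W x_t + b).
# Viewed abstractly, this is just an iterated map on R^2: the "neurons"
# are the state variables and the weights parameterize the dynamics.

W = [[0.9, -1.2],
     [1.1,  0.4]]     # recurrent weights (illustrative values)
bias = [0.1, -0.3]    # bias terms (illustrative values)

def step(x):
    """One update of the network, ie one iteration of the map."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(W, bias)]

x = [0.5, 0.5]
for t in range(10):
    x = step(x)
    print(t, [round(v, 4) for v in x])
```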

Connectionism's main claim, that it is modeled on the behavior of actual neurons, falls apart with just a few moments' research into neural behavior. GOFAI's abject failure in the realm of linguistics and AI leaves little going for it other than a priori analysis into the essence of intelligence (well, and a lot of thought experiments . . .).

Note, however: the mathematics of GOFAI is just first-order logic, often simplified to propositional logic; pretty easy compared to the probability and continuous mathematics needed for the learning rules in connectionist networks. Connectionist networks, in turn, can be simplified, and sometimes even analyzed in terms of simple nonmonotonic logics; nothing compared to the difficulties surrounding the differential equations of a dynamical system.
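To give a feel for the gap in machinery, here is a rough sketch contrasting the two: forward chaining over propositional rules (the kind of thing GOFAI analysis runs on) next to a single perceptron update (about the simplest connectionist learning rule there is). The particular rules, facts, weights, and learning rate are made up purely for illustration.

```python
# (1) GOFAI-style inference: forward chaining over propositional rules
#     of the form "if all antecedents hold, conclude the consequent".
rules = [({"bird", "not_penguin"}, "flies"),
         ({"tweety_is_a_bird"}, "bird")]
facts = {"tweety_is_a_bird", "not_penguin"}

changed = True
while changed:
    changed = False
    for antecedents, consequent in rules:
        if antecedents <= facts and consequent not in facts:
            facts.add(consequent)
            changed = True
print("derived facts:", facts)

# (2) Connectionist learning: one step of the perceptron rule,
#     w <- w + eta * (target - output) * input.  Already this requires
#     real-valued weights and an error signal, and richer models need
#     calculus and probability to analyze convergence.
w = [0.0, 0.0]
eta = 0.1
x, target = [1.0, -1.0], 1.0
output = 1.0 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1.0
w = [wi + eta * (target - output) * xi for wi, xi in zip(w, x)]
print("updated weights:", w)
```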

As an example, analysis of logical systems is relatively easy (proving how they will behave: such metalogical proofs are the bread and butter of professional logicians and theoretical computer scientists). However, analysis of differential equations can be quite difficult (if not impossible). Many investigators resort to running simulations because top-down analysis (proofs about how the system will behave) is so difficult.
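For instance, here is a sketch of what "resorting to simulation" looks like in the simplest case: forward-Euler integration of the Lorenz equations, with the standard textbook parameters chosen only for illustration. There is no closed-form solution to prove theorems about; one just runs the system forward and watches.

```python
# Forward-Euler simulation of the Lorenz system, a canonical example of
# differential equations whose long-run behavior resists closed-form
# analysis, so investigators study it by simulation instead.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

state = (1.0, 1.0, 1.0)
for step in range(5001):            # run the system forward in time
    state = lorenz_step(state)
    if step % 1000 == 0:
        print(step, tuple(round(v, 3) for v in state))
```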

So, are the resistance of philosophers of mind to dynamical systems models, and the embarrassingly underinformed rejections of connectionist models by the likes of Fodor, just instances of armchair-philosopher laziness?

1 comment:

Anonymous said...

I have little to say about your contrast points, but I disagree with your assessment of thought experiments. While I think that relying on thought experiments excessively certainly harms fruitful discussion (and in the long run, progress on the, or an, issue), I also think that the appropriate use of such experiments can serve as a guide to further inquiry and experimentation. Imagining Y, given X, is sometimes a fruitful way of envisaging the range of plausible variation in background conditions to an experiment. It's possible that connectionism, or any one of the positions in the philosophy of mind, appeared ex nihilo, but it's far more likely that they appeared because of people engaging in thought experiments of the type that you suggest are not worthwhile. But you're also right to point out that reliance on said abstractia qua abstractia is unfruitful.