A Practical Logic of Cognitive Systems - The Reach of Abduction: Insight and Trial


by: Dov M. Gabbay, John Woods

Elsevier Trade Monographs, 2005

ISBN: 9780080460925, 496 pages


Chapter 2

Practical Logic


Dov M. Gabbay    Department of Computer Science, King's College London, Strand, London, WC2R 2LS, U.K.

John Woods    Philosophy Department University of British Columbia, Vancouver, BC Canada, V6T 1Z1
Department of Computer Science King's College London Strand, London, WC2R 2LS, U.K.

… for all the proclaimed rationality of modern humans and their institutions, logic touches comparatively little of human practice.

Richard Sylvan

[T]he limit on human intelligence up to now has been set by the size of the brain that will pass through the birth canal …. But within the next few years, I expect we will be able to grow babies outside the human body, so this limitation will be removed. Ultimately, however, increases in the size of the human brain through genetic engineering will come up against the problem that the body’s chemical messengers responsible for our mental activity are relatively slow-moving. This means that further increases in the complexity of the brain will be at the expense of speed. We can be quick-witted or very intelligent, but not both.

Stephen Hawking

2.1 First Thoughts on a Practical Logic


The theory of abduction that we develop in this volume is set up to meet two conditions. One is that it show how abduction plays within a practical logic of cognitive systems. The other is that, to the extent possible, it serve as an adequate standalone characterization of abduction itself. In the first instance we try to get the logic of cognitive systems right, though with specific attention to the operation of abduction. In the second instance, we try to get abduction right; and we postulate that our chances of so doing improve when the logic of abduction is lodged in this more comprehensive practical logic.

We open this chapter with a brief discussion of what we take such a logic to be. Readers who wish a detailed discussion can consult chapters 2 and 3 of the companion volume, Agenda Relevance: A Study in Formal Pragmatics. Other readers, who may be eager to get on with abduction without these prefatory remarks, can go directly to section 3.1.

In the prequel to this book we adopted a convention for flagging the more important of the claims and ideas advanced by our conceptual model of the relevance relation. Key claims that we were prepared to assert with some confidence we flagged as (numbered) definitions or propositions. Ideas that called for greater tentativeness we flagged as (numbered) propositions prefixed with the symbol . We here follow the same practice for abduction.

2.1.1 A Hierarchy of Agency Types


We take the position that reasoning is an aid to cognition; a logic, when conceived of as a theory of reasoning, must take this cognitive orientation deeply into account. Accordingly, we will say that a cognitive system is a triple of a cognitive agent, cognitive resources, and a cognitive agenda executed in real time. (See here [Norman, 1993; Hutchins, 1995].) Correspondingly, a logic of a cognitive system is a principled description of conditions under which agents deploy resources in order to perform cognitive tasks. Such a logic is a practical logic when the agent it describes is a practical agent. So, then,

Definition 2.1

Cognitive systems

A cognitive system CS is a triple 〈X, R, A〉 of a cognitive agent X, cognitive resources R, and a cognitive agenda A executed in real time.

Definition 2.2

Practical logics, a first pass

A practical logic is a systematic account of aspects of the behaviour of a cognitive system in which X is a practical agent.
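Definitions 2.1 and 2.2 can be made concrete with a small data structure. The sketch below is purely illustrative: the class and field names are our own, and the sample resources and agenda items are hypothetical stand-ins for the authors' informal categories.

```python
from dataclasses import dataclass

# A minimal rendering of Definition 2.1's triple <X, R, A>.
# All names here are illustrative, not the authors' notation.
@dataclass
class CognitiveSystem:
    agent: str        # the cognitive agent X
    resources: dict   # the cognitive resources R (information, time, computational capacity)
    agenda: list      # the cognitive agenda A, executed in real time

# A practical agent, per Definition 2.2: modest resources, modest goals.
cs = CognitiveSystem(
    agent="practical agent",
    resources={"information": "partial", "time": "limited", "compute": "bounded"},
    agenda=["belief revision", "decision making"],
)
```

The point of the structure is only that a practical logic describes all three components together, not the agent in isolation.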

A practical logic is but an instance of a more general conception of logic. The more general notion is reasoning that is target-motivated and resource-dependent. Correspondingly, a logic that deals with such reasoning is a Resource-Target Logic (RT-logic). In our use of the term, a practical logic is an RT-logic relativized to practical agents.

How agents perform is constrained in three crucial ways: in what they are disposed towards doing or have it in mind to do (i.e., their agendas); in what they are capable of doing (i.e., their competence); and in the means they have for converting competence into performance (i.e., their resources). Loosely speaking, agendas are programmes of action, exemplified by belief-revision and belief-update, decision-making and various kinds of case-making and criticism transacted by argument. For ease of exposition we classify this motley of practices under the generic heading “cognitive”, and we extend the term to those agents whose practices these are.1

An account of cognitive practice should include an account of the type of cognitive agent involved. Agency-type is set by two complementary factors. One is the degree of command of resources an agent needs to advance or close his (or its) agendas. For cognitive agendas, three types of resources are especially important. They are (1) information, (2) time, and (3) computational capacity. The other factor is the height of the cognitive bar that the agent has set for himself. Seen this way, agency-types form a hierarchy H partially ordered by the relation C of commanding-greater-resources-in-support-of-higher-goals-than. H is a poset (a partially ordered set) fixed by the ordered pair 〈X, C〉 of the set of agents X and the relation C on it.
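One natural way to read the relation C is as a componentwise (product) order over the three resource dimensions plus goal height: one agency-type commands greater resources in support of higher goals than another only if it meets or exceeds it on every dimension. The sketch below, with hypothetical numeric proxies of our own invention, shows why this yields a partial rather than total order: agents can be incomparable.

```python
from dataclasses import dataclass

# Hypothetical numeric proxies for the text's resource dimensions;
# the scale and values are illustrative only.
@dataclass(frozen=True)
class AgencyType:
    information: float
    time: float
    compute: float
    goal_height: float

def commands_more(a: AgencyType, b: AgencyType) -> bool:
    """A sketch of the relation C as a strict product order:
    a C b iff a meets or exceeds b on every dimension and
    strictly exceeds it on at least one."""
    dims = ("information", "time", "compute", "goal_height")
    ge = all(getattr(a, d) >= getattr(b, d) for d in dims)
    gt = any(getattr(a, d) > getattr(b, d) for d in dims)
    return ge and gt

individual = AgencyType(1, 1, 1, 1)
institution = AgencyType(5, 5, 5, 4)   # e.g., a research team or NASA
savant = AgencyType(3, 0.5, 2, 2)      # more information, but less time

print(commands_more(institution, individual))  # True: dominates on every dimension
print(commands_more(savant, individual))       # False: incomparable (less time)
print(commands_more(individual, savant))       # False: incomparable the other way too
```

Because incomparable pairs exist, the hierarchy H is only partially ordered, which is exactly what the text's talk of a poset requires.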

Human agency ranks low in H. Setting aside the question of the membership in H of non-human primates, we could say that in the H-space humans are the lowest of the low. In the general case the cognitive resources of information, time and computational capacity are for human agents comparatively less abundant than for agents of higher type, and their cognitive goals are comparatively more modest. For large classes of cases, humans perform their cognitive tasks on the basis of less information and less time than they might otherwise like to have, and under limitations on the processing and manipulating of complexity. Even so, paucity must not be confused with scarcity.2 There are cases galore in which an individual’s resources are adequate for the attainment of the attendant goal. In a rough and ready way, we can say that the comparative modesty of an agent’s cognitive goals inoculates him against cognitive-resource scarcity. But there are exceptions, of course.

Institutional entities contrast with human agents in all these respects. A research group usually has more information to work with than any individual, and more time at its disposal; and if the team has access to the appropriate computer networks, more fire-power than most individuals even with good PCs. The same is true, only more so, for agents placed higher in the hierarchy — for corporate actors such as NASA, and collective endeavours such as quantum physics since 1970. Similarly, the cognitive agendas that are typical of institutional agents are by and large stricter than the run-of-the-mill goals that motivate individual agents. In most things, NASA aims at stable levels of scientific confirmation, but, for individuals the defeasibly plausible often suffices for local circumstances.

These are vital differences. Agencies of higher rank can afford to give maximization more of a shot. They can wait long enough to make a try for total information, and they can run the calculations that close their agendas both powerfully and precisely. Individual agents stand conspicuously apart. For most tasks, the human cognitive agent is a satisficer. He must do his business with the information at hand, and, much of the time, sooner rather than later. Making do in a timely way with what he knows now is not just the only chance of achieving whatever degree of cognitive success is open to him as regards the agenda at hand; it may also be what is needed in order to avert unwelcome disutilities, or even death. (We do not, when seized by an onrushing-tiger experience, wait before fleeing for a refutation of skepticism about the external world or a demonstration that the approaching beast is not an hallucination.)

Given the comparative humbleness of his place in H, the human individual is frequently faced with the need to practise cognitive economies. This is certainly so when either the loftiness of his goal or the supply of drawable resources creates a cognitive strain. In such cases, he must turn scantiness to advantage. That is, he must (1) deal with his resource-limits and in so doing (2) must do his best not to kill himself. There is a tension in this dyad. The paucities with which the individual is chronically faced are often the natural enemy of getting things right, of producing accurate and justified answers to the questions posed by his agenda. And yet not only do human beings contrive to get most of what they do right enough not to be killed by it, they also in varying degrees prosper and flourish.

This being so, we postulate for the individual agent slight-resource adjustment strategies (SRAS), which he uses to advantage in dealing with the cognitive limitations that inhere in the paucities presently in view. We make this assumption in the spirit of Simon...