BY: RICHARD J. KOSCIEJEW
To a first approximation, cognitive science agrees with everyday notions about reasoning: reasoning is a deliberate act or process of thinking, by which one forms or modifies ideas, and a proficiency, natural or acquired, at which a person may be more or less skilled. According to both cognitive science and everyday opinion, however, reasoning is a special sort of relation between beliefs - a relation that holds when accepting (or rejecting) one or more beliefs causes others to be accepted (rejected). If you learn, for example, that everyone dislikes iguana pudding, that should increase the likelihood of your believing that Calvin, in particular, dislikes iguana pudding. Reasoning could produce an entirely new belief about Calvin’s attitude toward the pudding, or it could modify an old one. In either case, accepting the second idea on the basis of the first exemplifies reasoning of the simplest sort. More complex reasoning results from chains of such changes. (Café Maudit serves everything Calvin dislikes. So, since everybody dislikes iguana pudding, Calvin does; since Calvin does, Café Maudit serves it.)
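The chain of belief changes in the Calvin example can be sketched as a toy forward-chaining exercise. This is only an illustration of the structure of the inference; the rules and fact-tuples below are invented for the example, not drawn from any real inference library.

```python
# Toy forward-chaining sketch of the Calvin example (illustrative only).
# Beliefs are modelled as (relation, subject, object) tuples.
beliefs = {("dislikes", "everyone", "iguana pudding")}

def update(beliefs):
    """Apply two simple rules repeatedly until no new beliefs arise."""
    changed = True
    while changed:
        changed = False
        new = set()
        for b in beliefs:
            # Rule 1: everyone dislikes X, so Calvin in particular dislikes X.
            if b[0] == "dislikes" and b[1] == "everyone":
                new.add(("dislikes", "Calvin", b[2]))
            # Rule 2: Cafe Maudit serves everything Calvin dislikes.
            if b[0] == "dislikes" and b[1] == "Calvin":
                new.add(("serves", "Cafe Maudit", b[2]))
        if not new <= beliefs:
            beliefs |= new
            changed = True
    return beliefs

result = update(beliefs)
print(("serves", "Cafe Maudit", "iguana pudding") in result)  # prints True
```

The point of the sketch is that accepting one belief causally produces acceptance of others, and that chaining such single steps yields the more complex conclusion about Café Maudit.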
The distinction between truths of ‘reason’ and truths of ‘fact’ is associated with Leibniz, who declares that there are only two kinds of truths: truths of reason and truths of fact. The former are all either explicit identities, i.e., of the form ‘A is A’, ‘AB is B’, etc., or they are reducible to this form by successively substituting equivalent terms. Leibniz dubs them ‘truths of reason’ because the explicit identities are self-evident a priori truths, whereas the rest can be converted to such by purely rational operations. Because their denial involves a demonstrable contradiction, Leibniz also says that truths of reason ‘rest on the principle of contradiction, or identity’ and that they are necessary propositions, which are true of all possible worlds. Some examples are ‘All equilateral rectangles are rectangles’ and ‘All bachelors are unmarried’: The first is already of the form ‘AB is B’ and the latter can be reduced to this form by substituting ‘unmarried man’ for ‘bachelor’. Other examples, or so Leibniz believes, are ‘God exists’ and the truths of logic, mathematics and geometry.
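Leibniz’s reduction procedure - substituting equivalent terms until the predicate appears as a conjunct of the subject (the ‘AB is B’ form) - can be illustrated with a toy substitution step. The dictionary of definitions and the word-level containment test below are simplifications chosen for this example, not Leibniz’s own apparatus.

```python
# Toy illustration of reducing a truth of reason to the identity form
# 'AB is B' by substituting equivalent terms (simplified for the example).
definitions = {"bachelor": "unmarried man"}  # assumed equivalence

def reduce_to_identity(subject, predicate):
    """Expand defined terms in the subject, then check whether the
    predicate is contained in the expanded subject ('AB is B')."""
    while subject in definitions:
        subject = definitions[subject]
    return predicate in subject.split()

# 'All bachelors are unmarried' reduces to 'All unmarried men are unmarried'.
print(reduce_to_identity("bachelor", "unmarried"))  # prints True
# 'All equilateral rectangles are rectangles' is already of the form 'AB is B'.
print(reduce_to_identity("equilateral rectangle", "rectangle"))  # prints True
```

On this picture, the proposition counts as a truth of reason because a finite chain of such substitutions exposes an explicit identity.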
Truths of fact, on the other hand, cannot be reduced to an identity, and our only way of knowing them is a posteriori, or by reference to the facts of the empirical world. Likewise, since their denial does not involve a contradiction, their truth is merely contingent: they could have been otherwise, and hold of the actual world, but not of every possible one. Some examples are ‘Caesar crossed the Rubicon’ and ‘Leibniz was born in Leipzig’, as well as propositions expressing correct scientific generalizations. In Leibniz’s view, truths of fact rest on the principle of sufficient reason, which states that nothing can be so unless there is a reason it is so. This reason is that the actual world (by which he means the total collection of things past, present and future) is better than any other possible world and was therefore created by God.
In defending the principle of sufficient reason, Leibniz runs into serious problems. He believes that in every proposition, the notion of the predicate is contained in that of the subject (this holds even for propositions like ‘Caesar crossed the Rubicon’: Leibniz thinks anyone who did not cross the Rubicon would not have been Caesar). And this containment relationship - which is eternal and unalterable even by God - guarantees that every truth has a sufficient reason. If truth consists in concept containment, however, then it seems that all truths are analytic and so necessary: and if they are all necessary, surely they are all truths of reason. Leibniz responds that not every truth can be reduced to an identity in a finite number of steps: in some instances, revealing the connection between subject and predicate concepts would require an infinite analysis. But while this may entail that we cannot prove such propositions a priori, it does not appear to show that the propositions could have been false. Intuitively, it seems a better ground for supposing that they are necessary truths of a special sort. A related question arises from the idea that truths of fact depend on God’s decision to create the best world: if it is part of the concept of this world that it is best, however, how could its existence be other than necessary? Leibniz answers that its existence is only hypothetically necessary, i.e., it follows from God’s decision to create this world, but God had the power to decide otherwise. Yet God is necessarily good, so how could he have decided to do anything else? Leibniz says much more about these matters, but it is not clear whether he offers any satisfactory solution.
Considerations that call for or justify action are reasons for action. They may be subjective or objective: a subjective reason is a consideration an agent understands to support a course of action, whether or not it actually does; an objective reason is one that does support a course of action, regardless of whether the agent realizes it. What is cited as a reason may be a matter either of fact or of value, but when facts are cited, values are also relevant: thus the fact that cigarette smoke contains nicotine is a reason for not smoking only because nicotine has undesirable effects. The most important evaluative reasons are normative reasons, i.e., considerations having (e.g.) ethical force. Facts become obligating reasons when, in conjunction with normative considerations, they give rise to an obligation. Thus, in view of the obligation to help the needy, the fact that others are hungry is an obligating reason to see that they are fed.
Reasons for action enter practical thinking as the contents of beliefs, desires, and other mental states, but not all the reasons one has need motivate the corresponding behaviour. Thus, I may recognize an obligation to pay taxes, yet do so only for fear of punishment. If so, then only my fear is an explaining reason for my action. An overriding reason is one that takes precedence over all others; it is often claimed that moral reasons override all others objectively, and should do so subjectively as well. Finally, one may speak of an all-things-considered reason - one that, after deliberation, is taken as finally determinative of what will be done.
The word ‘belief’ is commonly used to designate both a particular dispositional psychological state - a state of believing - and a particular intentional content or proposition believed. Belief is a state in virtue of which a person will assent to a proposition under certain conditions; propositional knowledge, traditionally understood, entails belief. Reasons for belief are roughly the bases of belief. A behavioural view implies that beliefs are just dispositions to behave in certain ways: your believing that the stove is hot is just your being disposed to act in a manner appropriate to its being hot. The problem is that our beliefs, including their propositional content indicated by a ‘that’ clause, typically explain why we do what we do.
A state-object view implies that belief consists of a special relation between a psychological state and an object of belief - what is believed. This view allows that beliefs are dispositional rather than episodic, since they can exist while no action is occurring. Beliefs are either occurrent or non-occurrent: an occurrent belief, unlike a non-occurrent belief, requires current assent to the proposition believed. If the assent is self-conscious, the belief is an explicit occurrent belief; if the assent is not self-conscious, the belief is an implicit occurrent belief. The category of non-occurrent belief allows that we do not cease to believe that 2 + 2 = 4, for instance, merely because we now happen to be thinking of something else or nothing at all.
Reasons for belief exhibit an analogous duality. A proposition, ‘p’, might be said to provide a normative reason to believe a proposition, ‘q’, when ‘p’ bears some appropriate warranting relation to ‘q’; and ‘p’ might afford a perfectly good reason to believe ‘q’ even though no one, as a matter of fact, believes either ‘p’ or ‘q’. In contrast, ‘p’ is a reason that I have for believing ‘q’ if I believe ‘p’ and ‘p’ counts as a reason (in the sense just referred to) to believe ‘q’. Undoubtedly, I have reason to believe countless propositions that I will never, as it happens, come to believe. Suppose, however, that ‘p’ is a reason for which I believe ‘q’. In that case, I must believe both ‘p’ and ‘q’, and ‘p’ must be a reason to believe ‘q’ - or, at any rate, I must regard it as such. It may be that I must, in addition, believe ‘q’, at least in part, because I believe ‘p’.
Reasons in these senses are inevitably epistemic: they turn on considerations of evidence, truth-conduciveness, and the like. But not all reasons for belief are of this sort. An explanatory reason - a reason why I believe ‘p’ - is whatever explains my having or coming to have this belief. Perhaps I believe ‘p’ because I was brainwashed, or struck on the head, or because I have strong non-epistemic motives for this belief; or I might, of course, hold the belief on the basis of unexceptionable epistemic grounds. When this is so, my believing ‘p’ may both warrant and explain my believing ‘q’. Reflection of this sort can lead to questions concerning the overall or ‘all-things-considered’ reasonableness of a given belief. Some philosophers (e.g., Clifford) argue that a belief’s reasonableness depends exclusively on its epistemic standing: my believing ‘p’ is reasonable for me provided it is epistemically reasonable for me; where belief is concerned, epistemic reasons are overriding. Others, siding with James, have focussed on the role of belief in our psychological economy, arguing that the reasonableness of my holding a given belief can be affected by a variety of non-epistemic considerations. Suppose I have some evidence that ‘p’ is false, but that I stand to benefit in a significant way from coming to believe ‘p’. If that is so, and the practical advantages of my holding ‘p’ considerably outweigh the practical disadvantages, it might seem obvious that my holding ‘p’ is reasonable for me in some all-embracing sense.
Externalism (about reasons) is the view that there are objective reasons for action that are not dependent on the agent’s desires, and are in that sense external to the agent. Internalism (about reasons) is the view that reasons for action must be internal in the sense that they are grounded in motivational facts about the agent, e.g., her desires and goals. Classic internalists, such as Hume, deny that there are objective reasons for action. For instance, whether the fact that an action would promote health is a reason to do it depends on whether one has a desire to be healthy; it may be a reason for some and not for others. The doctrine is, hence, a version of relativism: a fact is a reason only insofar as it is so connected to an agent’s psychological states that it can motivate the agent. By contrast, externalists hold that not all reasons depend on the internal states of particular agents. Thus, an externalist could hold that promoting health is objectively good and that the fact that an action would promote one’s health is a reason to do it, regardless of whether one desires health.
This dispute is closely tied to the debate over motivational internalism, which may be conceived as the view that moral beliefs (for instance) are, by virtue of entailing motivation, internal reasons for action. Those who reject motivational internalism must either deny that sound moral beliefs always provide reasons for action or hold that they provide external reasons.
Even so, the distinction between reasons and causes is motivated in good part by a desire to separate the rational from the natural order. Historically, it probably traces back, at least to Aristotle’s similar (but not identical) distinction between final and efficient cause. Recently, the contrast has been drawn primarily in the domain of actions and, secondarily, elsewhere.
Many who have insisted on distinguishing reasons from causes have failed to distinguish two kinds of reason. Consider my reason for sending a letter by express mail. Asked why I did so, I might say I wanted to get it there in a day, or simply: to get it there in a day. Strictly, the reason is expressed by ‘to get it there in a day’. But what this expresses is my reason only because I am suitably motivated; I am in a reason state: wanting to get the letter there in a day. It is reason states - especially wants, beliefs and intentions - and not reasons strictly so called, that are candidates for causes. The latter are abstract contents of propositional attitudes; the former are psychological elements that play motivational roles.
If reason states can motivate, however, why (apart from confusing them with reasons proper) deny that they are causes? For one thing, they are not events, at least in the usual sense entailing change: they are dispositional states (this contrasts them with occurrences, but does not imply that they admit of dispositional analysis). It has also seemed, to those who deny that reasons are causes, that the former justify, as well as explain, the actions for which they are reasons, whereas the role of causes is at most to explain. Another claim is that the relation between reasons (or reason states; often these are cited explicitly) and the actions they explain is non-contingent, whereas the relation of causes to their effects is contingent. The ‘logical connection argument’ proceeds from this claim to the conclusion that reasons are not causes.
There is, then, a clear distinction between reasons proper and causes, and even between reason states and event causes: but the distinction cannot be used to show that the relation between reasons and the actions they justify is in no way causal. Precisely parallel points hold in the epistemic domain (and for all the propositional attitudes, since they all similarly admit of justification and explanation by reasons). Suppose my reason for believing that you received my letter today is that I sent it by express yesterday. My reason, strictly speaking, is that I sent it by express yesterday; my reason state is my believing this. Arguably, my reason justifies the further proposition I believe, for which it is my reason, and my reason state - my evidential belief - both explains and justifies my belief that you received the letter today. I can say that what justifies that belief is (the fact) that I sent the letter by express yesterday. Nevertheless, this statement expresses my believing that evidence proposition, and if I do not believe it then my belief that you received the letter is not justified: it is not justified by the mere truth of the proposition (and can be justified even if that proposition is false).
Similarly, there are, for belief as for action, at least five main kinds of reason: (1) normative reasons - reasons (objective grounds) there are to believe (say, to believe that there is a greenhouse effect); (2) person-relative normative reasons - reasons for (say) me to believe; (3) subjective reasons - reasons I have to believe; (4) explanatory reasons - reasons why I believe; and (5) motivating reasons - reasons for which I believe. Reasons of kinds (1) and (2) are propositions and thus not serious candidates to be causal factors. The states corresponding to (3) may or may not be causal elements. Reasons why, in case (4), are always (sustaining) explainers, though not necessarily even prima facie justifiers, since a belief can be causally sustained by factors with no justificatory power (just as an action may be a prima facie duty in virtue of some property it has, yet cease to be so on further enquiry). Motivating reasons, in case (5), by contrast, must possess whatever minimal prima facie justificatory power (if any) a reason must have to be a basis of belief.
Current discussion of the reasons-causes issue has shifted from the question whether reason states can causally explain to the deeper questions whether they can justify without so explaining, and what kind of causal chain non-waywardly connects reason states with the actions and beliefs they do explain. Reliabilists tend to take a belief as justified by a reason only if it is held, at least in part, for that reason - in a sense implying, but not entailed by, being causally based on that reason. Internalists often deny this, perhaps thinking we lack internal access to the relevant causal connections.
The externalism/internalism distinction has been mainly applied to theories of epistemic justification. It has also been applied in a closely related way to accounts of knowledge, and in a rather different way to accounts of belief and thought content.
The internalist requirement of cognitive accessibility can be interpreted in at least two ways: a strong version of internalism would require that the believer actually be aware of the justifying factors in order to be justified, while a weaker version would require only that he be capable of becoming aware of them by focussing his attention appropriately, but without the need for any change of position, new information and so forth. Though the phrase ‘cognitively accessible’ suggests the weak interpretation, the main intuitive motivation for internalism, viz. the idea that epistemic justification requires that the believer actually have in his cognitive possession a reason for thinking that the belief is true, would require the strong interpretation.
Perhaps the clearest example of an internalist position would be a ‘foundationalist view’ according to which foundational beliefs pertain to immediately experienced states of mind, and other beliefs are justified by standing in cognitively accessible logical or inferential relations to such foundational beliefs. Such a view could count as either a strong or a weak version of internalism, depending on whether actual awareness of the justifying elements or only the capacity to become aware of them is required. Similarly, a ‘coherentist view’ could also be internalist, if both the beliefs or other states with which a justificandum belief is required to cohere and the coherence relations themselves are reflectively accessible.
It should be carefully noticed that when internalism is construed in this way, it is neither necessary nor sufficient by itself for internalism that the justifying factors literally are internal mental states of the person in question. Not necessary, because on at least some views, e.g., a direct realist view of perception, something other than a mental state of the believer can be cognitively accessible. Not sufficient, because there are views according to which at least some mental states need not be actual (a strong version) or even possible (a weak version) objects of cognitive awareness. Also, on this way of drawing the distinction, a hybrid view, according to which some of the factors required for justification must be cognitively accessible while others need not and usually will not be, would count as an externalist view. Obviously too, a view that was externalist in relation to a strong version of internalism (by not requiring that the believer actually be aware of all justifying factors) could still be internalist in relation to a weak version (by requiring that he at least be capable of becoming aware of them).
The most prominent recent externalist views have been versions of ‘reliabilism’, whose main requirement for justification is that the belief be produced in a way or through a process that makes it objectively likely that the belief is true. What makes such a view externalist is the absence of any requirement that the person for whom the belief is justified have any sort of cognitive access to the relation of reliability in question. Lacking such access, such a person will, in general, have no reason for thinking that the belief is true or likely to be true, but will, on such an account, nonetheless be epistemically justified in accepting it. Thus, such a view arguably marks a major break from the modern epistemological tradition, stemming from Descartes, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a competing account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.
Two general lines of argument are commonly advanced in favour of justificatory externalism. The first starts from the allegedly commonsensical premiss that knowledge can be non-problematically ascribed to relatively unsophisticated adults, to young children, and even to higher animals. It is then argued that such ascriptions would be untenable on the standard internalist accounts of epistemic justification (assuming that epistemic justification is a necessary condition for knowledge), since the beliefs and inferences involved in such accounts are too complicated and sophisticated to be plausibly ascribed to such subjects. Thus, only an externalist view can make sense of such commonsense ascriptions, and this, on the presupposition that commonsense is correct, constitutes a strong argument in favour of externalism. An internalist may respond by challenging the initial premiss, arguing that such ascriptions of knowledge are exaggerated, while perhaps at the same time claiming that the cognitive situation of at least some of the subjects in question is less restricted than the argument claims. A quite different response would be to reject the assumption that epistemic justification is a necessary condition for knowledge, perhaps by adopting an externalist account of knowledge rather than of justification.
The second general line of argument for externalism points out that internalist views have conspicuously failed to provide defensible non-sceptical solutions to the classical problems of epistemology. In striking contrast, such problems are in general easily solvable on an externalist view. For example, Goldman (1986) offers a one-page solution, in a footnote, of the problem of induction. Thus, if we assume both that the various relevant forms of scepticism are false and that the failure of internalist views so far is unlikely to be remedied in the future, we have good reason to think that some externalist view is true. Obviously, the cogency of this argument depends on the plausibility of the two assumptions just noted. An internalist can reply, first, that it is not obvious that internalist epistemology is doomed to failure, that the explanation for the present lack of success may simply be the extreme difficulty of the problems in question. Secondly, it can be argued that most or even all of the appeal of the assumption that the various forms of scepticism are false depends essentially on the intuitive conviction that we do have reasons in our grasp for thinking that the various beliefs questioned by the sceptic are true, a conviction that the proponent of this argument must of course reject.
The main objection to externalism rests on the intuition that the basic requirement for epistemic justification is that the acceptance of the belief in question be rational or responsible in relation to the cognitive goal of truth, which seems to require, in turn, that the believer actually be aware of a reason for thinking that the belief is true (or at the very least, that such a reason be available to him). Since the satisfaction of an externalist condition is neither necessary nor sufficient for the existence of such a cognitively accessible reason, it is argued, externalism is mistaken as an account of epistemic justification. This general point has been elaborated by appeal to two sorts of putative intuitive counterexamples to externalism. The first of these challenges the necessity of the externalist conditions for epistemic justification by appealing to examples of beliefs which seem intuitively to be justified, but for which the externalist conditions are not satisfied. The standard examples of this sort are cases where beliefs are produced in some very non-standard way, e.g., by a Cartesian demon, but, nonetheless, in such a way that the subjective experience of the believer is indistinguishable from that of someone whose beliefs are produced more normally (Foley, 1985). Cases of this general sort can be constructed in which any of the standard externalist conditions, e.g., that the belief be a result of a reliable process, fail to be satisfied. The intuitive claim is that the believer in such a case is nonetheless epistemically justified, as much so as one whose belief is produced in a more normal way, and hence that externalist accounts of justification must be mistaken.
Perhaps the most interesting reply to this sort of counterexample, on behalf of reliabilism specifically, holds that reliability of a cognitive process is to be assessed in ‘normal’ possible worlds, i.e., in possible worlds that are actually the way our world is commonsensically believed to be, rather than in the world which actually contains the belief being judged. Since the cognitive processes employed in the Cartesian demon case are, we may assume, reliable when assessed in this way, the reliabilist can agree that such beliefs are justified. The obvious further issue is whether or not there is an adequate rationale for this construal of reliabilism, so that the reply is not merely ad hoc (Goldman, 1986).
The second correlative way of elaborating the general objection to justificatory externalism challenges the sufficiency of the various externalist conditions by citing cases where those conditions are satisfied, but where the believers in question seem intuitively not to be justified. Here the most widely discussed examples have to do with possible occult cognitive capacities like clairvoyance. Considering the point in application once again to reliabilism specifically, the claim is that a reliable clairvoyant who has no reason to think that he has such a cognitive power, and, perhaps, even good reasons to the contrary, is not rational or responsible and, hence not epistemically justified in accepting the beliefs that result from his clairvoyance, despite the fact that the reliabilist condition is satisfied.
One sort of response to this latter sort of objection is to ‘bite the bullet’ and insist that such believers are in fact justified, dismissing the seeming intuitions to the contrary as latent internalist prejudice. A more widely adopted response attempts to impose additional conditions, usually of an internalist sort, which will rule out the offending examples while still stopping short of a full internalism. But while there is little doubt that such modified versions of externalism can handle particular cases well enough to avoid clear intuitive implausibility, the issue is whether there will not always be equally problematic cases that they cannot handle, and also whether there is any clear motivation for the additional requirements other than the general internalist view of justification that externalists are committed to rejecting.
A view in this same general vein, one that might be described as a hybrid of internalism and externalism (Swain, 1984; Alston, 1989), holds that epistemic justification requires that there be a justificatory factor that is cognitively accessible to the believer in question (though it need not be actually grasped), thus ruling out, e.g., a pure reliabilism. At the same time, however, though it must be objectively true that beliefs for which such a factor is available are likely to be true, this further fact need not be in any way grasped or cognitively accessible to the believer. In effect, of the two premisses needed to argue that a particular belief is likely to be true, one must be accessible in a way that would satisfy at least weak internalism, while the second can be (and normally will be) purely external. Once again, the internalist will respond that this hybrid view is of no help at all in meeting the objection that the belief is not held in the rational, responsible way that justification intuitively seems to require, for the believer in question, lacking one crucial premiss, still has no reason at all for thinking that his belief is likely to be true.
An alternative to giving an externalist account of epistemic justification, one which may be more defensible while still accommodating many of the same motivating concerns, is to give an externalist account of knowledge directly, without relying on an intermediate account of justification. Such a view will obviously have to reject the justified-true-belief account of knowledge, holding instead that knowledge is true belief which satisfies the chosen externalist condition, e.g., is a result of a reliable process, and perhaps further conditions as well. This makes it possible for such a view to retain an internalist account of epistemic justification, though the centrality of that concept to epistemology would obviously be seriously diminished.
Such an externalist account of knowledge can accommodate the commonsense conviction that animals, young children, and unsophisticated adults possess knowledge, though not the weaker conviction (if such a conviction even exists) that such individuals are epistemically justified in their beliefs. It is also at least less vulnerable to internalist counterexamples of the sorts that were earlier presented, since the intuitions involved there pertain more clearly to justification than to knowledge. What is uncertain is what ultimate philosophical significance the resulting conception of knowledge is supposed to have. In particular, does it have any serious bearing on traditional epistemological problems and on the deepest and most troubling versions of scepticism, which seem in fact to be primarily concerned with justification, rather than knowledge?
A rather different use of the terms ‘internalism’ and ‘externalism’ has to do with the issue of how the content of beliefs and thoughts is determined. According to an internalist view of content, the content of such intentional states depends only on the non-relational, internal properties of the individual’s mind, and not at all on his physical and social environment; according to an externalist view, content is significantly affected by such external factors. Here too, a view that appeals to both internal and external elements is standardly classified as an externalist view.
As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural kind terms, indexicals, and so on, that motivate the views that have come to be known as ‘direct reference’ theories. Such phenomena seem, at least, to show that the belief or thought content that can be properly attributed to a person is dependent on facts about his environment - e.g., whether he is on Earth or Twin Earth, what in fact he is pointing at, the classificatory criteria employed by the experts in his social group, etc. - not just on what is going on internally in his mind (Putnam 1975; Burge 1979).
An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the contents of our beliefs or thoughts ‘from the inside’, simply by reflection. If content is dependent on external factors pertaining to the environment, then knowledge of content should depend on knowledge of these factors - which will not, in general, be available to the person whose belief or thought is in question.
The adoption of an externalist account of mental content would seem to support an externalist account of justification in the following way: If part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content and the status of that content as justifying further beliefs will be similarly inaccessible, thus, contravening the internalist requirement for justification. An internalist must insist that there are no justification relations of these sorts, that only internally accessible content can either be justified or justify anything else, but such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.
There have, on the other hand, been rumours abroad about the death of epistemology. Death notices appeared in such works as ‘Philosophy and the Mirror of Nature’ (1979) by Richard Rorty and Williams’s ‘Groundless Belief’ (1977). Of late the rumours seem to have died down, but whether they will prove to have been exaggerated remains to be seen.
Arguments for the death of epistemology typically pass through three stages. At the first stage, the critic characterizes the task of epistemology by identifying the distinctive sorts of questions it deals with. At the second stage, he tries to isolate the theoretical ideas that make those questions possible. Finally, he tries to undermine those ideas, arguing that, since the ideas in question are less than compelling, there is no pressing need to solve the problems they give rise to. Thus, the death-of-epistemology theorist holds that epistemology may go the way of, say, demonology or judicial astrology. These disciplines centred on questions that were once taken very seriously indeed, but as their presuppositions came to seem dubious, debating their problems came to seem pointless. Furthermore, some theorists hold that philosophy, as a distinctive, professionalized activity, revolves essentially around epistemological questions, so that speculation about the death of epistemology is apt to evolve into speculation about the death of philosophy generally.
Clearly, the death-of-epistemology theorist must hold that there is nothing special about philosophical problems. This is where philosophers who see little sense in talk of the death of epistemology disagree. For them, philosophical problems, including epistemological problems, are distinctive in that they are ‘natural’ or ‘intuitive’: That is to say, they can be posed and understood taking for granted little or nothing in the way of contentious, theoretical ideas. Thus, unlike problems belonging to the particular sciences, they are ‘perennial’ problems that could occur to more or less anyone, anytime and anywhere. But are the standard problems of epistemology really as ‘intuitive’ as all that? Or if they have indeed come to seem commonsensical, is this only because commonsense is a repository for ancient theory? These are the sorts of question that underlie speculation about epistemology’s possible demise.
Because it revolves around questions like this, the death-of-epistemology movement is distinguished by its interest in what we may call ‘theoretical diagnosis’: bringing to light the theoretical background to philosophical problems so as to argue that they cannot survive detachment from it. This explains the movement’s interest in historical-explanatory accounts of the emergence of philosophical problems. If certain problems can be shown not to be perennial, but rather to have emerged at definite points in time, this is strongly suggestive of their dependence on some particular theoretical outlook: And, if an account of that outlook makes intelligible the subsequent development of the discipline centred on those problems, that is evidence for its correctness. Still, the goal of theoretical diagnosis is to establish logical dependence, not just historical correlation. So, although historical investigation into the roots and development of epistemology can provide valuable clues to the ideas that inform its problems, history cannot substitute for problem-analysis.
The death-of-epistemology movement has many sources: In the pragmatists, particularly James and Dewey, and in the writings of Wittgenstein, Quine, Sellars and Austin. But the project of theoretical diagnosis must be distinguished from the ‘therapeutic’ approach to philosophical problems that some names on this list might suggest. The theoretical diagnostician does not claim that the problems he analyses are ‘pseudo-problems’ rooted in ‘conceptual confusion’: Rather, he claims that, while genuine, they are wholly internal to a particular intellectual project whose generally unacknowledged theoretical commitments he aims to isolate and criticize.
Turning to details, the task of epistemology, as these radical critics conceive it, is to determine the nature, scope and limits, indeed the very possibility, of human knowledge. Since epistemology determines the extent to which knowledge is possible, it cannot itself take for granted the results of any particular forms of empirical inquiry. Thus epistemology purports to be a non-empirical discipline, the function of which is to sit in judgement on all particular discursive practices with a view to determining their cognitive status. The epistemologist (or, in the era of epistemology-centred philosophy, we might as well say ‘the philosopher’) is someone professionally equipped to determine what forms of judgement are ‘scientific’, ‘rational’, ‘merely expressive’, and so forth. Epistemology is therefore fundamentally concerned with sceptical questions. Determining the scope and limits of human knowledge is a matter of showing where and when knowledge is possible. But there is a project called ‘showing that knowledge is possible’ only because there are powerful arguments for the view that knowledge is impossible. Yet the scepticism in question is first and foremost radical scepticism: The thesis that, with respect to this or that area of putative knowledge, we are never so much as justified in believing one thing rather than another. The task of epistemology is thus to determine the extent to which it is possible to respond to the challenges posed by radical sceptical arguments, by determining where we can and cannot have justifications for our beliefs. If it turns out that the prospects are more hopeful for some sorts of beliefs than for others, we will have uncovered a difference in epistemological status. The ‘scope and limits’ question and the problem of radical scepticism are two sides of one coin.
This emphasis on scepticism as the fundamental problem of epistemology may strike some philosophers as misguided. Much recent work on the concept of knowledge, particularly that inspired by Gettier’s demonstration of the insufficiency of the standard ‘justified true belief’ analysis, has been carried on independently of any immediate concern with scepticism. I think it must be admitted that philosophers who envisage the death of epistemology tend to assume a somewhat dismissive attitude to work of this kind. In part, this is because they tend to doubt that there are precise necessary and sufficient conditions for the application of any concept. But the determining factor is their thought that only the centrality of the problem of radical scepticism can explain the importance for philosophy that, at least in the modern period, epistemology has taken on. Since radical scepticism concerns the very possibility of justification, for philosophers who put this problem first, questions about what special sorts of justification yield knowledge, or about whether knowledge might be explained in non-justificational terms, are of secondary importance. Whatever importance they have will have to derive from their connections, if any, with sceptical problems.
In light of this, the fundamental question for death-of-epistemology theorists becomes: ‘What are the essential theoretical presuppositions of arguments for radical scepticism?’ Different theorists suggest different answers. Rorty traces scepticism to the ‘representationalist’ conception of belief and its close ally, the correspondence theory of truth. According to Rorty, if we think of beliefs as ‘representations’ that aim to correspond with mind-independent ‘reality’ (mind as the mirror of nature), we will always face insuperable problems when we try to assure ourselves that the proper alignment has been achieved. In Rorty’s view, by switching to a more ‘pragmatic’ or ‘behaviouristic’ conception of beliefs as devised for coping with particular, concrete problems, we can put scepticism, hence the philosophical discipline that revolves around it, behind us once and for all.
Other theorists stress epistemological foundationalism as the essential background to traditional sceptical problems. There are reasons for preferring this approach. Arguments for epistemological conclusions require at least one epistemological premiss. It is, therefore, not easy to see how metaphysical or semantic doctrines of the sort emphasized by Rorty could, by themselves, generate epistemological problems, such as radical scepticism. On the other hand, the case for scepticism’s essential dependence on foundationalist preconceptions is by no means easy to make. It has even been argued that this approach ‘gets things almost entirely upside down’. The thought here is that foundationalism is an attempt to save knowledge from the sceptic, and is therefore a reaction to, rather than a presupposition of, the deepest and most intuitive arguments for scepticism. Challenges like this certainly need to be met by death-of-epistemology theorists, who have sometimes been too ready to take as obvious scepticism’s dependence on foundationalist, or other theoretical, ideas. This reflects, perhaps, the dangers of taking one’s cue from historical accounts of the development of sceptical problems. It may be that, in the heyday of foundationalism, sceptical arguments were typically presented within a foundationalist context. But the crucial question is not whether some sceptical arguments do take foundationalism for granted but whether there are any that do not. This issue - indeed, the general issue of whether scepticism is a truly intuitive problem - can only be resolved by detailed analysis of the possibilities and resources of sceptical argumentation.
Another question concerns why anti-foundationalism should lead to the death of epistemology rather than to a non-foundational, hence ‘coherentist’, approach to knowledge and justification. It is true that death-of-epistemology theorists often characterize justification in terms of ‘coherence’, but frequently only to make a negative point. According to foundationalism, our beliefs fall naturally into broad categories that reflect objective, context-independent relations of epistemological priority. Thus, for example, experiential beliefs are thought to be naturally or intrinsically prior to beliefs about the natural world. This relation of epistemic priority is, so to say, just a fact. Foundationalism is therefore committed to a strong form of ‘realism’ about epistemological facts and relations, call it ‘epistemological realism’. For some anti-foundationalists, talk of coherence is just a way of rejecting this picture in favour of the view that justification is a matter of accommodating new beliefs to relevant background beliefs in contextually appropriate ways, there being no context-independent, purely epistemological restrictions on what sorts of beliefs can confer evidence on what others. If this is all that is meant, talk of coherence does not point to a theory of justification so much as to the deflationary view that justification is not the sort of thing we should expect to have theories about. There is, however, a stronger sense of ‘coherence’ that does yield a genuine theory. This is the radically holistic account of justification, according to which inference depends on assessing our entire belief-system, or ‘total view’, in the light of abstract criteria of ‘coherence’. But it is questionable whether this view, which seems to demand privileged knowledge of what we believe, is an alternative to foundationalism or just a variant form.
Accordingly, it is possible that a truly uncompromising anti-foundationalism will prove as hostile to traditional coherence theories as to standard foundationalist positions, reinforcing the connection between the rejection of foundationalism and the death of epistemology.
The death-of-epistemology movement has some affinities with the call for a ‘naturalized’ approach to knowledge. Quine argues that the time has come for us to abandon such traditional projects as refuting the sceptic by showing how empirical knowledge can be rationally reconstructed on a sensory basis, hence justifying empirical knowledge at large. We should concentrate instead on the more tractable problem of explaining how we ‘project our physics from our data’, i.e., how retinal stimulations cause us to respond with increasingly complex sentences about events in our environment. Epistemology should be transformed into a branch of natural science, specifically experimental psychology. But though Quine presents this as a suggestion about how to continue doing epistemology, to philosophers who think that the traditional questions still lack satisfactory answers, it looks more like abandoning epistemology in favour of another pursuit entirely. It is significant, therefore, that in subsequent writings Quine has been less dismissive of sceptical concerns. But if this is how ‘naturalized’ epistemology develops, then, for the death-of-epistemology theorist, its claims will open up a new field for theoretical diagnosis.
Even so, the sceptical hypothesis is designed to impugn our knowledge of empirical propositions by showing that our experience is not a reliable source of beliefs. Thus, one form of traditional scepticism developed by the Pyrrhonists, namely that reason is incapable of producing knowledge, is ignored by contemporary scepticism. Apparently, the sceptical hypothesis can be employed in two distinct ways: It can be used to show that our beliefs fall short of being certain, and it can be used to show that they are not even justified. In fact, the first use depends upon the second.
Letting ‘p’ stand for any ordinary belief (e.g., there is a table before me), the first type of argument employing the sceptical hypothesis can be stated as follows:
1. If ‘S’ knows that ‘p’, then ‘p’ is certain.
2. The sceptical hypothesis shows that ‘p’ is not certain.
Therefore, ‘S’ does not know that ‘p’.
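The logical form of this argument is a modus tollens, and its validity can be checked mechanically. The sketch below is an illustrative addition (not part of the traditional apparatus): it enumerates every truth assignment to ‘S knows that p’ (K) and ‘p is certain’ (C) and confirms that no assignment makes both premisses true while the conclusion fails.

```python
from itertools import product

# K: "S knows that p"; C: "p is certain".
counterexamples = [
    (K, C)
    for K, C in product([True, False], repeat=2)
    if ((not K) or C)   # premiss 1: if S knows that p, then p is certain
    and (not C)         # premiss 2: p is not certain
    and K               # conclusion "S does not know that p" fails
]

print(counterexamples)  # [] - no counterexample, so the form is valid
```

The empty result shows that whoever grants both premisses must grant the conclusion; the philosophical work therefore lies entirely in assessing the premisses, not the inference.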
No argument for the first premiss is needed, because this first form of the argument employing the sceptical hypothesis is only concerned with cases in which certainty is thought to be a necessary condition of knowledge. Yet issues surrounding certainty are inextricably connected with those concerning scepticism, for many sceptics have traditionally held that knowledge requires certainty, and, of course, they claim that certain knowledge is not possible. In part in order to avoid scepticism, the anti-sceptics have generally held that knowledge does not require certainty. Among them are the pragmatists, according to whom the meaning of a concept is to be sought in the experimental or practical consequences of its application: The epistemology of pragmatism is typically anti-Cartesian, fallibilistic and naturalistic, and in some versions it is also realistic, in others not. In fact, Wittgenstein (1972) claims, roughly, that propositions which are known are always subject to challenge, whereas, when we say that ‘p’ is certain, we are foreclosing challenges to ‘p’. As he puts it, ‘knowledge’ and ‘certainty’ belong to different categories (Wittgenstein, 1969). The second form of the argument, in which justification rather than certainty is taken as the necessary condition of knowledge, explicitly employs the premiss that ‘S’ is not justified in denying the sceptical hypothesis. Its first premiss employs a version of the so-called ‘transmissibility principle’. The standard analysis of propositional knowledge, suggested by Plato and Kant among others and made famous by Gettier’s criticism of it, holds that knowledge has three individually necessary and jointly sufficient conditions - justification, truth and belief - the ‘tripartite definition of knowledge’: Knowledge is justified true belief.
The belief condition requires that anyone who knows that ‘p’ believe that ‘p’, the truth condition requires that any known proposition be true, and the justification condition requires that any known proposition be adequately justified, warranted or evidentially supported.
The second premiss of the argument relies on a Cartesian notion of doubt, which is roughly that a proposition ‘p’ is doubtful for ‘S’ if there is a proposition that (1) ‘S’ is not justified in denying and (2) would, if added to S’s beliefs, lower the warrant of ‘p’. It seems clear that certainty is a property that can be ascribed to either a person or a belief. On a Cartesian characterization of the concept of absolute certainty, a proposition ‘p’ is certain for ‘S’ just in case ‘S’ is warranted in believing that ‘p’ and there are absolutely no grounds whatsoever for doubting it. Now, one could characterize those grounds in a variety of ways (Firth, 1976; Miller, 1978; Klein, 1981, 1990). For example, a ground ‘g’ for making ‘p’ doubtful for ‘S’ could be such that (a) ‘S’ is not warranted in denying ‘g’ and:
(B1) If ‘g’ is added to S’s beliefs, the negation of ‘p’ is warranted: Or,
(B2) If ‘g’ is added to S’s beliefs, ‘p’ is no longer warranted: Or,
(B3) If ‘g’ is added to S’s beliefs, ‘p’ becomes less warranted (even if only slightly so).
Warrant might also be increased rather than just ‘passed on’. The coherence of probable propositions with other probable propositions might (defeasibly) make them all the more evident (Firth, 1964).
Nonetheless, if belief is a necessary condition of knowledge, then, since we can believe a proposition without believing all of the propositions entailed by it, it is clear that the unrestricted principle is false. Similarly, the principle fails for other uninteresting reasons. For example, if the entailment is a very complex one, ‘S’ may not be justified in believing what is entailed because ‘S’ does not recognize the entailment. In addition, ‘S’ may recognize the entailment but believe the entailed proposition for silly reasons. The interesting question is this: If ‘S’ is justified in believing (or knows) that ‘p’, and ‘p’ obviously (to ‘S’) entails ‘q’, and ‘S’ believes ‘q’ on the basis of believing ‘p’, is ‘S’ justified in believing (or in a position to know) that ‘q’?
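The uninteresting failure of the unrestricted principle can be put concretely. In the toy sketch below (the propositions and the name ‘Alfie’ are hypothetical illustrations, not from the text), the believer accepts two premisses but has simply not drawn their obvious consequence, so the belief set is not closed under entailment.

```python
# S's explicit beliefs: both premisses are accepted...
beliefs = {"all ravens are black", "Alfie is a raven"}

# ...but this consequence, though obviously entailed, has not been inferred.
consequence = "Alfie is black"

# Belief is not closed under entailment: S believes the premisses
# without (yet) believing what they jointly entail.
print(consequence in beliefs)  # False
```

This is why the interesting version of the principle restricts itself to entailments that are recognized and to beliefs held on the basis of that recognition.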
Even so, Quine argued that the classical foundationalist project was a failure, both in its details and in its conception. On the classical view, an epistemological theory would tell us how we ought to arrive at our beliefs; only by developing such a theory and then applying it could we reasonably come to believe anything about the world around us. Thus, on this classical view, an epistemological theory must be developed independently of, and prior to, any scientific theorizing: Proper scientific theorizing could only occur after such a theory was developed and deployed. This was Descartes’ view of how an epistemological theory ought to proceed; it was what he called ‘First Philosophy’. Moreover, it is this approach to epistemological issues that motivated not only foundationalism, but virtually all epistemological theorizing for the next 300 years.
Quine urged a rejection of this approach to epistemological questions. Epistemology, on Quine’s view, is a branch of natural science. It studies the relationship between human beings and their environment; in particular, it asks how it is that human beings can arrive at beliefs about the world around them on the basis of sensory stimulation, the only source of belief there is. Thus Quine commented that the meagre input [sensory stimulation] and the torrential output [our total science] form ‘a relation that we are prompted to study for somewhat the same reasons that always prompted epistemology: Namely, in order to see how evidence relates to theory, and in what ways one’s theory of nature transcends any available evidence’ (Quine, 1969). Quine spoke of this project of study as ‘epistemology naturalized’.
One important difference between this approach and more traditional ones becomes plain when the two are applied to sceptical questions. On the classical view, if we are to explain how knowledge is possible, it is illegitimate to make use of the resources of science: This would simply beg the question against the sceptic by making use of the very knowledge which he calls into question. Thus, Descartes’ attempt to answer the sceptic begins by rejecting all those beliefs about which any doubt is possible. Descartes must respond to the sceptic from a starting place which includes no beliefs at all. Naturalistic epistemologists, however, understand the demand to explain the possibility of knowledge differently. As Quine argues, sceptical questions arise from within science. It is precisely our success in understanding the world, and thus in seeing that appearance and reality may differ, that raises the sceptical question in the first place. We may thus legitimately use the resources of science to answer the question which science itself has raised. The question about how knowledge is possible should thus be construed as an empirical question: It is a question about how creatures such as we are (given what our best current scientific theories tell us we are like) may come to have accurate beliefs about the world (given what our best current scientific theories tell us the world is like). Quine suggests that the Darwinian account of the origin of species gives a very general explanation of why it is that we should be well adapted to getting true beliefs about our environment and, although Quine himself does not suggest it, investigations in the sociology of knowledge are obviously relevant as well (Stich, 1990).
This approach to sceptical questions clearly makes them quite tractable, and its proponents see this, understandably, as an important advantage of the naturalistic approach. It is in part for this reason that current work in psychology and sociology is under such close scrutiny by many epistemologists. By the same token, the detractors of the naturalistic approach argue that this way of dealing with sceptical questions simply bypasses the very question which philosophers have long dealt with. Far from answering the traditional sceptical question, it is argued, the naturalistic approach merely changes the topic (e.g., Stroud, 1981). Debates between naturalistic epistemologists and their critics thus frequently focus on whether this new way of doing epistemology adequately answers, transforms or simply ignores the questions which others see as central to epistemological inquiry. Some see the naturalistic approach as an attempt to abandon the philosophical study of knowledge.
Precisely what the Quinean project amounts to is also a subject of some controversy. Both those who see themselves as opponents of naturalistic epistemology and those who are eager to sign onto the project frequently disagree about what the project is. The essay of Quine’s which prompted this controversy (Quine, 1969) leaves a great deal of room for interpretation.
At the centre of this controversy is the issue of the normative dimension of epistemological inquiry. Philosophers differ regarding the sense, if any, in which epistemology is normative (roughly, valuational). But what precisely is at stake in this controversy is no clearer than the problematic fact/value distinction itself. Must epistemologists as such make judgements of value or epistemic responsibility? If epistemology is naturalistic, then epistemic principles simply articulate under what conditions - say, appropriate perceptual stimulation - a belief is justified, or constitutes knowledge: They are like standards of, e.g., resilience for bridges. It is not obvious, however, that the appropriate standards can be established without independent judgements that, say, a certain kind of evidence is good enough for justified belief (or knowledge). The most plausible view may be that justification is like intrinsic goodness: Though it supervenes on natural properties, it cannot be analysed wholly in factual statements.
Perhaps the central role which epistemological theories have traditionally played is normative. Such theories were meant not merely to describe the various processes of belief acquisition and retention, but to tell us which of these processes we ought to be using. By describing his preferred epistemological approach as a ‘chapter of psychology and hence of natural science’ (Quine, 1969), Quine has encouraged many to interpret his view as a rejection of the normative dimension of epistemological theorizing (Goldman, 1986; Kim, 1988). Quine has, however, since repudiated this reading: ‘Naturalization of epistemology does not jettison the normative and settle for the indiscriminate description of ongoing procedure’ (Quine, 1986 & 1999).
Unfortunately, matters are not quite as simple as this quotation makes things seem. Quine goes on to say: ‘For me, normative epistemology is a branch of engineering. It is the technology of truth-seeking . . . There is no question of ultimate value, as in morals: It is a matter of efficacy for an ulterior end, truth or prediction. The normative, as elsewhere in engineering, becomes descriptive when the terminal parameter is expressed’ (Quine, 1986). But this suggestion, brief as it is, is compatible with a number of different approaches.
On one approach, developed by Alvin Goldman (Goldman, 1968), knowledge is just true belief which is produced by a reliable process, that is, a process which tends to produce true beliefs. This is the view that a belief acquires favourable epistemic status by having some kind of reliable linkage to the truth. Variations of this view have been advanced for both knowledge and justified belief. The first formulation of a reliability account of knowing appeared in a note by F.P. Ramsey (1931), who said that a belief is knowledge if it is true, certain and obtained by a reliable process. P. Unger (1968) suggested that ‘S’ knows that ‘p’ just in case it is not at all accidental that ‘S’ is right about its being the case that ‘p’. D.M. Armstrong (1973) drew an analogy between a thermometer that reliably indicates the temperature and a belief that reliably indicates the truth. Armstrong said that a non-inferential belief qualifies as knowledge if the belief has properties that are nomically sufficient for its truth, i.e., guarantee its truth according to the laws of nature.
Yet the ‘technological’ question arises in asking which processes tend to produce true beliefs. Questions of this sort are clearly part of natural science, but there is also the account of knowledge itself. On Goldman’s view, the claim that knowledge is reliably produced true belief is arrived at independently of, and prior to, scientific investigation: It is a product of conceptual analysis. Given Quine’s rejection of appeals to meaning, the analytic-synthetic distinction, and thus the very enterprise of conceptual analysis, this position is not open to him. Nevertheless, it is for many an attractive way of allowing scientific theorizing to play a larger role in epistemology than it traditionally has, and thus one important approach which might reasonably be thought of as a naturalistic epistemology.
Those who eschew conceptual analysis will need another way of explaining how the normative dimension of epistemology arises within the context of empirical inquiry. Quine says that this normative dimension is not mysterious once we recognize that it ‘becomes descriptive when the terminal parameter is expressed’. But why is it conduciveness to truth, rather than something else, such as survival, that is at issue here? Why is it that truth counts as the goal for which we should aim? Is this merely a sociological point, that people do seem to have this goal? Or is conduciveness to truth itself instrumental to other goals in some way that makes it of special pragmatic importance? It is not that Quine has no way to answer these questions within the confines of the naturalistic position he defines; rather, there seem to be many different options open, and these need further exploration and elaboration.
A number of attempts to fill in the naturalistic account draw a close connection between how people actually reason and how they ought to reason, thereby attempting to illuminate the relation between the normative and the descriptive. One view has it that the two are identical (Kornblith, 1985; Sober, 1978): With respect to a given subject-matter, ‘psychologism’ is the theory that the subject-matter in question can be reduced to, or explained in terms of, psychological phenomena - mental acts, events, states, dispositions and the like. But different criteria of legitimacy are normally considered appropriate for the different types of reasoning, or roles for the faculty of reason, that seem to be commonly recognized in Western culture.
Nonetheless, when modern science gave new impetus to affirmative theorizing about rationality, it was probably, at least in part, because of the important part played by mathematics in the new mechanics of Kepler, Galileo and Newton that some philosophers thought it plausible to suppose that rationality was just as much the touchstone of scientific truth as of mathematical truth. At any rate, that supposition seems to underlie the epistemologies of Descartes and Spinoza, for example, in which observation and experiment are assigned relatively little importance compared with the role of reason. Correspondingly, it was widely held that knowledge of right and wrong is knowledge of necessary truths that are to be discovered by rational intuition, in much the same way as it was believed that the fundamental principles of arithmetic and geometry are discovered. For example, Richard Price argued that a rational agent ‘void of all moral judgement . . . is not possible to be imagined’ (1797).
But in modern philosophy the most influential sceptical challenge to everyday beliefs about rationality was originated by Hume. Hume argued the impossibility of reasoning from the past to the future, or from knowledge about some instances of a particular kind of situation to knowledge about all instances of that kind. There would be nothing contradictory, he claimed, in supposing both that the sun had always risen in the past and that it would not rise tomorrow. In effect, therefore, Hume assumed that the only valid standards of cognitive rationality were, first, the rationality consisting in conformity with the laws of deductive logic and that exhibited by correct mathematical calculation, and, second, the rationality of reasoning that depends for its correctness solely on the meanings of words belonging neither to our logical nor to our mathematical vocabulary: Thus, it would be rational to infer that, if two people are first cousins of one another, they share at least one grandparent. A further form of rationality is exhibited by applications of induction that conform to appropriate criteria, as in an inference from experimental data to a general theory that explains them. For example, a hypothesis about the cause of a phenomenon needs to be tested in a relevant variety of controlled conditions in order to eliminate other possible explanations of the phenomenon, and it would be irrational to judge the hypothesis to be well-supported unless it had survived a suitable set of such tests.
Induction was not a rational procedure, on Hume’s view, because it could not be reduced to the exercise of reason in any of these roles.
Hume’s argument about induction is often criticized for begging the question, on the grounds that induction should be held to be a valid process in its own right, with its own criteria of good and bad reasoning. But this response to Hume seems just to beg the question in the opposite direction. What is needed instead, perhaps, is to demonstrate a continuity between inductive and deductive reasoning, with the latter exhibited as a limiting case of the former (Cohen, 1989). Even so, Hume’s is not the only challenge that defenders of inductive rationality need to rebuff. Popper has also denied the possibility of inductive reasoning, and much-discussed paradoxes about inductive reasoning have been proposed by Goodman and Hempel.
Hempel’s study of confirmation (1945) introduced a paradox that raises fundamental questions about what counts as confirming evidence for a universal hypothesis. To generate the paradox, three intuitive principles are invoked:
1. Nicod’s Principle (after Jean Nicod, 1930): Instances of A’s that are B’s provide confirming evidence for the universal hypothesis that all A’s are B’s, while instances of A’s that are non-B’s provide disconfirming evidence. For example, instances of ravens that are black constitute confirming evidence for the hypothesis ‘All ravens are black’, while instances of non-black ravens are disconfirming.
2. Equivalence Principle: If e is confirming evidence for hypothesis h1, and h1 is logically equivalent to hypothesis h2, then e is confirming evidence for h2. For example, if instances of ravens that are black are confirming evidence that all ravens are black, they are also confirming evidence that all non-black things are non-ravens, since the latter hypothesis is logically equivalent to the former.
3. A Principle of Deductive Logic: A sentence of the form ‘All A’s are B’s’ is logically equivalent to one of the form ‘All non-B’s are non-A’s’.
Using these principles, the paradox is generated by supposing that all the non-black things so far observed have been non-ravens. These might include white shoes, green leaves and red apples. By Nicod’s principle, this is confirming evidence for the hypothesis ‘All non-black things are non-ravens’. (In the schematic version of Nicod’s principle, let the A’s be non-black things and the B’s be non-ravens.) But by principle (3) of deductive logic, the hypothesis ‘All non-black things are non-ravens’ is logically equivalent to ‘All ravens are black’. Therefore, by the equivalence principle (2), the fact that all the non-black things so far observed have been non-ravens is confirming evidence for the hypothesis that all ravens are black. That is, instances of white shoes, green leaves and red apples count as evidence for this hypothesis, which seems absurd. This is Hempel’s ravens paradox.
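The derivation of the paradox can be made mechanical. The following is a minimal sketch in Python; the toy representation of objects as (kind, colour) pairs and the helper `confirms` are illustrative inventions, not any standard formalism, but they encode Nicod’s principle exactly as stated above.

```python
# Toy model of Hempel's ravens paradox. Objects are (kind, colour) pairs.

def confirms(instance, antecedent, consequent):
    """Nicod's principle: an instance confirms 'All A's are B's'
    iff it satisfies A and also satisfies B."""
    return antecedent(instance) and consequent(instance)

is_raven = lambda x: x[0] == "raven"
is_black = lambda x: x[1] == "black"
is_non_raven = lambda x: not is_raven(x)
is_non_black = lambda x: not is_black(x)

white_shoe = ("shoe", "white")
black_raven = ("raven", "black")

# Direct confirmation of 'All ravens are black':
print(confirms(black_raven, is_raven, is_black))         # True
print(confirms(white_shoe, is_raven, is_black))          # False

# But the white shoe DOES confirm the contrapositive,
# 'All non-black things are non-ravens':
print(confirms(white_shoe, is_non_black, is_non_raven))  # True
```

Since the contrapositive is logically equivalent to ‘All ravens are black’, the equivalence principle then carries the white shoe’s confirmation over to the original hypothesis, which is the absurd-seeming conclusion.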
Hume also argued, as against philosophers like Richard Price (1787), that it was impossible for any reasoning to demonstrate the moral rightness or wrongness of a particular action. There would be nothing self-contradictory in preferring the destruction of the whole world to the scratching of one’s little finger. The only role for reason in decision making was to determine the means to desired ends. Nonetheless, Price’s kind of ethical rationalism has been revived in more recent times by W.D. Ross (1930) and others. Perhaps Hume’s argument was based on question-begging assumptions, and it may be more cogent to point out that ethical rationalism implies a unity of moral standards that is not found to exist in the real world.
Probabilistic reasoning is another area in which the possibility of attaining fully rational results has sometimes been queried, as in the lottery paradox. And serious doubts have also been raised (Sen, 1982) about the concept of a rational agent that is required by classical models of economic behaviour. No doubt a successful piece of embezzlement may in certain circumstances further the purposes of an accountant, and need not be an irrational action. But is it entitled to the accolade of rationality? And how should its immorality be weighed against its utility in the scales of practical reasoning? Or is honesty always the rationally preferable policy?
These philosophical challenges to rationality have been directed against the very possibility of there existing valid standards of reasoning in this or that area of enquiry. They have thus been concerned with the integrity of the concept of rationality, rather than with the extent to which that concept is in fact instantiated by the actual thoughts, procedures and actions of human beings. The latter issue seems at first sight to be a matter for psychological, rather than philosophical, research. Some of this research will no doubt be concerned with the circumstances under which people fail to perform in accordance with valid principles that they have nevertheless developed or adopted, as when they make occasional mistakes in their arithmetical calculations. But there also seems to be room for research into the extent to which various categories of the population have developed or adopted such principles at all. Some of this would be research into the success with which the relevant principles have been taught, as when students are educated in formal logic or statistical theory. Some would be research into the extent to which those who have not had any relevant education are, or are not, prone to systematic patterns of error in their reasoning. And it is this last type of research that has claimed results with ‘bleak implications for human rationality’ (Nisbett and Borgida, 1975).
One robust result is the following (Wason, 1966). Logically untutored subjects are presented with four cards showing, respectively, ‘A’, ‘D’, ‘4’ and ‘7’, and they know that every card has a letter on one side and a number on the other. They are then given the rule ‘If a card has a vowel on one side, it has an even number on the other’, and told that their task is to say which of the cards they need to turn over in order to find out whether the rule is true or false. The most frequent answers are ‘A and 4’ and ‘Only A’, which are both wrong, while the right answer, ‘A and 7’, is given spontaneously by very few subjects. Wason interpreted this result as demonstrating that most subjects have a systematic bias towards seeking verification rather than falsification in testing the rule, and he regarded this bias as a fallacy of the same kind as Popper claimed to have discerned in the belief that induction could be a valid form of human reasoning.
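The logic of the correct answer can be sketched mechanically: a visible face needs to be checked only if what is hidden on the other side could falsify the rule. The following Python sketch (the function name `must_turn` and the card encoding are illustrative) encodes that reasoning.

```python
# Wason selection task: rule is 'if a card has a vowel on one side,
# it has an even number on the other'. A face requires turning only
# if the hidden side could reveal a counterexample.

def must_turn(face):
    if face.isalpha():
        # A visible vowel could hide an odd number -> must check.
        # A visible consonant cannot falsify the rule either way.
        return face in "AEIOU"
    else:
        # A visible even number cannot falsify the rule (it says nothing
        # about what lies behind even numbers). A visible odd number
        # could hide a vowel -> must check.
        return int(face) % 2 == 1

cards = ["A", "D", "4", "7"]
print([c for c in cards if must_turn(c)])  # ['A', '7']
```

The popular answer ‘A and 4’ checks the card that could only verify the rule (‘4’) while ignoring the one that could falsify it (‘7’), which is exactly the verification bias Wason diagnosed.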
Some of these results concern probabilistic reasoning. For example, in an experiment (Kahneman and Tversky, 1972) on statistically untutored students, the subjects are told that in a certain town blue and green cabs operate in a ratio of 85 to 15, respectively. A witness identifies the cab in an accident as green, and the court is told that in the relevant circumstances he says that a cab is blue when it is blue, or that a cab is green when it is green, in 80 per cent of cases. When asked the probability that the cab involved in the accident was blue, subjects tend to say 20 per cent. The experimenters have claimed that this robust result shows the prevalence of a systematic fallacy in ordinary people’s probabilistic reasoning, namely a failure to pay attention to prior probabilities, and it has been argued (Saks and Kidd, 1980) that the existence of several such results demonstrates the inherent unsoundness of mandating lay juries to decide issues of fact in a court of law.
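The answer the experimenters regard as correct comes from applying Bayes’ theorem to the figures in the text. The following sketch works through the calculation (the variable names are illustrative); the subjects’ typical answer of 20 per cent is simply the witness’s error rate, with the 85:15 prior ignored.

```python
# Bayesian analysis of the cab problem, using the figures in the text:
# 85% of cabs are blue, 15% green; the witness is right 80% of the
# time; he identifies the cab as green.

p_blue, p_green = 0.85, 0.15
p_says_green_given_green = 0.80   # witness accuracy
p_says_green_given_blue = 0.20    # witness error rate

# Bayes' theorem: P(blue | witness says green)
numerator = p_says_green_given_blue * p_blue
denominator = numerator + p_says_green_given_green * p_green
p_blue_given_says_green = numerator / denominator

print(round(p_blue_given_says_green, 2))  # 0.59
```

So once the prior probabilities are taken into account, the probability that the cab was blue is roughly 59 per cent, not the 20 per cent that subjects tend to report.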
However, it is by no means clear that these psychological experimenters have interpreted their data correctly, or that the implications for human rationality are as bleak as they suppose (Cohen, 1981, 1982). For example, it might be argued that Wason’s experiment merely shows the difficulty that people have in applying the familiar rule of contraposition to artificial conditional relationships that lack any basis in causality or in any explanatory system. And as for the cabs, it might well be disputed whether the size of the fleet to which a cab belongs should be accepted as determining a prior probability that can count against a posterior probability founded on the causal relation between a witness’s mental powers and his courtroom testimony. To count against such a posterior probability, one would need a prior one that was also rooted in causality, such as the ratio in which cabs from the blue fleet and cabs from the green fleet (which may have different policies about vehicle maintenance and driver training) are involved in accidents of the kind in question. In other words, the subjects may interpret the question as concerning causal propensities, not probabilities conceived as relative frequencies that may be accidental. In any case, it is always necessary to consider whether the dominant responses given by subjects in such experiments should be taken, on the assumption that they are correct, as indicating how the task is generally understood - instead of as indicating, on the assumption that the task is understood exactly in the way intended, what errors are being made.
Finally, there is an obvious paradox in supposing that untutored human intuitions may be systematically erroneous over a wide range of issues in human reasoning. On what non-circular basis other than such intuitions can philosophers ultimately found their theories about the correct norms of deductive or probabilistic reasoning? No doubt an occasional intuition may have to be sacrificed in order to construct an adequately comprehensive system of norms. But empirical data seem in principle incapable of showing that the untutored human mind is deficient in rationality, since we need to assume the existence of this rationality - in most situations - in order to provide a basis for those normative theories in terms of which we feel confident in criticizing occasional errors of performance in arithmetical calculations, and so forth.
There has been a steady stream of two-way traffic between epistemology and psychology. Psychologists have relied on epistemological doctrines and arguments to support their psychological views; more recently, epistemologists have been drawn to psychology in an attempt to solve their own problems.
It is noteworthy, nonetheless, that many epistemological disagreements within psychology pertain in some way or other to disputes about ‘behaviourism’. The epistemological argument most widely used by behaviourists turns on the alleged unobservability of mental events or states. If cognitions are unobservable in principle, the argument runs, we have no warrant for believing that they exist and, hence, no warrant for accepting cognitive explanations. The same argument applies to non-cognitive mental states, such as sensations or emotions. Opponents of behaviourism sometimes reply that mental states can be observed: Each of us, through ‘introspection’, can observe at least some mental states, namely our own (at least those of which we are conscious). To this point behaviourists have made several replies. Some (e.g., Zuriff, 1985) argue that introspection is too unreliable for introspective reports to qualify as firm scientific evidence. Others have replied that, even if introspection is a form of observation, it is private, and that this fact alone renders introspective data unsuitable as evidence in a science of behaviour. A more radical reply, advanced by certain philosophers, is that introspection is not a form of observation at all, but rather a kind of theorizing. More precisely, when we report, on the basis of introspection, that we have a painful sensation, a thought, a mental image, and so forth, we are theorizing about what is present. On the resulting view, the fact that we introspect does not show that any mental states are observable.
It is only after long experience that one learns visually to identify the things in one’s familiar surroundings, and the expert does not typically go through a process of conscious inference in exercising this perceptual identifiability. The perceptual knowledge of the expert is still dependent on learning, of course - even an expert could not always see what kind of flower it is - but the expert has developed identificatory skills that no longer require the sort of conscious inferential processes that characterize a beginner’s efforts. That much of our perceptual knowledge - even (sometimes) the most indirect and derived forms of it - feels immediate does not mean that learning is not required to know in this way. On the representationalist view, sensory facts are, so to speak, right up against the mind’s eye, and one cannot be mistaken about them, for these facts are, in reality, facts about the way things appear to be. Normal perception of external conditions then turns out to be (always) a type of indirect perception: One sees that the appearances (of the tomato) are thus-and-so and infers (this is typically said to be automatic and unconscious), on the basis of certain background assumptions (e.g., that there typically is a tomato in front of one when one has experiences of this sort), that there is a tomato in front of one. All knowledge of an objective reality, then, even what common sense regards as the most direct perceptual knowledge, is based on a still more direct knowledge of the appearances.
For the representationalist, then, perceptual knowledge of our physical surroundings is always theory-loaded and indirect. Such perception is ‘loaded’ with the theory that there is some regular, uniform correlation between the way things appear (known in a perceptually direct way) and the way things actually are (known, if known at all, in a perceptually indirect way).
Another view, direct realism, refuses to restrict direct perceptual knowledge to an inner world of subjective experience. Though the direct realist is willing to concede that much of our knowledge of the physical world is indirect, however direct and immediate it may sometimes feel, some perceptual knowledge of physical reality is direct. What makes it direct is that such knowledge is not based on, nor in any way dependent on, other knowledge and belief. The justification needed for the knowledge is right there in the experience itself.
To understand the way this is supposed to work, consider an ordinary example in which ‘S’ identifies a banana (learns that it is a banana) by noting its shape and colour - perhaps even tasting and smelling it (to make sure it is not wax). In this case the perceptual knowledge that it is a banana is (the direct realist admits) indirect, dependent on S’s perceptual knowledge of its shape, colour, smell and taste. ‘S’ learns that it is a banana by seeing that it is yellow, banana-shaped, and so on. Nonetheless, S’s perception of the banana’s colour and shape is not indirect. ‘S’ does not see that the object is yellow by seeing (knowing, believing) anything more basic - either about the banana or anything else, e.g., his own sensations of the banana. ‘S’ has simply learned to identify such features, and what ‘S’ learned to do is not to make an inference, even an unconscious inference, from other things he believes. What ‘S’ acquired was a cognitive skill, a disposition to believe of yellow objects he saw that they were yellow. The exercise of this skill does not require, and in no way depends on, the having of any other beliefs. S’s identificatory success will depend on his operating in certain special conditions, of course. ‘S’ will not, perhaps, be able to visually identify yellow objects in drastically reduced lighting, at odd viewing angles, or when afflicted with certain nervous disorders. But these facts about when ‘S’ can see that something is yellow do not show that his perceptual knowledge (that ‘a’ is yellow) in any way depends on a belief (let alone knowledge) that he is in such special conditions. They merely show that direct perceptual knowledge is the exercise of a skill, an identificatory skill that, like any skill, requires certain conditions for its successful exercise. An expert basketball player cannot shoot accurately in a hurricane. He needs normal conditions to do what he has learned to do.
So also with individuals who have developed perceptual (cognitive) skills. They need normal conditions to do what they have learned to do - to see, for example, that something is yellow. But they don’t, any more than the basketball player, have to know they are in those conditions in order to do what being in those conditions enables them to do.
This means, of course, that for the direct realist direct perceptual knowledge is fallible and corrigible. Whether ‘S’ sees that ‘a’ is ‘F’ depends on his being caused to believe that ‘a’ is ‘F’ in conditions that are appropriate for an exercise of that cognitive skill. If conditions are right, then ‘S’ sees (hence, knows) that ‘a’ is ‘F’. If they aren’t, he doesn’t. Whether or not ‘S’ knows depends, then, not on what else (if anything) ‘S’ believes, but on the circumstances in which ‘S’ comes to believe. This being so, this type of direct realism is a form of externalism.
Nonetheless, epistemologists often use the distinction between internalist and externalist theories of epistemic justification without offering any very explicit explication. The distinction has been mainly applied to theories of epistemic justification; it has also been applied in a closely related way to accounts of knowledge, and in a rather different way to accounts of belief and thought content. On this way of drawing the distinction, a hybrid view, according to which some of the factors required for justification must be cognitively accessible while others need not and in general will not be, would count as an externalist view. Obviously too, a view that was externalist in relation to a strong version of internalism (by not requiring that the believer actually be aware of all justifying factors) could still be internalist in relation to a weak version (by requiring that he at least be capable of becoming aware of them). However, the most prominent recent externalist views have been versions of ‘reliabilism’, whose main requirement for justification is roughly that the belief be produced by a cognitive process that makes it objectively likely that the belief is true (Goldman, 1986).
What makes such a view externalist is the absence of any requirement that the person for whom the belief is justified have any sort of cognitive access to the relation of reliability in question. Lacking such access, such a person will in general have no reason for thinking that the belief is true or likely to be true, but will, on such an account, nonetheless be epistemically justified in accepting it. Thus such a view arguably marks a major break from the modern epistemological tradition, stemming from Descartes, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a compelling account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.
The general line of argument for externalism points out that internalist views have conspicuously failed to provide defensible, non-sceptical solutions to the classical problems of epistemology. In striking contrast, such problems are in general easily solvable on an externalist view. For example, Goldman (1986) offers a one-page solution, in a footnote, of the problem of induction. Thus, if we assume both that the various relevant forms of scepticism are false and that the failure of internalist views so far is unlikely to be remedied in the future, we have good reason to think that some externalist view is true. Obviously the cogency of this argument depends on the plausibility of its two assumptions. An internalist can reply that it is not obvious that internalist epistemology is doomed to failure, for the explanation of the present lack of success may simply be the extreme difficulty of the problems in question. And it can be argued that most or even all of the appeal of the assumption that the various forms of scepticism are false depends essentially on the intuitive conviction that we do have reasons in our grasp for thinking that the various beliefs questioned by the sceptic are true - a conviction that the proponent of this argument must of course reject.
The main objection to externalism rests on the intuition that the basic requirement for epistemic justification is that the acceptance of the belief in question be rational or responsible in relation to the cognitive goal of truth, which seems to require in turn that the believer actually be aware of a reason for thinking that the belief is true (or, at the very least, that such a reason be available to him). Since the satisfaction of an externalist condition is neither necessary nor sufficient for the existence of such a cognitively accessible reason, it is argued, externalism is mistaken as an account of epistemic justification. This general point has been elaborated by appeal to putative intuitive counterexamples to externalism. One sort of counterexample challenges the necessity of the externalist conditions for epistemic justification by appealing to examples of beliefs that intuitively seem to be justified but fail to satisfy those conditions. The standard examples of this sort are cases where beliefs are produced in some very non-standard way, e.g., by a Cartesian demon, but nonetheless in such a way that the subjective experience of the believer is indistinguishable from that of someone whose beliefs are produced more normally. Cases of this sort can be constructed in which any of the standard externalist conditions, e.g., that the belief be a result of a reliable process, fail to be satisfied. The intuitive claim is that the believer in such a case is nonetheless epistemically justified, as much so as one whose belief is produced in a more normal way, and hence that externalist accounts of justification must be mistaken.
A view in this same general vein, one that might be described as a hybrid of internalism and externalism (Swain, 1981 and Alston, 1989), holds that epistemic justification requires that there be a justificatory factor that is cognitively accessible to the believer in question (though it need not be actually grasped), thus ruling out, e.g., pure reliabilism. At the same time, however, though it must be objectively true that beliefs for which such a factor is available are likely to be true, this further fact need not be in any way grasped or cognitively accessible to the believer. In effect, of the premisses needed to argue that a particular belief is likely to be true, one must be accessible in a way that would satisfy at least weak internalism, while the other can be (and normally will be) purely external. At this point, the internalist will respond that this hybrid view is of no help at all in meeting the objection, for the believer in question, lacking one crucial premiss, still has no reason at all for thinking that his belief is likely to be true, and so is not holding it in the rational, responsible way that justification intuitively seems to require.
An alternative to giving an externalist account of epistemic justification, one which may be more defensible while still accommodating many of the same motivating concerns, is to give an externalist account of knowledge directly, without relying on an intermediate account of justification. Such a view will obviously have to reject the justified-true-belief account of knowledge, holding instead that knowledge is true belief which satisfies the chosen externalist condition, e.g., being the result of a reliable process - and, perhaps, further conditions as well. This makes it possible for such a view to retain an internalist account of epistemic justification, though the centrality of that concept to epistemology would obviously be seriously diminished.
Such an externalist account of knowledge can accommodate the commonsense conviction that animals, young children and unsophisticated adults possess knowledge, without committing itself to the much weaker conviction (if such a conviction even exists) that such individuals are epistemically justified in their beliefs. It is also at least less vulnerable to internalist counterexamples of the sort discussed above, since the intuitions involved there pertain more clearly to justification than to knowledge. What is uncertain is what ultimate philosophical significance the resulting conception of knowledge is supposed to have. In particular, does it have any serious bearing on traditional epistemological problems and on the deeper and most troubling versions of scepticism, which seem in fact to be primarily concerned with justification rather than knowledge?
As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural kind terms, indexicals, and so forth, that motivate the views that have come to be known as ‘direct reference’ theories. Such phenomena seem at least to show that the belief or thought content that can be properly attributed to a person is dependent on facts about his environment - e.g., whether he is on Earth or Twin Earth, what in fact he is pointing at, the classificatory criteria employed by the experts in his social group, and so on - not just on what is going on internally in his mind or brain (Putnam, 1975 and Burge, 1979).
An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the contents of our beliefs or thoughts from the inside, simply by reflection. If content is dependent on external factors pertaining to the environment, then knowledge of content should depend on knowledge of those external factors - which will not in general be available to the person whose belief or thought is in question.
The adoption of an externalist account of mental content would seem to support an externalist account of justification in the following way: If part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content and the status of that content as justifying further beliefs will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist must insist that there are no justification relations of these sorts, that only internally accessible content can either be justified or justify anything else; but such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.
Direct perception of objective facts, pure perceptual knowledge of external events, is made possible because what is needed (by way of justification) for such knowledge has been reduced. Background knowledge - and, in particular, the knowledge that the experience does, indeed, suffice for knowing - isn’t needed.
This means that the foundations of knowledge are fallible. Nonetheless, though fallible, they are in no way derived. That is what makes them foundations: Even if they are brittle, as foundations sometimes are, everything else rests upon them.
As a realism, direct realism assumes that the objects of perception exist independently of any mind that might perceive them, and so it rules out all forms of idealism and phenomenalism, which hold that there are no such independently existing objects. Its being a ‘direct’ realism rules out those views defended under the rubric of ‘critical realism’ or ‘representative realism’, in which there is some non-physical intermediary - usually called a ‘sense-datum’ or a ‘sense-impression’ - that must first be perceived or experienced in order to perceive the object that exists independently of this perception. Often the distinction between direct realism and other theories of perception is explained more fully in terms of what is ‘immediately’ perceived, rather than ‘mediately’ perceived. These terms are Berkeley’s, who claims (1713) that one might be said to hear an electric street car rattling down the street, but this is mediate perception, as opposed to what is ‘in truth and strictness’ the immediate perception of a sound. Since the senses ‘make no inferences’, the perceiver is then said to infer the existence of the street car, or to have it suggested to him by means of hearing the sound. Thus for Berkeley the distinction between mediate and immediate perception is explained in terms of whether or not inference or suggestion is present in the perception itself.
Berkeley went on to claim that the objects of immediate perception - sounds, colours, tastes, smells, sizes and shapes - were all ‘ideas of the mind’. Yet he held that there was no further reality to be inferred from them: So the objects of mediate perception - what we would call ‘physical objects’ - are reduced to being simply collections of ideas. Thus Berkeley uses the immediate-mediate distinction to defend ‘idealism’. A direct realist, however, can also make use of Berkeley’s distinction to define his own position. D.M. Armstrong does this by claiming that the objects of immediate perception are all occurrences of sensible qualities, such as colours, shapes and sounds, and that these are all physical existents, and not ideas or any sort of mental intermediary at all (Armstrong, 1961). Physical objects, all mediately perceived, are the bearers of these immediately perceived properties.
Berkeley and Armstrong’s way of drawing the distinction between mediate and immediate perception - by reference to inference or the lack of it - harbours major difficulties. There are cases in which it is plausible to assert that someone perceived a physical object - say, a tree - even when that person was unaware of perceiving it. (We can infer from his behaviour in carefully walking around it that he did see it, even though he does not remember seeing it.) Armstrong would have to say that in such cases inference was present, because seeing a tree would be a case of mediate perception. But it would have to be an unconscious inference, and this seems baseless: There is no empirical evidence that any sort of inference was made at all.
It also seems that whether a person infers the existence of something from what he perceives is more a question of talent and training than a question of what the nature of the objects inferred really is. For instance, given three different colour samples, a trained artist might not have to infer their differences; instead, he might see their differences immediately. Someone with less colour sense, however, might see patches ‘A’ and ‘B’ as being the same in colour, and see that ‘A’ is darker than ‘C’. On this basis he might then infer that ‘A’ is darker than ‘B’, and ‘B’ darker than ‘C’. Thus inference might be present in determining differences in colour; but colour was supposed to be an object of immediate perception. On the other hand, a gamekeeper at the Metro Zoo in Toronto who sees a black panther might not have to infer what kind of animal it is; he sees it to be such straightaway. Someone unfamiliar with the zoo and its animals, however, might have to infer this from the creature’s markings in identifying it. Hence inference need not be present in cases of perceiving physical objects; yet the perceiving of physical objects was supposed to be mediate perception.
A more straightforward way to distinguish between different objects of perception was advanced by Aristotle in ‘De Anima’, where he spoke of objects directly or essentially perceived as opposed to those incidentally perceived. The former comprise perceptual properties, either those discerned by only one sense (the ‘proper sensibles’), such as colour, sound, taste, smell, and tactile qualities, or those discerned by more than one sense, such as size, shape, and motion (the ‘common sensibles’). The objects incidentally perceived are the concrete individuals which possess the perceptual properties, that is, particular physical objects.
According to Aristotle’s direct realism, we perceive physical objects incidentally: That is, only by means of the direct or essential perception of certain properties that belong to such objects. In other words, by perceiving the real properties of things, and only in this way, can we thereby be said to perceive the things themselves. These perceptual properties, though not existing independently of the objects that have them, are yet held to exist independently of the perceiving subject: And the perception of them is direct in that no mental entities have to be perceived or sensed in order to perceive these real properties.
Aristotle’s way of defining his position seems superior to the psychological account offered by Armstrong, since it is unencumbered with the extra baggage of inference or suggestion. Yet a common interpretation of the Aristotelian view leads to grave difficulties. The interpretation identifies the property perceived with a form taken on by the sense organ; it is based on Aristotle’s saying that in perception the soul takes in the form of the object perceived without its matter. On this interpretation, it is easy to think of direct realism as being committed to the view that ‘colour as seen’ or ‘sound as heard’ exist independently in physical objects. Such a view has been rightly disparaged by its critics and labelled ‘naive realism’: For this is a view holding that the way things look is the way things are, even in the absence of perceivers to whom they appear that way.
Similarly, such reductions could be made with regard to the other sensible properties that seemed to be perceiver-dependent: sound could be reduced to sound waves, tastes and smells to the particular shapes of the molecules that lie on the tongue or enter the nose, and tactual qualities such as roughness and smoothness to structural properties of the objects felt. All of these properties would be taken to be distinct from the perceptual experiences that they typically give rise to when they cause changes in the perceiver’s sense organs. When critics complain that such a reduction leaves out the greenness of green and the yellowness of yellow (Campbell, 1976), the direct realist can answer that it is by identifying different colours with distinct light waves that we can best explain how perceivers in the same environment, with similar physical constitutions, can have similar colour experiences of green or of yellow.
A direct realist could claim that one directly perceives what is real only when there is no difference between the property proximately impinging on the sense organ and the property of the object which gives rise to the sense organ’s being affected. For colour, this would mean that the light waves reflected from the surface of the object must match those entering the eye, and for sound, that the sound waves emitted from the object must match those entering the ear. A difference between the property at the object and that at the sense organ would result in illusion, not veridical perception. Perhaps this is simply a modern version of Aristotle’s idea that in genuine perception the soul (now the sense organ) takes in the form of the perceived object.
If it is protested that illusion might also result from an abnormal condition of the perceiver, this can also be accepted: If one’s colour experience deviated too far from the normal, even when the physical properties at the object and the sense organ were the same, then misperception or illusion would result. But such illusion could only be noted against a backdrop of veridical perception of real properties. Thus, the chance of illusion due to subjective factors need not lead to subjectivist views of colours, sounds, tastes, and smells as existing merely ‘by convention’. The direct realist could reply that there must be a real basis in veridical perception for any such agreement to take place at all: And veridical perception is best explained in terms of the direct perception of the properties of physical objects. It is explained, in other words, by our perceptual experience being caused in the appropriate way.
This reply on the part of the direct realist does not, of course, serve to refute the global sceptic, who claims that, since our perceptual experience could be just as it is without there being any real properties at all, we have no knowledge of any such properties. But no view of perception alone is sufficient to refute such global scepticism (Pitcher, 1971). For such a refutation we must go beyond a theory of how best to explain our perception of physical objects, and defend a theory that best explains how we obtain knowledge of the world.
In its best-known form, the adverbial theory of experience proposes that the grammatical object of a statement attributing an experience to someone be analysed as an adverb. For example:
(1) Rod is experiencing a pink square
is rewritten as:
Rod is experiencing (pink square)-ly
This is presented as an alternative to the act/object analysis, according to which the truth of a statement like (1) requires the existence of an object of experience corresponding to its grammatical object. A commitment to the explicit adverbialization of statements of experience is not, however, essential to adverbialism. The core of the theory consists, rather, in the denial of objects of experience (as opposed to objects of perception), coupled with the view that the role of the grammatical object in a statement of experience is to characterize more fully the sort of experience which is being attributed to the subject. The claim, then, is that the grammatical object is functioning as a modifier, and, in particular, as a modifier of a verb. If this is so, it is perhaps appropriate to regard it as a special kind of adverb at the semantic level.
Nevertheless, our acquaintance with ‘experience’ - meeting with something directly, as through participation or observation - is an intricate and intimate affair: knowledge of something based on personal exposure. However familiar, it is not possible to define experience in an illuminating way; we know what experiences are through acquaintance with some of our own, e.g., a visual experience of an after-image, a feeling of physical nausea, or a tactile experience of an abrasive surface (which might be caused by an actual surface - rough or smooth - or might be part of a dream, or the product of a vivid sensory imagination).
Another core feature of the sorts of experiences with which we are concerned - experiences of subjects occupying particular points in space and time - is that they have representational content. The most obvious cases of experience with content are sense experiences of the kind normally involved in perception. We may describe such experiences by mentioning their sensory modalities and their contents, e.g., a gustatory experience (modality) of chocolate ice cream (content), but we do so more commonly with verbs combined with noun phrases specifying their contents, as in ‘Macbeth perceived a dagger visually’ and ‘Macbeth had a visual experience of a dagger’ (the reading with which we are concerned).
As in the case of other mental states and events with content, it is important to distinguish between the properties which an experience represents and the properties which it possesses. To talk of the representational properties of an experience is to say something about its content, not to attribute those properties to the experience itself. Like every other experience, a visual experience of a pink square is a mental event, and it is therefore not itself pink or square, even though it represents those properties. It is, perhaps, fleeting, pleasant or unusual, even though it does not represent those properties. An experience may represent a property which it possesses, and it may even do so in virtue of possessing that property, as in the case of a rapidly changing (complex) experience representing something as changing rapidly, but this is the exception and not the rule.
Which properties can be (directly) represented in sense experience is subject to debate. Traditionalists include only properties whose presence could not be doubted by a subject having appropriate experiences, e.g., colour and shape in the case of visual experience, and (apparent) shape, surface texture, hardness, etc., in the case of tactile experience. This view is natural to anyone who has an egocentric, Cartesian perspective in epistemology, and who wishes for pure data in experience to serve as logically certain foundations for knowledge. The successors to the empiricists’ ideas of sense are the sense-data, a term introduced by Moore and Russell referring to the immediate objects of perceptual awareness, such as colour patches and shapes, usually supposed distinct from surfaces of physical objects. Qualities of sense-data were supposed to be distinct from physical qualities because their perception is more relative to conditions, more certain and more immediate, and because sense-data are private and cannot appear other than they are. They are objects that change in our perceptual fields when conditions of perception change, while physical objects remain constant.
Others, who do not think that this wish can be satisfied, and who are more impressed with the role of experience in providing animals with ecologically significant information about the world around them, claim that sense experience represents properties, characteristics and kinds which are much richer and more wide-ranging than the traditional sensory qualities. We do not see only colours and shapes, they tell us, but also earth, water, men, women and fire. There is no space here to examine the factors relevant to a choice between these alternatives.
Given the modality and content of a sense experience, most of us will be aware of its character even though we cannot describe that character directly. This suggests that character and content are not really distinct, and that there is a close connection between them. For one thing, the relative complexity of the character of a sense experience places limitations on its possible content, e.g., a tactile experience of something touching one’s left ear is just too simple to carry the same amount of content as a typical everyday visual experience. Furthermore, the content of a sense experience of a given character depends on the normal causes of appropriately similar experiences, e.g., the sort of gustatory experience which we have when eating chocolate would not represent chocolate unless it were normally caused by chocolate. Granting a contingent connection between the character of an experience and its possible causal origins, it again follows that its possible content is limited by its character.
Character and content are, nonetheless, irreducibly different, for the following reasons. (1) There are experiences which completely lack content, e.g., certain bodily pleasures. (2) Not every aspect of the character of an experience with content is relevant to that content, e.g., the unpleasantness of an aural experience of chalk squeaking on a board may have no representational significance. (3) Experiences in different modalities may overlap in content without a parallel overlap in character, e.g., visual and tactile experiences of circularity feel completely different. And (4) the content of an experience with a given character may vary according to the background of the subject, e.g., a certain aural experience may come to have the content ‘singing bird’ only after the subject has learned something about birds.
According to the act/object analysis of experience (which is a special case of the act/object analysis of consciousness), every experience involves an object of experience even if it has no material object. Two main lines of argument may be offered in support of this view, one phenomenological and the other semantic.
In outline, the phenomenological argument is as follows. Whenever we have an experience, even if nothing beyond the experience answers to it, we seem to be presented with something through the experience (which is itself diaphanous). The object of the experience is whatever is so presented to us - be it an individual thing, an event, or a state of affairs.
The semantic argument is that objects of experience are required in order to make sense of certain features of our talk about experience, including, in particular, the following. (1) Simple attributions of experience (e.g., ‘Rod is experiencing a pink square’) seem to be relational. (2) We appear to refer to objects of experience and to attribute properties to them (e.g., ‘The after-image which John experienced was green’). (3) We appear to quantify over objects of experience (e.g., ‘Macbeth saw something which his wife did not see’).
The act/object analysis faces several problems concerning the status of objects of experience. Currently the most common view is that they are sense-data - private mental entities which actually possess the traditional sensory qualities represented by the experience of which they are the object. But the very idea of an essentially private entity is suspect. Moreover, since an experience may apparently represent something as having a determinable property (e.g., redness) without representing it as having any subordinate determinate property (e.g., any specific shade of red), a sense-datum may actually have a determinable property without having any determinate property subordinate to it. Even more disturbing is that sense-data may have contradictory properties, since experiences can represent impossible states of affairs; the sense-data theorist must then either deny that there are such experiences or admit contradictory objects.
Experiences seem not to present us with bare properties (however complex), but with properties embodied in individuals. The view that objects of experience are Meinongian objects accommodates this point. It is also attractive in so far as (1) it allows experiences to represent properties other than traditional sensory qualities, and (2) it allows for the identification of objects of experience with objects of perception in the case of experiences which constitute perception. According to the act/object analysis of experience, every experience with content involves an object of experience to which the subject is related by an act of awareness (the event of experiencing that object). This is meant to apply not only to perceptions, which have material objects (whatever is perceived), but also to experiences like hallucinations and dream experiences, which do not. Such experiences nonetheless appear to represent something, and their objects are supposed to be whatever it is that they represent. Act/object theories may differ on the nature of objects of experience, which have been treated as properties, Meinongian objects (which may not exist or have any form of being), and, more commonly, private mental entities with sensory qualities. (The term ‘sense-data’ is now usually applied to the latter, but has also been used as a general term for objects of sense experience, as in the work of G.E. Moore.) Act/object theorists may also differ on the relationship between objects of experience and objects of perception. In terms of representative realism, objects of perception (of which we are ‘indirectly aware’) are always distinct from objects of experience (of which we are ‘directly aware’); Meinongians, however, may simply treat objects of perception as existing objects of experience. But most philosophers will feel that the Meinongian’s acceptance of impossible objects is too high a price to pay for these benefits.
A general problem for the act/object analysis is that the question of whether two subjects are experiencing one and the same thing (as opposed to having exactly similar experiences) appears to have an answer only on the assumption that the experiences concerned are perceptions with material objects. But on the act/object analysis the question must have an answer even when this condition is not satisfied. (The answer is always negative on the sense-data theory; it could be positive on other versions of the act/object analysis, depending on the facts of the case.)
In view of these problems, the case for the act/object analysis should be reassessed. The phenomenological argument is not, on reflection, convincing, for it is easy enough to grant that any experience appears to present us with an object without accepting that it actually does. The semantic argument is more impressive, but, nonetheless, answerable. The seemingly relational structure of attributions of experience is a challenge dealt with in the adverbial theory. Apparent reference to and quantification over objects of experience can be handled by analysing them as reference to experiences themselves, and quantification over experiences tacitly typed according to content. (Thus, ‘The after-image which John experienced was green’ becomes ‘John’s after-image experience was an experience of green’, and ‘Macbeth saw something which his wife did not see’ becomes ‘Macbeth had a visual experience which his wife did not have’.)
Pure cognitivism attempts to avoid the problems facing the act/object analysis by reducing experiences to cognitive events or associated dispositions, e.g., Susy’s experience of a rough surface beneath her hand might be identified with the event of her acquiring the belief that there is a rough surface beneath her hand, or, if she does not acquire this belief, with a disposition to acquire it which has somehow been blocked.
This position has attractions, as it does full justice to the cognitive contents of experience and to the important role of experience as a source of belief acquisition. It would also help clear the way for a naturalistic theory of mind, since there seems to be some prospect of a physicalist/functionalist account of belief and other intentional states. But pure cognitivism is undermined by its failure to accommodate the fact that experiences have a felt character which cannot be reduced to their content. The adverbial theory, for its part, is an attempt to undermine the act/object analysis by suggesting a semantic account of attributions of experience which does not require objects of experience. Unfortunately, the oddities of explicit adverbialization of such statements have driven off potential supporters of the theory. Furthermore, the theory remains largely undeveloped, and attempted refutations have traded on this. It may, however, be founded on sound basic intuitions, and there is reason to believe that an effective development of the theory is possible.
The relevant intuitions are (1) that when we say that someone is experiencing ‘an A’, or has an experience ‘of an A’, we are using this content-expression to specify the type of thing which the experience is especially apt to fit; (2) that doing this is a matter of saying something about the experience itself (and perhaps also about the normal causes of such experiences); and (3) that there is no good reason to suppose that it involves the description of an object which the experience is ‘of’. Thus, the effective role of the content-expression in a statement of experience is to modify the verb it complements, not to introduce a special type of object.
Perhaps the most important criticism of the adverbial theory is the ‘many-property problem’, according to which the theory does not have the resources to distinguish between, e.g.,
(1) Frank has an experience of a brown triangle
And:
(2) Frank has an experience of brown and an experience of a triangle.
Statement (2) is entailed by (1) but does not entail it. The act/object analysis can easily accommodate the difference between (1) and (2) by claiming that the truth of (1) requires a single object of experience which is both brown and triangular, while that of (2) allows for the possibility of two objects of experience, one brown and the other triangular. The adverbialist may respond, however, that (1) is equivalent to:
(1*) Frank has an experience of something’s being both brown and triangular.
And (2) is equivalent to:
(2*) Frank has an experience of something’s being brown and an experience of something’s being triangular.
And the difference between these can be explained quite simply in terms of logical scope, without invoking objects of experience. The adverbialist may use this to answer the many-property problem by arguing that the phrase ‘a brown triangle’ in (1) does exactly the same work as the clause ‘something’s being both brown and triangular’ in (1*). This is perfectly compatible with the view that it also has the ‘adverbial’ function of modifying the verb ‘has an experience of’, for it specifies the experience more narrowly just by giving a necessary condition for the satisfaction of the experience (the condition being that there is something both brown and triangular before Frank).
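The scope point can be made explicit in standard first-order notation (a conventional formalization added here for illustration, not part of the original discussion): (1*) places both predicates within the scope of a single quantifier, while (2*) uses two independent quantifications.

```latex
% (1*): a single thing is both brown and triangular
\exists x\,\bigl(\mathrm{Brown}(x) \wedge \mathrm{Triangular}(x)\bigr)

% (2*): something is brown and something (possibly distinct) is triangular
\exists x\,\mathrm{Brown}(x) \;\wedge\; \exists y\,\mathrm{Triangular}(y)
```

The first formula entails the second, but not conversely, which is exactly the logical relation required between (1) and (2).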
One further position that should be mentioned is the state theory, according to which a sense experience of an ‘A’ is an occurrent, non-relational state of the kind which the subject would be in when perceiving an ‘A’. Suitably qualified, this claim is no doubt true, but its significance is subject to debate. Perhaps it is enough to remark that the claim is compatible with both pure cognitivism and the adverbial theory, and that state theorists are probably best advised to adopt adverbialism as a means of developing their intuitions.
That is to say, most generally, one has intuitive knowledge that ‘p’ when:
1: One knows that ‘p’.
2: One’s knowledge that ‘p’ is immediate, and
3: One’s knowledge that ‘p’ is not an instance of the operation of any of the five senses (so knowledge of the nature of one’s own experience is not intuitive).
On this account, neither mediate nor sensory knowledge is intuitive knowledge. Some philosophers, however, want to allow sensory knowledge to count as intuitive; to do this, they omit clause (3) above.
The two principal families of examples of mediate (i.e., not immediate) knowledge that have interested philosophers are knowledge through representation and knowledge by inference. Knowledge by representation occurs when the thing known is not what one appeals to as a basis for claiming to know it, as when one appeals to sensory phenomena as a basis for knowledge of the world (and the world is not taken to be a sense-phenomenal construct), or as when one appeals to words as a source of knowledge of the world (as when one claims that a proposition is true of the world solely by virtue of the meaning of the words expressing it).
(There are other idioms used to mark out the differences between non-intuitional ways of knowing, such as knowing indirectly and knowing directly, or knowing in the absence of the thing known and knowing by virtue of the presence of the thing known. It is sometimes useful to speak of the object of knowledge being intuitively given, meaning that we can know things about it without mediation. The justification of a claim to knowledge by appeal to its object being intuitively given is surely as good as could be: What could be a better basis for a claim to knowledge than the object of knowledge itself, given just as it is?)
One of the fundamental problems of philosophy, overlapping epistemology and the philosophy of logic, is that of giving criteria for when a deductive inference is valid, criteria for when an inference does or can transmit knowledge or truth. There are in fact two very different proposals for solutions to this problem: one that slowly came into fashion during the early part of this century, and another that has been much out of fashion but is gaining in admirers. The former, which develops out of the tradition of Aristotelian syllogistic, holds that all valid deductive inferences can be analysed and paraphrased as follows:
The sentences occurring in the deduction are aptly paraphrased by sentences with an explicit, interpreted logical syntax, which in the main consists of expressions for logical operations, e.g., predication, negation, conjunction, disjunction, quantification, abstraction, . . . ; and
The validity of the inferences made from sentences in that syntax to sentences in that syntax is entirely a function of the meaning of the signs for logical operations expressed in the syntax.
In particular, it is principally the meaning of the signs for logical operations that justifies taking considered rules of inference as valid (Koslow, 1991). Here, for example, is such a justification as given by Gottlob Frege (1848-1925), one of the great developers of this view of the nature of the proper criteria for valid deductive inference, who in fact, in the late nineteenth century, gave us an interpreted logical syntax (and so a formal deductive logic) far more powerful than had been available through the tradition of Aristotelian syllogistic:
‘A ⊃ B’ is meant to be a proposition that is false when ‘A’ is true and ‘B’ is false; otherwise, it is true (Frege, 1964, paraphrased; variables restricted to the True and the False).
The following is a valid rule of inference: From ‘A’ and ‘A ⊃ B’, infer ‘B’. For if ‘B’ were false, then, since ‘A’ is true, ‘A ⊃ B’ would be false; but it is supposed to be true (Frege, 1964, paraphrased).
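Frege’s rule and its justification can be displayed schematically (standard modern notation, added here for illustration):

```latex
\frac{A \qquad A \supset B}{B}
\quad \text{(modus ponens)}

% Justification: suppose B were false. Since A is true,
% A \supset B would then be false, contrary to the assumption
% that both premises are true. Hence B is true.
```

Notice that the justification appeals only to the stipulated truth-conditions of ‘⊃’, which is exactly the point at issue: the rule is validated by the meaning of the sign for the logical operation alone.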
Frege believed that the principal virtue of such formal-syntactical reconstructions of inferences - as valid on the basis of the meaning of the signs for the logical operations alone - was that it eliminated the dependence on intuition and let one see exactly what our inferences depend on, e.g.:
We divided all truths that require justification into two kinds, those for which the proof can be carried out purely by means of logic and those for which it must be supported by facts of experience.
. . . Now, when I came to consider the question to which of these two kinds the judgments of arithmetic belong, I first had to ascertain how far one could proceed in arithmetic by means of inference alone, with the sole support of those laws of thought that transcend all particulars. . . . To prevent anything intuitive (Anschauliches) from penetrating here unnoticed, I had to bend every effort to keep the chain of inference free from gaps (Frege, 1975).
In the literature most ready to hand, the alternative view was supported by Descartes and elaborated by John Locke, who maintained that inferences move best and most soundly when based on intuition (their word):
Syllogism serves our Reason, in that it shows the connexion of the Proofs, i.e., the connexion between premises and conclusion, in any one instance and no more; but in this it is of no great use, since the Mind can perceive such connexion, where it really is, as easily, nay, perhaps better, without Syllogism.
If we observe the Actings of our own Minds, we shall find, that we reason best and clearest when we only observe the connexion of the Ideas, without reducing our Thoughts to any Rule of Syllogism (Locke, 1975, p. 670).
What is it that one is intuiting? Ideas, or meanings, and relationships among them. Ideas or meanings are directly given. The difference being marked by Locke is between (1) inferring ‘Socrates is mortal’ from the premises ‘All men are mortal’ and ‘Socrates is a man’ by appealing to the formal-logical rule - from ‘All A are B’ and ‘C is an A’, infer ‘C is B’ - which is supposed to be done without any appeal to the intuitive meanings of ‘all’ and ‘is’, and (2) seeing that ‘Socrates is mortal’ follows from ‘All men are mortal’ and ‘Socrates is a man’ by virtue of understanding (the meanings of) those informal sentences, without any appeal to the formal-logical rule. Locke is also making the point that inferences made on the basis of such an understanding of meanings are better, and more fundamental, than inferences made on the basis of an appeal to a formal-logical schema. Indeed, Locke would certainly maintain that such informal, intuitive inferences, made on the basis of understanding the meanings of sentences, serve better as a check on the correctness of formal-logical inferences than formal inferences serve as a check on intuitive inferences.
Such distrust of formal-logical inference, or greater trust in intuitive inference, has been promoted in recent times by Henri Poincaré and L.E.J. Brouwer (Detlefsen, 1991).
We might say that for Frege, too, logical inferences moved by virtue of intuition of meanings - the meanings of the signs for logical operations - for we have seen how Frege appealed to such meanings in order to justify formal-logical rules of inference. Of course, once the formal-logical rules are justified, Frege is quite content to appeal to them in the construction of deductions, not returning each time to the intuited meanings of the logical signs. What is new in Frege is the conviction that inferences that proceed wholly on the basis of the logical signs - signs for logical operations - are complete with respect to logical implication: That if ‘B’ logically follows from ‘A’, then ‘B’ can be deduced from ‘A’ by rules which mention only logical operations and not, e.g., the concrete meanings of predicate-expressions in the relevant propositions. Here is a deep issue, one destined to become the principal issue in the philosophy and epistemology of logical theory: To what extent, in what measure, does intuition of the non-logical contents of propositions, i.e., content other than the meanings of the signs for logical operations, rightly sustain inference?
But one does not really need to reach for such an example: Virtually all inferences set out in mathematical proofs most obviously proceed on the basis of intuitively given meaning content rather than by appeal to formal-logical rules, and it is easy to find examples of such proofs that clearly do not depend on the meanings of the signs for logical operators, but rather on the non-logical content of the mathematical propositions. There is a good example in Hilbert (1971, p. 6, paraphrased).
Similar problems face the suggestion that necessary truths are the ones we know with certainty: We lack a criterion for certainty, there are necessary truths we don’t know, and (barring dubious arguments for scepticism) it is reasonable to suppose that we know some contingent truths with certainty. Leibniz defined a necessary truth as one whose opposite implies a contradiction. Every such proposition, he held, is either an explicit identity (i.e., of the form ‘A is A’, ‘AB is B’, etc.) or is reducible to an identity by successively substituting equivalent terms. (Thus, ‘All bachelors are unmarried’ might be so reduced by substituting ‘unmarried man’ for ‘bachelor’.) This has several advantages over the previous account. First, it explicates the notions of necessity and possibility and seems to provide a criterion we can apply. Second, because explicit identities are self-evident a priori propositions, the theory implies that all necessary truths are knowable a priori, but it does not entail that we actually know all of them, nor does it define ‘knowable’ in a circular way. Third, it implies that necessary truths are knowable with certainty, but does not preclude our having certain knowledge of contingent truths by means other than a reduction.
Nevertheless, this view is also problematic. Leibniz`s example of reduction are too spare to prove a claim about all necessary truths. Some of his reductions. Moreover, are deficient: Frége has pointed out , for example, that his proof of `2 + 2 = 4, presupposes the principle of association and so does not depend only on the principle of identity. More generally, it has been shown that arithmetic cannot be reduced to logic, but requires the resources of set theory as well. Finally, there are other necessary propositions (e.g., `Nothing can be red and green all over`) which do not seem to be reducible to identities and which Leibniz does not show how to reduce.
Leibniz and others have thought of truth as a property of propositions, where the latter are conceived as things which may be expressed by, but are distinct from, linguistic items like statements. On another approach, truth is a property of linguistic entities, and the basis of necessary truth is convention. Thus A.J. Ayer, for example, argued that the only necessary truths are analytic statements, and that the latter rest entirely upon our commitment to use words in certain ways.
The general project of the positivist theory of knowledge is to exhibit the structure, content and basis of human knowledge in accordance with empiricist principles. Since science is regarded as the repository of all genuine human knowledge, this becomes the task of exhibiting the structure, or, as it was called, the logic, of science. The theory of knowledge thus has three major tasks: (1) to analyse the meaning of the statements of science exclusively in terms of observations or experiences in principle available to human beings; (2) to show how certain observations or experiences serve to confirm a given statement in the sense of making it more warranted or reasonable; and (3) to show how non-empirical or a priori knowledge of the necessary truths of logic and mathematics is possible even though every matter of fact which can be intelligibly thought or known is empirically verifiable or falsifiable.
1. The slogan ‘the meaning of a statement is its method of verification’ expresses the empirical verification theory of meaning. It is more than the general criterion of meaningfulness according to which a sentence is cognitively meaningful only if it is empirically verifiable. It says, in addition, what the meaning of each sentence is: it is the set of all those observations which would confirm or disconfirm the sentence. Sentences which would be verified or falsified by all the same observations are empirically equivalent, i.e., have the same meaning.
A sentence recording the result of a single observation is an observation or ‘protocol’ sentence. It can be conclusively verified or falsified on a single occasion. Every other sentence implies an indefinitely large number of observation sentences which together exhaust its meaning, but at no time will all of them have been verified or falsified. To give an ‘analysis’ of the statements of science is to show how the content of each scientific statement can be reduced in this way to nothing more than a complex combination of directly verifiable ‘protocol’ sentences.
Verificationism is any view according to which the conditions of a sentence’s or a thought’s being meaningful or intelligible are equated with the conditions of its being verifiable or falsifiable. An explicit defence of the position would be a defence of the verifiability principle of meaningfulness. The exclusiveness of a scientific world view was to be secured by showing that everything beyond the reach of science is strictly or ‘cognitively’ meaningless, in the sense of being incapable of truth or falsity, and so not a possible object of knowledge. The criterion was found in the idea of empirical verification: anything which does not fulfil it is declared literally meaningless; there is no question of its truth or falsity, and it is not an appropriate object of enquiry. Moral, aesthetic and other evaluative judgments are neither confirmable nor disconfirmable on empirical grounds, and so are cognitively meaningless. They are at best expressions of feeling or preference which are neither true nor false. Whatever is cognitively meaningful and therefore factual is value-free. The positivists claimed that many of the sentences of traditional philosophy, especially those in what they called ‘metaphysics’, also lack cognitive meaning and say nothing that could be true or false. But they did not spend much time trying to show this in detail about the philosophy of the past. They were more concerned with developing a theory of meaning and of knowledge adequate to the understanding, and perhaps even the improvement, of science.
Implicit verificationism is often present in positions or arguments which do not defend that principle in general, but which reject suggestions to the effect that a certain sort of claim is unknowable or unconfirmable, on the sole ground that it would therefore be meaningless or unintelligible. Only if meaningfulness or intelligibility is indeed a guarantee of knowability or confirmability is the position sound. If it is sound, nothing we understand could be unknowable or unconfirmable by us.
2. The observations recorded in particular ‘protocol’ sentences are said to confirm those ‘hypotheses’ of which they are instances. The task of confirmation theory is therefore to define the notion of a confirming instance of a hypothesis and to show how the occurrence of more and more such instances adds credibility or warrant to the hypothesis in question. A complete answer would involve a solution to the problem of induction: to explain how any past or present experience makes it reasonable to believe in something that has not yet been experienced. All inferences from past or present experience to an unobserved matter of fact ‘proceed upon’ the principle that the future will resemble the past. But no assurance can be given of that principle by reason alone: it is not impossible, in the sense of implying a contradiction, for the future to be different from the past. Whether the future will resemble the past is a contingent matter of fact. Experience is therefore needed to assure us of that principle. But experience cannot do so alone, since past experience can tell us only how things have been in the past. Something more than past experience is needed.
But reason, even when combined with past experience, cannot be what leads us to believe that the future will resemble the past. If it did, it would be by means of an inference from past experience to the principle that the future will resemble the past. And, as before, any such inference would have to ‘proceed upon the supposition’ that the future will resemble the past; but that would be ‘evidently going in a circle, and taking that for granted which is the very point in question’.
3. Logical and mathematical propositions, and other necessary truths, do not serve to predict the course of future sense experience, so they cannot be empirically confirmed or disconfirmed; yet they are essential to science and so must be accounted for. They are one and all ‘analytic’ in something like the Kantian sense: true solely in virtue of the meanings of their constituent terms. They serve only to make explicit the contents of, and the logical relations among, the terms or concepts which make up the conceptual framework through which we interpret and predict experience. Our knowledge of such truths is simply knowledge of what is and what is not contained in the concepts we use.
Nonetheless, the Lockean/Kantian distinction is based on a narrow notion of concept, on which concepts are senses of expressions in the language. The broad Fregean/Carnapian distinction is based on a broad notion of concept, on which concepts are conceptions, often scientific ones, about the nature of the referents of expressions (Katz, 1971 and Putnam, 1981). The conflation of these two notions of concept produces the illusion of a single concept with the content of philosophical, logical and mathematical conceptions, but with the status of linguistic concepts. All that is necessary is to keep the original, narrow distinction from being broadened. This ensures that propositions expressing the content of broad concepts cannot receive the easier justification appropriate to narrow ones. The narrow notion allows us to pose the problem of how necessary knowledge is possible, and logical and mathematical knowledge are part of that problem. Quine, then, did not undercut the foundations of rationalism; hence a serious reappraisal of the new empiricism and naturalized epistemology is, to say the least, very much in order (Katz, 1990).
Experience can perhaps show that a given concept has no instances, or that it is not a useful concept for us to employ, but that would not show that what we understand to be included in that concept is not really included in it, or that it is not the concept we take it to be. Our knowledge of the constituents of, and the relations among, our concepts is therefore not dependent on experience: it is a priori. It is knowledge of what holds necessarily, and all necessary truths are ‘analytic’; there is no synthetic a priori knowledge. To mark the distinction, one who characterizes a priori knowledge in terms of justification which is independent of experience is faced with the task of articulating the relevant sense of experience. Proponents of the a priori often cite ‘intuition’ or ‘intuitive apprehension’ as the source of a priori justification. Recent attacks on the existence of a priori knowledge fall into three general camps. Some, such as Putnam (1979) and Kitcher (1983), begin by providing an analysis of the concept of a priori knowledge and then argue that alleged examples of a priori knowledge fail to satisfy the conditions specified in the analysis. Attacks of the second sort proceed independently of any particular analysis of the concept, focusing instead on the alleged source of such knowledge. Benacerraf (1973), for example, argues that intuition, held by proponents of the a priori to be the source of mathematical knowledge, cannot fulfil that role. A third form of attack is to consider prominent examples of propositions alleged to be knowable only a priori and to show that they can be justified by experiential evidence. The Kantian position that has received most attention is the claim that some a priori knowledge is of synthetic a priori propositions. Initially, there were two different claims.
Some were concerned exclusively with particular Kantian examples of alleged synthetic a priori knowledge, notably the claim that the truths of arithmetic are synthetic.
Kantian strategies hold that mathematical knowledge is a necessary condition of empirical knowledge. Kant argued that the laws of mathematics are actually constraints on our perception of space and time. In knowing mathematics, then, we know only the laws of our own perception. Physical space in itself, for all we know, may not obey the laws of Euclidean geometry and arithmetic, but the world as perceived by us must. On this view mathematics is objective, or ‘intersubjective’, in the sense that it holds good for the whole human race, past, present and future. For this reason there is no problem about the applicability of mathematics in empirical science, or so the Kantian claims.
Pending a resolution of these epistemological problems, it is worth returning to the distinction between truths of reason and truths of fact. This distinction is associated with Leibniz, who declares that there are only two kinds of truths: truths of reason and truths of fact. The former are either explicit identities, i.e., of the form ‘A is A’, ‘AB is B’, and so forth, or they are reducible to this form by successively substituting equivalent terms. Leibniz also says that truths of reason ‘rest on the principle of contradiction, or identity’ and that they are necessary propositions, true of all possible worlds. Some examples are ‘All equilateral rectangles are rectangles’ and ‘All bachelors are unmarried’: the first is already of the form ‘AB is B’, and the latter can be reduced to this form by substituting ‘unmarried man’ for ‘bachelor’. Other examples, or so Leibniz believes, are ‘God exists’ and the truths of logic, arithmetic and geometry.
Truths of fact, on the other hand, cannot be reduced to an identity, and our only way of knowing them is a posteriori, by reference to the facts of the empirical world. Likewise, since their denial does not involve a contradiction, their truth is merely contingent: they could have been otherwise and hold of the actual world, but not of every possible one. Some examples are ‘Caesar crossed the Rubicon’ and ‘Leibniz was born in Leipzig’, as well as propositions expressing correct scientific generalizations. In Leibniz’s view, truths of fact rest on the principle of sufficient reason, which states that nothing can be so unless there is a reason why it is so. This reason is that the actual world (by which he means the total collection of things past, present and future) is better than any other possible world and was therefore created by God.
Necessary truths are ones which must be true, or whose opposite is impossible. Contingent truths are those that are not necessary and whose opposite is therefore possible. 1-3 below are necessary; 4-6, contingent.
(1) It is not the case that it is raining and not raining.
(2) 2 + 2 = 4.
(3) All bachelors are unmarried.
(4) It seldom rains in the Sahara.
(5) There are more than four states in the US of A.
(6) Some bachelors drive Maseratis.
Plantinga (1974) characterizes the sense of necessity illustrated in 1-3 as ‘broadly logical’. It includes not only truths of logic, but those of mathematics, set theory, and other quasi-logical ones. Yet it is not so broad as to include matters of causal or natural necessity, such as
(7) Nothing travels faster than the speed of light.
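As an aside, the broadly logical necessity of (1) lends itself to a mechanical check: since (1) involves a single propositional variable, one can simply enumerate its truth assignments. A minimal sketch in Python (the function names are ours, for illustration only):

```python
def is_tautology(formula):
    """Check whether a one-variable propositional formula is true
    under every truth assignment (here: raining = True or False)."""
    return all(formula(raining) for raining in (True, False))

# (1) It is not the case that it is raining and not raining.
example_1 = lambda raining: not (raining and not raining)

print(is_tautology(example_1))  # True: (1) holds under every assignment
```

By contrast, no such enumeration bears on (4)-(6): their truth depends on how the world is, not on the form of the sentence.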
One would like an account of the distinction and a criterion by which to apply it. Some suppose that necessary truths are those we know a priori. But we lack a criterion for a priori truths, and there are necessary truths we don’t know at all (e.g., undiscovered mathematical ones). It won’t help to say that necessary truths are ones which it is possible, in the broadly logical sense, to know a priori, for this is circular. Finally, Kripke (1972) and Plantinga (1974) argue that some contingent truths are knowable a priori. Similar problems face the suggestion that necessary truths are the ones we know with certainty: we lack a criterion for certainty; there are necessary truths we don’t know; and, barring dubious arguments for scepticism, it is reasonable to suppose that we know some contingent truths with certainty.
Leibniz defined a necessary truth as one whose opposite implies a contradiction. Every such proposition, he held, is either an explicit identity (i.e., of the form ‘A is A’, ‘AB is B’) or is reducible to an identity by successively substituting equivalent terms. (Thus 3 above might be so reduced by substituting ‘unmarried man’ for ‘bachelor’.) This has several advantages over the ideas of the previous paragraph. First, it explicates the notions of necessity and possibility and seems to provide a criterion we can apply. Second, because explicit identities are self-evident a priori propositions, the theory implies that all necessary truths are knowable a priori, but it does not entail that we actually know all of them, nor does it define ‘knowable’ in a circular way. Third, it implies that necessary truths are knowable with certainty, but does not preclude our having certain knowledge of contingent truths by means other than a reduction.
Nevertheless, this view is also problematic: Leibniz’s examples of reduction are too sparse to prove a claim about all necessary truths. Some of his reductions, moreover, are deficient: Frege pointed out, for example, that Leibniz’s proof of ‘2 + 2 = 4' presupposes the principle of association and so does not depend only on the principle of identity. More generally, it has been shown that arithmetic cannot be reduced to logic, but requires the resources of set theory as well. Finally, there are other necessary propositions (e.g., ‘Nothing can be red and green all over’) which do not seem to be reducible to identities and which Leibniz does not show how to reduce.
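Frege’s point can be made explicit. Leibniz defines 2 = 1 + 1, 3 = 2 + 1 and 4 = 3 + 1, and the proof of ‘2 + 2 = 4’ then runs:

```latex
\begin{align*}
2 + 2 &= 2 + (1 + 1) && \text{definition: } 2 = 1 + 1\\
      &= (2 + 1) + 1 && \text{associativity of addition}\\
      &= 3 + 1       && \text{definition: } 3 = 2 + 1\\
      &= 4           && \text{definition: } 4 = 3 + 1
\end{align*}
```

The second step silently invokes a + (b + c) = (a + b) + c, which is neither a definition nor an identity of the form ‘AB is B’; hence the reduction does not rest on the principle of identity alone.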
Three theses frame Leibniz’s overall position:
1. Truth is fundamentally a matter of the containment of the concept of the predicate of a proposition in the concept of its subject.
2. The distinction between necessary truth and contingent truth is absolute, and in no way relative to a corresponding distinction between divine and human sources of knowledge.
3. A proposition is known a priori by a finite mind only if that proposition is a necessary truth (Parkinson and Morris, 1973).
Hence, although Leibniz commenced with an account of truth that one might expect to lead to the conclusion that all knowledge is ultimately a priori knowledge, he set out to avoid that conclusion.
Leibniz’s account of our knowledge of contingent truths is remarkably similar to what we would expect to find in an empiricist’s epistemology. Leibniz claimed that our knowledge of particular contingent truths has its basis in sense perception. He argued that our knowledge of universal contingent truths cannot be based entirely on simple enumerative inductions, but must be supplemented by what he called ‘the conjectural method a priori’, which he described as follows:
The conjectural method a priori proceeds by hypotheses, assuming certain causes, perhaps, without proof and showing that the things that happen would follow from those assumptions. A hypothesis of this kind is like the key to a cryptograph, and the simpler it is, and the greater the number of events that can be explained by it, the more probable it is (Loemker, 1969).
Leibniz’s conception of the conjectural method a priori is a precursor of the hypothetico-deductive method. He placed emphasis on the need for a formal theory of probability, in order to formulate an adequate theory of our knowledge of contingent truths.
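Leibniz’s remark that a hypothesis is the more probable the greater the number of events it explains anticipates Bayesian confirmation. A minimal sketch in Python of that idea, with the prior and the likelihoods invented purely for illustration (they are not from the text):

```python
def update(prior, likelihood_h, likelihood_alt):
    """One Bayes update: P(H | E) = P(E | H) P(H) / P(E),
    where P(E) = P(E|H)P(H) + P(E|not-H)P(not-H)."""
    evidence = likelihood_h * prior + likelihood_alt * (1 - prior)
    return likelihood_h * prior / evidence

p = 0.5  # initial credence in the hypothesis (assumed)
for _ in range(5):  # five events the hypothesis explains well
    p = update(p, likelihood_h=0.9, likelihood_alt=0.4)
print(round(p, 3))  # credence has risen well above the 0.5 prior
```

Each event the hypothesis explains better than its rival raises its probability, just as Leibniz’s ‘key to a cryptograph’ image suggests.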
Leibniz sided with his rationalist colleagues, e.g., Descartes, in maintaining, contrary to the empiricists, that, since thought is an essential property of the mind, there is no time at which a mind exists without a thought, a perception. But Leibniz insisted on a distinction between having a perception and being aware of it. He argued forcefully, on both empirical and conceptual grounds, that finite minds have numerous perceptions of which they are not aware at the time at which they have them (Remnant and Bennett, 1981).
Leibniz’s rationalism in epistemology is most evident in his account of our a priori knowledge, that is, according to (3), our knowledge of necessary truths. One of Leibniz’s persistent criticisms of Locke’s empiricism is the thesis that Locke’s theory of knowledge provides no explanation of how we know of certain propositions that they are not only true, but necessarily true. Leibniz argued that Locke offered no adequate account of how we know propositions to be true whose justification does not depend upon experience: hence, that Locke had no acceptable account of our a priori knowledge. Leibniz’s diagnosis of Locke’s failing was straightforward: Locke lacked an adequate account of our a priori knowledge because, on Locke’s theory, all the material for the justification of beliefs must come from experience, thus overlooking what Leibniz took to be the source of our a priori knowledge, namely, what is innate to the mind. Leibniz summarized his dispute with Locke thus:
Our differences are on matters of some importance. It is a matter of knowing if the soul in itself is entirely empty like a writing tablet on which nothing has as yet been written . . . And if everything inscribed there comes solely from the senses and experience, or if the soul contains originally the sources of various concepts and doctrines that external objects merely reveal on occasion . . . (Remnant and Bennett, 1981).
Leibniz argued for the second alternative, the theory of innate doctrines and concepts. The thesis that some concepts are innate to the mind is crucial to Leibniz’s philosophy. He held that the most basic metaphysical concepts, e.g., the concepts of substance and causation, are innate. Hence, he was unmoved by the inability of empiricism to reconstruct full-blown versions of those concepts from the materials of sense experience.
Innate ideas have been variously defined by philosophers, either as ideas consciously present to the mind prior to sense experience (the non-dispositional sense), or as ideas which we have an innate disposition to form, though we need not be actually aware of them at any particular time, e.g., as babies (the dispositional sense).
Understood in either way, they were invoked to account for our knowledge of truths not open to experiential verification, such as those of mathematics, or to justify versions of moral and religious claims which were held to be capable of being known by introspection of our innate ideas. Examples of such supposed truths might include ‘murder is wrong’ or ‘God exists’.
One difficulty with the doctrine is that it is sometimes formulated as one about concepts or ideas which are held to be innate, and at other times as one about a source of propositional knowledge. In so far as concepts are taken to be innate, the doctrine relates primarily to claims about meaning; our idea of God, for example, is taken as a source for the meaning of the word ‘God’. When innate ideas are understood propositionally, their supposed innateness is taken as evidence for their truth. This latter thesis clearly rests on the assumption that innate propositions have an unimpeachable source, usually taken to be God, but then any appeal to innate ideas to justify the existence of God is circular. Despite such difficulties, the doctrine of innate ideas had a long and influential history until the eighteenth century, and the concept is still employed in Noam Chomsky’s influential account of the mind’s linguistic capacities.
The attraction of the theory has been felt strongly by those philosophers who have been unable to give an alternative account of our capacity to recognize that some propositions are certainly true, where that recognition cannot be justified solely on the basis of an appeal to sense experience. Thus, Plato argued that, for example, recognition of mathematical truths could only be explained on the assumption of some form of recollection. ‘Recollection’, or anamnesis, has several roles in Plato’s epistemology. In the Meno, it is invoked to explain the behaviour of an uneducated boy who answers a geometrical problem he has never been taught. At the same time, it is used to solve a paradox about inquiry and learning. In the Phaedo, it is said to explain our possession of concepts, construed as knowledge of Forms, which we supposedly could not have gained from experience. Recollection also appears in the Phaedrus, but is notably absent from important presentations of Plato’s epistemological views in the Republic and other works. Since there was no plausible post-natal source, the recollection must refer back to a pre-natal acquisition of knowledge. Thus understood, the doctrine of innate ideas supported the view that there were important truths innate in human beings, and that it was the senses which hindered their proper apprehension.
The ascetic implications of the doctrine were important in Christian philosophy throughout the Middle Ages, and the doctrine featured powerfully in scholastic teaching until its displacement by Locke’s philosophy in the eighteenth century. It had in the meantime acquired modern expression in the philosophy of Descartes, who argued that we can come to know certain important truths before we have any empirical knowledge at all. Our idea of God, for example, and our coming to recognize that God must necessarily exist, are, Descartes held, logically independent of sense experience. In England the Cambridge Platonists, such as Henry More and Ralph Cudworth, lent the doctrine considerable support.
Locke’s rejection of innate ideas, and his alternative empiricist account, was powerful enough to displace the doctrine from philosophy almost totally. Leibniz, in his critique of Locke, attempted to defend it with a sophisticated dispositional version of the theory, but it attracted few followers.
The empiricist alternative to innate ideas as an explanation of the certainty of propositions lay in the direction of construing all necessary truths as analytic. Kant’s refinement of the classification of propositions, with the fourfold distinction analytic/synthetic and a priori/a posteriori, did nothing to encourage a return to the innate ideas doctrine, which slipped from view. The doctrine may fruitfully be understood as the product of a confusion between explaining the genesis of ideas or concepts and justifying the basis for regarding some propositions as necessarily true.
Chomsky’s revival of the term in connection with his account of human speech acquisition has once more made the issue topical. He claims that the principles of language and ‘natural logic’ are known unconsciously and are a precondition for language acquisition. But for his purposes innate ideas must be taken in a strongly dispositional sense - so strong that it is far from clear that Chomsky’s claims are in conflict with empiricist accounts, as some (including Chomsky) have supposed. Quine, for example, sees no clash with his own version of empiricist behaviourism, in which old talk of ideas is eschewed in favour of dispositions to observable behaviour.
The physical sciences investigate the nature and behaviour of matter and energy on a vast range of size and scale. In physics itself, scientists study the relationship between matter, energy, force, and time in an attempt to explain how these factors shape the physical behaviour of the universe. Physics can be divided into many branches. Scientists study the motions of objects, a huge branch of physics known as mechanics that involves two overlapping sets of scientific laws. The laws of classical mechanics govern the behaviour of objects in the macroscopic world, which includes everything from billiard balls to stars, while the laws of quantum mechanics govern the behaviour of the particles that make up individual atoms.
The new mathematics is new only in that the material is introduced at a much lower level than heretofore. Thus, geometry, which was and is commonly taught in the second year of high school, is now frequently introduced, in an elementary fashion, in the fourth grade - in fact, naming and recognition of the common geometric figures, the circle and the square, occurs in kindergarten. At an early stage, numbers are identified with points on a line, and the identification is used to introduce, much earlier than in the traditional curriculum, negative numbers and the arithmetic processes involving them.
The elements of set theory constitute the most basic and perhaps the most important topic of the new mathematics. Even a kindergarten child can understand, without formal definition, the meaning of a set of red blocks, the set of fingers on the left hand, and the set of the child’s ears and eyes. The technical word set is merely a synonym for many common words that designate an aggregate of elements. The child can understand that the set of fingers on the left hand and the set on the right hand match - that is, the elements, fingers, can be put into a one-to-one correspondence. The set of fingers on the left hand and the set of the child’s ears and eyes do not match. Some concepts that are developed by this method are counting, equality of number, more than, and less than. The ideas of union and intersection of sets and the complement of a set can be similarly developed without formal definition in the early grades. The principles and formalism of set theory are extended as the child advances; by graduation from high school, the student’s knowledge is quite comprehensive.
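These elementary notions can be made concrete. A small sketch in Python, with the particular sets invented for illustration, mirroring the examples in the text:

```python
# One-to-one correspondence: two finite sets 'match' exactly when
# they have the same number of elements.
left_hand = {"thumb", "index", "middle", "ring", "little"}
right_hand = {"R-thumb", "R-index", "R-middle", "R-ring", "R-little"}
ears_and_eyes = {"left ear", "right ear", "left eye", "right eye"}

print(len(left_hand) == len(right_hand))    # True: the sets match
print(len(left_hand) == len(ears_and_eyes)) # False: no one-to-one pairing

# Union, intersection and complement, the other basic notions mentioned:
evens = {0, 2, 4, 6, 8}
small = {0, 1, 2, 3}
universe = set(range(10))
print(evens | small)     # union
print(evens & small)     # intersection: {0, 2}
print(universe - evens)  # complement of evens within the universe
```

Counting, ‘more than’ and ‘less than’ all reduce, as in the classroom treatment, to comparisons of set sizes.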
The amount of new mathematics and the particular topics taught vary from school to school. In addition to set theory and intuitive geometry, the material is usually chosen from the following topics: development of the number systems, including methods of numeration, binary and other bases of notation, and modular arithmetic; measurement, with attention to accuracy, precision and error study; studies of algebraic systems, including linear algebra, modern algebra, vectors and matrices, with an axiomatically developed approach; logic, including truth tables, the nature of proof, and Venn or Euler diagrams; relations, functions and general axiomatics; probability and statistics; linear programming; computer programming and languages; and analytic geometry and calculus. Some schools present differential equations, topology, and real and complex analysis.
Cosmology is the study of the general nature of the universe in space and in time: what it is now, what it was in the past and what it is likely to be in the future. Since the only forces at work between the galaxies that make up the material universe are the forces of gravity, the cosmological problem is closely connected with the theory of gravitation, in particular with its modern version as comprised in Albert Einstein’s general theory of relativity. In the frame of this theory the properties of space, time and gravitation are merged into one harmonious and elegant picture.
The basic cosmological notions of general relativity grew out of the work of great mathematicians of the 19th century. In the middle of the last century two inquisitive mathematical minds, a Russian named Nikolai Lobachevski and a Hungarian named Janos Bolyai, discovered that the classical geometry of Euclid was not the only possible geometry: in fact, they succeeded in constructing a geometry that was fully as logical and self-consistent as the Euclidean. They began by overthrowing Euclid’s axiom about parallel lines, namely, that only one parallel to a given straight line can be drawn through a point not on that line. Lobachevski and Bolyai both conceived a system of geometry in which a great number of lines parallel to a given line could be drawn through a point outside the line.
To illustrate the differences between Euclidean geometry and their non-Euclidean system, it is simplest to consider just two dimensions - that is, the geometry of surfaces. In our schoolbooks this is known as plane geometry, because the Euclidean surface is a flat surface. Suppose, now, we examine the properties of a two-dimensional geometry constructed not on a plane surface but on a curved surface. For the system of Lobachevski and Bolyai we must take the curvature of the surface to be negative, which means that the curvature is not like that of the surface of a sphere but like that of a saddle. Now, if we are to draw parallel lines or any figure (e.g., a triangle) on this surface, we must decide first of all how we will define a straight line, equivalent to the straight line of plane geometry. The most reasonable definition of a straight line in Euclidean geometry is that it is the path of the shortest distance between two points. On a curved surface the line, so defined, becomes a curved line known as a geodesic.
Considering a surface curved like a saddle, we find that, given a straight line or geodesic, we can draw through a point outside that line a great many geodesics that will never intersect the given line, no matter how far they are extended. They are therefore parallel to it, by the definition of parallel. The possible parallels to the line fall within certain limits, bounded by the lines that do intersect it.
As a consequence of the overthrow of the Euclidean axiom on parallel lines, many of Euclid's theorems are demolished in the new geometry. For example, the Euclidean theorem that the sum of the three angles of a triangle is 180 degrees no longer holds on a curved surface. On the saddle-shaped surface the angles of a triangle formed by three geodesics always add up to less than 180 degrees, the actual sum depending on the size of the triangle. Further, a circle on the saddle surface does not have the same properties as a circle in plane geometry. On a flat surface the circumference of a circle increases in proportion to the increase in diameter, and the area of a circle increases in proportion to the square of the increase in diameter. On a saddle surface, however, both the circumference and the area of a circle increase at faster rates than these with increasing diameter.
After Lobachevski and Bolyai, the German mathematician Bernhard Riemann constructed another non-Euclidean geometry whose two-dimensional model is a surface of positive, rather than negative, curvature - that is, the surface of a sphere. In this case a geodesic line is simply a great circle around the sphere or a segment of such a circle, and since any two great circles must intersect at two points (the poles) there are no parallel lines at all in this geometry. Again, the sum of the three angles of a triangle is not 180 degrees: In this case it is always more than 180. The circumference of a circle now increases at a rate slower than in proportion to its increase in diameter, and its area increases more slowly than the square of the diameter.
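The angle-sum claim for positively curved surfaces can be checked numerically. The sketch below uses Girard's theorem of spherical geometry (angular excess in radians equals triangle area divided by the squared radius), which is standard mathematics rather than anything stated above:

```python
import math

# Girard's theorem: for a geodesic triangle on a sphere of radius R,
# (angle sum) - pi = area / R**2, measured in radians.
def spherical_triangle_angle_sum_deg(area, radius):
    excess_rad = area / radius**2
    return 180.0 + math.degrees(excess_rad)

# An octant of the unit sphere is a triangle with three right angles;
# its area is one eighth of the sphere's surface, i.e. pi/2.
octant_area = 4.0 * math.pi / 8.0
print(round(spherical_triangle_angle_sum_deg(octant_area, 1.0), 6))  # 270.0
```

Three right angles give 270 degrees, well over the Euclidean 180, and the excess shrinks toward zero as the triangle shrinks, matching the size dependence noted above.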
Now all this is not merely an exercise in abstract reasoning but bears directly on the geometry of the universe in which we live. Is the space of our universe flat, as Euclid assumed, or is it curved negatively (as per Lobachevski and Bolyai) or curved positively (as per Riemann)? If we were two-dimensional creatures living in a two-dimensional universe, we could tell whether we were living on a flat or a curved surface by studying the properties of triangles and circles drawn on that surface. Similarly, as three-dimensional beings living in three-dimensional space, we should be able, by studying the geometrical properties of that space, to decide what the curvature of our space is. Riemann in fact developed mathematical formulas describing the properties of various kinds of curved space in three and more dimensions. In the early years of this century Einstein conceived the idea of the universe as a curved system in four dimensions, embodying time as the fourth dimension, and he proceeded to apply Riemann's formulas to test his idea.
Einstein showed that time can be considered a fourth coordinate supplementing the three coordinates of space. He connected space and time, thus establishing a space-time continuum, by means of the speed of light as a link between the time and space dimensions. However, recognizing that space and time are physically different entities, he employed the imaginary unit i, the square root of -1, to express the unit of time mathematically and make the time coordinate formally equivalent to the three coordinates of space.
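A minimal sketch of the formal device just described: with the imaginary fourth coordinate w = ict, the Minkowski interval of special relativity looks like an ordinary Euclidean sum of squares. The numerical values below are illustrative only:

```python
C = 299_792_458.0  # speed of light, m/s

def interval_squared(x, y, z, t):
    """Invariant interval s^2 = x^2 + y^2 + z^2 - (c*t)^2."""
    return x**2 + y**2 + z**2 - (C * t)**2

def interval_squared_imaginary_time(x, y, z, t):
    """Same quantity written as a Euclidean-looking sum of four squares,
    using the imaginary time coordinate w = i*c*t."""
    w = 1j * C * t
    return (x**2 + y**2 + z**2 + w**2).real

# Both forms give zero for a light ray (x = c*t):
t = 1.0
print(interval_squared(C * t, 0.0, 0.0, t))
print(interval_squared_imaginary_time(C * t, 0.0, 0.0, t))
```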
In his special theory of relativity Einstein made the geometry of the space-time continuum strictly Euclidean, that is, flat. The great idea that he introduced later in his general theory was that gravitation, whose effects had been neglected in the special theory, must make it curved. He saw that the gravitational effect of the masses distributed in space and moving in time was equivalent to curvature of the four-dimensional space-time continuum. In place of the classical Newtonian statement that the sun produces a field of force that impels the earth to deviate from straight-line motion and to move in a circle around the sun, Einstein substituted a statement to the effect that the presence of the sun causes a curvature of the space-time continuum in its vicinity.
The motion of an object in the space-time continuum can be represented by a curve called the object's world line. Einstein declared, in effect: the world line of the earth is a geodesic in the curved four-dimensional space around the sun. In other words, the earth's world line . . . corresponds to the shortest four-dimensional distance between the position of the earth in January . . . and its position in October . . .
Einstein's idea of the gravitational curvature of space-time was, of course, triumphantly affirmed by the discovery of perturbations in the motion of Mercury at its closest approach to the sun and of the deflection of light rays by the sun's gravitational field. Einstein next attempted to apply the idea to the universe as a whole. Does it have a general curvature, similar to the local curvature of the sun's gravitational field? He now had to consider not a single centre of gravitational force but countless focal points in a universe full of matter concentrated in galaxies whose distribution fluctuates considerably from region to region in space. However, in the large-scale view the galaxies are spread uniformly throughout space as far out as our biggest telescopes can see, and we can justifiably 'smooth out' their matter to a general average (which comes to about one hydrogen atom per cubic metre). On this assumption the universe as a whole has a smooth general curvature.
Nevertheless, if the space of the universe is curved, what is the sign of this curvature? Is it positive, as in our two-dimensional analogy of the surface of a sphere, or is it negative, as in the case of a saddle surface? And since we cannot consider space alone, how is this space curvature related to time?
Analysing the pertinent mathematical equations, Einstein came to the conclusion that the curvature of space must be independent of time, i.e., that the universe as a whole must be unchanging (though it changes internally). To maintain this picture, however, Einstein was forced to introduce an additional hypothesis that amounted to the assumption that a new kind of force was acting among the galaxies. This hypothetical force had to be independent of mass (being the same for an apple, the moon and the sun) and to gain in strength with increasing distance between the interacting objects (as no other force does in physics).
Einstein's new force, called 'cosmic repulsion', allowed two mathematical models of a static universe. One solution, which was worked out by Einstein himself and became known as Einstein's spherical universe, gave the space of the cosmos a positive curvature, like a sphere; this universe was closed and thus had a finite volume. The space coordinates in Einstein's spherical universe were curved in the same way as the latitude or longitude coordinates on the surface of the earth. However, the time axis of the space-time continuum ran quite straight, as in the good old classical physics. This means that no cosmic event would ever recur. The two-dimensional analogy of Einstein's space-time continuum is the surface of a cylinder, with the time axis running parallel to the axis of the cylinder and the space axis perpendicular to it.
The other static solution based on the mysterious repulsion force was discovered by the Dutch mathematician Willem de Sitter. Its geometry was similar to that of a globe, with longitude serving as the space coordinate and latitude as time. Unhappily, astronomical observations contradicted both Einstein's and de Sitter's static models of the universe, and they were soon abandoned.
In the year 1922 a major turning point came in the cosmological problem. A Russian mathematician, Alexander A. Friedman, discovered an error in Einstein's proof for a static universe. In carrying out his proof Einstein had divided both sides of an equation by a quantity that, Friedman found, could become zero under certain circumstances. Since division by zero is not permitted in algebraic computations, the possibility of a nonstatic universe could not be excluded under the circumstance in question. Friedman showed that two nonstatic models were possible: one pictured the universe as expanding with time, the other as contracting.
Einstein quickly recognized the importance of this discovery. In the last edition of his book The Meaning of Relativity he wrote: 'The mathematician Friedman found a way out of this dilemma. He showed that it is possible, according to the field equations, to have a finite density in the whole (three-dimensional) space, without enlarging these field equations.' Einstein remarked that the cosmic repulsion idea was the biggest blunder he ever made in his entire life.
Almost at the very moment that Friedman was discovering the possibility of an expanding universe by mathematical reasoning, Edwin P. Hubble at the Mount Wilson Observatory on the other side of the world found the first evidence of an actual physical expansion through his telescope. He made a compilation of the distances of a number of far galaxies whose light was shifted toward the red end of the spectrum, and it was soon found that the extent of the shift was in direct proportion to a galaxy's distance from us, as estimated by its faintness. Hubble and others interpreted the red shift as the Doppler effect - the well-known phenomenon of lengthening of wavelengths from any radiating source that is moving rapidly away (a train whistle, a source of light or whatever). To date there has been no other reasonable explanation of the galaxies' red shift. If the explanation is correct, it means that the galaxies are all moving away from one another with increasing velocity as they move farther apart. Thus, Friedman and Hubble laid the foundation for the theory of the expanding universe. The theory was soon developed further by a Belgian theoretical astronomer, Georges Lemaitre. He proposed that our universe started from a highly compressed and extremely hot state that he called the 'primeval atom'. (Modern physicists would prefer the term 'primeval nucleus'.) As this matter expanded, it gradually thinned out, cooled down and reaggregated in stars and galaxies, giving rise to the highly complex structure of the universe as we now know it.
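The proportionality Hubble found can be written v = H0 × d. The sketch below uses an assumed round modern value of the Hubble constant (70 km/s per megaparsec, not the figure Hubble himself had) together with the non-relativistic Doppler approximation z ≈ v/c:

```python
C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # Hubble constant, km/s per Mpc (assumed value)

def recession_velocity(distance_mpc):
    """Hubble's law: recession velocity (km/s) grows linearly with distance."""
    return H0 * distance_mpc

def doppler_redshift(velocity_km_s):
    """Non-relativistic red shift z = v/c, valid only for v << c."""
    return velocity_km_s / C_KM_S

d = 100.0                      # distance in megaparsecs
v = recession_velocity(d)
print(v)                       # 7000.0 km/s
print(round(doppler_redshift(v), 4))
```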
Until a few years ago the theory of the expanding universe lay under the cloud of a very serious contradiction. The measurements of the recession speeds of the galaxies and their distances from us indicated that the expansion had started about 1.6 billion years ago. On the other hand, measurements of the age of ancient rocks in the earth by the clock of radioactivity (i.e., the decay of uranium to lead) showed that some of the rocks were at least three billion years old; more recent estimates based on other radioactive elements raised the age of the earth's crust to about five billion years. Happily, the contradiction has now been disposed of by Walter Baade's recent discovery that the distance yardstick (based on the periods of variable stars) was faulty and that the distances between galaxies are more than twice as great as they were thought to be. This change in distances raised the age of the universe to five billion years or more.
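The age figures in this paragraph follow from the Hubble time 1/H0: doubling the distance yardstick halves the Hubble constant and therefore doubles the estimated age. A sketch with approximate historical values (roughly 500 km/s/Mpc before Baade's correction, about half that after; both are assumptions for illustration):

```python
KM_PER_MPC = 3.0857e19        # kilometres in one megaparsec
SECONDS_PER_GYR = 3.1557e16   # seconds in one billion years

def hubble_time_gyr(h0_km_s_per_mpc):
    """Rough expansion age 1/H0, in billions of years."""
    seconds = KM_PER_MPC / h0_km_s_per_mpc   # (km/Mpc) / (km/s/Mpc) = s
    return seconds / SECONDS_PER_GYR

print(round(hubble_time_gyr(500.0), 1))  # ~2 Gyr, the old troubling figure
print(round(hubble_time_gyr(250.0), 1))  # ~4 Gyr after doubling the distances
```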
Friedman's solution of Einstein's cosmological equation permits two kinds of universe. We can call one the 'pulsating' universe. This model says that when the universe has reached a certain maximum of permissible expansion, it will begin to contract; that it will shrink until its matter has been compressed to a certain maximum density, possibly that of atomic nuclear material, which is a hundred million million times denser than water; and that it will then begin to expand again - and so on through the cycle ad infinitum. The other model is a 'hyperbolic' one: it suggests that from an infinitely thin state an eternity ago the universe contracted until it reached the maximum density, from which it rebounded to an unlimited expansion that will go on indefinitely in the future.
The question whether our universe is pulsating or hyperbolic should be decidable from the present rate of expansion. The situation is analogous to the case of a rocket shot from the surface of the earth. If the velocity of the rocket is less than seven miles per second - the escape velocity - the rocket will climb only to a certain height and then fall back to the earth. (If it were completely elastic, it would bounce up again . . . and so on.) However, a rocket shot with a velocity of more than seven miles per second will escape from the earth's gravitational field and disappear in space. The case of the receding system of galaxies is very similar to that of an escape rocket, except that instead of just two interacting bodies (the rocket and the earth) we have an unlimited number of them escaping from one another. We find that the galaxies are fleeing from one another at seven times the velocity necessary for mutual escape.
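The seven-miles-per-second figure quoted here can be recovered from the standard formula v = √(2GM/r), using textbook values for the earth's mass and radius:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the earth, kg
R_EARTH = 6.371e6    # mean radius of the earth, m
METRES_PER_MILE = 1609.344

# Escape velocity from the earth's surface: v = sqrt(2*G*M/r)
v = math.sqrt(2 * G * M_EARTH / R_EARTH)
print(round(v / 1000, 1))             # ~11.2 km/s
print(round(v / METRES_PER_MILE, 1))  # ~7.0 miles/s
```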
Thus, we may conclude that our universe corresponds to the hyperbolic model, so that its present expansion will never stop. We must make one reservation. The estimate of the necessary escape velocity is based on the assumption that practically all the mass of the universe is concentrated in galaxies. If intergalactic space contained matter whose total mass was more than seven times that in the galaxies, we would have to reverse our conclusion and decide that the universe is pulsating. There has been no indication so far, however, that any matter exists in intergalactic space. It could have escaped detection only if it were in the form of pure hydrogen gas, without other gases or dust.
Is the universe finite or infinite? This resolves itself into the question: is the curvature of space positive or negative - closed like that of a sphere, or open like that of a saddle? We can look for the answer by studying the geometrical properties of its three-dimensional space, just as we examined the properties of figures on two-dimensional surfaces. The most convenient property to investigate astronomically is the relation between the volume of a sphere and its radius.
We saw that, in the two-dimensional case, the area of a circle increases with increasing radius at a faster rate on a negatively curved surface than on a Euclidean or flat surface, and that on a positively curved surface the relative rate of increase is slower. Similarly, the increase of volume is faster in negatively curved space, slower in positively curved space. In Euclidean space the volume of a sphere would increase in proportion to the cube, or third power, of the increase in radius. In negatively curved space the volume would increase faster than this; in positively curved space, slower. Thus, if we look into space and find the volume of successively larger spheres, as measured by a count of the galaxies within them, increasing faster than the cube of the distance to the limit of the sphere (the radius), we can conclude that the space of our universe has negative curvature, and therefore is open and infinite. Similarly, if the number of galaxies increases at a rate slower than the cube of the distance, we live in a universe of positive curvature - closed and finite.
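The counting test just described can be put in a few lines. The galaxy counts below are invented for illustration; only the comparison against the Euclidean r³ law comes from the argument above:

```python
def curvature_from_counts(r1, n1, r2, n2):
    """Compare galaxy-count growth between radii r1 and r2
    with the Euclidean volume law (ratio of r**3)."""
    euclidean_ratio = (r2 / r1) ** 3
    observed_ratio = n2 / n1
    if observed_ratio > euclidean_ratio:
        return "negative (open, infinite)"
    if observed_ratio < euclidean_ratio:
        return "positive (closed, finite)"
    return "flat (Euclidean)"

# Doubling the radius of a Euclidean sphere multiplies its volume by 8:
print(curvature_from_counts(1.0, 1000, 2.0, 8000))  # flat (Euclidean)
print(curvature_from_counts(1.0, 1000, 2.0, 7000))  # positive (closed, finite)
print(curvature_from_counts(1.0, 1000, 2.0, 9000))  # negative (open, infinite)
```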
Following this idea, Hubble undertook to study the increase in the number of galaxies with distance. He estimated the distances of remote galaxies by their relative faintness; galaxies vary considerably in intrinsic brightness, but over large numbers of galaxies these variations are expected to average out. Hubble's calculations produced the conclusion that the universe is a closed system - a small universe only a few billion light-years in radius.
We know now that the scale he was using was wrong: with the new yardstick the universe would be more than twice as large as he calculated. Nonetheless, there is a more fundamental doubt about his result. The whole method is based on the assumption that the intrinsic brightness of a galaxy remains constant. What if it changes with time? We are seeing the light of the distant galaxies as it was emitted at widely different times in the past - 500 million, a billion, two billion years ago. If the stars in the galaxies are burning out, the galaxies must dim as they grow older. A galaxy two billion light-years away cannot be put on the same distance scale as a galaxy 500 million light-years away unless we take into account the fact that we are seeing the nearer galaxy at an older, and less bright, age. The remote galaxy is farther away than a mere comparison of the luminosity of the two would suggest.
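The bias described here follows from the inverse-square law: a distance inferred as d = √(L / 4πF) assumes a fixed intrinsic luminosity L. A sketch with invented numbers, assuming the remote galaxy was brighter when its light left than it would be today:

```python
import math

def inferred_distance(apparent_flux, assumed_luminosity):
    """Distance from the inverse-square law: F = L / (4*pi*d**2)."""
    return math.sqrt(assumed_luminosity / (4 * math.pi * apparent_flux))

L_today = 1.0     # luminosity assumed for all galaxies (arbitrary units)
L_past = 1.5      # the remote galaxy was 50% brighter when its light left
true_d = 10.0     # its actual distance (arbitrary units)

flux = L_past / (4 * math.pi * true_d**2)   # the flux we actually receive
print(round(inferred_distance(flux, L_today), 2))  # ~8.16, nearer than 10.0
```

Because the naive estimate places the galaxy too close, the true distance scale stretches out, just as the paragraph above concludes.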