From the particular to the general
"What can justify our reliance on inductive inferences? The answer that they have worked well in the past will not do, because that answer itself relies on induction, and hence begs the question: we would justify our reliance on induction by relying on induction. The first to give this problem a sharp formulation was David Hume, in his A Treatise of Human Nature (1739).
The discussions of this problem have been wide-ranging. One interesting contribution to the debate is that of Karl Popper, who has argued that the concern over the problem is misplaced, since the regular method of science is not, as Bacon thought, inductive, but rather hypothetico-deductive. In his view, we do not start with particular observations and then generalize; rather, we start with generalizations and then subject them to tests."
The Penguin Dictionary of Philosophy
"The problem of induction has traditionally been the problem of justifying not so much particular rules as induction in general, and especially simple induction (sometimes by the back-handed method of reducing it to disguised deduction; [...]). However, a major problem for inductivists has also been provided by Goodman’s ‘grue’ paradox (see CONFIRMATION); this raises the question of what counts as induction, i.e. what conclusion is the relevant inductive conclusion with respect to some given evidence."
Lacey (1996)
"What is now called the problem of induction was set by HUME, who himself did not actually use the word in this context. Hume represented the nerve of all argument from experience as an attempted SYLLOGISM, the problem being to show how we can be entitled to move from a first premise that all observed so-and-sos have been such and such to the conclusion that all so-and-sos without restriction have been, are, and will be such and such.
A second premise that would complete a valid syllogism is that all so-and-sos have in fact been observed. But this suggestion is disqualified, since where it applies we have an analysis of, not an argument from, experience. This latter essentially involves a going beyond what is given, a use of cases examined to guide expectations about those that have not been examined. The only alternative second premise considered by Hume would make reference to the UNIFORMITY OF NATURE. This he ruled out on the grounds that it could only be known to be true by a question-begging appeal to arguments of the very kind here in question (see BEGGING THE QUESTION). It could be objected, even more powerfully, that when formulated as a second premise in the desired syllogism such a reference would be directly known to be false simply by appeal to but without argument from experience. For it would have to claim that all the so-and-sos experienced by anyone you like, up till any point in time you care to stipulate, constitute in all respects a perfectly representative sample of so-and-sos. And everyone knows from his own experience of novelties that this is false.
The moral that Hume drew is that argument from experience must be without rational foundation. He seems nevertheless to have felt few scruples over the apparent inconsistency of going on to insist, first, that such argument is grounded in the deepest instincts of our nature, and, second, that the rational man everywhere proportions his belief to the evidence – evidence which in practice crucially includes the outcome of procedures alleged earlier to be without rational foundation."
Flew and Priest (2002)
"A linguistic paradox of CONFIRMATION or prediction. We predict by projecting regularities beyond our experience (see INDUCTION). Goodman showed how to define vocabulary so that hypotheses that look to us as though they predict change have the linguistic form of projecting a regularity. Goodman introduced a new predicate ‘grue’, which applies to an object if it has been examined before a certain time t and is green, or has not been examined before t and is blue. Suppose all emeralds examined up to time t have been green. Then these two inductive hypotheses (1) All emeralds are green, and (2) All emeralds are grue, are both equally well supported by the evidence. But we would not choose (2) and predict that emeralds examined after t will be grue (and hence blue). The paradox is that there is no evident asymmetry between the vocabularies, so that prediction of change looks as reasonable as prediction of similarity. Goodman’s view was that only historical accident makes one system natural to us, since there are no language-independent similarities in things."
Flew and Priest (2002)
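Goodman’s definition can be rendered as a small Python sketch showing that, for emeralds examined before t, the hypotheses ‘all emeralds are green’ and ‘all emeralds are grue’ fit the evidence equally well. The cutoff date, predicate names, and sample data below are illustrative assumptions, not part of the source:

```python
from datetime import date

# Hypothetical cutoff time t: any future date serves for the illustration.
T = date(2100, 1, 1)

def is_green(colour):
    return colour == "green"

def is_grue(colour, examined_on):
    # 'grue': examined before T and green, or not examined before T and blue.
    if examined_on < T:
        return colour == "green"
    return colour == "blue"

# All emeralds examined so far (i.e. before T) have been green...
evidence = [("green", date(2020, 5, 1)), ("green", date(2021, 7, 9))]

# ...so every observation satisfies BOTH hypotheses equally.
assert all(is_green(c) for c, _ in evidence)
assert all(is_grue(c, d) for c, d in evidence)

# Yet the hypotheses diverge for an emerald first examined after T:
print(is_green("green"), is_grue("green", date(2101, 1, 1)))  # True False
```

The asymmetry only appears at predictions about unexamined cases: the ‘grue’ hypothesis commits us to blue emeralds after T, while nothing in the evidence gathered before T discriminates between the two.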
"It is natural to suppose that if we have observed many emeralds and found them all to be green, we have good reason to adopt the hypothesis that all emeralds are green and good reason to predict that the next emerald we observe will also be green. But consider the alternative hypothesis that all emeralds are grue, where ‘grue’ is a technical term which means ‘green if observed up to a certain future time T, otherwise blue’. Does not the observation of many green emeralds give us equally good reason to adopt this hypothesis and to predict that the emeralds we observe after T will be blue? And if for some reason (or for no reason at all) we prefer to predict that the emeralds observed after T will be black or purple, then we can coin other technical terms (‘grack’ and ‘grurple’) in order to fashion hypotheses which will do just that.
This ‘new riddle of induction’ is due to Nelson Goodman (Fact, Fiction and Forecast, 1954). It extends to qualitative hypotheses like ‘Emeralds are green’ a well-known fact regarding quantitative hypotheses. Given any finite number of points in a coordinate system representing pairs of values of two measurable quantities, infinitely many curves can be drawn which pass through all the points and which yield differing predictions about unmeasured values of those quantities. Goodman’s paradox extends this ‘curve-fitting problem’ to qualitative hypotheses also.
These reflections lead to a general sceptical claim: no prediction about the future is more reasonable than any other. For given any body of evidence E and any ‘natural’ hypothesis H which yields the ‘natural’ prediction P, one can concoct an unnatural or ‘gruesome’ hypothesis H* which is equally consistent with the evidence E and which yields the unnatural prediction P*.
The challenge is somehow to discriminate natural hypotheses and their predictions from ‘gruesome’ ones. Goodman himself simply said that words such as ‘green’ are ‘entrenched’ in language and ‘projected’ into the future, while words such as ‘grue’ are not. But why should the fact that a word is ‘entrenched’ and ‘projected’ be decisive? Others point out that gruesome hypotheses (and ‘funny’ curves drawn through data points) are less simple than natural ones. But why should lack of simplicity (supposing that it can be demonstrated) tell against gruesome hypotheses and their predictions? Why assume that nature is simple, so that the simpler of two hypotheses is more likely to be true?
Others argue that although gruesome hypotheses are designed to be consistent with the available evidence, mere consistency is not sufficient for evidence genuinely to support a hypothesis. They hope to work out a theory of evidential support which will show that gruesome hypotheses are not so well-supported as natural ones.
It has been noted, however, that if ‘grue’ (green if observed before T, otherwise blue) and ‘bleen’ (blue if observed before T, otherwise green) had happened to be our entrenched predicates, then they would have been taken as simple, and the ordinary predicate green would then be considered an artificial complex construction, since green would be identical with ‘grue if observed before T, otherwise bleen’."
Mautner (2000)
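The curve-fitting analogy in the passage above can be made concrete: two different polynomials can agree on every measured data point yet disagree about an unmeasured value. The particular curves and points below are illustrative choices, not drawn from the source:

```python
# Two candidate "laws" fitted to the same three data points.
def natural(x):
    # The straight line y = x.
    return x

def gruesome(x):
    # Agrees with y = x at x = 0, 1, 2, but diverges everywhere else,
    # because the added term vanishes exactly at those three points.
    return x + x * (x - 1) * (x - 2)

data = [(0, 0), (1, 1), (2, 2)]

# Both curves pass through every observed point...
assert all(natural(x) == y for x, y in data)
assert all(gruesome(x) == y for x, y in data)

# ...but predict different values for the unmeasured x = 3.
print(natural(3), gruesome(3))  # 3 9
```

Since infinitely many such correction terms vanish on any finite data set, the evidence alone never singles out one curve, which is the quantitative analogue of the grue problem.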
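Mautner’s closing point, that ‘green’ is inter-definable with ‘grue’ and ‘bleen’, can be checked mechanically: a predicate for green built only out of grue and bleen agrees with plain ‘green’ in every case. The cutoff date and function names are illustrative assumptions:

```python
from datetime import date

T = date(2100, 1, 1)  # hypothetical cutoff, as in Goodman's definition

def grue(colour, examined_on):
    # Green if examined before T, otherwise blue.
    return colour == "green" if examined_on < T else colour == "blue"

def bleen(colour, examined_on):
    # Blue if examined before T, otherwise green.
    return colour == "blue" if examined_on < T else colour == "green"

def green_via_grue_bleen(colour, examined_on):
    # 'Green' reconstructed as: grue if examined before T, otherwise bleen.
    if examined_on < T:
        return grue(colour, examined_on)
    return bleen(colour, examined_on)

# The reconstruction agrees with plain 'green' on every combination tested:
for colour in ("green", "blue", "red"):
    for day in (date(2020, 1, 1), date(2101, 1, 1)):
        assert green_via_grue_bleen(colour, day) == (colour == "green")
```

The symmetry is exact: whichever pair of predicates is taken as primitive, the other is definable from it with the same logical complexity, which is why appeals to simplicity alone do not dissolve the paradox.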