The following is an essay I wrote in 2021 during my time as an undergraduate student in Computer Science and Philosophy at Oxford. Of my philosophical writings from that time, it's the one of which I'm most proud and which I believe remains highly relevant to my work these days. The essay deals with the various problems of *induction* that have been posed by philosophers from David Hume to Nelson Goodman down through the centuries. Since induction is broadly held to form the basis of empirical science and human learning, it stands to reason that coming to grips with these problems may inform our approach to problems in machine learning, artificial intelligence, etc. In that spirit, I offer these reflections on the nature of and justification for scientific induction.

---

# On the Origin of Theories

by Corinthia Beatrix Aberlé

November 7, 2021

> "Planet Earth is a machine... [and] all the organic life that has ever existed amounts to a greasy film that has survived on the exterior of that machine thanks to furious improvisation."
>
> – Sam Hughes, *Ra*

---

The laws of science are commonly upheld as paradigm cases of our capacity for the production of knowledge – if we are justified in believing anything, we are justified in believing what science tells us about nature. But to what faculty of the intellect do we owe such laws and their justification? The standard empiricist answer is that our scientific hypotheses are arrived at by way of *induction* or *inductive inference* from observation of the natural world. *Induction* is a somewhat slippery term that is perhaps better characterised by what it *isn't* than by what it *is*. Namely, *induction* is not *deduction*, i.e. the process of inferring in such a way as to make one's conclusion a logical consequence of one's premises. Inductive inference is commonly introduced by way of example, such as the classic: *all observed swans have been white, therefore all swans are white*.

The above example serves to illustrate several distinguishing features of induction – the given argument is *not* deductively valid, because it is possible for the premises to be true (there are people, I imagine, who are only aware of the existence of white swans) and yet for the conclusion to be false. Moreover, the conclusion *is* false (there are black swans), and this highlights that although we are inclined to think of inductive inference as *generally* reliable, it is not *totally* reliable. Upon reflection, we might be inclined to ask: if induction is demonstrably not reliable in all instances, what licenses our belief in its *general* reliability? So arises the scandal of empiricism: the problem of induction. There is seemingly no legitimate way of convincing ourselves of the reliability of induction without first taking induction for granted.

This problem was first noted by David Hume (cf. Hume, 1740). In modern terms, Hume argues that it is impossible to demonstrate deductively that inductively-inferred hypotheses are generally true, and it is circular to try and do so inductively. Hume's argument has for the most part withstood all attempts at criticism, and as no third mode of generating knowledge beyond induction and deduction has been forthcoming, the problem of justifying induction has remained a conundrum for the philosophy of science. One may wonder, however, to what degree the apparent intractability of the problem of induction is due to the vagueness surrounding induction itself.
The conception of induction involved in Hume's arguments is something like a process of inferring, on the basis of a perceived regularity in some observed phenomena, that this regularity holds for *all* such phenomena, unobserved as well as observed. But is this process of *generalization* one and the same as that by which scientific hypotheses are generated and evaluated (which, hereafter, I call *scientific induction*)? Perhaps, by getting clearer on how scientific theorizing *in fact* proceeds, we may shed new light on the problem of induction, and whether it is indeed a problem for science. This has been the approach taken by many authors from the time of Hume onwards, but such approaches have each been beset by problems of their own.

By a careful consideration of one such flawed but promising attempt at resolving the problem of induction – Hans Reichenbach's *pragmatic* justification of induction – I shall proceed to refine the argument and its attendant notion of induction to something that at once accords better with scientific practice and stands a better chance of success. Along the way, we shall meet Goodman's so-called "new riddle of induction," which stands as a hurdle in the way of any attempt at explicating the process of scientific induction. The concept of scientific induction ultimately arrived at shall be *evolutionary* in conception. That is, scientific induction, as I understand it, is nothing more or less than the mechanism of intellectual adaptation of a community of rational agents to their environment, the *furious improvisation* by which science squares itself with the world as we find it.

## I. The pragmatic argument

In *Experience and Prediction* (1938), Reichenbach begins his consideration of induction with a critique of Hume's arguments concerning induction, wherein he accepts as decisive Hume's objections to the possibility of demonstrating the general truth of inductively-made hypotheses, but finds lacking Hume's own attempt at resolution of the problem he exposed. According to Hume, induction – though not rationally justifiable – is nonetheless part of human nature, arising as the intellectual reflection of our tendency for the formation of *habits*. Reichenbach (rightly, to my mind) deplores Hume's solution as intolerably defeatist. Appeals to *human nature* and *habit* be damned – if induction turned out to be a *bad* habit, and our nature flawed in this respect, then we would have every reason to look for alternatives to induction, and *change* our nature accordingly as much as possible. Of course, Reichenbach does not think induction to be a bad habit, but an explanation of why this should be the case is nowhere to be found in the Humean picture.

Reichenbach then sets out to give a precise account of what makes induction preferable to other methods of reasoning about the unobserved. He notes that although Hume's argument forecloses the possibility of justifying induction on the grounds of its likelihood of leading us to truth, this does not rule out the possibility of justifying induction by appeal to some other, more *pragmatic*, criterion of assessment. The idea of Reichenbach's approach, broadly speaking, is to show that the method of induction employed by science satisfies some property that makes it at least as favorable as any comparable method.
This leads Reichenbach to an ingenious argument, the specifics of which are couched in and dependent upon the highly technical apparatus of probabilistic reasoning developed in prior sections of *Experience and Prediction*, but the practical upshot of which is simple to state: if any method of predicting the unobserved on the basis of the observed works, then induction works. It is regrettable that Reichenbach staked much of his argument on the specifics of his particular, *frequentist* interpretation of probability and its relation to science, which weakens the argument's generality. Central to Reichenbach's argument is his criterion of success for methods of prediction: such a method is successful at predicting some event if the frequency of the event converges to a limit and the probabilities assigned by the method to the event converge to the same limit. Induction, for Reichenbach, is then just the method of assuming that the observed frequency of an event is its actual frequency. A Bayesian, to give but one example, would define successful prediction and perhaps also induction itself differently, and so would not accept this part of Reichenbach's argument.

Nonetheless, as Skyrms (2000) notes, the core of Reichenbach's argument – specifically the argument given by Reichenbach for the above-stated conclusion – applies somewhat more broadly. In the simplified and generalized form given by Skyrms, the argument runs as follows: if there is a method that yields reliable predictions regarding some phenomenon, then induction, applied to the *success of that method*, would eventually lead us to accept the predictions made by the method. Now the validity of this argument turns a great deal upon what is meant by *reliability* of prediction, and how we are to understand *induction* as proceeding. We shall see, in due course, whether it is possible to cash out *reliability* and *induction* in terms weak enough to be widely acceptable, yet strong enough to make Reichenbach's argument – or something like it – valid.

First, however, it will be fruitful to consider the main objection posed by Skyrms to Reichenbach's argument. Skyrms makes a productive distinction between the *levels* (or *orders*, as I shall call them) at which induction – and prediction more generally – occurs. At the first order, we make predictions regarding some phenomenon or class of phenomena – natural phenomena if our science is a natural science, social phenomena if our science is a social science, etc. At the second order, we make predictions regarding first-order methods of prediction. So the claim that one theory of a natural phenomenon will yield better predictions than another is a kind of *second-order* prediction. There are then third-order predictions concerning methods for making second-order predictions, and so on.

Skyrms then characterizes Reichenbach's argument as an argument by *mathematical induction* (N.B. despite the nominative similarity to induction, mathematical induction remains a form of *deduction*, rather than induction in the empirical sense), attempting to show how the validity of induction at lower orders justifies induction at higher orders as well. Skyrms concludes, on this basis, that Reichenbach has at best only demonstrated the inductive step of such an argument, wherein the lower orders justify those above them, but has *not* provided a solid base case justifying first-order induction.
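Both pieces of this picture can be put schematically; the notation that follows is mine, not Reichenbach's or Skyrms'. Writing $f_n(E)$ for the relative frequency of an event $E$ over the first $n$ observations, Reichenbach's inductive rule posits, at each stage, that the limiting frequency (if it exists) equals the frequency observed so far:

$$
\text{posit after } n \text{ trials:} \qquad \lim_{m \to \infty} f_m(E) \;=\; f_n(E).
$$

A method issuing posits $p_n$ is *successful* on $E$ just in case $\lim_n f_n(E)$ exists and $\lim_n p_n$ coincides with it – a condition the above rule satisfies trivially whenever the limit exists at all. Skyrms' reconstruction then reads the pragmatic argument as a proof by mathematical induction on orders, with $\mathrm{Rel}(n)$ abbreviating "induction is a reliable method at order $n$":

$$
\frac{\mathrm{Rel}(1) \qquad \forall n\,[\mathrm{Rel}(n) \Rightarrow \mathrm{Rel}(n+1)]}{\forall n\,\mathrm{Rel}(n)}
$$

On this reading, Reichenbach supplies at best the inductive step, never the base case $\mathrm{Rel}(1)$.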
Without such a base case, Skyrms concludes that Reichenbach's argument remains as circular as any of the other myriad ill-fated attempts to justify induction. However, I think the confusion here lies not in Reichenbach's argument, but in Skyrms' interpretation of it. Reichenbach's argument, to my mind, is decidedly *not* a (mathematically) *inductive* argument, but rather, in the jargon of category theory and type theory, a *[coinductive](https://en.wikipedia.org/wiki/Coinduction)* one.

For those not in the know, coinduction is a type of mathematical construction which is formally dual (in a precise, category-theoretic sense) to mathematical induction. Technically speaking, an inductive type or set is defined as the least fixed point of some transformation, while the corresponding coinductive object is defined as the *greatest* fixed point of such a transformation. In the case of sets, this means that an inductively-defined set is *contained within* any other set that is a fixed point of the transformation with respect to which it was defined, while a *coinductively-defined* set *contains* all such sets.

At this level of generality, the difference between mathematical induction and coinduction may seem rather abstract, so by way of a concrete example: for any set $A$, the set of finite sequences of elements of $A$ can be (and typically is) defined by induction, as the least fixed point of the endofunctor $F(X) = 1 + A \times X$ on the category of sets. By contrast, the set of *potentially infinite* sequences of elements of $A$ may be coinductively defined as the greatest fixed point of this endofunctor. This example is typical of the duality between inductively- and coinductively-defined sets – sets defined by induction tend to consist of finite things, while sets defined by coinduction tend to consist of *potentially infinite* things. Likewise, the way in which one typically works with such structures is dual. In the case of lists, one can build a list through finitely many actions, but then recursively decompose any such list. Conversely, a potentially infinite sequence can be built *corecursively*, by an unbounded process of generation, but then can typically only be *observed* finitely (e.g. by repeatedly getting the next element of the sequence).

Coming back to Reichenbach's pragmatic argument, then, we see that it has much more in common with the coinductive strategy than the (mathematically) inductive. Contra Skyrms, Reichenbach is *not* saying that the lower orders of induction justify the higher orders – indeed, this gets the direction of justification in Reichenbach's argument backwards. Reichenbach's claim is rather that induction forms a potentially infinite structure in which the higher orders recursively act as corrective measures on the lower orders. In this sense, Reichenbach is *coinductively* building the structure of induction by a corecursive argument, and attempting to use this to demonstrate its reliability. More formally, we might say that Reichenbach implicitly defines the set of reliable methods for reasoning about empirical data as the greatest fixed point of some transformation, and then attempts to show that induction belongs to this set by showing that it is (contained in) a fixed point of this transformation. Now, whether this latter interpretation of Reichenbach's argument turns out to be valid depends upon whether and how we interpret the set of reliable prediction methods as coinductively defined.
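For readers who find code more perspicuous than category theory, the duality just described can be sketched in a few lines of Haskell. The sketch is merely illustrative – Haskell's pervasive laziness collapses the very distinction between least and greatest fixed points that a total language (or the category of sets) keeps sharp – but it conveys the style of definition involved:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- The signature endofunctor F(X) = 1 + A × X from the text.
data F a x = Stop | Yield a x
  deriving Functor

-- Least fixed point of F: finite lists, built in finitely many steps
-- and consumed by structural recursion (a fold over an algebra).
data List a = Nil | Cons a (List a)

foldList :: (F a r -> r) -> List a -> r
foldList alg Nil         = alg Stop
foldList alg (Cons x xs) = alg (Yield x (foldList alg xs))

-- Greatest fixed point of F: potentially infinite sequences, built
-- corecursively (an unfold from a coalgebra) and only ever observed
-- finitely, one step at a time.
newtype Colist a = Colist { observe :: F a (Colist a) }

unfoldColist :: (s -> F a s) -> s -> Colist a
unfoldColist coalg = Colist . fmap (unfoldColist coalg) . coalg

-- Example: the infinite sequence 0, 1, 2, ... as a coinductive object.
naturals :: Colist Integer
naturals = unfoldColist (\n -> Yield n (n + 1)) 0
```

Whether the set of reliable prediction methods really admits a greatest-fixed-point characterization of this kind is precisely the question just raised.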
In order to investigate this further, it shall first be prudent for us to ask whether induction of the sort described by Reichenbach, which I shall call *simple induction*, really corresponds to the methods of prediction employed in science, what I have called *scientific induction*. This naturally prompts consideration of a problem for any attempt at elucidating the specifics of such induction: Nelson Goodman's 'new riddle' of induction.

## II. The new riddle

Unlike Reichenbach, Goodman more-or-less accepts Hume's resolution of the problem of induction, what Goodman calls the 'old riddle', and the conclusion that there is no wholly rational justification of induction (Goodman, 1955, ch. 3-4). What remains then, on Goodman's account, isn't to try and justify our inductive practices, but rather to spell them out precisely. The problem, as Goodman puts it, is to state exactly what relation must inhere between some evidence and a hypothesis in order for the evidence to count toward the hypothesis, the so-called 'confirmation' relation.

Here, however, Goodman poses a new challenge, what he terms the 'new riddle' of induction. Goodman considers inductive arguments that take the form 'all observed $P$s have had property $Q$, therefore all $P$s have property $Q$'. Note that this can be seen as the special case of Reichenbach's definition of induction where the observed frequency of an event is $1$. In fact, we can recover Reichenbach's definition from Goodman's form of induction by simply replacing '$Q$' in the above with '$Q$ with frequency $r$', so what is to follow applies equally to both notions of induction, which may just as well be called *simple induction*. Goodman then asks whether this form of argument can be valid for any choice of $P$ and $Q$ whatsoever, i.e. whether all such $P$ and $Q$ are *projectible* predicates.

According to Goodman, the answer must be negative, due to the following example. We would readily countenance 'all observed emeralds have been green, therefore all emeralds are green' as a valid inductive inference, but now consider the following definition: an object is *grue* just in case it is observed to be green before some arbitrary time $t$ in the future, and blue otherwise. Now, if all emeralds have been observed to be green, and it is before time $t$, then they have also all been observed to be *grue*, so 'all emeralds are grue' should be an equally valid conclusion from this data. But this seems absurd. Goodman agrees that it is absurd, but tasks us with explaining *why* it is absurd.

Straightforward attempts at distinguishing *green* from *grue* tend to fail, e.g. one cannot claim that *green* is somehow more basic than *grue* due to the occurrence of green in the definition of *grue*, since one could just as easily take grue (and the complementary predicate *bleen*, whose definition is left as an exercise for the reader) as primitive and define green in terms of it. Like Hume, Goodman poses his own quasi-solution to his problem. Goodman says that the predicates we tend to treat as projectible are those that we have most consistently used to make successful inductive inferences in the past, or in Goodman's terminology, that have become *entrenched*.
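Since this interdefinability is what dooms any syntactic attempt to privilege *green*, it is worth making vivid. Here is a minimal sketch in Haskell; modelling colour predicates as functions of observation time, and the stand-in definitions of *green* and *blue*, are my own scaffolding, not Goodman's:

```haskell
-- A colour predicate, modelled as a function of the time at which
-- the object is observed.
type Time = Double
type Pred = Time -> Bool

t0 :: Time
t0 = 2100  -- Goodman's arbitrary future time t; any value will do.

-- Splice two predicates at the cutoff: behave like p before t0, like q after.
splice :: Pred -> Pred -> Pred
splice p q time = if time < t0 then p time else q time

-- Suppose green and blue are given as primitives (stand-in definitions):
green, blue :: Pred
green = const True
blue  = const False

-- Then grue and bleen are spliced from them...
grue, bleen :: Pred
grue  = splice green blue
bleen = splice blue green

-- ...but the very same combinator runs the definitions backwards: taking
-- grue and bleen as primitive instead, we recover green and blue.
green', blue' :: Pred
green' = splice grue bleen  -- agrees with green at every time
blue'  = splice bleen grue  -- agrees with blue at every time
```

Goodman's appeal to entrenchment is meant to break precisely this symmetry – by the history of our inductive practice, rather than by syntax.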
In "On Projecting Grue" (1976), John Moreland raises more-or-less the same objection against this solution on Goodman's part that Reichenbach raised against Hume: the putative solution punts on any normative aspect of the practice it describes, and hence cannot shed light on why this practice should be preferable to others, or adjudicate solutions to novel instances of the problem this practice addresses. Unlike Reichenbach, who essentially accepted Hume's formulation of the problem of induction, Moreland moreover suggests that Goodman's framing of the new riddle is incomplete, and poses some clever examples to show that there *are* in fact cases where we might consider it valid to project *gruesome* predicates of the sort defined by Goodman. One of Moreland's examples runs as follows: let us suppose – contrary to ordinary physics – that chromium (which is responsible for the green colour of emeralds, which are actually beryl crystals) decays to iron (the presence of which in beryl causes blue colouration) with some known half-life. Let us moreover suppose that we have to hand decent estimates of the amount of chromium present in Earth's crust at its formation, and finally suppose that it is consistent with our observations that all the emeralds we observe before time $t$ contain *all* the unaccounted-for chromium that has not yet decayed to iron. Then it is not a stretch to say that 'all emeralds are grue' is a perfectly valid, if grammatically unwieldy inference to make. Moreland further notes that Goodman's framing of the new riddle and his solution both fail to take account of conceptual changes in science, whereby science may radically alter the definitions of its most central predicates. Indeed, some of the most celebrated advancements in science have come from replacing commonly-held notions with alternatives that are seemingly much more outlandish or implausible, and could scarce claim to be more *entrenched*. One man's *grue* is another's *non-euclidean spacetime*. This highlights the importance of definitions in science, which offers a promising point of attack against Goodman's argument, and the notion of *simple induction* more generally. As alluded to previously, Goodman's choice of example is flawed in that emeralds are mineralogically defined to be beryl crystals that contain sufficient trace chromium (and/or vanadium) as to produce a green colouration. Hence all emeralds are green *by definition*. Godfrey-Smith (2003), in his discussion of the new riddle, treats this as an unfortunate mistake on Goodman's part, wholly accidental to the central problem of the new riddle. However, I think that this mishap on Goodman's part reveals something deeper about the argument. Much of the intuitive force of Goodman's argument owes itself to the fact that we already believe that all emeralds are green. If we consider an alternative example concerning, say, the whiteness of swans, then on the one hand this doesn't suffer the defect that swans are white by definition, but on the other hand it becomes that much easier to imagine a scenario in which the hypothesis *all swans are whack* (i.e. white if observed before time $t$, and black otherwise) is perfect justifiable. Indeed, if we let $t$ be the time at which our hypothesis is made, then the hypothesis becomes simply an odd way of saying that we believe the swans we have so-far observed to be *all* the white swans, and expect the rest to be black. 
The fact that emeralds are defined to be green by modern mineralogy, by contrast, obviates any further theorising regarding their colour. This points to something overlooked by Goodman, namely that scientific theories are not the result of a wholly external process of *projection* applied to certain predicates, but themselves pose norms regarding which predicates are *of interest* to them. It is productive, in this instance, to think of scientific hypotheses as less analogous to answers, and more to questions. There is seemingly no principled choice to be made between 'all quetzals are green' or 'all quetzals are grue' as answers to 'what colour are quetzals?', but there *is* a principled reason for asking 'are all quetzals green?' rather than 'are all quetzals grue?', namely that we are *interested* in the greenness of animals, but not in their grueness. Now the reasons we may have for being interested in greenness rather than grueness are doubtless many, diverse, and to some extent arbitrary. Importantly, however, our interests are also *malleable*, and a mature scientific theory will serve, among other things, to shape our interests in such a way as to encourage us to ask *good* questions. Hence the modern mineralogist is not interested in the colour of emeralds; that question has been settled *as a matter of definition* by modern mineralogy.

More generally, I want to say that the new riddle of induction is only a riddle for conceptions of science that labour under the notion of scientific theories as mere collections of statements made up of basic laws and their logical consequences. Of course there is no principled distinction to be drawn between predicates such as green and grue at this level, due to their mutual logical interdefinability. However, scientific theories do not offer up merely some basic laws from which to deduce; each scientific theory comes with a wealth of norms, heuristics, and best practices that serve to guide the creativity and intellect of practicing scientists in forming hypotheses, designing experiments, etc. *Simple induction*, then, is but one heuristic among others for evaluating scientific hypotheses, and by no means the best such heuristic in all cases. For sciences that seek not only to predict, but also to *explain* phenomena, a hypothesis such as 'all emeralds are green', though it may be predictively reliable, is not wholly satisfactory, for we may rightly ask 'but *why* are all emeralds green?'.

The very notion of there being a one-size-fits-all confirmation relation is therefore ill-founded, for by conditioning our practices of hypothesis-formation and testing, every scientific theory brings to the table its own criteria of confirmation. These criteria, moreover, are not held separate from a theory in assessing its success; the theory stands or falls with them, and with their ability to generate accurate predictions, as much as with its basic laws. From this perspective, the new riddle is no more a problem than the general under-determination of theory by data, to which the answer posed by science is the same: let a thousand flowers bloom, and a hundred schools of thought contend. Skyrms' problem of justifying first-order induction and Goodman's new riddle thus both turn out to be red herrings, since *scientific induction* in general looks nothing like the form of *simple induction* they both supposed.
What I want to suggest, rather, is that scientific induction instead proceeds by the chaotic proliferation of competing theories at all orders of prediction, culled by the hand of induction on the success of these theories at higher orders. Induction, under this view, is neither the means nor the end of science, but only the starting point – a method which by its very nature seeks to replace itself with something better-suited to the task at hand. The mechanics of this process shall be my subject in the next, and final, section.

## III. A new pragmatic argument

The claim I have just made – that science does not proceed by way of simple induction – prompts comparison with the thought of another philosopher who denied the usual conception of induction's role in science: Karl Popper. Indeed, Popper's philosophy shall provide a convenient foil in setting out the conception of *scientific induction* I wish to defend. Popper accepted Hume's conclusion that there was no purely rational basis for induction, and took the further step of concluding that science neither does nor ought to make use of inductive inference. Instead, Popper proposed a model of *conjectures and refutations*, whereby scientists introduce bold hypotheses that are then subjected to a barrage of empirical testing (Popper, 1959). The theories that science counsels us to accept, on Popper's account, are then those that are well-*corroborated*, in that they have withstood robust and inventive attempts at their falsification by empirical experiment.

On the basis of previous considerations, we may agree with Popper that scientific theorizing is generally not governed by *simple induction*, and that its law is closer to the law of the Popperian jungle: survival of the fittest. Nonetheless, I want to argue that Popper was wrong to dismiss induction entirely, for induction – as typically conceived – comes into play at *higher orders* of science, when it comes to adjudicating choices between competing theories. What reason have we, on Popper's account, for accepting a well-corroborated theory if not induction on its past successes? Popper attempted to meet this challenge by introducing a notion of *verisimilitude* (i.e. truth-approximation) of theories to argue that corroborated theories have greater verisimilitude. This notion, however, was subsequently shown by Miller (1974) to be trivial, and incapable of distinguishing competing theories in the manner desired by Popper.

Moreover, an acquaintance with the history of science shows that the Popperian criterion of empirical falsification for the rejection of a theory does not withstand comparison to actual scientific practice. Kuhn (1980) notes, for instance, that theories often find initial success by choosing which observations to *ignore* or consider anomalous, as much as they do by generalising existing observations. As Lakatos (1978) eloquently puts it, "theories... are born refuted and they die refuted." As a rule, we do not reject a theory out of hand in the presence of a single contradiction, but only when the contradictions start to come hard and fast. But this also begins to look suspiciously like induction, for what are we doing in such cases other than inferring from a theory's repeated failure that it is likely to fail again? This form of higher-order induction on the successes and failures of theories is, mutatis mutandis, the *whiff* of inductivism that Lakatos implored Popper to accept, and which I now seek to justify in the manner of Reichenbach.
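To fix ideas, the following toy – a construction of my own, not a method drawn from Reichenbach, Skyrms, or Lakatos – shows the shape such higher-order induction might take in code: a second-order method that defers to whichever first-order method has the best observed track record, and which therefore comes to adopt the predictions of any first-order method that proves reliably correct.

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

-- A first-order method predicts the next observation from the history so far.
type History a = [a]
type Method a  = History a -> a

-- A method's track record: how many past observations it would have
-- correctly predicted from the history preceding each of them.
record :: Eq a => Method a -> History a -> Int
record m h = length [ () | k <- [0 .. length h - 1], m (take k h) == h !! k ]

-- Second-order induction, as a toy: defer to whichever first-order
-- method has the best track record so far (the list must be non-empty).
secondOrder :: Eq a => [Method a] -> Method a
secondOrder methods h = maximumBy (comparing (`record` h)) methods h
```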
One might wonder if higher-order induction on the success of theories of the sort just described is afflicted with something like Goodman's new riddle. We can, of course, define gruesome predicates of scientific theories, e.g. call a theory *goodbad* if it is found to generate accurate predictions more than 50% of the time before time $t$ and less than 50% of the time otherwise. But of course such predicates pose no problem for scientific induction, simply because they are not of interest to us in assessing the success of theories. We are not interested in whether a theory is *goodbad*, but simply whether it is *good*.

But how to go about telling the good theories from the bad? How, specifically, without committing ourselves to anything as strong as Reichenbach's frequentism? Here we may give the devil (Hume) his due, and admit that understanding the *mechanics* that drive scientific induction may prove necessary in determining what it is that scientific induction *accomplishes*. Unlike Hume, however, I shall seek to understand the *normative dimension* of these mechanics, specifically the values they answer to. Insofar as these values are sufficiently universal as to be constitutive of the scientific enterprise itself, this shall make up the *pragmatic* aspect of a new pragmatic argument for scientific induction. And insofar as the set of methods answering to these values can be cast coinductively, in such a way as to count scientific induction among their number, this shall constitute the *logical* dimension of this new argument.

What is to be gained by accepting a scientific theory? More pertinently, what is a reason for *maintaining* the theory one has already accepted, provided it continues to perform adequately? I have already indicated somewhat of an answer in this regard, when I said that theories offer not only collections of laws, but also norms, heuristics, etc. What should be acknowledged, however, is that these intellectual treasures are not to be taken free of charge – it generally takes work to become conversant in the jargon of a theory, to become efficient at executing the tasks it demands, etc. Moreover, I want to say that this is not, as Hume would have it, a mere consequence of human nature specifically, but applies more generally to *any* agent or system that processes information, due to the deep and subtle complexities of information transmission and computation. This – perhaps not exclusively – is what makes *normal science*, as Kuhn defines it, desirable: the coalescence of the scientific community around shared standards and norms, and the wealth of efficiency and specialization that results. This we might call the *inertia* of scientific induction – what makes us resistant to moving between theories.

What, then, could drive us away from a theory? As previously mentioned, we tend to give up on theories that have repeatedly failed to live up to the standards we expect of them. But why should this be so? We may, after all, be like the miner who abandons his toil with only an inch of earth left between him and the diamonds he seeks. Be that as it may, our waning confidence in flagging theories buys us one thing: the avoidance of *degeneration*, to borrow a Lakatosian turn of phrase. A *degenerating research programme*, in Lakatos' terminology, is one that never yields novel, successful predictions, and is left to rationalize away the anomalies that accumulate around it.
If the theory we adopt turns out to be such a sinking ship, then sooner or later our tolerance for its failures will dissolve, and we shall be compelled to cast about for a new intellectual raft to cling to. Of course, we have no guarantee of finding a more able vessel to carry us along, but so long as there is some craft capable of supporting us, we at least stand a chance to prosper.

These two intellectual forces, one driving us toward stability, the other toward dynamism in our choice of theories, form the basis of scientific induction, which as I see it is nothing more or less than a process for finding equilibrium between them. Indeed, we may define the set of *adaptive* methods for empirical inference as those which are subject to both such stabilizing and dynamic forces at *all* levels; this definition is transparently coinductive, since it characterizes a kind of *potential infinity* of corrective structure at every level of inquiry. And by construction, the many methods of scientific induction as I have defined it count among such *adaptive* methods. Hence, at last, we have arrived at the promised new pragmatic argument for induction.

Scientific induction, in this sense, names not one but a whole family of methods for dealing with uncertainty. Indeed, everyone who participates in science sets their own standards for accepting and rejecting theories, hence the process of scientific induction is diverse in its manifestations. So long, however, as the standards adopted by the scientific community submit to both the forces of *coalescence* and *anti-degeneration* identified above, the multifarious processes of scientific induction are unified in what they attain: intellectual adaptability. Just as evolution allows for the adaptation of species, and results from competing pressures of reproduction, mutation, and natural selection, so scientific induction allows for the adaptation of *theories*, and arises from similar pressures. What scientific induction secures for us, in this sense, is not certainty but continued room to maneuver. That such adaptability is a cardinal virtue of the human intellect, and thereby lends justification to our practices of scientific induction, I think I need not stress further.

---

### References

* P. Godfrey-Smith 2003, "Goodman's Problem and Scientific Methodology", *The Journal of Philosophy* vol. 100, no. 11, pp. 573-590
* N. Goodman 1955, *Fact, Fiction, and Forecast*, Cambridge, MA: Harvard University Press, ch. 3-4
* D. Hume 1740, *A Treatise of Human Nature*, part III, sec. 1-6
* T. Kuhn 1980, *The Structure of Scientific Revolutions*, Chicago/London: The University of Chicago Press, ch. 2-6
* I. Lakatos 1978, *The Methodology of Scientific Research Programmes*, J. Worrall & G. Currie (eds.), Cambridge: Cambridge University Press, intro., ch. 3
* D. Miller 1974, "Popper's Qualitative Theory of Verisimilitude", *The British Journal for the Philosophy of Science* vol. 25, no. 2, pp. 166-177
* J. Moreland 1976, "On Projecting Grue", *Philosophy of Science* vol. 43, no. 3, pp. 363-377
* K. Popper 1959, *The Logic of Scientific Discovery*, New York/London: Routledge Classics, ch. 2 sec. 11, ch. 5 sec. 30, ch. 10 sec. 79-85
* H. Reichenbach 1938, *Experience and Prediction*, Chicago/London: The University of Chicago Press, ch. 4
* B. Skyrms 2000, *Choice and Chance*, Belmont, CA: Wadsworth, ch. 3, sec. 3-4