Reflections on rationality and the collection of information

How much information should you collect before making a decision?

Theory, Implications and Applications

By Hans O. Melberg

Preface

In the preface to Ulysses and the Sirens Jon Elster writes that "to fail is always to fail at something, and it leaves you with a knowledge of the kind of thing you unsuccessfully tried to do" (Elster 1983, viii). The same can be said about this paper. It will be judged a failure if the criterion is whether it provides a comprehensive, formal and reliable answer to the question of how much information to collect. However, the attempt to answer the question may still be valuable. First, while I cannot give a comprehensive positive answer, the negative aim of arguing against some theories is still possible and valuable. Second, given the cumulative nature of academic work, it is perfectly acceptable to drop the aim of comprehensiveness and focus on some aspects of the problem in question. Third, given the inherent limitations of time, space and personal abilities, I would end up with a very poor result if I aimed for the first-best comprehensive answer. For these and other reasons I continued to work under the title "How much information should you collect before making a decision?" even though I never laboured under the illusion that I would be able to give a satisfactory answer.

I once read a joke on the theme that copying from one book was called plagiarism, while copying from several books was called a dissertation. In preparing this work I have not only used other people's ideas, but I have also asked people for comments and advice. Jon Elster, Aanund Hylland and Roy Radner took the time to answer what sometimes must have seemed like childlike questions, and for this I am grateful. I should also thank Timur Kuran and Barry Weingast, who sent me copies of forthcoming papers. Support from a project financed by the Norwegian Research Council and led by Pål Kolstø allowed me to go to the Association for the Nationalities' conference in New York in 1998 to learn more about rational choice theories of ethnic violence.

 

 

Preface

Introduction

What is the question?

Why try to answer this question?

The answer should not be obvious, or the obvious answer should be incorrect

The answer and its implications should be important

Is it possible to answer the question and if so how?

The arguments

What is a rational decision according to Elster?

Introduction

The theory in general

Hardin vs. Elster on beliefs, collection of information and rationality

Sub-conclusion

The general argument for the indeterminacy of rational choice

Is Elster's argument on the impossibility of collecting an optimal amount of information correct?

The problem of infinite regress

The problem of estimation

Resurrecting the problem

Non-linearities and the problem of forming expectations: Leijonhufvud

Ergodicity and the problem of using the past to predict the future: Davidson

Revisiting the problem of infinite regress

Implications

Making old psychological theories of economic fluctuations rational and plausible?

Herd-behaviour, ethnic conflict and economic fluctuations?

Friedman and the Darwinian argument to save the maximization assumption

Extensions and criticism

Conclusion

 


 

Introduction

 

What is the question?

The starting point of this paper is the following question: How much information should we collect before we make a decision? The short answer is that we should collect information as long as the expected value of spending more time on collecting information is greater than the expected cost. But how do we know the expected value and cost of more information? To answer this we need to collect information, i.e. we have to collect information to determine how much information to collect. As the reader may already have understood, this apparently leads to an infinite regress: we must collect information on how much information to collect before we decide how much information to collect, and so on forever. This is the problem of infinite regress in the collection of information, and some authors - for instance Jon Elster and Sidney Winter - argue that it is a serious problem in the theory of rational choice.
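The stopping rule described above can be made concrete with a small sketch. The numbers and the variance-based model of the value of information below are purely illustrative assumptions of mine, not part of the paper's argument: an agent keeps sampling only while the expected value of one more observation exceeds its cost. Note where the regress enters: the rule itself presupposes estimates of the noise (sigma) and the stakes, which are themselves pieces of information the agent would have to collect.

```python
import random

def collect_until_marginal_value_below_cost(draw, sigma=1.0, stakes=100.0,
                                            cost=0.5, max_n=1000):
    """Collect observations while the (assumed) expected value of one more
    observation exceeds its cost. The 'value' of a sample is modelled, purely
    for illustration, as the reduction in the variance of the estimated mean
    (sigma^2/n falls to sigma^2/(n+1)) scaled by the stakes of the decision."""
    samples = [draw(), draw()]  # need a couple of samples to start at all
    while len(samples) < max_n:
        n = len(samples)
        marginal_value = stakes * (sigma**2 / n - sigma**2 / (n + 1))
        if marginal_value <= cost:
            break  # one more sample is no longer worth its cost
        samples.append(draw())
    return len(samples), sum(samples) / len(samples)

random.seed(1)
n_used, estimate = collect_until_marginal_value_below_cost(
    lambda: random.gauss(10.0, 1.0))
print(n_used, round(estimate, 2))
```

With these (invented) parameters the rule stops after 14 observations regardless of the data, which illustrates the point: the stopping decision is driven entirely by quantities the agent had to assume before sampling.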

Although I shall argue that the problem of infinite regress is significant, I am sceptical of the claim that the problem is caused by infinite regress in the collection of information. As I will discuss in the next chapter, there are at least three kinds of infinite regress problems: infinite regress in deciding how to decide; infinite regress in forming beliefs for a given set of information; and infinite regress when trying to determine how much information to collect. Of these, I believe the second is more important than the third (see section xx.xx). Moreover, I believe the problem of infinite regress in information is less fundamental than the problem of estimation, which is discussed in the paragraph below. (The arguments are made in section xx.xx.)

The infinite regress problem is only one possible source of indeterminate answers when trying to decide how much information to collect. A second problem occurs when it is impossible to form rational estimates of the value of information for reasons other than infinite regress. For instance, when we are in a unique situation we cannot determine the value of information from historical experience of similar situations, and hence there is (on the classical view of probability) no rational basis for estimating the value of information. I shall label this the estimation problem. One of the arguments in this paper is that some authors use the label "infinite regress" when they really discuss the estimation problem.

 

Why try to answer this question?

What makes a question worth asking and answering? First, the answer should not be obvious, or the obvious answer must be incorrect. Second, the question must be important. Third, it must be possible in principle to give an answer. In this section I will try to relate these requirements to the problem of deciding how much information to collect. The aim is to demonstrate both that I am not flogging a dead horse (i.e. there is disagreement today) and that I am not arguing against a straw man (i.e. there are people who think the collection of information does not represent a problem).

 

The answer should not be obvious, or the obvious answer should be incorrect

One good way of demonstrating that the answer is not obvious is to show that the "experts" disagree. For instance, on the question under consideration, Roy Radner (1996, p. 1363) seems to believe that the collection of information is not a major problem. As he writes:

It is convenient to classify the costly (resource-using) activities of decision-making into three groups:

1. observation, or the gathering of information;

2. memory, or the storage of information;

3. computation, or the manipulation of information. [...]

4. communication, or the transmission of information.

Of these activities, and their related costs, the first, second, and fourth can be accommodated by the Savage paradigm with relatively little strain, although they do have interesting implications.

In apparent contradiction with this view, we may quote Jon Elster (1985, p. 69) who believes the problem of information collection is significant. He writes:

In most cases it will be equally irrational to spend no time on collecting evidence and to spend most of one's time doing so. In between there is some optimal amount of time that should be spent on information-gathering. This, however, is true only in the objective sense that an observer who knew everything about the situation could assess the value of gathering information and find the point at which the marginal value of information equals marginal costs. But of course the agent who is groping towards a decision does not have the information needed to make an optimal decision with respect to information-collecting.[23] He knows, from first principles, that information is costly and that there is a trade-off between collecting information and using it, but he does not know what that trade-off is.

The term contradiction may be too strong to describe the difference between the two quotations. Radner claims that information collection can be "accommodated by the Savage paradigm", but he does not discuss whether mere consistency of subjective beliefs (which is what is required within the Savage paradigm) is sufficient to label the decision "rational." Thus, the difference between Radner and Elster may be that Radner is willing to label a decision rational as long as it is based on consistent beliefs, while Elster places stronger demands on beliefs, such as that they should be rationally constructed for a given set of information. This is a topic I will discuss more closely in the first part of this paper, which is a general introduction to rational decision-making. In any case, the quotations prove that decision theorists differ in the degree to which they view the collection of information as a problem; whether this is a substantial disagreement or a mere problem of labelling remains to be discussed.
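The gap between consistent and rationally grounded beliefs can be illustrated numerically. In the sketch below (all numbers invented), two agents start from very different priors about the same hypothesis, update by Bayes' rule on the same evidence, and both end up with perfectly consistent posteriors; consistency alone, the Savage-style requirement, says nothing about which prior was reasonable in the first place.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior probability of hypothesis H after observing evidence E,
    by Bayes' rule: P(H|E) = P(E|H)P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Two agents see the same evidence with the same likelihoods:
# P(E|H) = 0.8 and P(E|not H) = 0.3 (illustrative values).
optimist = bayes_update(0.9, 0.8, 0.3)   # prior 0.9
pessimist = bayes_update(0.1, 0.8, 0.3)  # prior 0.1
print(round(optimist, 3), round(pessimist, 3))  # both valid probabilities
```

Both posteriors obey the probability axioms, so both agents count as "rational" on a pure consistency test, even though they still disagree sharply; this is the looseness Elster's stronger demand on belief formation is meant to close.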

 

The answer and its implications should be important

Not all non-obvious questions are worth asking. For instance, assume you have spent much time and effort finding the answer to a non-obvious question, but that few or no important consequences follow from answering it. It seems you could have made better use of your time trying to answer a different and more important question. Sometimes people react in this way to the opening paragraph of this paper. "So what?"; "Who cares?" and "This is simply too abstract to be of practical use" were some of the comments. I disagree, but before I can explain why, it is necessary to discuss the meaning of "importance."

Clearly, "importance" is a subjective term, so what is important to you need not be important to me. Although the previously mentioned reaction ("who cares?") could be dismissed on this ground ("I don't care what you say, it is important to me!"), I think this would be wrong. It is wrong because I believe the reaction stems not from thinking that the implications are unimportant, but from being unaware of the importance of the implications. Hence, I will try to persuade the reader by making some of the implications explicit.

First of all, we should distinguish between implications that derive their importance from being directly policy-relevant versus those implications that have intellectual importance. If I could give a good answer to the question of how to reduce the problem of unemployment, this would immediately be of importance to the welfare of many people (which may be one commonly agreed meaning of “importance”). I admit that my question is not important in this sense. However, sometimes a question is academically important because the answer may change the way we explain many phenomena. In short, new directions of research may be opened and some theories may be weakened. In what way is this the case with the question of how much information we should collect?

Take the first suggestion, that the answer to the question of how much information to collect may change the way we explain many economic phenomena. The main example in this paper is how we should explain economic fluctuations. More specifically, to explain economic fluctuations it is important to explain why investment is so volatile, and to explain this we need a theory of firm behaviour. If it turns out that it is impossible to give a determinate answer to how much information we should rationally collect, then the theory of the firm cannot be based solely on the theory of rational choice. At best these theories are incomplete and need to be complemented by theories that explain how much information we actually collect. For instance, according to Jon Elster (1983 (Review), p. 6), Sidney Winter and Richard Nelson argue that the problem of infinite regress leads to theories of satisficing instead of maximizing:

The Nelson-Winter attack on optimality is therefore a two-pronged one. The argument from satisficing is that firms cannot optimise ex ante, since they do not have and cannot get the information that would be required. Specifically, they would need an optimal amount of information, but this leads to a new optimisation problem and hence into an infinite regress. On the other hand, we cannot expect firms to maximise ex post, since the elimination of the unfit does not operate with the same speed and accuracy as it does in natural selection. Taken together, these two arguments strike at the root of neo-classical orthodoxy.

In sum, if it can be shown that there is no solution to the problem of collecting an optimal amount of information, we are justified in searching for alternative theories of economic fluctuations, be it psychological theories like Keynes' animal spirits, evolutionary theories of the type Nelson and Winter favour, or Simon's satisficing/bounded rationality theory. Limitations of space and knowledge prevent me from discussing all of these, but in section xx I try to say something about animal spirits and in section xx I present so-called herd models of economic fluctuations.

The argument, of course, does not only apply to theories of economic fluctuations. Indeed, since rational choice and maximizing theories are fundamental to many economic theories, the argument that rational choice is indeterminate is potentially very important. As Robert Solow writes, economics is built on three fundamental assumptions: greed, rationality and maximization. The same theme is also apparent in Daniel Hausman's (1992) book The Inexact and Separate Science of Economics. In short, since rational maximization is so fundamental in economics, I believe it is important to ask whether one aspect of the rational choice problem (the rational collection of information) yields determinate answers.

The second suggestion that could make an argument important was that it could weaken the appeal of some theories. The main concrete example I shall use is rational choice theories of ethnic violence. Recently a number of authors -- Russell Hardin (1995), David Laitin (1998), Barry Weingast (1999, forthcoming) and Timur Kuran (1998), to mention some -- have tried to explain ethnic violence using rational choice models. In this paper I shall argue that these theories are weakened if we reflect on the problems related to rationality and the collection of information. This may not change the lives of many people, but it is an important intellectual implication.

More widely, the argument could be extended to economic imperialism in general. The last decade has seen an increasing use of the economic approach in the other social sciences, especially political science and sociology. I cannot evaluate this tendency in general, but one implication of the argument that it is impossible to collect an optimal amount of information is to weaken the appeal of this tendency. (Other arguments may still strengthen it.)

In summary, besides the inherent intellectual satisfaction of satisfying our curiosity about the answer to a question, I believe there are two potentially important implications of my paper. The first concerns the justification of using homo sociologicus in economics. The second concerns the appeal of homo economicus in other disciplines. It is important to note that I do not pretend to give general answers to these questions. All I do is draw upon a case study of each tendency to develop my main theme of information and rationality. I do claim, however, that the answer to my main theme is important as one of the many arguments involved in the above-mentioned disputes.

 

Is it possible to answer the question and if so how?

Some questions are interesting (or non-obvious) and important, but there is very little hope of determining the answers with any degree of reliability, and this greatly reduces the utility of spending time on them. Hence, the third requirement for a question to be worth trying to answer is that it is at least in principle possible to give an answer. This demand may be strengthened to say that we should only focus on those questions that can be answered relatively reliably given the inherent limitations of knowledge (both personal and general), time, resources and so on.

Some questions do not have objectively true answers. For instance, it is in principle impossible to give an objectively true answer to the question of whether vanilla or chocolate is the best ice-cream flavour. More seriously, the question "What is just?" may not have a unique and objectively true answer (but see Melberg 1996 for some arguments that it might still be possible to provide a partial answer). As for the topic of this paper, it is true that there is no single definition of rationality that everybody agrees to. This does not mean that it is impossible to discuss the question of rational information collection in a scientific manner.

...

Having established that it is in principle possible at least to discuss the problem in a scientific manner, it remains to be argued exactly how we should proceed to answer the question. I have already admitted that I shall proceed by dividing the question into smaller parts and then selecting only some of these for closer investigation. Moreover, I have chosen to use many concrete examples, and I shall usually make sure that I have a quotation either to argue against or to use as a starting point. Finally, on the question of formal and abstract mathematics vs. verbal reasoning, I have opted for a middle course. These choices were made mostly out of necessity. I would have liked to give a more formal and comprehensive answer, but limited personal abilities prevent me from doing so.

One final aspect should be commented on since it may seem peculiar to some. I think it is important to state the weaknesses of my own arguments and sometimes I will indicate the degree to which I am unsure. This is not only a question of academic honesty. By telling the reader about my own uncertainty, I make it easier for those who want to scrutinise and build on my arguments. I also prevent them from taking what I weakly believe too seriously. This is important because, as mentioned in the preface, science is a cumulative effort and there is no reason to make this cumulative work more difficult than it already is by hiding uncertainties behind confident language.

 

 

 

The arguments

I first present two conceptual arguments.

  1. Rationality demands not only picking the action with the highest expected utility, but also having rational beliefs. Moreover, consistency of subjective beliefs is not enough to label the beliefs rational. (This is in agreement with Elster and in partial disagreement with Savage and Radner.)
  2. Against Russell Hardin and H. Smith I argue that even the collection of information should be rational if we want to call the action rational. In sum, maybe rationality depends on the process, not the end result? (Note that with costly information the rational action need not be the one in the feasible set that maximizes expected utility.)

Having defined rationality, I then consider the question of rational collection of information in more detail. The arguments are:

  3. I criticise Elster's view that infinite regress in the collection of information is a major problem in the theory of rational decision-making. His argument is more of the second type, that is, it concerns problems of finding reliable expected values for reasons other than infinite regress.
  4. I then present a weaker argument, but still one that I believe to be true: the Bayesian view of probability is better than the classical relative frequency view, and this reduces the force of Elster's arguments on the impossibility of estimating the expected value of information. The section includes a criticism of defining acts based on "second-decimal arguments" as hyperrational.
  5. Having criticised the argument for the impossibility of rational collection of information, I then try to resurrect the problem with the following arguments:
  6. Starting with a quotation from an article by Leijonhufvud, I speculate on how non-linearities make it more difficult to form reliable expectations. The results are suggestive, but not general enough to be truly convincing.
  7. I discuss why I either fail to understand Paul Davidson's arguments about why non-ergodicity makes it impossible to use the past as a source of information about the future, or -- if my understanding of his argument is correct -- argue that he is wrong. The argument includes a disaggregation of the concept of information as well as a discussion of bottom-up vs. top-down views on how we use the past.
  8. The problem of infinite regress appears not only when we try to collect information, but also when we try to decide how to decide and, more importantly, when we try to form beliefs even with a given set of information. This argument is advanced using a formal cobweb model and a Cournot duopoly model.
  9. On the implications of the arguments made in (4) and (5):
  10. Some old "psychological" theories of economic fluctuations look more plausible when we consider the problems resurrected in (5) (that is, theories like those of J.S. Mill, Pigou and, more weakly, Keynes' animal spirits).
  11. I also consider how the assumption of costly information makes herd-theories of economic fluctuations more appealing.
  12. As for the implications for economic theories of ethnic violence, I consider and try to criticise, first, a model by T. Kuran and, second, a model by B. Weingast.
  13. On the more general implications for economics I consider Friedman's Darwinian justification for the continued use of the assumption of maximization. Although I largely agree with his critics, I offer one small criticism of the critics.
  14. Before concluding I consider, and admit to, some counterarguments, both against the specific arguments related to information and against the application of the theory to economic fluctuations and ethnic violence. I admit that I have in no way provided micro-foundations that can explain all economic fluctuations, or presented a better theory of ethnic violence than those I have criticised.
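The cobweb model mentioned in the list above can be sketched in a few lines. This is the standard textbook version with naive expectations and linear demand and supply; the parameter values are invented for illustration. The point relevant here is that whether expectations converge on the equilibrium price depends on the slope ratio d/b, a quantity producers would themselves need information (and a model of other producers) to estimate.

```python
def cobweb_prices(a=10.0, b=1.0, c=1.0, d=0.8, p0=2.0, periods=20):
    """Textbook linear cobweb model with naive expectations: producers
    expect this period's price to equal last period's (p_e = p_prev),
    supply q = c + d * p_e, and the market clears against demand
    q = a - b * p, giving p = (a - c - d * p_prev) / b."""
    prices = [p0]
    for _ in range(periods):
        prices.append((a - c - d * prices[-1]) / b)
    return prices

stable = cobweb_prices(d=0.8)    # |d/b| < 1: oscillations dampen
unstable = cobweb_prices(d=1.2)  # |d/b| > 1: oscillations explode
print(round(stable[-1], 3), round(unstable[-1], 1))
```

With d/b = 0.8 the price path spirals in towards the equilibrium (a - c)/(b + d) = 5; with d/b = 1.2 the same naive rule drives the price ever further from equilibrium, illustrating how belief formation, not information per se, can make the rational forecast indeterminate.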

 

What is a rational decision according to Elster?

 

Introduction

The aim of this chapter is to define rational choice. In short, what does our intuition tell us is required before we label an action rational? To structure the discussion I have chosen to use Elster's theory as a starting point. I then defend, and to some extent go beyond, his arguments about the need to consider belief-formation and information-collection before a decision is defined as rational. Finally, I conclude the chapter by considering Elster's general arguments for the indeterminacy of rational choice.

The theory in general

Let me start with the following quotation from Jon Elster's (1985, p. 71) article, "The nature and scope of rational choice explanation":

Ideally, a fully satisfactory rational-choice explanation of an action would have the following structure. It would show that the action is the (unique) best way of satisfying the full set of the agent's desires, given the (uniquely) best beliefs the agent could form, relatively to the (uniquely determined) optimal amount of evidence. We may refer to this as the optimality part of the explanation. In addition the explanation would show that the action was caused (in the right way) by the desires and beliefs, and the beliefs caused (in the right way) by consideration of the evidence. We may refer to this as the causal part of the explanation. These two parts together yield a first-best rational-choice explanation of the action. The optimality part by itself yields a second-best explanation, which, however, for practical purposes may have to suffice, given the difficulty of access to the psychic causality of the agent.

According to this view there are two general demands that have to be met before we can use rational choice to explain an action: first, the demands of optimality; second, the demands of causality. The demands of optimality can be divided into three requirements: optimality in the choice of action from the feasible set, optimality of beliefs for a given set of information, and optimality in the collection of information. The causal demands require that the action and the beliefs be caused "in the right way" by preferences, beliefs and evidence. For instance, assume it is rational for me to press a green button (not the red), and I do so. We would not call this a rational action if the reason I pressed the green button was that somebody pushed me and I accidentally hit it. The same goes for beliefs. I may, for example, make two errors when I calculate probabilities, but these two errors may cancel each other out so the final belief is optimal. This is an example of evidence causing the beliefs in the wrong (non-rational) way.

Before going on to examine Elster's three optimality requirements, it is useful to compare his demands to those of others, as well as pointing out some general distinctions that are useful when defining rationality.

In Sour Grapes Elster (1983) distinguished between thin and broad rationality. The thin theory is mainly concerned with consistency. Irrationality is defined by inconsistent choices (for instance, intransitive choices) or inconsistent beliefs (for instance, probabilities that add to more than one). Economists usually use this thin concept of rationality, but they also add several technical assumptions to make the concept operative (i.e. to define a utility function) (see Sen 1987 for the argument that consistency is the key meaning of economic rationality). Under certainty it is required that preferences are complete and continuous (see, for instance, Hargreaves Heap 1992, p. 6). Under uncertainty the assumptions include strong independence and the laws of probability (see Hargreaves Heap 1992, p. 9 for the assumptions and Hacking 1987, p. 165 for the seven basic laws of probability).
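The consistency requirement of the thin theory can be stated mechanically. The sketch below (the function name and the toy preference sets are my own, purely illustrative) checks a set of pairwise choices for violations of transitivity, the kind of inconsistency that defines irrationality on the thin view.

```python
from itertools import permutations

def is_transitive(prefers, items):
    """Check the transitivity axiom of thin rationality: for every triple
    (x, y, z), if x is preferred to y and y to z, then x must be preferred
    to z. `prefers` is a set of (better, worse) pairs."""
    for x, y, z in permutations(items, 3):
        if (x, y) in prefers and (y, z) in prefers and (x, z) not in prefers:
            return False  # found a violation: the choices form a cycle
    return True

consistent = {("a", "b"), ("b", "c"), ("a", "c")}  # a > b > c, transitive
cyclic = {("a", "b"), ("b", "c"), ("c", "a")}      # a > b > c > a, a cycle
print(is_transitive(consistent, "abc"), is_transitive(cyclic, "abc"))
```

Notice how little the test demands: it inspects only the internal pattern of choices, never where the preferences came from, which is exactly why Elster's broader definition adds requirements on beliefs and information.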

Having argued that economists use a "thinner" notion of rationality than Elster, we can go to the other extreme and consider the very broad notion used by many philosophers. In this tradition rationality is often defined as being or acting "reasonable." Robert Nozick (1993), in his book The Nature of Rationality, argues that it is not reasonable to consider acting on all kinds of preferences as rational. For instance, he (1993, p. 144) claims that it is irrational for an agent "to have desires that she knows are impossible to fulfill (sic)", and he devotes a sub-chapter to the many demands he wants to make on preferences before they should be called rational. These discussions are often said to be about substantive rationality, as opposed to the economists' more instrumental concept of rationality (i.e. they are about what we should want, not only about what we should do to get what we want). This substantive demand goes beyond Elster's definition, since he (at least in the article above) does not want to make it part of the standard definition of rationality.

We now have three concepts of rationality: the thin and instrumental economic definition, Elster's broader definition, and finally the broad and substantive notion employed in philosophy. Why, then, focus on Elster? First of all, with the rational expectations revolution in economics there has been a tendency to widen the scope of rationality even in economics. As described by Roger E. Backhouse (1995, p. 118):

In the post-war period economic theory has been dominated by the attempt to explain economic phenomena in terms of rational behaviour. In macroeconomics this has taken the form of providing a microeconomic foundation for macroeconomic theories: deriving macroeconomic relationships as the outcome of individuals' optimizing subject to the constraints imposed by their endowments, markets and technology. There has been an aversion to what Lucas has termed 'free parameters': parameters describing individual or market behaviour that are not derived from the assumption of utility or profit maximization.

One of these free parameters was the assumption of either rigid or only backwards looking adaptive expectations, so in the 1970s Lucas, in the words of Backhouse (1995, p. 123) argued that optimizing behaviour "should be systematically applied to all aspects of macroeconomic models, including the formation of expectations." This demonstrates the tendency for economics to go beyond the thin notion of rationality and it identifies this trend as a key to progress in economic science.

Why should economists take this step? As the quotation suggests, the main argument is not that we can explain more by doing so, but that the explanation is better grounded. The underlying notion is one that conceives of progress in a discipline as explaining much using little (reductionism and parsimony). But if this is the underlying justification, the logical next step is to include Elster's third requirement: if parsimony demands that beliefs be rational for a given set of information, it also demands that the collection of information be rational. As the discussion below demonstrates, not everybody will agree to this.

 

Hardin vs. Elster on beliefs, collection of information and rationality

In the book One for All: The Logic of Group Conflict, Russell Hardin (1995, p. 16) tries "to go as far as possible with a rational choice account of reputedly primordial, moral, and irrational phenomena of ethnic identification and action." A short summary of his theory of ethnic violence goes as follows. It is rational to identify with a group, since it provides security and material benefits, and satisfies a psychic need to belong somewhere. Being a member of a group affects your beliefs, since it tends to reduce awareness of alternative ways of doing things, as well as inducing the belief that what we do is the right thing to do (the is-ought fallacy). Given these beliefs, it becomes rational for people who want power to play on people's ignorance and on the belief that we are "better" than the others. Finally, group violence happens when the leaders find that this is their best way of maintaining power (for instance, to distract people from economic failure). Using nationalist propaganda, they create a belief that it is in people's self-interest to engage in a pre-emptive attack against the other group. Once violence starts there is a spiral that only increases it, since violence creates hate as well as an even stronger belief that we must destroy the other side before they kill us (and there is no way the parties can credibly promise not to discriminate against each other).

Although this is to some extent a plausible story, we have to ask whether it is intuitive to label it rational. More specifically, is the formation of the beliefs behind nationalism and ethnic violence rational? Hardin (1995, p. 62, emphasis removed) admits that the beliefs used to explain group conflict are "not convincing, even patently not so in the sense that it would not stand serious scrutiny..." (adding that this "does not entail that people cannot believe it", but this is not relevant to my argument). But how can it be rational to act on beliefs that are obviously wrong? Hardin's (1995, pp. 62-63) answer is worth quoting at length:

One might say that the supposed knowledge of ethnic or national superiority is corrupt at its foundation. Unfortunately this is true also of other knowledge, perhaps of almost all knowledge of factual matters. [...] Hence, at their foundations there is little to distinguish supposed knowledge of normative from that of factual matters [...] Should we say that anyone who acts on such knowledge is irrational? We could, but then we would be saying that virtually everyone's actions are always irrational. It seems more reasonable to say that one's beliefs may have corrupt foundations but that, given those beliefs, it is reasonable to act in certain ways rather than others if one wishes to achieve particular goals.

[...]

Someone who carries through on an ethnic commitment on the claim that her ethnic group is in fact superior, even normatively superior, to others, may not be any more irrational than I am in following my geographic knowledge. She merely follows the aggregated wisdom of her ethnic group.

In short, because all knowledge is corrupted at its base, it "would be odd [...] to conclude that the action was irrational when taken if it was fully rational given the available knowledge" (Hardin 1995, p. 16, my emphasis).

Who is correct, Hardin or Elster? First of all, there are several internal inconsistencies in Hardin's argument. For instance, even if we agree that rationality demands only optimality for given information, it is difficult to see how people can rationally believe that the individuals in their ethnic group descend from one "original" ancestor, a group "Eve" — a common belief among nationalists (see Connor 1994). Hence, "patently false beliefs" do not require us to collect information in order to be falsified; they may be irrational even for a given body of knowledge. Moreover, at one point he also claims that "a full account of rational behaviour must include the rationality of the construction of one's knowledge set. Costs and benefits of gaining particular bits and categories of knowledge may be related to one's circumstances...". Maybe I have failed to understand this (it is preceded by the phrase "On this view ..."), but as far as I can see it stands in stark contradiction to the rest of his argument. Finally, there is an inconsistency in his position, since he later (1995, p. 192) attacks communitarianism using the following two arguments:

The chief epistemological problem with particularistic communitarianism is that it violates the dictum of the epigraph of this chapter: The important thing is not to stop questioning [...] To question such beliefs is to increase the chance of bettering them. (p. 192)

"Commonsense epistemology allows for variations in our confidence of our knowledge. My belief that concern for human welfare dominates concern for various community values or even community survival is radically different from my belief that certain rough physical laws hold sway over us." (p. 210)

He cannot at the same time deny that rationality requires the rational construction of beliefs and claim that communitarians are wrong not to question their beliefs. Indeed, if all knowledge is corrupt at its foundation, as he says, then we should put little faith in the proposition that we can increase the reliability of our beliefs and values by questioning them.

Second, we might question the argument that all knowledge is equally corrupt at its foundation. As I shall discuss later, Jon Elster and Leif Johansen have made similar claims, but they in no way go this far. Is it really true that all our knowledge is so weak that none of the differences are worth seeking out or acting on?

Third, and perhaps most important, there is the suggestion that demanding a rational construction of the knowledge set must lead us to conclude that "virtually everyone's actions are always irrational." My immediate response is that his argument leads to an equally odd conclusion: rejecting the demand for rational construction of beliefs makes almost all behaviour rational. A better approach, I think, is to say that rational/irrational/non-rational are not labels that simply apply or do not apply; it is a question of degree. An action can be more or less rational depending on variables such as the rationality of the construction of the beliefs behind it. Hence, the information demand does not commit me to the position that almost all action is irrational, as Hardin claims. Moreover, his argument makes it far too easy (and uninteresting) to prove that a phenomenon is caused by individually "rational" action.

In the end it boils down to our intuitions. Would you call it rational to buy a house or a car without first collecting some information about its quality? If not, then you accept some demands on the collection of information before you label the decision rational. Would you say that a man who went out today to walk on water because he believed he was God is rational? If not, then you accept that beliefs have to stand in some proportion to the evidence before the action is defined as rational.

Sub-conclusion

I have argued in favour of Elster's definition of rationality and against Hardin's. The argument had two main aspects. First, there is the theoretical presupposition, based on parsimony, that if we assume maximizing behaviour in the choice of action for given beliefs, then we should also assume it when people form beliefs and when they collect information. Second, when faced with some concrete examples, it sounded intuitively wrong to exclude optimal beliefs and optimal collection of information from the definition of rationality. One may go further and argue that intuition tells us to adopt the more substantial definition of rationality used in philosophy, but for the present purposes it is not necessary to go this far. All we need for the problem of infinite regress and the problem of estimation to exist is to have the construction of beliefs and of the information set within the definition of rationality. To argue that more is needed (i.e. that preferences must be rational) may be correct, but it is not necessary to make that argument to get the problems I want to discuss off the ground.

 

The general argument for the indeterminacy of rational choice

Before going on to discuss the special case of indeterminacy in the collection of information, we should locate the problem within the larger theory of indeterminacy in rational choice.

Elster's general argument is that for each of the three maximization problems, there may either be no solution or many solutions. The topic is extensively discussed in the opening chapter of Solomonic Judgements, as well as in the mentioned article from 1985 on the nature and scope of rational choice explanation and in the introduction to the edited book Rational Choice (there is considerable overlap between the two). To facilitate the discussion I have outlined the issues in Table 1.

 

Table 1: Indeterminacy in the three optimization problems

1. Choose the optimal act within the feasible set
   No solution: Whoever writes the number closest to zero (but above it) gets a prize; there is no such number, hence no optimal choice.
   More than one solution: The agent is indifferent between two options.

2. Form the optimal beliefs for a given set of information
   No solution: Radical uncertainty and strategic uncertainty may make it impossible to form optimal beliefs.

3. Collect an optimal amount of information
   No solution: In some situations it is impossible to estimate the marginal cost and benefit of spending more time collecting information.

Is Elster's argument on the impossibility of collecting an optimal amount of information correct?

 

Introduction

Is it true that the collection of information cannot be solved rationally because it leads to an infinite regress in which we always need to collect more information before we can decide? To answer this question, I shall first examine Elster's arguments as he has presented them in different papers. From this it will emerge that the real problem he points to is not infinite regress, but the impossibility of forming reliable expectations for reasons other than infinite regress. However, when considered more closely, even this argument is not without flaws, since it relies heavily on the notion of probability as relative frequency.

 

The problem of infinite regress

 

When discussing the problem of collecting an optimal amount of information, Elster is clearly very much influenced by the arguments made by Sidney Winter (1964, 1975) and by Nelson & Winter (1982) in the book An Evolutionary Theory of Economic Change. Elster (1983, p. 5) reviewed this book for the London Review of Books and hailed it as "a landmark in the development of economic theory." The key quotation that Elster often refers to is this:

It does not pay, in the terms of viability or of realized profit, to pay a price for information on unchanging aspects of the environment. It does not pay to review constantly decisions which require no review. These precepts do not imply merely that information costs must be considered in the definition of profits. For without observing the environment, or reviewing the decision, there is no way of knowing whether the environment is changing or the decision requires review. It might be argued that a determined profit maximizer would adopt the organizational form which calls for observing those things that it is profitable to observe at the times when it is profitable to observe them: the simple reply is that this choice of a profit maximizing information structure itself requires information, and it is not apparent how the aspiring profit maximizer acquires this information, or what guarantees that he does not pay an excessive price for it

[...]

At some level of analysis, all goal seeking behaviour is satisficing behavior. There must be limits to the range of possibilities explored, and those limits must be arbitrary in the sense that the decision maker cannot know that they are optimal. (Winter 1964, quoted from Elster 1983, pp. 139-140, my emphasis)

According to Elster (1983 ETC, p. 139), Winter has by this argument demonstrated "that the neoclassical notion of maximizing involves an infinite regress and should be replaced by that of satisficing." And he adds: "The argument appears to me unassailable..."
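Winter's regress can be made concrete with a toy search model. This sketch is my own illustration, not Winter's or Elster's: suppose offers are drawn uniformly from [0, 1] and each extra observation costs c. If the distribution is known, the stopping rule is fully determinate: stop as soon as the expected improvement from one more draw falls below c. The regress enters through the assumption itself, since the rule presupposes knowledge of the offer distribution, and deciding how much to spend learning that distribution raises the very same question one level up.

```python
import random


def marginal_benefit(best):
    # Expected improvement from one more draw when offers are uniform
    # on [0, 1] and the best offer found so far is `best`:
    # integral from best to 1 of (x - best) dx = (1 - best)^2 / 2.
    return (1.0 - best) ** 2 / 2.0


def search(cost, seed=0):
    # Keep drawing while the expected gain of one more observation
    # exceeds its cost. This rule is optimal *only because* the offer
    # distribution is assumed known -- which is where Winter's regress
    # enters: estimating the distribution would itself require
    # information, and the optimal amount of that information is
    # exactly the question we started with.
    rng = random.Random(seed)
    best, draws = 0.0, 0
    while marginal_benefit(best) > cost:
        best = max(best, rng.random())
        draws += 1
    return best, draws


best, draws = search(cost=0.02)
# Stops once (1 - best)^2 / 2 <= 0.02, i.e. once best >= 0.8.
```

The determinate stopping point (here, any offer of at least 0.8 when c = 0.02) is what disappears once the distribution is no longer given: with an unknown distribution, `marginal_benefit` cannot be computed, only estimated, and choosing how much to invest in that estimate reproduces the original problem.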