In Risk and Rationality, I present an alternative to the orthodox theory of rational decision-making.  According to the orthodox theory, expected utility maximization, rational individuals may differ in their beliefs and desires, but they must maximize expected utility relative to these.  My alternative, risk-weighted expected utility maximization, allows rational individuals to differ in their beliefs, desires, and risk-attitudes--in the weight they give to what happens in worse states versus what happens in better states, or in how much they care about prudence versus venturesomeness.  I show that the preferences of risk-weighted expected utility maximizers are indeed rational, and that they cannot be captured by the orthodox theory, even by sophisticated versions of it.  Furthermore, I show that beliefs, desires, and risk-attitudes can each be derived from preferences.
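
Schematically, for a finite gamble g that yields outcome x_i with probability p_i, with the outcomes ordered from worst to best, the risk-weighted expected utility is

\[ REU(g) = u(x_1) + \sum_{i=2}^{n} r\Big(\sum_{j=i}^{n} p_j\Big)\big(u(x_i) - u(x_{i-1})\big), \]

where the risk function r is non-decreasing with r(0) = 0 and r(1) = 1.  Setting r(p) = p recovers orthodox expected utility; a convex risk function such as r(p) = p^2 gives diminished weight to improvements that occur only in the better states, and so models risk-avoidance.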

A precis, written for a symposium, can be found in 'Precis of Risk and Rationality'.  A brief explanation of the conditions that allow us to derive beliefs, desires, and risk-attitudes can be found in 'Risk and Tradeoffs'.

I defend risk-weighted expected utility theory against objections raised in two symposia.  My first set of replies appears under the title 'Revisiting Risk and Rationality: A Reply to Pettigrew and Briggs', and my second, which includes a precis, under 'Replies to Commentators'.

In 'Instrumental Rationality, Epistemic Rationality, and Evidence-Gathering', I explore an upshot of risk-weighted expected utility maximization: rational individuals may prefer not to receive new information, even when that information is cost-free.  This point has interesting implications for the rationality of faith.
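
A toy illustration, with purely illustrative numbers: let r(p) = p^2 and suppose three equiprobable states s1, s2, s3.  Act A yields utility 1 in every state; act B yields utility 5 in s2 and 0 elsewhere.  Without evidence, REU(A) = 1 and REU(B) = r(1/3)(5) = 5/9, so the agent takes A.  Now offer free evidence about whether s1 obtains.  If s1 is announced, the agent takes A and gets 1; if not, then conditional on {s2, s3}, REU(B) = r(1/2)(5) = 5/4 > 1, so the agent takes B.  Evaluated before the evidence arrives, the evidence-gathering strategy is therefore the gamble that yields 0 in s3, 1 in s1, and 5 in s2:

\[ REU(\text{gather}) = 0 + r(\tfrac{2}{3})(1 - 0) + r(\tfrac{1}{3})(5 - 1) = \tfrac{4}{9} + \tfrac{4}{9} = \tfrac{8}{9} < 1 = REU(\text{decline and take } A). \]

So this risk-avoidant agent rationally refuses cost-free information.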

In 'Risk and Motivation: When the Will is Required to Determine What to Do', Dylan Murray and I explore an application of risk-weighted expected utility to the question of what role the will plays in deliberation.  We argue that the will is required to make trade-offs between what happens in worse states and what happens in better states; in short, that the will is required to determine risk-attitudes.

In 'Why high-risk, non-expected-utility-maximizing gambles can be rational and beneficial: The case of HIV cure studies', I explore an application of my work to medical research ethics.  I show that it can indeed be rational for subjects to participate in research trials with "adverse" risk-benefit ratios, and argue that we ought to let them do so.

In 'Taking Risks behind the Veil of Ignorance', I explore an application of my work to distributive ethics.  In particular, I provide a new argument for a natural view: that the interests of the relatively worse off matter more than the interests of the relatively better off.  On this view, it is more important to give some benefit to those who are worse off than to give that same benefit to those who are better off, and it is sometimes (but not always) more important to give a smaller benefit to the worse off than to give a larger benefit to the better off.  I refer to this position as relative prioritarianism.
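
In miniature, again with purely illustrative numbers: behind the veil, a chooser who is equally likely to occupy either of two social positions evaluates a distribution by

\[ V = u_{\text{worse}} + r(\tfrac{1}{2})\big(u_{\text{better}} - u_{\text{worse}}\big), \]

so (assuming the positions do not switch order) a unit of utility to the worse-off position raises V by 1 - r(1/2), while a unit to the better-off position raises it by r(1/2).  For a risk-avoidant chooser with r(1/2) = 1/4, a benefit of 1 to the worse off (worth 3/4) outweighs a benefit of 2 to the better off (worth 1/2), but not a benefit of 4 (worth 1)--matching the claim that priority for the worse off holds sometimes, but not always.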