Societal Preferences for Healthcare

Research into how to allocate our scarce healthcare resources

Decision-making in healthcare

I recently helped conduct a workshop on Measuring Societal Preferences along with Dean Regier and Helen McTaggert-Cowen at the Priorities in Health 2012 conference in Vancouver, BC.  The workshop covered why we might be interested in measuring societal preferences, qualitative approaches to eliciting preferences, and quantitative approaches, including experimental design for choice experiments and a brief overview of stated preference methods.

My contribution to the workshop (my slides are here) was a discussion of the theoretical underpinnings for measuring societal preferences in healthcare, and specifically why we might want to try to measure people's preferences.  Welfare economics has conventionally taken a "welfarist" perspective, which holds that individuals are the best judges of their own well-being, and that each individual's well-being is as important as everyone else's.  It is therefore inappropriate for anyone, and most particularly 'society', to try to make decisions on behalf of others.  Under this perspective, the only fair way that we as a society can decide whether or not an option is worth pursuing (for example, should the government buy a new piece of medical equipment?) is to check whether each individual citizen feels better off with the new option.  If everyone says yes, then we should buy the new equipment.  If anybody, even just a single person, says they feel worse off, then we should not.  Because one person's well-being is just as important as everyone else's, we cannot disregard anyone's well-being for the sake of "the greater good": to do so would be to prioritize one person's well-being over another's.

However, because most societal decisions involve 'winners' and 'losers' (in our example, everyone might be taxed in order to buy a piece of equipment that only a few of them may ever need), a strict welfarist perspective is somewhat impractical.  In addition, the requirement that resources cannot be redistributed unless everyone agrees means that blatantly unfair allocations, for example one in which all resources in society are held by a single individual and everyone else has nothing, must be treated as acceptable so long as that single person objects to redistribution.  For this reason, the strictly unanimous decision rule has been modified to allow for the possibility of the winners compensating the losers.  If the winners from a particular decision could, in theory, compensate the losers and still remain better off than they were before the decision, the new program has an overall net benefit to society and we should proceed; this is often called the potential-compensation (or Kaldor-Hicks) test.  Welfare economics takes the position that whether or not compensation actually takes place is a political, not an economic, issue; all that matters from an economic perspective is that there is the potential for compensation that leaves everyone at least as well off as before the decision.  Notice, though, that we are now allowing some people's preferences (the winners') to override other people's preferences (the losers').  This combination of overriding some people's preferences, while also allowing for the possibility of winners compensating losers, is known as the "extra-welfarist" perspective ('extra' in the sense of "factors beyond just individual welfare").
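
To make the potential-compensation test concrete, here is a minimal sketch in Python.  The dollar figures are invented purely for illustration and are not taken from any real program; the point is only that the test asks whether the winners' total gain is large enough that they could, in principle, fully compensate the losers and still come out ahead.

```python
# Hypothetical potential-compensation (Kaldor-Hicks) test.
# All dollar figures below are invented purely for illustration.

def passes_potential_compensation_test(gains, losses):
    """Return True if the winners' total gain is large enough that they
    could, in principle, fully compensate the losers and still be better off."""
    total_gain = sum(gains)   # winners' gains, in dollars
    total_loss = sum(losses)  # losers' losses, in dollars (positive numbers)
    return total_gain > total_loss

# Example: buying the new equipment benefits a few patients a great deal,
# while every taxpayer loses a little.
winners_gains = [5000, 4000, 3000]   # three patients who need the equipment
losers_losses = [10] * 1000          # a small tax on 1,000 citizens

if passes_potential_compensation_test(winners_gains, losers_losses):
    print("Net benefit to society: the winners could compensate the losers.")
else:
    print("No net benefit: even redistributing all the gains could not cover the losses.")
```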

This decision rule works well in most cases, ensuring that decisions with a net benefit to society can proceed on the grounds that they serve the greater good, while the losers can be compensated for their losses.  Things are more complicated, though, when we consider healthcare decisions.  A new piece of healthcare equipment may help a person live a longer life, or enjoy a better quality of life.  Such benefits are difficult to value in dollars and cents, and so they are more often measured in terms of quality-adjusted life years, or QALYs, but maximizing such benefits is still the primary objective of conventional healthcare decision-making.  It is important to recognize, though, that it is difficult to compensate people for being 'losers' in healthcare: people would probably find it distasteful to be offered money in return for poorer health, and even if they didn't, it is impossible to compensate someone who has died as a result of a decision to prioritize others over them.  This makes it difficult to justify an exclusive focus on maximization, while disregarding distributional issues, when making decisions about how to allocate (inevitably scarce) healthcare resources.
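
For readers unfamiliar with the QALY, here is a minimal sketch of how quality-adjusted life years are typically calculated: years of life are weighted by a quality-of-life score between 0 (dead) and 1 (full health).  The health states, weights, and durations below are hypothetical numbers chosen purely for illustration.

```python
# Illustrative QALY calculation: each period of life is weighted by a
# quality-of-life score between 0 (dead) and 1 (full health).
# The health states, weights, and durations below are hypothetical.

def qalys(health_profile):
    """health_profile: list of (years, quality_weight) tuples."""
    return sum(years * weight for years, weight in health_profile)

# Without the new equipment: 2 years in poor health.
without_treatment = [(2, 0.4)]
# With the new equipment: 1 year recovering, then 4 years in good health.
with_treatment = [(1, 0.6), (4, 0.85)]

gain = qalys(with_treatment) - qalys(without_treatment)
print(f"QALYs gained from treatment: {gain:.2f}")  # 4.0 - 0.8 = 3.2
```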

In attempting to incorporate distributional issues (also known as equity or distributive justice) in order to make such decisions ‘fairly’, we must also recognize that the desirability of any particular distribution of resources is a value judgment: what seems fair to one reasonable individual may seem unfair to another.  It is the intrinsic nature of such value judgments that they cannot be resolved by logic alone.  Despite this reality, though, we are still faced with the need to allocate societal resources in the fairest way possible.  I would argue that the most straightforward solution to this problem is to ask people what distribution they would prefer.  This is the basis of the Communitarian approach to resource allocation: resources should be allocated in a way that reflects the preferences of the community.  These preferences determine the objectives of the healthcare system, and the degree to which these preferences are satisfied in turn determines the value the community gets from the healthcare system.

Simple yes/no or rating-scale questions are likely too blunt an instrument for eliciting preferences over allocation decisions, most importantly because they do not force respondents to confront the trade-offs between different allocations.  As discussed in an earlier post, it is crucial to recognize that a decision to give a particular group higher priority necessarily means that another group must receive lower priority.  Such preferences are therefore better elicited using discrete choice experiments, or other choice-based stated preference methods such as constant-sum paired comparisons.  Quantitative discrete choice methods were presented by Dean Regier, and if you are interested in more detail, his slides can be accessed here.
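
As a rough illustration of how a discrete choice experiment treats these trade-offs, the sketch below computes the probability that a respondent chooses one hypothetical allocation over another under a simple linear utility (a conditional logit model).  The attributes and preference weights are entirely made up; in a real experiment the weights would be estimated from many respondents' observed choices rather than assumed.

```python
import math

# Hypothetical conditional-logit choice between two allocation profiles.
# Attributes and coefficients are invented for illustration only; in practice
# the coefficients would be estimated from respondents' observed choices.

# Assumed preference weights for each attribute of an allocation.
coefficients = {
    "qalys_gained_per_patient": 0.8,   # more health gain is preferred
    "treats_sickest_first": 1.2,       # priority to the most severely ill
    "wait_time_months": -0.3,          # longer waits are worse
}

def utility(profile):
    """Linear utility of an allocation profile under the assumed weights."""
    return sum(coefficients[attr] * value for attr, value in profile.items())

allocation_a = {"qalys_gained_per_patient": 2.0, "treats_sickest_first": 0, "wait_time_months": 3}
allocation_b = {"qalys_gained_per_patient": 1.0, "treats_sickest_first": 1, "wait_time_months": 6}

# Logit choice probability: A is chosen with probability exp(U_A) / (exp(U_A) + exp(U_B)).
u_a, u_b = utility(allocation_a), utility(allocation_b)
p_a = math.exp(u_a) / (math.exp(u_a) + math.exp(u_b))
print(f"Probability of choosing allocation A: {p_a:.2f}")
```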

The Price of Life

Filmmaker Adam Wishart has made a wonderful documentary about healthcare prioritization that I think illustrates these issues very well.  It focuses on one decision by NICE, the National Institute for Health and Clinical Excellence in the UK, about whether or not the UK healthcare system, the National Health Service (NHS), should fund Revlimid, a very expensive drug used to treat terminally ill patients with myeloma, a cancer of the blood.

It follows a number of patients with terminal myeloma, for whom this decision is literally life or death.  It interviews the developer of the drug, who argues that high drug prices are essential to encourage the innovation that leads to the next generation of lifesaving drugs.  Finally, it shows the dilemma faced by health system administrators who are responsible for allocating a budget between different groups of citizens and patients.  If this expensive drug is funded, it means that they will have to find the money to pay for it elsewhere in their budget.  If you’ve participated in my preferences survey, you’ll recognize their dilemma.

From my point of view, a key moment comes at the 48-minute mark of the film, where one of the decision makers expresses the essential truth of healthcare priority setting: if they fund this very expensive drug rather than funding less expensive treatments for patients with different conditions, it implies that an extra year of life lived by myeloma patients is worth more than an extra year of life lived by those other patients.  This is the reality of "priority setting": to give one group higher priority, you have to give another group lower priority.

As mentioned in the film, many people are unhappy with the QALY, or quality-adjusted life year, as the basis for how these decisions are currently made.  The QALY approach assumes that society considers one additional QALY to be equally valuable regardless of who gets it and how many they already have.  But the preliminary results of my survey suggest that is not how most people feel: the overwhelming majority of respondents so far have been answering in ways that suggest they do care about who gets that additional QALY.  The objective of my research is to explore how we can make these decisions in a way that we can all agree on.
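
One way to see what is at stake is to contrast plain QALY maximization, which values an additional QALY identically no matter who receives it, with a hypothetical scheme in which QALY gains are weighted by societal preferences for who should be treated.  The programs, QALY figures, and weights below are invented for illustration only; they are not results from my survey.

```python
# Contrast of plain QALY maximization with a hypothetical weighted alternative.
# All numbers are illustrative; they are NOT survey results.

programs = [
    # (name, QALYs gained, hypothetical societal weight on this patient group)
    ("Drug for terminally ill patients", 200, 1.8),
    ("Hip replacements for otherwise healthy patients", 300, 1.0),
]

def best(programs, weighted):
    """Pick the program with the highest (optionally weighted) QALY gain."""
    key = (lambda p: p[1] * p[2]) if weighted else (lambda p: p[1])
    return max(programs, key=key)[0]

print("Plain QALY maximization funds:", best(programs, weighted=False))
print("Preference-weighted ranking funds:", best(programs, weighted=True))
```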

I would be very curious to hear your thoughts on this documentary.  If you were the committee chairman at the end, how would you have voted?  Do you accept the need to deny some patients treatment, or is there a different solution where all patients can receive all beneficial treatments?

Public participation and the delusion of objectivity in healthcare decision-making

Much of my research into societal preferences starts with the idea that all members of society should have a say in how healthcare resources are allocated.  However, any discussion around the role of public participation in healthcare decision-making inevitably raises the issue of objectivity — in particular, the idea that ‘decision makers’ have it and the public doesn’t.

The classical definition of objectivity is "having a reality independent of the individual."  That is, an objective truth can be recognized by everyone without requiring any interpretation or explanation.  For example, the fact that the Empire State Building is taller than the tallest NBA basketball player is an objective truth.  Any individual can accept this fact as true without requiring any explanation or persuasion.  On the other hand, arguing that blue is a better colour than purple is not an objective truth; whether one chooses to accept it as fact depends on some form of persuasion, as well as the particular tastes and perspective of the individual.  The fact that one individual may prefer blue over purple is a subjective truth: it is true from the perspective of that particular individual, but it is not necessarily true for all individuals.

When we as a society need to make an important decision between subjective truths, we conventionally rely on small groups of decision makers who, because of their knowledge, expertise and professionalism, are considered to be uniquely “impersonal, impartial, unbiased and neutral.”  Leaving decisions to impartial, objective decision makers is known as procedural objectivity.  Since the process of making the decision is ‘objective’, it is expected that the result of that decision will also be objective.  That is, the alternative chosen will be objectively better than the alternative not chosen.  ‘Objectively better’ in this context means that a decision can be easily accepted by all individuals, with no further explanation or persuasion.

Like blue versus purple, the ideal allocation of scarce healthcare resources is not an objective truth.  It is not possible to look at two alternative allocations of resources between different groups of patients and say that one is objectively better than the other.  Any individual’s ideal allocation of scarce healthcare resources depends on their preferences: do they prefer to treat younger or older patients; the sickest patients or those most likely to return to full health?  A case can be made for almost any allocation, but as with blue versus purple, the strength of this case ultimately rests upon tastes, perspective and persuasion, not objective truth.  And so healthcare has tended to rely on procedural objectivity and small groups of decision makers to choose the objectively better allocation of resources.

But as Arthur Fine so eloquently puts it, procedural objectivity represents “the view from nowhere, and of no-one in particular.”  By carefully excluding personal perspectives from decisions, we make it impossible to understand the very nature of subjective truths: that truth depends on your perspective!  And to say that such decisions can be objectively better implies that they can be accepted by everyone as truth, with no further explanation or persuasion.   However, it is likely that a cancer patient, told that it will be objectively better for society to no longer fund her drugs, will require a little more explanation and persuasion about this decision.

Fine argues that the fundamental point of procedural objectivity in societal decision-making is not truth, but trust.  People don’t value procedural objectivity because they believe it arrives at a decision that represents an objective truth, they value it because they believe it arrives at a decision they can trust.  In this view of objectivity as trust, objectivity represents anything that improves trust in a decision.  In some cases, trust may be enhanced by the impartiality of professional decision makers.  In other cases, trust may be enhanced by a broader process, with more personal perspectives.  In the case of the cancer patient above, she does not need to believe that such a decision is objectively best, only that she can trust the process by which the decision was made.

It is in enhancing trust that the public has an important role to play in healthcare decision-making.  Joanna Coast and colleagues asked a group of healthcare decision makers (government bureaucrats, physicians, and hospital administrators) as well as members of the general public about the role of citizens in healthcare rationing.  They found that both the decision makers and the general public felt that citizens lack sufficient 'objectivity' to participate in healthcare rationing.  Both groups viewed objectivity as the ability to make decisions based solely on facts while setting aside any emotion or empathy.  However, as I suggest above, judgments about the appropriate allocation of healthcare resources rest on subjective values, and cannot be resolved by objective facts alone.

Support for this view of objectivity as trust comes from Mari Broqvist and Peter Garpenby, who asked Swedish citizens about their willingness to accept healthcare rationing.  Participants felt that citizenship implies a willingness to stand aside for the benefit of others, but also an expectation that others will stand aside when they themselves have greater needs.  However, participants also felt that insufficient knowledge about why some patients were given higher priority would erode their trust in this fragile social contract and make them less willing to stand aside for others.  Broad public involvement in healthcare decision-making was viewed as a way to enhance understanding and trust.

This suggests to me that concern over the impartiality of citizens in healthcare decision-making is misplaced.  The allocation of healthcare resources is an inherently subjective process and it is not improved by ignoring this truth.  Rather than impartiality, the system depends upon trust, and this trust is best enhanced by the participation of all citizens.  It is not that healthcare decision-making is too important to include citizens; rather, healthcare decision-making is too important not to include citizens.

This post goes a bit beyond my original intent of using this blog simply as a way of communicating survey results to participants.  I found this exercise challenged me to organize my thoughts and present my argument in a clear manner without resorting to (too many) academic references.  It was useful for me, but I am curious to know if you got anything out of it.  Please let me know if I’ve convinced you that citizens have an essential role to play in healthcare decision-making, or if you think there is some glaring hole in my argument.  Is my interpretation of objectivity completely at odds with yours?  Is this the most boring thing you’ve ever read and you want the last 5 minutes of your life back?  I look forward to your comments.