Bias and Risk in Behavioral Polls and Studies – A Cautionary Tale for Public Policy


by James C. Sherlock

Here at BR, both the authors and commenters spend a great deal of time discussing the outcomes of behavioral polls and studies.

Taxes, mandates, and bans are behaviorally informed. As are most public policies.

But behavioral science carries levels of risk and bias far more prevalent than in the hard sciences.

As a citizenry, we generally understand that polls predicting future behavior can prove unreliable, because we have seen political polling miss.

Most expect polls about how we feel about our lives to be imperfect, but not purposely so. Yet some polls are designed to support a specific political position.

We probably understand a lot less about the risks and biases in the behavioral studies that govern most public policy, because assessing them requires technical expertise that most of us, including most elected politicians and political observers, do not possess.

Which is a key reason such policies often go wrong.

Quality of Polling. Remember the red wave of 2022? I don’t either. And professional pollsters were generally trying to get it right. To predict future voting. Political pollsters know that both sides start with 47-48% of the voters in every election.

They are polling in order to understand the persuadable. And sort the wheat from the chaff in their poll results.

And, importantly, political pollsters are trying to get it right to preserve their reputations, and their incomes.

Questions and their design matter to outcomes of polls. So do various methods of trying to wring meaning out of them.

Sometimes, as in studies, the refs in observational poll design are also players with a rooting interest in the outcome.

Six months ago I wrote an article exposing the purposeful and very public official corruption in 2020 of Virginia’s previously excellent, scientifically structured Authoritative School Climate Survey by people with political/dogmatic goals.

They destroyed the existing question base and shaped a new one in order to get results they wanted to support public policies that they had created.

Quality of Behavioral Studies. In 2013 the Proceedings of the National Academy of Sciences published a meta-analysis (a study of studies) that recommended caution in accepting behavioral studies, especially those authored in the United States.

Titled US studies may overestimate effect sizes in softer research, it urged caution:

We found that primary studies whose outcome included behavioral parameters were generally more likely to report extreme effects, and those with a corresponding author based in the US were more likely to deviate in the direction predicted by their experimental hypotheses, particularly when their outcome did not include additional biological parameters.

Behavioral studies have lower methodological consensus and higher noise, making US researchers potentially more likely to express an underlying propensity to report strong and significant findings.

Behavioral science-based studies were often assessed to be biased both by small sample sizes and by the confirmation bias of researchers, who tend to find what they start out looking for. As that study puts it, too many tend:

to deviate in the direction predicted by their experimental hypotheses.

Nice way of saying the authors cheated when studying the results of their own hypotheses.

It was written by Daniele Fanelli and John P.A. Ioannidis, two renowned meta-research scholars. Both are now at Stanford.

They are also responsible for the famous report Meta-assessment of bias in science published in the same Proceedings in 2017.

That one was not limited to behavioral sciences.

If you remember the extensive discussions of the reproducibility crisis in hard and soft science studies that even made the popular press, that report was their primary source.

It concluded:

The social sciences, in particular, exhibited effects of equal or larger magnitude than the biological and the physical sciences for most of the biases and some of the risk factors.

Yet we cite behavioral studies all the time, and in the process give many of them far more credit in public policy and in debates than they deserve.

Public Policy. My personal focus in this blog is on Virginia education and public health policy, both subject primarily to behavioral analysis.

Education.  

I have pressed in this space for consideration in the field of education of only those studies assessed by the Institute of Education Sciences' What Works Clearinghouse to be both scientifically valid and to provide strong evidence.

We spent a lot of ink back and forth about the single major study on the effectiveness of Positive Behavioral Interventions and Supports (PBIS).

I used as my reference the conclusions of the Institute of Education Sciences about that study, for the simple reason that they scientifically review the construct and evidence of studies, which I cannot do.

There are many other examples of public education policy changing based not on evidence, or on questionable evidence, but on politics.

Public health.

Admit it. You have been waiting for this discussion to turn to studies of masking and other physical interventions for airborne viruses as well as the COVID isolation recommended by the CDC and enforced by public policy.

Here is the latest multi-national meta-analysis of physical interventions, published three weeks ago. It is the sixth version, going back to 2006. The conclusion that I find most interesting does not pick a side:

The high risk of bias in the trials, variation in outcome measurement, and relatively low adherence with the interventions during the studies hampers drawing firm conclusions.

People simply do not do, or do not do consistently or well, what they are told to do in the area of public health. Hardly shocking.

Yet firm conclusions were drawn in that meta-analysis about the efficacy of medical/surgical masks and N95/P2 respirators worn properly.

Which were by and large not the types of masks people wore. And even fewer wore them properly.

Public policy enforced masking during COVID anyway. Even on children, who were the least likely both to suffer ill effects from COVID and to wear masks properly.

And it kept them home. As directed by the teachers' unions in direct contact with the CDC.

But COVID was hardly the first pandemic.

The authors of Social isolation and its impact on child and adolescent development: a systematic review, a meta-analysis, screened 519 articles published worldwide between 1990 and 2000 on the effects of social isolation on child development.

Using prescreening to eliminate all but 83, and Agency for Healthcare Research and Quality (AHRQ) standards for the rest, the researchers found 12 that met high quality standards.

Those studies showed the same results later seen from COVID isolation.

So the results of COVID isolation were not just predictable but predicted.

Now comes CDC’s Understanding the Pandemic’s Impact on Children and Teens (Understanding).

It claims to “describe the COVID-19 pandemic’s profound effect on the physical and mental well-being of children and teens” using data about pediatric emergency department visits.

The data should be solid. They are from the CDC’s National Syndromic Surveillance Program (NSSP).

But if you read Understanding, the impacts on pediatric health were less from the pandemic itself than from the preventive measures recommended by CDC.

For example, weekly visits among older children (5–11) and teens (12–17) increased for self-harm, drug poisoning, and psychosocial concerns during 2020, 2021, and 2022 when compared to 2019.

The other report shows that teenage girls may have experienced the largest overall increase in behavioral and psychosocial concerns. The proportion of ED visits for eating disorders doubled and tic disorders more than tripled in this population as well. Other studies have also noticed increases in tic-like symptoms among girls during the pandemic.

Note the use of “during the pandemic,” not “caused by the pandemic.”

Children and teens were largely at home and isolated for very long periods from their friends and extended families.

As recommended by the CDC.

Now they tell us about the mental health disaster. That was predicted before COVID.

They do not discuss its equally disastrous effects on education.

Bottom line. Both polling and behavioral studies are necessary, often headline-ready, and regularly flawed features of modern life.

We authors at BR, right and left, do the best we can to use them properly. But it is and will remain a crapshoot.

Some regular commenters disagree, often angrily and at length, from the other side of the culture wars.

But, for everyone, caveat emptor.

Updated Feb 18 at 1745 to insert reference to pre-COVID findings of the effects of social isolation on children.