
Association, Causation, and the Fuzzy World of the Bayesian p-Value in Class Actions


David H. Kaye, Distinguished Professor of Law and Weiss Family Faculty Scholar at the Penn State School of Law, recently published a fascinating commentary in the BNA Insights section of the BNA Product Safety & Liability and Class Action Reporters, entitled Trapped in the Matrixx: The U.S. Supreme Court And the Need for Statistical Significance.  In the article, Professor Kaye applies his vast expertise in the areas of scientific evidence and statistics in the law to add some color to the U.S. Supreme Court’s March 2011 decision in the securities class action Matrixx Initiatives, Inc. v. Siracusano.

For those not familiar with Matrixx, the case involved allegations that the makers of the cold remedy product, Zicam, withheld information from investors suggesting that the product may cause a condition called anosmia, or loss of smell.  At the risk of oversimplifying, the holding of Justice Sotomayor’s unanimous opinion can generally be summarized as follows: in a securities fraud action arising out of an alleged failure to disclose information about a possible causal link between a product and negative health effects, the plaintiff need not allege that the omitted information showed a statistically significant probability that the product causes the ill effects in order to establish that the information was material.  The decision reaffirms the applicability of the reasonable investor standard for materiality announced in Basic Inc. v. Levinson, which looks to whether the omitted information would have “significantly altered the ‘total mix’ of information made available” to investors. 

Thus, Matrixx eschews a bright-line rule (statistical significance) in favor of a more flexible “reasonable investor” standard.  Professor Kaye does not take issue with the Court’s rejection of a bright-line rule requiring a plaintiff to plead (and ultimately, prove) statistical significance of omitted information in the securities context.  Instead, he is critical of the Court’s failure to articulate in more detail the technical shortcomings of using statistical significance as a bright-line rule, and he cautions against interpreting Matrixx as suggesting that something less than statistical significance would be appropriate to prove a causal link between a product and disease in other contexts.  In other words, it is one thing to say that the causal link does not have to be statistically significant in order for information about an association between the product and disease to be meaningful to investors or consumers.  It is another thing to say that statistical significance is unimportant when what must actually be shown is the causal link itself, such as in the toxic tort context.

Although I followed and generally agreed with Professor Kaye’s article from a legal perspective, there were some technical concepts discussed in the article that were admittedly a bit over my head.  Fortunately, I knew just who to ask for more insight, having recently worked with Justin Hopson of Hitachi Consulting on two CLE presentations discussing the use of statistics in class actions.  Here are some of Justin’s observations after reading the article:

  • The article is well-written.  Professor Kaye would make a good expert witness.
  • Kaye identifies studies showing that zinc sulfate caused anosmia.  He does not comment on zinc acetate or zinc gluconate, the active ingredients in Zicam.  It sounds like the causal link may already have been known and available to use.  So this was not a case about “arbitrary statistics.”  Instead, the issue had to do with the measurement of an understood causal relationship.
  • Kaye describes the standard applied in Matrixx as looking to whether a reasonable investor would find the omitted information “sufficiently extensive and disturbing” to induce him to make a different investment decision.  Nonlawyer experts may be tempted to ask for a formulaic definition for this phrase, and it may not be obvious without explanation that the standard would leave the question about what is “sufficiently extensive and disturbing” to the factfinder.
  • Kaye talks about the historical treatment of .05 as the threshold “significance level” that makes something statistically significant.  I’ve often thought of the “significance level” as tied to the relative degree of the “risks” involved.  If the risk of being wrong is death, then is 1-in-20 OK?  You really have to think through: What do Type I and Type II errors look like in my experiment?  What are the implications?  (A rough sketch of that trade-off appears after this list.)
  • If one were really attempting to compute the potential causal connection between Zicam and anosmia, it might help to understand why the FDA suggested a background rate in “all cold remedies”.  If the causality is related to zinc sulfate, then isn’t that the common population?
  • The point that the 0.05 “convention” is somewhat arbitrary is an important one.  Kaye observes that “[a] useful rule of complaint drafting must avoid inquiries into the soundness of expert judgments about the population, the test statistic, and the model.”  Hmm…so how do we get a useful rule if we cannot attack the fundamentals?  Indeed, Kaye’s next point is that Bayesian analysis should sometimes be used.  All inferential statistics rest on assumptions, and any appropriate standard of pleading or proof should be flexible enough to allow the opposing lawyer to challenge every single assumption.
  • The observation that “the p-value, by itself, cannot be converted into a probability that the alternative hypothesis is true” is also very important.  This is a common misunderstanding in beginning stats because we teach, “I fail to reject the null hypothesis, or I reject the null and accept the alternative hypothesis.”  It becomes very important to specify the null and alternative hypotheses in exhaustive and mutually exclusive terms; otherwise some other, unspecified conclusion may be the correct one.  (The second sketch after this list shows how a p-value and a posterior probability can diverge.)
  • The one thing I might challenge is the assertion that adverse event reports (AERs) are “haphazardly collected data”.  I’m not sure why Kaye chose this phrase.  The AERs should be observations.  It is only their cause that is in doubt.  It is not their function to establish the causal link.  Instead, the link would have to be established with other data, such as through a clinical trial using a well-organized data collection process.
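
To make Justin’s Type I/Type II point a little more concrete, here is a rough simulation sketch.  The effect size, sample size, and number of simulated studies are made-up numbers chosen purely for illustration; only the 0.05 cutoff comes from the convention discussed above.

```python
# Illustrative simulation of Type I and Type II error at the conventional
# alpha = 0.05 threshold.  The effect size, sample size, and simulation count
# are arbitrary choices made up for this sketch.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n, n_sims = 50, 10_000

# Type I error: the null hypothesis is true (no effect), but we reject it anyway.
type1 = 0
for _ in range(n_sims):
    sample = rng.normal(loc=0.0, scale=1.0, size=n)   # true mean really is 0
    if stats.ttest_1samp(sample, popmean=0.0).pvalue < alpha:
        type1 += 1

# Type II error: a modest real effect exists, but the test fails to detect it.
type2 = 0
for _ in range(n_sims):
    sample = rng.normal(loc=0.3, scale=1.0, size=n)   # true mean is 0.3
    if stats.ttest_1samp(sample, popmean=0.0).pvalue >= alpha:
        type2 += 1

print(f"Type I error rate:  {type1 / n_sims:.3f}  (by design, close to alpha = {alpha})")
print(f"Type II error rate: {type2 / n_sims:.3f}  (depends on the effect size and n)")
```

Tightening the threshold below 0.05 buys fewer false alarms at the price of more missed effects, which is why the “right” significance level depends on what each kind of error costs in the particular case.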
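And here is a second sketch of the point that a p-value is not the probability that the alternative hypothesis is true.  The user count, adverse-event count, the two candidate rates, and the 50/50 prior below are all hypothetical numbers I am assuming for illustration; nothing here comes from the Matrixx record.

```python
# Illustrative comparison of a p-value with a Bayesian posterior probability.
# All inputs are hypothetical numbers invented for this sketch.
from scipy import stats

n_users = 10_000                     # hypothetical number of product users
observed_events = 16                 # hypothetical number of adverse-event reports
rate_null, rate_alt = 0.001, 0.002   # hypothetical background vs. elevated rates

# Frequentist side: one-sided p-value for seeing at least this many events
# if the background rate were the whole story.
p_value = stats.binom.sf(observed_events - 1, n_users, rate_null)

# Bayesian side: posterior probability of the elevated rate, assuming
# equal prior odds between the two simple hypotheses.
prior_alt = 0.5
like_null = stats.binom.pmf(observed_events, n_users, rate_null)
like_alt = stats.binom.pmf(observed_events, n_users, rate_alt)
posterior_alt = (prior_alt * like_alt) / (prior_alt * like_alt + (1 - prior_alt) * like_null)

print(f"one-sided p-value under the background rate: {p_value:.3f}")
print(f"posterior P(elevated rate | data):           {posterior_alt:.3f}")
```

With inputs like these, the one-sided p-value sits near the 0.05 line while the posterior probability of the elevated rate stays well short of 95 percent.  The two numbers answer different questions: the p-value asks how surprising the data would be if only the background rate were at work, not how likely it is that something more is going on.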

