Following her guest lecture, Wendy Wagner (University of Texas at Austin) speaks to us about the many facets of incomprehensibility in legal texts and law practice. She also suggests approaches to overcome incomprehensibility in law.
You refer to several aspects of incomprehensibility of legal texts, such as information asymmetries, comprehension asymmetries, blind spots, power and transparency. Could you briefly explain the relationships between these terms in your work?
“Comprehension asymmetries” often run in parallel to the well-established “information asymmetries.” Information asymmetries occur when a party with superior access to relevant information hides it from his or her audience. Comprehension asymmetries occur when a party with superior processing abilities fails to clearly communicate the core message in the information to an audience that is less able to make sense of it. Comprehension asymmetries can arise from intentional obfuscation or databombing, or they can arise from neglect: a sophisticated party simply fails to ensure that key information is communicated in a way that is understandable to the audience.
“Transparency,” at least as that term is widely used in the U.S., targets attempts to correct information asymmetries only. Transparency reforms insist on access to the information, but generally neglect the parallel imperative of ensuring that the shared information is understandable to the intended audience. However, if a privileged party “shares” information but does so in ways that are almost certain to overwhelm the target audience (by databombing, obfuscating, complexifying, etc.), then the end result is effectively the same as if the privileged party had hidden the information.
A “blind spot” in the design of some legal processes is the tendency of legal architects to ignore this parallel requirement of meaningful communication in “transparent” systems. If the only requirement is one of information-sharing, then this solitary requirement could even backfire, leading to data-dumping and other tactics that have the effect of making the target audience’s comprehension of the information worse, while complying with the letter of the law.
You describe potential remedies to overcome incomprehensibility, such as ‘different communication rather than more information,’ meaningful communication, incentives, and adjustments in the whole process of publishing legal information by agencies. In your view, what would be the most feasible and effective means of achieving more comprehensible legal information?
To remedy this comprehensibility problem, strong incentives must be created for agencies and stakeholders (and other sophisticated parties) to communicate meaningfully with one another and to do so without playing information games. If an agency’s explanations or policies are effectively inscrutable to stakeholders, the agency in a reformed system must be sanctioned, for example by being subjected to much more rigorous judicial review. And if a stakeholder’s comment is excessively complicated in ways that impose unjustifiably high costs on the agency to decipher, then the stakeholder should be deemed to have waived his or her ability to challenge the agency if the comment is rejected. Only clearly communicated comments and rules enjoy deference; convoluted arguments and explanations are presumptively invalid. Comprehensibility, moreover, should be assessed first by ensuring that the legal rules institute strong incentives (sanctions and rewards) for meaningful communication and second by requiring sophisticated parties to demonstrate their efforts to communicate meaningfully. Text-based standards that attempt to “measure” comprehensibility will not be effective for a variety of reasons. Instituting strong incentives for meaningful communication is therefore critical.
You explained during your lecture how you got into studying incomprehensibility in legal information, which essentially seems to have arisen from your own struggle with certain legal texts. Now that you are well into the project, how does the project relate to your other work?
My book on incomprehensibility ties together a number of discrete problems that I have been studying over the years. These include: the tendency of sophisticated parties to make raw policy choices appear technically preordained in ways that undermine the public’s understanding and ability to participate (“the science charade” and “misunderstanding models”); the manipulation of science for policy by high-stakes groups or political officials (“bending science,” “white house science,” and “stealth deregulation”); poorly structured administrative process rules that encourage information-dumping (“filter failure”); the lack of intelligent policies about which party is best able to produce information relevant to understanding environmental risks (“commons ignorance”); and reforms that incentivize focused, meaningful communication between parties (“competition-based regulation” and “racing to the top”). I hope to continue connecting my past work with my future research on incomprehensibility, as well as immerse myself in the ongoing, excellent work by others that addresses these problems.
A study of ‘Big data’ might be your next project. Can you already share more information about what this might entail or where this might go?
Technical and scientific information has always presented challenges for democratic accountability, but the growing use of large databases and computational models in agency decision-making magnifies the risk of inscrutable agency decisions. Legal process requirements should in theory counteract the risk that the technicality and detail inherent in algorithms and datasets are out of the reach of parties’ understanding. But rather than encourage rigorous democratic deliberation in the use of these new tools, existing legal requirements in the U.S. are generally set up in ways that can perversely reward agencies for incomprehensibility and overly technical decisions.
In a reformed system, agencies must be required to comprehensibly explain the framing, algorithm choices, and procedures used to ensure the scientific integrity of their analyses, using best practices that include visual representations of models. Rewards (e.g., increased judicial deference for accessible explanations) are also essential to incentivize excellent models.