If you wanted to measure the length of the line above these words, would you use a straight ruler or one that’s bent into an angle? Silly question, clearly. A bent ruler would give you an incorrect answer, so you’d choose the straight one.
Rulers are designed to measure length. That’s their one job. Risk assessment tools are obviously more complex than rulers. They measure human behavior, which has many more facets than a line has length. Still, like rulers, risk tools attempt to distill the available information into a simple, useful, accurate measuring device—in this case, to figure out the likelihood of a future adverse event.
The information that risk assessment developers use comes from administrative system data; these data are the raw material from which they build their "ruler." In child welfare, as in all our human service and justice systems, administrative data reflect the historical, systemic racism and oppression inherent in our society. The over-surveillance and over-reporting, for example, of certain racial and ethnic groups (namely Black, Latinx, and American Indian/Alaska Native people) show up in our social services systems and ultimately in those systems' data.
A risk assessment developer who does not account for this bias in system data will build a tool that reflects it: in essence, a bent ruler for Black and brown families.
To correct for the bias, a tool developer can balance the predictive power of the tool with equity. This, in essence, straightens the ruler.
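As a rough illustration of what balancing predictive power with equity can mean in practice, the sketch below (with entirely hypothetical scores and outcomes, not drawn from any real tool) compares a risk score's false-positive rate across two groups. A large gap between groups is one common signal that the ruler is bent for one of them.

```python
# Minimal sketch (hypothetical data and threshold): checking whether a
# risk score flags one group's low-risk families more often than another's.

def false_positive_rate(scores, outcomes, threshold):
    """Share of cases with no adverse outcome that the tool still flags."""
    negatives = [s for s, y in zip(scores, outcomes) if y == 0]
    if not negatives:
        return 0.0
    return sum(s >= threshold for s in negatives) / len(negatives)

# Hypothetical risk scores (0-1) and outcomes (1 = adverse event occurred)
group_a = ([0.2, 0.7, 0.9, 0.4, 0.8], [0, 1, 1, 0, 0])
group_b = ([0.5, 0.8, 0.9, 0.6, 0.7], [0, 1, 1, 0, 0])

threshold = 0.6
fpr_a = false_positive_rate(*group_a, threshold)  # 1/3 in this toy data
fpr_b = false_positive_rate(*group_b, threshold)  # 2/3 in this toy data
```

In this toy data, the score flags a much larger share of group B's families who never experience an adverse outcome; that is the kind of gap a developer would then address through recalibration, reweighting, or an audit of the source data.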
You might legitimately ask: Why even make a risk assessment tool, or a ruler, if your materials are flawed in the first place? Why not use your intuition, your clinical skill or experience, or some other method to make decisions about families? Why not use a different measurement tool, or just eyeball it?
Research shows that despite their limitations, actuarial risk assessments lead to more consistent, more accurate, more reliable decisions than clinical skills alone. At the same time, no tool alone can, or should, inform decisions about a family without the input of a skilled social worker. Structured tools and clinical judgment are strongest when employed in tandem so that children and families get the best of what the field has to offer.
Upholding equity, and placing that value into tool development, is a choice. As an organization whose mission is to promote just and equitable social systems, we weave consideration for equity into the way we develop and validate risk tools. We also choose to do this transparently, so that users of a risk assessment, and those whose lives are affected by it, can see and understand how the tool was built and how we considered its impact on groups traditionally overrepresented in social service systems.
This is not to suggest that this work is easy or that the path is clear. Anytime you prioritize one outcome over another, methodological tradeoffs and sacrifices are part of the process. The question becomes: Is equity important to you?
As developers or users of risk tools, equity either matters to you or it doesn't. This choice is reflected in how you construct your tool and how you monitor its impact on kids and families of color. Some may say that a tool isn't the right instrument to advance equity, that this goal should be left to policy and practice; we disagree. No matter how many good people are working in a given system, we cannot, individually, overcome the institutional and structural racism ingrained in the systems we wish to improve. When we acknowledge the inherent bias of the underlying data, we empower ourselves to develop risk models that account for it and to set ethical, responsible parameters for how those models are used in practice. Choosing to ignore or minimize the need for equity in this work amounts to endorsing the disparate outcomes and treatment of families of color in our society.
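Monitoring a tool's impact on different groups can be as simple as routinely comparing screen-in rates and flagging disparities that exceed a chosen tolerance. A minimal sketch, with hypothetical counts and a hypothetical policy threshold:

```python
# Minimal monitoring sketch (all counts and the tolerance are hypothetical):
# compare each group's screen-in rate to the lowest-rate group and flag
# disparity ratios that drift beyond a chosen tolerance.

screened_in = {"Group A": 120, "Group B": 95}    # hypothetical counts
population = {"Group A": 1000, "Group B": 400}

rates = {g: screened_in[g] / population[g] for g in screened_in}
reference = min(rates.values())                   # lowest-rate group
disparity = {g: rates[g] / reference for g in rates}

TOLERANCE = 1.25  # hypothetical policy threshold for acceptable disparity
flagged = [g for g, ratio in disparity.items() if ratio > TOLERANCE]
# Groups in `flagged` are screened in at a rate more than 25% above the
# lowest-rate group, prompting review of the tool and its use in practice.
```

A report like this, produced on a regular cadence and shared openly, is one concrete way to turn the commitment to transparency described above into an ongoing practice rather than a one-time validation.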