Algorithms are commonly seen as mysterious and rather nasty bits of computer code that help spread fake news and target our online experience with uncannily personalised advertising. To a mathematician or a computer scientist, however, algorithms are usually much more benign.
As defined by the Oxford Dictionary, an algorithm is simply a “process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.” As so often happens, the problem with algorithms lies not in the tool itself but in the decisions about what rules an algorithm follows and what it is used for.
Applied as tools for social good, algorithms have been powerful adjuncts to human information-processing and decision-making for 50 years. In this vein, a recent article in The New York Times, “Can an Algorithm Tell When Kids Are in Danger?”, describes how the use of algorithms under certain circumstances has significantly improved authorities’ decisions around child protection matters. The article focuses on child protection practice in Allegheny County, Pennsylvania, in the US north-east. There, changes introduced by using an algorithm at the assessment stage help screeners better identify whether a child needs to be visited by a child protection worker.
Allegheny County has a population of just over 1.2 million – about the same as Adelaide – and child protection authorities there receive around 14,000 notifications each year. About half of those notifications are “screened out” at the first point of contact. Across the US, the Times reports, “42 percent of the four million allegations received in 2015, involving 7.2 million children, were screened out, often based on sound legal reasoning but also because of judgment calls, opinions, biases and beliefs. And yet more United States children died in 2015 as a result of abuse and neglect – 1,670, according to the federal Administration for Children and Families; or twice that many, according to leaders in the field – than died of cancer.”
However, in August 2016, “Allegheny County became the first jurisdiction in the United States, or anywhere else, to let a predictive-analytics algorithm … offer up a second opinion on every incoming call, in hopes of doing a better job of identifying the families most in need of intervention.” The Allegheny Family Screening Tool conducts a statistical analysis of the current call and any previous notifications, using more than 100 criteria held in county databases for jails, psychiatric services, public-welfare benefits, drug and alcohol treatment centres and more. It takes just a few seconds to generate a coloured bar that runs from a green 1 (lowest risk) to a red 20 (highest risk).
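To make the idea of the score concrete, here is a minimal Python sketch of how a model’s predicted risk probability could be binned into a 1-to-20, colour-banded score of the kind described above. It is purely illustrative: the bin widths, colour cut-offs and function name are assumptions made for this example, not the actual workings of the Allegheny Family Screening Tool.

```python
# Minimal sketch of mapping a predicted risk probability to a 1-20 colour-banded
# score. The bin widths and colour cut-offs are illustrative assumptions, not the
# Allegheny Family Screening Tool's real thresholds.

def risk_band(probability: float) -> tuple[int, str]:
    """Convert a model's risk probability (0.0-1.0) into a 1-20 score plus a colour."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    score = min(20, int(probability * 20) + 1)  # 20 equal-width bins
    if score <= 7:
        colour = "green"   # lowest-risk end of the bar
    elif score <= 14:
        colour = "amber"
    else:
        colour = "red"     # highest-risk end of the bar
    return score, colour

print(risk_band(0.05))   # (2, 'green')
print(risk_band(0.92))   # (19, 'red')
```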
The algorithm was developed by Emily Putnam-Hornstein, of the University of Southern California, and Rhema Vaithianathan, of the Auckland University of Technology in New Zealand. Rather than the conventional child-protection question of which children should be removed from their parents, they wanted to know: “Which families are most at risk and in need of help?” Prof Vaithianathan said.
They created an algorithm based on all 76,964 allegations of maltreatment made to Allegheny County authorities over a four-year period. Prof Vaithianathan said: “What the screeners have is a lot of data, but it’s quite difficult to navigate and know which factors are most important.”
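For readers curious about what “creating an algorithm” from historical allegations looks like in practice, the sketch below shows one common approach: training a simple classifier on a flat table of past notifications and then scoring a new call. Everything here is an assumption made for illustration: the file name, column names, outcome label and choice of logistic regression are hypothetical, and the real tool draws on far more data across county systems.

```python
# Illustrative sketch only: the file name, column names, outcome label and model
# choice are assumptions for this example, not the researchers' actual pipeline,
# which draws on more than 100 criteria across county databases.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# One row per historical notification call (hypothetical file).
calls = pd.read_csv("historical_notifications.csv")

feature_cols = ["prior_referrals", "jail_history", "public_welfare_benefits",
                "drug_alcohol_treatment", "psychiatric_services"]  # assumed names
X = calls[feature_cols]
y = calls["serious_harm_outcome"]  # assumed 0/1 label for a later bad outcome

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Scoring a new incoming call uses the same feature columns.
new_call = pd.DataFrame([{col: 0 for col in feature_cols}])
print("predicted risk:", model.predict_proba(new_call)[0, 1])
```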
The Times’ Dan Hurley reports: “[Putnam-Hornstein and Vaithianathan] found … [that] 48 percent of the lowest-risk families were being screened in, while 27 percent of the highest-risk families were being screened out.” Of 18 calls to the county’s Office of Children, Youth and Families (CYF) “in which a child was later killed or gravely injured as a result of parental maltreatment, eight cases, or 44 percent, had been screened out as not worth investigation.”
Dr Rachel Berger, a paediatrician and director of the child-abuse research centre at Children’s Hospital of Pittsburgh, said: “All of these children are living in chaos. How does CYF pick out which ones are most in danger when they all have risk factors? You can’t believe the amount of subjectivity that goes into child-protection decisions. That’s why I love predictive analytics. It’s finally bringing some objectivity and science to decisions that can be so unbelievably life-changing.”
The Allegheny Family Screening Tool avoids ethical issues attached to other predictive-analytics software packages that are sold as a “black box” to other US child-welfare agencies, court systems and parole boards.
The first issue is that the vendors of those proprietary systems refuse to disclose how their algorithms work. In contrast, the Allegheny tool is owned outright by Allegheny County and, according to The New York Times, “Its workings are public. Its criteria are described in academic publications and picked apart by local officials. At public meetings held in downtown Pittsburgh before the system’s adoption, lawyers, child advocates, parents and even former foster children asked hard questions not only of the academics [Putnam-Hornstein and Vaithianathan] but also of the county administrators who invited them.”
Predictive-analytic algorithms used to assist social policy have also been criticised on the grounds that future behaviour cannot be accurately forecast, and that deciding which families to investigate should be based solely on the allegations made. In Pittsburgh, however, advocates for parents, children and civil rights all support the way CYF has introduced and managed the program. Even the American Civil Liberties Union approves, with one representative saying: “I think they’re putting important checks on the process. They’re using it only for screeners, to decide which calls to investigate, not to remove a child.”
A third criticism of using predictive-analytic algorithms to support child-welfare decisions stems from the “garbage in, garbage out” adage coined by computer programmers: the concern is that algorithms trained on biased data will inherit biases against minority groups. However, Hurley reports, “the Allegheny experience suggests that its screening tool is less bad at weighing biases than human screeners have been, at least when it comes to predicting which children are most at risk of serious harm.”
Ms Erin Dalton, a deputy director of Allegheny County’s department of human services and leader of its data-analysis department, says: “Black children are, relatively speaking, over-surveilled in our systems, and white children are under-surveilled. Who we investigate is not a function of who abuses. It’s a function of who gets reported.”
In Australia, Aboriginal children are removed from their families at nearly 10 times the rate of non-Indigenous children. It’s much the same in the US. But the problem is not that non-white parents are necessarily worse parents than white parents. A study by Putnam-Hornstein et al (2013) found that black children in California were more than twice as likely as white children there to be placed in foster care. “But after adjusting for socioeconomic factors, the study showed that poor black children were actually less likely than their poor white counterparts to be the subject of an abuse allegation or to end up in foster care.”
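What “adjusting for socioeconomic factors” means can be shown with a toy example: when a poverty indicator is included as a covariate in a regression, the coefficient on race compares children at the same poverty level rather than overall rates. The sketch below uses entirely synthetic data and the statsmodels library; it is an illustration of the technique, not the 2013 study’s data or analysis.

```python
# Toy illustration of "adjusting for socioeconomic factors"; synthetic data only,
# not the 2013 study's data or method.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 20_000
black = rng.integers(0, 2, n)
# Make poverty more common for one group, so race and poverty are correlated.
poverty = (rng.random(n) < np.where(black == 1, 0.6, 0.2)).astype(int)
# Outcome driven mainly by poverty, with a slightly *lower* rate for black
# children at the same poverty level (mirroring the pattern the study reported).
log_odds = -3.0 + 2.0 * poverty - 0.3 * black
foster_care = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)

df = pd.DataFrame({"black": black, "poverty": poverty, "foster_care": foster_care})

unadjusted = smf.logit("foster_care ~ black", data=df).fit(disp=0)
adjusted = smf.logit("foster_care ~ black + poverty", data=df).fit(disp=0)

# Unadjusted, the race coefficient is positive (higher overall placement rate);
# after adjusting for poverty it turns negative.
print("unadjusted race coefficient:", round(unadjusted.params["black"], 2))
print("adjusted race coefficient:  ", round(adjusted.params["black"], 2))
```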
As Hurley observes: “Poverty, all close observers of child welfare agree, is the one nearly universal attribute of families caught up in the system.”
After the Allegheny Family Screening Tool had been used for only 16 months, black and white families were being treated more consistently than before. “And the percentage of low-risk cases being recommended for investigation had dropped – from nearly half, in the years before the program began, to around one-third. … At the same time, high-risk calls were being screened in more often. Not by much – just a few percentage points. But in the world of child welfare, that represented progress.” Further work on the algorithm has increased its accuracy at predicting bad outcomes from around 78% to more than 90%.
It would seem that a well-designed predictive-analytics algorithm, one that adjusts for human biases and is open to public scrutiny and input, will not replace skilled social workers but can dramatically and consistently augment human decision-making in child-protection cases.