AI Now’s 2019 report is out, and it’s exactly as dismaying as we thought it would be. The good news is that the threat of biased AI and Orwellian surveillance systems no longer hangs over our collective heads like an artificial Sword of Damocles. The bad news: the threat’s gone because it’s become our reality. Welcome to 1984.

The annual report from AI Now is a deep dive into the industry conducted by the AI Now Institute at New York University. It’s focused on the social impact that AI use has on humans, communities, and the population at large. It sources information and analysis from experts in myriad disciplines around the world and works closely with partners throughout the IT, legal, and civil rights communities. This year’s report begins with twelve recommendations based on the institute’s conclusions:

1. Regulators should ban the use of affect recognition in important decisions that impact people’s lives and access to opportunities.
2. Government and business should halt all use of facial recognition in sensitive social and political contexts until the risks are fully studied and adequate regulations are in place.
3. The AI industry needs to make significant structural changes to address systemic racism, misogyny, and lack of diversity.
4. AI bias research should move beyond technical fixes to address the broader politics and consequences of AI’s use.
5. Governments should mandate public disclosure of the AI industry’s climate impact.
6. Workers should have the right to contest exploitative and invasive AI—and unions can help.
7. Tech workers should have the right to know what they are building and to contest unethical or harmful uses of their work.
8. States should craft expanded biometric privacy laws that regulate both public and private actors.
9. Lawmakers need to regulate the integration of public and private surveillance infrastructures.
10. Algorithmic Impact Assessments must account for AI’s impact on climate, health, and geographical displacement.
11. Machine learning researchers should account for potential risks and harms and better document the origins of their models and data.
12. Lawmakers should require informed consent for the use of any personal data in health-related AI.

The permeating theme here is that corporations and governments need to stop passing the buck when it comes to social and ethical accountability. A lack of regulation and ethical oversight has led to a near-total surveillance state in the US, and black-box systems have proliferated throughout the judicial and financial systems even though such AI has repeatedly been shown to be biased. AI Now notes that these entities have seen a significant amount of push-back from activist groups and pundits, but points out that this has done relatively little to stem the flow of harmful AI.

The report also digs into “affect recognition” AI, a subset of facial recognition that’s made its way into schools and businesses around the world. Companies use it during job interviews to, supposedly, tell whether an applicant is being truthful, and on production floors to determine who is being productive and attentive. It’s a bunch of crap, though, as a recent comprehensive review of research from multiple teams concluded and as the AI Now 2019 report itself notes.

At this point, it seems any company that develops or deploys AI technology that can be used to discriminate – especially black-box technology that claims to understand what a person is thinking or feeling – is willfully investing in discrimination. We’re long past the time when corporations and governments can feign ignorance on the matter.

This is especially true when it comes to surveillance. In the US, as in China, we’re now under constant public and private surveillance. Cameras record our every move in public, at work, in our schools, and in our own neighborhoods. And, worst of all, not only did the government use our tax dollars to pay for much of it, millions of us unwittingly purchased, mounted, and maintained the surveillance gear ourselves. As AI Now points out, the big concern is that these surveillance systems could become so deeply entrenched that the law enforcement community would treat any effort to remove them as an attempt to disarm it.

AI Now warns that these problems — biased AI, discriminatory facial recognition systems, and AI-powered surveillance — cannot be solved by patching systems or tweaking algorithms. We can’t “version 2.0” our way out of this mess. In the US, we’ll continue our descent into this Orwellian nightmare as long as we keep voting for politicians who support the surveillance state, discriminatory black-box AI systems, and the Wild West atmosphere in which big tech operates today.

If you’d like to read the full 60-page report, it’s available online here.