For decades, much of the federal government’s security-clearance-granting process has relied on techniques that emerged in the mid-twentieth century.
“It’s very manual,” said Evan Lesser, president of ClearanceJobs, a website posting jobs, news and advice for positions that involve security clearances. “Driving around in cars to meet people. It’s very antiquated and takes up a lot of time.”
A federal initiative that started in 2018, called Trusted Workforce 2.0, formally introduced semi-automated analysis of federal employees that occurs in close to real time. The program will let the government use artificial intelligence to subject employees who hold or are seeking security clearances to “continuous vetting and evaluation” — essentially, rolling evaluation that takes in information constantly, throws up red flags and includes self-reporting and human analysis.
“Can we build a system that checks on somebody and keeps checking on them and is aware of that person’s disposition as they exist in the legal systems and the public record systems on a continuous basis?” said Chris Grijalva, senior technical director at Peraton, a company that focuses on the government side of insider analysis. “And out of that idea was born the notion of continuous evaluations.”
Such efforts had been used in government in more ad hoc ways since the 1980s. But the 2018 announcement aimed to modernize government policies, which typically re-evaluated employees every five or 10 years. The motivation for the adjustment in policy and practice was, in part, the backlog of required investigations and the idea that circumstances, and people, change.
“That’s why it’s so compelling to keep people under some kind of a constant, ever-evolving surveillance process,” said Martha Louise Deutscher, author of the book “Screening the System: Exposing Security Clearance Dangers.” She added: “Every day you’ll run the credit check, and every day you’ll run the criminal check — and the banking accounts, the marital status — and make sure that people don’t run into those circumstances where they will become a risk if they weren’t yesterday.”
The program’s first phase, a transition period before full implementation, finished in fall 2021. In December, the U.S. Government Accountability Office recommended that the automation’s effectiveness be evaluated (though not, presumably, continuously).