
Lakshmi Raman: AI at the CIA, Balancing Innovation and Responsibility
How Lakshmi Raman leads AI integration at the CIA, balancing innovation and responsibility

Highlights:

  • Lakshmi Raman leads the CIA’s use of AI with a balanced approach to innovation and responsibility.
  • The CIA uses AI to process large amounts of data and combat global threats.
  • Ethical concerns arise over the CIA’s use of AI on data collected about American citizens.
  • Raman emphasizes transparency and legal compliance in the use of AI systems.


Lakshmi Raman, Director of AI at the CIA, leads the integration of emerging technologies in intelligence, balancing innovation and responsibility. Discover how the CIA uses AI to address global challenges and maintain national security.


Lakshmi Raman, the CIA’s director of AI, has spent much of her career in intelligence. She joined the agency in 2002 as a software developer, after earning her bachelor’s degree from the University of Illinois Urbana-Champaign and her master’s degree in computer science from the University of Chicago, and rose through a series of management roles before taking charge of the agency’s data science efforts.


Raman emphasizes the importance of having female role models and predecessors in her career, a valuable asset in a historically male-dominated field. "I still have people I can ask for advice and learn from about what leadership looks like," she said. "Every woman has to navigate specific challenges in her career path."


In her current role, Raman orchestrates and drives AI activities within the CIA. "We think AI is here to support our mission," she said. "It’s humans and machines together that are at the forefront of our use of AI."


AI is not new to the CIA, which has been exploring data science and AI applications since 2000, particularly in natural language processing, computer vision, and video analytics. The agency strives to stay updated on new trends like generative AI, with a roadmap informed by industry and academia.


"When we think about the huge amounts of data we have to process, content triage is an area where generative AI can make a difference," Raman said. "We are exploring tools for search and discovery, ideation, and generating counterarguments to counteract analytical biases."


The U.S. intelligence community feels an urgency to deploy tools that can help the CIA combat rising geopolitical tensions, from terrorism risks to disinformation by foreign actors. Last year, the Special Competitive Studies Project set a two-year timeline to adopt generative AI at scale.


One tool developed by the CIA, named Osiris, is similar to OpenAI’s ChatGPT but customized for intelligence use. Osiris summarizes data, currently limited to unclassified and publicly or commercially available sources, and lets analysts dig deeper with follow-up questions in plain English.


Osiris is used by thousands of analysts, not only within the CIA but across the 18 agencies that make up the U.S. intelligence community. Raman did not reveal whether it was developed in-house or built on third-party technology, but she said the CIA has partnerships with high-profile vendors.

"We leverage commercial services," Raman said, adding that the CIA employs AI tools for tasks like translation and for alerting analysts during off-hours to potentially important developments. "We need to work closely with private industry to provide not only well-known services and solutions but also niche services from non-traditional vendors."


There is reason to be skeptical about the CIA’s use of AI. In February 2022, Senators Ron Wyden and Martin Heinrich revealed in a public letter that the CIA, despite being generally barred from investigating U.S. citizens and businesses, maintains a secret data repository that includes information collected on Americans. A report from the Office of the Director of National Intelligence also showed that U.S. intelligence agencies purchase data on Americans from brokers such as LexisNexis and Sayari Analytics with little oversight.


If the CIA were to use AI to analyze this data, many Americans would object. Doing so would be a clear violation of civil liberties and, given AI’s limitations, could produce unjust outcomes. Several studies have shown that predictive policing algorithms disproportionately flag Black communities, and that facial recognition misidentifies people of color at higher rates than white people.


Beyond bias, even the best AI systems today can generate errors, such as attributing quotes to non-existent people. This could become a significant problem in intelligence work, where accuracy is crucial.


Raman insisted that the CIA not only complies with all U.S. laws but also follows ethical guidelines and uses AI "in a way that mitigates bias." "I would call it a thoughtful approach," she said. "We want our users to understand as much as they can about the AI system they are using. Building responsible AI means involving all stakeholders, including AI developers and our privacy and civil liberties office."


Recent studies have highlighted the importance of making the limitations of AI systems clear to users. Researchers at North Carolina State University found that AI tools, including facial recognition and gunshot detection algorithms, are used by police without a clear understanding of the technologies or their limitations.


A particularly egregious example of law enforcement misusing AI involved the NYPD, which reportedly used photos of celebrities and distorted images to generate facial recognition matches for suspects when surveillance stills yielded no results.


"Any AI-generated output should be clearly understood by the users, which means obviously labeling AI-generated content and providing clear explanations of how AI systems work," Raman said. "Everything we do in the agency, we adhere to our legal requirements, and we ensure our users, partners, and stakeholders are aware of all relevant laws, regulations, and guidelines governing the use of our AI systems, and we comply with all these rules."