Artificial Intelligence Has Its Limits

As artificial intelligence technology quickly evolves, the U.S. intelligence community has recently released AI principles and an ethics framework to ensure that intelligence organizations are safely and legally developing AI systems. Algorithmic bias and defining the role of humans in AI processes are only some of the challenges these documents address.

The key for intelligence analysts is to understand the sources of their data and the inherent biases in those data, and then to draw conclusions based on that understanding.

The long-awaited principles and framework are meant to outline the intelligence community’s broad values and guidance for the ethical development of AI. The documents outline six principles:

  • Respect the law and act with integrity.
  • Be transparent and accountable.
  • Be objective and equitable.
  • Focus on human-centered development and use.
  • Ensure AI systems are secure and resilient.
  • Inform decisions via science and technology.

Ben Huebner, chief of the Office of Civil Liberties, Privacy, and Transparency at the Office of the Director of National Intelligence (ODNI), said the framework “is a tool that provides the intelligence community with a consistent approach” to artificial intelligence.

The intelligence community is a massive conglomerate of agencies, each tasked with a specific intelligence mission, making it difficult to verify the implementation of these ethics considerations.

To ease oversight challenges, a critical piece of the framework calls on AI users in the intel community to adequately document information about the AI technology under development. That documentation would include explanations of the AI’s intended use, its design, its limitations, related data sets, and changes to its algorithm over time.
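As a rough illustration only (the framework does not prescribe a schema, and every name below is an assumption), a documentation record covering those items might be sketched like this:

```python
# Minimal sketch of a documentation record, assuming field names that
# mirror the items the framework lists (intended use, design,
# limitations, related data sets, algorithm changes over time).
# Nothing here is an official schema.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AlgorithmChange:
    """One entry in the change log for an AI system's algorithm."""
    changed_on: date
    description: str


@dataclass
class AISystemRecord:
    """Documentation an oversight reviewer could inspect."""
    intended_use: str
    design_summary: str
    known_limitations: list[str]
    related_datasets: list[str]
    change_log: list[AlgorithmChange] = field(default_factory=list)


# Hypothetical example record.
record = AISystemRecord(
    intended_use="Count aircraft visible in overhead imagery.",
    design_summary="Object detector fine-tuned on labeled overhead imagery.",
    known_limitations=["Accuracy degrades under heavy cloud cover."],
    related_datasets=["labeled_overhead_imagery_v2"],
)
record.change_log.append(
    AlgorithmChange(date(2020, 7, 1), "Retrained with additional night imagery.")
)
print(record.intended_use)
```

Keeping a change log alongside the rest of the record is what would let reviewers trace how an algorithm evolved over time, one of the items the framework explicitly calls out.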

The documentation guidance will be accessible to legal counsel, inspectors general, and privacy and civil liberties officers.

A major concern with artificial intelligence, no matter who is developing it, is bias in algorithms. The framework tells practitioners to take steps to discover undesired biases that may enter algorithms throughout the life cycle of an AI program.
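One simple way to surface such bias (a sketch using invented data, not a method the framework mandates) is to compare a model’s error rates across subgroups of a labeled evaluation set:

```python
# Hedged sketch: compare error rates across subgroups to flag possible
# bias. The groups, labels, and predictions below are invented.
from collections import defaultdict

# (group, true_label, predicted_label) triples from a hypothetical evaluation.
evaluations = [
    ("region_a", 1, 1), ("region_a", 0, 1), ("region_a", 0, 0),
    ("region_b", 1, 0), ("region_b", 0, 0), ("region_b", 1, 1),
]

counts = defaultdict(lambda: [0, 0])  # group -> [wrong, total]
for group, truth, prediction in evaluations:
    counts[group][0] += int(truth != prediction)
    counts[group][1] += 1

for group, (wrong, total) in counts.items():
    print(f"{group}: error rate {wrong / total:.2f}")
# A persistent gap between groups would flag the model for closer
# review, at any stage of the program's life cycle.
```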

Since the intelligence community is charged with providing objective intelligence to policymakers to inform their decisions, its agencies must ensure that any AI systems used for intelligence collection are accurate. To verify that the intelligence collected and analyzed by artificial intelligence is accurate, humans must also be able to understand how the algorithms reached their conclusions.
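For a simple linear scoring model, that kind of transparency can be as direct as reporting each input’s contribution to the final score. The sketch below assumes invented feature names, weights, and values, and stands in for far more involved explainability techniques:

```python
# Hedged sketch of one explainability approach for a linear model:
# show the analyst which inputs drove the conclusion. All feature
# names, weights, and values are invented for illustration.
weights = {"aircraft_count": 0.8, "fuel_truck_activity": 0.5, "night_flights": 0.3}
observation = {"aircraft_count": 12, "fuel_truck_activity": 3, "night_flights": 0}

# Each feature's contribution is its weight times its observed value.
contributions = {name: weights[name] * observation[name] for name in weights}
score = sum(contributions.values())

print(f"overall score = {score:.1f}")
for name, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: {value:+.1f}")
```

An analyst reading that breakdown can see which inputs the conclusion rests on, which is exactly the kind of insight the framework asks humans to retain.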

Humans will also remain a critical part of the intelligence collection process, the framework stated, based on “assessed risk.” In addition, humans will continue to be a cornerstone of intelligence reports, because AI can count enemy aircraft on a runway but cannot explain why the number is higher or lower than on a previous day, according to c4isrnet.com.