On September 18, 1980, an airman performing routine maintenance at a Titan II missile site near Damascus, Arkansas, made a mistake, as humans are prone to do, that resulted in the death of a fellow airman and the destruction of a nuclear missile launch site. His mistake was one even the best of us could have made: he dropped a wrench socket. It fell about 80 feet and pierced the first-stage fuel tank of the missile, which carried a 9-megaton warhead. The tank began to leak. The Air Force rushed to evacuate the missile site and the surrounding residential areas, and a task force was assembled to save the missile and the launch complex. Senior Airmen D. L. Livingston and J. K. Kennedy were sent into the silo to investigate and to turn on an exhaust fan. Shortly thereafter the leaking fuel exploded, launching the 740-ton blast door, the warhead, and the two airmen. Kennedy was thrown 150 feet from the silo; he lived despite inhaling oxidizer. Livingston survived being thrown clear of the area but died of his injuries that same evening. The silo was never repaired.[1]
What does an accidental missile explosion have to do with artificial intelligence? More than you might think. Just as a minor accident destroyed a missile silo and took the life of an airman, AI is susceptible to human error. AI has been around since 1956, but its use has matured significantly since 2020 and is now pervasive. Artificial intelligence models are used for everything from teaching to law enforcement to advertising to piloting self-driving vehicles. However well built the model, human mistakes are still to be expected. Models can be trained incorrectly. Data can be skewed. Or an engineer can misconfigure an integration and mistakenly expose personally identifiable information (PII) to the public, in violation of both law and ethics.
While we enjoy the benefits of AI, it is imperative that we take the following precautions before deploying it:
1. Understand the model, how it was trained, and how it is intended to be used;
2. Test the model thoroughly;
3. Establish clear guidelines for appropriate use and verify that they are followed; and
4. Create processes to periodically repeat tests and reevaluate the effectiveness of the model (a minimal sketch of such a recurring check follows this list).
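For the testing and re-testing steps (items 2 and 4), even a very simple automated check helps: score the model against a labeled holdout set on a regular schedule and raise a flag when performance drifts below an agreed threshold. The Python sketch below assumes a generic predict() callable, a toy holdout set, and an arbitrary accuracy threshold; all of those names and numbers are illustrative, not part of any particular standard.

```python
# A minimal sketch of a recurring re-evaluation check (precautions 2 and 4).
# The model interface, holdout data, and accuracy threshold are all
# illustrative assumptions, not a prescribed standard.

from dataclasses import dataclass
from typing import Callable, Sequence, Tuple


@dataclass
class EvalResult:
    accuracy: float
    passed: bool


def evaluate(predict: Callable, holdout: Sequence[Tuple[object, object]],
             min_accuracy: float = 0.95) -> EvalResult:
    """Score the model on a held-out set and flag it if accuracy falls below the bar."""
    correct = sum(1 for x, y in holdout if predict(x) == y)
    accuracy = correct / len(holdout)
    return EvalResult(accuracy=accuracy, passed=accuracy >= min_accuracy)


if __name__ == "__main__":
    # Toy stand-ins for a real model and labeled holdout set (illustrative only).
    holdout = [(0, "even"), (1, "odd"), (2, "even"), (3, "odd")]
    predict = lambda x: "even" if x % 2 == 0 else "odd"

    result = evaluate(predict, holdout, min_accuracy=0.9)
    print(f"accuracy={result.accuracy:.2f} passed={result.passed}")
    # Schedule this check to rerun on a regular cadence (e.g., via cron or CI)
    # so drift or a bad retraining run is caught before users are affected.
```

In practice the holdout set, the metric, and the threshold should come from the guidelines established in item 3, so that "the model still works" is a documented, repeatable claim rather than a one-time judgment.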
Are there other precautions you would like to see in place?
Photo credit: Todd Lappin
[1] See "Titan II Missile Explosion," Encyclopedia of Arkansas, https://encyclopediaofarkansas.net/entries/titan-ii-missile-explosion-2543/. See also "1980 Damascus Titan missile explosion," Wikipedia, https://en.wikipedia.org/wiki/1980_Damascus_Titan_missile_explosion.