AI-controlled drones from the USA
DukeRem, 28 August 2023
The US Air Force is testing AI-controlled drones to serve as "wingmen" assisting human fighter pilots in combat. Officials believe these lower-cost unmanned aircraft, operating autonomously, could overwhelm adversaries. However, there are moral concerns about letting algorithms make lethal decisions, and human pilots remain wary of fully trusting an AI wingman.
The US Air Force is exploring the use of artificial intelligence to control unmanned aircraft in aerial combat. An experimental drone, the XQ-58A Valkyrie, is being tested as a "loyal wingman" that can fly alongside human pilots and assist them in battle.
The Valkyrie is seen as a lower-cost supplement to expensive fighter jets like the F-35. It is designed to use AI and sensors to identify threats, get approval from a human operator, and then engage targets. Proponents believe deploying large numbers of these drones could overwhelm adversaries.
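The article does not describe the Valkyrie's actual control software, but the detect-propose-approve flow it outlines is a recognizable "human in the loop" pattern. Purely as a conceptual illustration, here is a minimal Python sketch of such an authorization gate; every name (`Track`, `propose_engagement`, `human_in_the_loop_gate`, the confidence threshold) is hypothetical and the logic is only a simplified model of the flow described above, not anything drawn from the program itself.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    APPROVE = "approve"
    DENY = "deny"


@dataclass
class Track:
    """A hypothetical sensor-fusion contact."""
    track_id: str
    classification: str   # e.g. "unknown" or "hostile"
    confidence: float     # fused sensor confidence, 0.0 to 1.0


def propose_engagement(track: Track, min_confidence: float = 0.95) -> bool:
    """The autonomy layer may only *propose* an engagement, never execute one."""
    return track.classification == "hostile" and track.confidence >= min_confidence


def human_in_the_loop_gate(track: Track, operator_decision: Decision) -> bool:
    """Engagement proceeds only on an explicit human APPROVE; anything else
    (denial, timeout, ambiguity) defaults to no action."""
    if not propose_engagement(track):
        return False
    return operator_decision is Decision.APPROVE


if __name__ == "__main__":
    contact = Track(track_id="T-042", classification="hostile", confidence=0.97)
    # The drone surfaces a proposal; only the operator's explicit
    # approval allows the engagement to proceed.
    print(human_in_the_loop_gate(contact, Decision.DENY))     # False: human said no
    print(human_in_the_loop_gate(contact, Decision.APPROVE))  # True: human authorized
```

The key design choice in any such gate is that the default path is inaction: the machine can recommend, but only an affirmative human decision unlocks the lethal step.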
However, there are concerns about letting algorithms make lethal decisions. The current Pentagon policy requires a human in the loop for autonomous weapons. Air Force officials say a human will always decide when and how AI drones attack. But some worry about crossing a moral line by outsourcing killing to machines.
The Air Force is working with companies like Kratos and Boeing to develop the aircraft, and newer tech firms to provide the AI software. It's hoped this could shake up the traditional military procurement process dominated by a few large contractors. Officials believe it may take 5-10 years to have an operational system.
Test pilots are now flying alongside the Valkyrie to evaluate its performance and build trust. But they acknowledge it will be difficult to fully trust an AI wingman. The software is still being trained and has sometimes behaved in unexpected ways. Even so, the Air Force believes pairing human pilots with unmanned AI drones could make aerial combat more effective and reduce casualties.
Highlights:
- Experimental Valkyrie drone using AI and sensors to identify threats, get approval, and attack
- Seen as a lower-cost way to supplement expensive fighter jets
- Concerns about outsourcing lethal decisions to algorithms
- Human pilots still building trust with unpredictable AI wingmen
- Could take 5-10 years to field an operational system
This news highlights the promise and perils of integrating AI into lethal autonomous weapons. While "loyal wingman" drones could overwhelm adversaries and reduce casualties, we should seriously debate whether society is ready to let algorithms decide who lives and dies in combat. What safeguards can be put in place for these AI systems? I'm interested to hear readers' thoughts on the ethical implications.