YouAppearToBeOnFire
Trusted Contributor
- Messages
- 260
They tried it with IBM Watson in a simulation... it started crashing aircraft.
Furthermore, why would they go fully automated? More efficient, of course. However, accountability is a lot harder with a machine.
If a person messes up they can be blamed/fired. Do you fire a machine? No.
If something goes wrong, do you shut down the airspace until it's verified to be FMC? Because if it did have an error, the public would demand it be taken offline until fixed. Yet the expectation to continue operations would still be there.
The risk vs. reward is still too great. Plus, AI is nowhere near as capable as you think it is.