Gari_305 OP t1_ix5xz8k wrote
From the Article
>“In war, unexpected things happen all the time. Outliers are the name of the game and we know that current AIs do not do a good job with outliers,” says Batarseh.
>
>To trust AIs, we need to give them something that they will have at stake
>
>Even if we solve this problem, there are still enormous ethical problems to grapple with. For example, how do you decide if an AI made the right choice when it took the decision to kill? It is similar to the so-called trolley problem that is currently dogging the development of automated vehicles. It comes in many guises but essentially boils down to asking whether it is ethically right to let an impending accident play out in which a number of people could be killed, or to take some action that saves those people but risks killing a lesser number of other people. Such questions take on a whole new level when the system involved is actually programmed to kill.
Rogaar t1_ix6gytz wrote
With the whole trolley problem, replace the AI with a human. How would the human choose in that situation? Probably not logically, as emotion would play a role.
We are projecting these ideas onto machines and expecting AI to solve them, yet we don't have solutions ourselves.
AgentTin t1_ix6481v wrote
Do you think you could train an AI vision algorithm to recognize people with concealed weapons? What about suicide bombers?
You could use it arbitrarily to clear an area, but it certainly won't be any more effective than a Tomahawk missile at the same job. We've all seen those videos where a robot flicks the unripened cherries out of the air; are we imagining it doing the same with people? Letting the good ones pass while delicately cutting the rest down with a burst of gunfire? Facial recognition technology functions worse the darker your skin is, which is unfortunate given the number of brown people we like to target with these weapons. I don't imagine these will see a lot of use in Western Europe.
onedoesnotjust t1_ix6cs89 wrote
I think the opposite tbh.
It will be developed first, and once AI bots start seeing more action it will become a part of war for everyone, like drones. You can have similar protocols.
It's easily justified by the math: a robot is easier to replace than a soldier.
Also, smaller countries could potentially build a world-class military with drones and weaponized bots and far fewer soldiers. In fact, you could simply hire everything out.
I can even see a privatized robot military that works for the highest bidder. Lots of places don't have the same ethical qualms.
How long before these dogs get bombs attached to them?
Once production costs go down, they will be mass-produced for consumers, and after that the third world starts getting all the "cast-off" old tech.
AgentTin t1_ix6iliy wrote
Either we are talking about remote-controlled robots, like Predator drones, or we are talking about fully autonomous systems. The question is which jobs we think are better served by robots.
Robots work as bombers because people, and the stuff necessary to keep them alive, are heavy, and a plane functions much better without us sitting in it.
A robotic dog isn't really any better in most situations than just a dude, or a regular dog. It is going to require a power grid and maintenance, while soldiers just require food, bullets, and water.
AI on the other hand is more interesting. It can notice patterns humans don't and it can potentially make choices and act far more quickly than a human can. One space I think this might be helpful is in point defense. If the AI could recognize car bombs or suicide bombers it could act to neutralize the threat before the guards are even aware of it.
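In code terms that point-defense idea basically collapses to a confidence gate. Here's a very rough Python sketch; every name and threshold in it (ThreatDetection, triage, 0.90) is invented for illustration, not any real system:

```python
# Very rough sketch of the point-defense idea: a detector scores each
# camera frame, and only a high-confidence hit raises the alarm before
# the guards have noticed anything. Every name and threshold here is
# made up for illustration.
from dataclasses import dataclass

@dataclass
class ThreatDetection:
    label: str         # e.g. "vehicle" or "person"
    confidence: float  # detector's confidence, in [0, 1]

ALERT_THRESHOLD = 0.90   # act automatically above this
REVIEW_THRESHOLD = 0.60  # between the two: flag a human to look

def triage(detections: list[ThreatDetection]) -> str:
    """Decide what to do with one frame's worth of detections."""
    top = max(detections, key=lambda d: d.confidence, default=None)
    if top is None:
        return "clear"
    if top.confidence >= ALERT_THRESHOLD:
        return f"ALERT: {top.label} ({top.confidence:.0%})"   # guards notified instantly
    if top.confidence >= REVIEW_THRESHOLD:
        return f"review: {top.label} ({top.confidence:.0%})"  # human checks the feed
    return "clear"

print(triage([ThreatDetection("vehicle", 0.95)]))  # ALERT: vehicle (95%)
print(triage([ThreatDetection("person", 0.70)]))   # review: person (70%)
```

The hard part is obviously the detector itself, not the gate; the gate just decides how confident the AI has to be before anyone acts on it.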
Forklifts exist because humans are weak; AI exists because people are stupid and expensive. Where can AI either outperform or undercut a human?
onedoesnotjust t1_ix6jxc3 wrote
Training a soldier costs millions. It's more than just food and water.
A better equivalent would be longbows vs. crossbows: it takes years of training to use a longbow, while crossbows are easy to use.
You don't have to separate it all out; that's unrealistic.
Combine it all: drones with bombs/cameras, dogs with guns/cameras, an AI system sorting through footage in real time and giving analysis, and one operator granting operational permission (rough sketch of that loop below).
That's future war.
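Very loosely, that operator-in-the-loop part might look like this toy Python sketch; all the names are invented, it's just to show the shape of it:

```python
# Toy sketch of the "one operator" loop described above: the autonomous
# platforms can propose strikes, but nothing fires without an explicit
# human sign-off. Every name here is invented purely for illustration.
from dataclasses import dataclass

@dataclass
class StrikeProposal:
    platform: str      # e.g. "drone-7" or "dog-3"
    target: str
    confidence: float  # AI's confidence in the target ID, in [0, 1]

def operator_approves(p: StrikeProposal) -> bool:
    # Stand-in for the human decision; in reality a person reviews the
    # AI's analysis and the live footage, not a console prompt.
    answer = input(f"{p.platform} -> {p.target} ({p.confidence:.0%})? [y/N] ")
    return answer.strip().lower() == "y"

def process(proposals: list[StrikeProposal]) -> None:
    for p in proposals:
        if p.confidence < 0.8:
            print(f"dropped: {p.target} (confidence too low to bother the operator)")
        elif operator_approves(p):
            print(f"authorized: {p.platform} engages {p.target}")
        else:
            print(f"denied: {p.platform} holds fire")

process([StrikeProposal("drone-7", "grid 41B", 0.92)])
```

The design point is that the AI does the sorting and analysis at machine speed, and the single human is only the final yes/no.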
Mrsparkles7100 t1_ix75jh5 wrote
Pretty much the loyal wingman program. F-35s, F-22s, and next-gen fighters act as mobile control centres. Have a squadron of fully/semi-autonomous drones as wingmen: say two human-piloted planes and four or five wingmen. One pilot gives instructions to the drones, and then the AI takes over as each attacks its target.
Then you can leave those wingmen on continuous loitering programs over regions, with their own kill list of priority targets. Real-time intel gets uploaded and triggers the strike program in the drone (something like the toy sketch below).
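In hand-wavy Python, that kill-list trigger is basically this; purely illustrative, invented names, obviously not how any real program works:

```python
# Hand-wavy sketch of the loitering kill-list idea: the drone carries a
# priority list, real-time intel gets uploaded over the datalink, and a
# confident match arms the strike routine. All names are invented and
# none of this reflects any real system.
KILL_LIST = {"target-alpha": 1, "target-bravo": 2}  # identity -> priority

def on_intel_upload(intel: dict) -> str | None:
    """Called whenever new real-time intel arrives over the datalink."""
    name = intel.get("identity")
    if name in KILL_LIST and intel.get("confidence", 0.0) >= 0.9:
        return f"strike program armed: {name} (priority {KILL_LIST[name]})"
    return None  # no confident match: keep loitering

print(on_intel_upload({"identity": "target-bravo", "confidence": 0.95}))
```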
The UK is looking to add a catapult system to its carriers to support its future loyal wingman program.
So yeah, that film Stealth isn't too crazy-sounding given enough time.