
AndreasRaaskov OP t1_j28acyk wrote

Thank you for the extra sources; I will check them out and hopefully include them in further work.

In the meantime, I hope you have some understanding of the fact that the article was written by a master's student and is freely available, so do not expect the same quality and nuance as a book or a paper written by a professor with editorial support and hidden behind a paywall.

I hope to get better one day.


AndreasRaaskov OP t1_j28802z wrote

Something that was in the original draft, but that I found I should emphasise more, is that artificial intelligence is not like human intelligence. What AI does is solve a specific problem better than humans, while being unable to do anything outside that specific problem.

A good example would be a pathfinding algorithm in a GPS that can find the fastest route from A to B. It is simple, widely used, and performs an intelligent task far faster, and sometimes better, than a human.
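To make "simple" concrete, here is a minimal sketch of such a pathfinder (Dijkstra's algorithm). The road network, travel times, and function name are invented for illustration; a real GPS works on far larger graphs with more elaborate heuristics:

```python
import heapq

def fastest_route(graph, start, goal):
    """Dijkstra's shortest-path search.
    'graph' maps node -> list of (neighbour, minutes) pairs."""
    queue = [(0, start, [start])]  # (travel time so far, node, path taken)
    visited = set()
    while queue:
        time, node, path = heapq.heappop(queue)
        if node == goal:
            return time, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, minutes in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(queue, (time + minutes, neighbour, path + [neighbour]))
    return None  # no route exists

# Toy road network: the direct road A->B is slow; going via C is faster.
roads = {
    "A": [("B", 30), ("C", 10)],
    "C": [("B", 10)],
}
print(fastest_route(roads, "A", "B"))  # -> (20, ['A', 'C', 'B'])
```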

However, my article was about how even simple systems can be dangerous if they don't have a moral code.

Take the GPS again. First of all, "death by GPS" is a real phenomenon; it happens because the GPS doesn't evaluate how dangerous a route may be.

But even in more mundane settings, we see a GPS make ethical choices without being aware that it is making them. Suppose, for example, a GPS finds two routes to your destination: one is shorter, while the other is longer but faster since it uses the highway. You might argue it should take the short road to minimise CO2 impact; we could also consider the highway more dangerous for the driver, while taking the slow road may put pedestrians at risk. Some of the newest GPS systems also consider overall traffic based on real-time data, and these sometimes face a choice where they could send some cars down a longer road to avoid congestion, sacrificing some people's time in order to make the overall transport time shorter.
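To show how these trade-offs get baked into code, here is a hypothetical cost function of my own invention; every field name and weight below is an assumption for illustration, not how any real navigation system works:

```python
from dataclasses import dataclass

@dataclass
class Route:
    minutes: float          # expected travel time
    co2_kg: float           # estimated emissions
    driver_risk: float      # crash-risk proxy for the driver, 0..1
    pedestrian_risk: float  # exposure of pedestrians along the route, 0..1

def route_cost(r: Route, w_time=1.0, w_co2=2.0, w_driver=50.0, w_ped=80.0):
    # Whoever picks these weights is implicitly deciding whose time,
    # safety, and emissions matter most; that is the ethical choice.
    return (w_time * r.minutes + w_co2 * r.co2_kg
            + w_driver * r.driver_risk + w_ped * r.pedestrian_risk)

highway  = Route(minutes=25, co2_kg=4.0, driver_risk=0.02, pedestrian_risk=0.00)
backroad = Route(minutes=28, co2_kg=2.0, driver_risk=0.01, pedestrian_risk=0.03)
best = min([highway, backroad], key=route_cost)
print("chosen:", "highway" if best is highway else "backroad")
```

With these made-up weights the highway wins; drop w_ped to zero and the backroad wins instead. Either way, a router with no explicit moral code has still taken a moral position, just silently.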


AndreasRaaskov OP t1_j25f5g8 wrote

The article doesn't mention it, but talking about economics is definitely also part of AI ethics.

AI ethics helps you understand the power Elon Musk gets if he tweaks the Twitter algorithm to promote posts he likes and shadow-ban posts he dislikes.
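As a hypothetical sketch (not Twitter's actual ranker, which is far more complex and not public in this form), such a "tweak" can be as small as one bias term in a feed-scoring function; every name and number here is invented for illustration:

```python
def feed_score(post, owner_boost=(), owner_suppress=()):
    # Baseline: rank purely by predicted engagement.
    score = post["engagement"]
    if post["topic"] in owner_boost:      # quietly promote favoured topics
        score *= 10
    if post["topic"] in owner_suppress:   # quietly bury disfavoured ones
        score *= 0.01                     # effectively a shadow-ban
    return score

posts = [{"topic": "dogecoin", "engagement": 5},
         {"topic": "unions", "engagement": 50}]
ranked = sorted(posts, reverse=True,
                key=lambda p: feed_score(p, owner_boost={"dogecoin"},
                                         owner_suppress={"unions"}))
print([p["topic"] for p in ranked])  # -> ['dogecoin', 'unions']
```

Nothing in the ranked output reveals that the ordering was biased, which is exactly what makes this kind of power so hard to audit from the outside.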

And the Koch brothers were deeply involved in the Cambridge Analytica scandal, where machine learning was used to manipulate voter behaviour in order to get Trump elected. Even with Cambridge Analytica gone, rumours persist that Charles Koch and a handful of his billionaire friends are training new models to manipulate future elections.

So even if evil billionaires are all you care about, you should still care about AI ethics, since it also covers how to protect society from people who use AI for evil.


AndreasRaaskov OP t1_j25buis wrote

Honestly, this was my main motivation for writing this article. As an engineer, I wanted to know what philosophers thought of AI ethics, but every time I tried to look for it, I only found people talking about how superintelligence or artificial general intelligence (AGI) will kill us all.

As someone with an engineering mindset, I am not really that interested in whether AGI may or may not exist one day, unless you know a way to build one. What really interests me is building an understanding of how the artificial narrow intelligence (ANI) that does exist is currently hurting people.

To be even more specific, I wrote about how the Instagram recommendation system may purposefully make teen girls depressed, and I wanted to expand on that theory.

https://medium.com/@andreasrmadsen/instagram-influence-and-depression-bc155287a7b7

I do understand that talking about how some people may be hurt by ANI today is disappointing if you expected another WE ARE ALL GOING TO DIE by AGI article. Yet I find the first problem far more pressing, and I really wish more people in philosophy would focus on applying their knowledge to the philosophical problems other fields are struggling with, instead of only looking at problems far in the future that may never exist.
