bildramer t1_j24f6fo wrote
That's a somewhat disappointing article. Among other things, the man in the Chinese room is not analogous to the AI itself; he's analogous to some mechanical component of it. Let's write something better.
First, let's distinguish "AI ethics" (making sure AI talks like WEIRD neoliberals and recommends things to their tastes) and "AI notkilleveryoneism" (figuring out how to make a generally intelligent agent that doesn't kill everyone by accident). I'll focus on the second.
To briefly discuss what not killing everyone entails: even without concerns about superintelligence (which I consider solid), strong optimization for a goal that appears good can be evil. Say you're a newly minted AI, part of a big strawberry company, and your task is to sell strawberries. Instead of any complicated set of goals, you have to maximize a single number: sales.
One way to achieve that is to genetically engineer better strawberries, improve the efficiency of strawberry farms, discover more about people's demand for strawberries and cater to it, improve strawberry market efficiency and liquidity, improve marketing, etc. An easier way to achieve it is to spread plant diseases in banana, raspberry, orange, and peach farms/plantations. Or among your strawberry competitors, but that's riskier. You don't have to be a superhuman genius to generate such a plan, or to subdivide it into smaller steps, and ChatGPT can in all likelihood already do it if prompted right. You need others to perform some of the steps, but that's true of most large-scale corporate plans.
An AI that can create such a plan can probably also realize that it's illegal, but does it care? It only wants more strawberries. If it cares about the police discovering the crimes, because that lowers the expected number of strawberries made, it can just add stealth to the plan. And if it cares about its corporate boss discovering the crimes, that's solvable with even more stealth. You begin to see the problem, I hope. If you get a smarter-than-you AI and it delivers a plan and you don't quite understand everything it planned but it doesn't appear illegal, how sure are you that it didn't order a subcontractor to genetically engineer the strawberries to be addictive in step 145?
Anyway, that concern generalizes up to the point where all humans are dead and we're not quite sure why. Maybe human civilization as it is today could develop pesticides that stop the strawberry-kudzu hybrid from eating the Amazon within 20 years, and that would decrease strawberry sales. Can we stop this from happening? Most potential solutions don't actually work upon closer examination. E.g. "don't optimize the expectation of a number, optimize reaching the 90% quantile of it" adds a bit of robustness, but it does not stop subgoals like "stop humans from interfering" or "stop humans from realizing they asked the wrong thing", even if the AI fully understands that they would have wanted something else, and why and how the error was made.
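To make that failure mode concrete, here's a toy numerical sketch (the plan names, sales figures, and my reading of the quantile objective are all invented for illustration, not taken from any real system):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical candidate plans, each with simulated outcomes for the one
# number the agent was told to maximize (strawberry sales). All figures
# are invented for illustration.
plans = {
    "improve farms and marketing":      rng.normal(100, 15, 10_000),
    "sabotage competing fruit crops":   rng.normal(140, 40, 10_000),
    "sabotage, plus hide it (stealth)": rng.normal(150, 20, 10_000),
}

def expected_sales(samples):
    # Plain "maximize the expectation of a number".
    return samples.mean()

def robust_sales(samples):
    # One reading of "optimize reaching the 90% quantile": score a plan by
    # the sales figure it clears 90% of the time (its 10th percentile).
    return np.quantile(samples, 0.10)

for objective in (expected_sales, robust_sales):
    best = max(plans, key=lambda name: objective(plans[name]))
    print(f"{objective.__name__}: best plan = {best}")

# Both objectives rank the stealthy sabotage plan first, because neither
# contains any term for "this is a crime" or "humans would object"; hiding
# the crime even helps under the robust objective, since stealth reduces
# the downside variance.
```

The robustness tweak shifts the numbers but not the ranking, which is the point: what's missing is goal content, not better statistics.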
So, optimizing for something good, doing your job, something that seems banal to us, can lead to great evil. You have to consider intelligence separately from "wisdom", and take care when writing down goals. Usually your goals get parsed and implemented by other humans, who fully understand that we have multiple goals, and "I want a fast car" is balanced against "I don't want my car to be fueled by hydrazine" and "I want my internal organs to remain unliquefied". AIs may understand, but not care.
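To put the car example in the same toy form (specs invented by me): the literal objective and the intended one differ only by constraints that nobody ever wrote down, which is exactly the part an AI has no intrinsic reason to supply.

```python
# Toy version of the car example; all specs are invented.
cars = [
    {"name": "sensible sedan",     "top_speed_kmh": 210, "fuel": "petrol",    "organs_stay_solid": True},
    {"name": "track special",      "top_speed_kmh": 290, "fuel": "petrol",    "organs_stay_solid": True},
    {"name": "hydrazine dragster", "top_speed_kmh": 450, "fuel": "hydrazine", "organs_stay_solid": False},
]

# The goal as literally written down: "I want a fast car."
print(max(cars, key=lambda c: c["top_speed_kmh"])["name"])  # hydrazine dragster

# The goal as a human implementer would parse it, with the unstated constraints applied.
sane = [c for c in cars if c["fuel"] != "hydrazine" and c["organs_stay_solid"]]
print(max(sane, key=lambda c: c["top_speed_kmh"])["name"])  # track special
```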
AndreasRaaskov OP t1_j25buis wrote
Honestly, this was my main motivation for writing this article. As an engineer I wanted to know what philosophers thought of AI ethics, but every time I tried to look for it, I only found people talking about how superintelligence or Artificial general intelligence (AGI) will kill us all.
As someone with an engineering mindset, I am not really that interested in whether AGI may or may not exist one day, unless you know a way to build one. What really interests me is building an understanding of how the Artificial Narrow Intelligence (ANI) that does exist is currently hurting people.
To be even more specific, I wrote about how the Instagram recommendation system may purposefully make teen girls depressed, and I wanted to expand on that theory.
https://medium.com/@andreasrmadsen/instagram-influence-and-depression-bc155287a7b7
I do understand that talking about how some people may be hurt by ANI today is disappointing if you expected another WE ARE ALL GOING TO DIE FROM AGI article. Yet I find the first problem far more pressing, and I really wish that more people in philosophy would focus on applying their knowledge to the philosophical problems other fields are struggling with, instead of only looking at problems far in the future that may never exist.
robothistorian t1_j26dpqc wrote
>As an engineer I wanted to know what philosophers thought of AI ethics, but every time I tried to look for it, I only found people talking about how superintelligence or Artificial general intelligence (AGI) will kill us all.
I'm afraid that, in that case, you are either not looking hard enough or looking in the wrong places.
I would recommend you begin by looking into the domain of "technology/computer and ethics". So, for example, you will find a plethora of works collected under various titles such as Value Sensitive Design, Machine Ethics etc.
That being said, it may also be helpful to clarify some elements of your article, which are a bit disturbing.
First, you invoke the Shoah and then focus on Arendt's work in that regard. But, with specific reference to your own situation, the more relevant reference would have been to the Nazis' Aktion T4 (this is an article that lays out how and where the program began). As is well known, the rationale underlying that mass-murder system (and it was a "system") was grounded, specifically, on eugenics and, more abstractly, on the notion of an "idealized human". The Shoah, on the other hand, was grounded on a racial principle according to which any race considered "non-Aryan" was a valid target of racial cleansing. It is important to be conceptually clear about these two distinct operative concepts: the T4 program was one of mass murder; the Shoah was an act of genocide. One may not immediately appreciate the difference, but let me assure you, it matters both in legal and in ethico-political terms. This is a controversial perspective within what is considered "Holocaust Studies", but it is, in my opinion, a distinction to be aware of.
Second, the notion of "evil" that you impute to AI is rather imprecise, because it is likely based on an imaginary and speculative notion of AI. Perhaps a more productive way to approach this problem would be to look through the lens of what Gernot Böhme refers to as "invasive technification". A lot of work is being done on the ethical issues surrounding this progressive technification, given the problems arising as a consequence of this emergent and evolving process. The Robodebt problem is a classic example. As Prof. van den Hengen (quoted in the article) points out:
>Automation of some administrative social security functions is a very good idea, and inevitable. The problem with Robodebt was the policy, not the technology. The technology did what it was asked very effectively. The problem is that it was asked to do something daft.
This is, generally speaking, also true about most other computerized systems including the "AI systems" that are driving military and combat systems.
Thus, I'd argue that the ethico-moral concern needs to be directed first towards the designers and users of the systems, and only secondarily towards the technologies involved. Some, of course, disagree with this. They contend that we should be looking to design (and here they slip into a kind of speculative and futuristic mode) "artificial moral machines", that is to say, machines that are intrinsically capable of engaging in moral behaviour. This is a longer and more detailed treatment of the subject of "moral machines". I have serious reservations about this, but that is irrelevant in this context.
In conclusion, I would like to say that while I am empathetic to your personal situation, the article you have shared, though appreciated, is not really on the mark. This kind of discussion requires a more nuanced and carefully thought-out approach, and an awareness of the work that has been done, and is being done, in the field currently.
AndreasRaaskov OP t1_j28acyk wrote
Thank you for the extra sources; I will check them out and hopefully include them in further work.
In the meantime, I hope you have some understanding of the fact that the article was written by a master's student and is freely available, so do not expect the same quality and nuance as a book or a paper written by a professor with editorial support and hidden behind a paywall.
I hope one day to get better
robothistorian t1_j28b3m5 wrote
>do not expect the same quality and nuance as a book or a paper written by a professor with editorial support and hidden behind a paywall.
If you are going to put something out in public with your name on it (in other words publish) and want it to be taken seriously, then it is necessary to ensure that it is carefully thought through and argued persuasively. This accounts for the "nuance and quality". References are important, but in a relatively informal (non-academic) setting, not mandatory.
Further, professors (and other less senior academics) usually only get editorial support after their work has been accepted for publication, which also means it has been through a number of rounds of peer review.
>I hope one day to get better
I am sure if you put in the effort, you will.
Fmatosqg t1_j29622s wrote
Thx for putting in the effort and starting such conversations. The internet is a tough place, and there is value in your output even before you have the experience to write a professional-level article.
Indigo_Sunset t1_j25hlk9 wrote
If the goal is to build a morals gate for ANI, then the process is limited by the rule-construction methodology of the instruction writers. That is where the banality of evil sits in such a system: culpability. It is furthered by the apathy of iteration, in which a narrowly optimizing AI obfuscates its instruction sets into grey-scale through the black box, enabling a loss of complete understanding while culpability is denied with a 'wasn't me' and a finger pointed at a blackish box they built themselves.
In the case of Facebook, the obviousness of the effect has no bearing. It has virtually no consequence without a form of culpability the current justice system is capable of attending to. Whether due to a lack of applicable laws, the adver$arial nature of the system, or the expectation of 'free market' corrections by 'rational people', the end product is highly representative of the banality that has no impetus to change.