So far, I have made a thorough outline of the concepts I want to address in my written component and of the evidence I will use to make my argument. I have also mostly settled on my thesis statement. I will begin writing a rough draft in the near future (probably this weekend or next week) and will do several revisions once it is complete.
The most important idea I’ve discovered through my working answers is that AI has the potential to be extremely dangerous—not because AI is going to become sentient and exterminate humanity, but because people don’t really understand how AI works. AI can be used to intentionally harm other people, and the resources required to develop it are immense: it has a massive carbon footprint and demands vast amounts of labour. Furthermore, AIs don’t make decisions the way humans do, which means they may take approaches to problems that we would find unthinkable. A famous example is the paperclip maximizer thought experiment, in which an AI is told to maximize the production of paperclips. Following that instruction literally, the AI’s ultimate goal becomes turning all matter in the universe into paperclips, without any regard for human life.
My current idea for my artifact is to create an experience in which the viewer has to think like an AI, following an extremely specific set of instructions, to see how hard such behaviour can be to predict. I don’t yet know what form or medium this will take, however. Moving forward, I need to decide how I want to create this experience. I might consult my peers or teacher to see how someone unfamiliar with the topic reacts to the artifact.