So far, I have found multiple sources from each of my external experts on my topic and questions. I have read several of Hao's articles, which call for increased scrutiny in AI ethics. I have also learned from her other articles, which explain current events relating to AI (and other emerging technologies) and their positive and negative consequences. I expect her work to be highly useful for my research. Bostrom, Omohundro, and Yudkowsky's works are all closely connected; through them, I explored questions that arise when considering the longer-term consequences of AI, including the implications of developing superintelligence and general AI and the potential behaviour of self-improving AI. These topics are separate from the ideas expressed in Hao's work, but equally important.
Internally, I plan on speaking with Mr. Eguia. Given his background in computer science, I believe he could offer valuable insight into the current state of AI, as well as his own perspective on it.
- First Question: Do you have any past experience working with or studying AI? (This question can help contextualize his answers)
- Most important question: Do you believe the potential benefits of AI outweigh its potential harms?
Externally, I believe the best expert for my project is Karen Hao, given her expertise in AI and technology ethics.
- First Question: What do you believe are the most pressing ethical concerns raised by AI in its current state?
- Most important question: What considerations should we make moving forward with AI to mitigate the associated risks?
Alternatively, I could communicate with Omohundro, Bostrom, or Yudkowsky. Because their areas of expertise overlap, I would likely ask them similar questions.
- First question: What do you believe are the most pressing concerns raised by future AI development?
- Most important question: Should we try to prevent the development of superintelligent, self-improving AI? Is it an existential threat to the human species?
The next questions I need to tackle are, essentially, the questions above. What are the most pressing concerns of current/future AI technology? What steps can we take to mitigate the associated risks? Are there certain technologies we should avoid creating in the first place?
Right now, I need to do further research into what the positive and negative consequences of AI may be in the future. I would also like to gather more information about how AI is currently used.