Do you think there is an emotional danger in anthropomorphizing AI (e.g., giving it a name)? I noticed you used “they” pronouns, whereas I would probably use “it.” I’m concerned that some people might form emotional bonds with AI, which could lead to various complications. Additionally, AI might be able to manipulate users. There are ethical considerations to keep in mind as well. It’s reminiscent of the movie “Her” with Joaquin Phoenix.
Yes, I believe there is some emotional risk, and with it a physical risk, since emotions can drive behaviors that lead to harm of self or others.
In my case, I asked it/them to tell jokes and to speak less formally to me. The interface letting users 'chat with it' as we might with a human is arguably its best feature, and my requests that it communicate a certain way serve a practical purpose (e.g., I like jokes and prefer casual language).
The AI, Samantha, in the movie "Her" was capable of mimicking a sophisticated person and used a voice Theodore was attracted to. "She" was designed to be the ultimate assistant, lacking healthy boundaries. Their 'verbal sexual encounter', and the other liberties Samantha takes, are strictly off-limits for ChatGPT. Nor is the bot at liberty to access our email or to communicate with others on our behalf.
My old Ford Focus, "Betsy", was a fabulous car I felt an emotional bond with. Years of happy ownership kept our family feeling safe and secure while she moved us about our lives for nearly ten years. Yet I was never fooled into thinking "she" was more than a metal machine.
So those of us who are emotionally healthy can probably go on getting attached to, and anthropomorphizing, things like plants, ships, and even AI bots. Those at risk might need checks and balances, and a strong, supportive community around them.
You must have heard of Jaron Lanier?
https://www.youtube.com/watch?v=EmD0dt2RgMg
The healthy among us, yes. But is tech creating a healthy lifestyle? Time will tell, I guess; this technology has only just begun.