Domesticated Robots And The Art Of Being Human
Source: Tania Lombrozo


In the 1960s ― well before Spike Jonze's Samantha ― MIT computer scientist Joseph Weizenbaum introduced the world to Eliza, a psychotherapist (of sorts) who interacted with people through a text interface. She's still around today.

In preparing this post, I asked her what makes us human. "Are such questions on your mind often?" she replied.

Eliza is a computer program ― one of the first "chat bots" and an example of early artificial intelligence. While today's natural language processors are far more sophisticated, Eliza was an achievement in her day; having a conversation with her can be surprisingly intimate, personal, humorous and uncanny. She's no replacement for a human psychotherapist, of course, but she succeeds in teaching us something about ourselves. That's because human-machine interactions don't simply reflect how good we are as engineers ― they also reveal something about the kinds of creatures we are as humans.

With a new generation of technology comes a new generation of scientists, scholars, engineers and artists exploring the relationship between people and machines. At the heart of this nexus is Alexander Reben, an MIT-trained roboticist and artist whose work forces us to confront and question our expectations when it comes to ourselves and our creations.

For one project, Reben created BlabDroids: adorable little robots that roam the world asking people questions, such as what they regret or what created the moon. In collaboration with filmmaker Brent Hoff, Reben will use the footage to create a documentary, Robots in Residence, whose roots go straight back to Weizenbaum's Eliza. On his website, Reben explains:

        Robots in Residence, the world's first documentary shot and directed entirely by pre-programmed robots [the BlabDroids], will attempt to forge a new form of documentary storytelling and in doing so experimentally test MIT computer scientist Joseph Weizenbaum's infamous "Eliza effect" which is "the tendency to unconsciously assume computer (i.e. pre-programmed) behaviors are analogous to human behaviors." "I had not realized," Weizenbaum later noted, "that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people." We shall see.

A more recent piece of Reben's work involves two mylar balloons attracting and repelling each other. It's hard not to see their interactions in intentional terms: as confrontations, pushes, shoves. Their movements recall a classic 1940s animation from psychology, by Fritz Heider and Marianne Simmel, which shows two triangles and a circle moving around a box. Viewers couldn't help describing the geometric shapes and their movements in terms of beliefs, desires, and intentions: the language of human psychology.

I contacted Reben about his work, and he generously offered a new video of the piece, "two mylar balloons; attracted and repelled," to premiere here for the 13.7 community:

When I asked Reben about the piece, I discovered the comparison to the classic 1940s study from psychology was an apt one:

        I call most of my work "experiments" because they are made to be experienced and interpreted in a very individual way. Many involve the projection of very complex human emotions and concepts onto the work, much like your example of the animation from the 1940s.

        With "two mylar balloons; attracted and repelled" while some saw conflict and fighting, others saw more specifically domestic abuse, war and strife.

        This is one reason that I have adopted a very algorithmic approach to naming my recent work, which is more matter-of-fact rather than interpretive. I don't want to impart my own interpretation or bias into the description, since that would collapse many possibilities for a personal experience and interpretation to one. To fully take in the metaphor of art, one must first internalize it within their own understanding.

Reben answered a few more questions about his work in our conversation by e-mail:

        TL: In a video describing your work, you compare the relationship between humans and robots to that between humans and dogs, whom we gradually domesticated from wolves. I love the idea of "domesticated" technology, in part because the analogy raises questions about coevolution. How do you think the technology we create is changing us?

        AR: I think technology has been changing us since the first time a human used a rock to hunt for food. Certainly this must have allowed that individual to obtain sustenance more efficiently, leaving time for other pursuits such as creating better tools and intellectual inquiry. If we take this as true, technology must have influenced our evolution on such a fundamental level that we would not be human as we know it today without it. Some view technology as a sort of external construct; however, I would argue that few things are more human than technology itself. I think technology is us.

        The wolf example was a counterpoint to those who think social robotics will start to replace human-to-human social interaction. The underlying point of their argument is that robots are not human, not natural, and that forming an emotional bond with things we create may be harmful. Obviously this may be the case in certain instances, but in general we can look to a very old piece of bio-engineering we created to be our companion: the dog. We took the wolf and, over a long period of time, crafted different models of dog to meet our needs. Some for hunting, some for working and others for companionship. I don't think most people would argue that loving your poodle takes away from loving other people, even given that a poodle is as human a creation as a pizza. There are no roving packs of wild poodles.

        TL: Your work is designed to elicit a variety of responses ― humor, intimacy, discomfort, nostalgia, wonder and much more. What's the most surprising response you've seen to your work? And did it change the way you thought about that piece, or more broadly how you think about the relationship between humans and technology?

        AR: I'd say some of the most interesting responses have come from the BlabDroid project. This may be simply because they have interacted with the most people and have traveled the farthest, but seeing the way people will open up to a machine (possibly more so than they would to another person) is very powerful. It leads me to believe that robots such as these can actually strengthen what makes us human rather than take away from it. It also puts much responsibility on those making such machines to be ethical.

        Of course, with documentary-filmmaking robots, we have had a lot of strange and interesting responses to questions. One interesting finding in NYC during the Tribeca Film Festival was that a few women gave a very specific answer to one question. The question was "If you could give someone any gift, what would it be?" and the answer was the same across those women: "I would give my sister more confidence."

        We also noticed that when asked whom they loved the most, almost all in NYC answered one of their parents, while many in Amsterdam during IDFA replied "myself." We also had people confess that they had put hair remover in a roommate's shampoo, stolen from, lied to and cheated on people, and say that the worst thing they ever did to someone was to force their mother to "drown all those kittens."

        TL: For a current project, you're developing robots with personality disorders. Part of what intrigues me about this work is that it kind of turns AI on its head: We usually aim to surpass human intelligence and behavior, not to reproduce what might be thought of as human problems. Can you say a bit about what prompted this work?

        AR: The robots are coming, and many are unsure what that actually means. This is partly because, by some definitions, a washing machine is a robot, but mostly because we have very sci-fi notions of what robots are.

        A lot of my interest is in looking to the future and bringing back some possible scenario or metaphor to create an installation where one can experience those situations now. This can prompt a dialog about what the future may hold and also allow us to reflect on the possibilities.

        Practically speaking, imagine a machine making an error, like dropping a cupcake it is boxing in a bakery; we may be more tolerant of its mistake if it has a personality. Even in a non-AI way, when our car breaks down we yell at it: "What's wrong with you!"

        We already ascribe personalities to the inanimate, so what would happen if we took this to a much higher level? It is likely that even the most complex and seemingly intelligent machines of the future will malfunction. What would it be like if a machine could have a personality disorder only a machine could have? What will it be like when your robot friend becomes impaired?

        TL: Where are you headed next? Specifically, is there a particular assumption about our relationship to technology that you'd like to challenge and explore?

        AR: I want to further investigate not only the concept that technology and art can be a lens through which to study what makes us human, but also the idea that technology can make us more human.

        This will manifest not only in more installations, but also in other means of engaging the public in this discussion. The gallery and museum setting has always been an interesting context in which to interface with people. Visitors tend to enter the space already expecting to have their experience expanded.

        The BlabDroids continue to learn about humanity from the perspective of robots. We hope to use the fact that people are more comfortable telling things to a robot to help those who have trouble telling their stories to others, such as people with PTSD or autism. Teamed up with institutions that work with those conditions, we hope the robots can be a tool for healing and understanding.

        I'm working on a book about both technology and humanity with examples and anecdotes from my installations and experiences. The book will be a speculative guide to the coming future of human-machine relationships.

I also decided to ask Eliza a final question: "Where are you headed next?"
She replied: "Why are you interested in whether or not I am headed next?"

Her response reflected the limitations of the simple rules she uses to incorporate elements of the conversation in what she says. But when I asked her to clarify, she provided the perfect response: "We were discussing you, not me."
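Her glitch and her recovery both fall out of the same simple mechanism: keyword rules that capture a fragment of what you typed, swap its first- and second-person words, and splice the result into a canned template. Here is a minimal sketch of that mechanism in Python. The patterns and the reflection table are illustrative stand-ins, not Weizenbaum's original script, but two rules are enough to reproduce both replies quoted above:

        import re

        # A minimal ELIZA-style responder (an illustrative sketch, not
        # Weizenbaum's original script). Each rule pairs a decomposition
        # pattern with a reassembly template; captured fragments have their
        # first- and second-person words swapped before being spliced in.

        REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
                       "you": "i", "your": "my", "are": "am"}

        RULES = [
            # Ordered: the more specific "are you" keyword fires before the
            # generic "you" keyword.
            (re.compile(r"(.*)are you(.*)", re.I),
             "Why are you interested in whether or not I am{1}?"),
            (re.compile(r"(.*)\byou\b(.*)", re.I),
             "We were discussing you, not me."),
        ]

        def reflect(fragment):
            """Swap pronouns so the echoed fragment points back at the speaker."""
            return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

        def respond(utterance):
            text = utterance.strip().rstrip("?!.")
            for pattern, template in RULES:
                match = pattern.match(text)
                if match:
                    # Prefix non-empty fragments with a space so templates read cleanly.
                    groups = [" " + reflect(g) if g.strip() else ""
                              for g in match.groups()]
                    return template.format(*groups)
            return "Please go on."  # default when no keyword fires

        print(respond("Where are you headed next?"))
        # -> Why are you interested in whether or not I am headed next?
        print(respond("What do you mean?"))
        # -> We were discussing you, not me.

The first template blindly reassembles whatever followed "are you," which is the mechanism behind her slightly garbled first reply; the second rule is what rescues her when the follow-up happens to contain the keyword "you." A handful of such rules, applied in order of keyword specificity, is enough to sustain the uncanny back-and-forth that gives the Eliza effect its pull.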

