Greg M. Epstein is the Humanist Chaplain at Harvard and MIT, and the author of the New York Times bestselling book Good Without God. Described as a “godfather to the [humanist] movement” by The New York Times Magazine in recognition of his efforts to build inclusive, inspiring, and ethical communities for the nonreligious and allies, Greg was also named “one of the top faith and moral leaders in the United States” by Faithful Internet, a project of the United Church of Christ and the Stanford Law School Center for Internet and Society.
In June, TechCrunch Ethicist in Residence Greg M. Epstein attended EmTech Next, a conference organized by the MIT Technology Review. The conference, which took place at MIT’s famous Media Lab, examined how AI and robotics are changing the future of work.
Greg’s essay, “Will the Future of Work Be Ethical?”, reflects on his experiences at the conference, which produced what he calls “a religious crisis, despite the fact that I am not just a confirmed atheist but a professional one as well.” In it, Greg explores themes of inequality, inclusion and what it means to work in technology ethically, within a capitalist system and market economy.
Accompanying the story for Extra Crunch is a series of in-depth interviews Greg conducted around the conference, with scholars, journalists, founders and attendees.
Below, Greg speaks to two founders of innovative startups whose work provoked much discussion at the EmTech Next conference. Moxi, the robot assistant created by Andrea Thomaz of Diligent Robotics and her team, was a constant presence in the Media Lab reception hall immediately outside the auditorium in which all the main talks took place. And Prayag Narula of LeadGenius was featured, alongside leading tech anthropologist Mary Gray, in a panel on “Ghost Work” that sparked intense discussion throughout the conference and beyond.
Could you give a sketch of your background?
Andrea Thomaz: I was always doing math and science, and did electrical engineering as an undergrad at UT Austin. Then I came to MIT to do my PhD. It really wasn’t until grad school that I started doing robotics. I went to grad school interested in doing AI, and I was starting to get interested in this new machine learning that people were beginning to talk about. In grad school, at the MIT Media Lab, Cynthia Breazeal was my advisor, and that’s where I fell in love with social robots and making robots that people want to be around and that are also useful.
Can you say more about your journey at the Media Lab?
My statement of purpose for the Media Lab, in 1999, was that I thought that computers that were smarter would be easier to use. I thought AI was the solution to HCI [human-computer interaction]. So I came to the Media Lab because I thought that was the mecca of AI plus HCI.
It wasn’t until my second year as a student there that Cynthia finished her PhD with Rod Brooks and started at the Media Lab. And then I was like, “Oh wait a second. That’s what I’m talking about.”
Who is at the Media Lab now that’s doing interesting work for you?
For me, it’s kind of the same people. Pattie Maes has kind of reinvented her group since those days and now leads Fluid Interfaces; I always really appreciate the kinds of things they’re working on. And Cynthia, her work is still seminal in the field.
So now, you’re a CEO and founder?
CEO and co-founder of Diligent Robotics. I had twelve years in academia in between. I finished my PhD, then went to Georgia Tech as a professor in computing, teaching AI and robotics, and I had a robotics lab there.
Then I got recruited away to UT Austin, in electrical and computer engineering, again teaching AI and running a robotics lab. Then, at the end of 2017, I had a PhD student who was graduating and also interested in commercialization: my co-founder and CTO, Vivian Chu.
Let’s talk about the purpose of the human/robot interaction. In the case of your company, the robot’s purpose is to work alongside humans in a medical setting, humans doing work that is not necessarily going to be replaced by a robot like Moxi. How does that work exactly?
One of the reasons our first target market is hospitals is that it’s an industry looking for ways to elevate its staff. They want their staff to be performing “at the top of their license.” You hear hospital administrators talking about this because there are record levels of physician burnout, nurse burnout and turnover.
They really are looking for ways to say, “Okay, how can we help our staff do more of what they were trained to do, and not spend 30% of their day running around fetching things or doing things that don’t require their license?” That, for us, is the perfect market [for] collaborative robots. You’re looking for ways to automate things that the people in the environment don’t need to be doing, so they can do more important stuff. They can do all the clinical care.
In a lot of the hospitals we’re working with, we’re looking at their clinical workflows and identifying places where there’s a lot of human touch, like nurses making an assessment of the patient. But then the nurse finishes making an assessment [and] has to run and fetch things. Wouldn’t it be better if, as soon as that nurse’s assessment hit the electronic medical record, it triggered a task for the robot to come and bring things? Then the nurse just gets to stay with the patient.
Those are the kinds of things we’re looking for: places where you could augment the clinical workflow with some automation and increase the amount of time that nurses or physicians are spending with patients.
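To make that workflow concrete, here is a minimal sketch of the kind of event-driven hand-off Thomaz describes: an assessment saved to the electronic medical record triggers a fetch task for the robot. All of the names here (AssessmentEvent, FetchTask, RobotTaskQueue) are hypothetical illustrations, not Diligent Robotics’ actual software.

```python
# Hypothetical sketch only: these class names and this dispatch logic are
# illustrative assumptions, not Diligent Robotics' actual system.
from dataclasses import dataclass
from queue import Queue


@dataclass
class AssessmentEvent:
    """A nurse's assessment as it lands in the electronic medical record."""
    patient_room: str
    supplies_needed: list[str]


@dataclass
class FetchTask:
    """A delivery task for the robot: bring these items to this room."""
    room: str
    items: list[str]


class RobotTaskQueue:
    """Turns EMR events into robot tasks so the nurse can stay with the patient."""

    def __init__(self) -> None:
        self.tasks: Queue[FetchTask] = Queue()

    def on_assessment_saved(self, event: AssessmentEvent) -> None:
        # The EMR write itself is the trigger; no one has to request the fetch.
        if event.supplies_needed:
            self.tasks.put(FetchTask(event.patient_room, event.supplies_needed))


queue = RobotTaskQueue()
queue.on_assessment_saved(AssessmentEvent("Room 12", ["IV kit", "gauze"]))
print(queue.tasks.get())  # FetchTask(room='Room 12', items=['IV kit', 'gauze'])
```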
So your robots, as you said before, do need human supervision. Will they always?
We are working on autonomy. We do want the robots to be doing things autonomously in the environment. But we like to talk about care as a team effort; we’re adding the robot to the team, and there are parts of it that the robot’s doing and parts that the human’s doing. There may be places where the robot needs some input or assistance, and that’s fine, because it’s part of the clinical team. That’s how we like to think about it: if the robot is designed to be a teammate, it wouldn’t be very unusual for the robot to need some help or supervision from a teammate.
That seems different from what you could call Ghost Work.
Right. With most service robots being deployed today, there is a remote supervisor who is either logged in and checking in on the robots, or at least the robots have the ability to phone home if there’s some sort of problem.
That’s where some of this Ghost Work comes in. People are monitoring and keeping track of robots in the middle of the night. Certainly that may be part of how we deploy our robots as well. But we also think that it’s perfectly fine for some of that supervision or assistance to come out into the forefront and be part of the face-to-face interaction that the robot has with some of its coworkers.
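As a rough sketch of that design choice, here is a hypothetical escalation policy in the same spirit: the robot prefers asking a nearby teammate face-to-face and only “phones home” to a remote supervisor as a fallback. The names and logic are assumptions for illustration, not a description of any real deployment.

```python
# Hypothetical escalation sketch; names and logic are illustrative assumptions.
from enum import Enum, auto


class Helper(Enum):
    ONSITE_COWORKER = auto()    # face-to-face help from the clinical team
    REMOTE_SUPERVISOR = auto()  # the "phone home" path for off-site monitoring


def request_help(problem: str, coworker_nearby: bool) -> Helper:
    """Prefer visible, on-site assistance; phone home only as a fallback."""
    if coworker_nearby:
        print(f"Robot asks a nearby teammate for help: {problem}")
        return Helper.ONSITE_COWORKER
    print(f"Robot phones home to a remote supervisor: {problem}")
    return Helper.REMOTE_SUPERVISOR


request_help("supply drawer is jammed", coworker_nearby=True)
```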
Since you could potentially envision a scenario in which your robots are monitored from off-site, in a kind of Ghost Work setting, what concerns do you have about the ways in which that work can be kind of anonymized and undercompensated?
Currently we are really interested in our own engineering staff having high-touch customer interactions that we’re really not looking to anonymize. At this early stage of the company, if we had a robot in the field phoning home about some problem, that is such a valuable interaction that it wouldn’t be anonymous. Maybe the CTO would be the one taking the call and saying, “What happened? I’m so interested.”
I think we’re still at a stage where all of the customer interactions, and all of the data we can get from robots in the field, are incredibly valuable pieces of information.
But how are you envisioning best-case scenarios for the future? What if your robots really are so helpful that they’re very successful and people want them everywhere? Your CTO is not going to take all those calls. How could you do this in a way that could make your company very successful, but also handle these responsibilities ethically?