
Using A.I. to Map the Brain and Program Behavior with Ramsay Brown

Emma Waldron
08 Oct, 2018

This post is a transcript of S2 E9 of the Voices of Customer Experience Podcast with Mary Drumond and James Conrad, featuring Ramsay Brown, Co-founder and Chief Operations Officer at Boundless Mind.

Episode Transcript:

[00:35] Mary Drumond: You're listening to Voices of Customer Experience. I'm your host, Mary Drumond, and on this podcast we shine the spotlight on individuals who are making a difference in customer experience. We also proudly bring you the very best of customer experience, behavioral economics, data analytics, and design. Make sure to subscribe or follow us on social for updates. Voices of Customer Experience is brought to you by Worthix. Discover your worth at worthix.com.

[00:35] MD: Ramsay Brown is a neurotechnologist and futurist philosopher. Trained at USC's Brain Architecture Center, Ramsay worked on brain mapping and pioneered a Google Maps for the brain. Now he's co-founder and chief operations officer at Boundless A.I., where he and his team are leading the edge of persuasive A.I. and behavioral design. An emerging leader in persuasive technology and artificial intelligence, Ramsay's work explores and furthers the intersection of brains, machines, and human flourishing. Joining me from California, here's Ramsay Brown.

[01:11] Ramsay Brown: Hey Mary. Hey James. Pleasure.

[01:12] MD: Yes. I'm here today with James Conrad, my guest co-host. Hey James.

[01:17] James Conrad: Hi Mary. Nice to be here.

[01:18] MD: Great. Ramsay, thanks for coming on today. I'm really interested in the work you're doing. You've got a lot going with artificial intelligence, and that's a topic that we're very interested in as well as behavioral science and neuroscience and stuff. So can you talk a little bit about your background, just so our listeners get a feel of who's talking?

[01:40] RB: Yeah, absolutely. So, like I said, I'm Ramsay Brown. I'm co-founder and chief operations officer at a company out in Venice, California called Boundless Mind, formerly known as Dopamine Labs. I got here through a lifelong passion for the things that live at this intersection of brains, minds, machines, and design. I heard this really great thing recently about how a friend described himself: I've always thought of myself first as an artist, because that was the thing that came easy. This business of making new things or building things was the part I didn't have to think too hard about. But formally I'm trained as a neuroscientist, a neuroanatomist and a computational neuroscientist. I spent 10 years at USC here in Los Angeles, where I built a Google Maps for the brain and did a lot of brain mapping work, before realizing that there was this thing my co-founder and I knew better than anyone in industry: human behavior, and why people do what they do at the core mechanism level inside their heads, as opposed to just the storytelling kind of level.

[02:44] RB: So we recognized we had this thing we could bring to the world: this deep understanding of motivation states. But instead of just becoming consultants or authors, we saw what was going on inside A.I. around all this personalization and prediction, and we recognized that there was a natural connection here that no one else had really started working on. So we built out this company, Boundless Mind, to predict when and how each person should be surprised and delighted, or encouraged, or praised for different behaviors they perform, to increase the frequency of those behaviors in the future. This is the idea of variable reinforcement. Let's take this tested method from science, apply machine learning and A.I. to it to add this kind of personalization layer, and then put it out in the world to help great apps and great companies not only reach business success, but help their users and customers achieve the kind of behavior change they wanted.

[03:39] RB: So it's one half how do we take this otherwise potentially frightening technology and humanize it, and one half how do we help teams achieve the KPIs they need using this new type of behavioral engineering.

[03:54] MD: Awesome. It sounds like pretty amazing work. I watched a video where you were talking about a neural systems language and Google Maps for the brain. Can you tell a little bit about that?

[04:10] RB: So that was the work I did in my PhD program. There are two big problems in neuroscience that I tried to solve. The first is that the brain is, to a first approximation, stupid complicated in terms of its basic wiring. Even if we treat the brain like your iPhone or the laptop you're recording on right now, it is an information processing machine: we get input, we have memory states, we make output. Of course it's far more subtle than that, but it contains structural circuitry.

[04:39] RB: You have specific brain circuits and specific brain structures that do specific things when connected to each other. And somewhere in Cupertino and on the cloud, there's a schematic for your laptop or your iPhone. But we're having a hell of a time figuring out what the schematic of the brain is. Mostly just due to sheer complexity. We've got good methods, but it's just a lot of it. One of the biggest problems is that there's so much of it, and researchers are collecting and amassing data about it so fast that the people who take those reports and manually type them into a machine, type them into the computer to put them in a database are being outpaced by the quantity of research created. So we're making too much new information, and we're not putting it in systems fast enough to analyze and make sense of it.

[05:22] RB: So the neural systems language was this thing I wrote to help computers and humans better cooperate on how they describe brain architecture and brain circuitry. I built this GUI, a piece of software on the net, that a researcher could use: she could take the new brain connections she had found in her lab, fill out a simple web form, and it would spit out a version describing that, kind of like computer code, that she could put into the paper she published. And now, instead of some researcher years later finding that and typing it into a database, a web crawler, an A.I., could go and look through her paper, find that code snippet, and automatically know what to do with it. Because computers have a hard time reading English, but they're really good at reading computer code. So I wanted to see how we could get everybody on the same page.

[06:08] MD: Tell me about what you're doing over at Boundless Minds and the work that you're doing, what problem you're trying to solve, and the mission that you guys set out to accomplish.

[06:16] RB: Right, so the mission is twofold. Apps and companies live or die by whether or not people come back and use the app as they're supposed to. Whether that's a consumer app or a back-office tool, if users don't use your tool, if they leave, if they go to one of your competitors, it's like any other business: you're dead, you're dead in the water. This idea of retaining the users or customers you have, engaging with them, and getting them to do the things they're supposed to do inside your app, things that are good for them and create value for you, is mission critical for every team. But the solutions to it aren't intuitive, and they're not the kind of thing that lend themselves really well to building in-house. So what we want to do is give those companies really powerful, data-driven, provable, tested, and testable solutions for getting people to come back more and do the actions they need them to do.

[07:05] RB: The other half of that coin is that when we look at a lot of the things that hurt in society today, they're largely based around behavior. Where a hundred years ago we were still dying of infectious diseases and pathogens we barely had names for, today the majority of the global disease burden is around behavior-related diseases: the majority of the things that hurt in our lives, ranging from debt, to social anxiety, to Twitter rage cages, to diabetes and cardiovascular disease. We can actually solve both of these at once. There are a lot of apps out there trying to do good and help people change their lives, and people want these kinds of deep behavior changes. So at Boundless A.I., we take this knowledge about neuroscience and this knowledge about A.I., combine the two, and now we have this platform that anyone can plug into and make their app habit forming. It will give their users the perfect little burst of dopamine at the perfect times to keep them coming back longer, doing the behaviors you need them to do.

[08:02] MD: So anyone can use it in any app. So if I want to use that say to maybe curb my LinkedIn usage, how would that work?

[08:12] RB: So this is part of what we're trying to pioneer with this idea of behavioral design. Just like there is industrial design, or architectural design, or graphic design, we think human behavior is designable as well. And it's not always the case that we want to increase a behavior. Some behaviors, like your LinkedIn use, you might want to decrease. Humans are complicated: there are things we want to start doing more and things we want to stop doing. So if you want to cut back on LinkedIn, and I don't want you to quit LinkedIn, I don't want you to delete your account or remove it from your phone or delete it from your favorites tab, you might find it useful to have a tool that helped you automatically use it less. So we released this thing called Space. It's available at youjustneedspace.com, and it helps any website on your laptop, or a lot of the major addictive apps on your phone,

[08:56] RB: become a little less habit forming by inducing you to take a little bit of a breathing moment, a little moment of Zen, before you're actually allowed to use LinkedIn. It injects this little meditative moment every time you want to browse LinkedIn.

[09:10] MD: Does it give you some quote, like Forbes, where they force us to read a quote of the day or something like that?

[09:16] RB: It gets you to take a couple of deep breaths, just pause, just be at peace with yourself. Check in for a sec and then you can have LinkedIn. But the goal here is that by separating whatever inside you dragged you to compulsively check LinkedIn, from the variable reinforcement you get from opening LinkedIn - sometimes there's notifications, sometimes someone wants to connect with you. Sometimes there's a good news article.

[09:38] MD: Sometimes there's nothing. 

[09:40] RB: Most of the time there's nothing. It's like a slot machine, and by disconnecting your ability to hit enter, yes, I want to go to LinkedIn, from feeling that little hit of dopamine, just making you wait a few seconds has been shown to decrease how effective the reinforcement actually is. So you still get to use LinkedIn. It's just not as sticky anymore.
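The slot-machine dynamic Ramsay describes is what behavioral science calls a variable-ratio reinforcement schedule: the reward arrives on a random subset of actions rather than every time. As a minimal illustrative sketch (not Boundless Mind's actual system; the function names and the 30 percent reward probability are assumptions for illustration), it might look like:

```python
import random

def variable_ratio_reward(rng, p=0.3):
    # Variable-ratio schedule: reward a random subset of actions,
    # like a slot machine (sometimes a notification, usually nothing).
    return rng.random() < p

def simulate(n_actions=1000, p=0.3, seed=42):
    # Count how many of n_actions would be rewarded under this schedule.
    rng = random.Random(seed)
    return sum(variable_ratio_reward(rng, p) for _ in range(n_actions))
```

Space works on the other side of this loop: by inserting a delay between the action (opening LinkedIn) and any possible reward, it weakens the learned association the schedule builds.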

[10:02] MD: Voices of Customer Experience is brought to you by Worthix. If you're interested in customer experience, behavioral economics, or data science, follow Worthix on social media or subscribe to our blog for the best content on the web.

[10:20] MD: At the beginning of the season, we spoke to Yu-kai Chou, who works with gamification, and one of the things he talked about was how we have white hat and black hat drivers that drive our behavior and our addiction habits. White hat is kind of like a greater good: do something that can either improve ourselves or improve mankind, and these things make us feel very good. But there isn't much of an addiction involved; it's not very compulsive. We do it for a while, then we get bored and abandon it. Black hat is very addictive. So you have gambling, you have other addictive behaviors, or even compulsive gaming or binge watching, right? These things keep bringing us back for more and more. But at the end, we're left with a feeling of being wasted and used and manipulated and cheated, regardless of whether we know what's actually going on behind it. So even though we know we're doing something that might be harmful to us, we still feel cheated. We still feel manipulated by that action. You've talked a little bit about this; I remember hearing you talk about this. Can you expand?

[11:37] RB: Absolutely. I used to be an academic; I have open, public disagreements with people's definitions of terms. So I agree with your framing that there do exist normatively aligned and normatively unaligned uses of behavioral design and behavioral engineering. I think he's right that what he's calling white hat versus black hat is a meaningful distinction. But where I disagree with him is on the claim that the right way to slice reality here, with the philosopher's knife, is by technique. I don't think that's true. I don't think the techniques themselves contain normativity, but that's what he's proposing: that regardless of where you want to use them, the techniques themselves are normative, that there are good techniques and bad techniques. I disagree aggressively. In the way that hammers can be used to build both hospitals and death camps, hammers don't contain normativity. They're a tool. They are a means to an end of achieving human ambition. What we build with hammers, the end goal and the drives that led us there, those are the parts we can have conversations about norms with. To that extent, I don't think there's anything in the behavioral design toolbox that's intrinsically one or the other.

[12:49] RB: Of all the techniques behavioral designers have at their disposal, almost none are so far out of the gray zone that you could call them intrinsically white or intrinsically black, with maybe one exception: leveraging the sunk cost fallacy almost always feels a little manipulative. But I do agree with him that there is something like a whiteness and a blackness here. We talk a lot about this in the book we just published, Digital Behavioral Design, where we try to figure out the constraints of the ethics here. So one thing I would propose back to you: even some of the techniques he would perhaps define as black hat, when aligned with the type of behavior change someone wants out of their life, aren't manipulative or coercive. They're persuasive. It's about what we want to do with them.

[13:44] RB: You can have a coach who uses things like optimal challenge, or optimal cueing, or reinforcement, even in real life outside a digital context, who can coach you to do very nefarious things or can coach you to be the better version of yourself. That coach's use of those tools shows that the tools and techniques themselves aren't normative; it's what you want out of them. So there are places where we've used variable reinforcement, something he might consider a dark pattern, hitting people with little bursts of dopamine, and we've used it to help people pay down debt on time, to study harder, to adhere better to a medication, to walk more and be more active after cardiovascular treatment. So we've used it in ways that are unambiguously positive in their normativity, even though he might consider them dark patterns. So I think some of that framing is off, but the idea that there's a good and a bad here, I think, is very real.

[14:38] JC: Ramsay, would you say then that it's really about looking at the end here, and not so much how we get there? If the end is positive, and good for humanity and good for the individual, does the way we motivate those behaviors really matter, as long as the result ends up in a positive place?

[14:58] RB: No, I don't want to go that far. I really don't think that's the case. From the beginning, we start by telling people: we're going to use some coaching techniques drawn from behavioral science to help you achieve the kind of behavior change you want; is that something you'd like us to do? You can get the user's consent that this is the kind of change they're looking for, and that they're okay being coached using these techniques.

[15:30] RB: What you've gained in that is their buy-in. The user is going to become the version of themselves they aspire to be, and you're going to use these proven techniques to get there. Starting with that is how we can mirror what's going to happen in the end, but get their buy-in to the means. I think that's critical.

[15:47] JC: Okay. Gotcha. Because you talked about having more of an open dialogue when we're talking about persuasive computing and using these techniques. So if people are aware that they're going to be subjected to or participate in something that will have different motivators along the way, then that's justified and makes sense.

[16:07] RB: I think that has to be the case moving forward. You know, again, these techniques are currently largely being used by some of the most powerful organizations in the world to drive behaviors online that are good for them and good for their bottom line, but that have been demonstrated to correlate with the onset of anxiety or depression. We need something better than that. My team is trying to democratize these tools and bring them out, so anyone who's building something exciting that will help people live better lives can do that, with as much force as the user wants, in the direction the user wants to change. But if we don't start having more public dialogues, it's going to keep being this thing that just kind of happens in the dark. That's not okay. Use of the tools is fine, but they need to be aligned with what people want.

[16:49] MD: But don't you think that people will still feel used and manipulated, even if they know? I work in marketing, so I mean, we do all the clickbait and all of the marketing funnels, and I can see them very clearly when I'm navigating the web. And I will still click on a pop-up every once in a while, even though I know exactly what's going on, and I'm like, dammit, fell for it again. And I still feel annoyed, even though I knew exactly what was going on.

[17:17] RB: So, just curious, do you or anyone you know have a personal trainer?

[17:21] MD: Yes.

[17:22] RB: Great. This is an example of a place where you've opted into these coaching techniques; the same types of things are going on. But you wanted that change. You were comfortable with that. You know that sometimes, but not every time, after you finish a set of exercises, they're going to give you a high five. You know that's going to take place. You still want it, right? That's what you're paying them for.

[17:46] MD: And then, going along with other interviews of yours that I've watched: if your coach is constantly giving you high fives, then it's no longer special. It just becomes the norm, and there's nothing unique or encouraging about it. So in a sense, do we all kind of want a Simon Cowell or a Gordon Ramsay to say horrible things to our face and call us an idiot sandwich, or whatever it is he calls them, so that when the high five comes, it's that much more valuable?

[18:19] RB: So it's not that we want to be called idiot sandwiches. What we want is variable reinforcement. We want delight. We want surprise. What we want is the neutral state, where we do the behavior and we get some sort of neutral confirmation that the behavior was done, and sometimes, according to an optimized, personalized schedule, we want to be delighted. We want it to feel good sometimes, but not every time.

[18:50] RB: And that's what my team and I have built our company around: finding, for every person, when should Mary, when should James, when should Ramsay get hit with a little bit of something positive when they don't see it coming, and we can predict when they won't see it coming, to put that smile on their face. And it changes for everyone. It's a mathematically intractable problem to go about solving yourself, so we had to bring machine learning into it. And when you do it right, you can see a really large increase in how frequently people do that reinforced behavior.

[19:20] MD: Also here, this is right where the machine learning and the artificial intelligence overlaps with customer experience, which is the main objective of this podcast. So you built an A.I. and what's that A.I. able to do? How does it create these surprise and delight moments?

[19:39] RB: Absolutely. So when we work with a new partner, they tell us which actions they need users to do more. And everyone has a good intuition on this. You know the button you need people to press, you know the flow you need them to complete and how frequently: a couple times a day, a couple times a week. And from your internal catalog of design assets, art assets, or UX, you know what you have on hand that you could use to show them, sometimes but not every time, after they complete that action, that would delight them. The hard part is how you show it. What policy should you have in place for deciding when to show that little delightful burst of something within the user experience or the customer experience? That's what we tailor. That's what machine learning takes care of. It has taken models from neuroscience that my founding team and I built around how the brain's motivation circuitry predicts when things are going to be delightful and how that should increase our future behaviors, and it models that for every person. It makes predictions about when it should give that to them.

[20:38] RB: It experimentally tweaks that. It checks and sees: "Hey, I predicted that giving it to Mary not this time, but the next time, was going to be really impactful. Let's see how she's behaving in a week. Let's see if it changed anything." And then it keeps learning as it goes, and adapts those learnings across every user and every one of our partners.
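The predict-tweak-observe loop described here resembles a multi-armed bandit: try different reward-timing policies per user, measure the resulting behavior, and shift toward what works. Below is a minimal epsilon-greedy sketch; the policy names, the running-mean update, and the class itself are illustrative assumptions, not the actual Skinner subsystem.

```python
import random

class EpsilonGreedyScheduler:
    """Learns which reward-timing policy best reinforces one user's behavior."""

    def __init__(self, policies, epsilon=0.1, seed=0):
        self.policies = list(policies)
        self.epsilon = epsilon          # fraction of choices spent exploring
        self.rng = random.Random(seed)
        self.counts = {p: 0 for p in self.policies}
        self.values = {p: 0.0 for p in self.policies}  # mean observed engagement

    def choose(self):
        # Mostly exploit the best-known policy; sometimes explore a random one.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.policies)
        return max(self.policies, key=lambda p: self.values[p])

    def update(self, policy, observed_engagement):
        # Incremental running mean of engagement observed under this policy.
        self.counts[policy] += 1
        n = self.counts[policy]
        self.values[policy] += (observed_engagement - self.values[policy]) / n
```

A system like this "checks in a week later" by calling `update` with the measured change in behavior frequency, then lets `choose` drift toward the policy that actually moved the needle for that user.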

[20:56] MD: So if it feels like I'm not really engaged by, let's say, the high five or the surprise and delight, might it do it less or more depending on how I behave?

RB: It knows how to pump the brakes, it knows when to pump the gas, it knows when and how to accelerate or decelerate, and it doesn't just know it ahead of time. It learns from you: when and how does Mary prefer to be delighted? It doesn't ask you. It doesn't try to do some survey. It just looks at the frequency of your behavior and some statistics it can run: oh, it appears, controlling for all else, that that worked. That's the big advantage we have here, and that's what I think makes our system so powerful.

[21:40] JC: As you talk about this surprise and delight, it's really interesting, and in the CX space, Ramsay, there's been a debate going on. You've studied the brain a lot, so I'd love to get your thoughts on this, even though I know you don't focus a ton of time on CX. The debate is around whether companies should be thinking about surprising and delighting at certain moments, making that their mission. Some companies say, we want to delight you all the time, versus others that believe it's more about predictability and consistency of an experience rather than having these spikes. If I'm a customer of, say, an airline, I expect a certain experience all the time. That's more important than if you delight me once or twice, because then it resets my expectations, and when I'm not delighted, I actually feel something's missing. Have you studied this behavior in the brain, and what's your point of view? It doesn't mean you can't have one without the other, but I'd love your thoughts on it.

[22:46] RB: Absolutely. It's a super critical question that we run into with all of our partners, and we tell them that your core value creation needs to be consistent. Let's go back to your example of an airline. You need users to have no ambiguity about what they're trying to accomplish, how you're helping them solve that problem, their progress through solving it, and whether they've got what they wanted out of it. That needs to be dead consistent. What can change is that 10 percent of extra sprinkle on top of the critical behaviors you need people to come back and do more often. That is what can be variable. That is what surprise and delight is made of. Surprise is not: take your core value prop and sometimes break it in half or inject some zany into it. No. Users want to solve a problem, to accomplish something. Help them do that. Be as clear and as thoughtful as you can in your designs and your iterations to get them there, and then don't touch it. Let that be the same.

[23:47] MD: It sounds a bit like Gestalt to me, where you're rewarding the good behavior and kind of shocking the bad behavior. Is that it?

[23:55] RB: It is a piece of it, and we definitely don't want to shock. As it turns out, and I'm going to lump humans in with animals here, since I came from academia: we learn a lot about why humans do what they do from the animals we get to study in the lab. Not just because we can't do the same types of studies and analyses on humans, but because we share so much of our genetics and our brain wiring, and the things we've learned from studying rats or pigeons or dogs teach us a bunch about humans. Are humans different? Yes. Humans have this added component of grace, autonomy, dignity, and probably something special that makes us, us. But a lot of the basic motivational states, and how we respond to our environment, we can learn from studying animals. And we find that in humans, punishment doesn't work.

[24:49] RB: We find that in humans, punishment actually has more to do with the punisher feeling a sense of justice and righteousness than it does with the punishee, the recipient of the punishment, changing their behavior. This is partially why prison recidivism is so high, and why there's been this big reform over the past 40 years in how we think about prison, to make it more about rehabilitation and reprogramming instead of: well, you did something bad, so we're going to do something really bad, separate you from society, and maybe you'll learn your lesson if you think about it long enough. No, that hasn't rewired the behavior. You have to rewire the habits, mental or physical. So punishment doesn't do a ton, but the idea of reinforcement as you describe it, praising the person for doing the thing they're supposed to, came out of the early work in behaviorism in the sixties, from pioneers like B.F. Skinner and Olds and Milner, and that is largely still what guides our behavior today.

[25:41] MD: Skinner is the name of your A.I., isn't it?

[25:44] RB: It is. It's the name of one of the subsystems of the A.I., yes. It's the part of the machinery that does automated experimentation. We're dealing with the fact that there's no tractable way for us to run all the stats on all of the people individually, so we've built this machine that designs its own experiments, iterates on them, and learns as it goes. That piece is named Skinner, and it's fun. The press certainly took to it, because it makes the hair on the back of everybody's neck stand up 10 percent.

[26:15] MD: Worthix is disrupting the market research industry with cutting edge technology and a revolutionary methodology. Visit worthix.com to learn how we're using artificial intelligence to improve customer experience at companies like Verizon, Jeep, Blizzard, HP, and L'oreal.

[26:36] MD: I heard you say this, and I'm going to kind of lead you to it: if you're getting an A.I. to program human behavior, isn't that kind of switching up the general idea? Instead of humans programming machines, it's machines programming humans.

[26:52] RB: It totally challenges our conventional notion about the directionality of control. It totally does. So here we are now in this reciprocal feedback loop with our machinery: as designers of behavior, we set instructions in the machine, the machine in turn sets instructions into people, and they in turn send instructions into the machine. A lot of what I'm trying to focus on in my life is that ever-fuzzy boundary of who's running whom. And we've had a lot of interest recently in the more futuristic end of this: where do we go when big data at global scale starts seeing patterns that humans were never going to see, and proposes, "Hey, if you tweak this thing in this one part of this group of people's lives, you can achieve the societal ends you'd hoped for, in terms of solving some of the remaining problems of Western liberal democracy"?

[27:45] RB: This is very much the machines-programming-humans aspect. We are not there yet, but we see what we're building as getting there. Given that, we've got a lot of fascinating questions in front of us. Questions of alignment: how do we make sure this is still giving people what they actually want out of their lives? Of control: how do we make sure this is the kind of thing we can stay on top of, and that doesn't get out of hand? Of transparency: how do we make sure people understand that this is now the world they're living in, and this is the new normal? So there's a ton of questions we need to be thinking about every day as we build this.

[28:17] JC: Yeah, it's a really interesting point, and of course you start to go down this road and people think of Hollywood and what they've seen in movies, and Skynet, and people get a little anxious about these things, especially as you get further away from Silicon Valley and the West Coast. But there's also an ethical issue to think about: as we learn more about how the brain works and how to manipulate certain drives, I can imagine a huge responsibility that comes with this, in how companies leverage it. You've talked about the responsibility of designers in thinking about how to use this sort of information to do good rather than to manipulate.

[29:04] RB: Totally. There's a huge amount of ethical burden here. And it's funny that you bring up the Hollywood piece. When we think about A.I., it almost always goes in the very Terminator direction. You're totally right. The direction I think about a lot is even reflected in my Twitter handle, RAB1138: THX 1138 was George Lucas' first film, a sci-fi dystopia that looked more like the natural end conclusion of Prozac and the DMV. It's much more about control than it is destruction. So I think there are lots of things we need to be safeguarding against in terms of what we're building here, why we're building it, and what sort of meta-implications there are in what it eventually means to have a rigorous, mature technology of behavior at global scale. How do we keep that aligned with making sure people are living their best lives and are still free to exercise the necessary dignity and autonomy they're due as people? And when it comes to how we think about the ethics, we talk not only with behavioral designers about how to use these tools, but we also have rigorous sniff tests that we run on the partners we work with, to make sure that when we go into business with someone who wants to use our services, we're aligned with them on what they want out of the world and what sort of behaviors they're going to encourage in their users.

[30:26] MD: Do you think that in this younger generation, or the generation that's coming up now, it's become so normal to have this participation of technology in our lives that we're a bit inconsequential with it? Like we don't think of the long-term consequences that it might bring? For example, we're all totally okay with all of our personal information being just out there for anyone to access, and we're totally okay with our every move being tracked by triangulation software, and we're totally okay with sharing every single detail of our lives, to a point where there are certain apps, like maps, that actually track your geographical location 24/7. Are we being too irresponsible with our acceptance of technology?

[31:20] RB: So I can only speak for myself here, as a private citizen, taking off the hat of a representative of the company. And I will say I'm 29 years old, so I'm what they are starting to refer to now as an old millennial. When I think about the opinions of those who are maybe 15 or 16, who have grown up in a world defined by connected mobile technology and near-ubiquitous data sharing, without speaking for them, I can imagine that this feels like just the way the world works, whereas the rest of us slowly watch our privacy erode. When we are born into a world that already has a piece of technology in it, we don't think of it as technology. We think of it as just the way the world works. There's this old adage that anything built while you were alive is at first considered a "robot" or "automation." After a while it's just a machine, and to the next generation that was born with it, it's just stuff. So in the 1950s they had the automated home domestic dishwashing robot, and then in the 1970s you had a washing machine, and now it's just that thing in the kitchen that takes care of the dishes.

[32:37] RB: I'd imagine the same is going to go for our privacy rights. When these young people are born into a world that's defined by these apps that know everything about them, sometimes better than they know themselves, that is maybe shocking to you and me, but to them it might be just kind of the way things work. Whether they're right or wrong is a separate question entirely. But I could envision the sense that it's like air: that's just kind of the way things go, whether or not it's right.

[33:17] MD: I feel like in customer experience, people are kind of stuck in this position where they don't really know what's right or wrong. Okay, people want tailored and catered experiences, and then we get into design. So we want our experiences to be designed for us, and we want to have peaks, and we want to have strong ends to have memorability. But then, once again, is it manipulation? You understand that there's a fine line there. We keep talking about designing wonderful experiences, but at the same time, are we manipulating these experiences, and how good is that?

[33:56] RB: This is a really important thing to define terms on, but this is how you know I'm an old millennial. One of my favorite examples is someone trying to get the Frisbee back from their dog. The dog wants them to throw the Frisbee, but never wants to actually give it back so they can keep throwing.

[34:13] RB: You can imagine all these young people saying, I want personalization. No, no, no, no data sharing, but I want personalization. Okay, what do you really want here? Because if you want me to throw the Frisbee, then you have to give it back first. And if you want me to personalize and make you feel like I've connected with you or validated you in the way that humans do intuitively, but machines are learning to do at scale, we're gonna need some data. So I can't help but wonder, from the end customer's norms, what do we want? Do we want our robots to behave like humans? If so, we're going to need to collect some information about you and your personal preferences and your tastes and your past purchase history and behavioral history to get you that. Or would you prefer this to still kind of feel like the 90s? Because you can have one or the other. That's fine. Now, I know what the corporations want. They want to increase the return on investment of their technology investments and the worth of their capital. So they're going to implement any new techniques they can to make sure they can increase the value of their stock, and a large part of that is personalization. So here we are. I know which direction this is going to go, even with large pushbacks like GDPR. I'm curious to see how that's going to change things in the next year, but I have something close to low hopes that it's going to drastically change this creep of how much data we're sharing. But whether it's right or wrong goes back to the normative core: I still think it's about what we're doing with it.

[35:41] RB: If we're using the collection of data, and we've gotten people's permission to have this data, and we're using it aligned with the types of behavior change they want out of their lives, this is probably a normatively aligned thing. If we're using this type of data and we're not talking to people about it, and we're using it to persuade them to do things that they didn't really know were going on, or that they didn't want out of their lives, or that didn't align with their aspirational selves, we're exiting persuasion and we're entering coercion and manipulation. I think that's a really important distinction. There is a really hard line between persuasion and manipulation, and I think it's alignment with what people wanted for themselves that really comes to define the difference between manipulative and persuasive.

[36:25] JC: And Ramsay, so we've put these things in place to help drive the behavior that we want, or the company wants, etc. In the research that you've done, do you ever see a point where these triggers can sort of fade away, and it becomes habit, and people don't necessarily need the same incentive? Is there a point where that happens, or do we always have to have these things in place to continue to see the same behaviors?

[36:52] RB: We almost always need to have them in place, for this reason: there are a thousand other things pulling at us every day asking us to change our behavior. There is inertia in standing still or stopping the behavior, and there is inertia in keeping the behavior going, certainly, because there are so many other things pulling us in different directions. We find that this is something almost more akin to bathing or keeping up on practice. High-performance athletes don't get good at their sport and then stop because they were good at it once. They have to keep on top of it. People who adhere to diets are prone to slipping, because behavior degrades with time when not kept on top of. We find that a continual application of reinforcement like this is critical for long-term success.

[37:36] MD: Ramsay, in case people want to get in touch with you, if they want to hear more of what you have to say, how can they follow you? Where do you tend to share your thoughts? What's the best channel to reach you at?

[37:47] RB: Yeah. They can go to boundless.ai and get in touch with my team and me. We have an exciting new product coming out in a few months called Sesame. It's going to be accessible for small teams who just want to get started with variable reinforcement behavioral design, and we have growth and enterprise options for teams who are at scale, who have these types of mission-critical applications where they need best-in-class behavioral engineering. As far as personally, I tend to do a fair share of my shenanigans on Instagram. That and the Boundless A.I. blog are the best ways to keep up and get in touch.

[38:37] MD: Perfect. Wonderful. Well thank you so much for being on. James, thank you for coming on as well. I appreciate it and we'll be back next week with more. Thank you.

[38:46] RB: Awesome. Thanks Mary. Thanks James.

[38:52] MD: Thank you for listening to Voices of Customer Experience. If you'd like to hear more or get a full podcast summary, visit the episode details page or go to blog.worthix.com/podcast. This episode of Voices of Customer Experience was hosted and produced by Mary Drumond, co-hosted by James Conrad, and edited by Nic Gomez. Blog copy and summary by Emma Waldron.


 
