[00:40] Gui Cerqueira: It’s a pleasure. It’s a pleasure.
[00:42] MD: And we have Ramsay Brown from Boundless A.I. [00:46] Ramsay Brown: Thanks for having me. [00:48] MD: We’ve got Bob Hayes from Business Over Broadway. [00:50] Bob Hayes: Hey everybody. [00:53] MD: And Michael Bartlett. He’s a customer experience professional, focused on artificial intelligence, and works for JMARK. Hey Michael. [00:42] Michael Bartlett: Hey, how’s it going? [01:02] MD: Great. So I’m so glad we can all be here today. I wanted to start off this webinar by firstly establishing two things: your definition of artificial intelligence versus your definition of customer experience. I know that opinions don’t tend to vary that much when it comes to artificial intelligence, but with customer experience people tend to feel strongly about what it means. So I wanted to go ahead and get that out of the way before we get started. We can start with Gui. Gui, can you expand a little bit on what artificial intelligence is from your perspective, the artificial intelligence that will affect customer experience, and also on what customer experience is?
[01:39] GC: My perspective on artificial intelligence is basically when you feed a machine with data, or with enough stimulus, so the machine can by itself make a decision like a human being, like an intelligent being. So basically the machine is going to be able to analyze all kinds of data that is coming in and will assign different probabilities based on that data, to try to solve a problem by taking the most likely alternative from everything that is coming in. And when you add this to the amazing processing power that we have today in most computers and servers, the A.I. can actually make decisions that humans would be making, by the millions. When it comes to CX, for me, customer experience is something very personal. What I’m trying to say is that companies don’t have customer experiences. People do. You experience everything that you feel and memorize and use to make decisions as a customer. Companies, on the other side, are basically designing stimuli over time, trying to cause experiences in customers. So for me, customer experience is everything that someone is actually feeling and memorizing while they’re receiving different stimuli, different value propositions, from different companies.
[03:07] RB: I like the definition that John McCarthy gave of artificial intelligence: that artificial intelligence is, by any means that we get there, just the automation of the processes that animals are able to do mentally. So any cognitive or mental task that an animal or a human can do, if a machine can be built to simulate the end result of that task, it can be said to be artificially intelligent. What I like about that definition is it doesn’t talk at all about how you have to build that or how you have to get there. That can be done through things like expert systems, which were really popular in the 80s, single-layer perceptrons like in the sixties, or, today, deep recurrent back-propagating neural networks that can, based on their experiences, learn as they go. All of these techniques get us to the place where we’re really just after building and rebuilding the cognitive processes that define what we think of as intelligence.
[03:57] RB: That’s exceedingly exciting to me. Then when I think about where the rubber hits the road for CX: CX, in my book, is where user experience, as we defined it as a design ideology, meets commerce. There are a lot of things I can do in my life that are about user experience but are not about buying or selling something. Then I enter this commerce space and all of a sudden customer experience is everything I’m sensing in the environment around me towards the goal of getting me to buy a thing or keep buying a thing. And that itself has novel properties, where there are important design principles and methodologies we can use to enhance it. So when I think about where these two interact, where we automate the things humans are able to do, like sense and perceive people’s emotions, the things that make good salespeople good salespeople, and then combine that with being able to do it in an autonomous way that scales really well, that’s what makes me most excited about where A.I. can be applied here: getting, at scale, these things that humans were already really good at.
[04:58] MD: Awesome. Bob, you’re next.
[05:01] BH: Hi. I think A.I. is a field of computer science that allows you to develop computer systems that can do tasks that are traditionally performed by humans. So if you can do that, then you have an artificial intelligence system. One approach, and I think it was Ramsay who talked about this, is an expert system: you program certain things into the system. Another way is using machine learning, where you feed the system a lot of data and train it to find hidden insights, uncover patterns in the data, and give you better insights about your customer. Customer experience is simply the customer’s perception of your brand and their interactions with your brand. What I like about this is that any kind of process generates data, and if you can analyze the data from that process, you can understand what drives that process. And that’s why I like A.I. and machine learning: you can uncover things that you may not have ordinarily uncovered as a single data scientist who only has one brain and two eyes. You just feed it some data and give it an outcome, and you can find out very quickly what kinds of things predict the outcome you’re interested in.
[06:30] MD: Awesome. Michael?
[06:32] MB: It’s good to see a consensus going on here, because I don’t see any radical opinions, which is good. So I don’t really have much to add. With artificial intelligence, an example I always give is that I think a handheld calculator is A.I., and that really blows people’s minds sometimes. But if you think about it, if I walk up to a rock and I ask it to do a sum for me, the rock doesn’t do it, because it’s just a rock, right? It’s not intelligent. You can take that analogy and run with it. Eventually you get to a point where you say to a human being, “What’s 20 plus 10?” and the human being will be able to tell you, because it’s intelligent. So that’s why I think the handheld calculator is a good example for A.I., and I agree with pretty much everything that everybody’s said so far.
[07:17] MB: So I do think it has to be computational. I do think that it’s mimicking something that a human being would be able to do, although let’s not forget that we have intelligence in animals as well. My dogs are intelligent to a degree; dolphins are intelligent to a degree as well. So it’s mimicking that kind of understanding and then being able to apply the information it’s got to an outcome. And customer experience, I think Bob said it perfectly: it’s your perception. No matter how good the experience objectively is, customer experience is a person’s perception of going through an experience.
[07:50] MD: Well, all great definitions right there. I wanted to start off with our number one topic, hyper-personalization. Hyper-personalization is maybe the greatest use of artificial intelligence that we’ve seen in the market nowadays. Before, we had personalization, where instead of just a regular email you would get your first name, “Hi Bob, I’m glad you could be with us,” but the experience wasn’t necessarily tailored much beyond that. Nowadays we’ve got Netflix, which is able to tailor the entire experience according to what the machine understands to be that human’s preferences and behavior. So I wanted to talk a little bit about how you feel A.I. is contributing to the hyper-personalization experience, and how it will continue to do so and advance. Whoever wants to start, go ahead.
[08:52] GC: I like the example you gave about Netflix, because for me Netflix is an icon when it comes to hyper-personalization. You could take my user profile in Netflix and it would not feel like my user profile fits your user profile. You could take two people like me, same age, same demographics, same every characteristic that I have, and when you check their user profiles, the experiences will be entirely different. The movie suggestions are going to be entirely different. So for me, Netflix is an amazing and very tangible example of hyper-personalization and of how A.I. can actually support a customer experience, because every time I open Netflix, the suggestions of movies or series are super in line with what I’m expecting. However, there is a concern that I have with hyper-personalization: it’s when you stop seeing the world around you and new stimuli, and only get things that you are likely to like. That’s my concern about hyper-personalization when it comes to customer experience, but I want to pass the word to maybe Bob or Michael or even Ramsay so they can add more on top of it.
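The kind of suggestion engine Gui describes can be sketched in miniature as user-based collaborative filtering: score the titles a person hasn’t seen by how similar users rated them. The users, titles, and ratings below are invented for illustration, and real recommenders (Netflix’s included) are far more elaborate than this.

```python
from math import sqrt

# Toy ratings: user -> {title: rating}. Illustrative data only.
ratings = {
    "ana":  {"Dark": 5, "Narcos": 4, "The Crown": 1},
    "ben":  {"Dark": 4, "Narcos": 5, "Mindhunter": 4},
    "carl": {"The Crown": 5, "Bridgerton": 4},
}

def cosine(a, b):
    """Cosine similarity over the titles two users have both rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[t] * b[t] for t in shared)
    na = sqrt(sum(a[t] ** 2 for t in shared))
    nb = sqrt(sum(b[t] ** 2 for t in shared))
    return dot / (na * nb)

def recommend(user, k=2):
    """Suggest titles that similar users liked but `user` hasn't seen."""
    me = ratings[user]
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = cosine(me, theirs)
        for title, r in theirs.items():
            if title not in me:
                scores[title] = scores.get(title, 0.0) + sim * r
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]
```

Here `recommend("ana")` surfaces titles only from users whose taste overlaps hers, which is exactly why two superficially identical people can end up with entirely different home screens: the suggestions follow the rating history, not the demographics.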
[10:06] BH: Yeah, I agree. I think personalization is just a segmentation of one. I want to know a particular customer’s needs, wants, and desires, and I have all this data about this person, so I can better predict what they like and what they don’t like. And Gui had a good point. I’ve got a concern about hyper-personalization too, and while it may not be strictly about CX, it’s related: if you look at the news, the goal of A.I. and machine learning is to optimize an outcome. Typically the optimization target is the number of clicks you get on an ad, or whether you buy products, and so forth. So if you’re being fed news that you agree with, you will click on it, and you will ignore news that you don’t agree with, and like you said, you’re kind of masking everything else around you and seeing only what you like. And that can be problematic not only for society but for a company, where you don’t know what other things customers like, or might like, beyond what they’re clicking on currently.
[11:09] GC: Hyper-personalization can actually get boring with time, right? [11:13] BH: Exactly. And in fact, I make an effort to seek out other stuff, other content, other opinions, so I can be better informed about why my opinion might not be solid. You have to actively look for other information in this world of A.I.
[11:32] MB: And this brings up something very interesting. Last night I was reading Ramsay’s online book, and he was talking about how it’s not just whether something is pleasurable but whether it’s unexpected as well. So if you’re just getting news that you expect all the time, after a while that dopamine effect wears down. I thought that was really interesting, Ramsay. I was wondering if you could expand on that in this context, because I hadn’t heard it before. [11:59] RB: Yeah, absolutely. So for those who haven’t checked out the book, my background is as a computational biologist, with a specialization in the neuroscience underneath addictions and motivated behavior, before coming and founding Boundless. And there are a few things I did want to say first regarding the hyper-personalization problems, and I’m going to go ahead and throw a controversial wrench in the machine here. The first is that when everyone leaves this panel, all of us included, we’re under deep economic incentives to go and explore how we can best leverage hyper-personalization. It’s all very well to say that we’re worried about hyper-personalization on a philosophical or philanthropic level, but the practical reality is that, because of all of our businesses and the incentive structures we’re under, this is the new normal.
[12:41] RB: We’re not walking back from it. The second thing is that if you leave YouTube’s autoplay on and just keep letting it play the next thing, it’s predicting, based on your past behavior, what you’re going to want, and the prediction models do this perfectly adequately. It almost unilaterally leads you towards videos designed to be very radicalized or extreme. What they found is that these are the videos that get the most attention, the most views, and a lot of this falls out of just the attention economics of the matter. YouTube makes its money off of how much people are watching, so it places the A.I. under weird optimization goals, because for any learning system you build, you have to set a success criterion, something it’s going to tune towards, and their success criterion is just eyeballs. This leads to very alarming and disturbing things. And the third thing is that I spent some time with NATO this past summer at a summit where one of the other presenters had done some data mining suggesting that this idea that what people really need is news from the other side, so to speak, doesn’t actually solve the problem at all. In fact, because of some cognitive biases, if you show someone who is even slightly liberal something that is even slightly conservative, trying to break them out of this hyper-personalization box, they actually retrench further into their own opinion. The studies suggest that we are actually almost permanently past being able to fix this part of discourse. So that’s the wrench that I throw in about personalization: figuring out what the heck we’re going to do with it, given the deep philosophical problems with it.
[14:13] RB: And then there are the deep pragmatic reasons why everyone in this room is going to go and leave and do that anyway. As far as the surprise part goes, a lot of what keeps us glued into these systems, and what we can optimize towards from the CX perspective if we’re really interested in getting the most out of this, is how we inject those moments of variability. How should it be that in a purchasing flow, or in a success-management relationship with a customer, a brand can go above and beyond to be delightful? Because it is only in those moments that we entrench the habits of brand affiliation and brand loyalty and brand identity. If the products are always just quality, that’s just a baseline. It is when and how we can go above and beyond, in ways that we can model out but that the customer won’t see coming, that it’s going to be effective in terms of rewiring their habits around our brand.
[15:08] MD: I wanted to move on into the future of hyper personalization and what the next stage is going to be. Is it going to be like ultra hyper personalization? How does this get more personal? How is A.I. going to play a role in that?
[15:30] BH: I’ll go first. Well, I think the more data we get, up to a point, the better we will be at predicting an outcome of some sort. As long as we have enough data about somebody or a group of people, we can better understand them and suggest products, videos, and so forth to them. We live in a big-data world where everything is digitized, so companies and governments are just finding out more about us, not just what we buy, but the things we look at, where we go, where we hang out, who our friends are. It’s just going to continue. As Ramsay said, we opened the box, so we’re not going back.
[16:15] MD: Right.
[16:16] BH: We’ve got to deal with this somehow.
[16:19] RB: I think it’s interesting because, especially in this post-GDPR world, we’re now seeing a cultural backlash: people would prefer to not have all of this known. They also would prefer a really superior customer experience. So what do people really want? Do they want an experience where the machine appears to truly get and know them, or do they want data privacy?
[16:42] MD: It’s throwing the Frisbee at the dog, right?
[16:46] RB: Yeah. It’s like the dog wants to play fetch. He does want to return the Frisbee, but he wants you to keep throwing the Frisbee.
[16:50] GC: One thing that I believe is not only going to help hyper-personalization but is also going to be the future of it, since you asked about the future: I think there are basically two things about to happen. One is cross-industry collaboration, where different companies cross-reference the information that they’re getting from customers to help them create more personalized experiences for each of them. So airline companies can combine with Netflix’s database to understand what movies clients were watching on the plane or on their profiles, so Netflix can actually build a better selection of movies, and vice versa. Or you can have Whole Foods telling another company who is a vegetarian, so that company can create an experience that is more likely to appeal to people who are vegetarian.
[17:47] GC: So one of the things that I think is going to happen next is that different companies from different industries will combine their data to try to maximize hyper-personalization. There’s an ethical question behind that, but I see that it will happen, and it’s not going to take long. The other thing is what I call hot data and cold data. Today most of the hyper-personalization that we see is based on analytical data built on transactional information. Most companies have access to, and know, what customers are buying, how much they’re spending, who they are, what things they’re buying, but they don’t actually know why. Now there is a lot of NLP coming in, new technology for natural language processing, that will start digitizing everything that is being said in contact-center conversations, customer service, and different dialogues. And we’ll use this information that I call hot data, basically feedback that is more likely to explain the why behind the buy, and combine it into the models, so companies can build hyper-personalization using not only your transactions as a base, but also the things that you expect and the things that you have been talking about. So these are the two things that for me will drastically affect the next generation of hyper-personalization.
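A toy version of mining that “hot data” might look like the sketch below: tag each contact-center utterance with the reason categories it mentions, then aggregate the “why behind the buy” across calls. The reason lexicon here is a hand-written stand-in; in practice this layer would be a trained NLP model (intent classification, topic modeling) rather than keyword lists.

```python
import re
from collections import Counter

# Hypothetical reason lexicon -- a stand-in for a real NLP model.
REASONS = {
    "price":    {"expensive", "price", "cost", "cheaper"},
    "coverage": {"signal", "coverage", "reception"},
    "support":  {"rude", "waiting", "unanswered", "support"},
}

def tag_reasons(utterance):
    """Map one customer utterance to the 'why' categories it mentions."""
    words = set(re.findall(r"[a-z']+", utterance.lower()))
    return {label for label, kws in REASONS.items() if words & kws}

def hot_data_summary(transcripts):
    """Aggregate why-behind-the-buy signals across many utterances."""
    counts = Counter()
    for line in transcripts:
        counts.update(tag_reasons(line))
    return counts
```

Feeding the aggregated counts back into a personalization model is what lets it condition on what customers say they want, not just on what they bought.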
[19:05] MD: Well, reeling it back just a little bit. We established the topic of cross-industry collaboration, which is something we’re discussing on this panel, so let’s take it there. Among the things I have seen so far, there was a very interesting article about restaurants, I believe in the Wall Street Journal yesterday, about how a lot of restaurants are using information from apps, and even from reservation software, to record not only allergies and so on, but also what people enjoy about the meal, so digital apps can auto-suggest new dining options. I’ve also seen a lot of use for this in marketing, which makes sense, because if you’re tracing people’s behavior, it becomes much easier to target ads and other forms of advertisement and reach the people you want to reach. Can anyone contribute more uses of cross-industry collaboration, and speak to which is best in this regard: data-analytical models doing the prediction, or artificial intelligence, and what the advantages of each are?
[20:24] RB: Yeah, we should probably first get our terms straight about the line between artificial intelligence and data analytics processes, because it might not be right to count them as very different things here. If the machine is taking a set of information that we give it and, by some process, producing a prediction from it, we can call that artificially intelligent. So when I think about the line between the two, data analytics and A.I. are probably pointing at the same thing. When you think about what this means for using different data from different sources: the best way to predict future behavior is to have a very good understanding of past behavior. So if we have where someone’s been checking in, what they’ve been eating, how they have been spending their money, be it through a Facebook login that they use for authentication that we can then pull almost anything from, including things like personality traits, it becomes exceedingly easy to predict what they’re going to do next.
[21:25] RB: So that part of predictive analytics makes it pretty straightforward to know that the strategic application of particular types of behavioral-engineering interventions increases the probability that we can get someone to perform a particular type of behavior at a particular time, in a particular place. That’s what the future of this is going to be shaped like. As we get more and more of these data streams and tie them together, we get a bigger picture, so to speak, not just bigger data but a bigger, more whole picture of who a person is. Not that we necessarily need to know what’s going on inside their mind; we can predict what they’re going to do with their behavior, we can predict what they’re going to do with their purchasing.
[22:00] BH: Right. And that’s what a lot of the customer success platforms are doing: they’re analyzing a customer’s behavior to understand how, for example in SaaS companies, clients can leave at the end of the month if they don’t like the service. So what companies want to do now is look at how customers are using their product and solution, and understand whether people who use it a certain way are more apt to convert next month, maybe to a higher subscription, or to stay a customer, versus those who behave in a different way and are leaving. So if you know that information, if you know you want your customers to behave in a certain fashion, you can encourage those kinds of behaviors to ensure that they’ll stay on as long-term customers.
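Bob’s usage-predicts-churn idea can be sketched as a small logistic-regression model, written here from scratch in plain Python. The feature names, the toy data, and the labels are all invented for illustration; a production customer-success platform would use far richer behavioral event streams and an established ML library.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_churn_model(X, y, lr=0.5, epochs=2000):
    """Fit logistic regression by stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi                      # gradient of the log loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Hypothetical usage features per customer, scaled to 0..1:
# [logins_per_week, distinct_features_used]; label 1 = churned.
X = [[0.9, 0.8], [0.8, 0.9], [0.7, 0.6], [0.2, 0.1], [0.1, 0.2], [0.3, 0.2]]
y = [0, 0, 0, 1, 1, 1]

w, b = train_churn_model(X, y)

def churn_risk(usage):
    """Probability that a customer with this usage profile churns."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, usage)) + b)
```

A heavy user scores a low churn risk and a light user a high one, which is the signal a success team would act on, for example by triggering outreach to accounts that cross a risk threshold.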
[22:43] MD: Right. That’s kind of what you do over there at Boundless, right Ramsay? [22:47] RB: Yeah. We offer a behavioral analysis and behavioral engineering platform for ingesting and analyzing how people behave, so you can predict what they will do next, and we offer automated ways to intervene. It’s not enough that I go around and personally distribute high fives to everyone, because that doesn’t scale particularly well. But if we know something about the brain’s motivational machinery, if we have a really good behavioral event stream of how someone has been behaving, and we can strategically learn how to reinforce it at the proper times, we can predictably increase, at population levels, how often people come back, whether or not they churn, how long they’re retained, whether they perform key actions. We see this in everything ranging from e-commerce, like adding items to your cart, to medical and pharmaceutical adherence, because it turns out the brain is quite modelable at the behavioral level, and quite plastic if you have good control of the environment.
[23:39] MD: I feel like this is maybe the field that’s got the largest business appeal for corporations in the capacity of increasing revenue and decreasing churn. Is that right?
[23:54] RB: It takes the last chunk of risk out of the equation. The last piece of risk is how people are actually going to behave. You can control for supply chains, you can do really good market research and make sure your colors are right and the brand points are right, and then there’s that last piece: well, people still behave however they behave. This is a way for teams to systematically mitigate that last chunk of risk. It is fraught with peril, with humanitarian peril. This is why we talk a lot about ethics, and the imperative of transparency and alignment around using these behavioral-engineering techniques. It really does provide a kind of final frontier on what happens when the rubber hits the road in this interaction between humans and machines, where it turns out that, given adequate knowledge and adequate prediction models, we are very, very knowable and very, very changeable.
[24:45] MD: Yeah.
[24:46] GC: Ramsay, can I ask you something, or add to what you’re saying? I see that different companies, let’s take credit card companies as an example, if we take three major brands, they’re basically dealing with the same amount of data, with the same or similar customer profiles and personas, and they’re crossing all this data using different models and basically coming to very similar conclusions, and probably creating very commoditized customer experiences, because the conclusions are very likely the same. How do you see A.I. helping big companies like this find new answers in the experience that they could explore to get out of this commoditization zone? Because even though they are different clients, all the data is very similar. What I’m trying to say is, for example, I’ve seen many telecommunications companies working with data, and they usually find things related to coverage, signal, and internet as the main issues, and then address their value propositions to these three things. But no matter which company you’re dealing with, these are the three most important things, because they’re all huge, and they all have roughly the same size of population with the same transactions happening on a daily basis. So how do you see A.I. helping to find the new answers, the experiences that are more likely to motivate someone to pick your company instead of the other?
[26:19] RB: Yeah, absolutely, and that’s a lot of what we do at Boundless. We work with teams ranging from startups up to the Fortune 100 to help them understand what sort of goals they have for human behavior: what hurts right now in the organization, what drives success within the organization, how people (customers or internal users) are behaving, and how they need them to behave to meet these business objectives and humanitarian objectives. And when we find what those actions are, we use our out-of-the-box or bespoke analysis solutions to help them better know the places where they previously had unknowns, or had risk involved because they weren’t able to adequately collect or predict how someone was about to behave, and then leverage these types of interventions there. So for examples like telecom companies or credit card companies, we would start by analyzing: what about your users don’t you know, such that if you knew how people behaved, regardless of their motivations for the behavior, you’d be able to act more efficiently or better personalize the experience for them? And when we find the metric that could be changed by a behavioral-engineering intervention, which direction does it need to change? Do you need people to do something more frequently? Do you need them to do a hard thing once? Do you need them to do something less frequently? Then we design interventions that scale around that. So it’s very much a process of working with these teams hands on, taking these out-of-the-box projects that we have, deploying them, and tailoring them as we go. Because even though the A.I. solutions are very recyclable and the problems are very analogous, this always ends up being a little different between organizations.
[27:54] BH: Right. And some feedback on that: different organizations have different problems. One organization may have a churn problem, so you will find a set of variables that predict churn. Another company may have an upsell problem and find a different set of variables that predict that. So, like you said, you need to focus on what your outcome is and build your A.I. models around that outcome.
[28:16] MD: Yeah.
[28:18] RB: This is just a series of tools, just a series of hammers. If you don’t have a thing that you’re trying to solve with them, these are just very interesting Python packages. They’re just statistical methods for making predictions about future behavior based on past behavior, pattern recognizers of current behavior. Unless you have a really tangible business or humanitarian objective, these are just techniques.
[28:37] MD: Right. That’s something that’s also here as a topic for us to talk about – how A.I. is a tool. And I think Gui brought up a very interesting point about that, which is don’t build a wall because you have stones, gather stones because you need to build a wall. You want to expand on that a little bit, Gui?
[28:56] GC: Yeah. It’s because I see, sometimes even with our own clients, they just want to see, for example, a survey dialogue reacting to customers’ different stimuli in different ways. And sometimes the survey doesn’t do that, especially when the customer comes in with a very complete answer: everything that we want to know is already there. So sometimes the survey simply doesn’t react, because the A.I. understood everything the machine was supposed to understand, and they get frustrated because they wanted to see more interaction and dynamics. And I usually say, it’s not about showing off the A.I., in the sense that we have to create more and more interactions or features just so you can see them. It’s about using A.I. the best way possible so you don’t bother people, so you actually give them a seamless experience. Another thing that I see happening all the time is companies focusing on digitalization as if it would solve all of their problems, and selecting different technologies based on whether “A.I.” is written in the description or not, without actually having a question to ask first. Like Ramsay said, it’s all about what your goals are. Based on your goals, you’re going to find the best tool or the best A.I. You’re even going to check whether you need an A.I. to solve that problem at all, or whether it’s just a trend. Sometimes you don’t need an A.I. Sometimes all you need is a statistical model. Sometimes it’s something different that doesn’t contain A.I.
[30:47] RB: We’re happy to work with them then.
[30:53] MD: This is something that a lot of companies are doing, maybe mistakenly, not only with artificial intelligence but with other forms of technology out there. They’re trying to fit the technology into their product, as opposed to finding the right technology for their product. Am I right?
[31:08] MB: Yes, just look at blockchain. I mean, that’s a great example.
[31:15] MD: You want to expand a little bit on that Michael?
[30:35] MD: I’m sure there are a lot of people who are watching this webinar right now that are somehow looking for a way to fit A.I. into their business model. Right? And they’re like, oh, I have to get on board that A.I. train. How do I do this?
[31:18] MB: Well, I just see it everywhere. My brother works for a managed service provider company in England, and he told me recently they were at a big fair where all the different businesses are. And they had these two guys dressed in white doctors’ coats up on the stage selling their latest A.I. package, and I said, “Really? Well, what is it then? I’d be very interested.” He goes, “I think it’s just a piece of Python script, Mike. I don’t think they’ve really built anything.” And everybody’s just jumping in left, right, and center. What I always tell people, and it’s kind of making the point that’s already been made, is: let’s just look at the customers. Let’s look at what their needs are. Let’s figure out the best solution for them first. A lot of people throw a lot of money at some big platform and then can’t really leverage it.
[32:02] MB: So that was kind of the main thing I would want people to take away from this: do you even need an A.I.? The stuff that’s around now, and I know Ramsay is into some very cutting-edge stuff, isn’t much different from the stuff that was around 10, 15 years ago. The neural networks are still there, and so are genetic algorithms. It’s really funny, because I did my thesis on genetic algorithms to evolve neural nets in 1999, and now they’re saying evolutionary algorithms to evolve networks are the new big thing, when this was already being done 20 years ago. We just need to be cognizant of that.
[32:40] RB: It’s just the scale. It’s the scale and the speed, and the fact that cloud computing made it such that, for pennies on the dollar, we can now do things that would otherwise have cost a small company millions of dollars in server operating costs and staffing to run that department. And then two, and this is where, Michael, I will defend the shift: there’s this really fantastic, almost military-industrial-complex relationship between Silicon Valley and academia around open source packages for machine learning. So everything you did your thesis on, for example: it is true that we have not gotten very far in the base theory of using evolutionary algorithms to better train the weights in recurrent backpropagation neural networks. You’re right, that probably hasn’t changed since the 80s. What has changed is the ease of deployment and the human usability. A lot of these open source packages, maintained by teams ranging from Google to Caltech and USC, make it such that instead of being literally a thesis student working on this, you can be an intrepid undergrad with a six pack of Red Bull and a weekend to kill, and you can get something like that running. And that’s actually pretty substantial.
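For readers curious what "evolutionary algorithms to train neural network weights" looks like in practice, here is a minimal, hypothetical sketch in plain Python (toy network, data, and parameters invented for illustration, not any panelist's actual system): a population of weight vectors is selected and mutated by fitness, with no gradient computation at all.

```python
import random

random.seed(0)

def forward(weights, x):
    # Tiny one-neuron "network": weighted sum through a hard threshold
    s = sum(w * xi for w, xi in zip(weights, x))
    return 1.0 if s > 0 else 0.0

def fitness(weights, data):
    # Fraction of examples the network classifies correctly
    return sum(forward(weights, x) == y for x, y in data) / len(data)

def evolve(data, n_weights=2, pop_size=20, generations=30):
    # Random initial population of weight vectors
    pop = [[random.uniform(-1, 1) for _ in range(n_weights)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population
        pop.sort(key=lambda w: fitness(w, data), reverse=True)
        survivors = pop[: pop_size // 2]
        # Mutation: children are noisy copies of random survivors
        children = [[w + random.gauss(0, 0.1) for w in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=lambda w: fitness(w, data))

# Learn a simple OR-like rule without backpropagation
data = [((0, 0), 0.0), ((0, 1), 1.0), ((1, 0), 1.0), ((1, 1), 1.0)]
best = evolve(data)
```

The same selection-and-mutation loop, scaled up across cheap cloud machines, is essentially what makes this decades-old idea practical today.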
[33:55] MB: Did you see that video I sent you, Mary? I was working with this person online on these evolutionary algorithms, and I found out afterwards it was just a 16-year-old student in Egypt. I thought that was so cool.
[34:09] RB: And that’s the cool part. The reason all this is happening right now is that these things have gone from being exceedingly expensive and exceedingly nuanced, such that only the handful of experts who built them could operate them, to costing almost nothing to run. They’re available out of the box, and they’ve been deeply democratized and made open source, which is cool. That is cool. That is a difference. But it’s not a core technology difference; it’s packaging. And that packaging lets people build new and interesting things. [34:35] MD: I wanted to open it up now for the panelists to ask questions of each other. So I wanted to start with you, Bob: choose somebody you have a question for and fire away. [34:09] BH: Gui, how do you use A.I. or machine learning in your new survey methodology?
[34:55] GC: Okay. First of all, let me try to explain the basis of what we do. Basically, we are trying to avoid what happened with BlackBerry and, more recently, with ToysRUs, and with many former Fortune 500 companies that are now gone. They were registering great customer satisfaction scores or Net Promoter Scores while losing a tremendous number of customers. And in the end, the question is: were customers lying on these surveys? No, probably not. These companies were getting two things wrong. First, they were using the wrong metric, and second, they were asking the wrong questions. Today, most surveys that we see (and there are billions of them, billions of customer surveys responded to around the world every year) are still designed from the inside out, with companies measuring satisfaction and choosing what they want to ask customers.
[35:51] So for example, after purchasing a toy and leaving ToysRUs, I would get a survey asking me if the store was clean, if I would recommend it to friends and family, how much effort it took to find the toy, etc. And I remember always having great in-store experiences with ToysRUs, so I would always give them an amazing score on everything they asked. However, since most surveys are structured to verify positive or negative customer emotions about the company’s current value proposition, ToysRUs never had a clue why, nine out of ten times when my kids asked me for a toy, I would go to Amazon instead. So the problem with traditional surveys is that companies are not building a dialogue. It’s more like an inside-out interrogation instead of an outside-in, bilateral, one-on-one conversation. And that’s where the A.I. plays its part. Worthix is the first customer survey platform where companies don’t have to design a questionnaire: customers talk about whatever they want, and the survey re-engages with them, addressing the subjects they want to talk about, taking into consideration, while building this dialogue, everything being said by every other respondent in the same survey and in previous surveys for the same industry. In the end, we help companies first discover their true worth and then keep up with the speed of change, especially because, unlike traditional questionnaires, the Worthix survey is not static. Every time the market or customer expectations change, the Worthix survey will capture that nuance and that change.
[37:36] So now, if you want to know more about the technical way we build this real-time dialogue, because the way we do it is quite different: basically, we use a proprietary A.I., a girl we call LUCI, and LUCI stands for Listen, Understand, Converse, and Improve. LUCI’s job, from a technical perspective, is to understand what customers are saying in open-ended text, and to open new dialogue every time a customer’s responses are too shallow, until it builds strong feedback content. LUCI then correlates the different mentions given by different customers with their impact on those specific customers’ decisions, and informs companies about which experiences matter regardless of frequency, because sometimes customers are more likely to talk about certain things, but those things are not necessarily that impactful. So, regardless of frequency, we can pinpoint how likely one experience or another is to make a company, a product, or a service the most worth-it alternative in the market when the customer has the need that its value proposition was designed to address.
[38:41] BH: It sounds like you’re combining both content analytics and sentiment analysis to understand the customer better, to understand what drives their satisfaction or their perception of worth of your company.
[38:52] GC: Yeah. We are combining NLP, like text analytics and sentiment analysis, with different variations on how customers are grading different scores. And then we also combine that with big data. We usually dive into companies’ databases to see how people are actually behaving and consuming, and we correlate all this information, from what they’re saying to how they’re scoring to how they’re spending, and we find, among everything they’re saying, which things are most likely to actually be the motivators of the behavior. It’s not only about stimulating them to behave in a way we know they will follow, but about why they’re doing the things they’re doing and what their next expectation is regarding this particular company’s value proposition.
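The core of what Gui describes, checking which stated survey themes actually track real spending, can be illustrated with a Pearson correlation over toy data. All names and numbers below are hypothetical, invented for this sketch; this is not the Worthix pipeline:

```python
from math import sqrt

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length series
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-customer survey scores for two themes,
# alongside how much each customer actually spent afterwards.
store_cleanliness = [9, 8, 9, 10, 9, 8]
price_worth_it    = [9, 4, 8, 10, 3, 5]
spend_after       = [120, 40, 95, 150, 30, 55]

# The theme whose scores track real behavior is the likelier motivator,
# even if customers talk about the other theme just as often.
r_clean = pearson(store_cleanliness, spend_after)
r_worth = pearson(price_worth_it, spend_after)
```

Here everyone rates cleanliness highly, so it barely discriminates, while perceived worth moves with actual spend; that is the "wheat from the chaff" separation in miniature.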
[39:36] MD: Do you want to go ahead and ask a question, Gui?
[39:38] GC: Oh yes, let me see, I wrote some questions here. So Ramsay, my question is quite different. It’s regarding techno-cognitive bias. How do we make sure we don’t insert any human biases into the machine during the training stage, for companies that use, for example, human-in-the-loop approaches to train NLP models or any other models? How can we make sure to avoid techno-cognitive biases?
[40:18] RB: That is one of the biggest challenges that’s been identified around these solutions, and it’s actually one of the principles of the Allen Institute for Artificial Intelligence, which was working on something like a Hippocratic oath for A.I. practitioners last year, which was very exciting. To a first approximation, the solution that a lot of people are finding traction with is to have more than one type of person, demographically, by background, and by way of thinking, be the creators of these systems or be the source of the training data set. If we only let a group of people who look alike, behave alike, and think alike build a system for understanding behavior, they’re going to leave a residue of their own minds on it. But if we have a lot of different types of minds, with different perspectives, backgrounds, and identities, from different places, we can mitigate the risk that this thing will have a kind of myopia, by making sure it has been exposed to and created by many different types of minds.
[41:17] And that’s fantastic when you think about the imperatives around building a more inclusive workforce, especially in technology; we actually see that we’re deeply incentivized to do so. Even in a very pragmatic sense, if we’re going to build learning machines, they have to be modeled after more than just one kind of person, or else they’re all going to increasingly be shaped like the white, heteronormative, bearded, male California technocrats from the Bay who are currently the leaders in building these types of solutions. It’s no wonder such systems think and behave like them. So by building a more inclusive workplace, we are going to get there, I think. [41:47] BH: That’s a great argument for gender diversity in the workplace. [41:51] RB: If you don’t let people who think differently from you build machines, you’re just going to get a bunch of thinking machines that are tech bros.
[41:58] MD: So you actually have to have a sample of the general population training the machine, basically. If you want a proper representation of the general public, you have to have that sort of sample in your training.
[42:10] RB: Both in the development of the systems that will be doing the learning and in the training data they’re exposed to. If you feed it training data that contains latent biases, the machine is just going to pick up those biases and take them as basic truths.
[42:24] MB: This just happened, as well. Amazon had to shut down a recruiting project because it was unfortunately discriminating against one particular demographic, so they just shut the whole thing down.
[42:35] RB: Police departments have found that when they purchase predictive analytics solutions to help them understand which blocks of the city to patrol and send more cars to, it creates a feedback loop of cognitive biases, because the past training data used to build the prediction models already carried biases about which neighborhoods they’d hang around, and so it became a self-fulfilling prophecy.
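The feedback loop Ramsay describes can be demonstrated with a small simulation (all numbers hypothetical): two districts with identical true incident rates, where each round's patrols are allocated in proportion to the biased historical record, so the initially over-patrolled district keeps generating more records.

```python
import random

random.seed(1)

# Two districts with the SAME true incident rate; district A starts with
# more recorded incidents purely because it was patrolled more historically.
true_rate = {"A": 0.1, "B": 0.1}
recorded = {"A": 50, "B": 10}   # biased historical data
patrols_total = 100

for _ in range(20):
    # "Predictive" allocation: send patrols in proportion to past records
    total = recorded["A"] + recorded["B"]
    for district in ("A", "B"):
        patrols = round(patrols_total * recorded[district] / total)
        # More patrols means more incidents observed, even at equal true rates,
        # so the record gap persists and widens: a self-fulfilling prophecy
        recorded[district] += sum(
            random.random() < true_rate[district] for _ in range(patrols)
        )
```

After twenty rounds, district A's record dwarfs district B's even though nothing about the underlying behavior differs; the model has learned only where the cars were sent.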
[43:00] MD: Is there actually a way to make sure there are zero biases in a machine learning algorithm? Is that possible?
[43:07] RB: Probably not. To ask a question like that is to propose there’s a mathematical proof one could run that would return “yes, we can confirm there are zero biases” or “no, there are not.” So what you have to do, from a design perspective, is be cognizant that these biases exist, try to design around them, and have a threshold for saying “this is adequately de-biased,” because it’s not perfect. It’s never going to be perfect, but it can be good enough that we can look at it from multiple stakeholder perspectives and say, yeah, that cuts it. [43:35] MD: Ramsay, do you have a question you want to ask? [43:37] RB: Yeah, actually a question for everybody. Coming from behavioral neuroscience, there’s oftentimes a deep distrust of surveys, because they contain tons of cognitive biases, because people oftentimes construct post-hoc confabulations about why they do what they do. We are very, very good storytelling animals, and people often have aggressive blind spots between how they behave and how they think and talk about how they behave, or about their internal motivations, because a lot of our internal motivations are not consciously available to us; they’re hidden deep in the brain stem. We only get to look at our own behavior, and even then we’re very bad at seeing it. So for you gentlemen: given that nobody knows all the biases and where they come from, how might we use these types of statistical engines to subtract those biases, or combine them with other behavioral data, to actually get at what people really mean, no matter what they tell you? [44:40] MD: This goes along with a question Michael had proposed as well, which is the future of surveys. Where is it going? Are they even going to exist in the future? Gui, do you want to answer that?
[44:51] GC: Yeah. I think one of the things most people deploying surveys should be doing is making sure they correlate everything they get as a response, because humans are great at telling stories. Find the correlations between what people are saying and how they’re actually behaving, in terms of spending or not spending, buying or not buying, leaving or staying. You should backtrack from the things they’re saying to how they’re actually behaving, to isolate, among everything people are saying, and separate the wheat from the chaff, pinpointing what the most important things are for customers. So my point is, I don’t believe surveys will ever die, because transactional data can only show companies who’s buying, when they’re buying, how often they’re buying, and what they’re buying. Transactional data will never be able to capture the why, the why behind the buy.
[45:50] GC: And the only way to capture that is by dialoguing with customers. So surveys won’t die, but they will change a lot, and A.I. will be at the center of this change. Companies will no longer design questionnaires. Surveys will flow much more like a unique dialogue, allowing customers to better express themselves about the things that matter most to them. Sometimes it will look like a questionnaire or a chat bot, but sometimes it will be more like Cortana or Siri or Google Assistant, or even a video avatar that talks to you with a human voice and facial expressions, and that captures not only the things customers are saying, but also respondents’ voices, body reactions, their pauses, and everything related to how customers are reacting, in order to keep the dialogue flowing. And all this information will be used and correlated with what I’m calling cold data, which is all the transactional data, to verify which kinds of feedback or stimulus, which of the stories people create to justify themselves, are most likely to cause a specific behavior, based on the true behaviors that were registered. So that’s my point of view on the future of surveys and how companies should be adjusting theirs.
[47:12] MD: Great. I think we’re going to wrap for now and open for questions from the public and from the audience. So we’re going to take two minutes and we’ll be right back.