Fractal CEO on Tailoring Analytics to Mesh with Corporate Culture

Published by: Data Informed

Making a positive impact with analytics requires much more than expertise in statistics and experience managing data and IT systems. It also calls for embedding the results of analytics systems into business processes managed and used by people, including many who have no experience with statistics or data management.

That reality means that analytics professionals need to calibrate their approach to suit an organization’s existing culture to win widespread adoption, says Srikanth Velamakanni, the CEO of Fractal Analytics.

Velamakanni cites the example of a consumer packaged goods company that set ambitious goals for a new demand forecasting system: It had to return results fast and win global adoption by relevant managers. He says the system his team developed accomplished those goals by trading down on the sophistication of the forecasting algorithm so it could be more easily integrated into the company’s business processes. “That was a better way for the company to use analytics and to actually use it for the forecasting purpose,” he says.

Velamakanni has been with Fractal since its founding in Mumbai in 2000, and in a wide-ranging interview on October 16, he offered insights on trends in analytics use cases and implementations, and observations about the lessons analytics developers can learn from Google and others. He also discussed his company’s research into new directions for analytics. What follows is a partial edited transcript of the interview.

Data Informed: You talk about the need to institutionalize analytics. What have you seen in the evolution of your work over time?
Srikanth Velamakanni: We look at the world on a two-dimensional grid. On one axis there is the sophistication of analytics, and on the other axis, the institutionalization of analytics. On the sophistication side, you could just be doing spreadsheets, you could be doing slice and dice, you could be doing advanced analytics and predictive analytics, or you could be doing machine learning. That is a continuum.

On the Y axis you have a continuum that starts with being a very sporadic user of analytics, moves to departmental or organizational use, and ends with very institutionalized use. So if you take that grid and map out all the companies over the last 14 years that we have been around, we have seen companies systematically move northeast, to the top-right-hand corner of that grid. Companies such as Google and Amazon and possibly Netflix are in the top-right-hand corner of the grid. And we believe that all other companies are getting there gradually. But if you look at the underlying trends, I would say there are four key trends that are very critical. One is a man-machine combination [where] human beings are supplementing machines and machines are supplementing human beings in reaching the optimal decision.

And we see that happening more and more. Every decision that can be made through the use of analytics is gradually moving to the analytics way.

The second big trend that I have seen is around hyper personalization as well as mass customization.

One great example of that is Netflix. They basically used the many years of data they have on every customer pausing or rewinding or fast-forwarding the video they are watching. They record all these things as events. And using all this event data they have been able to mass customize a show like “House of Cards,” which arguably is a perfect television show, where they have dialed in all the elements that they believe customers really want to watch and dialed down the elements that the US audience doesn’t really like. That’s a great example of mass customization.

On the hyper-personalization side as well, we’re seeing a lot going on. What we do with customer genomics is an example of that. We look at customer transaction data from millions of transactions, and at what customers say on social media or other places, and we learn what the customers’ preferences are: what they like, what they don’t like, what their life stage is, or whether they are likely to move house in the next three weeks.

We can understand these things in a very probabilistic manner. And all these different labels of customers, which reflect their attitudes and preferences, are derived from transaction data, not from surveys or other kinds of data. We don’t believe in data sources like Acxiom and the like, which tend to get these kinds of data from research.

We don’t believe in asking customers. We believe in observing customers and understanding their actual behavior from their transactions.

So using that, we probabilistically determine their preferences. And once we have these preferences about customers, with all these labels attached to them, we can use that for hyper-personalized marketing that is a lot more effective and delivers a better ROI than otherwise.
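Velamakanni doesn’t describe the math behind those probabilistic labels, but one simple way to derive them from transactions is a naive-Bayes-style update over purchase categories. Here is a minimal sketch under that assumption; the categories, likelihood ratios, and base rate are all invented for illustration.

```python
import math

# Invented likelihood ratios: how much more often a "likely to move
# house" customer buys each category than the average customer.
likelihood_ratio = {"boxes": 4.0, "cleaning": 1.5, "furniture": 2.0}
prior_move = 0.02  # assumed base rate of moving in the next few weeks

def p_move(basket):
    """Naive-Bayes-style update of the 'likely mover' probability
    from a list of purchased categories."""
    log_odds = math.log(prior_move / (1 - prior_move))
    for category in basket:
        log_odds += math.log(likelihood_ratio.get(category, 1.0))
    return 1 / (1 + math.exp(-log_odds))

print(p_move(["boxes", "boxes", "furniture"]))  # roughly 0.40
```

A basket heavy in moving-related categories pushes the label probability well above the base rate, which is the probabilistic, observation-only flavor of labeling he describes.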

The third big trend is, we believe, around experimentation. What we are seeing is a relatively lower importance of pure market research. Especially asking customers what they want and why they like something is not so useful.

It’s still important to observe them, but not really to ask them. What we are seeing here is that people are increasingly doing experiments to understand customer behaviors. So if you look at your Amazon shopping cart and see that the price of products changes every day, or five times a day, or 10 times a day, that is because they are trying to understand your responsiveness to price changes and seeing what happens.
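As a concrete illustration of that kind of price experiment (not anything Amazon has published), randomizing prices and regressing log demand on log price yields a responsiveness, or elasticity, estimate. A minimal sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic experiment: show visitors randomly varied prices and
# record demand (the true elasticity built into the data is -1.8).
prices = rng.uniform(8.0, 12.0, size=500)
demand = 1000 * prices ** -1.8 * rng.lognormal(0.0, 0.1, size=500)

# Log-log regression: the slope is the estimated price elasticity.
slope, intercept = np.polyfit(np.log(prices), np.log(demand), 1)
print(f"estimated elasticity: {slope:.2f}")  # close to -1.8
```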

The most interesting trend for me is about understanding what is relevant now and telling you [an answer] before you ask the question. Traditionally, analytics has always been about answering a business question: building a model, predicting a customer response, and so on.

But if you think about the amount of data we have collected, most often we are struggling to understand what the right question is, rather than finding answers to a question, [when] we don’t even know what is relevant right now.

That, I believe, is the big question, and it is one of the most difficult questions to solve in the area of analytics because it is undirected exploration. And the only company that I know of that has done this very successfully so far is Google, with its Google Now product, which basically analyzes your emails and all kinds of other things and tells you what is relevant to you right now.

And I have personally benefitted from that in the recent past. Google alerted me to the fact that my nephew was flying out of the city that day, and I had completely forgotten about it. It picked up an email that was almost two months old and alerted me in the morning, saying, in effect, “Hey, look, this seems like an important event for you today: he’s going to fly out tonight.” That helped me go and pick him up.

That is something that is very, very difficult to do with analytics but we believe that is the next big wave in the analytics world.

Would you be doing research and development on that yourself?
Srikanth Velamakanni: We have set up what we call Fractal Sciences. That team is roughly 18 months old. We do some original research and IP creation in that team. And one of the things that the Fractal Sciences team has set up is what we call a data lab, [with] a bunch of people right now. They are given massive amounts of data, and some of this is in partnership with our clients, who have agreed to co-create IP with us.

So we look at the data and we say, “There are no questions here, just go and explore and tell us what is important right now.” They are beginning to explore that, and the hope is that they will come up with answers to questions that clients were not even aware they should be asking.
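Velamakanni doesn’t say what techniques the data lab uses, but one concrete form that kind of undirected, question-free exploration can take is unsupervised anomaly detection: flag the rows that least resemble the rest and hand them to a human. A hedged sketch with scikit-learn, on synthetic data, not Fractal’s approach specifically:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)

# An unlabeled "data lab" style table: mostly routine rows plus a
# handful of unusual ones (all synthetic).
routine = rng.normal(0, 1, size=(500, 4))
unusual = rng.normal(6, 1, size=(5, 4))
data = np.vstack([routine, unusual])

# Flag the rows that least resemble the rest; -1 marks an anomaly
# worth showing to a human analyst, with no question asked up front.
flags = IsolationForest(random_state=0).fit_predict(data)
print(f"{(flags == -1).sum()} rows flagged for investigation")
```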

What kinds of challenges do you see in customers moving from scattered deployment to institutionalizing analytics?
Srikanth Velamakanni: One example is a consumer packaged goods company that operates globally in 190 different countries. What they were trying to do was demand forecasting across a whole host of countries and categories, on a quarterly basis, for the next 20 quarters. The idea was to redo or revise these forecasts every quarter, for the next 20 quarters, on a rolling basis.

Now, one option here could be simply to take the most sophisticated approach to building these models. And that was our first preference, to actually use the most sophisticated technique. But what we realized when we thought about it a little bit further, specifically along with the client, is that the critical goal here was to institutionalize the use of forecasts in the organization.

Producing the forecast is one thing. The other is the business process around it: the way that business units agree to the forecast, plan the shipments and so on accordingly, and then actually use that forecast.

So that was very important, and the timing of the forecast was also very important. We had to get the forecast out quickly, because then there is a whole process of the company and various stakeholders buying into it and then using it. The automation of the forecast was very critical. And therefore, using the most sophisticated technique was not going to [lead] us to a great amount of automation.

And it would take too much time. So what we actually did was use a much less sophisticated but still good set of techniques called ARMAX [autoregressive moving average with exogenous variables] models, which are basically a slightly enhanced version of ARIMA [autoregressive integrated moving average] models.
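For readers unfamiliar with the technique, here is a minimal sketch of fitting an ARMAX-style model in Python with the statsmodels library, using SARIMAX with an exogenous regressor; the series and the promotion variable are invented for illustration, not the client’s data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic quarterly demand series with a promotion flag as the
# exogenous regressor (invented data, not the client's).
rng = np.random.default_rng(0)
quarters = pd.period_range("2010Q1", periods=40, freq="Q")
promo = rng.integers(0, 2, size=40).astype(float)
demand = 100 + 5.0 * promo + rng.normal(0, 1, size=40).cumsum()
df = pd.DataFrame({"demand": demand, "promo": promo}, index=quarters)

# An ARMAX(1, 1) model: AR and MA terms plus the exogenous promotion
# variable (SARIMAX with d=0 and no seasonal part reduces to ARMAX).
fit = SARIMAX(df["demand"], exog=df[["promo"]],
              order=(1, 0, 1)).fit(disp=False)

# Forecast the next 8 quarters under an assumed promotion plan.
future_promo = pd.DataFrame({"promo": [1.0, 0.0] * 4})
print(fit.forecast(steps=8, exog=future_promo))
```

In practice, the same fitting call would simply be looped over each country-category slice, which is what makes a process like the one he describes straightforward to automate.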

Our data scientists would not have been very excited about using that approach in the first place. But we did. And we automated the entire process for 10,000 country-category combinations. And once we got that entire forecasting process in place, over the next one and a half years we worked to improve the sophistication of the forecasting techniques we used while still retaining the automation benefits.

So that was a good tradeoff situation where a company actually moved up in terms of institutionalization rather than moving right to more sophistication because that was a better way for the company to use analytics and to actually use it for the forecasting purpose.

It sounds like you want to tailor the approach. Or is that a model story for an organization using analytics: get it embedded and then make it better over time?
Srikanth Velamakanni: This is not a one-solution-fits-all approach, because we have seen organizations where the sophistication of analytics is actually really critical.

It really depends on the situation at hand and some companies have a preference naturally to be sophisticated first and institutionalize later. And some companies prefer to institutionalize first and improve sophistication later. That really is a function of the organizational culture as well.

Having said that, I also want to point out that sometimes simplicity is better than complexity in terms of technique, because of institutionalization. The other thing that we’ve realized over time, especially as we institutionalize analytics, is that the complexity of analytics is important: it is important to have complex analytics that solve a problem very, very well. But in terms of how it is consumed in an organization, we have to make it very, very simple.

That is the only way to really [win over] consumers in an organization. What we realized is that all the fancy analytics that we can do should get hidden inside a very easy-to-consume tool like a visual dashboard, a visual storytelling infrastructure.

[With a visual dashboard,] the actual users and decision makers will never see the price elasticity models or the coefficients or something like that. All they will know is that if they tweak the price this way, this is how the market share will be affected. So they have simulators. They have a visual way in which they can look at what will happen if they make decisions based on that.
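A simulator of the kind he describes can be as thin as a single what-if function that hides the fitted model behind it. A hypothetical sketch, with both constants invented for illustration:

```python
# Hypothetical what-if simulator: the fitted elasticity model stays
# hidden; the user only sees "tweak the price, see the market share."
BASELINE_SHARE = 0.23      # current market share (invented)
SHARE_ELASTICITY = -0.9    # fitted share-vs-price elasticity (invented)

def simulate_share(price_change_pct: float) -> float:
    """Predicted market share after a given percent price change."""
    return BASELINE_SHARE * (1 + price_change_pct / 100) ** SHARE_ELASTICITY

for change in (-5, 0, 5):
    print(f"{change:+d}% price -> {simulate_share(change):.1%} share")
```

The decision maker sees only the price lever and the share outcome; the coefficients never surface, which is exactly the point.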

The best example of that is, again, Google. Given how complex and sophisticated the Google search machinery is, it still gives us very intuitive and very easy-to-use results.

That is something that every company has to do. Make it very easy to consume within the organization so that adoption overall increases dramatically.

Can you provide an example of what you called customer genomics?
Srikanth Velamakanni: Customer genomics was born a couple of years ago when we started working with a U.S. retailer that had 60 million customers, or 60 million households, and roughly half a million different products. One of their businesses was a tools business, a 100-year-old, very reputable business of theirs. For a long time, they had been segmenting their customers, saying these are my expert customers, these are my novice customers, and these are my intermediate customers. And the way they did [this] was that if people bought expensive tools frequently, they would call them “experts.” And if people bought infrequently and bought inexpensive tools, they would be called “novices.” Which they knew was not the right way of doing it, but that was the best they could do, because you can’t really ask a customer whether they are an expert or a novice. They didn’t have that infrastructure.

And the idea [of customer genomics] that we worked with them on started from a basic premise: you are what you buy.

We went to the merchandisers in the tools business and asked them to mark a few of the products. They had thousands of tools-related SKUs. We said, mark 50 of them and tell us whether these are expert products, novice products, or intermediate products. Just give us a few examples. And then we said, OK, if you are a customer and you bought one of these products, and it happened to be an expert product, then it’s more likely that you are an expert. And if you bought a novice product, it’s more likely that you are a novice.

We bootstrapped an algorithm that would automatically determine and label all the products as expert, novice, and intermediate, as well as all the customers as expert, novice, and intermediate.
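The interview doesn’t spell out the algorithm, but the “you are what you buy” bootstrap reads like label propagation on the bipartite customer-product purchase graph: seed a few products with merchandiser labels, score customers by the products they bought, re-score the unseeded products by the customers who bought them, and iterate. A minimal sketch under that assumption, with all data invented:

```python
# Label-propagation sketch on the bipartite customer-product purchase
# graph. Scores run from -1.0 (novice) to +1.0 (expert).
purchases = {                                  # customer -> products bought
    "alice": ["drill_pro", "saw_basic"],
    "bob":   ["saw_basic", "tape"],
    "carol": ["drill_pro", "router_pro"],
}
seed = {"drill_pro": 1.0, "saw_basic": -1.0}   # merchandiser-marked seeds

products = {p for basket in purchases.values() for p in basket}
product_score = {p: seed.get(p, 0.0) for p in products}

for _ in range(10):                            # iterate until scores settle
    # Customers inherit the mean score of the products they bought.
    customer_score = {
        c: sum(product_score[p] for p in basket) / len(basket)
        for c, basket in purchases.items()
    }
    # Unseeded products inherit the mean score of their buyers.
    for p in products - seed.keys():
        buyers = [c for c, basket in purchases.items() if p in basket]
        product_score[p] = sum(customer_score[c] for c in buyers) / len(buyers)

print(customer_score)  # carol scores as an expert, bob as a novice
```

After a few iterations the seed labels have spread through the graph, so every product and every customer carries an expert/novice score, which matches the outcome he describes.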

That’s how the problem was structured. Now, what we saw after that were some very interesting insights. We found out that the novices really don’t buy cheap tools. They actually buy expensive tools. But they buy cheap accessories. So that was one interesting finding. Another interesting finding was that the experts are not necessarily buying the most expensive tools, but one thing is clear about their buying behavior: They like to buy only the tools and nothing else from the store. They visit the store, they buy that tool, and they go back. They really don’t spend any more time buying products in the store.

And this was the beginning of a full-fledged program across all 17 of their business lines, where we basically started using transaction data to understand customer behavior better and then uniquely target customers for marketing programs.