Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Scribe Software.
Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.
We’ll examine how business-intelligence (BI) trends are requiring access and automation across data flows from a variety of sources, formats, and from many business applications.
Our discussion focuses on ways that enterprises are effectively harvesting data in all its forms, and creating integration that fosters better use of data throughout the business process lifecycle.
Here now to share their insights into using data strategically by exploiting all of the data from all of the applications across business ecosystems, we’re joined by Jon Petrucelli, Senior Director of the Hitachi Solutions Dynamics CRM and Marketing Practice, based in Austin, Texas. Welcome, Jon.
Jon Petrucelli: Thanks, Dana.
Gardner: We’re also here with Rick Percuoco, Senior Vice President of Research and Development at Trillium Software in Bedford, Mass. Welcome, Rick.
Rick Percuoco: Hi, Dana. Thank you.
Gardner: And we're also joined by Betsy Bilhorn, Vice President of Product Management at Scribe Software in Manchester, NH. Welcome, Betsy. [Disclosure: Scribe Software is a sponsor of BriefingsDirect podcasts.]
Betsy Bilhorn: Thank you, Dana.
Gardner: Betsy, let me start with you. We know that more businesses are trying to leverage and exploit their data, helping them to become more agile, predictive, and efficient. What's been holding them back from gaining access to the most relevant data? What's the roadblock here?
Bilhorn: There are a couple of things. One is the explosion in the different types and kinds of data. Then, you start mixing that with legacy systems that have always been somewhat difficult to get to. Bringing those all together and making sense of that are the two biggest ones. Those have been around for a long, long time.
Gardner: Jon, how about your customers, at Hitachi? What are you seeing in terms of the struggle that they're facing in getting better data for better intelligence and analytics?
Legacy systems
Petrucelli: We work with a lot of large enterprise, global-type customers. To build on what Betsy said, they have a lot of legacy systems. A lot of data is captured inside these legacy systems, and those systems were not designed with an open architecture for sharing their data with other systems.
When you’re dealing with modern systems, it's definitely getting easier. When you deal with middleware software like Scribe, especially with Scribe Online, it gets much easier. But the biggest thing that we encounter in the field with these larger companies is just a lack of understanding of modern middleware and integration, and a lack of understanding of what the business needs. Does it really need real-time integration?
It's really a people issue. It's an educational issue of helping them understand that this isn't as hard as they think it is. Let's scope it down. Let's understand what the business really needs. Usually, that becomes something a lot more realistic, pragmatic, and easier to do than they originally anticipated going into the project.
In the last 5 to 10 years, we've seen data integration get much easier to do, and a lot of people just don’t understand that yet. There's a lack of understanding and a lack of education around data integration and how to exploit this big-data proliferation that’s happening. A lot of users don't quite understand how to do that. It’s the people side of it, and that’s the biggest challenge for us.
Gardner: Rick Percuoco at Trillium, tell us what you are seeing when it comes to the impetus for doing data integration. Perhaps in the past, folks saw this as too daunting and complex or involved skill sets that they didn't have. But it seems now that we have a rationale for wanting to have a much better handle on as much data as possible. What's driving the need for this?
Percuoco: I would definitely agree with what Betsy and Jon said. In dealing with that kind of client base, I can see that a lot of the principles and a lot of the projects are in their infancy, even with some of the senior architects in the business. Certain companies, by their nature, deal with volume data. Telecom providers or credit card companies are being forced into building these large data repositories because the current business needs would support that anyway.
Mostly, it's used from an analytics or BI standpoint, because now you have the capability of using big-data SQL engines to link and join across disparate sources. You can ask questions and mine information in ways that you never could before.
The aspect of extract, transform, load (ETL) will definitely be affected by the large data volumes, as you can't move the data like you used to in the past. Also, governance is becoming a stronger force within companies, because as you load many sources of data into one repository, it’s easier to have some kind of governance capability around that.
Higher scales
Gardner: Betsy, it sounds as if the technology has moved in such a way that big-data analytics, the platform for doing analysis, has become much more capable of dealing at higher scales and faster speeds at lower costs. But we still come back to that same problem of getting to the data, putting it in a format that can be used, directing it, managing that flow, automating it, and then, of course, dealing with the compliance, governance, risk, and security issues.
Is that the correct read on this, that we've been able to move quite well in terms of the analytics engine capability, but we're still struggling with getting the fuel to that engine?
Bilhorn: I would absolutely agree with that. When you look at the trends out there, when we talk about big data, big analytics and all of that, that's moved much faster than capturing those data sources and getting them there. Again, it goes back to all of these sources Jon was referring to. Some of these systems that we want to get the data from were never built to be open. So there is a lot of work just to get them out of there.
The other thing a lot of people like to talk about is an application programming interface (API) economy. "We will have an API, and we can get at all this great stuff through web services." But what we’ve seen in building a platform ourselves, and having that connectivity, is that not all of those APIs are created equal.
The vendors who are supplying this data, or these data services, are kind of shooting themselves in the foot and making it difficult for the customer to consume them, because the APIs are poorly written and very hard to understand, or they simply don’t have the performance to even get the data out of the system.
On top of that, you have other vendors who have certain types of terms of service, where they cut off the service or they may charge you for it. So when people talk about how great it is that they can do all these analytics, just getting the data in there runs into showstoppers on a number of fronts. It's very, very challenging.
Gardner: Let's think about what we are doing in terms of expanding the requirements for business activities and values here. Customer relationship management (CRM), I imagine, paved the way, where we’re trying to get a single view of the customer across many different types of data and activities. But now, we’re pushing the envelope to a single view of the patient across multiple healthcare organizations, or a single view of a process that has a cloud part, an on-premises part, and an ecosystem supply-chain part.
It seems as if we’ve moved in more complexity here. Jon Petrucelli, how are the systems keeping up with these complex demands, expanding concentric circles of inclusion, if you will, when it comes to a single view of an object, individual, or process?
Petrucelli: That’s a huge challenge. Some people might call it data taxonomy, data structuring, or data hygiene, but you have to be able to define a unique identifier for your primary object in the data. That’s what we see. Sometimes, businesses have a hard time deciding on that, but usually it jumps out at you.
The only things that will transact business with you in the world are people or organizations, generally speaking. A dog, a tree, or an asset is not going to actually transact business with you.
Master key
We have specialists on our team who do this taxonomy, architects who help organizations figure out what a master key is, a master global unique identifier for an object. Then, you come up with a schema that allows you to either use an existing identifier or concatenate a bunch of the data together to create one. That becomes the way you relate all of the objects to each other; it sets the foreign key that they hook up to.
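To make that concrete, here is a minimal, hypothetical sketch of the kind of concatenated master key Jon describes. The field names and normalization rules are illustrative assumptions, not an actual Hitachi or Scribe schema.

```python
import hashlib
import re

def normalize(text):
    """Lower-case and strip punctuation so trivially different spellings
    of the same organization collapse to one value."""
    return re.sub(r"[^a-z0-9 ]", "", text.strip().lower())

def master_key(record):
    """Concatenate normalized identifying fields into one master global
    unique identifier, then hash it to a fixed-length key. The field
    choices here are illustrative assumptions, not a real schema."""
    raw = "|".join([
        normalize(record.get("name", "")),
        normalize(record.get("postal_code", "")),
        normalize(record.get("country", "")),
    ])
    return hashlib.sha1(raw.encode("utf-8")).hexdigest()

crm_contact = {"name": "Acme Corp.", "postal_code": "78701", "country": "US"}
billing_row = {"name": "acme corp",  "postal_code": "78701", "country": "us"}

# Both source records resolve to the same key, which other tables can
# then reference as the foreign key that relates all objects to each other.
assert master_key(crm_contact) == master_key(billing_row)
print(master_key(crm_contact))
```

Real master-data schemas weigh many more attributes and fuzzy-matching rules; the point of the sketch is only the concatenate-and-key idea.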
Gardner: I think that helps illustrate how far you can go with this. It seems, though, as if you have to get your own house in order -- your own legacy applications, your own capabilities -- before you can start to expand and gain some of these competitive advantages. It seems that the more data you can bring to bear on your analytics, the more predictive, the more precise, and the more advantageous your business decisions will be.
I think we understand the complexity, but let's take it back inside the organization. Rick, tell us first about what Trillium Software does and how you're seeing organizations take the steps to begin to get the skills, expertise, and culture to make data integration and data lifecycle management happen better.
Percuoco: Trillium Software has always been a data-quality company. We have a fairly mature and diverse platform that you push data through. For analytics, for risk and compliance, or for anything where you need to use your data to calculate risk quotients, ratios, or the models by which you run your business, the quality of your data is very, very important.
If you’re using that data that comes in from multiple channels to make decisions in your business, then obviously data quality, and making that data as accurate as it can be by matching it against structured sources, makes a huge difference in whether you'll be making the right decisions or not.
With the advent of big data and the volume of more and varied unstructured data, the problem of data quality is on steroids now. You have a quality issue with your data. If anybody who works in any company is really honest with themselves and with the company, they see that the integrity of the data is a huge issue.
As the sources of data become more varied and they come from unstructured data sources like social media, the quality of the data is even more at risk and in question. There needs to be some kind of platform that can filter out the chatter in social media and the things that aren't important from a business aspect.
Gardner: Betsy Bilhorn, tell us about Scribe Software and how what Trillium and Hitachi Solutions are doing helps data management.
Bilhorn: We look at ourselves as the proverbial PVC pipe, so to speak, to bring data around to various applications and the business processes and analytics. Where folks like Hitachi leverage our platform is in being able to make that process as easy and as painless as possible.
We want people to get value out of their data, increase the pace of their business, and increase the value that they’re getting out of their business. That shouldn’t be a multi-year project. It shouldn’t be something that you’re tearing your hair out over and running screaming off a bridge.
As easy as possible
Our goal here at Scribe is to make that data integration, getting the data where it needs to go, to the right person, at the right time, as easy and simple as possible for companies like Hitachi and their clients.
Working with Trillium, one of the great things with that partnership is obviously that there is the problem of garbage in/garbage out. Trillium provides that platform by which not only can you get your data where you need it to go, but you can also have it clean and you can have it deduped. You can have a better quality of data as it's moving around in your business. When you look at those three aspects together, that’s where Scribe sits in the middle.
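As a rough illustration of the garbage-in/garbage-out and dedupe point, here is a hypothetical sketch of merging duplicate records on a normalized key before they move between systems. Matching on a lower-cased email address is an assumption for illustration; platforms like Trillium use far richer matching rules.

```python
def dedupe(records, key_fields=("email",)):
    """Keep the first record seen for each normalized key and fill its
    blank fields from later duplicates. The email-based key is an
    illustrative assumption, not how a production data-quality tool matches."""
    seen = {}
    for rec in records:
        key = tuple(str(rec.get(f, "")).strip().lower() for f in key_fields)
        if key in seen:
            # Merge: copy over any field the surviving record is missing or has blank.
            for field, value in rec.items():
                if not seen[key].get(field):
                    seen[key][field] = value
        else:
            seen[key] = dict(rec)
    return list(seen.values())

raw = [
    {"email": "Pat@Example.com", "phone": "", "city": "Bedford"},
    {"email": "pat@example.com", "phone": "555-0100"},
]
print(dedupe(raw))
# -> one merged record carrying both the phone number and the city
```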
Petrucelli: We used to do custom software integration. With a lot of our customers we see a lot of custom .NET code or other codesets, Java for example, that do the integration. They used to do that, and we still see some bigger organizations that are stuck on that stuff. That’s a way to paint yourself into a corner and make yourself captive to some developer.
We highly recommend that people move away from that and go to a platform-based middleware application like Scribe. Scribe is our preferred platform middleware, because that makes it much more sustainable and changeable as you move forward. Inevitably, in integration, someone is going to want to change something later on.
When you have a custom-code integration, someone has to actually crack open that code, take it offline to make a change, and then redeploy the updated code -- and it's all just pure spaghetti code.
With a platform like Scribe, it's very easy to pick up the industry-standard training available online. You’re not held hostage anymore. It’s a graphical user interface (GUI); it's literally drag-and-drop mappings and interlock points. That’s a really nice capability in their Scribe Online service. Even children can do an integration. It’s like the teaching technique developed at Harvard or MIT of putting puzzle pieces together: if it doesn’t work, the puzzle pieces don’t fit.
They’ve done a really amazing job of making integration for the rest of us, not just for developers. We highly recommend that people take a look at that, because it brings the power back to the business and takes it away from just one developer, a small development shop, or an outsourced developer.
That’s one thing. The other thing I want to add is that we see integration as critical to the success of projects, to high levels of adoption and return on investment (ROI). Adoption by the users, and then ultimately ROI for the business, is important, because integration is like gas in the sports car. Without the gas, it's not going to go.
We want to give them one user experience, one user interface, to keep users productive -- especially sales reps and customer service reps in the CRM world. You don’t want them all tabbing between a bunch of different systems. So we bring them into one interface, and with a platform like Microsoft CRM, they can use their interface of choice.
They can move from a desktop, to a laptop, to a tablet, to a mobile device, and they’re seeing one version of the truth, because they’re all windows looking into the same realm. And what is tunneled into that realm comes through pipes that are Scribe.
Built-in integration
What we do for a lot of customers is intentionally build integration in using Scribe, because we know that if we can take them down from five different interfaces to one, they're getting a 360-degree view of the customer who is calling them or whom they’re about to call on.
They’re really going to like that. Their adoption is going to be higher and their productivity is going to be higher. If you can raise the productivity of the users, you can raise the top line of the company when you’re talking about a sales organization. So, integration is the key to driving high levels of adoption, ROI, and productivity.
Gardner: Let's talk about some examples of how organizations are using these approaches, tools, methods, and technologies to improve their business and their data value. I know that you can’t always name these organizations, but let's hear a few examples of either named or non-named organizations that are doing this well, doing this correctly, and what it gets for them.
Petrucelli: One that pops to mind, because I just was recently dealing with them, is the Oklahoma City Thunder NBA basketball team. I know that they’re not a humongous enterprise account, but sometimes it's hard for people to understand what's going on inside an enterprise account.
Most people follow and are aware of sports. They have an understanding of buying a ticket, being a season ticket holder, and what those concepts are. So it's a very universal language.
The Thunder had a problem where they were using a ticketing system that would sell the tickets, but they had very little CRM capabilities. All this ticketing was done at the industry standard for ticketing and that was great, but there was no way to track, for example, somebody's preferences. You’d have this record of Jon Petrucelli who buys season tickets and comes to certain games. But that’s it; that’s all you’d have.
They couldn’t track who my favorite player was, how many kids I have, if I was married, where I live, what my blog is, what my Facebook profile is. People are very passionate about their sports team. They want to really be associated with them, and they want to be connected with those people. And the sports teams really want to do that, too.
So we had a great project, an award-winning project; it won a Gartner award and Microsoft awards. We helped the Oklahoma City Thunder leverage this great amount of rich interaction data, this transactional data, the ticketing data about every seat fans sat in and every ticket they bought.
Rich information
That’s a cool record and that might be one line in the database. Around that record, we’re now able to wrap all the rich information from the internet. And that customer, that season ticket holder, wants to share information, so they can have a much more personalized experience.
Without Scribe and without integration, we couldn’t have done that. With it, we could easily deploy Microsoft CRM and integrate it with the ticketing system, so all this data was in one spot for the users. It was a real, true win-win-win, because not only did the Oklahoma City Thunder have a much more productive experience, but their season-ticket account managers could now call on someone and see their preferences. They could see everything they needed to track about them and all of their ticketing history in one place.
And they could see if they’re attending, if they are not attending, everything about what's going on with that very high-value customer. So that’s a win for them. They can deliver personalized service. On the other end of it, you have the customer, the season ticket holder and they’re paying a lot of money. For some of them, it’s a lifelong dream to have these tickets or their family has passed them down. So this is a strong relationship.
Especially in this day and age, people expect a personalized touch and a personalized experience, and with integration, we were able to deliver that. With Scribe and the integration with the ticketing system, we put it all in Microsoft CRM, where it's real time, accessible, and insightful.
It’s not just data anymore. It's real-time insights coming out of the system. They could deliver a much better user experience and customer experience, and they have been benchmarked against the best customer organizations in the world. The Oklahoma City Thunder are now rated as having the top professional sports fan experience. Of all professional sports, they have the top fan experience -- and it's directly attributable to the CRM platform and the data being driven into it through integration.
Gardner: Great. You can actually see where there is transformational benefit. They’re not just iterative or nice to have. It really changes their business in a major way. Rick Percuoco, any thoughts there at Trillium Software of some examples that exemplify why these approaches are so powerful?
Percuoco: I’ve seen a couple of pretty interesting use cases. One of them is with one of our technical partnerships. They also have a data platform, where they use a behavioral account-churn model. It's very interesting in that they take multiple feeds of different data, like social media data, call-center data, and data that was entered into a blog from a website. As Jon said, they create a one-customer view of all of those disparate sources of data, including social media, and then they map behavioral churn models for different vertical industries.
In other words, before someone churns their account or gets rid of their account within a particular industry -- like insurance, for example -- what steps do they go through before they churn their account? Do they send an e-mail to someone? Do they call the call center? Do they send social media messages? Then, through statistical analysis, they build these behavioral churn models.
They put transactional data through these models, and when certain accounts or transactions fall out at certain points, they match them against the strategic client list and then decide what to do at the different phases of the account-churn model.
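A heavily simplified sketch of that kind of behavioral churn scoring might look like the following. The features, weights, training data, and threshold are all hypothetical stand-ins for the statistical models Rick describes.

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical per-account behavioral features observed before churn:
# [support calls in last 30 days, complaint emails, negative social posts]
X_train = [
    [0, 0, 0], [1, 0, 0], [0, 1, 0],   # accounts that stayed
    [4, 2, 3], [3, 3, 1], [5, 1, 4],   # accounts that churned
]
y_train = [0, 0, 0, 1, 1, 1]

# Fit a toy behavioral churn model (a stand-in for the far richer
# statistical models discussed here).
model = LogisticRegression().fit(X_train, y_train)

# Score current transactional data; accounts whose probability crosses a
# threshold would be matched against the strategic client list for follow-up.
current_accounts = {"ACME-001": [3, 2, 2], "ACME-002": [0, 1, 0]}
for account, features in current_accounts.items():
    risk = model.predict_proba([features])[0][1]
    if risk > 0.5:
        print(f"{account}: churn risk {risk:.2f} -> route to account team")
```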
I've heard of companies, large companies, saving as much as $100 million in account churn by basically understanding what the clients are doing through these behavioral churn models.
Sentiment analysis
Probably the other most prevalent use case that I've seen with our clients is sentiment analysis. Most people are looking at social media data, seeing what people are saying about them on social media channels, and then using all kinds of creative techniques to try to match those social media personas to client lists within the company, to see who is saying what about them.
Sentiment analysis is probably the biggest use case that I've seen, but the account churn with the behavioral models was very, very interesting, and the platform was very complex. On top, it had a predictive analytics engine with about 80 different modeling graphs, and it also had some data-visualization tools. So it was very, very easy to create charts and graphs, and it was actually pretty impressive.
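Here is a minimal, hypothetical sketch of the two steps Rick mentions: scoring sentiment in a social post and fuzzily matching the persona back to a client list. The keyword lists, names, and matching threshold are made up; real implementations rely on far more capable NLP and identity-resolution tools.

```python
import re
from difflib import SequenceMatcher

POSITIVE = {"love", "great", "fast", "recommend"}
NEGATIVE = {"hate", "slow", "broken", "refund"}

def sentiment(post):
    """Crude keyword sentiment score: +1 per positive word, -1 per negative."""
    words = re.findall(r"[a-z']+", post.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def match_client(persona, clients, threshold=0.8):
    """Fuzzily match a social display name against the company client list."""
    best = max(clients, key=lambda c: SequenceMatcher(None, persona.lower(), c.lower()).ratio())
    score = SequenceMatcher(None, persona.lower(), best.lower()).ratio()
    return best if score >= threshold else None

clients = ["Jonathan Petrucelli", "Rick Percuoco", "Betsy Bilhorn"]
persona, post = "Jon Petrucelli", "The new portal is great, love how fast it is"

print(match_client(persona, clients), sentiment(post))
# -> a probable client match and a positive sentiment score
```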
Gardner: Betsy, do you have any examples that also illustrate what we're talking about when it comes to innovation and value around data gathering, analytics, and business innovation?
Bilhorn: I’m going to put a little bit of a twist on that. We had a recent customer, one of the top LED lighting franchisors in the United States, and they had a bit of a different problem. They have about 150 franchises out there, and they were all disconnected.
So, in the central office, I can't see what my individual franchises are doing and I can't do any kind of forecasting or business reporting to be able to look at the health of all my franchises all over the country. That was the problem.
The second problem was that they had decided on a standardized NetSuite platform and wanted all of their franchises to use it. For the individual franchise owner, though, NetSuite was a little too heavy, and they said, overwhelmingly, that they wanted QuickBooks.
This customer came to us and said, “We have a problem here. We can't find anybody to integrate QuickBooks to our central CRM system and we can't report. We’re just completely flying blind here. What can you do for us?”
Via integration, we were able to satisfy that customer requirement. Their franchises can use QuickBooks, which was easy for them, and then, by synchronizing all of that information back from the franchises into the central CRM, they were able to do all kinds of analytics, reporting, and dashboarding on the health of the whole business.
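A toy sketch of the kind of field mapping behind that synchronization might look like this. The QuickBooks and CRM field names, and the franchise identifier, are assumptions for illustration, not Scribe's actual connector metadata.

```python
# Hypothetical mapping from a franchise's QuickBooks-style invoice export
# to the fields the central CRM expects. Field names are illustrative.
FIELD_MAP = {
    "CustomerRef": "account_name",
    "TotalAmt":    "invoice_total",
    "TxnDate":     "invoice_date",
    "DocNumber":   "invoice_number",
}

def to_crm(qb_invoice, franchise_id):
    """Translate one QuickBooks-style invoice dict into a CRM record,
    tagging it with the franchise it came from so the central office
    can report across all locations."""
    record = {crm_field: qb_invoice.get(qb_field)
              for qb_field, crm_field in FIELD_MAP.items()}
    record["franchise_id"] = franchise_id
    return record

qb_invoice = {"CustomerRef": "Main St. Remodel", "TotalAmt": 1250.00,
              "TxnDate": "2013-08-01", "DocNumber": "INV-1042"}
print(to_crm(qb_invoice, franchise_id="TX-017"))
```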
The other side benefit, which also makes them very competitive, is that they’re able to add franchises very, very quickly. They can have their entire IT systems up and running in 30 minutes, and it's all integrated. So the franchisee is ready to go. They have everything there. They can use a system that’s easy for them to use, and this company has them up and is getting their data right away.
Consistency and quality
So that’s a little bit different. It's not big data or social data, but it’s a problem that a lot of businesses face: how do I even get these systems connected so I can run my business? This rapid, repeatable model for this particular business is pretty new. In the past, we’ve seen a lot of people try to wire things up with custom code, or everything is ad hoc. They’re able to stand up full IT systems in 30 minutes, every single time, over and over again, with a high level of consistency and quality.
Gardner: Well, we have to begin to wrap up, but I wanted to take a gauge of where we are on this. It seems to me that we’re just scratching the surface. It’s the opening innings, if you will.
Will we start getting these data visualizations down to mobile devices, or have people inputting more information about themselves, their devices, or the internet of things? Let's start with you, Jon. Where are we on the trajectory of where this can go?
Petrucelli: We’re working on some projects right now with geolocation, geofencing, and geosensing, where, when a user on a mobile device comes within range of a certain store, it will serve that user up special offers to try to pull them into the store -- provided they have downloaded the app on their smartphone and opted in. It's the same way in which, if you’re walking by a store, somebody might say, “Hey, Jon.” They know who I am, they know my personalization, and when I come within range, the system knows my location.
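As an illustration of the geofence trigger Jon describes, here is a hypothetical sketch using the haversine distance between a device and a store. The coordinates, radius, user preferences, and offer text are all made-up stand-ins.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

STORE = {"name": "Downtown Store", "lat": 30.2672, "lon": -97.7431, "radius_m": 150}

def maybe_send_offer(user, device_lat, device_lon):
    """If an opted-in user's device enters the store's geofence, serve a
    personalized offer (the offer and preference lookup are stand-ins)."""
    if not user.get("opted_in"):
        return None
    if haversine_m(device_lat, device_lon, STORE["lat"], STORE["lon"]) <= STORE["radius_m"]:
        return f"Hey {user['name']}, 20% off {user['favorite_item']} at {STORE['name']} today!"
    return None

user = {"name": "Jon", "opted_in": True, "favorite_item": "team jerseys"}
print(maybe_send_offer(user, 30.2675, -97.7428))
```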
This is somebody who has an affinity card with a certain retailer, or it could be a sports team where, in the venue, the organization knows the fan's preferences and puts exactly the right offer in front of the right person, at the right time, in the right context, and with the right personalization.
We see some organizations moving to that level of integration. With all of the available technology, with the electronic wallets, now with Google Glass, and with smart watches, there is a lot of space to go. I don’t know if it's really relevant to this, but there is a lot of space now.
We’re more on the business-app side of it, and I don’t see that going away. Integration is really the key to driving high levels of adoption, which drives high levels of productivity, which can drive top-line gains and ultimately a better ROI for the company. That’s how we really look at integration.
Gardner: Where are we on the trajectory here for using these technologies to advance business?
Percuoco: You mentioned specifically location information, and, as Jon mentioned, it is germane to this discussion. There’s the concept of digital marketing, marketing coupons to people in real-time over their smartphones as they’re walking by businesses, and so forth. That’s definitely one of the very prevalent use cases for location objects.
Shopping patterns
There’s also an interesting one that kind of goes on top of that, where you evaluate web traffic shopping patterns of people, using Google location objects. For large ticket items, you can actually email them, in real time, competitor coupons. For example, a mile down the street, this one company has something for $100 or $200 less.
It's another interesting use case, a kind of intelligent marketing through digital media in the mobile market. I also see the mobile delivery of information being critical as we move forward.
Pretty much all data integration or BI professionals are basically working parents. It’s very, very important to be able to deliver that information, at least in a dashboard format or a summary format on all the mobile devices. You could be at your kid’s Little League game or you could be out to dinner with your wife, but you may have to check things.
The delivery of information through the mobile market is critical, although the user experience has to be different. There needs to be a bunch of work in terms of data visualization, the user experience, and what to deliver. But the modern family aspects of life and people working are forcing the mobile market to come up to speed.
The other thing that I would say is in terms of integration methods and what Jon was talking about. You do have to watch out for custom APIs. Trillium has a connectivity business as does Scribe.
As long as you stick with industry-standard handshaking methods, like XML or JSON or web services and RESTful APIs, then usually you can integrate packages fairly smoothly. You really need to make sure that you're using industry-standard hand-offs for a lot of the integration methods. You have four or five different ways to do that, but it’s pretty much the same four or five.
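To make the industry-standard hand-off concrete, here is a hedged sketch of pulling JSON from one RESTful endpoint and pushing it to another. The URLs, authentication scheme, and payload shape are hypothetical, not any particular vendor's API.

```python
import requests

# Hypothetical endpoints -- stand-ins for any two systems that expose
# industry-standard RESTful, JSON-speaking APIs.
SOURCE_URL = "https://source.example.com/api/v1/contacts"
TARGET_URL = "https://target.example.com/api/v1/accounts"

def sync_contacts(api_token):
    """Read contacts from the source system and post them to the target,
    using plain HTTP plus JSON rather than a custom, one-off protocol."""
    headers = {"Authorization": f"Bearer {api_token}"}
    resp = requests.get(SOURCE_URL, headers=headers, timeout=30)
    resp.raise_for_status()

    for contact in resp.json():
        payload = {"name": contact.get("full_name"), "email": contact.get("email")}
        out = requests.post(TARGET_URL, json=payload, headers=headers, timeout=30)
        out.raise_for_status()

# sync_contacts("example-token")  # not run here: the endpoints are fictional
```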
Those would be my thoughts on the future. I also see cloud computing, platform as a service (PaaS), and software as a service (SaaS) really taking hold of the market. Even Microsoft's platform tools, like Office 365 and the email systems in CRM, are all cloud-based applications now, and, to be honest, they’re better. The service is better, and there’s no on-premises footprint. I really see the market moving toward PaaS, SaaS, and cloud computing.
Gardner: What is Scribe Software's vision, and what are the next big challenges that you will be taking your technology to?
Bilhorn: Ideally, what I would like to see, and what I’m hoping for, is that with mobile and the consumerization of IT, you’re beginning to see business apps act more like consumer apps, having more standard APIs and forcing better plug and play. This would be great for business. What we’re trying to do, in the absence of that, is create that plug-and-play environment to, as Jon said, make it so easy a child can do it.
Seamless integration
Our vision for the future is really flattening that out, but also being able to provide a seamless integration experience between these disparate systems, where at some point you wouldn’t even have to buy middleware as an individual business or consumer.
The cloud vendors and legacy vendors could embed integration and then have true plug and play, so that the individual user could do integration on their own. That’s where we would really like to get to. That’s the vision and where the platform is going for Scribe.
Gardner: Well, great. I’m afraid we’ll have to leave it there. We've been listening to a sponsored BriefingsDirect podcast discussion on how business intelligence and big-data trends are requiring improved access and automation to data flows from a variety of sources.
We've learned of ways that enterprises are effectively harvesting data in all its forms and creating integrations that foster better use of data throughout the entire lifecycle. The result has been the ability to exploit data strategically among more aspects of enterprise businesses and across more types of applications and processes.
So a huge thanks to our guest Jon Petrucelli, Senior Director of the Hitachi Solutions Dynamics CRM and Marketing Practice. Thanks so much, Jon.
Petrucelli: Thank you, glad to be here.
Gardner: Also Rick Percuoco, Senior Vice President of Research and Development at Trillium Software. Thank you so much, Rick.
Percuoco: You’re welcome, Dana.
Gardner: And Betsy Bilhorn, Vice President of Product Management at Scribe Software. Thank you, Betsy.
Bilhorn: Thank you again, Dana.
Gardner: And also a huge thank you to our audience for joining this insightful discussion. This is Dana Gardner, Principal Analyst at Interarbor Solutions. Don’t forget to come back and listen next time.
Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Scribe Software.
Transcript of a BriefingsDirect podcast on how creating big-data capabilities has become a top business imperative in dealing with a flood of data from disparate sources. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.