Category: Big Data

Machine Learning is State-of-the-Art AI, and It Can Help Enhance the Customer Experience

Posted on October 05, 2017


Is artificial intelligence the same as machine learning? Machine learning is really a subset of artificial intelligence, and a more precise way to view it is as state-of-the-art AI. Machine learning is a “current application of AI” and is centered around the notion that “we should…give machines access to data and let them learn for themselves.” There is no practical limit to that data (hence Big Data); the challenge is harnessing it for useful purposes.

In his Forbes Magazine piece, contributor Bernard Marr describes AI as the “broader concept of machines being able to carry out tasks in a way we would consider ‘smart.’” So, AI is any technique that allows computers to imitate human intelligence through logic, “if-then” rules, decision trees and, crucially, machine learning. Machine learning, as an application of AI, employs abstruse (i.e., difficult to understand) statistical techniques, which improve machine performance through exposure to Big Data.

AI has broad applications…

Companies around the world use AI in information technology, marketing, finance and accounting, and customer service. According to a Harvard Business Review article, IT garners the lion’s share of AI activity, with applications ranging from detecting and deflecting security intrusions to automating production management work. Beyond security and industry, AI has broad applications in improving customer experiences with automatic ticketing, voice- and face-activated chat bots, and much more.

Machine learning is data analysis on steroids…

AI’s subset, machine learning, automates its own model building. Programmers design and use algorithms that are iterative, in that the models learn by repeated exposure to data. As the models encounter new data, they adapt and learn from previous computations. Decisions and results become repeatable because they are grounded in experience, and performance keeps improving as more data is processed.
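As a toy illustration of that iterative loop, the sketch below uses Python and scikit-learn (with synthetic data and made-up parameters, not any product mentioned here) to train a linear classifier batch by batch and print how its accuracy on a held-out set improves with each pass:

```python
# A toy sketch of iterative learning: a linear classifier improves
# as it is exposed to successive batches of (synthetic) data.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(42)
clf = SGDClassifier(loss="log_loss", random_state=42)

# A fixed held-out test set to measure improvement over time.
X_test = rng.randn(500, 10)
y_test = (X_test.sum(axis=1) > 0).astype(int)

for batch in range(5):
    X = rng.randn(200, 10)                # a new batch of data arrives
    y = (X.sum(axis=1) > 0).astype(int)   # simple hidden ground-truth rule
    clf.partial_fit(X, y, classes=np.array([0, 1]))
    print(f"batch {batch + 1}: accuracy = {clf.score(X_test, y_test):.2f}")
```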

The return of machine learning

Having experienced something of a slump in popularity, AI and machine learning have, according to one software industry commentator, Jnan Dash, seen “a sudden rise” in their deployment. Dash points to an acceleration in AI/machine learning technology and a market value jump “from $8B this year to $47B by 2020.”

Machine learning, according to one Baidu scientist, will be the “new electricity” that transforms technology. In other words, AI and machine learning will be to our future economy what electricity was to 20th century industry.

The big players are pushing AI and machine learning. Apple, Google, IBM, Microsoft and social media giants Facebook and Twitter are accelerating their promotion of machine learning. One Google spokesman, for example, recognizes machine learning as “a core transformative way in which we are rethinking everything we are doing.”

How machine learning has transformed General Electric…

A striking example of how AI and machine learning are transforming one of the oldest American industrial companies, General Electric, is highlighted in this Forbes piece. Fueled by the power of Big Data, GE has leveraged AI and machine learning in a remarkable—and ongoing—migration from an industrial, consumer products, and financial services firm “to a ‘digital industrial’ company” focusing on the “Industrial Internet.” As a result, GE realized $7 billion in software sales in 2016.

GE cashed in on data analytics and AI “to make sense of the massive volumes of Big Data” captured by its own industrial devices. Its insight that the “Internet of Things” and machine connectivity were only the first steps in digital transformation led to the realization that “making machines smarter required embedding AI into the machines and devices.”

After acquiring the necessary start-up expertise, GE figured out the best ways to collect all that data, analyze it, and generate the insights to make equipment run more efficiently. That, in turn, optimized every operation from the supply chain to the consumer.

5 ways machine learning can also enhance the customer experience…

Machine learning can integrate business data to achieve big savings and efficiencies that enhance customer experiences, by:

  1. Reading a customer’s note and figuring out whether the note is a complaint or a compliment (see the sketch after this list)
  2. Aggregating the customer’s personal and census information to predict buying preferences
  3. Evaluating a customer’s online shopping history or social media participation and placing a new product offering in an email, webpage visit, or social media activity
  4. Intuitively segmenting customers through mass customer data gathering, grouping, and targeting ads for each customer segment
  5. Automating customer contact points with voice- or face-activated “chat bots”
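As a minimal sketch of the first item, the snippet below trains a tiny bag-of-words Naive Bayes classifier to label a customer note as a complaint or a compliment. The training notes and labels are invented for illustration; a production system would need far more data:

```python
# A minimal sketch: classifying a customer note as a complaint or a
# compliment with a bag-of-words Naive Bayes model (toy training data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

notes = [
    "The delivery was late and the box was damaged",
    "Support never answered my emails",
    "Great service, my issue was fixed in minutes",
    "Love the new product, it works perfectly",
]
labels = ["complaint", "complaint", "compliment", "compliment"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(notes, labels)

print(model.predict(["My order arrived broken again"]))  # -> ['complaint']
```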

How Rivet Logic can make you future-ready and customer friendly

Your business may be nowhere near the size of General Electric. You do, however, have a level playing field when it comes to turning Big Data and machine learning products into a winning strategy. What we do is help you plan that strategy by:

  • Aligning your business goals with technology—What are the sources of your own data, and how can they be harnessed with the power of NoSQL databases, for example?
  • Designing your user experience—What do you need? A custom user interface, or a mobile app with intuitively simple user interfaces?

We can do that and much more. Contact us and we’ll help make your business future-ready to collect, harvest, and leverage the data behind all the great things you are doing now.

Find Meaning in Your Data With Elasticsearch

Posted on March 28, 2017

We’re surrounded by data everywhere we go, and the amount is growing with each action we take. We rely on it regularly, probably a lot more than we even realize or would like to admit. From searching for nearby restaurants while traveling, to reading online product reviews prior to making a purchase, and finding the best route home based on real-time traffic patterns, data helps us make informed decisions every day.

However, all that data on its own is just data. The real value comes when you can find the right, relevant data when it’s needed. Better yet, take it a step further and find meaning in your data; that’s where the real goldmine is.

Businesses are increasingly turning to search and analytics solutions to derive more value from their data, helping to provide the deep insights necessary to make better business decisions. Some popular use cases include:

  • Intelligent Search Experiences to discover and deliver relevant content
  • Security Analytics to better understand your infrastructure’s security
  • Data Visualization to present your data in a meaningful way
  • Log Analytics to gain deeper operational insight

At Rivet Logic, we realize the importance of data, and see the challenges businesses are facing in trying to make sense of their growing data pools. We’re excited to have partnered with Elastic – the company behind a suite of popular open source projects including Elasticsearch, Kibana, Beats, and Logstash – to deliver intelligent search and analytics solutions that help our customers get the most value out of their data, allowing them to make actionable improvements to websites for enhanced customer experiences!

A Real-world Use Case

How might this apply in a real-world scenario, you ask?

An example is a global hospitality customer of ours, who has partnered with Rivet Logic to implement three internal-facing web properties that enable the company to perform its day-to-day business operations. With a reach spanning 110+ countries, these sites are deployed in the cloud on Amazon AWS across the US, Europe and Asia Pacific, drawing on many data sources and used across multiple devices.

This customer needed a way to gain deeper insight into these systems — how the sites are being used along with the types of issues encountered — to help improve operational efficiencies. Using Elasticsearch and Kibana, the customer is able to gain much better visibility into how each site is used. Through detailed metrics, combined with the ability to perform aggregations and more intelligent queries, they can now gain much deeper insight into their data set through in-depth reports and dashboards. In addition, the Elastic Stack solution aggregates all system logs into one place, making it possible to perform complex analysis that yields insightful data for addressing operational concerns.
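As a hedged sketch of the kind of question this setup can answer, the snippet below uses the official Python Elasticsearch client to count errors per site over the last day. The web-logs index, its fields, and the connection URL are hypothetical, not the customer’s actual schema:

```python
# Hypothetical example: count ERROR-level log entries per site over the
# last day, using one aggregation query instead of grepping scattered logs.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="web-logs",
    size=0,  # we only want the aggregation buckets, not the raw hits
    query={"range": {"@timestamp": {"gte": "now-1d"}}},
    aggs={
        "by_site": {
            "terms": {"field": "site.keyword"},
            "aggs": {"errors": {"filter": {"term": {"level": "ERROR"}}}},
        }
    },
)

for bucket in resp["aggregations"]["by_site"]["buckets"]:
    print(bucket["key"], bucket["errors"]["doc_count"])
```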


In 2016, It’s Time to Drive Deeper Customer Engagement Through Deeper Data Insight

Posted on January 06, 2016


2016 is officially upon us! A new year means a fresh start with new and improved strategies and goals, right? If you haven’t already reflected on the success of your organization’s 2015 customer experience objectives, now’s a good time to do so, to see what worked or didn’t and how to strategize for better results in the coming year.

Some questions to ask yourself are:

  • Did you achieve the level of customer engagement you’d hoped for?
  • What worked and what roadblocks did you encounter?
  • Can your technology stack adequately handle your existing and future needs?
  • What new strategies and projects should be on your roadmap this year?

Lessons From the Holidays

The holiday season might be over, but there’s a lot to be learned. In the midst of the season, people were faced with busy schedules as they tried to squeeze in last minute shopping in between holiday parties and travel plans.

Today’s consumers move at a faster pace than ever before, performing tasks on-the-go on mobile devices, with online shopping rates at an all-time high. The challenge for brands is to keep up and stand out from the digital noise.

More consumers are doing comparison shopping between competitors, and the ability of a brand to deliver the right content at the right time through the right device can be the determining factor between winning and losing a customer.

The holiday season was a prime example of how critical it is for businesses to incorporate a big data strategy into an overall customer experience strategy to optimally capture the attention of today’s consumers. And this isn’t just limited to retailers or the holiday season specifically, but really applies to any business that can benefit from deeper customer engagement. Ultimately, your goal as a brand is to empower your customers to purchase through whichever channels suit their needs by helping to move them along their customer journey.

It’s Time to Leverage Your Data

At the same time, the holiday season itself also presented a data goldmine, with a wealth of valuable behavioral data for businesses to collect, analyze, and leverage to make better business decisions! Imagine if a travel company knew from previous behavioral data that a specific customer traveled somewhere warm for the holidays every year and could target them with relevant travel offers. This would be more likely to result in a purchase than sending generic offers that may or may not align with the customer’s interests.

Rivet Logic’s Data Services Practice is focused on helping our customers make better and smarter decisions through deeper data insight, allowing them to keep up with evolving user demands and increasing data volumes. Our full suite of data services solutions includes:

  • Enterprise Data Hub – Provide a single view of the business
  • Analytics Platform – Gain insight into data to make real-time decisions
  • Customer Identity Platform – Collect, analyze and personalize the customer experience
  • Mobile Infrastructure – Build a mobile app that’s flexible and scalable
  • Product Catalog – Flexible and responsive to cater to evolving business demands
  • Content Management and Discovery – Manage, discover and surface content for the user

In 2016, it’s time to rethink how to drive deeper engagement with your audience – through better tracking of the customer experience, delivery of personalized contextual content, maximizing the effectiveness of your campaigns, optimizing your business operations, and ultimately increasing your sales and revenue.

Learn more about Rivet Logic’s Data Services solutions and how they can help your business in our datasheet.

NoSQL Design Considerations and Lessons Learned

Posted on July 29, 2015

At Rivet Logic, we’ve always been big believers and adopters of NoSQL database technologies such as MongoDB. Now, leading organizations worldwide are using these technologies to create data-driven solutions that help them gain valuable insight into their business and customers. However, selecting a new technology can turn into an over-engineered process of check boxes and tradeoffs. In a recent webinar, we shared our experiences, thought processes and lessons learned building apps on NoSQL databases.

The Database Debate

The database debate is never ending, with each type of database having its own pros and cons. Amongst the multitude of databases, some of the top technologies we’ve seen out in the market include:

  1. MongoDB – Document database
  2. Neo4j – Graph database
  3. Riak – Key value data store
  4. Cassandra – Wide column database

Thinking Non-Relational

When it comes to NoSQL databases, it’s important to think non-relational. With NoSQL databases, there’s no SQL query language or joins. NoSQL also isn’t a drop-in replacement for relational databases; the two are completely different approaches to storing and accessing data.

Another key consideration is normalized vs. denormalized data. Whereas data is normalized in relational databases, normalization isn’t a necessity or an important design consideration for NoSQL databases. In addition, you can’t use the same tools, although that’s improving as technology companies invest heavily in making their tools integrate with various database technologies. Lastly, you need to understand your data access patterns, and what access looks like from the application level down to the DB.
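To make the normalized-vs-denormalized contrast concrete, here is a sketch in plain Python: the same order represented first as normalized relational-style rows joined by keys, then as a single denormalized document shaped around how the application reads it. All names are invented for illustration:

```python
# Normalized (relational) shape: three "tables" of rows joined by keys.
customers = [{"id": 1, "name": "Acme Corp"}]
orders = [{"id": 100, "customer_id": 1}]
order_items = [
    {"order_id": 100, "sku": "A-1", "qty": 2},
    {"order_id": 100, "sku": "B-7", "qty": 1},
]

# Denormalized (document) shape: one document matching the read pattern
# "show an order with its customer and items" -- no joins required.
order_doc = {
    "_id": 100,
    "customer": {"id": 1, "name": "Acme Corp"},
    "items": [
        {"sku": "A-1", "qty": 2},
        {"sku": "B-7", "qty": 1},
    ],
}
```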

Expectations

Also keep in mind your expectations and make sure they’re realistic. Whereas the relational model is over 30 years old, the NoSQL model is much younger at approximately 7 years, with enterprise adoption occurring within the last 5 years. Given that age gap, NoSQL tools aren’t going to have the same level of maturity as those of relational DBs.

When evaluating new DB technologies, you need to understand the tradeoffs and what you’re willing to give up – whether it be data consistency, availability, or other features core to the DB – and determine if the benefits outweigh the tradeoffs. And these DBs aren’t created equal – they’re built on different models for data storage and access and use different languages, all of which require a ramp-up.

In addition, keep in mind that scale and speed are relative to your needs. Understanding all of these factors up front will help you make the right decision for the near and long term.

Questions to Ask Yourself

If you’re trying to determine if NoSQL would be a good fit for a new application you’re designing, here are some questions to ask yourself:

  1. Will the requirements evolve? Most likely they will; rarely are all requirements provided upfront.
  2. Do I understand the tradeoffs? Understand your must-haves vs. like-to-haves.
  3. What are the expectations of the data and access patterns? Consider read vs. write loads and how you’ll handle analytics (understand operational vs. analytics DBs and where they overlap).
  4. Build vs. buy? Understand what you’re working with internally; changing internal culture is a process.
  5. Is the ops team on board? When introducing new DB technologies, it’s much easier when the ops team is on board to make sure the tools are properly optimized.

Schema Design Tidbits

Schema is one of the most critical things to understand when designing applications for these new databases. Ultimately, the data access patterns should drive your design. We’ll use MongoDB and Cassandra as examples, as they’re leading NoSQL databases with different models.

When designing your schema for MongoDB, it’s important to balance your app needs, performance and data retrieval. Your schema doesn’t have to be defined on day one, which is a benefit of MongoDB’s flexible schema. MongoDB also contains collections, which are similar to tables in relational DBs, where documents are stored. However, collections don’t enforce structure. In addition, you have the option of embedding data within a document, which, depending on your use case, can be highly effective.
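Carrying the order example forward, the pymongo sketch below shows both points: the collection enforces no structure, and related items are embedded directly in the document. The connection string, database, and fields are hypothetical:

```python
# A minimal pymongo sketch: collections are schemaless, and related
# data can be embedded directly in a document.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client.shop.orders  # collection ~ table, but with no enforced schema

# Embed the items inside the order -- one read returns everything.
orders.insert_one({
    "order_no": 100,
    "customer": "Acme Corp",
    "items": [{"sku": "A-1", "qty": 2}, {"sku": "B-7", "qty": 1}],
})

# A later document can carry new fields without any migration.
orders.insert_one({
    "order_no": 101,
    "customer": "Globex",
    "items": [],
    "gift_wrap": True,  # a field the first document never had
})
```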

Another technology to think about is Cassandra, a wide column database where you model around your queries. By understanding the access patterns and the types of questions your users ask of the DB, you can design a more accurate schema. You also want to distribute data evenly across nodes. Lastly, you want to minimize the number of partitions (groups of rows that share the same key) read per query.
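A hedged sketch of that query-first modeling, using the DataStax Python driver (the keyspace, table, and fields are hypothetical): the table is shaped around one known query, with a partition key to distribute data evenly across nodes and a clustering column to order rows within each partition:

```python
# Hypothetical example of Cassandra's query-first schema design.
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("shop")  # assumes the keyspace exists

# Query to support: "all page views for a given user, newest first".
# user_id is the partition key (even distribution across nodes);
# viewed_at is a clustering column (orders rows within a partition).
session.execute("""
    CREATE TABLE IF NOT EXISTS page_views_by_user (
        user_id   text,
        viewed_at timestamp,
        page      text,
        PRIMARY KEY ((user_id), viewed_at)
    ) WITH CLUSTERING ORDER BY (viewed_at DESC)
""")

# The query reads from exactly one partition -- the cheap path.
rows = session.execute(
    "SELECT page, viewed_at FROM page_views_by_user WHERE user_id = %s",
    ("jdoe",),
)
for row in rows:
    print(row.page, row.viewed_at)
```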

Architecture Examples

MongoDB has a primary-secondary architecture, where a secondary is promoted to primary if the primary ever fails, resulting in the notion of never having the DB offline. There are also writes, consistency, and durability to consider, with primaries replicating to the secondaries. So in this model, the database is always available, with data consistent and replicated across nodes, all handled in the backend by MongoDB. In terms of scalability, you’re scaling horizontally, with nodes being added as you go, which introduces a new concept, sharding, involving how data dynamically scales as the app grows.
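As a small, hedged sketch of what that buys an application, the pymongo snippet below connects to a hypothetical three-node replica set and asks for majority write acknowledgement, so an acknowledged write survives a primary failover. The hosts and replica-set name are made up:

```python
# Hypothetical replica-set connection: the driver discovers the primary,
# and w="majority" means a write is acknowledged only after a majority
# of nodes have it -- durable across a primary failover.
from pymongo import MongoClient, WriteConcern

client = MongoClient("mongodb://db1:27017,db2:27017,db3:27017/?replicaSet=rs0")

orders = client.shop.get_collection(
    "orders", write_concern=WriteConcern(w="majority")
)
orders.insert_one({"order_no": 102, "customer": "Initech"})
```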

On the other hand, Cassandra has a ring-based architecture, where data is distributed across nodes, similar to MongoDB’s sharding. The patterns are similar but implemented differently in each technology. Either database can be distributed globally with dynamic scalability, the benefit being that you can add nodes effortlessly as you grow.

NoSQL Data Solution Examples

Some of the NoSQL solutions we’ve recently built include:

Data Hub (aka 360 view, omni-channel) – A collection of various data sources pooled into a central location (in this case we used MongoDB), where use cases are built around the data. This enables new business units to access data they might not previously have had access to, empowering them to build new products and understand how other teams operate, ultimately leading to new revenue-generating opportunities and improved processes across the organization.

User Generated Content (UGC) & Analytics – Capturing UGC (e.g. blog comments and shares) that needs to be stored and analyzed in the backend. A lot of times the Document model makes sense for this type of solution. However, as technologists continue to grow their NoSQL skill sets, there’s going to be an increasing overlap of similar use cases being built across the various NoSQL DB types.

User Data Management – Also known as Profile Management: storing information about the user, such as what they recently viewed, products they bought, etc. With a Document model, the flexibility becomes really powerful for evolving the application, as you can add attributes as you go without having all requirements defined out of the gate.
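A minimal sketch of that flexibility with pymongo (all names hypothetical): new attributes arrive via a simple update, with no schema migration required:

```python
# Hypothetical profile management: attributes can be added to a user's
# document at any time, without altering a schema.
from pymongo import MongoClient

profiles = MongoClient("mongodb://localhost:27017").app.profiles

profiles.insert_one({"user": "jdoe", "recently_viewed": ["A-1"]})

# Months later the app starts tracking purchases -- just set the field,
# and append to the existing view history in the same update.
profiles.update_one(
    {"user": "jdoe"},
    {"$set": {"purchases": ["B-7"]}, "$push": {"recently_viewed": "C-3"}},
)
```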

Lessons Learned

When talking about successful deployments, some of the lessons we’ve learned include:

  1. Schema design is an ongoing process – From a Data Hub perspective, defining that “golden record” is not always necessary, as long as you define consistent fields that can be applied everywhere.
  2. Optimization is a team effort – It’s not just the developer’s job to optimize the schema, just like it’s not just the Ops team’s job to make sure the DB is always on. NoSQL gives you tunability across both, and the best performance and results come from the teams working together.
  3. Test your shard keys (MongoDB) – If sharding is a new concept for you, do your homework, and understand and validate your choices with someone who knows the DB very well.
  4. Don’t skimp on testing and use production data – Don’t always assume that the outcome is going to be the same in production.
  5. Shared resources will impact performance – Keep in mind if you’re deploying in the cloud that shared resources will impact distributed systems. This is where working with your Ops team will really help and eliminate frustrations.
  6. Understand what tools are available and where they are in maturity – Don’t assume existing tools (reporting, security, monitoring, etc.) will work in the same capacity as with Relational DB’s, and understand the maturity of the integration.
  7. Don’t get lost in the hype – Do your homework.
  8. Enable the “data consumer” – Make the person who’s going to interact with the DB (e.g. a data analyst) comfortable working with the data.
  9. JSON is beautiful

To summarize: education will eliminate hesitation, and don’t get lost in the marketing fluff. Get Ops involved; the earlier and more often you work with your Ops team, the easier and more successful your application and your experience with these technologies will be. Lastly, keep in mind that these are just DB tools, so you’ll still need to build a front end.

Click here to see a recording of the webinar.

Click here for the webinar slides.