Archive for the ‘Interactive marketing’ Category

June 12, 2012

Back in 2005, when we founded Aster Data, our vision was to take some of the latest technology innovations, including MPP shared-nothing architectures, Linux-based commodity hardware, and novel analytical interfaces like Google's MapReduce, and bring them to mainstream enterprises. This vision translated into a strategy focused not only on big data innovations, but also on delivering technologies that make big data viable for enterprise environments. SQL-MapReduce®, our industry-leading patented technology that combines standard SQL processing with a native MapReduce execution environment, is one example of how we make big data enterprise-ready.

Today we reached another major milestone in delivering value to our customers by announcing Aster SQL-H™, a seamless way to execute SQL and SQL-MapReduce on Apache™ Hadoop™ data.

This is a significant step forward from what was state-of-the-art until yesterday: a DBMS-Hadoop connector operating at the physical layer only. Because such a connector understands nothing about the data it moves, getting data from Hadoop into a database required a Hadoop expert in the middle to do the data cleansing and data type translation. If the data was not 100% clean, which is the case in most circumstances, a developer was needed to get it into a consistent, proper form. Besides wasting that expert's valuable time, this process meant that business analysts couldn't directly access and analyze data in Hadoop clusters. Other database connectors require duplicating the data into HDFS in proprietary formats, a cumbersome and expensive approach by any measure.

SQL-H, an industry first, solves all of those problems.

First, we have integrated Aster's metadata engine with Hadoop's emerging metadata standard, HCatalog. Data stored in Hadoop using Pig, Hive, and HBase can be "seen" in an Aster system as if it were just another Aster view. The business implication is that an analyst using standard SQL or a BI tool gets full, seamless access to Hadoop data through Aster's standard ODBC/JDBC connectors and Aster's SQL engine. There is no need for a human in the middle to translate the data or ensure its consistency, and no need to file tickets or call up experts to get the data the business needs; everything happens transparently, seamlessly, and instantly. This is an industry first: the Hadoop tools available today either lack well-optimized standard SQL interfaces, lack native BI compatibility, or require manual data translation and movement from Hadoop to a third-party system. None of these is a viable option for SQL and BI execution on Hadoop data, which makes it hard for enterprises to get value from Hadoop.
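
To make this concrete, here is a hypothetical sketch of what that seamless access looks like from the analyst's side. The table name, columns, and dates are invented for illustration; the point, per the description above, is that a Hive table registered in HCatalog simply appears as another Aster view:

    -- Sketch (names assumed): weblogs is a Hive table registered in
    -- HCatalog and exposed through SQL-H. To an analyst, or to a BI
    -- tool connecting over ODBC/JDBC, it queries like any Aster view.
    SELECT customer_id,
           COUNT(*) AS page_views
    FROM   weblogs                 -- data physically lives in HDFS
    WHERE  event_date >= DATE '2012-06-01'
    GROUP  BY customer_id
    ORDER  BY page_views DESC;

No Hadoop expertise, data movement scripts, or type translation is required of the analyst.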

Second, SQL-H provides a high-performance, type-safe data connector that can take a SQL or SQL-MapReduce query involving Hadoop data, automatically select the minimum subset of data in Hadoop required to execute the query, and run the query on the Aster system. Running SQL and SQL-MapReduce analytics in Aster is significantly faster than running them in Hadoop because (a) Aster can optimize data partitioning and distribution, reducing network transfers and overhead; (b) Aster's engine keeps statistics about the data and uses them to optimize execution of both SQL and MapReduce; and (c) Aster's SQL queries are cost-based-optimized, which means it can handle very complex SQL, including SQL produced by BI tools, very efficiently.
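
As an illustration of that second point, consider a hedged sketch (table and column names are assumptions) in which an Aster-resident dimension table is joined to a Hadoop-resident fact table. Per the description above, SQL-H would fetch from Hadoop only the columns the query references and the rows matching its predicate, rather than copying whole files out of HDFS:

    -- Sketch: customers lives in Aster; hdfs_sales lives in Hadoop and
    -- is exposed through SQL-H. Only customer_id, sale_date, and
    -- revenue for Q1 2012 need to cross from Hadoop into Aster.
    SELECT c.segment,
           SUM(f.revenue) AS total_revenue
    FROM   customers  c
    JOIN   hdfs_sales f ON c.customer_id = f.customer_id
    WHERE  f.sale_date BETWEEN DATE '2012-01-01' AND DATE '2012-03-31'
    GROUP  BY c.segment;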

In addition, SQL-H lets you apply the 50+ pre-built SQL-MapReduce apps that Teradata Aster provides to Hadoop data, enabling big data analytics that are impossible in any other database, without writing a single line of Java MapReduce code! These apps include functions for path and pattern analysis, statistics, graph, text analysis, and more.
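
As one example, a path-analysis function in the style of Aster's nPath could be pointed at Hadoop-resident clickstream data. The sketch below is illustrative only; the table, columns, and exact clause details are assumptions, so consult the function documentation for specifics:

    -- Sketch: find users whose sessions show one or more Sports page
    -- views followed immediately by a Finance page view, with
    -- clickstream assumed to be Hadoop data exposed through SQL-H.
    SELECT user_id, path
    FROM   nPath(
             ON clickstream
             PARTITION BY user_id
             ORDER BY click_time
             MODE (NONOVERLAPPING)
             PATTERN ('S+.F')
             SYMBOLS (section = 'sports'  AS S,
                      section = 'finance' AS F)
             RESULT (FIRST(user_id OF S) AS user_id,
                     ACCUMULATE(section OF ANY(S, F)) AS path)
           );

Expressing the same sequence analysis in hand-written Java MapReduce would take far more code.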

Teradata Aster is committed to groundbreaking product innovation as the key strategy for maintaining our #1 position in the big analytics market. SQL-H is another important step that we expect will make Hadoop and big data analytics much more palatable for enterprise environments, allowing business analysts, SQL power users, and BI tool users to analyze Hadoop data without having to learn Hadoop interfaces and code.

If you want to find out more, we'll be talking about SQL-H at Hadoop Summit, on a webcast taking place June 21st, at the upcoming Big Analytics 2012 events in Chicago and New York, and at the annual Teradata Partners event.



March 21, 2012

The conversation around "big data" has been evolving beyond a technology discussion to focus on analytics and their application to the business. As such, we've worked with our partners and customers to expand the scope of the Big Data Summit events we initiated back in 2009 and have created Big Analytics 2012, a new series of roadshow events kicking off in San Francisco on April 19, 2012.

According to previous attendees and market surveys, the greatest big data application opportunities in businesses are:

- Digital marketing applications such as multi-channel analytics and testing to better understand and engage your customers

- Using data science and analytics to explore and develop new markets or data-driven services

Companies like LinkedIn, Edmodo, eBay, and others have effectively applied data science and analytics to take advantage of the new economics of data. And they are ready to share details of what they have learned along the way.

Big Analytics 2012 is a half-day event, is absolutely free to attend, and will include insight from industry insiders in two tracks: Digital Marketing Optimization, and Data Science and Analytics. It is a great way to meet and hear from peers such as:

- Executives who want to learn how to turn advanced analytics into a competitive advantage

- Interactive marketing innovators who want access to "game changing" insights for digital marketing optimization

- Enterprise architects and business intelligence professionals looking to provide big data infrastructure

- Data scientists and business analysts responsible for developing new data-driven products or business insights

Come to learn from the panel of experts and stay for an evening networking reception that will put you in touch with big data and analytics professionals from throughout the industry. Big Analytics 2012 will be coming soon to a city near you. Click here to learn more about the event and to register now.

 



By Tasso Argyros in Analytics, Business analytics, Interactive marketing, Teradata Aster on March 19, 2012
   

Tomorrow, I will have the pleasure of presenting “Radical Loyalty – Data Science Applied to Marketing” at the GigaOm Structure:Data event with Marc Parrish, the VP of Membership and Customer Retention Marketing at Barnes & Noble. In contrast with most talks at this event, Marc and I will be focusing on the business opportunities of Big Data and specifically on marketing loyalty programs and how they relate to Big Data analytics.

The concept of a loyalty program is certainly nothing new; brick-and-mortar companies have been leveraging customer loyalty in a variety of unique ways for decades. What's different is the ability of businesses to use new types of data to take their customer loyalty insights and strategies to a completely new level. At tomorrow's conference, we will explore how modern retailers with a strong digital marketing strategy, like Barnes & Noble, use Big Data to leverage their customers' loyalty, and how to make loyalty programs genuinely worthwhile for customers.

Barnes & Noble has proven its ability to innovate its business model by leveraging data. I look forward to sharing insights with Marc on retail and other real-world applications of Big Data.



By Stephanie in Interactive marketing on February 28, 2012
   

On a recent webinar, Rob Bronson from Forrester Research pointed out that 45% of Big Data implementations are in marketing. One of the use cases we hear about most from customers is the need to move from single-touch attribution methods, like last-click and first-click, to multi-channel, multi-touch attribution. Today we announced an extension of our Digital Marketing Solutions to deliver multi-touch attribution.

When I speak with customers about moving to multi-touch attribution, it feels like hearing about HDTV for the first time: more clarity, more detail, and a richer experience that is closer to the real-life experience of consumers. Multi-touch attribution is, in essence, the HD version of single-touch attribution.

What's different? First of all, consumers interact across many touchpoints: social, mobile, search, and websites, as well as offline channels. Most existing attribution solutions look at multiple touchpoints within a single channel, such as an ad network or website visits. With a Big Data Analytics approach, it is easier to blend more channels into the mix and find customer connections.

This is critical today, because it better reflects the customer journey.   To be customer-centric, it is critical to be able to look at your brand through the eyes of the consumer.  A few years ago, this was impossible or at least difficult and expensive.  Now Big Data marketing analytics makes it possible to see the multi-channel journeys with incredible clarity.

As consumers rapidly adopt new technologies, keeping up with them is one of today's marketers' biggest challenges. To do that, you can't be stuck in legacy single-touch models or annual reviews of attribution. Big Data Analytics makes it possible to discover new patterns, test new programs, and iterate to optimize on the time scales the market demands.

An additional benefit is that Big Data Analytics can deliver a 3D-style enhancement to attribution. Teradata Aster lets you assign different measures to each touchpoint, so you can use uniform, variable, or exponential weightings in your model and then test and iterate to find the right approach for your business.
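
To illustrate the exponential option, here is a minimal sketch of time-decayed credit assignment. The touchpoints table, its columns, and the seven-day half-life are assumptions, not the product's actual interface; Teradata Aster's packaged attribution functions would encapsulate this kind of logic:

    -- Sketch: each touchpoint's credit halves for every 7 days between
    -- the touch and the conversion; weights are then normalized per
    -- customer so each conversion distributes exactly 1.0 of credit.
    SELECT customer_id,
           channel,
           POWER(0.5, days_before_conversion / 7.0)
             / SUM(POWER(0.5, days_before_conversion / 7.0))
                 OVER (PARTITION BY customer_id) AS attributed_credit
    FROM   touchpoints;

Swapping the decay expression for a constant gives uniform weighting; replacing it with a per-channel factor gives variable weighting.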

Another big difference when using Teradata Aster to analyze attribution is the ability to link to additional data in a Teradata data warehouse, such as revenue, profit, and lifetime value, which extends attribution beyond conversion to real bottom-line performance.

Lastly, the ability to integrate into the Aprimo marketing platform makes this insight actionable.   With Aster and Aprimo being part of Teradata, it becomes possible to operationalize your Big Data Analytics more effectively.

The infographic above highlights why some marketers might feel they have an attribution problem. You can download a PDF of it here. On the same page, you will also find a white paper we created with Aprimo that goes into more detail about what attribution looks like today, and an on-demand webinar with Forrester and Razorfish that examines attribution in some depth. For those who want to read more, check out this Delicious stack.

So my question for this post is: do you have an attribution problem? And if so, how could a multi-touch, multi-channel attribution model make it better?



By Mayank Bawa in Analytics, Blogroll, Business analytics, Interactive marketing on June 11, 2008
   

I had the opportunity to work closely with Anand Rajaraman while at Stanford University and now at our company. Anand also teaches the Data Mining class at Stanford, and he recently wrote a very instructive post on the observation that efficient algorithms on more data usually beat complex algorithms on small data. He followed it up with an elaboration post. Google also seems to believe in a similar philosophy.

I want to build upon that observation here. If you haven't read the posts, do read them first. They are well worth the time!

I propose that there are two forces at work that help simple algorithms on big data beat complex algorithms on small data:

  1. The freedom of big data allows us to bring in related datasets that provide contextual richness.
  2. Simple algorithms allow us to identify small nuances by leveraging contextual richness in the data.

Let me expand my proposal using Internet Advertising Networks as an example.

Advertising networks essentially make a guess about a user’s intent and present an advertisement (creative) to the consumer. If the user is indeed interested, the user clicks through the creative to learn more.

Advertising networks today operate on a CPC (cost-per-click) model. There are stronger variants, CPL (cost-per-lead) and CPA (cost-per-acquisition), but the discussion applies to them just as it does to the simpler CPC model. There is also a simpler variant, CPM (cost-per-impression), but an advertiser effectively ends up computing CPC anyway by tracking click-through rates for money spent via CPM. The CPC model dictates that advertising networks do not make money unless the user clicks on a creative.

Today, the best advertising networks have a click-through rate of less than 1%. In other words, advertising networks correctly interpret a user's intentions 1% of the time; 99% of the time, they are ineffective!

I find this statistic immensely liberating. It shows that even if we are correct just 1% of the time, the rewards are significant. Why is the click-through rate so low? I think it is because human behavior is difficult to predict. Even sophisticated algorithms (which are computationally practical only on small datasets) do a bad job of predicting human behavior. It is much more powerful to think of efficient algorithms that execute across larger, diverse datasets and exploit the richness inherent in the context to achieve a higher click-through rate.

I've observed people in the field sample behavioral data to reduce their operating dataset. I submit that a 1% sample will lose the nuances and the context that can cause an uplift and growth in revenue.

For example, a content media site may have 2% of its users who come in to read Sports stay on to read Finance articles. A 1% sample is certain to reduce this 2% population trait to a statistically insignificant portion of the sample. Should we or should we not derive this insight to identify and engage the 2% by serving them better content? Similarly, an Internet retailer may find that 2% of the users who come in to buy a flat-panel TV have also bought video games recently. Should we or should we not act on this insight to identify and engage the 2% by offering them better deals on games? Given that games are a high-margin product, the net effect on revenue via cross-sell could be higher than 2% in dollars.

We often want to develop an algorithm that is provably correct under all circumstances. In a bid to satisfy this urge, we restrict our datasets to find a statistically significant model that is a good predictor. I associate that with a purist way of algorithm development that was drilled into us at school. Anand's observation is a call for practitioners to think simple, use context, and come up with rules that segment and win locally. It will be faster to develop, test, and win on simple heuristics than to wait for a perfect "Aha!" that explains all things human.
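
Returning to the content-site example: the segment in question is easy to find on the full dataset with a single query. The schema below is invented for illustration. On 1,000,000 sessions, a 2% trait is roughly 20,000 rows; in a 1% sample it shrinks to roughly 200 rows, too few to slice any further with statistical confidence.

    -- Sketch (schema assumed): users who read a Sports article and
    -- later read a Finance article. Run on the full data, this
    -- surfaces the 2% segment; run on a 1% sample, the segment all
    -- but vanishes.
    SELECT COUNT(DISTINCT s.user_id) AS sports_then_finance_readers
    FROM   pageviews s
    JOIN   pageviews f
      ON   f.user_id   = s.user_id
     AND   f.view_time > s.view_time
    WHERE  s.section = 'sports'
      AND  f.section = 'finance';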