Archive for the ‘Frontline data warehouse’ Category

By Steve Wooledge in Analytics, Blogroll, Frontline data warehouse on October 6, 2008

Aster announced the general availability of our nCluster 3.0 database, complete with new feature sets. We're thrilled with the adoption we saw before the product reached GA, and it's always a pleasure to speak directly with someone who is using nCluster to enable their frontline decision-making.

Lenin Gali, director of business intelligence for the online sharing platform ShareThis, is one such friend. He recently sat down with us to discuss how Internet and social networking companies can successfully grow their businesses by rapidly analyzing and acting on massive volumes of data.

You can read the full details of our conversation on the Aster website.



By Mayank Bawa in Analytics, Blogroll, Frontline data warehouse on October 6, 2008

It is really remarkable how many companies today view data analytics as the cornerstone of their businesses.

aCerno is an advertising network that uses powerful analytics to predict which advertisements to deliver to which person at what time. Its analytics are performed on completely anonymous consumer shopping data from 140M users, obtained from an association of 450+ product manufacturers and multi-channel retailers. There is a strong appetite at aCerno for analytics they have not done before, because each 1% uplift in click-through rates is a significant revenue stream for them and their customers.
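To make that arithmetic concrete, here is a back-of-the-envelope sketch in Python. The impression volume, baseline click-through rate, and cost-per-click below are purely hypothetical assumptions, not aCerno's numbers:

```python
# Hypothetical illustration of why a 1% relative CTR uplift matters.
impressions = 1_000_000_000   # assumed monthly ad impressions
baseline_ctr = 0.002          # assumed 0.2% click-through rate
cost_per_click = 0.50         # assumed $0.50 paid per click

baseline_revenue = impressions * baseline_ctr * cost_per_click
uplifted_revenue = impressions * (baseline_ctr * 1.01) * cost_per_click

print(f"Baseline revenue: ${baseline_revenue:,.0f}")
print(f"With 1% uplift:   ${uplifted_revenue:,.0f}")
print(f"Incremental:      ${uplifted_revenue - baseline_revenue:,.0f}")
```

Even under these modest assumptions, a 1% relative uplift is real money every month, and it compounds as volume grows.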

Aggregate Knowledge powers a discovery network (The Pique Discovery Network) that delivers product and content recommendations based on what an individual previously purchased and viewed, combined with the collective behavior of crowds that behaved similarly in the past. Again, each 1% increase in engagement is a significant revenue stream for them and their customers.

ShareThis provides a sharing network via a widget that makes it simple for people to share things they find online with their friends. In the short time since its launch, ShareThis has reached over 150M unique monthly users. The amazing insight is that ShareThis knows which content users actually engage with and want to tell their friends about! And, in a stroke of genius, ShareThis gives its service away free to publishers and consumers, relying on targeted advertising for its revenue: it can deliver relevant ad messages because it knows the characteristics of the thing being shared. Again, the better their analytics, the better their revenue.

Which brings me to my point: data analytics is a direct contributor to revenue gains at these companies.

Traditionally, we think of data warehousing as a back-office task. The data warehouse can be loaded in separate load windows; loads can run late (the net effect is that business users get their reports late); and loads, backups, and scale-ups can take the data warehouse offline, which is acceptable since these tasks can be done during non-business hours (nights and weekends).

But these companies rely on data analytics for their revenue.

·    A separate, exclusive load window implies that their service is not leveraging analytics during that window;
·    A late-running load implies that the service is operating on stale data;
·    An offline warehouse implies that the service is missing fresh trends.

Any such planned or unplanned outage results in lower revenues.

On the flip side, a faster load or query gives the service a competitive edge: a chance to do more with their data than anyone else in the market. A nimbler data model, a faster scale-out, or a more agile ETL process helps them implement their “Aha!” insights faster and gain revenue from a time-to-market advantage.

These companies have moved data warehousing from the back office to the frontlines of business: a competitive weapon to increase their revenues or to reduce their risks.

In response, the requirements on a data warehouse that supports these frontline applications go up a few notches: the warehouse has to be available for querying and loading 24x7x365; the warehouse has to be fast and nimble; and the warehouse has to allow “Aha!” queries to be phrased.

We call these use cases “frontline data warehousing”. And today we released a new version of Aster nCluster that rises up those few notches to meet the demands of frontline applications.



By Tasso Argyros in Analytics, Blogroll, Frontline data warehouse on October 6, 2008

Back in the days when Mayank, George, and I were still students at Stanford, working hard to create Aster, we had a pretty clear vision of what we wanted to achieve: allow the world to do more analytics on more data. Aster has grown tremendously since those days, but that vision hasn’t changed. And one can see this very clearly in the new release of our software, Aster nCluster 3.0, which is all about doing more analytics with more data. Because 3.0 introduces so many important features, we have grouped them into three big buckets: Always Parallel, Always On, and In-Database MapReduce.

Always Parallel has to do with the “Big Data” part of our vision. We want to build systems that can handle 10x-100x more data than any other system today. But this is too much data for any single “commodity server” (that is, a server with a reasonable cost) that one can buy. So we put a lot of R&D effort into parallelizing every single function of the system: not only querying, but also loading, data export, backup, and upgrades. Plus, we let our users choose how much to parallelize each of these functions, without having to scale up the whole system.
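To illustrate the general idea, here is a toy sketch of hash-partitioned parallel loading in Python. It is not Aster's loader API; the worker names and row format are invented for illustration:

```python
# Minimal sketch of a hash-partitioned parallel load across a hypothetical
# cluster of worker nodes. Not Aster's actual API; illustrative only.
from concurrent.futures import ThreadPoolExecutor
from hashlib import md5

WORKERS = ["worker-1", "worker-2", "worker-3", "worker-4"]  # hypothetical nodes

def partition_for(key: str, n_workers: int) -> int:
    """Stable hash partitioning: the same key always lands on the same worker."""
    return int(md5(key.encode()).hexdigest(), 16) % n_workers

def load_batch(worker: str, rows: list) -> int:
    # Stand-in for a per-node bulk insert; each worker ingests independently,
    # so aggregate load throughput scales with the number of workers.
    print(f"{worker}: loading {len(rows)} rows")
    return len(rows)

def parallel_load(rows: list) -> int:
    batches = {w: [] for w in WORKERS}
    for key, payload in rows:
        batches[WORKERS[partition_for(key, len(WORKERS))]].append(payload)
    with ThreadPoolExecutor(max_workers=len(WORKERS)) as pool:
        futures = [pool.submit(load_batch, w, b) for w, b in batches.items()]
        return sum(f.result() for f in futures)

if __name__ == "__main__":
    data = [(f"user-{i}", {"user": i, "event": "click"}) for i in range(10_000)]
    print(f"Loaded {parallel_load(data)} rows across {len(WORKERS)} workers")
```

Because every worker ingests its own batch independently, adding workers raises total throughput, which is the essence of keeping loads (and exports, backups, and upgrades) parallel rather than funneled through one node.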

Always On also stems from the need to handle “Big Data”, but in a different way. To store and analyze anything from a terabyte to a petabyte, you need a system with more than a single server. But then availability and management can become a huge problem. What if a server fails? How do I keep going, and how do I recover from the failure (either by reintroducing the same server or by adding a replacement) with no downtime? How can I seamlessly expand the system, to realize the great promise of horizontal scaling, without taking the system down? And, finally, how do I back up all these oceans of data without disrupting my system’s operation? All of these issues are handled in our new 3.0 release.
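A toy example of the replication idea that underlies this kind of availability follows; the partition map and node names are invented, and nCluster's actual recovery mechanism is not what is shown here:

```python
# Toy sketch of replica-based availability: every partition lives on more
# than one node, so a single node failure leaves all data reachable.
partitions = {
    "p0": ["node-1", "node-2"],   # each partition: primary plus a replica
    "p1": ["node-2", "node-3"],
    "p2": ["node-3", "node-1"],
}
failed_nodes = {"node-2"}          # simulate one node going down

def route_read(partition: str) -> str:
    """Return a live node holding the partition, skipping failed nodes."""
    for node in partitions[partition]:
        if node not in failed_nodes:
            return node
    raise RuntimeError(f"all replicas of {partition} are down")

for p in partitions:
    print(f"{p} -> {route_read(p)}")   # every partition still has a live copy
```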

We introduced In-Database MapReduce in a previous post, so I won’t spend too much time on it here. But I do want to point out how it fits our overall vision. Having a database that is always parallel and always on allows you to handle Big Data with high performance, low cost, and high availability. But once you have all this data, you want to do more analytics, to extract more value and insights. In-Database MapReduce is meant to do exactly that: push the limits of what insights you can extract by providing the first-ever system that tightly integrates MapReduce (a powerful analytical paradigm) with a widespread standard like SQL.
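For readers new to the paradigm, the canonical MapReduce shape looks like this in plain Python. The clickstream records are invented, and the point of In-Database MapReduce is that such functions can be invoked from within SQL rather than in a separate system:

```python
# The canonical MapReduce shape: map emits (key, value) pairs, the framework
# groups them by key, and reduce aggregates each group. Illustrative only.
from collections import defaultdict

def map_phase(records):
    for record in records:
        yield record["user"], 1          # emit one count per page view

def reduce_phase(key, values):
    return key, sum(values)              # total views per user

def run_mapreduce(records):
    groups = defaultdict(list)
    for key, value in map_phase(records):
        groups[key].append(value)        # shuffle: group values by key
    return dict(reduce_phase(k, vs) for k, vs in groups.items())

clicks = [{"user": "alice", "url": "/a"},
          {"user": "bob",   "url": "/b"},
          {"user": "alice", "url": "/c"}]
print(run_mapreduce(clicks))             # {'alice': 2, 'bob': 1}
```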

These are the big features in nCluster 3.0, and in most of our marketing materials we stop there. But I also want to talk about the other great things in this release; things too subtle or technical to mention in the headlines, but still very important. We’ve added table compression features that offer online, multi-level compression for cost savings. With table compression, you can choose your compression ratio and algorithm, and have different tables compressed differently. This paves the way for data life-cycle management that compresses data differently depending on its age.
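A sketch of what such an age-based policy could look like follows; the tier thresholds and level names are invented for illustration and are not nCluster's actual settings:

```python
# Hypothetical age-based compression policy: colder data gets heavier
# compression. Thresholds and level names are invented for illustration.
POLICY = [             # (max age in days, compression level)
    (30, "none"),      # hot data: queried constantly, keep uncompressed
    (180, "medium"),   # warm data: trade some CPU for storage savings
    (None, "high"),    # cold data: maximize storage savings
]

def compression_for(age_days: int) -> str:
    """Pick a compression level for data of a given age."""
    for max_age, level in POLICY:
        if max_age is None or age_days <= max_age:
            return level

for age in (7, 90, 400):
    print(f"{age:>3}-day-old data -> {compression_for(age)} compression")
```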

We’ve also implemented richer workload management to offer quality of service, with fine-grained prioritization of mixed workloads via priority- and fair-share-based resource queues. You can even allocate resource weights based on transaction count or time (useful when both big and small jobs occur).
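As a rough illustration of the fair-share idea (the queue names and weights are invented, and nCluster's scheduler is more sophisticated than this):

```python
# Toy weighted fair-share picker: each queue receives resources in
# proportion to its weight. Illustrative only, not nCluster's scheduler.
QUEUES = {"dashboards": 60, "batch_etl": 30, "ad_hoc": 10}  # relative weights
usage = {name: 0.0 for name in QUEUES}                      # work units consumed

def next_queue() -> str:
    """Pick the queue furthest below its fair share of consumed resources."""
    return min(QUEUES, key=lambda q: usage[q] / QUEUES[q])

for _ in range(10):
    q = next_queue()
    usage[q] += 1.0          # charge one unit of work to the chosen queue
print(usage)                 # converges toward the 60/30/10 split
```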

3.0 also has Network Aggregation (NIC “bonding”) for performance and fault tolerance. This is a one-click configuration that automates network setup, usually a tedious, error-prone sysadmin task. And that’s not the end of it: we are also introducing an Upgrade Manager that automates upgrades from one version of nCluster to another, including the part that most frequently breaks upgrades: the operating system components. This is another building block of the low cost of ongoing administration that we’re so proud of achieving with nCluster. I could go on and on (new SQL enhancements, new data validation tools, heterogeneous hardware support, LDAP authentication, …), but since blog space is supposed to be limited, I’ll stop here. (Check out our new resource library if you want to dig deeper.)

Overall, I am delighted to see how our product has evolved toward the vision we laid out years back. I’m also thrilled that we’re building a solid ecosystem around Aster nCluster (we now support all the major BI platforms) and are establishing quite a network of systems integrators to help customers implement their frontline data warehouses. In a knowledge-based economy full of uncertainty, opportunities, and threats, doing more analytics on more data will drive the competitiveness of successful corporations, and Aster nCluster 3.0 will help you deliver just that for your own company.