Archive for the ‘Blogroll’ Category

30
Sep
New garage days ahead
By George Candea in Blogroll on September 30, 2009
   

After a long thinking process, I have decided to step down from my day-to-day operating role as Chief Scientist. I will be checking in on the team every now and then, and I will continue to serve Aster on the board of directors, where we shape the strategic direction of the company and products.

Aster is now in a new phase, in which maximal focus on product delivery and responding to customers’ every need takes center stage. We have a fantastic product that is growing quickly and is being deployed at many, many customer sites. The level of product focus in our team is astounding. Yes, my friends, the garage days are now officially over! And I’m delighted we’ve come this far.

One should always invest one’s energy in the activities that benefit most from it. Aster is a grown-up now and is solidly set on a path with great momentum toward executing our “big data” vision. At the same time, I have a budding team of incredibly bright students back in my lab at EPFL, who are hungry for my help and scientific direction. For me, it’s time for the next “garage project.”

As you might imagine, this is not an easy decision for a founder. Perhaps this is how parents feel when sending their kid to college away from home, but time does not stand still.

It has been a thrill to start Aster with Mayank and Tasso, to see her emerge from 5 PCs cooled by a portable fan in my living room, and to work for her all these years. I’m proud of what we’ve achieved so far, both in terms of product and market impact and in terms of our published research papers. At a future stage in Aster’s growth, it may make sense for me to resume my role in leading the company’s long-term research. Until then, I look forward to helping the company from behind the scenes and to watching Aster grow and develop the best data management platform in the market.

Carpe datum!



14
Sep
Aster Data in Europe
By Mayank Bawa in Blogroll on September 14, 2009
   

Aster Data has seen tremendous growth in North America. We announced today that we have opened a Europe office in West London, England. The office will be headed by Bob Pearson, our newly appointed Europe Area Director. Bob is an entrepreneurial industry leader who earlier introduced Opsware into Europe, eventually propelling Opsware to #1 in its European market. We had been in conversations with Bob for 12 months - understanding the European market - before we opened our office this summer.

We also announced today that our first customer in Europe is the #1 online poker gaming site in the world, Full Tilt Poker. We have been working with Full Tilt Poker for 8 months now, helping them deploy Aster nCluster to power their fraud prevention systems and provide enhanced customer service to their players.

It is no surprise that data size growth is a world-wide phenomenon, and certainly occurs across “the pond” as well. We have noticed that European customers in numerous industries, such as financial services and insurance, online retailing, social networking, communications, and gaming are deploying new (and sometimes custom) applications to leverage big data.

Aster Data is certainly the most application-friendly big data infrastructure in the market, and we look forward to working with our European customers in the coming years!



21
Aug
VLDB 2009: SQL/MapReduce Turns One Year Old
By Aster in Blogroll, nPath on August 21, 2009
   


This post was co-authored by John Cieslewicz, Eric Friedman, and Peter Pawlowski of Aster Data Systems

One year ago we introduced SQL/MapReduce for the Aster nCluster database, which integrates MapReduce and SQL to enable deep analytics within the database. Pushing computation inside the database and close to the data is increasingly important as data sizes grow exponentially. As SQL/MR turns one year old, we are happy to announce that we will be presenting our SQL/MR innovations next week at the 35th International Conference on Very Large Data Bases (VLDB), the premier international forum for database research.

The title of our conference paper is “SQL/MapReduce: A Practical Approach to Self-describing, Polymorphic, and Parallelizable User-defined Functions.” The title of the paper may be large, but so are the advanced analytics possibilities created by the invention of SQL/MR!

We developed SQL/MR because we saw a growing gap between the deep analytics and application needs of very large data and the capabilities provided by SQL and traditional relational-only data processing. We call this gap the “SQL Gap.”

The SQL Gap

SQL and the relational query processing model are well suited for many, but not all, data processing tasks. Some queries are cumbersome, non-intuitive, or impossible to express in SQL (note: now that SQL is Turing-complete, nothing is strictly impossible, but it can be very painful and perform very badly) - check out our paper for some examples. Moreover, query optimizers have a limited number of algorithms at their disposal to process data, which leads to convoluted data processing in situations where applying a little domain knowledge can yield a much more straightforward algorithm.

We found traditional user-defined functions (UDFs) to fall short in bridging this gap between SQL and the answers to challenging analytic problems that need to be solved. UDFs are often user-unfriendly, inflexible, and not easily parallelized. SQL/MR functions, in contrast, are designed to be easy to develop, easy to install, and easy to use - providing developers and analysts with a powerful tool to tackle the challenges posed by very large data.

To do this, we integrated the MapReduce programming model with SQL. MapReduce is a well known paradigm for parallel, fault-tolerant data processing that allows developers to write procedural code that is then applied to data in parallel. Pure MapReduce, however, misses out on aspects of SQL and relational data processing that are great - such as query optimizations, managed data, and transactions. By integrating SQL and MapReduce on top of Aster nCluster’s hardware management and fault tolerance, we leverage the strengths of each, resulting in a system that is much more powerful than either in isolation.

SQL/MapReduce At VLDB
SQL/MR is much more than a user-defined function. As our paper title states, a SQL/MR function is self-describing, polymorphic, and parallelizable. Let’s explore each of these characteristics and see why we hope researchers at VLDB will be as excited as we are by SQL/MR.

Self-Describing and Polymorphic
The behavior and output characteristics of a SQL/MR function are determined dynamically at query-time instead of statically at install-time, as is the case with traditional user-defined functions. These characteristics allow SQL/MR functions to behave much more like general-purpose library functions than single-use, application-specific user-defined functions. When a SQL/MR function is used in a query, the nCluster query planner negotiates a contract with the SQL/MR function, providing the function’s input schema and optional user-supplied parameters. In return, the SQL/MR function agrees to a contract that specifies its output schema for the duration of the query. This contract negotiation is what makes SQL/MR functions self-describing, and their ability to be invoked on different input with different optional parameters makes them polymorphic as well. To summarize, by invoking a SQL/MR function on different input, with different optional parameters, the SQL/MR function may export a different output schema and perform different computation - the possibilities are entirely up to the developer!
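To make this concrete, here is a hypothetical sketch of what such an invocation can look like; the sessionize function name, its argument clauses, and the table schemas below are illustrative only, not exact product syntax. The point is that the same function, invoked on differently shaped inputs with different parameters, negotiates a different output schema each time:

-- Illustrative sketch: function name, clauses, and schemas are hypothetical.
SELECT userid, ts, sessionid
FROM sessionize(
  ON web_clicks
  PARTITION BY userid
  ORDER BY ts
  TIMECOLUMN('ts')
  TIMEOUT(60)
);

-- The same function, unchanged, applied to a differently shaped table
-- with a different timeout parameter:
SELECT account_id, event_time, sessionid
FROM sessionize(
  ON support_calls
  PARTITION BY account_id
  ORDER BY event_time
  TIMECOLUMN('event_time')
  TIMEOUT(600)
);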

Parallelizable
The SQL/MR programming model, like that of MapReduce, is inherently parallelizable. Developers write procedural code in the language of their choice, but at runtime that code will run in parallel across hundreds of nodes within an nCluster database. Advanced analytics and application code can now be executed in parallel, directly on data stored within nCluster, making nCluster an application-friendly, high-performance data warehouse and data application system. Advanced analytics capabilities that we have already pushed inside nCluster using SQL/MR include click-stream sessionization, general-purpose time series path matching, dynamic and massively parallel data loading from heterogeneous sources, and genetic sequence analysis.

Bridging the Gap
SQL/MapReduce has matured greatly over the past year and is used by our customers in creative ways we never imagined - proof positive that SQL/MapReduce is an effective way to bridge the “SQL Gap” to deeper analytics on very large data. Check out the other application examples and sample code videos on www.asterdata.com, as well as the MapReduce category of this blog.

We’d love to hear from you about any type of analysis you’re doing where stand-alone SQL is becoming overly complex … and please look us up at VLDB 2009 if you’re making the trip to Lyon, France!

(Apart from the authors, Brent Chun, Mohit Aron, Abhishek Marwah, Raghu Venkat, Vinay Bondhugula, and Prasan Roy of Aster Data Systems are notable contributors to the overall SQL/MR effort.)



03
Aug
Netezza’s Change in Architecture - Move towards Commodity
By Mayank Bawa in Blogroll, TCO on August 3, 2009
   

Netezza pre-announced last week that they will be moving to a new architecture - one based around IBM blades (Linux + Intel + RAM) with commodity SAS disks, RAID controllers, and NICs. The product will continue to rely on an FPGA, but it will sit much further from the disks and RAID controller - beyond the RAM, adjacent to the Intel CPU - in contrast to their previous product line.

In assembling a new hardware stack, Netezza characterizes this re-architecture as a change that is not really a change - the FPGA will continue to offload data compression/decompression, selection, and projection from the Intel CPU; the Intel CPU will be used to execute pushed-down joins and group-bys; and the RAM will be used for caching (thus helping improve mixed-workload performance).

I think this is a pretty significant change for Netezza.

Clearly, Netezza would not have invested in this change - assembling and shipping a new hardware stack and sharing revenue with IBM rather than a 3rd-party hardware assembler - if Netezza’s old FPGA-dominant hardware were not being out-priced and out-performed by our Intel-based commodity hardware.

It was only a matter of time before the market realized that FPGAs had reached end-of-life status in the data warehousing market. In seeing the writing on the wall, and responding to it early, Netezza has made a bold decision to change - and yet it has clung to the warm familiarity of an FPGA as a “side car”.

Netezza, and the rest of the market, will soon become aware that a change in hardware stack is not a free lunch. The richness of CPU and RAM resources in an IBM commodity blade come at a cost that a resource-starved FPGA-based architecture never had to account for.

In 2009, after having engineered its software for an FPGA over the last 9 years, Netezza will need to come to terms with commodity hardware in production systems and demonstrate that they can:

- Manage processes and memory spawned by a single query across 100s of blade servers

- Maintain consistent caches across 100s of blade servers - after all, it is Oracle’s Cache Fusion technology that is the bane of scaling Oracle RAC beyond 8 blade servers

- Tolerate the higher frequency of failures that a commodity Linux + RAID controller/driver + network driver stack incurs when put under rigorous data movement (e.g., allocation/de-allocation of memory contributing to memory leaks)

- Add a new IBM blade and ensure incremental scaling of their appliance

- Upgrade the software stack in place - unlike an FPGA-based hardware stack, which customers are OK to floor-sweep during an upgrade

- Prevent runaway queries from grabbing the abundant CPU and RAM resources and starving other concurrent queries in the workload

- Reduce network traffic for a blade with 2 NICs that is managing 8 disks, vs. a PowerPC/FPGA setup that had 1 NIC per disk


If you take a quick pulse of the market, apart from our known installations of 100+ servers, there is no other vendor - mature or new-age - who has demonstrated that hundreds of commodity servers can be made to work together to run a single database.

And I believe there is a fundamental reason for this lack of proof points, even a decade after Linux matured and commodity servers came into widespread use for computing: software that is not built from the ground up to leverage the richness and contain the limitations of commodity hardware is incapable of scaling. Aster nCluster has been built from the ground up to have these capabilities on a commodity stack. Netezza’s software, written for proprietary hardware, cannot simply be retrofitted to work on commodity hardware (otherwise, Netezza would have taken the FPGAs out completely, now that they have powerful CPUs!). Netezza has its work cut out - they have made a dramatic shift that has the potential to bring the company and its production customers to their knees. And therein lies Netezza’s challenge - they must succeed while supporting their current customers on an FPGA-based platform and simultaneously moving resources to build out a commodity-based platform.

And we have not even touched upon the extension of SQL with MapReduce to power big data manipulation using arbitrary user-written procedures.

If a system is not fundamentally designed to leverage commodity servers, it’s only going to be a band-aid on seams that are bursting. Overall, we will watch with curiosity how long it takes Netezza to eliminate their FPGAs completely and move to a real commodity stack, so that customers can have the freedom to choose their own hardware and not be locked into Netezza-supplied custom hardware.



14
Jul
By Shawn Kung in Blogroll, Frontline data warehouse on July 14, 2009
   

When you hear the word “warehouse,” you normally think of an oversized building with high ceilings and a ton of storage space. In the data warehousing world, it’s all too easy to fill that space faster than expected. Even companies with predictable data growth trajectories don’t want to pay for storage space they won’t need for months or even years out. For either type of company, the ability to scale on-demand, and to the appropriate degree, is critical.

That’s why I’m so excited about a webinar we are hosting next week with James Kobielus, Senior Analyst for Forrester Research. In case you haven’t read it, James recently released his report “Massive But Agile: Best Practices for Scaling the Next-Generation Data Warehouse.” In the report, James thoroughly addresses several issues around scalability for which Aster is well suited (parallelism, optimized storage, in-database analytics, etc.).

We’ll get into much more detail on these and other issues over the course of the webinar. If you haven’t had a chance yet, please register for the webinar to hear what James, a leader and visionary in the industry, has to say. And make sure to leave a comment below if there are any facets of data warehouse scalability that you would like us to cover.



29
Jun
By Mayank Bawa in Blogroll on June 29, 2009
   

We are announcing the availability of an Enterprise-Ready MapReduce Data Warehouse Appliance.

The appliance is powered by Dell hardware and Aster’s nCluster SQL/MR database, with optional BI software from MicroStrategy and data modeling software from AquaFold (Aqua Data Studio).

Our product portfolio now allows our customers to get the benefits of our flagship Aster nCluster SQL/MR database in the packaging that they are most comfortable with - on-premise software, in-cloud service, or pre-packaged appliance.

The appliance offering packs a lot of punch compared to other data warehousing appliances in the market - it has the highest ratio of compute & memory to data sizes, allowing you to run rich queries on the appliance without breaking a sweat.

We are especially proud of the open nature of our appliance - the hardware is from Dell, built from industry-standard components; the BI server is from MicroStrategy; and the data modeling tool is from AquaFold (Aqua Data Studio). The appliance brings together industry-leading components of a full data warehouse stack - all pre-tested and configured for optimal performance.

Even the programming of our appliance is open - our SQL/MR framework allows applications to push computation into the appliance using industry standard SQL augmented with MapReduce in the language of your choice (Java, C#, Perl, Python, etc.).

We have been approached by a number of customers seeking a get-started-quickly system, especially those groups of users and departments seeking a Hadoop framework to build their solutions upon.

In response to these requests, we are proud to announce an Express Edition of the appliance that is designed to work for up to 1 TB of user data. And it comes at an even more attractive price - just $50K - complete with hardware and software!

Give us a call - we’ll get your warehouse set up on our appliance and ensure that the time-to-first-query is measured in hours, not months!



09
Jun
By Peter Pawlowski in Blogroll on June 9, 2009
   

The Aster SQL/MapReduce framework allows developers to push analytics code for applications closer to the data in the database, without the headaches of extracting and analyzing data outside of the database. We’ve supported a variety of languages from day one, including Java, Python, and Perl. Today we’re pleased to announce official support for the .NET family of languages via Mono, an excellent open source .NET implementation. This will allow developers who use .NET languages like C# and VB (and, of course, F#) to more easily leverage nCluster for massively parallel analytics.

Our .NET support is enabled through our Stream SQL/MR function, which allows users to process data via a simple streaming interface: provide a program that reads rows from the console (stdin) and writes rows back to the console (stdout). Let’s consider a simple C# program called Tokenize, which splits incoming rows into tokens and then outputs each token (one per line).
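A minimal sketch of such a Tokenize program, assuming nothing beyond standard .NET console I/O, could look like this:

using System;

class Tokenize
{
    static void Main()
    {
        string line;
        // Read each input row from stdin.
        while ((line = Console.ReadLine()) != null)
        {
            // Split the row on whitespace and emit each token
            // as its own output row on stdout.
            foreach (string token in line.Split(new char[] { ' ', '\t' },
                         StringSplitOptions.RemoveEmptyEntries))
            {
                Console.WriteLine(token);
            }
        }
    }
}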

To run this program over data stored in nCluster, a developer just needs to compile the above Tokenize.cs into Tokenize.exe with a C# compiler (in our case, the Mono C# compiler gmcs). With the compiled executable in hand, one command in our terminal client installs it in nCluster. The program can then be invoked from SQL. The example below runs the program over all the rows in the documents table, outputting a table with a single column (token). Each row in the result of the query corresponds to a single token in the input documents.
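Sketched roughly - the clause names here are illustrative rather than exact product syntax - such a query might look like:

-- Rough sketch: clause names are approximate, not exact product syntax.
SELECT token
FROM stream(
  ON documents
  EXECUTE('mono Tokenize.exe')   -- the installed .NET executable to run over each row
  OUTPUTS('token varchar')       -- the single output column it produces
);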

It’s as simple as that. We hope our new .NET support will enable an ever-broader group of developers to take advantage of SQL/MR, our in-database analytics technology! If you’re interested in learning more, please check out a host of new resources around our implementation of MapReduce within Aster nCluster, including example applications and code.



05
Jun
By Mayank Bawa in Blogroll on June 5, 2009
   

 Rajeev was a close friend and a cherished mentor. We were saddened to hear the news today and we will miss him dearly. Our thoughts are with his family.



22
Apr
The Importance of Visibility Across Rows
By Peter Pawlowski in Blogroll, nPath on April 22, 2009
   

Aster’s SQL/MR framework (In-Database MapReduce) enables our users to write custom analytic functions (SQL/MR functions) in a programming language like Java or Python, install them in the cluster, and then invoke them from SQL to analyze data stored in nCluster database tables. These SQL/MR functions transform one table into another, but do so in a massively parallel way. As increasingly valuable analytic functions are pushed into the database, the value of constructing a data structure once, and reusing it across a large number of rows, increases substantially. Our API was designed with this in mind.

What does the SQL/MR API look like? The SQL/MR function is given an iterator over a set of input rows, as well as an emitter for outputting rows. We decided on this interface for a number of reasons, one of the most important being the ability to maintain state between rows. We’ve found that many useful analytic functions need to construct some state before processing a row of input, and this state construction should be amortized over as many rows as possible.

Here’s a wireframe of one type of SQL/MR function (a RowFunction):

class RealAsterFunction implements RowFunction
{
  void operateOnSomeRows(RowIterator iterator, RowEmitter outputEmitter)
  {
    //
    // Construct some data structure to enable fast processing.
    //
    ...
    //
    // Read a row from iterator, process it, and emit a result.
    //
    ...
  }
}

When this SQL/MR function is invoked in nCluster, the system starts several copies of this function on each node (think: one per CPU core). Each function is given an iterator to the rows that live in its local slice of the data. An alternative design, which is akin to the standard scalar UDF, would have been:

class NotRealAsterFunction implements PossibleRowFunction
{
  static void operateOnRow(Row currentRow, RowEmitter outputEmitter)
  {
    //
    // Process the given row and emit a result.
    //
    ...
  }
}

In this design, the static operateOnRow method would be called for each row in the function’s input. State can no longer be easily stored between rows. For simple functions, like computing the absolute value or a substring of a particular column, there’s no need for such inter-row state. But, as we’ve implemented more interesting analytic functions, we’ve found that enabling the storage of such state, or more specifically paying only once for the construction of something complex and then reusing it, has real value. Without the ability to save state between rows, the construction of this state would dominate the function’s execution.

Examples abound. Consider a SQL/MR function which applies a complex model to score the data in the database, whether it’s scoring a customer for insurance risk, scoring an internet user for an ad’s effectiveness, or scoring a snippet of text for its sentiment. These functions often construct a data structure in memory to accelerate scoring, which works very well with the SQL/MR API: build the data structure once and reuse it across a large number of rows.

A sentiment analysis SQL/MR function, designed to classify a set of notes written up by customer service reps or a set of comments posted on a blog, would likely first build a hash table mapping words to sentiment scores, based on some dictionary file. The function would then iterate through each snippet of text, converting each word to its stem and then doing a fast lookup in the hash table. Such a persistent data structure accelerates the sentiment scoring of each text snippet.
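As a sketch of this pattern, written in the spirit of the wireframe above (the dictionary-loading, stemming, and row-access helpers below are hypothetical, not the actual nCluster API), the shape would be roughly:

class SentimentScore implements RowFunction
{
  // Built once, then reused for every row this function instance processes.
  private final java.util.Map<String, Double> scoreByStem =
      new java.util.HashMap<String, Double>();

  public void operateOnSomeRows(RowIterator iterator, RowEmitter outputEmitter)
  {
    // Pay the construction cost once: load the dictionary into a hash table.
    if (scoreByStem.isEmpty()) {
      loadDictionary("sentiment.dict", scoreByStem);
    }

    // Then stream through the rows, doing only cheap lookups per row.
    while (iterator.advanceToNextRow()) {          // hypothetical accessor names
      String text = iterator.getStringAt(0);
      double score = 0.0;
      for (String word : text.toLowerCase().split("\\s+")) {
        Double s = scoreByStem.get(stem(word));
        if (s != null) {
          score += s;
        }
      }
      outputEmitter.addString(text);
      outputEmitter.addDouble(score);
      outputEmitter.emitRow();
    }
  }

  // Hypothetical helpers: read the dictionary file shipped with the function
  // and apply a stemming algorithm to a word.
  private void loadDictionary(String file, java.util.Map<String, Double> into) { ... }
  private String stem(String word) { ... }
}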

Another example is Aster’s nPath SQL/MR function. At a high level, this function looks for patterns in ordered data, with the pattern specified as a regular expression. When nPath runs, it converts the pattern into a data structure optimized for fast, constant-memory pattern matching. If state couldn’t be maintained between rows, there’d be a large price to pay for reconstructing this data structure on each new row.
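For example, a pattern query along the following rough lines (the clause syntax is illustrative and abbreviated, not a verbatim reference) would look for visitors who went from the home page through any number of searches to a checkout, compiling the PATTERN expression into a matcher once and then streaming the partitioned, ordered rows through it:

-- Illustrative sketch of an nPath-style invocation; syntax is approximate.
SELECT userid, path_start, checkout_time
FROM nPath(
  ON clicks
  PARTITION BY userid
  ORDER BY ts
  MODE(NONOVERLAPPING)
  PATTERN('H.S*.C')
  SYMBOLS(pagetype = 'home' AS H,
          pagetype = 'search' AS S,
          pagetype = 'checkout' AS C)
  RESULT(FIRST(userid OF H) AS userid,
         FIRST(ts OF H) AS path_start,
         LAST(ts OF C) AS checkout_time)
);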

Repeating the high bit: as increasingly valuable analytic functions are pushed into the database, the value of constructing a data structure once, and reusing it across a large number of rows, increases substantially. The SQL/MR API was designed with this in mind.



07
Apr
Largest Independent Online Ad Network Chooses Aster
By Steve Wooledge in Blogroll on April 7, 2009
   

I’m delighted to welcome Specific Media to the quickly growing family of Aster customers! I had the pleasure of briefly meeting the folks from Specific Media in our offices last week. Like Aster, Specific Media is incredibly focused on doing more with data to increase the value they provide to their customers: advertisers that represent 300 of the top Fortune 500 brands.

They’re also really smart and humble about what they do, which makes it a pleasure to work with them. And what you wouldn’t know from a brief introduction is how cutting-edge their analytic methodologies and capabilities are.  We’re just starting our partnership together and hope to have some success metrics to share later about how they are using the Aster nCluster database for their data warehouse. They have some interesting ideas for using the Aster In-Database MapReduce framework to perform rich analysis of data efficiently for improved ad targeting and relevancy.