Archive for the ‘Database’ Category

15
Sep
By Tasso Argyros in Blogroll, Database on September 15, 2008
   

Dave Kellogg’s blog reminded me that the Claremont Report on Database Research was recently released. The Claremont report is the result of two days of discussion among some of the world’s leading database academics, and it aims to identify and promote the most promising research directions in databases.

As I was reading the report, I realized that Aster Data is at the forefront of some of the most exciting database research topics. In particular, the report mentions four areas (out of a total of six) where Aster has been driving innovation very aggressively.

1. Revisiting database engines. MPP is the answer to Big Data, among other things.

2. Declarative programming for emerging platforms. MapReduce is explicitly mentioned here, with a note on its potential in data management. This is a very important development, given that certain database academics (who participated in the report) have repeatedly shown their disdain for, and ignorance of, the topic.

3. Interplay of structured and unstructured data. This is an important area where MapReduce can play a huge role.

4. Cloud data services. Database researchers realize the potential of the cloud, both as a data management and a research tool. With our precision scaling feature, we are a strong fit for internal Enterprise clouds.

The world of databases is changing fast and this is an opportunity for us to provide the most cutting-edge database technology to our customers.

We’ve also found a lot of benefit in our strong ties with academia, by virtue of our background and advisors, and we intend to strengthen these even further.



06
Sep
By Tasso Argyros in Blogroll, Database, MapReduce on September 6, 2008
   

In response to Aster’s In-Database MapReduce initiative, I’ve been asked the following question:

“How does Aster Data Systems compete with open source MapReduce implementations, such as Hadoop?”

My answer – we simply do not.

Hadoop and Google’s implementation of MapReduce are targeted at the development (coding) community. The primary interface of these systems is the command line, and the primary means of accessing data is Java or Python code. There have been efforts to build higher-level interfaces on top of these systems, but they are usually limited, follow no existing standard, and are incompatible with the existing filesystem.

Such tools are ideal for environments that are dominated by engineers, such as academic institutions, research labs or technology companies like Google/Yahoo that have a strong culture of in-house development (often hundreds of thousands of lines of code) to solve technical problems.

Most enterprises do not share the culture of Google or Yahoo, and each “build vs. buy” decision is carefully considered. Good engineering talent is a precious resource that is directed toward adding business value, not toward building infrastructure from the ground up. Data Services groups are universally under-staffed and consist of people who understand and leverage databases. As such, any data management tool they use must meet corporate governance expectations:

- it has to comply with applicable standards like ANSI-SQL,

- it needs to provide a set of tools that IT can use & manage, and

- it needs to be ecosystem-friendly (BI and data integration tools compatibility).

In such an environment, using Java or a developer-centric command line as the primary interface increases the burden on the Data Services group and their IT counterparts.

I strongly believe that, while existing MapReduce tools are good for development organizations, they are totally inappropriate for the large majority of enterprise IT departments.

Our goal is not to build yet another tool for development groups, but rather to create a product that unleashes the power of MapReduce for the enterprise IT organization.

How can we achieve that?

First, we’ve developed Aster to be a super-fast, always-parallel database for large-scale data warehousing using SQL. Then we allow our customers and partners to extend SQL through tightly integrated MapReduce functionality.

The person who develops our MapReduce functions naturally needs to be a developer; but the person using this functionality can be an analyst working with a standard BI tool (e.g., MicroStrategy, Business Objects, Pentaho) over an ODBC or JDBC connection!

Invoking MapReduce functions in Aster looks almost identical to writing standard SQL code. This way, the powerful MapReduce extensions developed by a small set of developers (either within an IT organization or by Aster itself) can be used by anyone with SQL skills, using their existing tools.
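
To make this concrete, here is a minimal sketch of the kind of query an analyst could issue, borrowing the Sessionize SQL/MR function and the webclicks clickstream table from the post further down this page; the aggregate query wrapped around the function is purely illustrative:

-- count clicks per session; Sessionize() is a SQL/MR function, the rest is plain SQL
SELECT sessionId, COUNT(*) AS clicks
FROM Sessionize( 'timestampValue', 60 ) ON webclicks
GROUP BY sessionId
ORDER BY clicks DESC;

Nothing in this query betrays the fact that a parallel MapReduce job is running underneath; to the BI tool, Sessionize() is just another relation in the FROM clause.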

Integrating MapReduce and SQL is not an easy job; we had to innovate on multiple levels to achieve it, e.g., by creating a new type of UDF that is both parallel and polymorphic, to make MapReduce extensions almost indistinguishable from standard SQL.

In summary, we have enabled:

- The flexible, parallel power of MapReduce to enable deep analytical insights that are impossible to express in standard SQL

- Seamless integration with ANSI standard SQL and all the rich commands, types, functions, etc. that are inherent in this well-known language

- Full JDBC/ODBC support, ensuring interoperability between Aster In-Database MapReduce and 3rd-party database ecosystem tools like BI, reporting, advanced analytics (e.g., data mining), ETL, monitoring, scheduling, GUI administration, etc.

- SQL/MR functions – powerful plug-in operators that any non-engineer can easily plug into standard ANSI SQL to exploit the power of MapReduce analytic applications

- Polymorphism – unlike static, unreliable UDFs, SQL/MR functions unleash the power of run-time (dynamic) polymorphism for cost-efficient reusability. Built-in sandboxing provides fault isolation, avoiding the system crashes commonly experienced with UDFs

To conclude, it is important to understand that Aster nCluster is not yet another MapReduce implementation, nor does it compete with Hadoop for resources or audience.

Rather, Aster nCluster is the world’s most powerful database, one that breaks traditional SQL barriers and allows Data Services groups and IT organizations to extract more knowledge from their data.



27
Aug
How Aster In-Database MapReduce Takes UDFs to the Next Level
By Tasso Argyros in Blogroll, Database, MapReduce on August 27, 2008
   

Building on Mayank’s post, let me dig deeper into a few of the most important differences between Aster’s In-Database MapReduce and User Defined Functions (UDFs):

Feature by feature, here is how User Defined Functions and Aster SQL/MR functions compare, and what each difference means in practice:

Dynamic Polymorphism
- User Defined Functions: No. Requires changing the function code and adding static declarations.
- Aster SQL/MR functions: Yes.
- What it means: SQL/MR functions work just like SQL extensions - no need to change function code.

Parallelism
- User Defined Functions: Only in some cases, and only for a small number of nodes.
- Aster SQL/MR functions: Yes, across 100s of nodes.
- What it means: Huge performance increases, even for the most complex functions.

Availability Ensured
- User Defined Functions: No. In most cases UDFs run inside the database.
- Aster SQL/MR functions: Always. Functions run outside the database.
- What it means: Even if functions have bugs, the system remains resilient to failures.

Data Flow Control
- User Defined Functions: No. Requires changing the UDF code or writing complex SQL subselects.
- Aster SQL/MR functions: Yes. “PARTITION BY” and “SEQUENCE BY” natively control the flow of data in and out of the SQL/MR functions (see the sketch just below).
- What it means: The input and output of SQL/MR functions can be redistributed across the database cluster in different ways with no actual function code change.
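
As a minimal sketch of the data flow control point above (the clause placement here is illustrative only, not exact product syntax; see the Sessionize examples below for the real invocation style), redistributing a function’s input by user and ordering it by time can be expressed declaratively rather than in function code:

-- illustrative only: partition the input stream by user and order it by time
-- before it reaches the SQL/MR function; the function's code does not change
SELECT sessionId, userId, timestampValue
FROM Sessionize( 'timestampValue', 60 ) ON webclicks
     PARTITION BY userId
     SEQUENCE BY timestampValue;

The point is that the redistribution is specified in the query, so the same function body can be fed data partitioned and ordered in whatever way a particular analysis needs.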

In this blog post we’ll focus on Polymorphism - what it is and why it’s so critically important for building real SQL extensions using MapReduce.

Polymorphism allows Aster SQL/MR functions to be coded once (by a person who understands a programming or scripting language) and then used many times through standard SQL by analysts. In this context, comparing Aster SQL/MR functions with UDFs is like comparing SQL with the C language: the former is flexible, declarative and dynamic; the latter requires customization and recompilation for even the slightest change in usage.

For instance, take a SQL/MR function that performs sessionization. Let us assume that we have a webclicks(userId int, timestampValue timestamp, URL varchar, referrerURL varchar) table that contains a record of each user’s clicks on our website. The same function, with no additional declarations, can be used in all of the following ways:


SELECT sessionId, userId, timestampValue
FROM Sessionize( 'timestampValue', 60 ) ON webclicks;


SELECT sessionId, userId, timestampValue
FROM Sessionize( 'timestampValue', 60 ) ON
(SELECT userid, timestampValue FROM webclicks WHERE userid = 50 );

[Note how the number of input columns changed (going from all columns of webclicks to just two of them), yet the same function can be used. This is not possible with a plain UDF without writing additional declarations and UDF code.]


SELECT  sessionId, UID, TS
FROM Sessionize( 'ts', 60 ) ON
(SELECT userid as UID, timestampValue as TS FROM webclicks WHERE userid = 50 );

[Note how the column names changed, but the Sessionize() function still does the right thing.]

In other words, Aster SQL/MR functions are real SQL extensions - once they’ve been implemented, there is zero need to change their code or write additional declarations; there is perfect separation between implementation and usage. This is an extremely powerful concept, since in many cases the people who implement UDFs (engineers) and the people who use them (analysts) have different skills. Requiring a lot of back-and-forth can simply kill the usefulness of UDFs - but not so with SQL/MR functions.

How can we do that? There’s no magic, just technology. Our SQL/MR functions are dynamically polymorphic. To achieve this, a SQL/MR implementation (the sessionize.o file, in this example) includes not only the processing code but also logic, invoked at every query, that determines the function’s output schema from its input. This means there is no need for a static signature, as is the case with UDFs!

In-Database MapReduce Flow

Polymorphism also makes it trivial to nest different functions arbitrarily. Consider a simple example with two functions, Sessionize() and FindBots(). FindBots() can filter out any users that seem to act like bots, e.g. users whose interactions are suspiciously frequent (who could click on 10 links per second? probably not a human). To use these two functions in combination, one would simply write:


SELECT sessionId, userId, timestampValue, URL
FROM Sessionize( 'timestampValue', 60 ) ON FindBots( 'userid', 'timestampValue' ) ON webclicks;

Using UDFs instead of SQL/MR functions, this statement would require multiple subselects and special UDF declarations to accommodate the different inputs that come out of the different stages of the query.
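
For contrast, here is a rough sketch of what the same pipeline might look like with conventional table UDFs; the function names and the Oracle-style TABLE/CURSOR nesting are hypothetical, shown only to illustrate the subselect and declaration burden:

-- hypothetical statically-declared table UDFs: each one is tied to this exact
-- input schema, and every stage needs its own nested subselect
SELECT sessionId, userId, timestampValue, URL
FROM TABLE( sessionize_for_webclicks( CURSOR(
       SELECT userid, timestampValue, URL
       FROM TABLE( findbots_for_webclicks( CURSOR(
              SELECT userid, timestampValue, URL FROM webclicks ) ) ) ) ) );

Change the input schema, and both UDFs (and their static declarations) have to change with it.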

So what is it that we have created? SQL? Or MapReduce? It really doesn’t matter. We just combined the best of both worlds. And it’s unlike anything else the world has seen!



26
Aug
By Mayank Bawa in Analytics, Analytics tech, Blogroll, Database, MapReduce on August 26, 2008
   

Pardon the tongue-in-cheek analogy to Oldsmobile when describing user-defined functions (UDFs), but I want to draw out some distinctions between traditional UDFs and the new class of functions that In-Database MapReduce enables.

Not Your Granddaddy's Oldsmobile

While similar on the surface, in practice there are stark differences between Aster In-Database MapReduce and traditional UDFs.

MapReduce is a framework that parallelizes procedural programs to offload traditional cluster programming. UDFs are simple database functions, and while there are some syntactic similarities, that is where the similarity ends. Several major differences between In-Database MapReduce and traditional UDFs include:

Performance: UDFs have limited or no parallelization capabilities in traditional databases (even MPP ones). Even where UDFs are executed in parallel in an MPP database, they are limited to accessing local node data, have byzantine memory management requirements, and require multiple passes and costly materialization. In contrast, In-Database MapReduce automatically executes SQL/MR functions in parallel across potentially hundreds or even thousands of server nodes in a cluster, all in a single-pass (pipelined) fashion.

Flexibility: UDFs are not polymorphic. Some variation in input/output schema may be allowed by capabilities like function overloading or permissive data-type handling, but that greatly increases the burden on the programmer to write compliant code. In contrast, In-Database MapReduce SQL/MR functions are evaluated at run-time to offer dynamic type inference, an attribute of polymorphism that offers tremendous adaptive flexibility previously found only in mid-tier object-oriented programming.

Manageability: UDFs are generally not sandboxed in production deployments. Most UDFs are executed in-process by the core database engine, which means bad UDF code can crash the database. SQL/MR functions execute in their own processes for full fault isolation (bad SQL/MR code results in an aborted query, leaving other jobs uncompromised). A strong process management framework also ensures proper resource management for consistent performance and progress visibility.



25
Aug
By Mayank Bawa in Blogroll, Database, MapReduce on August 25, 2008
   

I am very pleased to announce today that Aster nCluster now brings together the expressive power of a MapReduce framework with the strengths of a Relational Database!

Jeff Dean and Sanjay Ghemawat at Google invented the MapReduce framework in 2004 for processing large volumes of unstructured data on clusters of commodity nodes. Jeff and Sanjay’s goal was to provide a trivially parallelizable framework so that even novice developers (a.k.a. interns) could write programs in a variety of languages (Java/C/C++/Perl/Python) to analyze data independent of scale. And they have certainly succeeded.

Once implemented, the same MapReduce framework has been used successfully within Google (and outside, via the Yahoo!-sponsored Apache Hadoop) to analyze structured data as well.

In mapping our product trajectory, we realized early on that the intersection of MapReduce and Relational Databases for structured data analysis has a powerful consonance. Let me explain.

Relational Databases present SQL as an interface to manipulate data using a declarative interface rooted in Relational Algebra. Users can express their intent via set manipulations and the database runs off to magically optimize and efficiently execute the SQL request.

Such an abstraction is sunny and bright in the academic world of databases. However, any real-world practitioner of databases knows the limits of SQL and of its Relational Database implementations: (a) a lack of expressive power in SQL (consider writing a sessionization query in SQL!), and (b) a cost-based optimizer that often has a mind of its own, refusing to perform the right operations.
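
To see what I mean about expressive power, here is a rough sketch of one way to attempt sessionization in plain SQL, assuming a webclicks(userId, timestampValue, …) clickstream table like the one in the SQL/MR examples above and a 30-minute session timeout; the window-function syntax follows the PostgreSQL dialect:

-- flag a new session whenever a user's gap between clicks exceeds 30 minutes,
-- then number each user's sessions with a running sum of those flags
SELECT userId, timestampValue,
       SUM(new_session) OVER (PARTITION BY userId
                              ORDER BY timestampValue
                              ROWS UNBOUNDED PRECEDING) AS sessionId
FROM (
  SELECT userId, timestampValue,
         CASE WHEN timestampValue - LAG(timestampValue)
                   OVER (PARTITION BY userId ORDER BY timestampValue)
                   > INTERVAL '30 minutes'
              THEN 1 ELSE 0 END AS new_session
  FROM webclicks
) flagged;

Even this “simple” version takes a nested query and two window functions, and it gets much worse once you want per-session aggregates or bot filtering in the same statement.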

Making an elephant dance!

A final limitation of SQL is completely non-technical: most developers struggle with the nuances of making a database dance to their directions. Indeed, a SQL maestro is required to phrase interesting queries for data transformations (during ETL or Extract-Load-Transform processing) or data mining (during analytics).

These problems become worse at scale, where even minor weaknesses result in longer run-times. Most developers (the collective us), on the other hand, are much more familiar with programming in Java/C/C++/Perl/Python than in SQL.

MapReduce presents a simple interface for manipulating data: a map and a reduce function written in the developer’s language of choice (Java/C/C++/Perl/Python). Its real power lies in the expressivity it brings: it makes the phrasing of really interesting transformations and analytics breathtakingly easy. The fact that MapReduce, in its use of map and reduce functions, is a “specific implementation of well known techniques developed nearly 25 years ago” is its beauty: every programmer understands it and knows how to leverage it.
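
For readers who think in SQL, a loose analogy may help: the classic MapReduce word-count example corresponds to a simple GROUP BY over an assumed words(word varchar) table - the per-row projection plays the role of the map, and the grouped aggregate plays the role of the reduce:

-- word count as SQL: projecting each row's word is the 'map',
-- the grouped COUNT(*) is the 'reduce'
SELECT word, COUNT(*) AS occurrences
FROM words
GROUP BY word;

The appeal of MapReduce is that when the map or reduce logic stops being expressible as a SQL expression, you can still write it as ordinary procedural code and the framework parallelizes it for you.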

As a computer scientist, I am thrilled at the simple elegant interface that we’ve enabled with SQL/MR. If our early beta trials with customers are any indication, databases have just taken a major step forward!

You can program a database too!

You can now write against the database in a language of your choice and invoke these functions from within SQL to answer critical business questions. Data analysts will feel liberated to have simple, powerful tools to compete effectively on analytics. More importantly, analysts retain simplicity, working within the environs of the simple SQL that we all love.

The Aster nCluster will orchestrate resources transparently to ensure that tasks make progress and do not interfere with other concurrent queries and loads in the database.

Aster: Do More!

We proudly present our SQL/MapReduce framework in Aster nCluster as the most powerful analytical database. Seamlessly integrating MapReduce with ANSI SQL provides a quantum leap that will empower analysts and ultimately unleash the power of data for the masses.

That is our prediction. And we are working to make it happen!



19
Aug
   

I am curious: is anyone out there attending the TDWI World Conference in San Diego this week? If so, and you would like to meet up with me, please do drop me a line or comment below, as I will be in attendance. I’m of course very excited to be making the trip to sunny San Diego and hope to catch a glimpse of Ron Burgundy and the Channel 4 news team! :-)

But of course it’s not all fun and games, as I’ll participate in one of TDWI’s famous Tool Talk evening sessions discussing data warehouse appliances. This should make for some great dialogue between me and the other database appliance players, especially given the recent attention our industry has seen. I think Aster has a really different approach to analyzing big data, and I look forward to discussing exactly why.

For those interested in the talk, here are the details. Come on by and let’s chat!
What: TDWI Tool Talk session on data warehouse appliances
When: Wednesday, August 20, 2008 @ 6:00 p.m.
Where: Manchester Grand Hyatt, San Diego, CA



05
Aug
By Mayank Bawa in Analytics, Blogroll, Business analytics, Business intelligence, Database on August 5, 2008
   

Today we are pleased to welcome Pentaho as a partner to Aster Data Systems. What this means is that our customers can now use Pentaho open-source BI products for reporting and analysis on top of Aster nCluster.

We have been working with Pentaho for some time on testing the integration between their BI products and our analytic database. We’ve been impressed with Pentaho’s technical team and the capabilities of the product they’ve built together with the open source community. Pentaho recently announced a new iPhone application which is darn cool!

I guess, by induction, Aster results can be seen on the iPhone too. :-)



25
Jul
   

Stuart announced yesterday that Microsoft has agreed to acquire DATAllegro. It is pretty clear Stuart and his team have worked hard for this day: it is heartening to see that hard work gets rewarded sooner or later. Congratulations, DATAllegro!

Microsoft is clearly acquiring DATAllegro for its technology. Indeed, Stuart says that DATAllegro will start porting away from Ingres to SQL Server once the acquisition completes. Microsoft’s plan is to provide a separate offering from its traditional SQL Server Clustering.

In effect, this event provides a second admission from a traditional database vendor that OLTP databases are not up to the task of large-scale analytics. The first admission came in the 1990s, when Sybase (ironically, the originator of the SQL Server code base) offered Sybase IQ as a separate product from its OLTP offering.

The market already knew this fact: the key point here is that Microsoft is waking up to the realization.

A corollary is that it must have been really difficult for the Microsoft SQL Server division to scale SQL Server for larger deployments. Clearly, Microsoft is an engineering shop, and the effort of integrating alien technology into its SQL Server code base must have been carefully evaluated in a build-vs-buy decision. The buy decision is a tacit admission that it is incredibly hard to scale a SQL Server offering with its roots in a traditional OLTP database.

We can expect Oracle, IBM, and HP to have similar problems scaling their 1980s code bases to the data scale and query workloads of today’s data warehousing systems. Will the market wait for Oracle, IBM, and HP’s efforts to scale to come to fruition? Or will Oracle, IBM, and HP soon acquire companies to improve their own scalability?

It is interesting to note that DATAllegro will be moving to an all-Microsoft platform. The acquisition could also be read as a defensive move by Microsoft. All of the large-scale data warehouse offerings today are based on Unix variants (Unix/Linux/Solaris), leading to an uncomfortable situation at some all-Microsoft shops that chose to run Unix-based data warehouse offerings because SQL Server would not scale. Microsoft needed an offering that could keep its enterprise customers on Microsoft platforms.

Finally, there is a difference in philosophy between Microsoft’s and DATAllegro’s product offerings. Microsoft SQL Server has sought to cater to the lower end of the BI spectrum; DATAllegro has actively courted the higher end. Correspondingly, DATAllegro uses powerful servers, fast storage, and an expensive interconnect to deliver a solution, while Microsoft SQL Server has sought to deliver a solution at a much lower cost. We can only wait and watch: will the algorithms of one philosophy work well in the infrastructure of the other?

At Aster Data Systems, we believe that the market dynamics will not change as a result of this acquisition: companies will want the best solutions to derive the most value from data. In the last decade, the Internet changed the world and old-market behemoths could not translate their might into the new market. In this decade, Data will produce a similar disruption.



17
Jun
By Tasso Argyros in Analytics, Analytics tech, Blogroll, Database, Scalability on June 17, 2008
   

 

I’m delighted to be able to bring a guest post to our blog this week. David Cheriton, one of Aster Data Systems’ angel investors, leads the Distributed Systems Group at Stanford University and is known for making some smart investments. Below is what David has to say about the need to address the network interconnect in MPP systems - we hope this spurs some interesting conversation!

“A cluster of commodity computer nodes clearly offers a very cost-effective means of tackling demanding large-scale applications such as data mining over large data sets. However, most applications require substantial communication. For example, consider a query that requires a join between three tables that share no common key to partition on (non-parallelizable query), a frequent case in analytics. In conventional architectures, such operations need to move huge amounts of data among different nodes and depend on the interconnect to deliver adequate performance.

The cost and performance impact of the interconnect required for the cluster to support this communication is often an unpleasant surprise, particularly without careful design of the cluster software. Yes, 10G Ethernet is coming down in cost, both in switches and NICs, and the IEEE is starting work on 100G Ethernet. However, the interconnect is, and will remain, an issue for several reasons.

First, in a parallelizable query, you need to get data from one node to several others. The bandwidth out of this one node is limited by its NIC bandwidth, Bn. In a uniformly configured cluster, each of the receiving nodes has the same NIC bandwidth Bn, so with K receivers, each receives at only Bn/K. However, the actual performance of the cluster can be limited further by data hotspots, where the demand for data from a given node far exceeds its NIC and/or memory bandwidth.

The inverse problem, often called the incast problem, arises when K nodes need to send data to a single node. Each can send at bandwidth Bn, for a total offered load of K*Bn, but the target node can only receive at Bn, or 1/K of the offered load. The result can be congestion, packet drops from overflowing packet queues, and TCP timeouts and backoff, resulting in dramatically lower goodput than even Bn. Here, I say “dramatically” because performance can collapse to 1/10 of expected or worse, due to the packet drops, timeouts and retries that occur at the TCP level. In systems with as few as 10 nodes connected via a Gigabit Ethernet interconnect, performance can deteriorate to under 10 MB per second per node! For higher numbers of nodes, the problem becomes even worse.

Phanishayee et al. have studied the incast problem. They show that TCP tuning does not help significantly. They observe that significantly larger switch buffering helps up to some scale, but that drives up the cost of the switches substantially. Besides some form of link-level flow control (which suffers from head-of-line blocking, is not generally available, and usually does not work between switches), the other solution is simply adding more NICs, or faster NICs, per node to increase the send and receive bandwidth.

Moreover, with k NICs per node, an N-node cluster now requires k*N ports, requiring a larger network to interconnect all the nodes. Large, fast networks are an engineering and operations challenge. The simplest switch is a single-chip shared-memory switch. This type of switch is limited by the memory and memory bandwidth available for buffering. For instance, a 24-port 10 Gbps switch requires roughly 30 Gbytes/sec of memory bandwidth, forcing the use of on-chip memory or off-chip SRAM, both rather limited in size, which aggravates TCP performance problems. This memory bandwidth demand tends to limit the size of shared-memory switches.

The next step up is a crossbar switch. In effect, each line card is a shared-memory switch, possibly splitting the send and receive sides, connected by a special interconnect, the crossbar. The cost per port increases because of the interconnect, the overall complexity of the system, and the lower volumes of large-scale switches. In particular, each line card needs to solve the same congestion problems as above when sending through the interconnect to other line cards.

Scaling larger means building a multi-switch network. The conventional hierarchical multi-switch network introduces bottlenecks within the network, such as from the top-of-rack switch to the inter-rack switch, leading to packet loss inside the network. Various groups have proposed building Clos networks out of commodity GbE switches, but these require specialized routing support, complex configuration, and a larger number of components, leading to more failures, more complex failure behavior, and extra cost.

Overall, you can regard the problem as being k nodes of a cluster needing to read from and write to the memory of the other nodes. The network is just an intermediary trying to handle this aggregate of read and write traffic across all the nodes in the cluster, thus requiring expensive high-speed buffering because these actions are asynchronous/streamed. Given this aggregate demand, faster processors and faster NICs just make the challenge greater.

In summary, MPP databases are more MPP than databases, in the sense that for complex distributed queries the network (the major bottleneck in MPP systems) is much more challenging than disk I/O (the major bottleneck in conventional database systems). Smart software that minimizes demands on the network and avoids hotspots and incast can significantly reduce the load on the interconnect and achieve far more cost-efficient scaling of the cluster, while avoiding dependence on complex (Clos) or non-sweet-spot networking technologies (i.e., non-Ethernet). It is a great investment in software and processor cycles when the network is intrinsically a critical resource. In some sense, smart software in the nodes is the ultimate end-to-end solution, achieving good application performance by minimizing its dependence on the intermediary, the interconnect.”

- Prof. David Cheriton, Computer Science Dept., Stanford University

 



19
May
By Tasso Argyros in Blogroll, Database, Manageability, Scalability on May 19, 2008
   

One of the most interesting, complex and perhaps overused terms in data analytics today is scalability. People constantly talk about “scaling problems” and “scalable solutions.” But what really makes a data analytics system “scalable”? Unfortunately, despite its importance, this question is rarely discussed, so I wanted to post my thoughts here.

Any good definition of scalability needs to be multi-dimensional. In other words, no single system property is enough to make a data analytics system scalable. But what are the dimensions that separate scalable from non-scalable systems? In my opinion, the three most important are (a) data volume, (b) analytical power, and (c) manageability. Let me offer a couple of thoughts on each.

(a) Data Volume. This is definitely an important scale dimension, because enterprises today generate huge amounts of data. For a shared-nothing MPP system, this means supporting enough nodes to store all of the available data. Evolution in disk and server technology has made it possible to store tens of terabytes of data per node, so this scale dimension alone can be achieved even with a relatively small number of nodes.
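
To put rough numbers on that: at, say, 10 TB of user data per node, even a 100 TB warehouse needs only on the order of ten nodes, so raw capacity by itself is rarely the hard part.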

(b) Analytical Power. This scale dimension is just as important as data volume, because storing large amounts of data alone has little benefit; one needs to be able to extract deep insights from it to provide real business value. For non-trivial queries in a shared-nothing environment, this presents two requirements. First, the system needs to accommodate a large number of nodes, so that it has adequate processing power to execute complex analytics. Second, the system needs to scale its performance linearly as more nodes are added. The latter is particularly hard for queries that involve processing distributed state, such as distributed joins: really intelligent algorithms have to be in place, or interconnect bottlenecks simply kill performance and the system is not truly scalable.

(c) Manageability. Scalability along the manageability dimension means that a system can scale up and keep operating at large scale without armies of administrators or downtime. For an MPP architecture, this translates to seamless incremental scalability, scalable replication and failover, and little if any need for human intervention during management operations. Despite popular belief, we believe manageability can be measured, and we need to take such metrics into account when characterizing a system as scalable or non-scalable.

At Aster, we focus on building systems that scale across all dimensions. We believe that even if one dimension is missing our products do not deserve to be called scalable. And since this is such an important issue, I’ll be looking forward to more discussion around it!