We are very excited to share that today we announced that our company, Aster Data, is being acquired by Teradata, which, as you all know, commands the #1 position in data warehousing. Together, we will tackle the massive opportunity in the big data and big data analytics market. Upon close, Aster Data will become part of the Teradata organization and our products will become part of the Teradata family of products, sold both stand-alone and integrated into their product line.
The combined goal is big, as stated on Teradata’s web site home page:
Today marks a major milestone in our continuing journey, and we are thrilled to join forces with the market leader in data management. Our company has achieved a lot since our inception just 5 years ago, and we look forward to accelerating our innovation and market reach even further – with the market strength of Teradata and the speed of our combined cultures. In 5 years, we’ve played a big role in shaping the Big Data Analytics Platform market and innovated on new technologies that enable customers to store diverse, granular data and process it in diverse ways. The big data opportunity as we see it is more about extracting insights from your diverse data than just finding cost-effective ways to store it. Processing and extracting deep insights from diverse and big data is where we’ve innovated and broken new ground, and with this merger we will accelerate it further.
Our journey started when we realized that (a) it was hard and expensive to manage big data, and (b) it was nearly impossible to process and analyze diverse (non-relational) data types like Web clicks, social connections, and text files at scale. The two worlds of data management and data processing were separate – RDBMSs would store and manage data in their world, while applications and tools did analytics outside of the database. This division severely restricted the types of analytics possible on large amounts of data. We discussed this in more detail in an earlier blog post from January 26.
The real impact of the above two restrictions was that organizations were drowning in a flood of data and couldn’t make sense of it. For instance, organizations couldn’t analyze enough data to understand their customers at an individual level, and thus they couldn’t improve their products and customer experience. Or, they couldn’t detect advanced fraud schemes because the offenders were hiding in terabytes of data (the outliers) and complicated money network schemes, resulting in huge losses.
Foreseeing this opportunity, we decided to change the enterprise data infrastructure and build a platform that (a) uses commodity hardware to scale at unprecedented levels while keeping costs low, (b) combines data management and data processing in one platform to allow much deeper analysis of data at much larger scale, and (c) accommodates the processing of diverse data types (e.g. machine generated data, social network data, text data, etc.) in a single platform.
Over the past 5 years we have been aggressively building our technology and developing this big new market. We’ve had continuous and increasing success – one recognition of this was Gartner’s recent Magic Quadrant. Looking forward, we saw 2011 as the year when the new market we were creating would become a mainstream reality across organizations. As a Gartner press release recently stated: “2011 will be the year when data warehousing reaches what could well be its most-significant inflection point since its inception… The biggest, and possibly most-elaborate data management system in the IT house is changing. The new data warehouse will introduce new scope for flexibility in adding new information types and change detection.”
And this execution now sets the stage for our joining forces with Teradata. We love this merger for 3 reasons:
First, we love that Teradata is by far the most successful data warehousing and data-driven applications company in the world. As founders, we are confident that Teradata will accelerate our vision and back us in realizing the full potential of the Big Data Analytics Platform.
Second, we have always had a big and ambitious technology vision. A bold vision needs time and resources to execute to its full potential. As part of Teradata, we will have the resources and support needed to accelerate our technology. We will also have access to a global sales organization and channel to accelerate the adoption of the Big Data Analytics Platform, and ultimately bring more benefits to our customers, more quickly.
Third, Aster Data nCluster is very complementary to Teradata’s existing product portfolio. By combining products from both companies, we can come to market with solutions that solve a very wide range of diverse data management and data analysis problems using “best of breed” components. We expect both Aster Data and Teradata customers to find our joint offerings uniquely valuable for their business, increasing their opportunities and decreasing their costs.
In closing, we want to reiterate that we have never been more excited about our market, our company and our opportunity! Our vision proved right early on, and we’ve watched other players in our market try to follow suit – that’s just one external validation of our direction, and there have been many more as our customers use our products to break new ground in analytic insights on diverse and big data. As we innovated and delivered on the vision for big and diverse data management, our team’s execution has truly defined and helped shape the market. And in this evolution, we are more confident and tremendously excited as we write the next chapter of this market.
Upon close of this transaction, the merger with Teradata is about taking our products, our innovations, our IP, and the Aster Data team, and accelerating our lead in the big data and big data analytics market. Or, simply put, it’s about ‘going big.’
We really want to thank our customers, who believed in us, provided key input into our product roadmap, and saw the big data opportunity. We promise that our commitment and support will only increase in the future. We also thank our team, who joined a small company and worked hard to make it so successful. And finally, we thank our investors, who understood the opportunity and believed they were going to be part of something new, valuable and exciting.
I’m delighted to announce that we’ve appointed a new CEO, Quentin Gallivan, to lead our company through the next level of growth.
We’ve had tremendous growth at our company in the past 4 years – growing Aster Data from 3 people to a strong, well-rounded team with a stellar management team; shipping products with market-defining features; working with customers on fascinating projects across many industries, including retail, Internet, media and publishing, and financial services; and establishing key partnerships that we’re really excited about. Tasso and I will be working closely with Quentin as he accelerates our trajectory, taking our company to the next level of market leadership, sales and partnership execution, and international expansion.
Quentin brings more than 20 years of senior executive experience to Aster Data. He has held a variety of CEO and senior executive positions with leading technology companies. Quentin joins us from PivotLink, the leading provider of BI solutions, where as CEO, he rapidly grew the company to over 15,000 business users from mid-sized companies to F1000 companies, across key industries including retail, financial services, CPG, manufacturing and high technology. Prior to PivotLink, Quentin served as CEO of Postini where he scaled the company to 35,000 customers and over 10 million users until its eventual acquisition by Google in 2007. Quentin also served as executive vice president of worldwide sales and services at VeriSign where he grew sales from $20M to $1.2B and was responsible for the global distribution strategy for the company’s security and services business. Quentin has also held a number of key executive and leadership positions at Netscape Communications and GE Information Services.
I’ll transition to a role that I’m really passionate about. I’ll be working closely with our customers and, as our Chief Customer Officer, I’ll lead our organization devoted to ensuring customer success and innovation in our fast-growing customer base. When the company was smaller, I was very actively involved in our customer deployments. As the company scaled, I had to pull back to focus on operations. In my new role, I’ll be back doing tasks that I relish – solving problems at the intersection of technology and usage – and providing a feedback loop from customers to Tasso, our CTO, to chart our product development.
Together, Quentin, Tasso and I are excited to accelerate our momentum and success in the market.
Our architecture enables SAS software procs to run natively inside the database, preserving the statistical integrity of SAS software computations while delivering unprecedented performance increases during analysis of large data sets. SAS Institute partners in this initiative with other databases too – but the difference is that each of those databases requires the re-implementation of SAS software procs as proprietary UDFs or stored procedures.
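For readers unfamiliar with in-database functions, here is a generic sketch of what an aggregate UDF looks like – using Python’s sqlite3 purely for illustration; this is not SAS’s or Aster’s actual mechanism, and the table and function names are hypothetical:

```python
import sqlite3
import statistics

# Generic illustration of an in-database aggregate UDF: the computation
# runs inside the database engine, next to the data, instead of being
# pulled out to an external analysis server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (v REAL)")
conn.executemany("INSERT INTO scores VALUES (?)",
                 [(x,) for x in (1.0, 2.0, 3.0, 4.0)])

class Stddev:
    """Aggregate UDF: sample standard deviation, computed in-database."""
    def __init__(self):
        self.values = []
    def step(self, v):
        self.values.append(v)
    def finalize(self):
        return statistics.stdev(self.values)

conn.create_aggregate("stddev", 1, Stddev)
(sd,) = conn.execute("SELECT stddev(v) FROM scores").fetchone()
print(round(sd, 4))  # → 1.291
```

The point of the contrast in the paragraph above is that each database vendor’s UDF framework is proprietary, so an analytics vendor must re-implement its procedures once per engine.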
We also provide dynamic workload management capabilities that enable graceful resource sharing among SAS software computations, SQL queries, loads, backups and scale-outs – all of which may be running concurrently. Workload management lets administrators dial resources to data mining operations up or down based on the criticality of the mining and the other tasks being performed.
Our fast loading and trickle feed capabilities ensure that SAS software procs have access to fresh data for modeling and scoring, ensuring a timely and accurate analysis. This avoids the need to export snapshots (or samples) of data to an external SAS server for analysis, saving analysts valuable time in their iterations and discovery cycles.
We’ve been working with SAS Institute for a while now, and it is very evident why SAS has been the market leader in analytic applications for three decades. The technology team is very sharp, driven to innovate and execute. And as a result we’ve achieved a lot working together in a short time.
We look forward to working with SAS Institute to dramatically advance analytics for big data!
I had commented earlier that a new set of applications is being written that leverages data to act smarter, enabling companies to deliver more powerful analytics. Operating a business today without serious insight into business data is not an option. Data volumes are growing like wildfire, applications are getting more data-heavy and more analytics-intensive, and companies are putting more demands on their data.
The traditional 20-year-old data pipeline of Operational Data Stores (to pool data), Data Warehouses (to store data), Data Marts (to farm out data), Application Servers (to process data) and UIs (to present data) is under severe strain – because we expect a lot of data to move from one tier to the next. Application Servers pull data from Databases for computations and push the results of the computations to the UI servers. But data is like a boulder – the larger the data, the greater the inertia, and therefore the more time and effort needed to move it from one system to another.
The resulting performance problems of moving ‘big data’ are so severe that application writers unconsciously compromise the quality of their analysis by avoiding “big data computations” – they first reduce the “big data” to “small data” (via SQL-based aggregations/windowing/sampling) and then perform computations on “small data” or data samples.
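A toy illustration of that compromise, with entirely hypothetical numbers: pre-aggregating or sampling before shipping data out of the database can miss exactly the rare outliers (think of the fraud example above) that a full scan over all the data would catch.

```python
import random

random.seed(0)
# Simulated transaction amounts: mostly small, plus a few huge outliers –
# the "advanced fraud" hiding in a mass of ordinary records.
amounts = [random.gauss(50, 10) for _ in range(100_000)]
outliers = [1_000_000.0] * 5          # five rare, very large transactions
data = amounts + outliers
random.shuffle(data)

# Full scan over all the data: the outliers are always found.
found_full = sum(1 for x in data if x > 10_000)

# A 1% sample, as an application might take before exporting "small data"
# for analysis: with only 5 outliers in ~100,000 rows, the sample will
# almost certainly contain none of them.
sample = random.sample(data, len(data) // 100)
found_sample = sum(1 for x in sample if x > 10_000)

print(found_full, found_sample)
```

The sketch shows why “reduce first, analyze later” is a quality compromise, not just a performance one: the reduction step throws away the very records the analysis was meant to find.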
The problem of ‘big data’ analysis will only grow more severe over the next 10 years as data volumes grow and applications demand more data granularity to model behavior and identify patterns, so as to better understand and serve their customers. To do this, you have to analyze all your available data. For the last 5 years, companies have routinely upgraded their data infrastructure every 12-18 months as data sizes double and the traditional data pipeline buckles under the weight of larger data movement – and they will be forced to keep doing so for the next 10 years if nothing fundamental changes.
Clearly, we need a new, sustainable solution to address this state of affairs.
The ‘aha!’ for big data management is realizing that the traditional data pipeline suffers from an architecture problem – moving data to applications – and that this must change: applications should move to the data.
I am very pleased to announce a new version of Aster Data nCluster that addresses this challenge head-on.
Moving applications to the data requires a fundamental change to the traditional database architecture: applications are co-located inside the database engine so that they can iteratively read, write and update all data. The new infrastructure acts as a ‘Data-Application Server,’ managing both data and applications as first-class citizens. Like a traditional database, it provides a very strong data management layer. Like a traditional application server, it provides a very strong application processing framework. It co-locates applications with data, eliminating data movement from the database to the application server. At the same time, it keeps the two layers separate to ensure the right fault-tolerance and resource-management models – bad data will not crash the application, and vice versa: a bad application will not crash the database.
Our architecture and implementation ensure that applications do not have to be re-written to make this transition. The application is pushed down into the Aster 4.0 system and transparently parallelized across the servers that store the relevant data. As a result, Aster Data nCluster 4.0 also delivers a 10x-100x boost in performance and scalability.
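The shape of that push-down can be sketched in a few lines – a minimal, hypothetical illustration of per-partition processing (not Aster’s actual implementation): each partition is aggregated where it lives, and only small partial results are merged, never the raw rows.

```python
from functools import reduce

# Hypothetical partitions of a clickstream table, as they might live on
# three separate worker nodes. Each row is (page, clicks).
partitions = [
    [("/home", 3), ("/buy", 1)],
    [("/home", 2), ("/buy", 4)],
    [("/buy", 2)],
]

def local_count(rows):
    """Runs where the data lives: a per-partition partial aggregation."""
    counts = {}
    for page, clicks in rows:
        counts[page] = counts.get(page, 0) + clicks
    return counts

def merge(a, b):
    """Only small partial results cross the network, never the raw rows."""
    for page, clicks in b.items():
        a[page] = a.get(page, 0) + clicks
    return a

total = reduce(merge, (local_count(p) for p in partitions), {})
print(total)  # → {'/home': 5, '/buy': 7}
```

The application writer supplies `local_count` once; the system applies it in parallel on every partition, which is why no rewrite is needed as data grows across more servers.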
Those using Aster Data’s solution – including comScore, Full Tilt Poker, Telefonica I+D and Enquisite – are a testament to the benefits of this fundamental change. In each case, it was embedding the application with the data that enabled them to scale seamlessly and perform ultra-fast analysis.
The new release brings to fruition a major product roadmap milestone that we’ve been working on for the last 4 years. There is a lot more innovation coming – and this milestone is significant enough that we issue a clarion call to everyone working on “big data applications”: we need to move applications to the data, because the other way around is unsustainable in this new era.
There has been a lot of turmoil this past week in Financial Services. Several good people had their projects stalled, or even lost their jobs, due to market forces beyond their control.
I’d like to call out to Quantitative Computer Scientists who have been affected. If you are good with data and know how to extract intelligence from it, we want you on our team!
We are hiring. You’ll have the chance to work with a number of our customers and help them do more with their data. You’ll bring a fresh perspective to the business processes at our customers; in turn, you’ll learn about the business processes of various verticals – an invaluable education if you want to go back to Financial Services after the crisis has passed in a couple of years.
Drop us a note at careers [at] asterdata [dot] com. We’d love to hear from you!
The trend is inevitable: purchasing keeps getting easier and more frictionless. You could buy something at the store or from your home. But now you can buy stuff while you jog in the park, while you bike (it’s not illegal yet), or even while you’re reading a distressing email on your iPhone (shopping therapy at its best).
As purchasing gets easier and more pervasive, we’ll tend to buy things in smaller quantities and more often – which means more consumer behavior data will be available for advertisers and retailers to analyze, to better target promotions to the right people at the right time.
In this new age, where the interaction of buyers with shops and brands is much more frequent and intimate, enterprises that use their data to understand their customers will have a huge advantage over their competition. That’s one of the reasons why at Aster we’re so excited to be building the tools for tomorrow’s winners.
We decided early on in building the company that we’d make our platform technologically open and take an inclusive approach to business.
I am glad to say that this year we have started delivering on our business philosophy.
We have good relationships with several smart consulting teams, and are actively working with them to bring innovative solutions to market for our joint customers. We recently recommended a partner to a company where we were not a good fit, because we felt that our partner could bring a lot of value to the prospect and that such introductions strengthen our extended network. We were genuinely surprised by the warmth this generated toward us at both the company and the partner!
In the last few years, we’ve actively built our product to work on a variety of hardware platforms: we have customers running IBM, HP, Dell, and even white-box offerings! Earlier this week, we announced our partnership with Informatica. You will see a series of announcements appearing in the next few months.
We are actively looking for a person who can lead our efforts in establishing meaningful partnerships in the data warehousing space. If you know one, or are one, who shares an inclusive philosophy, drop us a note!
I am glad to share the news that one of our first customers, MySpace, has scaled their Aster nCluster enterprise data warehouse to more than 100 Terabytes of actual data.
It is not easy to cross the 100TB barrier, especially when loads happen continuously and queries are relentless, as they are at MySpace.com.
Hala, Richard, Dan, Jim, Allen, and Aber, you have been awesome partners for us! It has been a great experience for Aster to work with you and we can see the reasons behind MySpace’s continued success. Your team is amazingly strong and capable and there is a clear sense of purpose. Tasso and I often remark that we need to replicate that culture in our company as we grow. At the end of the day, it is the culture and the strength of a team that makes a company successful.
And to everyone at Aster, you have been great from Day 1. It is impressive how a fresh perspective and a clean architecture can solve a tough technical challenge!
Thank you. And I wish everyone as much fun in the coming days!
Have you ever discovered a wonderful little restaurant off the beaten path? You know the kind of place. It’s not part of some corporate conglomerate. They don’t advertise. The food is fresh and the service is perfect–it feels like your own private oasis. Keeping it to yourself would just be wrong (even if you selfishly don’t want the place to get too crowded).
My name is Mayank, and I co-founded Aster Data Systems with George and Tasso in 2005.
Shortly after incorporation, the three of us were eating lunch at a Chinese restaurant and out popped a fortune slip from a cookie reading:
Indeed. The Internet is changing the speed at which we communicate, processes are being automated to react and execute in the blink of an eye, and data is playing a key role in guiding execution. Analysis of data is moving front-and-center, breaking out of the passive world of warehousing and reporting, as applications create intelligent processes, and companies live-and-die by their ability to monetize their data.
A new set of applications is being written – or waiting in the wings to be written – that will leverage data to act smarter. Consider the rapid evolution of online advertising networks: in the past 5 years, we have seen a spate of successful companies carving out a niche for themselves in the market. Their differentiation? The unique ability to match advertising inventory with consumer segments. Their basis of differentiation? Data!
And yet, a majority of these advertising networks do not use databases for their optimizations! Google and Yahoo! have famously built their own platforms; so did the amazingly talented teams at Right Media, Kosmix and Revenue Science. Of course, these companies use databases: but only for reporting and billing purposes.
How did we get to this crossroads where data is analyzed outside the database?
For too long, databases have been clunky, monolithic systems that are rigid and inflexible, locking up the data in architectures that are
1. Hard to query
2. Hard to scale
3. Hard to manage
Meanwhile, the landscape of applications around the database is shifting, slowly but surely.
We will use this blog to outline our thoughts on this changing landscape, along with our experiences in building an analytics database and a company that participates in this change.