By Mayank Bawa in Blogroll on January 13, 2009

We announced today that we have raised $12M in our Series B funding.

We’re thrilled that our vision and message have resonated so well at a time when investors have become extremely selective. Frankly, that is because our message is resonating so well with our customers: within 4 months of publicly launching the company in May 2008, our customer base had grown to double digits, with several of our customers deploying or considering a second production system.

The inside story is that we had scheduled our first meetings to raise Series B for the week of October 5. It turned out to be the week of a perfect financial storm. The week began with Iceland’s currency dropping 30% against the Euro. On October 6, several European governments stepped up to guarantee bank deposits. On October 7, as Iceland and Russia exchanged Yes and No statements on a Euro 4B loan, Sequoia Capital gathered its portfolio CEOs in an auditorium and warned them (and by the power and reach of the Web, every venture capitalist, entrepreneur, executive and journalist) that the time had come to “Get Real or Go Home”. By October 10, the cost of short-term credit had spiraled up and as a result DJI dove to 8,451 from 9,955 at the start of the week, its lowest point in the previous 5 years.

Tasso and I still went to our scheduled meetings to present Aster’s opportunity and our customer adoption, and got invited back. In the next 5 weeks, as the financial markets continued their drop off a cliff, we continued to receive strong investor interest, which matured into multiple term-sheets for us to consider.

We were very excited to accept JAFCO Ventures as our new investor to lead Series B. We signed our term-sheet with JAFCO on November 10, a fast turn-around by any standard and an incredible one given the roller-coaster the world was riding. JAFCO was a previous investor in DATAllegro (acquired by Microsoft), so we are beaming that it chose to back us against the big giants in the DBMS market. Returning backer Sequoia Capital also has significant experience in the space, having been an early investor in the now-public Netezza. It is a great vote of confidence that the two VCs who understand the data warehousing space best have chosen to invest in Aster as we lead the revolution in next-generation data warehousing.

In addition, we announced today that Ben Horowitz, former CEO of Opsware, has joined our company advisory board. Ben co-founded Opsware and built it into one of the great enterprise technology companies of this era. He gives us a wealth of experience to tap into as we grow an enterprise software company from scratch to $1B+.

We will use this funding and support to fuel our growth. There is a big opportunity in working with customers and partners to help change how people and applications use data. And we intend to rise to the opportunity!



By Mayank Bawa in Blogroll on January 9, 2009

When I can afford a multi-week break, I head home to India. It is really impressive how home refreshes and recharges a person as nothing else does.

India has a burgeoning media and communications industry. The movie industry is well-known and huge, but in truth the story of media in India over the last decade has been that of television coming of age. The Internet as a media channel is still in its infancy here. Even in communications, the Internet’s growth is substantially dwarfed by that of mobile networks.

The power of media in shaping conversations and spreading awareness really hit home in two separate anecdotes.

I had flown into Mumbai when an airport-based pre-paid cab driver offered me very generous rates if I were to retain him on a consistent basis for the week of my stay. I am always suspicious of cab drivers anywhere giving a good deal, so I asked him the reason for his generosity. “Sahib, business weak hai recession ke waaste (Sir, business is weak due to recession)” came the answer.

I went to a get-together, and uncles and aunts a generation older than me asked if the US recession is as severe as they hear, and gave sensible advice to my cousins on safeguarding their jobs. That generation in India lived their formative and middle years in a government-regulated economy. I was in high school in India during the 1990-91 recession, when our small town was hit hard because the steel manufacturing plant that provided most local livelihoods suffered huge losses. I remember that the same uncles and aunts then fairly-and-squarely blamed the lack of efficiency in government enterprises, the lack of product quality, and a consistent lack of investment over the preceding decade (i.e., everything except a recession!).

The media has successfully educated a broad section of the population on reasons for their livelihood misery. And people were responding appropriately to protect their interests.

But then something happened today that motivated this post.

Two days ago, the oil workers in Indian state-run oil companies went on strike. Today, the cable news networks were full of doomsday talk of oil-dry gas stations, replaying clips of long consumer queues. I marveled at the power of oil in bringing Delhi to its knees within 48 hours of the supply chain being halted, and decided to go check it out myself. Imagine my surprise when I found normal traffic at the pump. Intrigued, I drove around (yes, I’m on vacation with lots of free time :-) ), passing pump after pump without finding the promised long queues.

The same media that spread awareness was actively spreading needless panic!

I am sure that if more people in Delhi had as much free time as I did, they would have gone to the gas stations to hoard in anticipation, and then queues would have formed. Luckily, by the time dusk fell and normal office hours ended (giving people free time), the strike had been called off.

We escaped having news generate reality, as opposed to the other way around.

Happy new year!



Aster Mentioned in Gartner Magic Quadrant
By Steve Wooledge in Blogroll, Frontline data warehouse on January 5, 2009

If you had told me that Aster Data Systems would be referenced by Gartner in the Magic Quadrant for Data Warehouse Database Management Systems report just 6 months after coming out of stealth mode, I would have said you were pretty ambitious. The criteria for placement are demanding, such as having a generally available product for more than one year and over 10 customers in production.

Well, we were included. Even if it was only a mention. And I’m proud to say that, as of the publishing date, I believe the only criterion that held us back from being placed on it was having a product available for less than one year.

Aster’s high-performance analytic database, Aster nCluster, has been generally available since May - and we’re already on the third major release tested in customer environments. We have been busy building out its functionality and proving Aster nCluster in some pretty rigorous frontline data warehouses such as Aggregate Knowledge and MySpace, where we face strict requirements for scale (100+ terabytes) and uptime (no unplanned downtime for more than one year).

2008 was a break-out year for Aster. 2009 will be even more exciting as we continue to raise the bar for what customers expect of a relational database management system.

I suppose the next thing you’ll tell me is that we should be placed in the Leaders or Visionaries Quadrants on the chart?  That’s ambitious. But it’s just the sort of thing this team and product are capable of.

——————————

The Magic Quadrant is copyrighted 2008 by Gartner, Inc. and is reused with permission. The Magic Quadrant is a graphical representation of a marketplace at and for a specific time period. It depicts Gartner’s analysis of how certain vendors measure against criteria for that marketplace, as defined by Gartner. Gartner does not endorse any vendor, product or service depicted in the Magic Quadrant, and does not advise technology users to select only those vendors placed in the “Leaders” quadrant. The Magic Quadrant is intended solely as a research tool, and is not meant to be a specific guide to action. Gartner disclaims all warranties, express or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.



Happy Holidays!
By Mayank Bawa in Blogroll on December 19, 2008

We received a very nice Holiday Card today from one of our investors, First Round Capital - one that we helped create :-)

Rob Hayes visited our office in early December with a hand-held video camera and asked us to dance! I was quite nervous about us filming the video: I had, of course, seen Matt’s “Where the hell is Matt” video. The untold back-story is that you’re supposed to just start dancing to an imagined tune: there is no one keeping a beat or playing a song!

The end result is equal parts goofy and fun! Happy holidays!!



Lower Your Data Warehousing Costs
By Steve Wooledge in Blogroll on December 16, 2008

With all the hype in the analytic database and database appliance market around “100x performance!” and “dramatic cost savings!”, we’ve stayed away from making bold, unsubstantiated claims of our own. Instead, we’ve chosen to talk about companies’ ability to “do more” with their data through advanced analytics powered by Aster’s In-Database MapReduce and related features.

However, Aster does provide some unique cost and time advantages. We’ve produced a technical brief and webcast that outline the four main areas where Aster can help you lower your data warehousing costs:

[1] Hardware (Aster uses off-the-shelf commodity hardware - no proprietary appliances)

[2] Management (Aster makes it really simple to manage a large distributed database system)

[3] Scaling (you can scale incrementally and easily)

[4] Downtime (Aster doesn’t need any for administration and can recover on its own from any failures)

Check it out and let us know what you think.



By John Guevara in Analytics, Blogroll on November 19, 2008

I recently attended a panel discussion in New York on media fragmentation, featuring media agency execs including:

- Bant Breen (Interpublic - Initiative – President, Worldwide Digital Communications)
- John Donahue (Omnicom Media Group - Director of BI Analytics and Integration)
- Ed Montes (Havas Digital - Executive Vice President)
- Tim Hanlon (Publicis - Executive Vice President/Ventures for Denuo)

The discussion was kicked off by Brian Pitz, Principal of Equity Research for Bank of America. Brian set the stage for a spirited discussion about the continuing fragmentation of online media, along with research on the issues this poses. The panel touched upon many issues, including fear of ad placement alongside unknown user-generated content, agencies’ lack of the skill sets needed to address this medium, and a lack of standards. What surprised me most, however, was the unanimous opinion that there is more value further out on “The Tail” of the online publisher spectrum due to the targeted nature of the content. Yet the online media buying statistics conflict with this opinion (over 77% of online ad spending still flows to the top 10 sites).

When asked “why the contrast?” between their sentiment and the stats, the discussion revealed a level of uncertainty rooted in a lack of transparency into “The Tail”. Despite the 300+ ad networks that have emerged to address this very challenge, the value chain lacks the data to confidently invest the dollars. In addition, there was a rather cathartic moment when John Donahue professed that agencies should “Take Back Your Data From Those that Hold It Hostage”.

It is our belief that the opinions expressed by the panel are evidence of a shift towards a new era in media, one where evidential data will drive valuation across media rather than sampling-based ratings acting as the currency. No one will be immune from this:

- Agencies need it to confidently invest their clients’ dollars and show demonstrable ROI for their services
- Ad networks need it to earn their constituencies’ share of marketing budgets
- Ad networks need it to defend the targeted value and the appropriateness of their collective content
- 3rd-party measurement firms (comScore, Nielsen Online, ValueClick) need it to maintain the worth of their objective measurements
- Advertisers need it to support the logic of their budget allocation decisions
- BIG MEDIA needs it to defend its 77% stake

You might be thinking, “The need for data is no great epiphany”. However, I submit that the amount of data, and the mere fact that all participants should have their own copy, is a shift in thinking. Gone are the days when:

- The value chain is driven solely by 3rd parties and their audience samples
- Ad servers/ad networks are the only keepers of the data
- Service providers can offer data for a fee



By Tasso Argyros in Blogroll, Frontline data warehouse on November 18, 2008

I recently wrote an article for Enterprise Systems Journal on how retailers can benefit from innovations in “always on” databases. The story is based on the real-life experience of one of our customers during the last holiday shopping season: they saw a spike in traffic and quickly scaled their frontline data warehouse built with Aster Data Systems, which allowed them to maintain their service-level agreements with the business for product cross-promotion without waiting to plan an upgrade in the off-season.

We firmly believe in thinking about more than query performance and initial system cost; people often overlook the cost of maintaining and scaling systems in a time of need.

After next week, let us know - how did your favorite e-tailers fare on ‘Black Friday’ and ‘Cyber Monday’? Did their recommendation engines deliver, or did they leave cash in your wallet that got spent elsewhere?



By Tasso Argyros in Blogroll on November 14, 2008

Last week I taught a course at the TDWI World Conference in New Orleans, LA called “Introduction to Map/Reduce Data Transformations”. If you weren’t able to make the session, my slides are embedded below.

I’m pleased to have been given the opportunity to introduce this new approach to in-database analytics and parallel data processing in general. The most consistent feedback I heard was that there wasn’t enough time to cover the topic in depth, and attendees were eager to learn more! In that case, my previous post on educational resources for MapReduce may be of interest.
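For readers who missed the session, here is a minimal, single-process sketch of the pattern the course covers: a “map” step that emits (key, value) pairs from raw records, and a “reduce” step that aggregates all values sharing a key. The sample records and field names are made up for illustration; a parallel engine like In-Database MapReduce runs the same pattern across many worker nodes inside the database.

```python
from collections import defaultdict

# Hypothetical clickstream records; any iterable of records works.
records = [
    {"user": "alice", "page": "/home"},
    {"user": "bob",   "page": "/pricing"},
    {"user": "alice", "page": "/docs"},
]

def map_fn(record):
    # Map: emit one (key, value) pair per record - here, page views per user.
    yield (record["user"], 1)

def reduce_fn(key, values):
    # Reduce: aggregate all values emitted under the same key.
    return (key, sum(values))

# Shuffle: group mapped pairs by key. A distributed engine does this across
# nodes; here it is a plain in-memory dictionary.
groups = defaultdict(list)
for record in records:
    for key, value in map_fn(record):
        groups[key].append(value)

results = [reduce_fn(key, values) for key, values in groups.items()]
print(results)  # [('alice', 2), ('bob', 1)]
```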

Since Aster Data Systems introduced In-Database MapReduce for the Aster nCluster relational database, there has been tremendous interest in the data warehousing and technology community, with recent coverage in the NY Times and by influential blogs like DBMS2, Beyond Search, and Cloud N, just to name a few.

Hopefully TDWI will turn this into a half-day course in the future. (If you agree, feel free to contact them at info@tdwi.org.)

If anyone knows of other good resources on this emerging topic, please feel free to put links in the comments here.

[Slides: Introduction to MapReduce Data Transformations]


Accelerating the “Aha!” Moment
By Mayank Bawa in Analytics, Blogroll, Frontline data warehouse on November 6, 2008

I was at Defrag 2008 yesterday and it was a wonderful, refreshing experience. A diverse group of Web 2.0 veterans and newcomers came together to accelerate the “Aha!” moment in today’s online world. The conference was very well organized and there were interesting conversations on and off the stage.

The key observation was that individuals, groups and organizations are struggling to discover, assemble, organize, act on, and gather feedback from data. Data itself is growing and fragmenting at an exponential pace. We as individuals feel overwhelmed by the slew of data (messages, emails, news, posts) in the microcosm, and we as organizations feel overwhelmed in the macrocosm.

The very real danger is that an individual’s or organization’s feeling of being constantly overwhelmed could result in fewer “Aha!” moments - our resources will be so focused on merely keeping pace with new information that we won’t have the time or energy to connect the dots.

The goal, then, is to find tools and best practices that enable “Aha!” moments - to connect the dots even as information piles up at our fingertips.

My thought going into the conference was that we need to understand what causes these “Aha!” moments. If we understand the cause, we can accelerate the “Aha!” even at scale.

Earlier this year, Janet Rae-Dupree published an insightful piece in the International Herald Tribune on Reassessing the Aha! Moment. Her thesis is that creativity and innovation - “Aha! Moments” - do not come in flashes of pure brilliance. Rather, innovation is a slow process of accretion, building small insight upon interesting fact upon tried-and-true process.

Building on this thesis, I focused my talk on frontline data warehousing as an infrastructure piece that allows organizations to collect, store, analyze, and act on market events. The incremental fresh data loads in a frontline data warehouse add up over time to build a stable historical context. At the same time, applications can contrast fresh data with that historical context, accumulating small contrasts gradually until they become meaningful enough to act upon.
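As a toy illustration of that fresh-versus-history contrast (all the numbers and the 3-sigma threshold below are invented for the example), an application might keep a rolling baseline of a daily metric and surface an “Aha!” only when a fresh load deviates meaningfully from it:

```python
import statistics

# Historical context accumulated from prior incremental loads
# (e.g., a daily sales metric); values are hypothetical.
history = [102, 98, 101, 97, 103, 99, 100]
fresh = 131  # today's freshly loaded value

baseline = statistics.mean(history)
spread = statistics.stdev(history)

# Contrast fresh data against history; flag only meaningful deviations.
if abs(fresh - baseline) > 3 * spread:
    print(f"'Aha!': {fresh} deviates from baseline {baseline:.1f}")

history.append(fresh)  # today's data becomes part of tomorrow's history
```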

I’d love to hear back from you on how massive data can accelerate, rather than impede, the “Aha!” moment.

[Slides: Aster Defrag 2008]


Forget About Total Cost of Ownership (TCO) - Think TCS
By Tasso Argyros in Blogroll on November 6, 2008

Forget about total cost of ownership (TCO). In the Internet age, scaling up can be the biggest cost factor in your infrastructure. And overlooking the scaling challenge can be disastrous.

Take Bob, for instance. Bob is the fictional database manager at FastGrowth, an Internet company with a fast-growing user base. Bob is 36 and has done more than a dozen data warehouse deployments in his career. He’s confident in his position. Granted, this is his first Internet gig… but that shouldn’t matter much, right? Bob’s been there, done that, got the T-shirt.

Bob’s newest gig is to implement a new data warehouse system to accommodate FastGrowth’s explosive data growth. He estimates that there will be 10TB of data in the next 6 months, 20TB in the next 12, and 40TB 18 months from now.

Bob needs to be very careful about cost (TCO); going way over budget could cost him his reputation or even (gasp) his job. He thus asks vendors and friends how much hardware and software he needs to buy at each capacity level. He also makes conservative estimates about the number of people required to manage the system and its data at 10, 20, and 40 terabytes.

Fast-forward 18 months. Bob’s DW is in complete chaos; it can hardly manage half of the 40TB target, and it has required twice the number of people and dollars… so far. Luckily for Bob, his boss, Suzy, has been doing Internet infrastructure projects her whole career and knew exactly what mistake Bob made (and why he deserves a second chance).

What went wrong? Bob did almost everything perfectly. His TCO estimates at each scale level were, in fact, correct. But what he did not account for was the effort of going from one scale level to the other in such a short time! Doubling the size of data every 6 months is 3x faster than Moore’s law. That’s like trying to build a new car that is 3x faster than a Ferrari. As a result, growing from 10TB to 20TB in six months may cost many times more (in terms of people, time, and dollars) than running a 20TB system for 6 months.

In some ways, this is not news. The Internet space is full of stories where scaling was either too expensive or too disruptive to be carried out properly. Twitter, with its massive success, has had to put huge effort into scaling up its systems. And Friendster lost the opportunity to be a top social network partly because it took too long to scale up its infrastructure. Moreover, as new data sources become available, companies outside the Internet space are facing similar challenges – scaling needs that are too hard to manage!

So how can we reason about this new dimension of infrastructure cost? What happens when data is growing constantly, and scaling up ends up being the most expensive part of most projects?

The answer, we believe, is that the well-known concept of TCO is not good enough to capture scaling costs in this new era of fast data growth. Instead, we need to also start thinking about the Total Cost of Scaling – or TCS.

Why is TCS useful? TCS captures all the costs – in terms of hardware, software, and people – required to increase the capacity of the infrastructure. Depending on the application, capacity can mean the amount of data (e.g., for data warehousing projects) or queries per second (for OLTP systems). TCO together with TCS gives a true estimate of project costs for environments that have been blessed with a growing business.

Let’s see how TCS works in an example. Say that you need 100 servers to run your Web business at a particular point in time, and you have calculated the TCO for that. You can also calculate the TCO of running 250 servers 12 months down the road, when your business has grown. But going from 100 servers to 250 – that’s where TCS comes in. The careful planner (e.g., Bob in his next project) will need to add all three numbers together – TCO at 100 servers, TCO at 250 servers, and TCS for scaling from 100 to 250 – to get an accurate picture of the full cost.
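That arithmetic fits in a few lines of Python; every dollar figure below is an invented placeholder for illustration, not a real quote:

```python
# Back-of-the-envelope TCO + TCS accounting for the example above.

def tco(servers, months, cost_per_server_month=500):
    """Steady-state cost of running a cluster of a given size."""
    return servers * months * cost_per_server_month

def tcs(old_servers, new_servers, cost_per_added_server=2_000):
    """One-time cost of growing the cluster: hardware, migration, people."""
    return (new_servers - old_servers) * cost_per_added_server

full_cost = (
    tco(100, 12)     # year one: run 100 servers
    + tcs(100, 250)  # grow from 100 to 250 servers
    + tco(250, 12)   # year two: run 250 servers
)
print(f"Full two-year cost: ${full_cost:,}")  # $2,400,000
```

The point of keeping tcs() separate from tco() is exactly the lesson Bob learned: the transition itself is a first-class cost, not a rounding error between two steady states.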

At Aster, we have been thinking about TCS from day one exactly because we design our systems for environments of fast data growth. We have seen TCS dominating the cost of data projects. As a result, we have built a product that is designed from the ground-up to make scalability seamless and reduce the TCS of our deployments to a minimum. For example, one of our customers scaled up their Aster deployment from 45 to 90 servers with a click of a button. In contrast, traditional scaling approaches – manual, tedious and risky – bloat TCS and can jeopardize whole projects.

As fast data growth becomes the rule rather than the exception, we expect more people to start measuring TCS and seek ways to reduce it. As Francis Ford Coppola put it, “anything you build on a large scale or with intense passion invites chaos.” And while passion is hard to manage, there is something we can do about scale.
