Aster Parallelism: V-Workers

By Michael Riordan, Teradata Aster

One of the keys to Aster’s parallel processing power is the Virtual Worker (or v-worker).  V-workers are the compute and storage units in an Aster cluster, much like AMPs are for Teradata.  In this tutorial, I’ll show you how easy it is to add another v-worker through a procedure called ‘partition splitting’, doubling the parallelism of our Aster Express cluster.

With Aster Express, we’re showcasing Aster’s features and functionality with only a single Worker node, so our tuning options are limited. (If you’re lucky enough to have a very powerful PC with 6GB+ of RAM and at least a quad-core CPU, we’ll have a tutorial showing how to add a second Worker to your cluster.)  The default Aster Express configuration has just one v-worker on our lone Worker node.  In the following easy steps, we'll reconfigure the Aster Express cluster with a second v-worker.  In fact, this step is also required to leverage some of the SQL-MR components in the Aster analytic library, including the Teradata-Aster connector.  Hopefully this will again show you how simple the configuration and management of an Aster cluster can be.
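To see what a second v-worker buys us, recall how Aster spreads data: rows of a distributed table are hashed on the table’s distribution key and assigned to v-workers, and each v-worker scans only its own slice of the data in parallel. Here’s a minimal sketch (the table and column names are hypothetical, not part of this tutorial):

    -- Rows are hashed on user_id and spread across all v-workers,
    -- so a scan of clicks runs on every v-worker in parallel.
    CREATE TABLE clicks (
        user_id bigint,
        url     varchar,
        ts      timestamp
    ) DISTRIBUTE BY HASH(user_id);

With two v-workers instead of one, each holds roughly half the rows, so queries against the table get twice the concurrent workers.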

Let’s start with the Aster Management Console (AMC) that we’ve already used in our previous tutorials (Aster Express: Getting Started).  From the Admin/Cluster Management screen, which shows our Queen and Worker nodes, click the Worker’s IP address (192.168.100.150) to open the node details screen.

[Screenshot 14-1]

On the node details screen, you’ll see a Virtual Workers count of 1.

[Screenshot 14-2]

Now, to increase the number of virtual workers, we’ll need to run a configuration script on the Aster Queen node.  To do this, either log in to the Queen with an SSH client, or simply use the Queen image directly as we’ve done in earlier tutorials (Using Aster Express: Act 1) and open a GNOME terminal on the Queen’s desktop.
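If you take the SSH route, any client will do. For example (the Queen’s IP address and login below are assumptions based on the default Aster Express setup from the earlier tutorials; substitute your own address and credentials):

    $ ssh root@192.168.100.100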

From the command prompt, run the Change Partition Count utility, which requires at least one argument: the desired partition count. In our case, we’ll set this value to 2.

[Screenshot 14-3]

You can also use the --help option for a full listing of the available options and parameters.
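As an illustrative sketch only: the utility’s exact name and path are shown in the screenshot above and can vary by Aster release, so treat the name below as a placeholder, not the real command:

    # Placeholder utility name; use the exact command from the screenshot.
    $ changePartitionCount --help   # list all options and parameters
    $ changePartitionCount 2        # split to 2 v-worker partitions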

The Aster cluster then reboots automatically as the final step. After this reboot, the partition split is complete.  If the operation fails, restart it by re-running the command with the same parameters you used the first time.

[Screenshot 14-4]

Note that this process may take some time, depending on the power of your PC and the amount of data you’ve already loaded onto your Aster cluster.  In my case, the whole process took about 15 minutes, so please let it run until the partition splitting completes and you see a new command prompt in your terminal window.

Meanwhile, you can go back to the AMC and watch the changes taking place in your Aster cluster.  Go to the Node: Partition Map tab to monitor the progress of your partition split. Green squares represent active v-workers (you’ll see a new yellow square when the process first starts). When the number of active v-workers reaches our new partition count, the split is complete.

[Screenshot 14-5]

Going back to the Worker node details screen (click the Worker IP address 192.168.100.150 link again), we’ll now see the Virtual Workers count at 2.  Success!  Just remember to wait for the command on the Queen node to finish; you’ll know it’s done when you see a new command prompt.

[Screenshot 14-6]
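As one last sanity check from the Queen, you can confirm the cluster is back up and answering queries with ACT, the Aster client terminal we used in Act 1. The database, user, and table below are assumptions (the usual Aster Express defaults plus the hypothetical table from the earlier sketch); adjust them to match your environment:

    $ act -h 127.0.0.1 -d beehive -U db_superuser
    beehive=> SELECT COUNT(*) FROM clicks;

If that query returns without errors, both v-workers are online and your newly split cluster is ready for the SQL-MR tutorials to come.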