The compute nodes boot via the Preboot Execution Environment (PXE), using the front-end node as the server. All of the nodes of the cluster get their file systems from the same CD image, so it is guaranteed that all nodes run the same software. The CD image is created by running a single script, which makes it possible to customise the live CD image with extra Debian packages.
Pelican is created using Debian Live. To make your own version you only need live-helper (along with debootstrap, wget and rsync) and the make_pelican script, which is provided below.
The LAM-MPI and Open MPI implementations of MPI are installed. Both 32-bit and 64-bit versions are available, with Debian testing (Lenny) as the base for both.
The image contains extensive example programs using GNU Octave and MPITB, as well as the Linpack HPL benchmark.
You can use any Class C network you like. By default, the cluster is on 10.11.12.*
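That wildcard corresponds to the 10.11.12.0/24 network (a /24 is the size of a traditional class C). Python's standard ipaddress module confirms the arithmetic:

```python
import ipaddress

# The default PelicanHPC cluster network, written in CIDR notation.
cluster_net = ipaddress.ip_network("10.11.12.0/24")

print(cluster_net.num_addresses)      # 256 addresses in a /24
print(cluster_net.broadcast_address)  # 10.11.12.255
```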
The Xfce4 desktop is included, with Konqueror for browsing and file management, KSysGuard for monitoring the cluster, and Kate and nano for editing. As noted, it is very easy to add packages. Pelican is a bare-bones framework for setting up an HPC cluster.
Pelican releases and all testing are currently done using Debian Lenny as the base. Squeeze or sid may or may not work.
Updates (via Distrowatch):
Michael Creel has announced the release of PelicanHPC 2.3, a Debian-based live CD for high performance computing clusters formerly known as ParallelKnoppix: "PelicanHPC 2.3 is available. From this release forward, Debian 'Squeeze' will be the base for PelicanHPC, until further notice. Also, PelicanHPC is henceforth available only in a 64-bit edition. There are no major changes since version 2.2, apart from the newer versions of most packages. In particular, the kernel is now at 2.6.32, and Xfce is looking sharp at version 4.6.2. In the move from 'Lenny' to 'Squeeze' as the base, the Ganglia monitoring system has stopped working, because the configuration files have not yet been updated. I would be happy to receive gmond.conf and gmetad.conf files that cause the installed version of Ganglia to work properly on PelicanHPC. KSysGuard still works well as a cluster monitor, though."
Visit the project's home page to read the brief release announcement.
Download: pelicanhpc-v2.3.iso (649MB, MD5).
• 2011-01-12: Distribution Release: PelicanHPC 2.3
• 2010-09-11: Distribution Release: PelicanHPC 2.2
• 2010-07-22: Development Release: PelicanHPC 2.2 RC
• 2010-01-13: Distribution Release: PelicanHPC 2.0
• 2009-02-04: Distribution Release: PelicanHPC 1.8
• 2008-05-20: Distribution Release: PelicanHPC 1.5.1
These features allow for full headless remote administration, and make it considerably more convenient to use PelicanHPC to run a permanent cluster. New in this version:
- ~/pelican_config file to allow for persistence, customization and headless boot. People interested in doing serious work with PelicanHPC are encouraged to examine this self-documented file
- autodetection of persistent frontend home
- autodetection of frontend and node local scratch space
- ability to run local scripts post boot and setup
- node beep after boot
- automated node booting using wake-on-lan
- configuration of slots and optional frontend inclusion for mpi
- static IP assignment configurable using MAC addresses.
- node startup/shutdown script
- possibility to serve DHCP to machines that are not compute nodes
- python frontend for mpi software included (mpi4py)
- can now build PelicanHPC on PelicanHPC, using live-build
- added a couple of text editors, joe and nano
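The static IP feature in the list above maps MAC addresses to fixed IPs. As a rough sketch of the idea only (the real settings live in pelican_config and its format may well differ; the MACs and IPs here are made up):

```python
# Hypothetical MAC-to-IP table, illustrating the idea behind static
# assignment by MAC address; the actual configuration belongs in
# ~/pelican_config, which is self-documented.
static_leases = {
    "00:11:22:33:44:55": "10.11.12.101",
    "00:11:22:33:44:66": "10.11.12.102",
}

def ip_for(mac):
    # Unknown MACs get None, i.e. fall back to a dynamic DHCP lease.
    return static_leases.get(mac.lower())
```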
PelicanHPC is a distribution of GNU/Linux that runs as a "live CD" (it can also be put on a USB device, it can be booted from a hard disk partition, or it can be used as a virtualized OS). If the ISO image file is put on a CD or USB, it can then be used to boot a computer. The computer on which PelicanHPC is booted is referred to as the "frontend node". It is the computer with which the user interacts. Once PelicanHPC is running, a script - "pelican_setup" - may be run. This script configures the frontend node as a netboot server. After this has been done, other computers can boot copies of PelicanHPC over the network. These other computers are referred to as "compute nodes". PelicanHPC configures the cluster made up of the frontend node and the compute nodes so that MPI-based parallel computing may be done.
A "live CD" such as PelicanHPC by default does not use the hard disks of any of the nodes (except Linux swap space, if it exists), so it will not destroy or alter your installed operating system. When the PelicanHPC cluster is shut down, all of the computers are in their original state, and will boot back into whatever operating systems are installed on them. PelicanHPC can optionally be made to use hard disk storage, so that its state can be preserved across boots. It can be configured to boot without user intervention, with access possible by ssh. There is also the possibility of making the compute nodes boot using wake-on-LAN. With these more advanced optional features, PelicanHPC can be used to run a headless permanent cluster.
PelicanHPC is made using Debian GNU/Linux as its base, through the Debian Live system. It is made by running a single script using the command "sh make_pelican-v*". Customized versions of PelicanHPC, for example containing additional packages, can easily be made by modifying the make_pelican script. The make_pelican script and the packages it needs are provided on PelicanHPC itself, so you can build a custom image from a running copy of a released image. You can also run make_pelican from any GNU/Linux distro if you install live-build and a few other packages.
- The frontend node can be a real computer booted using a CD or a USB device, or a virtual machine that is booted using the CD image file. With this last option, PelicanHPC can be used at the same time as the normal work environment, which may be any of the common operating systems.
- The compute nodes are normally real computers, for maximum performance, but they can also be virtual.
- Supports MPI-based parallel computing using Fortran (77, 90), C, C++, GNU Octave and Python.
- Offers the Open MPI implementation of MPI.
- Cluster can be resized to add or remove nodes using the "pelican_restarthpc" command.
- Easily extensible to add packages. Also easily modifiable, since the PelicanHPC CD image is created using a single script that relies on the Debian Live system. For this reason, the distributed version is fairly basic and lightweight.
- Contains example software: Linpack HPL (now at v2.0) benchmark and extensive examples that use GNU Octave. Also has mpi4py.
Limitations and requirements
- The compute nodes must be booted over the network. This is an option offered by all modern network devices supplied with motherboards, but it often must be enabled in the BIOS setup. Enable it, and give it higher priority than booting from hard disk or other sources. If you have a network card that won't do netboot, it is possible to work around this using rom-o-matic. Another thing to be aware of is that the PelicanHPC frontend operates as a DHCP server. You should not use it on an open network, or you will cause DHCP conflicts. This will get you into a world of trouble with the network administrators. Plus, your compute nodes will not boot properly.
- A PelicanHPC cluster is designed to be used by a single person - there is only one user, with the username "user".
- Released versions are for 64-bit CPUs only (Opteron, Turion, Core 2, etc.). make_pelican can be used to make a 32-bit version, if needed.
- The PelicanHPC web page lists some other similar distros that may be more appropriate for certain uses.
Licensing and Disclaimer
PelicanHPC is a CD image made by running a script (see below). The script is licensed GPL v3. The resulting CD image contains software from the Debian distribution of GNU/Linux, and several other sources, which is subject to the licenses chosen by the authors of that software.
The released PelicanHPC CD images are distributed in the hope that they will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
The two main commands for administration of the cluster are "pelican_setup", which configures the frontend as a server (NFS-exporting /home, etc.), and "pelican_restarthpc", which is used to add or remove nodes after the initial setup. The rest of this section explains how to use them.
The frontend and all compute nodes must be networked together. IMPORTANT: the frontend node will act as a DHCP server, so be sure to isolate the network used for the cluster from other networks, to avoid conflicts with other DHCP servers. If you start handing out IP addresses to your co-workers' computers, they may become annoyed. If the frontend node has multiple network interfaces, you can use one to connect to the cluster and another to connect to the Internet.
Put the CD in the computer that will be the frontend, and turn it on. Make sure the BIOS setup lets you boot from CD.
Once you boot up, eventually you see:
This screen gives you the opportunity to use a permanent storage device for the /home directory of the PelicanHPC user. By default, if you just press Enter, /home is placed on a ramdisk, which disappears when you shut down.
IMPORTANT NOTE: there is another way to use permanent storage that is quite flexible. This is documented in the file /home/user/pelican_config, which you can see if you boot using the default. If this is your first experience with PelicanHPC, I recommend doing a default boot, study pelican_config, and then choose the option for permanent storage that you find most appropriate.
Next, you will see
You will probably want to choose "yes", unless you are re-using work you saved in a previous session.
Next, you are prompted to change the default password:
You should backspace to remove the default and then type in a new password. This will be the password for user "user" on the frontend node and on all of the compute nodes, too.
Finally, you are all booted up and the login prompt appears:
Enter the username "user" and then the password that you set a moment ago.
Now you are logged in:
To set up a cluster, type "pelican_setup". You can do this from the console as in these instructions, or from Xfce by opening up a terminal. Next, supposing that you have more than one network device, we see the following:
After you choose the network device, services need to be started. Please read the warning in the following screenshot. Setting up a PelicanHPC DHCP server will get you in trouble with your network administrators if you do this on an open network. You should make sure that the network device used for the cluster is isolated from all networks except the cluster. When you see the following screen, choose "yes".
Next you will see
Press enter, and go turn on the compute nodes.
When a compute node starts to netboot, you'll see this whiz by:
When a compute node is done booting, you'll see this, supposing that it has a monitor:
Back on the frontend node, you see the following:
Once a node has booted up, the count goes up:
Keep hitting "no" until all of your compute nodes have booted up. Once you click yes, you'll see something like the following, depending on how many nodes you have.
Finally, a quick test of the cluster is run. You should see something like the following:
OK, that's it, the cluster is ready to use. Some other tips:
- you can add software to the frontend node using "apt-get install whatever", supposing that the frontend has a second network card that you have configured to enable Internet access. This software is not available on the compute nodes. To add software so that it is available to all the nodes, it should be installed somewhere in /home/user.
- the default MPI setup is in the file /home/user/tmp/bhosts. This assigns ranks to hosts in a round-robin fashion. If your hosts have different speeds, numbers of cores, etc., you should modify this file. If the frontend node is virtual but the compute nodes are real, you should probably remove the frontend node from the calculations.
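The round-robin assignment is easy to picture. This short sketch (hostnames are made up; the real assignment is whatever bhosts contains) shows how rank r lands on host r mod n:

```python
# Hypothetical host list; in PelicanHPC the real list lives in
# /home/user/tmp/bhosts.
hosts = ["frontend", "node1", "node2"]

def assign_ranks(n_ranks, hosts):
    # Rank r runs on host r mod len(hosts): a round-robin rotation.
    return [hosts[r % len(hosts)] for r in range(n_ranks)]

print(assign_ranks(5, hosts))
# ['frontend', 'node1', 'node2', 'frontend', 'node1']
```

To weight a faster or multi-core machine more heavily you would list it more than once, and to keep a virtual frontend out of the computations you would leave it out of the list entirely, just as you would when editing bhosts by hand.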
- ksysguard is available, and a small amount of effort will turn it into a nice cluster monitor. See this post for general information on how to do it.
- if you need other packages, you can make your own version pretty easily using the make_pelican script that is available on the PelicanHPC homepage. This is explained (somewhat) below.
- you can resize the cluster (add or remove compute nodes) whenever you like, by running "pelican_restarthpc".
IMPORTANT: In the /home/user directory is the file pelican_config. This file contains switches for advanced options that allow features such as use of permanent storage, booting without user intervention, changing the network of the cluster, wake-on-LAN, etc. Casual users do not need to explore this, but people who want a permanent cluster should look at it. It is self-documented.
PelicanHPC has the Linpack HPL benchmark, and some extensive examples from the field of econometrics that use GNU Octave. Econometrics is a field of study that applies statistical methods to economic models. The software is in the Econometrics directory:
There is a document "econometrics.pdf" that has a lot of information, including some about parallel computing:
Open a terminal, type "octave" and then "kernel_example" (please note that underscore back there, ... music please, maestro):
et voilà! Some nice pictures:
That last screenshot shows the output of kernel_example.m when it is run serially, on a single core. To see how to run it in parallel, see the next shot. NOTE: the kernel routines do no computations on rank 0 (it is used to gather the results), so you must specify at least 2 MPI ranks.
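The rank-0-gathers constraint can be illustrated with a plain-Python sketch (no MPI here; the function and the toy squared-sum computation are invented for illustration, not taken from the kernel routines):

```python
# Sketch of the master/worker pattern: rank 0 only gathers, so with
# n_ranks total there are only n_ranks - 1 workers doing computation.
def run(n_ranks, data):
    if n_ranks < 2:
        raise ValueError("need at least 2 MPI ranks: rank 0 only gathers")
    workers = n_ranks - 1
    chunks = [data[i::workers] for i in range(workers)]  # split data among workers
    partials = [sum(x * x for x in c) for c in chunks]   # each worker "computes"
    return sum(partials)                                  # rank 0 gathers and combines

print(run(3, [1, 2, 3]))  # 14
```

With only one rank there would be nobody left to compute, which is exactly why the Octave examples require at least 2 ranks.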
Other things to try are "bfgsmin_example", "mle_example", "gmm_example", "mc_example" and a few others I'm forgetting about. To find where the code is, type "help mc_example", for example, while in Octave. Then, go edit the relevant file to learn more about what it does. Or, while in Octave, type "edit bfgsmin_example" (or edit whatever you like) and the file found in Octave's path will open up in the vim editor.
By default, PelicanHPC images put /home/user on a ramdisk, which disappears when you shut down. If you want to re-use your work, you need to save it between sessions. There are many options, such as mounting a hard disk, using a USB device, etc. If you have an Internet connection configured, you can even email it to yourself, as illustrated in the next shot:
If you use PelicanHPC for serious work, it is very convenient to mount a storage device to use as /home, so that your work will be saved between sessions without taking any special steps. When you boot up the frontend node, you have the option to select a storage device to use. This is a feature for advanced users, and I strongly advise that you dedicate a hard disk partition for use with PelicanHPC. If you use a partition with other data on it, you should make sure to back it up before using it with PelicanHPC! Only ext2 and ext3 formats are known to work. This feature has been tested using a very limited set of hardware, so use it with caution. There is also the option to automatically mount a volume that has a special name. See pelican_config in /home/user. This is the best solution for users who want to use PelicanHPC on a long term basis.
Using the make_pelican script
The distributed ISO images provide a bare-bones cluster setup system, plus some packages that I use in my research and teaching. There are a few examples taken from my work, which may be of interest to those learning the basics of MPI, or to people interested in econometrics. However, many users will find that Pelican does not contain packages that they need. If one uses pelican_config properly, it is possible to give all nodes of the cluster Internet access through the connection of the frontend node, so packages can simply be added using "apt-get". Nevertheless, some users will prefer to have a custom version of the CD image. PelicanHPC is made by running a single script, "make_pelican", which is available on the download page, and also on the released images. If you have the prerequisites for running the script, it is very easy to make a customized version of Pelican. The prerequisites are installed on PelicanHPC, so you can build a custom version using the released version. The prerequisites are:
- an installed version of GNU/Linux. This can be a minimal installation in a chroot jail if you prefer to run something else for your normal work. You could even use a virtual machine under Windows, if you are a Windows user.
- the live-build package. Use the version available in Debian unstable: get it at http://packages.debian.org/sid/live-build. It is available as a .deb, and also as source code for use with other distributions. You also need the debootstrap, wget and rsync packages.
- examine the make_pelican script, which contains some self-explanatory comments. Add the packages you need to the package list section.
- you need to run the make_pelican script as the root user. A fast Internet connection is helpful, since a lot of packages need to be downloaded. It also helps to build the image on a fast, hopefully multicore computer: parts of the build process are parallelized and will take advantage of multiple cores. Build time for the default configuration on a decent dual-core laptop with a lot of RAM is less than half an hour.
- when you are done, there will be a file "binary-hybrid.iso" in the ../<arch>/frontend directory, where ../ is the location of the make_pelican script and <arch> is either i386 or amd64, depending on which you left uncommented in the script.
There is a manual for Debian Live. Please have a look at it before trying to use make_pelican. Additional information is on the Debian Live homepage. That information is the main documentation, since make_pelican is just a script that supplies a specific configuration to the Debian Live system for building a live CD image. Also remember that "man live-build", "man lb_config" and "man lb_build" will give you information.