Computing Facilities

Here is some general information about the computing facilities used by the group.

CFD Computer Lab
The UTIAS CFD lab has around 30 workstations (desktop computers) running Linux. These computing resources are shared with Professor Groth's group. The first two workstations inside the door are for general use; if those are in use, you may use any other available computer on the individual desks. You will need an account (username and password) to access the network.

A very helpful overview of the CFD Computer Lab can be found in the following PDF: 

The following website is a guide for new Linux users: []

HPACF
The High Performance Aerospace Computer Facility is a computing cluster that is independent of the CFD lab. You need a separate account to access this cluster. HPACF is often used to run test cases for OPTIMA2D and Jetstream.

To access HPACF from any computer on the network, type the following command in the terminal: //ssh fe03//
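If you log in to HPACF frequently, an entry in your ~/.ssh/config can save typing. This is only a sketch: the alias "hpacf" is an arbitrary choice and the User line is a placeholder for your own account name; "fe03" is the front end mentioned above.

```
# Sketch of an ~/.ssh/config entry.
# "hpacf" is an arbitrary alias and "your_username" is a placeholder --
# substitute your own account name.
Host hpacf
    HostName fe03
    User your_username
```

With this in place, "ssh hpacf" behaves like "ssh fe03" with your username filled in automatically.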

Please see the HPACF User's Guide for how to use this computer resource.

SciNet
You need a separate account (username and password) for SciNet.

Start by following the procedure described to obtain an account on SciNet.

Please read the user's guide. Here is a home-brew script to colourfully view your own queue (similar to **qstat**), which can be run on SciNet computers.
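The group's actual script is not reproduced on this page, but the idea can be sketched as a small shell function that highlights your own jobs in qstat-style output. The name "highlight_user_jobs" is illustrative, not the real script's name.

```shell
# Sketch of a queue highlighter: read qstat-style output on stdin and
# print lines that mention the current user in green, passing all other
# lines through unchanged. ("highlight_user_jobs" is a hypothetical name.)
highlight_user_jobs() {
    awk -v user="$USER" '
        $0 ~ user { printf "\033[32m%s\033[0m\n", $0; next }
        { print }
    '
}

# Usage: qstat | highlight_user_jobs
```

The real script likely colour-codes by job state as well; this only shows the basic filtering idea.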

**Lattice on WestGrid**
You need to first apply for an account on WestGrid (through your Compute Canada portal) and then e-mail the WestGrid administrators for an account on Lattice. Once your account is confirmed, you can log in via "ssh lattice.westgrid.ca". You can then follow the instructions in the following document to get set up:

(Recent as of June 25, 2012)

**Parallel on WestGrid**
This system shares a file system with Lattice and has a similar architecture, but code must be compiled separately for it. To compile on Parallel, the Lattice set-up instructions above still apply; just replace the word "lattice" with "parallel" when compiling. The different compiler option adds a "p" to the executable name, so the compiled code is called "jetstreamp" instead of "jetstream". This keeps the Lattice and Parallel executables separate when running on both systems, and the submit script must reflect the new name. Parallel also uses 12 processors per node instead of 8, and has 22-23 GB of memory per node instead of 10-11 GB; both differences must be reflected in the submit script as well.
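As an illustration of these differences, the resource-request lines of a Parallel submit script might look like the following. This is a hedged sketch using standard PBS/Torque directives; the node count, walltime, and input file name are placeholders, and any other directives should be copied from a working Lattice script.

```shell
#!/bin/bash
# Sketch of Parallel resource requests (placeholders throughout --
# check against a working Lattice submit script before use)
#PBS -l nodes=2:ppn=12     # Parallel: 12 processors per node (Lattice: ppn=8)
#PBS -l mem=44gb           # Parallel: 22-23 GB per node (Lattice: 10-11 GB)
#PBS -l walltime=24:00:00

cd $PBS_O_WORKDIR
mpiexec ./jetstreamp input.in   # note the "p" suffix on the Parallel executable
```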

Also, to use fewer than the maximum processors per node, the syntax is "-npernode x", where "x" is the number of processors to use per node. This applies to both Lattice and Parallel. This differs from SciNet, where the syntax is "-perhost x". This is useful when more memory per process is needed.
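For example (a sketch: 6 ranks per node is an arbitrary choice here, used to roughly double the memory available to each process):

```shell
# Lattice/Parallel: run 6 MPI processes per node instead of the full 8 or 12
mpiexec -npernode 6 ./jetstream

# The same thing on SciNet uses -perhost instead
mpiexec -perhost 6 ./jetstream
```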

(This section last updated September 2012)

**Guillimin**
We also use this system. Someone who is familiar with it can fill this section out.