Linux and Free Software
- Lightweight PaaS on the NCI OpenStack Cloud
- (2013) Kevin Pulo.
- Conference presentation (25 mins) at linux.conf.au 2013 OpenStack Miniconf,
Australian National University, Canberra, Australia,
January 29, 2013.
-
[ main ]
[ show abstract ]
[ slides ]
[ slides (odp) ]
- video:
[ YouTube ]
[ mp4 ]
[ ogv ]
[ webm ]
- local copy:
[ mp4 ]
[ ogv ]
[ webm ]
- Abstract:
National Computational Infrastructure (NCI), Australia's peak High-Performance Computing facility, will be hosting a High Performance node of the National eResearch Collaboration Tools And Resources (NeCTAR) "National Research Cloud". This is an OpenStack-based cloud supporting world-class research by academics across Australia. However, plain IaaS is a bit too "raw" for most researchers, so we have developed a lightweight PaaS to augment our OpenStack deployment. The system is integrated with LDAP and manages instances with Puppet manifests in git repositories, along with some supporting scripts.
This presentation will cover the main features of our setup, including: how we use Puppet in a multi-tenant environment; using git and gitolite to host Puppet manifests, allowing collaboration on Puppet setups while keeping sensitive configs private; and our nova-boot wrapper script and userdata helper script, which automate the application of the Puppet repository to instances and ensure they are kept in sync. If time permits, a brief demo may be given of cloning a repository, starting an instance from it, and then adjusting the configuration for that instance.
- Syzix: meandering down the garden $PATH
- (2013) Kevin Pulo.
- Conference presentation (40 mins) at linux.conf.au 2013 Cross-Distro Miniconf,
Australian National University, Canberra, Australia,
January 29, 2013.
-
[ show abstract ]
[ slides ]
[ slides (odp) ]
- video:
[ YouTube ]
[ mp4 ]
[ ogv ]
[ webm ]
- local copy:
[ mp4 ]
[ ogv ]
[ webm ]
- Abstract:
Syzix (syzix.org) is an experimental new system with a slightly crazy premise: every package is installed into its own separate location in the filesystem, and users and processes can dynamically change the packages and versions that they can see and use. It is currently targeted at advanced users and developers, and is under active development.
Syzix has evolved out of the site package management system used on the peak supercomputer systems at NCI (National Computational Infrastructure) at ANU. It is expected to be used on upcoming NCI compute systems, including the 57,000 core, Linux-based, petascale HPC facility (scheduled for January 2013), and the NCI cloud.
Syzix is similar in concept to GoboLinux and package management systems like GNU Stow. Multiple versions of the same package can be installed concurrently, the filesystem itself acts as the package manager, and manually-installed software is first-class. Unlike GoboLinux, Syzix retains a traditional Unix filesystem layout for non-package files, e.g. /home, /boot, /dev, /etc. By default, packages are installed into /sw/$repo/$arch/$pkgname/$version.
The difference in Syzix is that the packages available to a process are controlled by its environment - as opposed to symlinks (which affect all processes), or messing around with absolute paths (which is a pain). This idea shouldn't be a big surprise - it's just managing $PATH (and $LD_LIBRARY_PATH, $CPATH, etc.), which is what these variables have always been designed for.
A brief demo of Syzix will be given during the Sysadmin Miniconf on Monday morning. This presentation will go into more detail about the system, its components, how they fit together, and some of the benefits and challenges that arise.
- Syzix: heading off the beaten $PATH
- (2013) Kevin Pulo.
- Conference presentation (10 mins) at linux.conf.au 2013 Sysadmin Miniconf,
Australian National University, Canberra, Australia,
January 28, 2013.
-
[ main ]
[ show abstract ]
[ extended abstract ]
[ slides ]
[ slides (odp) ]
- video:
[ YouTube ]
[ mp4 ]
[ ogv ]
[ webm ]
- local copy:
[ mp4 ]
[ ogv ]
[ webm ]
- Abstract:
Syzix (syzix.org) is an experimental new system with a slightly crazy premise: every package is installed into its own separate location in the filesystem, and users and processes can dynamically change the packages and versions that they can see and use. It is currently targeted at advanced users and developers, and is under active development.
Syzix has evolved out of the site package management system used on the peak supercomputer systems at NCI (National Computational Infrastructure) at ANU. It is expected to be used on upcoming NCI compute systems, including the 57,000 core, Linux-based, petascale HPC facility (scheduled for January 2013), and the NCI cloud.
Syzix is similar in concept to GoboLinux and package management systems like GNU Stow. Multiple versions of the same package can be installed concurrently, the filesystem itself acts as the package manager, and manually-installed software is first-class. Unlike GoboLinux, Syzix retains a traditional Unix filesystem layout for non-package files, e.g. /home, /boot, /dev, /etc. By default, packages are installed into /sw/$repo/$arch/$pkgname/$version.
The difference in Syzix is that the packages available to a process are controlled by its environment - as opposed to symlinks (which affect all processes), or messing around with absolute paths (which is a pain). This idea shouldn't be a big surprise - it's just managing $PATH (and $LD_LIBRARY_PATH, $CPATH, etc.), which is what these variables have always been designed for.
This makes Syzix a rather unique and interesting creature that can be difficult to characterise. It's both stable and unstable - installing bleeding-edge packages doesn't compromise system stability. A new version of a package can easily be installed and thoroughly tested before being made the default. Or the default can be updated straight away, and then rolled back if any breakage is noticed - perhaps only for some processes.
Since everything lives inside /sw, it can be installed alongside another distribution without partitioning or dual booting. This is great for taking Syzix for a test drive, or using it for certain packages while keeping your favourite existing distribution. Similarly, users can recompile packages into $HOME/sw, eg. on machines they don't manage.
- Fun with LD_PRELOAD
- (2009) Kevin Pulo.
- Conference presentation (50 mins) at linux.conf.au 2009,
University of Tasmania, Hobart, Australia,
January 23, 2009.
-
[ main ]
[ show abstract ]
[ slides (local) ]
[ associated video (local) ]
- related software:
[ libsysconfcpus ]
[ xlibtrace ]
[ xmultiwin ]
[ tunerlimit ]
- Abstract:
LD_PRELOAD is a useful mechanism provided by many dynamic linkers (including the GNU C library (glibc)). It enables users to specify additional shared libraries that are loaded when executing dynamically linked programs. Typically these preloaded libraries will override or intercept functions defined by other (regular) shared libraries. This allows the behaviour of existing programs and libraries to be modified non-invasively, that is, without requiring recompilation or relinking. Some of the creative applications of this powerful technique include:
1. Filesystem shenanigans, for example, presenting a modified view of the filesystem, or logging all filesystem accesses.
2. Network shenanigans, for example, preventing programs from accessing the network, or limiting their access.
3. Debugging and testing, for example, causing memory allocation or IO to deliberately fail to test graceful error handling.
4. Annoyance reduction, for example, intercepting terminal beeping.
5. Library tracing and logging, similar to ltrace/strace.
6. Graphical augmentation, for example, adding a frames-per-second (fps) display to OpenGL applications.
7. Reverse-engineering, extending and modifying the behaviour of closed-source software, for example, adding awareness of cpusets to proprietary (but hardware-optimised) MPI libraries provided by HPC vendors.
This presentation is aimed at people who are C programmers (or at least familiar with C) and are interested in learning about LD_PRELOAD. Practical code examples will be given, and live demos shown (where possible and appropriate).
There are four main sections of the presentation. First is a broad overview of dynamic linking and LD_PRELOAD, and its advantages, disadvantages, applicability and limitations. This is followed by a review of existing LD_PRELOAD applications in the areas mentioned above.
The third section shows how to write LD_PRELOAD-oriented C code. This includes skeletal code and the basic structure of a function-intercepting LD_PRELOAD library, motivated by the in-presentation development of a simple LD_PRELOAD library (with example autoconf/automake compilation code).
The final section will demonstrate two non-trivial applications of LD_PRELOAD to X11 applications: xmultiwin and xlibtrace. xmultiwin is a preload library that transparently "clones" X11 windows, allowing unmodified X11 applications to be displayed (and interacted with) simultaneously in multiple windows, for example, one on each monitor of a multi-monitor Xinerama setup. xlibtrace is a tracing library for the X11 client library interface (libX11 and friends). It is similar to ltrace/strace, except that it is specifically optimised for the X11 library, and similar to xtrace, except that it traces communication between the application and the X11 client library, not just communication between the X11 client and server. Both projects additionally feature heavy use of automatic code generation using C preprocessor macros and shell scripting.
Information Visualisation and Human-Computer Interaction
- Panemalia: visualising longitudinal datasets at the Australian Data Archive
- (2011) Kevin Pulo, Ben Evans, Deborah Mitchell, Steven McEachern.
- Conference presentation at eResearch Australasia 2011,
Sebel & Citigate Albert Park, Melbourne, Australia,
November 7, 2011.
-
[ main ]
[ show abstract ]
[ extended abstract (local) ]
[ slides ]
- Abstract:
Longitudinal surveys are a very rich form of social science data, often containing a wealth of as-yet untapped hidden knowledge. However, such datasets are typically examined using analytic techniques and simple graphs. We believe that far more can be done in the analysis and exploration of such fertile datasets. Panemalia applies an advanced visualisation technique to longitudinal survey data. It is a highly interactive DHTML application that is integrated with the data repository at ADA, accessible to non-IT-savvy social science users, and supports the requirements of data familiarisation, exploration and quality assurance.
- Parallel Coordinate Plots for Fun and Profit
- (2010) Kevin Pulo.
- Invited presentation at Workshop on Analysis and Visualisation of Large and Complex Data Sets (AVLCD) 2010,
University of Sydney, Sydney, Australia,
December 13, 2010.
-
[ show abstract ]
[ slides ]
- Abstract:
Parallel Coordinate Plots are a very direct way of visualising very high dimensional data, however they are not without their challenges. This talk will briefly detail two applications of Parallel Coordinate Plots: Panemalia, an interactive visualisation tool for social science data (longitudinal datasets), and the Dow Jones Animated Parallel Multiverse, a historical stockmarket micro-data animation.
- Direct Visualization of Longitudinal Data
- (2010) Kevin Pulo.
- Conference presentation at IASSIST (International Association for Social Science Information Services and Technology) 2010,
Cornell University, Ithaca, New York, USA,
June 1-4, 2010.
-
[ main ]
[ show abstract ]
[ slides (local) ]
- Abstract:
This session examines best practice standards associated with managing and archiving longitudinal data. The data structure of longitudinal studies is more complex than for one-dimensional study designs and therefore new issues typically arise with the process of data management, archiving and analysis. There are a number of different types of longitudinal studies now in existence, including but not limited to:
- panel surveys following a cohort of individuals
- panel surveys following a random population sample of individuals over a period of time
- repeated cross-sectional surveys with a different cross-section of individuals sampled at each time point.
Five major longitudinal panel studies are currently archived at the Australian Social Science Data Archive, and there is increasing demand to archive data from regional and national longitudinal surveys, as well as repeated cross-sectional data such as Australian election campaigns. The presentations in this session will examine archiving practices associated with the different stages of longitudinal data archiving, including:
- Archiving cross-sectional time series data (Leanne den Hartog)
- Archiving longitudinal panel data (Steven McEachern)
- Visualisation of longitudinal panel data (Kevin Pulo)
- Visualisation of Complex Social Science Data: Parallel Coordinate Plots for Visualisation of Longitudinal Survey Data
- (2009) Kevin Pulo, Rhys Hawkins.
- Conference presentation at eResearch Australasia 2009,
Novotel Manly Pacific, Sydney, Australia.
November 9, 2009.
-
[ main ]
[ show abstract ]
[ slides ]
[ intro slides ]
- Abstract:
The growing importance of longitudinal and panel data requires new tools and technologies to analyse and present observations that may be collected over many years. Statistical packages typically use a variety of summary measures to provide researchers with the ability to determine trends, but with a consequent "loss" of information. This session will demonstrate some tools which allow collected data to be more easily analysed using modern visualisation techniques. GIS applications in the social sciences require that researchers can view data at various spatial aggregations, e.g. postcodes, electoral districts, census units, but at resolutions which protect the confidentiality of survey respondents. One of these tools, which builds upon Google Maps and related Google technologies, will be demonstrated.
- The Dow Jones Animated Parallel Multiverse
- (2009) Kevin Pulo.
- Visualization Challenge Second Prize Winner, awarded and exhibited at eResearch Australasia 2009,
Novotel Manly Pacific, Sydney, Australia.
November 12, 2009.
-
[ main ]
[ show abstract ]
[ extended abstract (local) ]
[ associated video (local) ]
- Abstract:
At any instant in time, the stockmarket can be thought of as a many-dimensional space, with the dimensions being quantities such as: the price of each stock, the volume traded, the number of trades, the number and volume of bid/ask orders, and so on. For the 30 stocks in the Dow Jones average, considering just the price and volume of trades gives a 60-dimensional space.
Most direct visualisation techniques can only deal with low dimensional data, that is, up to about 10 dimensions (3 spatial, 1 temporal, plus some variable "visual attributes" such as colour, texture, glyph size/shape). Thus a technique is needed to map this high dimensional space down to a low dimensional one for visualisation.
Parallel Coordinate Plots are a method of mapping such high dimensional spaces to a low dimensional visualisation. Rather than plotting each point in space as the intersection of coordinates on orthogonal axes, parallel coordinate plots draw the axes as a set of parallel lines. A point in the many-dimensional space is then represented as a line joining the corresponding values on each axis.
This visualisation uses an animated parallel coordinate plot to directly show the evolution of the Dow Jones Index stocks during the week of 29 September 2008 to 3 October 2008. The animation, which runs for about 1½ minutes, is similar to a time-lapse video, with each frame showing the state of the "multidimensional universe" of Dow Jones stocks during one minute of the week (1500 times faster than real-time). The primary data shown is the change in price of each stock traded, as a percentage relative to its price at the start of the week. [1]
The parallel coordinate plot is augmented in several ways. First, the volume of trades is shown relative to the maximum per-minute trade volume (per stock). These volumes are shown as vertical segments along each stock's axis, with maximum trade volume represented by a segment half the height of the plot. Second, Reuters news reports are shown as brief yellow highlights of the relevant stock's axis which quickly fade, along with the headline displayed vertically along the axis. Finally, the average trade price (and 2 standard deviations) is shown as a purple segment along each axis, providing some overall context for the movement of the stocks.
One "free parameter" of parallel coordinate plots is the axis ordering. This visualisation shows the stocks ordered by their (decreasing) relative price change at the end of the week. This allows the viewer to follow the progression of the stocks toward their ultimate "position" in the market at the week's end. Other orderings are possible, for example, average trade price, trade price volatility, total trade volume, total trade value, measures of company size, etc.
The data was processed using the awk language, which generated data and scripts for visualisation using gnuplot. The individual frame images were then converted into animations using transcode.
[1] Bid/ask quote data is not shown, as it was found to be visually indistinguishable from the trade price (consistent with the economic law of one price). Also, there are many trades every second, and their prices are averaged, weighted by volume. Shorter timescales could be used for a more direct, detailed view of the data, but this would result in a slower, longer animation.
- Parallel Coordinate Plots for Visualisation of Longitudinal Survey Data
- (2008) Kevin Pulo.
- Conference presentation at OzVis 2008,
Australian National University (ANU), Canberra, Australia,
December 3-4, 2008.
-
[ slides ]
- Navani: Navigating Large-Scale Visualisations with Animated Transitions
- (2007) Kevin Pulo.
- Conference presentation at 11th International Conference on Information Visualisation (IV 2007),
Zurich, Switzerland.
July 4-6, 2007.
-
[ main ]
[ show abstract ]
[ paper ]
[ software ]
- Abstract:
When visualising datasets that are too large to be displayed in their entirety, interactive navigation is a common solution. However, instantaneous updates of the visualisation during navigation can disrupt the user's mental map. Animated transitions are one way of addressing this problem. This paper presents the Data-Model-View-Controller (DMVC) architecture for navigation-based interactive systems. Navani, a software framework based on DMVC for supporting animated transitions during navigation, is presented, along with a sample application to hierarchical data.
- Structural Focus + Context Navigation of Relational Data
- (2004) Kevin James Pulo.
- PhD thesis (Computer Science),
School of Information Technologies, University of Sydney, Australia,
2004, Peter Eades, Supervisor.
-
[ main ]
[ show abstract ]
- Abstract:
Traditional information visualisation is concerned with methods of drawing a complete picture of an entire dataset. By contrast, much of modern information visualisation deals with the problem of how to see datasets that are too large to be displayed in toto. This problem is known as large scale information visualisation. The most common approach is to display only part of the dataset, but allow the user to navigate easily to other parts of the dataset that are not shown. Focus + Context techniques address large scale information visualisation by presenting a small amount of "focus" data at a high level of detail, surrounded by the majority of the remaining data at a low level of detail, the "context". The majority of Focus + Context techniques to date have been based on geometric distortion, where the visualisation of the entire dataset is adjusted to show the focus region at normal magnification, whilst demagnifying the context region.
An alternative to geometric distortion is data-driven Focus + Context, where the concepts of "focus", "context", "zooming" and "navigation" are defined in terms of multi-detail datasets that store the data using multiple levels of detail. Data-driven methods require the development of new techniques for the presentation and animation of information. This thesis presents a new data-driven Focus + Context technique which we call Structural Zooming.
Structural Zooming is presented in a visualisation-independent way that allows any illustration of any type of data to be adapted for use with Structural Zooming. Further, a method is given for performing Structural Zooming of relational data, namely trees and clustered graphs. This has the advantages of geometric zooming techniques (such as Graphical Fisheye Views and the Hyperbolic Browser), including high detail focus, low detail context, smoothly animated transitions during navigation and preservation of a high quality, aesthetically pleasing layout. In addition, it has advantages over geometric zooming, including an approximately constant level of visual complexity by presenting less data at lower detail in the context region, preservation of spatial properties and the ability to leverage existing information visualisation techniques.
We define empirical quality measures and present an experimental evaluation of Structural Zooming of relational data using these measures. This evaluation utilises a corpus of data files from three application domains, and navigation data derived both from real users and computational models of navigation, in order to validate the design choices made in the application of Structural Zooming to relational data.
- Smooth Structural Zooming of h-v Inclusion Tree Layouts
- (2003) Kevin Pulo, Peter Eades and Masahiro Takatsuka.
- Proceedings of International Conference on Coordinated & Multiple Views in Exploratory Visualization (associated with 7th International Conference on Information Visualisation (IV03)),
London, UK, July 15, 2003.
-
[ show abstract ]
[ paper ]
[ associated video ]
- Abstract:
We present a new paradigm for achieving Focus + Context visualizations called smooth structural zooming, which varies the level of detail of the data in different areas of the visualization, as opposed to geometrically distorting the visualization or employing rapid zooming techniques. A smooth structural zooming technique for horizontal-vertical (h-v) inclusion tree layouts is described and applied to the domain of the software design process, specifically, Design Behaviour Trees (DBTs). This system has the ability to navigate and explore data too large to be fully displayed, whilst maintaining an approximately constant level of visual complexity, good visualization aesthetics and preservation of the user's mental map through animation. The technique may be readily extended to arbitrary layout styles and algorithms, and to other hierarchical data structures and relational information, such as clustered graphs.
- Smooth Structural Zooming as a Tool for Navigating Large Inclusion Hierarchies
- (2003) Kevin Pulo, Peter Eades and Masahiro Takatsuka.
- Poster/demo session of ACM Symposium on Software Visualization (Softvis 03) (associated with 2003 Federated Computing Research Conference (FCRC 03)),
San Diego, USA, June 11 - 13, 2003.
-
[ show abstract ]
[ paper ]
[ associated video ]
- Abstract:
We present a new method for achieving Focus + Context visualizations called smooth structural zooming, which varies the level of detail of the data being visualized, rather than geometrically distorting the visualization. We apply a preliminary smooth structural zooming technique to the horizontal-vertical (h-v) inclusion tree layout convention, in particular Design Behaviour Trees (DBTs). We illustrate several advantages of this system, including the ability to navigate and explore inclusion tree layout data too large to be displayed at once, keeping good layouts at all times and preserving the user's mental map with animation.
- Inclusion Tree Layout Convention: An Empirical Investigation
- (2003) Kevin Pulo and Masahiro Takatsuka.
- Proceedings of Australasian Symposium on Information Visualisation (invis.au 03),
Adelaide, Australia, February 3 - 4, 2003, CRPIT Vol 24, Tim Pattison and Bruce Thomas, Eds, ACS,
pp 27 - 37.
-
[ show abstract ]
[ paper (local) ]
- Abstract:
The inclusion tree layout convention involves drawing trees as nested rectangles rather than the more common node-link diagrams. Finding good inclusion layouts presents some unique challenges, for example, the quantification of what is meant by the "size" of a rectangle. This paper empirically evaluates and investigates several rectangle size measures for their usefulness in the inclusion tree layout convention. We find that the area size measure, commonly used in graph drawing, is very poorly suited to the inclusion layout convention, whilst size measures based on the aspect ratio of the layout are more appropriate and give better results.
- Direct Interaction with Large-Scale Display Systems using Infrared Laser tracking Devices
- (2003) Kelvin Cheng and Kevin Pulo.
- Proceedings of Australasian Symposium on Information Visualisation (invis.au 03),
Adelaide, Australia, February 3 - 4, 2003, CRPIT Vol 24, Tim Pattison and Bruce Thomas, Eds, ACS,
pp 67 - 74.
-
[ show abstract ]
[ paper (local) ]
- Abstract:
Existing large scale display systems generally adopt an indirect approach to user interaction. This is due to the use of standard desktop-oriented devices, such as a mouse on a desk, to control the large wall-sized display. By using an infrared laser pointer and an infrared tracking device, a more direct interaction with the large display can be achieved, thereby reducing the cognitive load of the user and improving their mobility. The challenge in designing such systems is to allow users to interact with objects on the display naturally and easily. Our system addresses this with hotspots, regions surrounding objects of interest, and gestures, movements made with the laser pointer which trigger an action, similar to those found in modern web browsers (e.g. Mozilla and Opera). Finally, these concepts are demonstrated by an add-in module for Microsoft® PowerPoint® using the NaturalPoint™ Smart-Nav™ tracking device.
- Recursive Space Decompositions in Force-Directed Graph Drawing Algorithms
- (2001) K. J. Pulo.
- Proceedings of Australian Symposium on Information Visualisation (invis.au),
Sydney, Australia, December 3 - 4, 2001, CRPIT Vol 9, Peter Eades and Tim Pattison, Eds,
pp 95 - 102.
-
[ show abstract ]
[ paper (local) ]
- Abstract:
Force-directed graph drawing algorithms are a popular method of drawing graphs, but poor scalability makes them unsuitable for drawing large graphs. The FADE paradigm uses the proximity information in recursive space decompositions to address this problem and that of high visual complexity. The FADE paradigm has been presented with a simple and common recursive space decomposition known as the quadtree. However, quadtrees have the disadvantage of not being robust with respect to small perturbations and some transformations of the input data, and this can adversely affect the resultant graph drawing. This paper investigates the FADE paradigm using an alternative recursive space decomposition known as the recursive voronoi diagram, which avoids some of the problems found in quadtrees at an additional time complexity cost. Preliminary results with random graphs and graphs in the domain of software engineering are presented and suggest that using better recursive space decompositions has promise, but the additional computational effort may not be easily justified.
High Performance Computing
- SimParm: A simple, flexible and collaborative configuration framework for interactive and batch simulation software
- (2007) Kevin Pulo.
- Conference presentation at Scientific Day, ISC07: International Supercomputing Conference 2007,
Dresden, Germany, June 26, 2007.
-
[ main ]
[ show abstract ]
[ paper ]
[ software ]
- Abstract:
The configuration of parameters in simulation software is an often overlooked aspect of the development process. SimParm is a C++ framework that relieves software developers of the burden of managing configuration parameters. It has been designed to be simple, easy to use and flexible, both when defining parameters and when using them in the simulation. Plain text configuration files are supported, as well as overriding values on the command line. SimParm allows interactive real-time adjustment of parameters during the simulation, whether running locally or remotely. Furthermore, multiple users can adjust parameters, allowing collaborative exploration of the parameter space. This helps users to determine suitable parameter values for unfamiliar datasets, even when the dataset is too large to be run on the local workstation. This paper describes the design and usage of SimParm, and includes an example application to a simple mass-spring simulation of a triangular mesh.
- SimParm: Simple and flexible C++ configuration framework
- Managing the APAC NF Altix cluster
- (2005) Kevin Pulo.
- Proceedings of APAC05: The 2005 Australian Partnership for Advanced Computing (APAC) Conference and Exhibition on Advanced Computing, Grid Applications and eResearch: "Empowering Research Communities",
Gold Coast, Australia, September 26-30, 2005.
-
[ show abstract ]
[ paper ]
- Abstract:
The APAC National Facility has set system management goals of providing an environment that allows consistent, high performance for all jobs while maintaining very high utilisation. The Facility's newly installed SGI Altix cluster presents a number of challenges in terms of achieving these goals. At a minimum the topology of a cluster of large NUMA SMP nodes must be respected in scheduling and job placement decisions. Even more challenging has been the requirement to overcome deficiencies and limitations in the proprietary, closed source MPI job launch used on Altix clusters. In this paper we present the techniques and policies implemented by APAC NF on this system to ensure consistently good performance under this diverse and competitive workload. This includes issues encountered in the PBS-based batch queueing system, the SGI MPT-based MPI system and general system configuration and administration.
Software Engineering
- Evaluation of Virtual World Systems
- (2001) Kevin Pulo and Michael E. Houle.
- Proceedings of 2001 Australian Software Engineering Conference (ASWEC 2001),
Canberra, Australia, August 26 - 28, 2001, IEEE Computer Society,
pp 98 - 107.
-
[ show abstract ]
[ paper (local) ]
- Abstract:
A virtual world system is an artificial environment, created inside a computer, which mimics some aspect of the real world. These systems are multiuser, allowing many people to be present and to interact simultaneously in the virtual world. The performance of virtual world systems is important because the quality of the user's experience depends on the responsiveness of the system. This paper looks at issues involved in evaluating the performance of such multiuser virtual world systems. A flexible, object-oriented framework is presented for supporting these evaluations experimentally. As an example of its usage, this framework is applied to a real virtual world system and some results are presented and discussed.
- A Flexible Network Simulator for Multiple Server Virtual World Systems
- (2000) Kevin Pulo.
- Honours thesis, BSc (Advanced, Honours) degree, Basser Department of Computer Science, University of Sydney, Australia,
2000, Michael E. Houle, Supervisor.
-
[ main ]
[ show abstract ]
- Abstract:
A virtual world is an artificial environment, created inside a computer, which mimics some aspect of the real world. Virtual worlds are often multiuser, which means that many people may be present in the virtual world simultaneously and may interact with one another and the virtual environment. With the advent of global Internet connectivity, users from all over the real world may participate in multiuser virtual worlds and interact without regard for geographic boundaries.
However, achieving believable realism is not easy and there are several factors which can hinder a user's virtual world experience. In order to avoid these kinds of problems and provide the best possible experience, the underlying network must be designed and implemented carefully.
This thesis presents a tool which can guide the design and implementation of virtual world systems to avoid potential problems. This is achieved by simulating the system in various scenarios and evaluating the performance of the system and the potential solutions.
|