
Analyses on the University of Iowa Argon HPC require submission using a job script file. Technical details for the contents of the job script file can be found at: https://wiki.uiowa.edu/display/hpcdocs/Basic+Job+Submission. This page provides some basic details on the options for these files for High throughput jobs.

High throughput jobs run fairly short scripts many times. In my line of work, this often means replications of a simulation study.

The example in this page is a simple item response theory simulation study to examine the accuracy of model parameter recovery with a sample size of 100. For each simulation replication, we will generate data from and estimate parameters of a unidimensional two-parameter logistic item response theory model. The estimation method will be marginal maximum likelihood using the mirt package in R. Each replication of this analysis won’t take long on most computers, but we will want to run multiple replications, necessitating the use of the HPC. 

The files for this example can be found at:

This analysis requires two phases:

  1. Running a high throughput array job (a job that runs multiple times) that conducts the simulation replications. Each replication outputs replication-specific results.
  2. Running a high performance job that aggregates all replication-specific results.

The simulation job script is slightly different from those used in high performance analyses as it uses an array. The script text is:

#!/bin/bash

#####Set Scheduler Configuration Directives#####

#Name the job:
#$ -N IRT-Simulation

#Send e-mail at beginning/end/suspension of job
#$ -m bes

#E-mail address to send to
#$ -M PUT-YOUR-EMAIL-ADDRESS-HERE@uiowa.edu

#Start script in current working directory
#$ -cwd

#####End Set Scheduler Configuration Directives#####


#####Resource Selection Directives#####

#See the HPC wiki for complete resource information: https://wiki.uiowa.edu/display/hpcdocs/Argon+Cluster

#Select the queue to run in
#$ -q UI

#Request one core (not needed for this example but shown for completeness)
#$ -pe smp 1

#####End Resource Selection Directives#####

#####Job script (bash shell commands)#####

module load stack/2022.2
module load stack/2022.2-base_arch
module load r/4.2.2_gcc-9.5.0

Rscript IRTsimulationRep.R $SGE_TASK_ID

Job Script Syntax Formatting

In the job script file, the following distinctions are made:

  • Lines beginning with # are comments and are not interpreted
  • Lines beginning with #$ are called directives and tell the scheduler program the specs under which to run the analysis
  • Lines beginning with #! tell the scheduler which Linux shell to use
  • Lines beginning with none of the above are Linux commands that will be run to execute the analysis

Details about the commands in the job script file are provided below:

Linux Shell Selection

#!/bin/bash

Sets the Linux shell. If you are unfamiliar with shells, using Bash is probably best, so leave this as-is. More information on shells can be found at: https://en.wikipedia.org/wiki/Unix_shell.

Scheduler Configuration Directives

Scheduler configuration directives give details about the analysis and specifics about how the scheduler will provide information once the job is running or has finished running.

#Name the job:
#$ -N IRT-Simulation

Sets the name of the job. This is useful for jobs that take a long time, as any notification or listing of the job will include the name, making it easy to tell which jobs are running and which have stopped.

#Send e-mail at beginning/end/suspension of job
#$ -m bes

Tells the scheduler to send an email notification at the beginning (b), end (e), and suspension (s) of a job run. Remove letters from “bes” to receive fewer notifications (e.g., “s” alone sends email only if the job is suspended, which typically means it was canceled).

#E-mail address to send to
#$ -M PUT-YOUR-EMAIL-HERE@uiowa.edu

Defines the email address where notifications are to be sent. Be sure to put your email address here.

#Start script in current working directory
#$ -cwd

Tells the scheduler to start the script in the directory from which it was submitted. This avoids having to specify the full path to analysis files in the script (see the final commands below).

Resource Selection Directives

Resource selection directives instruct the scheduler to request specific types of computational resources to run the job. These resources are the types of machines that are used for the analysis.

#Select the queue to run in
#$ -q UI

Selects the queue where the job will be run. A queue is a set of jobs that are run under specific policies. Users will only have access to a few queues.

The list of queues at the University of Iowa is shown here: https://wiki.uiowa.edu/display/hpcdocs/Queues+and+Policies.

The all.q is the queue that will enable the quickest access for running high throughput jobs. As noted by the Argon HPC Documentation:

This queue encompasses all of the nodes and contains all of the available job slots. It is available to everyone with an account and there are no running job limits. However, it is a low priority queue instance on the same nodes as the higher priority investor and UI queue instances. The all.q queue is subordinate to these other queues and jobs running in it will give up the nodes they are running on when jobs in the high priority queues need them. The term we use for this is “job eviction”. Jobs running in the all.q queue are the only ones subject to this.

https://wiki.uiowa.edu/display/hpcdocs/Queues+and+Policies (The all.q queue section)

Queue selection can be made simple:

  • Use the UI queue for high performance jobs that may take some time to run. The UI queue ensures the job will be able to finish.
  • Use the all.q queue for high throughput jobs (such as simulation replications) that can be “evicted” and re-run later. The all.q queue allows more jobs to run at once.

#Request one core (not needed for this example but shown for completeness)
#$ -pe smp 1

This directs the scheduler to run the analysis on a machine with one core. A core is a processing unit. Here, only one core is requested because we will run mirt in serial (not in parallel).

For high throughput jobs, this line is often omitted.

Analysis Syntax

#####Job script (bash shell commands)#####
module load stack/2022.2
module load stack/2022.2-base_arch
module load r/4.2.2_gcc-9.5.0

Rscript IRTsimulationRep.R $SGE_TASK_ID

The analysis syntax portion contains commands that could also be run one at a time in a terminal window. The commands used here are the following:

module load stack/2022.2
module load stack/2022.2-base_arch
module load r/4.2.2_gcc-9.5.0

These commands load R so that it can be run on Argon.

Modules are programs installed on Argon that can be used for analyses. To see which modules are available, run the following command in an Argon terminal window:
module avail

Finally, we run our analysis file using the command:

Rscript IRTsimulationRep.R $SGE_TASK_ID

This command assumes the IRTsimulationRep.R file is in the same folder as the job script file (and that the job script file includes the #$ -cwd directive).

$SGE_TASK_ID is an environment variable, set by the scheduler, that passes the replication number to R. In the R script file, you will see the commandArgs() function used to capture the replication number, which is then used to set the replication-specific random number seed and the names of replication-specific output files.
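
For concreteness, here is a minimal, hypothetical sketch of what IRTsimulationRep.R might contain; the actual script in the example files may differ, and the item count and parameter distributions below are assumptions:

# Hypothetical sketch of IRTsimulationRep.R (not the actual example file)
library(mirt)

# Capture the task number passed by the job script
rep <- as.integer(commandArgs(trailingOnly = TRUE)[1])

# Replication-specific seed so each task generates different data
set.seed(rep)

# Generate data from a unidimensional 2PL model (N = 100); item count is assumed
nItems <- 20
a <- matrix(rlnorm(nItems, 0, 0.25), ncol = 1)  # discrimination parameters
d <- rnorm(nItems)                              # intercept parameters
data <- simdata(a = a, d = d, N = 100, itemtype = "dich")  # 2PL when guessing is 0

# Estimate the 2PL with marginal maximum likelihood
mod <- mirt(data, model = 1, itemtype = "2PL", verbose = FALSE)
estimates <- coef(mod, simplify = TRUE)$items

# Save replication-specific output
save(a, d, estimates, file = paste0("simResultsRep", rep, ".RData"))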

Submitting the Array Job

To submit this job, the following command is used:

qsub -t 1-10 Simulation.job

The -t 1-10 option tells the scheduler there will be 10 “tasks” to run for this job (the array). Each task is given a number from 1 through 10, which is passed to the script as $SGE_TASK_ID.

Simulation Job Completion

When all jobs have completed, you should see a series of .RData files in the job script folder. All simulation replications that successfully completed will have a file named simResultsRep[#].RData (with [#] replaced by the task number in the job array).

If this job was run in the all.q, it is possible some of the replications were “evicted” prior to completion. This happens when the owner of the HPC resources being used for a job submits their own job script.
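
If some tasks were evicted, you can check which replication numbers are missing and resubmit only those tasks. A small R sketch, assuming the 10-task array and the file naming described above:

# Find array tasks that did not produce an output file (e.g., evicted jobs)
doneFiles <- list.files(pattern = "^simResultsRep[0-9]+\\.RData$")
completed <- as.integer(gsub("[^0-9]", "", doneFiles))
missing <- setdiff(1:10, completed)
missing  # resubmit these task numbers with qsub -t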

Results Aggregation Script

Once all replication scripts have finished, the results aggregation script (https://jonathantemplin.com/wp-content/uploads/2023/08/Results.job) is used to aggregate the results across all simulation replications. This is a high performance job script; details on these scripts are available here: https://jonathantemplin.com/university-of-iowa-argon-hpc-system-job-script-file-example-for-high-performance-jobs/.
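
While the contents of Results.job are not reproduced here, the R portion of such an aggregation script could look roughly like the following sketch (the object name estimates matches the hypothetical replication script above and is an assumption):

# Hypothetical aggregation sketch (not the actual Results.job contents)
resultFiles <- list.files(pattern = "^simResultsRep[0-9]+\\.RData$")

allEstimates <- vector("list", length(resultFiles))
for (i in seq_along(resultFiles)) {
  load(resultFiles[i])            # loads the objects saved by each replication
  allEstimates[[i]] <- estimates  # 'estimates' is an assumed object name
}

# Example summary: average item parameter estimates across replications
finalResults <- Reduce(`+`, allEstimates) / length(allEstimates)

save(finalResults, file = "finalResults.RData")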

Results Aggregation Job Completion

Upon successful completion of the results aggregation job script, you will find a file named finalResults.RData. This file will contain all aggregated results from the simulation replications.
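
To inspect the aggregated results, load the file in an R session (the object names depend on what the aggregation script saved):

load("finalResults.RData")  # restores the saved objects into the workspace
ls()                        # lists the objects the file contained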
