Introduction to bioinformatics for RNA sequence analysis
This tutorial explains how Amazon cloud instances were configured for the course. This exercise is not to be completed by the students, but is provided as a reference for future course developers who wish to conduct their hands-on exercises on Amazon AWS.
A helpful introduction to AWS can be found here
In general, if no new web server is needed, you may pick an existing security group. The security group used for 2025 was “SSH/HTTP/Jupyter/Rstudio - with outbound rule”. It has the following inbound and outbound rules, which allow connections to the Jupyter and RStudio servers:
| Rule type | IP version | Type | Protocol | Port range | Source / Destination |
|---|---|---|---|---|---|
| Inbound Rule | IPv4 | Custom TCP | TCP | 8888 | 0.0.0.0/0 |
| Inbound Rule | IPv4 | Custom TCP | TCP | 8787 | 0.0.0.0/0 |
| Inbound Rule | IPv4 | HTTP | TCP | 80 | 0.0.0.0/0 |
| Inbound Rule | IPv4 | HTTPS | TCP | 443 | 0.0.0.0/0 |
| Inbound Rule | IPv6 | HTTP | TCP | 80 | ::/0 |
| Inbound Rule | IPv4 | SSH | TCP | 22 | 0.0.0.0/0 |
| Outbound Rule | IPv4 | All traffic | All | All | 0.0.0.0/0 |
2) Choose an instance type: m6a.xlarge.
3) Increase the root volume (e.g., 60GB, type: gp3) and add a second volume (e.g., 500GB, type: gp3).
4) Choose an appropriate security group (for the 2025 course, choose security group “SSH/HTTP/Jupyter/Rstudio - with outbound rule”).
5) If necessary, create a new key pair; name it and save it locally somewhere safe.
6) Review and Launch. Select ‘View Instances’. Take note of the public IP address of the newly launched instance.
chmod 400 [instructor-key].pem
ssh -i [instructor-key].pem ubuntu@[public.ip.address]
Note for TAs when setting up these instances
Usually the instances are set up in the following sequence: 1) Email instructors to request any packages/programs that need to be installed. 2) Make a draft instance with the instructions above, configure and install everything. 3) Make an AMI. 4) Make an instructor instance by launching a new Ubuntu instance with the same specifications as above, but using the AMI. 5) Inform instructors and TAs to test their code on the instructor instance, and send them the instructor key pem file. 6) Modify the draft instance as needed. 7) Once finalized, create an AMI. This AMI will be distributed to students for the course.
sudo apt-get update
sudo apt-get upgrade
sudo apt-get -y install make gcc zlib1g-dev libncurses5-dev libncursesw5-dev git cmake build-essential unzip python3-numpy python3-dev python3-pip python-is-python3 gfortran libreadline-dev default-jdk libx11-dev libxt-dev xorg-dev libxml2-dev apache2 csh ruby-full gnuplot cpanminus libssl-dev gcc g++ gsl-bin libgsl-dev apt-transport-https software-properties-common meson libvcflib-dev libjsoncpp-dev libtabixpp-dev libbz2-dev docker.io libpcre2-dev libharfbuzz-dev libfribidi-dev libfreetype6-dev libpng-dev libtiff5-dev libjpeg-dev libdbi-perl libdbd-mysql-perl libcairo2-dev
sudo ln -s /usr/include/jsoncpp/json/ /usr/include/json
sudo timedatectl set-timezone America/New_York
sudo usermod -aG docker ubuntu
Then exit the shell and log back into the instance.
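After logging back in, it can be worth confirming that the docker group change took effect before moving on. This sketch only inspects the current login session; the exact group list will depend on the instance.

```shell
# 'id -nG' lists the group names of the current session; the docker
# group only appears after a fresh login following 'usermod -aG'.
if id -nG | grep -qw docker; then
  echo "docker group active"
else
  echo "docker group not active yet - log out and back in"
fi
```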
We first need to set up the additional storage volume that we added when we created the instance.
# Create mountpoint for additional storage volume
cd /
sudo mkdir workspace
# Mount ephemeral storage
cd
sudo mkfs -t ext4 /dev/nvme1n1
sudo mount /dev/nvme1n1 /workspace
In order to make the workspace volume persistent, we need to edit the /etc/fstab file. AWS provides instructions for how to do this here.
# Make ephemeral storage mounts persistent
# See http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html for guidance on setting up fstab records for AWS
# Get the UUID from: sudo lsblk -f
UUID=$(sudo lsblk -f | grep nvme1n1 | awk '{print $4}')
# To double-check, run 'echo $UUID' to see the UUID.
# Then add that UUID to /etc/fstab:
echo -e "LABEL=cloudimg-rootfs / ext4 defaults,discard 0 0\nUUID=$UUID /workspace ext4 defaults,nofail 0 2" | sudo tee /etc/fstab
# Run 'less /etc/fstab' to see if the new line has been added.
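The UUID extraction above can be sanity-checked without touching a real volume. This sketch runs the same awk field extraction against a canned lsblk -f style line; the UUID shown is a made-up example, not a real device.

```shell
# A sample line in the layout 'NAME FSTYPE FSVER UUID ...' as printed
# by lsblk -f; the fourth whitespace-separated field is the UUID.
sample='nvme1n1 ext4   1.0   1b2c3d4e-aaaa-bbbb-cccc-1234567890ab   /workspace'
UUID=$(echo "$sample" | awk '{print $4}')
# Build the fstab record exactly as in the step above.
echo "UUID=$UUID /workspace ext4 defaults,nofail 0 2"
```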
# Change permissions on required drives
sudo chown -R ubuntu:ubuntu /workspace
# Create symlink to the added volume in your home directory
cd ~
ln -s /workspace workspace
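The symlink pattern above can be exercised in a throwaway directory, which is handy when checking the commands on a machine without the real /workspace volume. All paths here are illustrative stand-ins.

```shell
# Create a stand-in 'workspace' directory and link to it, mirroring
# the 'ln -s /workspace workspace' step above.
demo=$(mktemp -d)
mkdir "$demo/workspace"
ln -s "$demo/workspace" "$demo/home_workspace"
readlink "$demo/home_workspace"   # prints the link target
rm -rf "$demo"
```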
- All tools should be installed locally (e.g., /home/ubuntu/bin/) in a different location from where students will install tools in their exercises.
Run man ls, and if the problem exists, add the following to .bashrc:
export MANPAGER=less
Tools are installed in /home/ubuntu/bin/ and each install location is exported to the $PATH variable for easy access.
mkdir ~/bin
cd bin
WORKSPACE=/home/ubuntu/workspace
HOME=/home/ubuntu
wget https://github.com/samtools/samtools/releases/download/1.18/samtools-1.18.tar.bz2
bunzip2 samtools-1.18.tar.bz2
tar -xvf samtools-1.18.tar
cd samtools-1.18
make
./samtools
#add the following line to .bashrc
export PATH=/home/ubuntu/bin/samtools-1.18:$PATH
export SAMTOOLS_ROOT=/home/ubuntu/bin/samtools-1.18
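The export lines above rely on PATH being searched left to right, so a prepended directory wins over anything installed system-wide. This sketch demonstrates the mechanism with a stub executable in a temp directory rather than a real samtools build.

```shell
# Make a stub 'samtools' in a temp dir and prepend that dir to PATH;
# 'command -v' then resolves to the stub, as it would to a local build.
tooldir=$(mktemp -d)
printf '#!/bin/sh\necho stub-samtools\n' > "$tooldir/samtools"
chmod +x "$tooldir/samtools"
PATH="$tooldir:$PATH"
command -v samtools   # resolves to the stub in $tooldir
rm -rf "$tooldir"
```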
cd ~/bin
git clone https://github.com/genome/bam-readcount
cd bam-readcount
mkdir build
cd build
cmake ..
make
export PATH=/home/ubuntu/bin/bam-readcount/build/bin:$PATH
uname -m
cd ~/bin
curl -s https://cloud.biohpc.swmed.edu/index.php/s/oTtGWbWjaxsQ2Ho/download > hisat2-2.2.1-Linux_x86_64.zip
unzip hisat2-2.2.1-Linux_x86_64.zip
cd hisat2-2.2.1
./hisat2 -h
export PATH=/home/ubuntu/bin/hisat2-2.2.1:$PATH
cd ~/bin
wget http://ccb.jhu.edu/software/stringtie/dl/stringtie-2.2.1.tar.gz
tar -xzvf stringtie-2.2.1.tar.gz
cd stringtie-2.2.1
make release
export PATH=/home/ubuntu/bin/stringtie-2.2.1:$PATH
cd ~/bin
wget http://ccb.jhu.edu/software/stringtie/dl/gffcompare-0.12.6.Linux_x86_64.tar.gz
tar -xzvf gffcompare-0.12.6.Linux_x86_64.tar.gz
cd gffcompare-0.12.6.Linux_x86_64/
./gffcompare
export PATH=/home/ubuntu/bin/gffcompare-0.12.6.Linux_x86_64:$PATH
pip install HTSeq
# to check version of HTSeq
# pip show HTSeq
TopHat will not install if the version of OpenSSL is too old.
To get version:
openssl version
If the version is OpenSSL 1.1.1f, then it needs to be updated using the following steps.
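The version check can also be scripted, which avoids eyeballing the openssl version output. A minimal sketch, using sort -V to compare version strings (the hard-coded "1.1.1f" below stands in for whatever openssl version reports):

```shell
# have: the installed version, e.g. from: openssl version | awk '{print $2}'
have="1.1.1f"
need="1.1.1g"
# sort -V orders version strings; if the required version sorts first,
# the installed version is at least as new.
if [ "$(printf '%s\n%s\n' "$need" "$have" | sort -V | head -n1)" = "$need" ]; then
  echo "OpenSSL is new enough"
else
  echo "OpenSSL needs updating"
fi
```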
cd ~/bin
wget https://www.openssl.org/source/openssl-1.1.1g.tar.gz
tar -zxf openssl-1.1.1g.tar.gz && cd openssl-1.1.1g
./config
make
make test
sudo mv /usr/bin/openssl ~/tmp #in case install goes wrong
sudo make install
sudo ln -s /usr/local/bin/openssl /usr/bin/openssl
sudo ldconfig
Again, from the terminal issue the command:
openssl version
Your output should be as follows:
OpenSSL 1.1.1g 21 Apr 2020
Then create a ~/.wgetrc file (using vim or nano) and add to it:
ca_certificate=/etc/ssl/certs/ca-certificates.crt
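If you prefer not to open an editor, the same entry can be appended from the command line. This sketch writes to a temp file so it is safe to run anywhere; on the instance you would target ~/.wgetrc instead.

```shell
# Append the CA certificate setting to a wgetrc-style file
# ($rcfile is a temp stand-in for ~/.wgetrc).
rcfile=$(mktemp)
echo 'ca_certificate=/etc/ssl/certs/ca-certificates.crt' >> "$rcfile"
grep ca_certificate "$rcfile"   # confirm the line is present
rm -f "$rcfile"
```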
cd ~/bin
wget https://ccb.jhu.edu/software/tophat/downloads/tophat-2.1.1.Linux_x86_64.tar.gz
tar -zxvf tophat-2.1.1.Linux_x86_64.tar.gz
cd tophat-2.1.1.Linux_x86_64
./gtf_to_fasta
export PATH=/home/ubuntu/bin/tophat-2.1.1.Linux_x86_64:$PATH
Note: a couple of arguments are only supported in kallisto legacy versions (versions before 0.50.0). Also, how_are_we_stranded_here uses kallisto == 0.44.x. Thus, the installation steps below are for one of the legacy versions. If you run into problems, consider using a more recent version.
cd ~/bin
wget https://github.com/pachterlab/kallisto/releases/download/v0.44.0/kallisto_linux-v0.44.0.tar.gz
tar -zxvf kallisto_linux-v0.44.0.tar.gz
cd kallisto_linux-v0.44.0
./kallisto
export PATH=/home/ubuntu/bin/kallisto_linux-v0.44.0:$PATH
sudo apt-get install -y default-jre
cd ~/bin
sudo wget https://www.bioinformatics.babraham.ac.uk/projects/fastqc/fastqc_v0.12.1.zip
sudo unzip fastqc_v0.12.1.zip
cd FastQC
sudo chmod 755 fastqc
./fastqc --help
export PATH=/home/ubuntu/bin/FastQC:$PATH
cd /home/ubuntu/bin
wget http://opengene.org/fastp/fastp
chmod a+x ./fastp
./fastp
# Add the fastp install directory to .bashrc (fastp is a single binary
# downloaded into /home/ubuntu/bin)
export PATH=/home/ubuntu/bin:$PATH
cd ~/bin
pip3 install multiqc
export PATH=/home/ubuntu/.local/bin:$PATH
multiqc --help
cd ~/bin
wget https://github.com/broadinstitute/picard/releases/download/2.26.4/picard.jar -O picard.jar
java -jar ~/bin/picard.jar
export PICARD=/home/ubuntu/bin/picard.jar
sudo apt install flexbar
cd ~/bin
git clone https://github.com/griffithlab/regtools
cd regtools/
mkdir build
cd build/
cmake ..
make
./regtools
export PATH=/home/ubuntu/bin/regtools/build:$PATH
pip3 install RSeQC
~/.local/bin/read_GC.py
export PATH=~/.local/bin/:$PATH
cd ~/bin
mkdir bedops_linux_x86_64-v2.4.41
cd bedops_linux_x86_64-v2.4.41
wget -c https://github.com/bedops/bedops/releases/download/v2.4.41/bedops_linux_x86_64-v2.4.41.tar.bz2
tar -jxvf bedops_linux_x86_64-v2.4.41.tar.bz2
./bin/bedops
export PATH=~/bin/bedops_linux_x86_64-v2.4.41/bin:$PATH
cd ~/bin
mkdir gtfToGenePred
cd gtfToGenePred
wget -c http://hgdownload.cse.ucsc.edu/admin/exe/linux.x86_64/gtfToGenePred
chmod a+x gtfToGenePred
./gtfToGenePred
export PATH=/home/ubuntu/bin/gtfToGenePred:$PATH
cd ~/bin
mkdir genePredtoBed
cd genePredtoBed
wget -c http://hgdownload.cse.ucsc.edu/admin/exe/linux.x86_64/genePredToBed
chmod a+x genePredToBed
./genePredToBed
export PATH=/home/ubuntu/bin/genePredtoBed:$PATH
# Note: the directory name has a lowercase 't' ('genePredtoBed'), while the binary itself is 'genePredToBed'.
# genePredToBed
pip3 install git+https://github.com/kcotto/how_are_we_stranded_here.git
check_strandedness
cd ~/bin
wget -O cellranger-9.0.1.tar.gz "https://cf.10xgenomics.com/releases/cell-exp/cellranger-9.0.1.tar.gz?Expires=1761476807&Key-Pair-Id=APKAI7S6A5RYOXBWRPDA&Signature=EzblHLl1V4c8zY~f1gqgkJDfwXHdpXeUUsoqiipDGTTRc1cDXifB5CZdV2te0aGah4VoEUssh8ERdRpmLzMNZzSBdsjmX9t6FE3PwZ83c-cOKAVJK3-v7z8GhMH9HZMqVxEb3w6SwztkZmipGhCyG95yT2fv-sNQZssJmUEzx8Wnc4t69iS~u0uMrd4rj9EDkyw6CYCfkRGGoxCclUAyPqMBQGRbcCogVlSoVk6sc~UsD5vpXXoxlgoVRrThdEZ4DpUUtInLST8cvtS127nrWIJcJ1e1Jk8dSndpKvAHEwTGB~U21oyiOb8lZhXVsY7VCXKsvivRKaRXWj0kh8whOA__"
tar -xzvf cellranger-9.0.1.tar.gz
export PATH=/home/ubuntu/bin/cellranger-9.0.1:$PATH
sudo apt-get install tabix
cd ~/bin
git clone https://github.com/lh3/bwa.git
cd bwa
make
export PATH=/home/ubuntu/bin/bwa:$PATH
#bwa mem #to call bwa
cd ~/bin
wget https://github.com/arq5x/bedtools2/releases/download/v2.31.0/bedtools-2.31.0.tar.gz
tar -zxvf bedtools-2.31.0.tar.gz
cd bedtools2
make
export PATH=/home/ubuntu/bin/bedtools2/bin:$PATH
cd ~/bin
wget https://github.com/samtools/bcftools/releases/download/1.18/bcftools-1.18.tar.bz2
bunzip2 bcftools-1.18.tar.bz2
tar -xvf bcftools-1.18.tar
cd bcftools-1.18
make
./bcftools
export PATH=/home/ubuntu/bin/bcftools-1.18:$PATH
cd ~/bin
wget https://github.com/samtools/htslib/releases/download/1.18/htslib-1.18.tar.bz2
bunzip2 htslib-1.18.tar.bz2
tar -xvf htslib-1.18.tar
cd htslib-1.18
make
sudo make install
#htsfile --help
export PATH=/home/ubuntu/bin/htslib-1.18:$PATH
cd ~/bin
git clone https://github.com/brentp/peddy
cd peddy
pip install -r requirements.txt
pip install --editable .
cd ~/bin
wget https://github.com/brentp/slivar/releases/download/v0.3.0/slivar
chmod +x ./slivar
cd ~/bin
wget https://github.com/quinlan-lab/STRling/releases/download/v0.5.2/strling
chmod +x ./strling
sudo apt install freebayes
sudo apt install libvcflib-tools libvcflib-dev
cd ~/bin
wget https://repo.anaconda.com/archive/Anaconda3-2023.09-0-Linux-x86_64.sh
bash Anaconda3-2023.09-0-Linux-x86_64.sh
Press Enter to review the license agreement, then press and hold Enter to scroll.
Enter “yes” to agree to the license agreement.
Save the installation to /home/ubuntu/bin/anaconda3 and choose yes to initializing Anaconda3.
Add to .bashrc:
export PATH=/home/ubuntu/bin/anaconda3/bin:$PATH
To see location of conda executable: which conda
Note: install VEP in the workspace, because its cache files take a lot of space (~25G).
This section describes dependencies for VEP 110, used in this course for variant annotation. When running the VEP installer, follow the prompts as specified:
cd ~/workspace
sudo git clone https://github.com/Ensembl/ensembl-vep.git
cd ensembl-vep
sudo perl -MCPAN -e'install "LWP::Simple"'
sudo perl INSTALL.pl --CACHEDIR ~/workspace/ensembl-vep/
export PATH=/home/ubuntu/workspace/ensembl-vep:$PATH
#vep --help
Follow this website and this website. Note: the old jupyter notebook was split into jupyter-server and nbclassic. The steps to set up Jupyter on EC2 in the first link have therefore been adapted, based on suggestions in the second link, to accommodate this migration.
First, we need to add Jupyter to the system’s path (you can check whether it is already on the path by running: which python; if no path is returned, you need to add the path). To add Jupyter functionality to your terminal, add the following line to your .bashrc file:
export PATH=/home/ubuntu/bin/anaconda3/bin:$PATH
Then you need to source the .bashrc for changes to take effect.
source .bashrc
We then need to create our Jupyter configuration file. In order to create that file, you need to run:
jupyter notebook --generate-config
Optional: After creating your configuration file, you will need to generate a password for your Jupyter Notebook using ipython:
Enter the IPython command line:
ipython
Now follow these steps to generate your password:
from notebook.auth import passwd
passwd()
You will be prompted to enter and re-enter your password. IPython will then generate a hash output, COPY THIS AND SAVE IT FOR LATER. We will need this for our configuration file.
Run exit in order to exit IPython.
Next, go into your jupyter config file (/home/ubuntu/.jupyter/jupyter_notebook_config.py):
cd .jupyter
vim jupyter_notebook_config.py
And add the following code at the beginning of the document:
c = get_config() #add this line if it's not already in jupyter_notebook_config.py
c.ServerApp.ip = '0.0.0.0'
#c.ServerApp.password = u'YOUR PASSWORD HASH' #uncomment this line if decide to use password
c.ServerApp.port = 8888
Optional: We then need to create a directory for your notebooks. In order to make a folder to store all of your Jupyter Notebooks simply run:
mkdir Notebooks
# You can call this folder anything, for this example we call it `Notebooks`.
After the previous step, you should be ready to run your notebook and access your EC2 server. To run your Notebook simply run the command:
jupyter nbclassic
# to open jupyter lab on the instance
jupyter lab
From there you should be able to access your server by going to:
http://(your AWS public dns):8888/
or
http://(your AWS public dns):8888/tree?token=... (the token appears in the message generated when running ‘jupyter nbclassic’)
(Note: if you ever run into problems accessing the server, double-check whether you are using http or https. If you didn’t add an https port in the security group configuration step when creating the instance, you won’t be able to access the server with https.)
Follow this guide from the CRAN website to install R version 4.4.
# update indices
sudo apt update -qq
# install two helper packages we need
sudo apt install --no-install-recommends software-properties-common dirmngr
# add the signing key (by Michael Rutter) for these repos
# To verify key, run gpg --show-keys /etc/apt/trusted.gpg.d/cran_ubuntu_key.asc
# Fingerprint: E298A3A825C0D65DFD57CBB651716619E084DAB9
wget -qO- https://cloud.r-project.org/bin/linux/ubuntu/marutter_pubkey.asc | sudo tee -a /etc/apt/trusted.gpg.d/cran_ubuntu_key.asc
# add the R 4.0 repo from CRAN -- adjust 'focal' to 'groovy' or 'bionic' as needed
sudo add-apt-repository "deb https://cloud.r-project.org/bin/linux/ubuntu $(lsb_release -cs)-cran40/"
#install R and its dependencies
sudo apt install --no-install-recommends r-base
Note: linking the R-patched bin directory into your PATH may cause weird things to happen, such as man pages or git log failing to display. This can be circumvented by directly linking the R* executables (R, RScript, RCmd, etc.) into a PATH directory.
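The per-executable linking suggested above can be sketched as follows. Stub files in temp directories stand in for the real R installation and the PATH directory; on the instance you would link from the R-patched bin directory into somewhere like ~/bin.

```shell
# Link only the R executables into a PATH directory, leaving other
# binaries in the R-patched bin directory out of PATH entirely.
rbin=$(mktemp -d)    # stands in for the R-patched bin directory
pathdir=$(mktemp -d) # stands in for a directory already on PATH
for exe in R Rscript; do
  printf '#!/bin/sh\n' > "$rbin/$exe"
  chmod +x "$rbin/$exe"
  ln -s "$rbin/$exe" "$pathdir/$exe"
done
ls "$pathdir"   # only R and Rscript appear
rm -rf "$rbin" "$pathdir"
```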
For this tutorial we require:
R
install.packages(c("devtools","dplyr","gplots","ggplot2","sctransform","Seurat","RColorBrewer","ggthemes","cowplot","data.table","Rtsne","gridExtra","UpSetR","tidyverse"),repos="http://cran.us.r-project.org")
quit(save="no")
Note: if asked whether you want to install in a personal library, type ‘yes’.
For this tutorial we require:
R
# Install Bioconductor
if (!requireNamespace("BiocManager", quietly = TRUE))
install.packages("BiocManager")
BiocManager::install(c("genefilter","ballgown","edgeR","GenomicRanges","rhdf5","biomaRt","scran","sva","gage","org.Hs.eg.db","DESeq2","apeglm","clusterProfiler","enrichplot","pathview"))
quit(save="no")
R
install.packages("devtools")
devtools::install_github("pachterlab/sleuth")
quit(save="no")
(Note: in the cshl2023 version of the course, install GATK 4.2.1.0 instead of a more recent version, since it works with the current Java version - Java 11.)
cd ~/bin
wget https://github.com/broadinstitute/gatk/releases/download/4.2.1.0/gatk-4.2.1.0.zip
unzip gatk-4.2.1.0.zip
export PATH=/home/ubuntu/bin/gatk-4.2.1.0:$PATH #add to .bashrc
gatk --help
gatk --list
cd ~/bin
curl -L https://github.com/lh3/minimap2/releases/download/v2.26/minimap2-2.26_x64-linux.tar.bz2 | tar -jxvf -
./minimap2-2.26_x64-linux/minimap2
export PATH=/home/ubuntu/bin/minimap2-2.26_x64-linux:$PATH #add to .bashrc
minimap2 --help
pip install NanoPlot
#which NanoPlot
#NanoPlot -h
cd ~/bin
curl -L -k -o VarScan.v2.4.2.jar https://github.com/dkoboldt/varscan/releases/download/2.4.2/VarScan.v2.4.2.jar
java -jar ~/bin/VarScan.v2.4.2.jar
conda create -n fasplit_env bioconda::ucsc-fasplit
source activate fasplit_env
conda activate fasplit_env
#faSplit
conda deactivate
To prevent dependency conflicts, install the packages for this lab in a conda environment.
Packages:
conda create --name snapatac2_env python=3.11
source activate snapatac2_env
conda activate snapatac2_env
pip install snapatac2
#pip show snapatac2
pip install scanpy
pip install MACS2
pip install --user magic-impute
pip install deeptools
conda deactivate
To run a virtual environment in jupyter nbclassic, there are a few extra setup steps:
#Step 1: Activate the Conda Environment of interest:
conda activate snapatac2_env
#Step 2: Install Ipykernel:
conda install ipykernel
#Step 3: Create a Jupyter Kernel for the environment
python -m ipykernel install --user --name=snapatac2_env_kernel
Then run jupyter notebook as usual:
jupyter nbclassic
Access the server by adding to the browser: http://(your AWS public dns):8888/. We can either create a notebook using the desired environment kernel, or create a notebook using the default ipykernel and change the kernel within the notebook itself.
R
#if (!require("BiocManager", quietly = TRUE))
#install.packages("BiocManager")
BiocManager::install("ATACseqQC")
quit(save="no")
To prevent dependency conflicts, install the packages for this lab in a conda environment.
Packages:
conda create --name scRNAseq_env python=3.11
source activate scRNAseq_env
conda activate scRNAseq_env
pip install 'scanpy[leiden]'
pip install gtfparse==1.2.0
pip install scrublet
pip install fast_matrix_market
pip install harmony-pytorch
conda install ipykernel
python -m ipykernel install --user --name=scRNAseq_env_kernel
conda deactivate
Packages:
#pyenv
#note: after installation, add pyenv configs in .bashrc (see below)
cd ~/bin
curl https://pyenv.run | bash
#virtualenv
sudo apt install pipx
pipx install virtualenv
#beautifulsoup4
pip install beautifulsoup4
#requests
pip install requests
#vcfpy
#installation note for vcfpy: https://github.com/KarchinLab/open-cravat/issues/98
conda install -c bioconda open-cravat
pip install vcfpy
#vcftools
#installation note: https://github.com/vcftools/vcftools/issues/188
cd ~/bin
sudo apt-get install autoconf
git clone https://github.com/vcftools/vcftools.git
cd vcftools
./autogen.sh
./configure
make
sudo make install
#pysam
#conda config --show channels #to see if 3 needed channels are already configured in the conda environment. if not, add:
#conda config --add channels defaults
#conda config --add channels conda-forge
#conda config --add channels bioconda
#conda install pysam
pip install pysam
#installed at: ./bin/anaconda3/lib/python3.11/site-packages
#civicpy
pip install civicpy
#pandas
pip install pandas
#jq
sudo apt-get install jq
Add the pyenv configs to .bashrc:
## pyenv configs
export PYENV_ROOT="/home/ubuntu/bin/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
if command -v pyenv 1>/dev/null 2>&1; then
eval "$(pyenv init -)"
fi
For the 2021 version of the course, rather than exporting each tool’s individual path, I moved all of the subdirs to ~/src and copied all of the binaries from there to ~/bin so that PATH is less complex.
We will start an apache2 service and serve the contents of the students home directories for convenience. This allows easy download of files to their local hard drives, direct loading in IGV by url, etc. Note that when launching instances a security group will have to be selected/modified that allows http access via port 80.
sudo vim /etc/apache2/apache2.conf
Add the following content to apache2.conf
<Directory /workspace>
Options Indexes FollowSymLinks
AllowOverride None
Require all granted
</Directory>
sudo vim /etc/apache2/sites-available/000-default.conf
Change document root in 000-default.conf to ‘/workspace’
DocumentRoot /workspace
sudo service apache2 restart
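The DocumentRoot change can also be scripted rather than edited by hand. This sketch edits a throwaway copy of the config; on the instance you would run the sed command (with sudo) against /etc/apache2/sites-available/000-default.conf before restarting apache2.

```shell
# Rewrite the DocumentRoot line in place with sed (GNU sed, as on Ubuntu).
conf=$(mktemp)
echo 'DocumentRoot /var/www/html' > "$conf"   # default Ubuntu value
sed -i 's|^\(\s*DocumentRoot\s\+\).*|\1/workspace|' "$conf"
grep DocumentRoot "$conf"   # DocumentRoot /workspace
rm -f "$conf"
```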
To check if the server works, type into a browser of choice: http://[public ip address of ec2 instance]. You should see the content within /workspace.
Finally, save the instance as a new AMI by right clicking the instance and clicking on “Create Image”. Enter an appropriate name and description and then save. If desired, you may choose at this time to include the workspace snapshot in the AMI to avoid having to explicitly attach it later at launching of AMI instances. Change the permissions of the AMI to “public” if you would like it to be listed under the Community AMIs. Copy the AMI to any additional regions where you would like it to appear in Community AMI searches.
From AWS Console select Services -> IAM. Go to Users, Create User, specify a user name, and Create. Download credentials to a safe location for later reference if needed. Select the new user and go to Security Credentials -> Manage Password -> ‘Assign a Custom Password’. Go to Groups -> Create a New Group, specify a group name and Next. Attach a policy to the group. In this case we give all EC2 privileges but no other AWS privileges by specifying “AmazonEC2FullAccess”. Hit Next, review and then Create Group. Select the Group -> Add Users to Group, select your new user to add it to the new group.
ssh -i cshl_2025_student.pem ubuntu@[public.ip.address]
Rather than handing out IP addresses for each student instance to each student, you can instead set up DNS records to redirect from a more human-readable name to the IP address. After spinning up all student instances, use a service like http://dyn.com (or http://entrydns.net, etc.) to create hostnames like
Currently, all miscellaneous data files, annotations, etc. are hosted on an ftp server at the Genome Institute. In the future more data files could be pre-loaded onto the EBS snapshot.