
Commit

Merge remote-tracking branch 'upstream/master' into mkdocs-strict
joonas-somero committed Feb 11, 2025
2 parents 4cf6ca5 + 57fc77f commit 1f6ce27
Showing 95 changed files with 1,704 additions and 838 deletions.
2 changes: 1 addition & 1 deletion csc-overrides/assets/javascripts/constants.js
@@ -21,7 +21,7 @@ const dropdownSites = [
active: false
},
{
name: 'My CSC',
name: 'MyCSC',
description: 'customer portal',
url: 'https://my.csc.fi',
external: true,
2 changes: 1 addition & 1 deletion csc-overrides/partials/announcement.html
@@ -1,6 +1,6 @@
{# Put the announcement, in HTML, right under this comment! #}
<p>
&#128295; No, it's not just you! The navigation sidebars <em>do</em> have a new look. <a href="/support/wn/training-new/#visual-changes-for-docs-csc">Click here for details.</a> &#128295;
&#128227; Due to licensing changes, <a href="/apps/maestro/">Schrödinger Maestro</a> versions older than 2023.1 will no longer work at CSC after 13.3.2025! <a href="/support/wn/apps-new/#maestro-versions-older-than-20231-will-not-work-after-1332025">Click here for details.</a> &#128227;
</p>
{# Remember
- to set the announcement_visible to true in mkdocs.yml
1 change: 1 addition & 0 deletions docs/apps/by_discipline.md
@@ -19,6 +19,7 @@
* [CD-hit](cd-hit.md) Sequence clustering and redundancy removal tool
* [Chipster](https://chipster.csc.fi/) Easy-to-use analysis platform for RNA-seq, single cell RNA-seq and other NGS data
* [Chipster_genomes](chipster_genomes.md) Tool to download aligner indexes used by Chipster to Puhti
* [CryoSPARC](cryosparc.md) Tool to analyse Cryo-EM data on Puhti/Mahti
* [Cutadapt](cutadapt.md) Trimming high-throughput sequencing reads
* [Diamond](diamond.md) Sequence similarity search tool for proteins and nucleotides
* [Discovery Studio](discovery-studio.md) Protein modeling environment
4 changes: 2 additions & 2 deletions docs/apps/cirq-on-iqm.md
@@ -15,7 +15,7 @@ Currently supported [cirq-on-iqm](https://iqm-finland.github.io/cirq-on-iqm/) ve

| Version | Module | LUMI | Notes |
|:--------|:-------------------------------------|:-----:|-----------------|
| 15.1 | `helmi_cirq/15.1` | X | |
| 15.2 | `helmi_cirq/15.2` | X | |


All modules are based on Tykky using LUMI-container-wrapper.
@@ -71,7 +71,7 @@ Example batch script for running a quantum job on Helmi:
module use /appl/local/quantum/modulefiles
module load helmi_cirq

python -u first_quantum_job.py
python -u quantum_job.py
```

Submit the script with `sbatch <script_name>.sh`.
2 changes: 1 addition & 1 deletion docs/apps/cp2k.md
@@ -91,7 +91,7 @@ double the number of cores the calculation should be at least 1.5 times faster.
module load gcc/13.2.0 openmpi/5.0.5
module load cp2k/2024.2

srun cp2k.popt H2O-64.inp > H2O-64.out
srun cp2k.psmp H2O-64.inp > H2O-64.out
```

=== "Mahti (mixed MPI/OpenMP)"
46 changes: 46 additions & 0 deletions docs/apps/cryosparc.md
@@ -0,0 +1,46 @@
---
tags:
- Other
system:
- www-puhti
- www-mahti
---

# CryoSPARC

CryoSPARC (Cryo-EM Single Particle Ab-Initio Reconstruction and Classification) is a state-of-the-art scientific software platform for processing cryo-electron microscopy (cryo-EM) single-particle analysis data. It is used in research and drug discovery to solve 3D structures of biological specimens, such as soluble and membrane proteins and their complexes, viruses, and nucleic acids. It can also process negative-stain electron microscopy data.


## Available

The software can be installed on Puhti and Mahti. CSC recommends Mahti for CryoSPARC because of the software's large scratch disk space requirements.


## License

CryoSPARC has non-profit and commercial licensing options. The software is free of charge for non-profit academic use, but users must request [a licence key](https://cryosparc.com/download/) from the CryoSPARC home page. Please consult Structura Biotechnology Inc. (<sales@structura.bio>) for commercial usage.


## Installation

Note that every user needs to install their own instance of CryoSPARC; shared installations are not recommended. Request port numbers for your CryoSPARC usage by sending an e-mail to <servicedesk@csc.fi>. A port number and login node will be reserved for you on Puhti and/or Mahti.

The CryoSPARC installation tar file contains over 160k files, which exceeds the default file quota (100k) of the /projappl disk area, so you need to apply for an extension to the default quota when installing CryoSPARC. CSC maintains a centralised installation of the CryoSPARC worker: if you follow CSC's internal instructions and use the correct lane templates, you do not need to install the worker at all. The internal installation instructions for Puhti and Mahti are available at /appl/soft/bio/cryosparc/documentation/cryoSPARC_at_CSC.pdf.

It is helpful to set up passwordless login for CryoSPARC usage. Please consult the [CSC documentation](../computing/connecting/index.md) on setting up SSH keys.
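
For example, a minimal sketch of creating an SSH key pair on your local machine and copying the public key to Puhti (the key type and host are illustrative; follow the linked SSH key instructions for the authoritative steps):

```bash
# On your local machine: generate an ed25519 key pair (accept the default path, set a passphrase)
ssh-keygen -t ed25519

# Copy the public key to the supercomputer; replace <username> with your CSC username.
# If password login is not available, add the key via the method described in the SSH key documentation instead.
ssh-copy-id -i ~/.ssh/id_ed25519.pub <username>@puhti.csc.fi
```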


!!! note ""
CryoSPARC users should not use the web interfaces to log in to Puhti/Mahti, as those assign login nodes randomly. Please note that each user is assigned a specific login node with a specific port range for CryoSPARC usage.
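
As an illustration only (the login node and port below are placeholders; use the values reserved for you), the CryoSPARC web interface is typically reached through an SSH tunnel from your local machine:

```bash
# Forward the CryoSPARC base port from your assigned login node to your local machine.
# Replace <username>, <assigned-login-node> and 39000 with the values reserved for you.
ssh -L 39000:localhost:39000 <username>@<assigned-login-node>.csc.fi

# Then open http://localhost:39000 in your local browser.
```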


## References

Please cite all relevant publications, including the one below:

> Punjani, A., Rubinstein, J. L., Fleet, D. J. & Brubaker, M. A. cryoSPARC: algorithms for rapid unsupervised cryo-EM structure determination. Nature Methods 14 (3), 290–296 (2017).


## More information

- [CryoSPARC home page](https://cryosparc.com/)
- [CryoSPARC official installation instructions](https://guide.cryosparc.com/setup-configuration-and-management/how-to-download-install-and-configure)
- [CryoSPARC documentation](https://guide.cryosparc.com/)
4 changes: 2 additions & 2 deletions docs/apps/maestro.md
@@ -25,8 +25,8 @@ self-learning materials.

## Available

* Puhti: 2023.1, 2023.2, 2023.3, 2023.4, 2024.1, 2024.2, 2024.3, 2024.4
* Mahti: 2023.1, 2023.2, 2023.3, 2023.4, 2024.1, 2024.2, 2024.3, 2024.4
* Puhti: 2023.2, 2023.3, 2023.4, 2024.1, 2024.2, 2024.3, 2024.4, 2025.1
* Mahti: 2023.2, 2023.3, 2023.4, 2024.1, 2024.2, 2024.3, 2024.4, 2025.1

A two-year cleaning cycle is applied on the Maestro modules on CSC supercomputers.
Specifically, this means that module versions older than two years will be removed.
112 changes: 72 additions & 40 deletions docs/apps/molpro.md
@@ -9,62 +9,94 @@ MOLPRO is a software package geared towards accurate ab initio quantum chemistry

## Available

- Puhti: 2024.1
- Puhti: 2024.3
- Mahti: 2024.3

## License

- The use of the software is restricted to non-commercial research.

## Usage

Initialise MOLPRO on Puhti:
Initialise MOLPRO on Puhti or Mahti:

```bash
module load molpro/2024.1
module load molpro/2024.3
```

Molpro has been built with the Global Arrays toolkit (`--with-mpi-pr`) that allocates one helper process per node for parallel MPI runs.

### Example batch script for Puhti using MPI parallelization

```bash
#!/bin/bash
#SBATCH --partition=test
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=40 # MPI tasks per node
#SBATCH --account=<project> # insert here the project to be billed
#SBATCH --time=00:10:00 # time as `hh:mm:ss`

module load molpro/2024.1

export MOLPRO_TMP=$PWD/MOLPRO_TMP_$SLURM_JOB_ID
mkdir -p $MOLPRO_TMP

$MOLPROP -d$MOLPRO_TMP -I$MOLPRO_TMP -W$PWD test.com
rm -rf $MOLPRO_TMP
```
Molpro has been built with the Global Arrays toolkit (`--with-mpi-pr`) that allocates one helper process per node for parallel MPI runs.

!!! info "Note"
Particularly some of the wavefunction-based electron correlation methods can be very disk I/O intensive. Such jobs benefit from using the [fast local storage](../computing/running/creating-job-scripts-puhti.md#local-storage) on Puhti. Using local disk for such jobs will also reduce the load on the Lustre parallel file system.
Although some parts of the code support shared memory parallelism (OpenMP), its use is not generally recommended.

### Example batch script for Puhti using MPI parallelization and local disk (NVMe)
### Example batch scripts

```bash
#!/bin/bash
#SBATCH --partition=small
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=40 # MPI tasks per node
#SBATCH --account=<project> # insert here the project to be billed
#SBATCH --time=00:10:00 # time as `hh:mm:ss`
#SBATCH --gres=nvme:100 # requested local disk space in GB

module load molpro/2024.1
export MOLPRO_TMP=$LOCAL_SCRATCH/$SLURM_JOB_ID
mkdir -p $MOLPRO_TMP

$MOLPROP -d$MOLPRO_TMP -I$MOLPRO_TMP -W$PWD test.com
rm -rf $MOLPRO_TMP
```
!!! info "Note"
Wave function-based correlation methods, both single- and multi-reference, often create a
substantial amount of disk I/O. In order to achieve maximal performance for the job and to
avoid excess load on the Lustre parallel file system it is advisable to use the local disk.

=== "Puhti"

```bash
#!/bin/bash
#SBATCH --partition=test
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=40 # MPI tasks per node
#SBATCH --account=yourproject # insert here the project to be billed
#SBATCH --time=00:15:00 # time as `hh:mm:ss`
module purge
module load molpro/2024.3

export MOLPRO_TMP=$PWD/MOLPRO_TMP_$SLURM_JOB_ID
mkdir -p $MOLPRO_TMP

$MOLPROP -d$MOLPRO_TMP -I$MOLPRO_TMP -W$PWD test.com
rm -rf $MOLPRO_TMP
```

=== "Puhti, local disk"

```bash
#!/bin/bash
#SBATCH --partition=large
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=40
#SBATCH --account=yourproject # insert here the project to be billed
#SBATCH --time=00:15:00 # time as `hh:mm:ss`
#SBATCH --gres=nvme:100 # requested local disk space in GB
module purge
module load molpro/2024.3
export MOLPRO_TMP=$LOCAL_SCRATCH/MOLPRO_TMP_$SLURM_JOB_ID
mkdir -p $MOLPRO_TMP

$MOLPROP -d$MOLPRO_TMP -I$MOLPRO_TMP -W$PWD test.com
rm -rf $MOLPRO_TMP
```

=== "Mahti"

On Mahti, it is often necessary to undersubscribe cores per node to ensure sufficient memory per core. See the [Mahti job script guidelines](../computing/running/creating-job-scripts-mahti.md#undersubscribing-nodes) for more details.

```bash
#!/bin/bash
#SBATCH --partition=test
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --cpus-per-task=8
#SBATCH --account=yourproject # insert here the project to be billed
#SBATCH --time=0:10:00 # time as hh:mm:ss
# set --ntasks-per-node=X and --cpus-per-task=Y so that X * Y = 128
module purge
module load molpro/2024.3

export MOLPRO_TMP=$PWD/MOLPRO_TMP_$SLURM_JOB_ID
mkdir -p $MOLPRO_TMP

$MOLPROP -d$MOLPRO_TMP -I$MOLPRO_TMP -W$PWD test.com
rm -rf $MOLPRO_TMP
```

### Example of scalability

5 changes: 3 additions & 2 deletions docs/apps/mothur.md
@@ -15,7 +15,7 @@ Free to use and open source under [GNU GPLv3](https://www.gnu.org/licenses/gpl-3

## Available

- Puhti: 1.39.5, 1.44.0, 1.48.0
- Puhti: 1.39.5, 1.44.0, 1.48.0, 1.48.2
- [Chipster](https://chipster.csc.fi) graphical user interface

## Usage
@@ -70,7 +70,8 @@ module load mothur
mothur my_mothur_task.txt
```

If you want to use multiple cores, adjust parameter `--cpus_per_task`. You must also adjust the `processors` parameter for each command in the Mothur command file accordingly. Note that only some [Mothur commands](https://docs.hpc.qmul.ac.uk/apps/bio/mothur/) can use multiple cores.
If you want to use multiple cores, adjust the `--cpus-per-task` parameter. You must also adjust the `processors` parameter for each command in the Mothur command file accordingly. Note that only some [Mothur commands](https://mothur.org/wiki/tags/#commands) can use multiple cores. Check the documentation to see whether a command's options include `processors`.

Mothur jobs need to run inside a single node, so the maximum number of cores you can use on Puhti is 40. You should check the scalability before submitting large jobs. Many Mothur tasks won't scale well beyond a few cores. Using too many cores may even make your job run slower.
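
As a sketch only (the command file name, partition and `processors` value are examples), a multi-core Mothur run could look like this, with the Slurm reservation and the Mothur command file kept in sync:

```bash
#!/bin/bash
#SBATCH --partition=small
#SBATCH --account=<project>    # insert here the project to be billed
#SBATCH --time=01:00:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4      # must match the processors values in the command file

module load mothur

# my_mothur_task.txt is an example command file; each command that supports
# parallelisation should set processors to the same value as --cpus-per-task, e.g.
#   cluster.split(fasta=final.fasta, count=final.count_table, taxonomy=final.taxonomy, processors=4)
mothur my_mothur_task.txt
```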

19 changes: 9 additions & 10 deletions docs/apps/nextflow.md
@@ -6,16 +6,17 @@ tags:
# Nextflow

Nextflow is a scientific workflow management system for creating scalable,
portable, and reproducible workflows.
portable, and reproducible workflows. Nextflow workflows are written in a Groovy-based DSL that expresses the entire workflow in a single script, and processes can run scripts written in other languages such as R, Bash, and Python.


[TOC]

## Available

Versions available on CSC's servers

* Puhti: 21.10.6, 22.04.5, 22.10.1, 23.04.3
* Mahti: 22.05.0-edge
* Puhti: 21.10.6, 22.04.5, 22.10.1, 23.04.3, 24.01.0-edge.5903, 24.10.0
* Mahti: 22.05.0-edge, 24.04.4
* LUMI: 22.10.4

!!! info "Pay attention to usage of Nextflow version"
@@ -38,13 +39,13 @@ Nextflow is released under the
module use /appl/local/csc/modulefiles
```

Nextflow is activated by loading `nextflow` module as below:
Nextflow is activated by loading `nextflow` module:

```bash
module load nextflow
```

Example of loading `nextflow` module with a specific version:
The default version is usually the latest. Choose the Nextflow version according to the requirements of your pipeline. For reproducibility, it is recommended to load the `nextflow` module with an explicit version. To load a specific version:

```bash
module load nextflow/22.04.5
@@ -57,7 +57,7 @@ nextflow -h
```
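
To quickly check that the module works, you can, for example, run the bundled `hello` pipeline (a minimal smoke test; the first run fetches the pipeline from GitHub, so it assumes network access from the node you are on):

```bash
nextflow run hello
```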

More detailed instructions can be found in
[CSC's Nextflow tutorial](../support/tutorials/nextflow-puhti.md).
[CSC's Nextflow tutorial](../support/tutorials/nextflow-tutorial.md).

## References

@@ -69,7 +70,5 @@ computational workflows. Nat. Biotechnol. 35, 316–319 (2017).

## More information

* [Nextflow documentation](https://www.nextflow.io/docs/latest/index.html)
* [Running Nextflow on Puhti](../support/tutorials/nextflow-puhti.md)
* [High-throughput Nextflow workflow using HyperQueue](../support/tutorials/nextflow-hq.md)
* [Contact CSC Service Desk for technical support](../support/contact.md)
* [Nextflow official documentation](https://www.nextflow.io/docs/latest/index.html)
* [CSC Nextflow tutorial](../support/tutorials/nextflow-tutorial.md)
13 changes: 7 additions & 6 deletions docs/apps/pennylane.md
@@ -13,9 +13,9 @@ Currently supported pennylane versions:

| Version | Module | LUMI | Notes |
|:--------|:-------------------------------------|:-----:|-----------------|
| 0.38.0 | `pennylane-lightning/0.38.0-gpu` | X | default version |
| 0.37.0 | `pennylane-lightning/0.37.0-gpu` | X | |
| 0.36.0 | `pennylane-lightning/0.36.0-gpu` | X | |
| 0.40.0 | `pennylane-lightning/0.40.0-gpu` | X | default version |
| 0.39.0 | `pennylane-lightning/0.39.0-gpu` | X | |
| 0.38.0 | `pennylane-lightning/0.38.0-gpu` | X | |

All modules are based on Tykky using LUMI-container-wrapper.
Wrapper scripts have been provided so that common commands such as `python`,
@@ -46,10 +46,10 @@ If you wish to have a specific version ([see above for available
versions](#available)), use:

```bash
module load pennylane-lightning/0.38.0-gpu
module load pennylane-lightning/0.40.0-gpu
```

where `0.38.0-gpu` is the specified version
where `0.40.0-gpu` is the specified version

This command will also show all available versions:

@@ -76,8 +76,9 @@ Example batch script for reserving one GPU and CPU core in a single node:
export OMP_PROC_BIND=spread
export OMP_PLACES=threads

module load Local-quantum # or module use /appl/local/quantum/modulefiles
module load pennylane-lightning
python3 <file_name>.py
python <file_name>.py
```

Submit the script with `sbatch <script_name>.sh`
4 changes: 1 addition & 3 deletions docs/apps/qiskit-on-iqm.md
@@ -9,8 +9,6 @@ Qiskit on IQM is an open-source qiskit adapter for IQM quantum computers. It is
installed as `helmi_qiskit` on LUMI. It is used for running quantum circuits on
[Helmi](../computing/quantum-computing/helmi/running-on-helmi.md).

!!! info "News"
**28.10.2024** Installed `helmi_qiskit/15.4` which supports Qiskit==1.1.2

## Available

@@ -19,7 +17,7 @@ versions:

| Version | Module | LUMI | Notes |
|:--------|:-------------------------------------|:-----:|-----------------|
| 15.4 | `helmi_qiskit/15.4` | X | |
| 15.5 | `helmi_qiskit/15.5` | X | |

All modules are based on Tykky using LUMI-container-wrapper.
Wrapper scripts have been provided so that common commands such as `python`,
2 changes: 1 addition & 1 deletion docs/apps/qiskit.md
@@ -8,7 +8,7 @@ tags:
Qiskit is an open-source software for working with quantum computers at the level
of circuits, pulses, and algorithms. This page contains information about running quantum simulations using Qiskit inside a Singularity container.
For information about running jobs on Helmi using Qiskit, please refer to this documentation:
[Running on Helmi](https://csc-guide-preview.2.rahtiapp.fi/origin/QT-qiskit-singularity/computing/quantum-computing/helmi/running-on-helmi/)
[Running on Helmi](../computing/quantum-computing/helmi/running-on-helmi.md).

!!! info "News"
**23.01.2025** Installed `qiskit/1.2.4` in a singularity container on LUMI with all major Qiskit packages and
1 change: 1 addition & 0 deletions docs/apps/snakemake.md
@@ -63,5 +63,6 @@ If you use Snakemake in your work, please cite:
* [Snakemake official documentation](https://snakemake.readthedocs.io/en/stable/index.html)
* [How to run Snakemake workflow on Puhti](../support/tutorials/snakemake-puhti.md)
* [CSC Snakemake Hackathon 2024](https://coderefinery.github.io/snakemake_hackathon/)
* [Master's thesis by Antoni Gołoś comparing automated workflow approaches on supercomputers](https://urn.fi/URN:NBN:fi:aalto-202406164397)
* [Contact CSC Service Desk for technical support](../support/contact.md)
