    • Faith HPC Cluster

      Welcome to the Faith HPC Cluster! Please follow the instructions below to get access and start using the cluster efficiently.

      Our cluster is currently in its beta phase. This means it is still undergoing testing, and we encourage users to contact us with any problems, comments, or suggestions. You can do so by posting a message to the "Faith Cluster Users" group on Microsoft Teams.

      We greatly appreciate your collaboration in helping us make the cluster more robust and stable. Thank you for your participation in this beta phase!


      Important Notes

      • No Backup: Please be aware that there is no backup for the servers. Make sure to regularly save your important data elsewhere.


      Subscription & Account Creation

      • To create an account, please visit the following link:
        DIUF New Linux Account / Change Password

      • When subscribing, specify that you need access to the Faith cluster.

      • Accounts are valid until the end of the calendar year and will need to be renewed for continued access.


      Beta Phase & Maintenance

      • The Faith cluster is currently in a beta phase, meaning there may be occasional tuning operations and brief maintenance periods.


      Accessing the Cluster

      • To access the Faith HPC cluster, connect to the login node via SSH: ssh your_username@diufrd200.unifr.ch

      • VPN Required: This server is only accessible via the university VPN.
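The two points above amount to a single SSH command once the VPN is active. A minimal sketch, where the username "jdoe" and the Host alias "faith" are placeholders of our own, not official names:

```shell
# Connect to the Faith login node (replace "jdoe" with your DIUF username).
# This only works from inside the university VPN.
ssh jdoe@diufrd200.unifr.ch

# Optional ~/.ssh/config entry so that "ssh faith" suffices
# (the alias "faith" is our own choice):
#
#   Host faith
#       HostName diufrd200.unifr.ch
#       User jdoe
```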


      Cluster Configuration

      For the moment, the Faith cluster consists of the following hardware:

      • Storage: 80TB available on the master node.

      • Compute Nodes:

        1. CPU Node (diufrd201): Contains 2 x AMD EPYC 7763 64-core processors and 1 TB of RAM.

        2. GPU Node (diufrd202): Contains 2 x AMD EPYC 7742 64-core processors and 0.5 TB of RAM, with 6 x RTX 3080 GPUs.

        3. GPU Node (diufrd203): Contains 2 x AMD EPYC 9554 64-core processors and 1538 GiB of RAM, with 4 x NVIDIA L40S GPUs.

        4. GPU Node (diufrd204): Contains Intel Xeon Gold 6142 processors (64 cores) and 768 GiB of RAM, with 8 x Tesla V100 SXM2 32 GB GPUs.

        5. GPU Node (diufrd205): Contains 2 x AMD EPYC 9654 96-core processors and 2304 GiB of RAM, with 8 x NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs (96 GB GDDR7).
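Once you have an account, the nodes listed above can be inspected directly through Slurm. A quick sketch using standard Slurm commands (the node name is taken from the list above; exact output depends on the cluster configuration):

```shell
# List all nodes with their CPU count, memory, and state.
sinfo -N -l

# Show the generic resources (GPUs) advertised per node.
sinfo -o "%N %G"

# Full details for a single node, e.g. the L40S node.
scontrol show node diufrd203
```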


      Software Available

      • Basic Linux Commands

      • Slurm (job scheduling system)

      • CUDA Library (for GPU computing)

      • Anaconda Environment (for Python and data science projects)
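Slurm, CUDA, and Anaconda typically come together in a batch job. A minimal sketch of a GPU job script; the environment name "myenv" and the script name are hypothetical, and no partition is specified because we do not state Faith's partition names here (check with sinfo):

```shell
#!/bin/bash
#SBATCH --job-name=demo
#SBATCH --nodes=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --gres=gpu:1          # request one GPU (omit on the CPU node diufrd201)
#SBATCH --time=01:00:00

# Activate a conda environment created beforehand ("myenv" is hypothetical).
source ~/anaconda3/etc/profile.d/conda.sh
conda activate myenv

python my_script.py
```

Submit the script with "sbatch job.sh" and monitor it with "squeue -u $USER".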


      Examples provided by users: