[{"id":223,"title":"From PC to Cloud or High Performance Computing","url":"https://staging.dresa.org.au/materials/from-pc-to-cloud-or-high-performance-computing.json","description":"Most of you would have heard of Cloud and High Performance Computing (HPC), or you may already be using it. HPC is not the same as cloud computing. Both technologies differ in a number of ways, and have some similarities as well.  \n  \n We may refer to both types as “large scale computing” – but what is the difference? Both systems target scalability of computing, but in different ways.  \n  \n This webinar will give a good overview to the researchers thinking to make a move from their local computer to Cloud of High Performance Computing Cluster.\n\nIntroduction  \n HPC vs Cloud computing  \n When to use HPC  \n When to use the Cloud  \n The Cloud – Pros and Cons  \n HPC – Pros and Cons\n\nThe webinar has no prerequisites.","doi":"10.5281/zenodo.6423543","remote_updated_date":null,"remote_created_date":null,"scientific_topics":[],"operations":[]},{"id":228,"title":"Getting started with HPC using PBS Pro","url":"https://staging.dresa.org.au/materials/getting-started-with-hpc-using-pbs-pro.json","description":"Is your computer’s limited power throttling your research ambitions? Are your analysis scripts pushing your laptop’s processor to its limits? Is your software crashing because you’ve run out of memory? Would you like to unleash to power of the Unix command line to automate and run your analysis on supercomputers that you can access for free?  \n  \n High-Performance Computing (HPC) allows you to accomplish your analysis faster by using many parallel CPUs and huge amounts of memory simultaneously. This course provides a hands on introduction to running software on HPC infrastructure using PBS Pro.\n\nConnect to an HPC cluster  \n Use the Unix command line to operate a remote computer and create job scripts  \n Submit and manage jobs on a cluster using a scheduler  \n Transfer files to and from a remote computer  \n Use software through environment modules  \n Use parallelisation to speed up data analysis  \n Access the facilities available to you as a researcher  \n  \n This is the PBS Pro version of the Getting Started with HPC course.\n\nThis course assumes basic familiarity with the Bash command line environment found on GNU/Linux and other Unix-like environments. To come up to speed, consider taking our \\Unix Shell and Command Line Basics\\ course.","doi":"10.5281/zenodo.6423641","remote_updated_date":null,"remote_created_date":null,"scientific_topics":[],"operations":[]},{"id":229,"title":"Getting started with HPC using Slurm","url":"https://staging.dresa.org.au/materials/getting-started-with-hpc-using-slurm.json","description":"Is your computer’s limited power throttling your research ambitions? Are your analysis scripts pushing your laptop’s processor to its limits? Is your software crashing because you’ve run out of memory? Would you like to unleash to power of the Unix command line to automate and run your analysis on supercomputers that you can access for free?  \n  \n High-Performance Computing (HPC) allows you to accomplish your analysis faster by using many parallel CPUs and huge amounts of memory simultaneously. 
This course provides a hands-on introduction to running software on HPC infrastructure using Slurm.\n\n- Connect to an HPC cluster\n- Use the Unix command line to operate a remote computer and create job scripts\n- Submit and manage jobs on a cluster using a scheduler\n- Transfer files to and from a remote computer\n- Use software through environment modules\n- Use parallelisation to speed up data analysis\n- Access the facilities available to you as a researcher\n\nThis is the Slurm version of the Getting Started with HPC course.\n\nThis course assumes basic familiarity with the Bash command line environment found on GNU/Linux and other Unix-like environments. To come up to speed, consider taking our [Unix Shell and Command Line Basics](https://intersect.org.au/training/course/unix101/) course.","doi":"10.5281/zenodo.6423645","remote_updated_date":null,"remote_created_date":null,"scientific_topics":[],"operations":[]},{"id":230,"title":"Parallel Programming for HPC","url":"https://staging.dresa.org.au/materials/parallel-programming-for-hpc.json","description":"You have written, compiled and run functioning programs in C and/or Fortran. You know how HPC works and you’ve submitted batch jobs.\n\nNow you want to move from writing single-threaded programs into the parallel programming paradigm, so you can truly harness the full power of High Performance Computing.\n\n- OpenMP (Open Multi-Processing): a widespread method for shared memory programming\n- MPI (Message Passing Interface): a leading distributed memory programming model\n\nTo do this course you need to have:\n\n- A good working knowledge of HPC. Consider taking our Getting Started with HPC using PBS Pro course to come up to speed beforehand.\n- Prior experience of writing programs in either C or Fortran.","doi":"10.5281/zenodo.6423649","remote_updated_date":null,"remote_created_date":null,"scientific_topics":[],"operations":[]},{"id":153,"title":"WEBINAR: Where to go when your bioinformatics outgrows your compute","url":"https://staging.dresa.org.au/materials/webinar-where-to-go-when-your-bioinformatics-outgrows-your-compute.json","description":"This record includes training materials associated with the Australian BioCommons webinar ‘Where to go when your bioinformatics outgrows your compute’. This webinar took place on 19 August 2021.\n\nBioinformatics analyses are often complex, requiring multiple software tools and specialised compute resources. “I don’t know what compute resources I will need”, “My analysis won’t run and I don’t know why” and “Just getting it to work” are common pain points for researchers. In this webinar, you will learn how to understand the compute requirements for your bioinformatics workflows. You will also hear about ways of accessing compute that suits your needs as an Australian researcher, including Galaxy Australia, cloud and high-performance computing services offered by the Australian Research Data Commons, the National Computational Infrastructure (NCI) and Pawsey. We also describe bioinformatics and computing support services available to Australian researchers. 
\n\nThis webinar was jointly organised with the Sydney Informatics Hub at the University of Sydney.\n\nMaterials are shared under a Creative Commons Attribution 4.0 International agreement unless otherwise specified and were current at the time of the event.\n\nFiles and materials included in this record:\n\n- Event metadata (PDF): Information about the event including description, event URL, learning objectives, prerequisites, technical requirements etc.\n- Index of training materials (PDF): List and description of all materials associated with this event including the name, format, location and a brief description of each file.\n- Where to go when your bioinformatics outgrows your compute - slides (PDF and PPTX): Slides presented during the webinar.\n- Australian research computing resources cheat sheet (PDF): A list of resources and useful links mentioned during the webinar.\n\nMaterials shared elsewhere:\n\nA recording of the webinar is available on the Australian BioCommons YouTube Channel:\n\nhttps://youtu.be/hNTbngSc-W0","doi":"10.5281/zenodo.5240578","remote_updated_date":null,"remote_created_date":null,"scientific_topics":[],"operations":[]},{"id":154,"title":"WEBINAR: High performance bioinformatics: submitting your best NCMAS application","url":"https://staging.dresa.org.au/materials/webinar-high-performance-bioinformatics-submitting-your-best-ncmas-application.json","description":"This record includes training materials associated with the Australian BioCommons webinar ‘High performance bioinformatics: submitting your best NCMAS application’. This webinar took place on 20 August 2021.\n\nBioinformaticians are increasingly turning to specialised compute infrastructure and efficient, scalable workflows as their research becomes more data intensive. Australian researchers who require extensive compute resources to process large datasets can apply for access to national high performance computing facilities (e.g. Pawsey and NCI) to power their research through the National Computational Merit Allocation Scheme (NCMAS). NCMAS is a competitive, merit-based scheme and requires applicants to carefully consider how the compute infrastructure and workflows will be applied.\n\nThis webinar provides life science researchers with insights into what makes a strong NCMAS application, with a focus on the technical assessment, and how to design and present effective and efficient bioinformatic workflows for the various national compute facilities. 
It will be followed by a short Q\u0026A session.\n\nMaterials are shared under a Creative Commons Attribution 4.0 International agreement unless otherwise specified and were current at the time of the event.\n\nFiles and materials included in this record:\n\n- Event metadata (PDF): Information about the event including description, event URL, learning objectives, prerequisites, technical requirements etc.\n- Index of training materials (PDF): List and description of all materials associated with this event including the name, format, location and a brief description of each file.\n- High performance bioinformatics: submitting your best NCMAS application - slides (PDF and PPTX): Slides presented during the webinar.\n\nMaterials shared elsewhere:\n\nA recording of the webinar is available on the Australian BioCommons YouTube Channel:\n\nhttps://youtu.be/HeFGjguwS0Y","doi":"10.5281/zenodo.5239883","remote_updated_date":null,"remote_created_date":null,"scientific_topics":[],"operations":[]},{"id":13,"title":"Introduction to Gadi - Part 2","url":"https://staging.dresa.org.au/materials/introduction-to-gadi-part-2.json","description":"Gadi is Australia’s most powerful supercomputer, a highly parallel cluster comprising more than 150,000 processor cores on ten different types of compute nodes. Gadi accommodates a wide range of tasks, from running climate models to genome sequencing, from designing molecules to astrophysical modelling.\nIntroduction to Gadi - Part 2 naturally follows on from Part 1, and is designed for beginners or users looking for a refresher on Gadi basics.\nTo register for this training, click here: https://bit.ly/IntroGadi2","doi":"","remote_updated_date":null,"remote_created_date":null,"scientific_topics":[],"operations":[]},{"id":9,"title":"Introduction to Gadi - Part 1","url":"https://staging.dresa.org.au/materials/introduction-to-gadi-part-i.json","description":"Gadi is Australia’s most powerful supercomputer, a highly parallel cluster comprising more than 150,000 processor cores on ten different types of compute nodes. Gadi accommodates a wide range of tasks, from running climate models to genome sequencing, from designing molecules to astrophysical modelling.\nIntroduction to Gadi - Part 1 is designed for new users, or users who want a refresher on the basics of Gadi.\nTo register for this training, click here: https://bit.ly/IntroGadi1\nIf you have any questions regarding this training, please contact training.nci@anu.edu.au.","doi":"","remote_updated_date":null,"remote_created_date":null,"scientific_topics":[],"operations":[]}]