There is an official FAQ for CO servers. This page contains documentation I wrote that has not yet been merged into the official FAQ. There is also official documentation for EE servers.
If you want to do DL on any of the servers, first create a conda environment and install PyTorch/TensorFlow in it. After that, you can run your code over SSH or in Jupyter notebooks.
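The workflow above can be sketched as the following commands (a minimal sketch: the environment name `dl`, the Python version, and the package choices are assumptions; pick PyTorch/TensorFlow builds that match the server's CUDA setup, as noted in the version list below):

```shell
# Create an isolated conda environment (no sudo needed) and activate it.
# "dl" and python=3.10 are placeholders.
conda create -n dl python=3.10 -y
conda activate dl

# Install PyTorch (choose the CUDA build matching the server's driver) ...
pip install torch torchvision
# ... or TensorFlow instead:
# pip install tensorflow
```

After this, any script run inside the environment (over SSH or from a notebook kernel registered for it) will see the installed packages.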
- Custom Google search for the Department
- A few experiments to build custom FAQ AI bots:
  - Gemini Gem
  - Claude: we can create a .skill file, but it has to be uploaded manually.
  - ChatGPT: not possible on the free versions.
- using-conda-without-sudo-peradeniya-servers: Merged to the CO FAQ here
- pytorch-tensorflow-versions-for-Peradeniya-servers
- Jupyter notebooks: Merged to the CO FAQ here.
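A common way to use a server-side Jupyter notebook from a local browser is an SSH tunnel (a sketch; the host name `turing`, the username, and port 8888 are placeholders, not the servers' actual addresses):

```shell
# On the server: start Jupyter without opening a browser, bound to a chosen port.
jupyter notebook --no-browser --port 8888

# On your local machine: forward local port 8888 to the same port on the server.
ssh -L 8888:localhost:8888 your_username@turing

# Then open http://localhost:8888 in your local browser and paste the
# token printed by the jupyter command on the server.
```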
- VS Code SSH development for Turing/Kepler. You can then install Pylance on the Turing server and select the correct conda environment to get code completion.
- Requesting/managing project storage on Peradeniya servers: Merged to the CO FAQ here.
- Using MobaXterm to access the terminal and file storage on Turing/Kepler. Here, the public-IP server (Tesla/Aiken) is the jump host and the internal-IP server (Kepler/Turing) is the remote host.
- Minimizing storage usage in Turing localhomes (we no longer use turing:/localhomes)
- FAQ for the new HP server (2023): Merged to the CO FAQ here.
Files for Server Admin
Private files
Projects for volunteers
- We are looking for someone to copy the content of this webpage to faq.ce.pdn.ac.lk. If you are interested, please contact Akila/Nuwan by email.
- We are looking for students to work on some projects to improve the ML/DL tools for Peradeniya servers. [Read more]
Last updated 04-Jan-2026.