I followed the instructions (video and manual) on a cloud service, which is a single node with 2 CPUs, and I changed the configuration files accordingly.
I installed Slurm and got it running.
After I typed the start command, it showed this:
But it is actually not running, as the report command shows:
Any idea how to solve this?
Two things I would suggest:
#1 - Check the Slurm log files to see if you can find anything there
-> they should be in /var/log/slurm/
#2 - Do you have a shared file system that is mounted on the login and controller nodes, as well as on the compute nodes when they start?
-> output of df -h on the controller/login/compute nodes
-> check /apps/slurm/scripts/custom-compute-install and /apps/slurm/scripts/custom-controller-install
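Concretely, the two checks above can be run like this (a sketch only: the log file paths assume the common defaults, and your slurm.conf may point SlurmctldLogFile / SlurmdLogFile somewhere else):

```shell
# 1 - Inspect the Slurm daemon logs (paths are the usual defaults; check
#     SlurmctldLogFile / SlurmdLogFile in slurm.conf if they differ).
for f in /var/log/slurm/slurmctld.log /var/log/slurm/slurmd.log; do
    if [ -r "$f" ]; then
        tail -n 50 "$f"
    else
        echo "no readable log at $f"
    fi
done

# 2 - Confirm the shared mounts are visible; run this on the controller,
#     login, and compute nodes and compare the output.
df -h
```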
Thanks for replying.
I am testing VSVF, so I am running slurmctld and slurmd on one node, which is a cloud server with 2 CPUs, and I am also submitting VSVF jobs on it.
My version of Slurm is 17.02 on Ubuntu.
This is the log of slurmctld:
And this is the log of slurmd:
The output of df -h:
And I cannot find the paths /apps/slurm/scripts/custom-compute-install and /apps/slurm/scripts/custom-controller-install.
When I run report:
In this case, if I run "srun -N1 /bin/hostname", I get:
My version of Slurm is 19.05.6, running on CentOS, so I'm not in exactly the same conditions. But in my context the login node gets the /home and /apps folders mounted from the controller node.
$ df -h
Filesystem                      Size  Used  Avail  Use%  Mounted on
devtmpfs                        3.6G     0  3.6G     0%  /dev
tmpfs                           3.6G     0  3.6G     0%  /dev/shm
tmpfs                           3.6G  192M  3.4G     6%  /run
tmpfs                           3.6G     0  3.6G     0%  /sys/fs/cgroup
/dev/sda2                        20G  2.9G   17G    15%  /
/dev/sda1                       200M   12M  189M     6%  /boot/efi
gcluster-controller:/home        50G  4.0G   46G     8%  /home
gcluster-controller:/apps        50G  4.0G   46G     8%  /apps
gcluster-controller:/etc/munge   50G  4.0G   46G     8%  /etc/munge
10.102.14.42:/virtualflow       2.5T  291M  2.4T     1%  /mnt/virtualflow
tmpfs                           732M     0  732M     0%  /run/user/3006594208
Also, I have installed a shared file storage system, as described in this post, so that the compute/worker nodes can read from and write to the same share. I don't see that in your setup, and you definitely need some sort of shared file storage to be able to run VirtualFlow.
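For reference, a minimal NFS share of that kind could be sketched as below. This is illustrative only, not VirtualFlow's official setup procedure: the hostname gcluster-controller is taken from the df output above, and the 10.0.0.0/24 subnet is a placeholder you would replace with your own network.

```shell
# On the controller (NFS server): export the directories the workers need.
# /etc/exports entries (placeholder subnet, adjust to your network):
#     /home  10.0.0.0/24(rw,sync,no_subtree_check)
#     /apps  10.0.0.0/24(rw,sync,no_subtree_check)
sudo exportfs -ra   # re-read /etc/exports

# On each compute node: mount the shares (or add matching /etc/fstab lines).
sudo mount -t nfs gcluster-controller:/home /home
sudo mount -t nfs gcluster-controller:/apps /apps
```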
I switched to a “slurm-ready-to-use” server to test, and it is finally working.
I guess there was something wrong with my Slurm setup.
But another thing came up: I ran exactly the same setup as in the tutorial, but this is the result: filename: Z1385360109_1_T1_replica-1.pdbqt
The vdW and Elec values in the result are all 0. Is that normal?
Hi @Lee ,
Yes, that is normal. The precise output depends on the docking program and settings which you are using.
Thanks for replying and I admire your work.
The fact is that vdW and Elec are unlikely to be 0.00.
I used the same settings as in the video you posted on YouTube: https://virtual-flow.org/tutorials.
Do you get 0.00 in your results? Or how can I get actual values for vdW and Elec?
Thank you, and I hope VirtualFlow will be helpful for your project.
You are right that these terms are not zero. The docking programs work as they should; they just don't print these terms in the PDBQT output files. I also get the zeros that you get.
It also depends on the scoring function; not all of them are able to separate these terms.
Have a nice weekend,
I am the developer of QuickVina02, and I can tell you with a high level of confidence that it ignores the values written in these 2 fields.
Usually they are set to 1.0 and 0.0, or 0.0 and 0.0, respectively. As for where the 0.0 and 0.0 came from, they most probably came from the preparation stage.
However, I can assure you that it makes no difference whether you set them to 0, infinity, or even a negative value. They are simply ignored.
By the way, you can refer to this page for details on the PDBQT format.
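To see where those two fields live: in the AutoDock family of formats, the per-atom vdW and Elec terms are written where a plain PDB file keeps occupancy and temperature factor, i.e. columns 55-60 and 61-66 of ATOM/HETATM records. A small sketch with a made-up result line (the coordinates and partial charge below are invented purely for the demo):

```shell
# Write a single made-up PDBQT ATOM record; the two 0.00 fields in
# columns 55-66 are the vdW and Elec slots.
cat > /tmp/sample.pdbqt <<'EOF'
ATOM      1  C   LIG A   1      11.104  22.070   5.312  0.00  0.00     0.031 C
EOF

# Pull the vdW (cols 55-60) and Elec (cols 61-66) fields out of each record.
awk '/^(ATOM|HETATM)/ {
    printf "vdW=%s Elec=%s\n", substr($0, 55, 6), substr($0, 61, 6)
}' /tmp/sample.pdbqt
# prints: vdW=  0.00 Elec=  0.00
```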
Thanks for replying, that is quite helpful.
I certainly learned a lot from you and Christoph.