Tagged with #drmaa
0 documentation articles | 0 announcements | 3 forum discussions

No articles to display.

Created 2015-09-16 07:21:04 | Updated | Tags: queue drmaa memory slurm

Comments (2)


I am trying to use GATK Queue to submit scatter-gather jobs on our local cluster. However, the jobs almost always run out of memory, because only 1GB is allocated by default. I run Queue with the

-jobRunner Drmaa

option, and I have tried all different combinations of

-memLimit 8
-resMemLimit 8
-resMemReq 8

to see if I could somehow force it to allocate more, but the problem persists.

Any ideas how to increase the memory allocation? Our cluster uses SLURM for job submission.
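One thing that may help (this is an assumption based on how the Drmaa runner builds its submission string, not a verified fix for every SLURM/DRMAA setup): the runner passes each function's `jobNativeArgs` through as the DRMAA native specification, so SLURM flags placed there should reach the scheduler. A minimal Scala sketch of how that spec string is assembled:

```scala
// Sketch (assumption, not a verified fix): Queue's Drmaa runner passes
// function.jobNativeArgs through as the DRMAA native specification, i.e.
//   functionNativeSpec = function.jobNativeArgs.mkString(" ")
// so SLURM memory flags placed in jobNativeArgs should reach the scheduler.
val jobNativeArgs = Seq("--mem=8192", "--time=12:00:00") // illustrative SLURM flags
val nativeSpec = jobNativeArgs.mkString(" ")
println(nativeSpec) // --mem=8192 --time=12:00:00
```

Whether the `-memLimit`/`-resMemLimit` options ever make it into that native spec likely depends on the runner, so setting the SLURM flag explicitly may be the more reliable route.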

Created 2014-03-06 21:31:54 | Updated | Tags: queue jobrunner java drmaa

Comments (2)

Hi (once more). I am attempting to run Queue with a Scala script and schedule it with jobRunner. The script works nicely on its own, but when I run it with jobRunner I get the error

"Exception in thread "main" java.lang.UnsatisfiedLinkError: Unable to load library 'drmaa':libdrmaa.so: cannot open shared object file: No such file or directory."

When I try to pass the location of the libdrmaa.so file (-Djava.library.path=/opt/sge625/sge/lib/lx24-amd64/) the result is the same.

How would I point jobRunner to the correct path for the libdrmaa.so library?
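In case it helps others hitting the same error: `-Djava.library.path` only affects `System.loadLibrary`, but Queue loads libdrmaa through JNA, which searches `LD_LIBRARY_PATH` and the `jna.library.path` system property instead. A hedged sketch (the SGE path below is the one from the post; adjust for your install):

```shell
# Assumption: Queue loads libdrmaa via JNA, which consults LD_LIBRARY_PATH
# and the jna.library.path property rather than java.library.path.
export LD_LIBRARY_PATH=/opt/sge625/sge/lib/lx24-amd64:${LD_LIBRARY_PATH}

# Then launch Queue as usual, optionally also pointing JNA at the directory:
# java -Djna.library.path=/opt/sge625/sge/lib/lx24-amd64 -jar Queue.jar \
#   -S MyScript.scala -jobRunner Drmaa
```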

Created 2012-08-14 13:15:37 | Updated 2013-01-07 19:51:08 | Tags: queue drmaa

Comments (28)

I've been running Queue using DRMAA, and I've noticed one thing I would like to bring up for discussion. The job names are currently generated by the following code:

  // Set the display name to < 512 characters of the description
  // NOTE: Not sure if this is configuration specific?
  protected val jobNameLength = 500
  protected val jobNameFilter = """[^A-Za-z0-9_]"""
  protected def functionNativeSpec = function.jobNativeArgs.mkString(" ")

  def start() {
    session.synchronized {
      val drmaaJob: JobTemplate = session.createJobTemplate

      drmaaJob.setJobName(function.description.take(jobNameLength).replaceAll(jobNameFilter, "_"))

For me this yields names looking something like this:


This is not very useful for telling the jobs apart. I'm running my jobs via DRMAA on a system using the SLURM resource manager, so the cut-off in the name above can be attributed to SLURM truncating the name. Even so, I think there should be a more reasonable way to create the name, using function.jobName for example.

So, this leads me to my question: is there any particular reason that the job names are generated the way they are? And if not, do you (the GATK team) want a patch changing this to use function.jobName instead?
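To make the suggestion concrete, here is a rough sketch of what I have in mind (the example job name is made up, and I'm assuming function.jobName is available on the function the same way function.description is), keeping the existing truncation and character filter:

```scala
// Sketch of the proposed change: derive the DRMAA display name from the
// short function.jobName instead of the full description, reusing the
// existing length cap and character filter from the runner.
val jobNameLength = 500
val jobNameFilter = """[^A-Za-z0-9_]"""

def displayName(jobName: String): String =
  jobName.take(jobNameLength).replaceAll(jobNameFilter, "_")

// The job name below is a made-up example.
println(displayName("HaplotypeCaller.chr1:1-1000000"))
// HaplotypeCaller_chr1_1_1000000
```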

Furthermore, I would be interested in hearing from other users running GATK Queue over DRMAA, since I think it might be interesting to develop this further. As an example, I had to implement setting a hard wall time in the jobRunner, since the cluster I'm running on demands it. I'm sure there are more solutions like that out there, and I would be thrilled to hear about them.