Re: [galaxy-user] Amazon EC2: An error occurred running this job: Job output not returned from cluster


Re: [galaxy-user] Amazon EC2: An error occurred running this job: Job output not returned from cluster

Dave Lin
I am getting similar errors as Brian reported back in March. (Note, we appear to have the same last name, but no relation)

 An error occurred with this dataset: Job output not returned from cluster

- Running on Cloudman with 5-6 nodes. (xlarge)
- The error seems to occur consistently when I launch multiple workflows in batch (using bioblend)
- Probably not relevant, but is failing on a BWA step.
- I am able to run the same workflow successfully against one of the datasets that failed in batch.
- The changeset is 8794:1c7174911392 from Feb 8, 2013, on the stable branch. Prior to that, I was running different Galaxy instances using changesets from last year and never ran into this problem.
- I'm seeing errors like:
galaxy.jobs.runners.drmaa WARNING 2013-05-02 17:07:51,991 Job output not returned from cluster: [Errno 2] No such file or directory: '/mnt/galaxyData/tmp/job_working_directory/002/2066/2066.drmec'
- In this example, neither the /mnt/galaxyData/tmp/job_working_directory/002/2066 directory nor the /mnt/galaxyData/tmp/job_working_directory/002/2066/2066.drmec file exists.
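A missing-file failure like this is consistent with shared-filesystem lag: the cluster job finishes before its output files are visible to the Galaxy master. For intuition, the retry workaround mentioned in Brian's message below amounts to polling for the file before declaring failure. A simplified, hypothetical sketch (collect_output and its parameters are mine, not Galaxy's actual code):

```python
import os
import time

def collect_output(path, retries=30, delay=1.0):
    """Poll for a job output file that may appear late on a shared
    filesystem (e.g. NFS lag after a cluster job finishes).

    Hypothetical helper, not Galaxy's implementation. Returns the file
    contents, or None if the file never appears within the retry budget.
    """
    for _ in range(retries):
        if os.path.exists(path):
            with open(path) as fh:
                return fh.read()
        time.sleep(delay)
    return None
```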


Any suggestions? This seems like it might be some type of resource-contention issue, but I'm not sure where to investigate next.

Thanks in advance,
Dave



On Mon, Mar 11, 2013 at 9:04 AM, Brian Lin <[hidden email]> wrote:
Hi guys, I'm running a Galaxy CloudMan instance and using the usual tophat->cufflinks->cuffdiff workflow on RNA-seq data.
I am using an m2.4xlarge as the master node, autoscaling from 0-4 workers of the m2.xlarge type.
I have gotten the error: An error occurred running this job: Job output not returned from cluster
when running fasta groomer, tophat, and now cufflinks.
Following troubleshooting suggestions from others on the mailing list, I added a new line, retry_job_output_collection=30, to universe_wsgi.ini.
Unfortunately, this does not seem to have fixed the problem.
The stdout is blank, and stderr gives "Job output not returned from cluster".
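For reference, the line in question goes in universe_wsgi.ini; a sketch, assuming the standard [app:main] section of a Galaxy config of that era:

```ini
# universe_wsgi.ini -- retry collecting job output from the cluster
# before declaring "Job output not returned from cluster" (value per
# the mailing-list suggestion; tune for your filesystem's lag)
[app:main]
retry_job_output_collection = 30
```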

Under manage jobs in the admin panel, it lists 4 out of the 6 jobs as currently running. What is confusing is that of the 4 "running," one has already returned the error in the user dataset panel and yet is still listed as running.

From the SGE log, I see these errors:


03/11/2013 14:32:52|worker|ip-10-159-47-223|W|job 42.1 failed on host ip-10-30-130-84.ec2.internal before writing exit_status because: shepherd exited with exit status 19: before writing exit_status
03/11/2013 14:32:52|worker|ip-10-159-47-223|W|job 43.1 failed on host ip-10-30-130-84.ec2.internal before writing exit_status because: shepherd exited with exit status 19: before writing exit_status
03/11/2013 14:39:41|worker|ip-10-159-47-223|E|adminhost "ip-10-30-130-84.ec2.internal" already exists
03/11/2013 14:40:07|worker|ip-10-159-47-223|E|exechost "ip-10-30-130-84.ec2.internal" already exists
03/11/2013 14:50:54|worker|ip-10-159-47-223|E|adminhost "ip-10-30-130-84.ec2.internal" already exists
03/11/2013 14:50:55|worker|ip-10-159-47-223|E|exechost "ip-10-30-130-84.ec2.internal" already exists

Does anyone have any idea how to solve this error? It has completely blocked my use of workflows, and I still have not been able to run a single analysis to completion because of it.

Thanks for any insight anyone can provide!

Brian

___________________________________________________________
The Galaxy User list should be used for the discussion of
Galaxy analysis and other features on the public server
at usegalaxy.org.  Please keep all replies on the list by
using "reply all" in your mail client.  For discussion of
local Galaxy instances and the Galaxy source code, please
use the Galaxy Development list:

  http://lists.bx.psu.edu/listinfo/galaxy-dev

To manage your subscriptions to this and other Galaxy lists,
please use the interface at:

  http://lists.bx.psu.edu/


___________________________________________________________
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/

Re: [galaxy-user] Amazon EC2: An error occurred running this job: Job output not returned from cluster

Dave Lin
Dear Galaxy-Dev

I was hoping to see if anybody had any suggestions to resolve this error.

To summarize, I'm using cloudman/Amazon EC2.
I typically batch-analyze 20-100 datasets against a workflow (launched serially using a bioblend script).

I'm consistently seeing the following error "An error occurred running this job: Job output not returned from cluster" when I launch a large number of samples.

If I analyze the same data sets/workflow, but launch 5 at a time, the analysis proceeds smoothly.
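Since launching 5 at a time works, one workaround is to throttle the bioblend script itself. A minimal sketch, assuming any callable that submits one workflow run (launch_in_batches, the launch callable, and the pause parameter are hypothetical, not part of bioblend):

```python
import time

def launch_in_batches(launch, items, batch_size=5, pause=0.0):
    """Submit items in small batches instead of all at once.

    `launch` is any callable that submits one item, e.g. a wrapper
    around a bioblend workflow invocation. Sleeping `pause` seconds
    between batches gives the cluster time to drain. Returns all
    launch results, in input order.
    """
    results = []
    for start in range(0, len(items), batch_size):
        for item in items[start:start + batch_size]:
            results.append(launch(item))
        if pause and start + batch_size < len(items):
            time.sleep(pause)
    return results
```

In practice `launch` could wrap a bioblend workflow invocation, and `pause` could be raised until the "Job output not returned" errors disappear.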

Any pointers would be appreciated.

Thanks
Dave





Re: [galaxy-user] Amazon EC2: An error occurred running this job: Job output not returned from cluster

Brian Lin
In reply to this post by Dave Lin
Unfortunately, I am not entirely sure how I fixed this issue, as my troubleshooting was sloppy and I was changing many things at once.
There are two things I can suggest. When you create the Amazon instance, there is an option to use EBS-optimized storage. It is possible that that option, combined with the retry setting from my first email, is what solved the issue.

If you haven't already, it is worth adding that extra line, committing the change, and restarting the service afterwards. The first two times I modified the ini file, the changes were not committed, which may explain why the setting had no effect.

Good luck,
Brian


