error and question from using galaxy kickstart

error and question from using galaxy kickstart

Rui Wang
Hi Folks,

So I managed to run through the Ansible playbook. What I did was remove certain steps: for example, I don't need slurm-drmaa or Docker, and I also skipped the apt cache update. After these minor changes, it finished successfully(?), with one error message that it ignored. Then I tried to access the UI, but nothing worked. I've pasted the error below for reference. If anyone has seen this before, please help. :-)

Sorry for the beginner question: once the Ansible playbook ran through, it seems Galaxy, PostgreSQL, etc. were started. How do I start and stop them manually? Do I have to run the playbook every time I want to run Galaxy, or only after I modify the playbook? Also, if I hadn't looked at the log, I wouldn't even have known that the UI is at 127.0.0.1:4001. Is there any documentation for this?
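
To be concrete, something like Galaxy's own wrapper script is what I had in mind; a sketch of what I mean (the path is my install directory from the logs, and I am not sure run.sh is still the right interface once uwsgi and a process supervisor are in play):

$ cd /media/libraryfiles/bioinfoadmin/bioinfoadmin
$ sh run.sh --stop-daemon   # stop the backgrounded Galaxy server
$ sh run.sh --daemon        # start it again in the background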

Thanks,
Rui


error message:

galaxy.web.stack INFO 2018-10-12 19:41:53,874 [p:102477,w:1,m:0] [MainThread] Galaxy server instance 'main.web.1' is running
Starting server in PID 101567.
serving on uwsgi://127.0.0.1:4001
galaxy.jobs.handler ERROR 2018-10-12 19:42:48,487 [p:102477,w:1,m:0] [JobHandlerQueue.monitor_thread] Exception in monitor_step
Traceback (most recent call last):
  File "lib/galaxy/jobs/handler.py", line 213, in __monitor
    self.__monitor_step()
  File "lib/galaxy/jobs/handler.py", line 272, in __monitor_step
    .order_by(model.Job.id).all()
  File "/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2737, in all
    return list(self)
  File "/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2889, in __iter__
    return self._execute_and_instances(context)
  File "/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2912, in _execute_and_instances
    result = conn.execute(querycontext.statement, self._params)
  File "/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 948, in execute
    return meth(self, multiparams, params)
  File "/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1200, in _execute_context
    context)
  File "/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1413, in _handle_dbapi_exception
    exc_info
  File "/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
    context)
  File "/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 507, in do_execute
    cursor.execute(statement, parameters)
OperationalError: (psycopg2.OperationalError) server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
 [SQL: 'SELECT EXISTS (SELECT history_dataset_association.id, history_dataset_association.history_id, history_dataset_association.dataset_id, history_dataset_association.create_time, history_dataset_association.update_time, history_dataset_association.state, history_dataset_association.copied_from_history_dataset_association_id, history_dataset_association.copied_from_library_dataset_dataset_association_id, history_dataset_association.name, history_dataset_association.info, history_dataset_association.blurb, history_dataset_association.peek, history_dataset_association.tool_version, history_dataset_association.extension, history_dataset_association.metadata, history_dataset_association.parent_id, history_dataset_association.designation, history_dataset_association.deleted, history_dataset_association.visible, history_dataset_association.extended_metadata_id, history_dataset_association.version, history_dataset_association.hid, history_dataset_association.purged, history_dataset_association.hidden_beneath_collection_instance_id \nFROM history_dataset_association, job_to_output_dataset \nWHERE job.id = job_to_output_dataset.job_id AND history_dataset_association.id = job_to_output_dataset.dataset_id AND history_dataset_association.deleted = true) AS anon_1, EXISTS (SELECT history_dataset_collection_association.id \nFROM history_dataset_collection_association, job_to_output_dataset_collection \nWHERE job.id = job_to_output_dataset_collection.job_id AND history_dataset_collection_association.id = job_to_output_dataset_collection.dataset_collection_id AND history_dataset_collection_association.deleted = true) AS anon_2, job.id AS job_id, job.create_time AS job_create_time, job.update_time AS job_update_time, job.history_id AS job_history_id, job.library_folder_id AS job_library_folder_id, job.tool_id AS job_tool_id, job.tool_version AS job_tool_version, job.state AS job_state, job.info AS job_info, job.copied_from_job_id AS job_copied_from_job_id, job.command_line AS job_command_line, job.dependencies AS job_dependencies, job.param_filename AS job_param_filename, job.runner_name AS job_runner_name_1, job.stdout AS job_stdout, job.stderr AS job_stderr, job.exit_code AS job_exit_code, job.traceback AS job_traceback, job.session_id AS job_session_id, job.user_id AS job_user_id, job.job_runner_name AS job_job_runner_name, job.job_runner_external_id AS job_job_runner_external_id, job.destination_id AS job_destination_id, job.destination_params AS job_destination_params, job.object_store_id AS job_object_store_id, job.imported AS job_imported, job.params AS job_params, job.handler AS job_handler \nFROM job \nWHERE job.state = %(state_1)s AND job.handler = %(handler_1)s AND job.id NOT IN (SELECT job.id \nFROM job JOIN job_to_input_dataset ON job.id = job_to_input_dataset.job_id JOIN history_dataset_association ON history_dataset_association.id = job_to_input_dataset.dataset_id JOIN dataset ON dataset.id = history_dataset_association.dataset_id \nWHERE job.state = %(state_2)s AND (history_dataset_association.state = %(_state_1)s OR history_dataset_association.deleted = true OR dataset.state != %(state_3)s OR dataset.deleted = true)) AND job.id NOT IN (SELECT job.id \nFROM job JOIN job_to_input_library_dataset ON job.id = job_to_input_library_dataset.job_id JOIN library_dataset_dataset_association ON library_dataset_dataset_association.id = job_to_input_library_dataset.ldda_id JOIN dataset ON dataset.id = library_dataset_dataset_association.dataset_id \nWHERE job.state = %(state_4)s AND 
(library_dataset_dataset_association.state IS NOT NULL OR library_dataset_dataset_association.deleted = true OR dataset.state != %(state_5)s OR dataset.deleted = true)) ORDER BY job.id'] [parameters: {'state_3': 'ok', 'handler_1': 'main.web.1', 'state_1': 'new', '_state_1': 'failed_metadata', 'state_2': 'new', 'state_5': 'ok', 'state_4': 'new'}] (Background on this error at: http://sqlalche.me/e/e3q8)
galaxy.jobs.handler ERROR 2018-10-12 19:42:48,488 [p:102478,w:2,m:0] [JobHandlerQueue.monitor_thread] Exception in monitor_step
Traceback (most recent call last):
  File "lib/galaxy/jobs/handler.py", line 213, in __monitor
    self.__monitor_step()
  File "lib/galaxy/jobs/handler.py", line 272, in __monitor_step
    .order_by(model.Job.id).all()
  File "/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2737, in all
    return list(self)
  File "/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2889, in __iter__
    return self._execute_and_instances(context)
  File "/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2912, in _execute_and_instances
    result = conn.execute(querycontext.statement, self._params)
  File "/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 948, in execute
    return meth(self, multiparams, params)
  File "/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1200, in _execute_context
    context)
  File "/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1413, in _handle_dbapi_exception
    exc_info
  File "/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
    context)
  File "/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 507, in do_execute
    cursor.execute(statement, parameters)
OperationalError: (psycopg2.OperationalError) server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
 [SQL: 'SELECT EXISTS (SELECT history_dataset_association.id, history_dataset_association.history_id, history_dataset_association.dataset_id, history_dataset_association.create_time, history_dataset_association.update_time, history_dataset_association.state, history_dataset_association.copied_from_history_dataset_association_id, history_dataset_association.copied_from_library_dataset_dataset_association_id, history_dataset_association.name, history_dataset_association.info, history_dataset_association.blurb, history_dataset_association.peek, history_dataset_association.tool_version, history_dataset_association.extension, history_dataset_association.metadata, history_dataset_association.parent_id, history_dataset_association.designation, history_dataset_association.deleted, history_dataset_association.visible, history_dataset_association.extended_metadata_id, history_dataset_association.version, history_dataset_association.hid, history_dataset_association.purged, history_dataset_association.hidden_beneath_collection_instance_id \nFROM history_dataset_association, job_to_output_dataset \nWHERE job.id = job_to_output_dataset.job_id AND history_dataset_association.id = job_to_output_dataset.dataset_id AND history_dataset_association.deleted = true) AS anon_1, EXISTS (SELECT history_dataset_collection_association.id \nFROM history_dataset_collection_association, job_to_output_dataset_collection \nWHERE job.id = job_to_output_dataset_collection.job_id AND history_dataset_collection_association.id = job_to_output_dataset_collection.dataset_collection_id AND history_dataset_collection_association.deleted = true) AS anon_2, job.id AS job_id, job.create_time AS job_create_time, job.update_time AS job_update_time, job.history_id AS job_history_id, job.library_folder_id AS job_library_folder_id, job.tool_id AS job_tool_id, job.tool_version AS job_tool_version, job.state AS job_state, job.info AS job_info, job.copied_from_job_id AS job_copied_from_job_id, job.command_line AS job_command_line, job.dependencies AS job_dependencies, job.param_filename AS job_param_filename, job.runner_name AS job_runner_name_1, job.stdout AS job_stdout, job.stderr AS job_stderr, job.exit_code AS job_exit_code, job.traceback AS job_traceback, job.session_id AS job_session_id, job.user_id AS job_user_id, job.job_runner_name AS job_job_runner_name, job.job_runner_external_id AS job_job_runner_external_id, job.destination_id AS job_destination_id, job.destination_params AS job_destination_params, job.object_store_id AS job_object_store_id, job.imported AS job_imported, job.params AS job_params, job.handler AS job_handler \nFROM job \nWHERE job.state = %(state_1)s AND job.handler = %(handler_1)s AND job.id NOT IN (SELECT job.id \nFROM job JOIN job_to_input_dataset ON job.id = job_to_input_dataset.job_id JOIN history_dataset_association ON history_dataset_association.id = job_to_input_dataset.dataset_id JOIN dataset ON dataset.id = history_dataset_association.dataset_id \nWHERE job.state = %(state_2)s AND (history_dataset_association.state = %(_state_1)s OR history_dataset_association.deleted = true OR dataset.state != %(state_3)s OR dataset.deleted = true)) AND job.id NOT IN (SELECT job.id \nFROM job JOIN job_to_input_library_dataset ON job.id = job_to_input_library_dataset.job_id JOIN library_dataset_dataset_association ON library_dataset_dataset_association.id = job_to_input_library_dataset.ldda_id JOIN dataset ON dataset.id = library_dataset_dataset_association.dataset_id \nWHERE job.state = %(state_4)s AND 
(library_dataset_dataset_association.state IS NOT NULL OR library_dataset_dataset_association.deleted = true OR dataset.state != %(state_5)s OR dataset.deleted = true)) ORDER BY job.id'] [parameters: {'state_3': 'ok', 'handler_1': 'main.web.2', 'state_1': 'new', '_state_1': 'failed_metadata', 'state_2': 'new', 'state_5': 'ok', 'state_4': 'new'}] (Background on this error at: http://sqlalche.me/e/e3q8)
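
In case it helps with diagnosis: as I read the traceback, Galaxy's job handler lost its PostgreSQL connection mid-query. A quick triage I can run (the config path is from my install, and the user/database names match the connections shown further down the thread; add -p to psql if database_connection names a non-default port):

$ grep '^database_connection' /media/libraryfiles/bioinfoadmin/bioinfoadmin/config/galaxy.ini
$ sudo pg_lsclusters   # Debian/Ubuntu: list clusters with version, port, and status
$ psql -h 127.0.0.1 -U bioinfoadmin -d galaxy -c 'SELECT 1;'   # does the server accept connections?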


Re: error and question from using galaxy kickstart

Rui Wang
Never mind, folks: it turned out I had an older version of nginx running with a different config file. After I added the upload module and restarted it, I could get to the UI simply by entering the hostname.
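
For anyone who hits the same thing, these are the checks I should have run first; a sketch for a systemd host (nginx -T needs nginx >= 1.9.2):

$ sudo nginx -t                   # syntax-check the configuration nginx will load
$ sudo systemctl restart nginx
$ sudo nginx -T | grep -i upload  # confirm the upload-module directives are actually active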

But how to stop/start all of this is still unclear; please help if you can. :-)

I also see that I have a lot of Python and PostgreSQL processes:

bioinfo+ 101567 101415  0 19:41 ?        00:00:04 /media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/bin/python2 /media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/bin/uwsgi --virtualenv /media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv --ini-paste /media/libraryfiles/bioinfoadmin/bioinfoadmin/config/galaxy.ini --logdate --thunder-lock --master --processes 2 --threads 2 --logto /media/libraryfiles/bioinfoadmin/bioinfoadmin/uwsgi.log --socket 127.0.0.1:4001 --pythonpath lib --stats 127.0.0.1:9191 -b 16384
bioinfo+ 101568 101415  1 19:41 ?        00:01:38 /media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/bin/python ./lib/galaxy/main.py -c /media/libraryfiles/bioinfoadmin/bioinfoadmin/config/galaxy.ini --server-name=handler0 --log-file=/media/libraryfiles/bioinfoadmin/bioinfoadmin/handler0.log
bioinfo+ 101569 101415  1 19:41 ?        00:01:37 /media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/bin/python ./lib/galaxy/main.py -c /media/libraryfiles/bioinfoadmin/bioinfoadmin/config/galaxy.ini --server-name=handler1 --log-file=/media/libraryfiles/bioinfoadmin/bioinfoadmin/handler1.log
bioinfo+ 101570 101415  1 19:41 ?        00:01:38 /media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/bin/python ./lib/galaxy/main.py -c /media/libraryfiles/bioinfoadmin/bioinfoadmin/config/galaxy.ini --server-name=handler2 --log-file=/media/libraryfiles/bioinfoadmin/bioinfoadmin/handler2.log
bioinfo+ 101571 101415  1 19:41 ?        00:01:38 /media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/bin/python ./lib/galaxy/main.py -c /media/libraryfiles/bioinfoadmin/bioinfoadmin/config/galaxy.ini --server-name=handler3 --log-file=/media/libraryfiles/bioinfoadmin/bioinfoadmin/handler3.log
bioinfo+ 102477 101567  1 19:41 ?        00:01:39 /media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/bin/python2 /media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/bin/uwsgi --virtualenv /media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv --ini-paste /media/libraryfiles/bioinfoadmin/bioinfoadmin/config/galaxy.ini --logdate --thunder-lock --master --processes 2 --threads 2 --logto /media/libraryfiles/bioinfoadmin/bioinfoadmin/uwsgi.log --socket 127.0.0.1:4001 --pythonpath lib --stats 127.0.0.1:9191 -b 16384
bioinfo+ 102478 101567  1 19:41 ?        00:01:39 /media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/bin/python2 /media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/bin/uwsgi --virtualenv /media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv --ini-paste /media/libraryfiles/bioinfoadmin/bioinfoadmin/config/galaxy.ini --logdate --thunder-lock --master --processes 2 --threads 2 --logto /media/libraryfiles/bioinfoadmin/bioinfoadmin/uwsgi.log --socket 127.0.0.1:4001 --pythonpath lib --stats 127.0.0.1:9191 -b 16384

$ ps -ef | grep -i post
postgres 102744      1  0 19:41 ?        00:00:00 /usr/lib/postgresql/10/bin/postgres -D /var/lib/postgresql/10/main -c config_file=/etc/postgresql/10/main/postgresql.conf
postgres 102747 102744  0 19:41 ?        00:00:00 postgres: 10/main: checkpointer process
postgres 102748 102744  0 19:41 ?        00:00:00 postgres: 10/main: writer process
postgres 102749 102744  0 19:41 ?        00:00:00 postgres: 10/main: wal writer process
postgres 102750 102744  0 19:41 ?        00:00:00 postgres: 10/main: autovacuum launcher process
postgres 102751 102744  0 19:41 ?        00:00:00 postgres: 10/main: stats collector process
postgres 102752 102744  0 19:41 ?        00:00:00 postgres: 10/main: bgworker: logical replication launcher
postgres 103679 101415  0 19:42 ?        00:00:00 /usr/lib/postgresql/9.5/bin/postmaster -D /var/lib/postgresql/9.5/main -c config_file=/etc/postgresql/9.5/main/postgresql.conf
postgres 103689 103679  0 19:42 ?        00:00:00 postgres: checkpointer process
postgres 103690 103679  0 19:42 ?        00:00:00 postgres: writer process
postgres 103691 103679  0 19:42 ?        00:00:00 postgres: wal writer process
postgres 103692 103679  0 19:42 ?        00:00:00 postgres: autovacuum launcher process
postgres 103693 103679  0 19:42 ?        00:00:01 postgres: stats collector process
postgres 103694 103679  0 19:42 ?        00:00:04 postgres: bioinfoadmin galaxy 127.0.0.1(36144) idle
postgres 103695 103679  0 19:42 ?        00:00:04 postgres: bioinfoadmin galaxy 127.0.0.1(36146) idle
postgres 103696 103679  0 19:42 ?        00:00:04 postgres: bioinfoadmin galaxy 127.0.0.1(36148) idle
postgres 103697 103679  0 19:42 ?        00:00:04 postgres: bioinfoadmin galaxy 127.0.0.1(36150) idle
postgres 103698 103679  0 19:42 ?        00:00:11 postgres: bioinfoadmin galaxy 127.0.0.1(36152) idle
postgres 103699 103679  0 19:42 ?        00:00:11 postgres: bioinfoadmin galaxy 127.0.0.1(36154) idle
postgres 103700 103679  0 19:42 ?        00:00:12 postgres: bioinfoadmin galaxy 127.0.0.1(36156) idle
postgres 103701 103679  0 19:42 ?        00:00:12 postgres: bioinfoadmin galaxy 127.0.0.1(36158) idle
postgres 103702 103679  0 19:42 ?        00:00:12 postgres: bioinfoadmin galaxy 127.0.0.1(36160) idle
postgres 103703 103679  0 19:42 ?        00:00:11 postgres: bioinfoadmin galaxy 127.0.0.1(36162) idle
postgres 103704 103679  0 19:42 ?        00:00:11 postgres: bioinfoadmin galaxy 127.0.0.1(36164) idle
postgres 103705 103679  0 19:42 ?        00:00:11 postgres: bioinfoadmin galaxy 127.0.0.1(36166) idle
postgres 103706 103679  0 19:42 ?        00:00:11 postgres: bioinfoadmin galaxy 127.0.0.1(36168) idle
postgres 103707 103679  0 19:42 ?        00:00:11 postgres: bioinfoadmin galaxy 127.0.0.1(36170) idle
postgres 103708 103679  0 19:42 ?        00:00:11 postgres: bioinfoadmin galaxy 127.0.0.1(36172) idle
postgres 103709 103679  0 19:42 ?        00:00:11 postgres: bioinfoadmin galaxy 127.0.0.1(36174) idle
postgres 103710 103679  0 19:42 ?        00:00:04 postgres: bioinfoadmin galaxy 127.0.0.1(36176) idle
postgres 103711 103679  0 19:42 ?        00:00:04 postgres: bioinfoadmin galaxy 127.0.0.1(36178) idle
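
One thing that stands out above: there are two PostgreSQL servers running, the system 10/main cluster (parent PID 1) and a 9.5 cluster whose parent, 101415, is the same process that parents the uwsgi and handler processes, so the playbook apparently runs its own 9.5 postmaster, and all of the galaxy connections sit on it. If the system cluster is the unwanted one, the Debian/Ubuntu cluster tools can deal with it; a sketch (stop whichever cluster database_connection does not point at, not necessarily this one):

$ sudo pg_lsclusters                         # confirm which cluster Galaxy actually uses
$ sudo pg_ctlcluster 10 main stop            # stop the stray system cluster
$ sudo systemctl disable postgresql@10-main  # and keep it from starting at boot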

Is there a script to do the work?
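
My guess (corrections welcome) is that the common parent PID 101415 is a supervisord set up by the playbook, in which case supervisorctl would be that script; a sketch, since I do not know the exact program and group names in the generated supervisor config:

$ sudo supervisorctl status     # list the managed programs and their states
$ sudo supervisorctl stop all   # stop everything supervisord manages
$ sudo supervisorctl start all  # bring it all back up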

Thanks,
Rui
