I am running a Snakefile on a cluster using this default config:
"__default__":
"account" : "myAccount"
"queue" : "myQueue"
"nCPUs" : "16"
"memory" : 20000
"resources" : "\"select[mem>20000] rusage[mem=20000] span[hosts=1]\""
"name" : "JOBNAME.{rule}.{wildcards}"
"output" : "{rule}.{wildcards}.out"
"error" : "{rule}.{wildcards}.err"
"time" : "24:00:00"
It runs OK for some rules, but it raises this error for one of the rules:
Traceback (most recent call last):
File "home/conda3_64/lib/python3.5/site-packages/snakemake/__init__.py", line 537, in snakemake
report=report)
File "home/conda3_64/lib/python3.5/site-packages/snakemake/workflow.py", line 653, in execute
success = scheduler.schedule()
File "home/conda3_64/lib/python3.5/site-packages/snakemake/scheduler.py", line 286, in schedule
self.run(job)
File "home/conda3_64/lib/python3.5/site-packages/snakemake/scheduler.py", line 302, in run
error_callback=self._error)
File "home/conda3_64/lib/python3.5/site-packages/snakemake/executors.py", line 638, in run
jobscript = self.get_jobscript(job)
File "home/conda3_64/lib/python3.5/site-packages/snakemake/executors.py", line 496, in get_jobscript
cluster=self.cluster_wildcards(job))
File "home/conda3_64/lib/python3.5/site-packages/snakemake/executors.py", line 556, in cluster_wildcards
return Wildcards(fromdict=self.cluster_params(job))
File "home/conda3_64/lib/python3.5/site-packages/snakemake/executors.py", line 547, in cluster_params
cluster.update(self.cluster_config.get(job.name, dict()))
ValueError: need more than 1 value to unpack
This is how I run Snakemake:
snakemake -j 20 --cluster-config ./config.yaml --cluster "qsub -A {cluster.account} -l walltime={cluster.time} -q {cluster.queue} -l nodes=1:ppn={cluster.nCPUs},mem={cluster.memory}" -p
If I run it without the cluster options (plain snakemake), it runs normally.
A similar error is described in "ValueError: need more than 1 value to unpack" (Python), but I could not relate it to my case.
I cannot test it right now, but based on the error I suspect the issue is that your JOBNAME placeholder (the name entry in the cluster config) is not explicitly passed in the call to qsub.
Adding -N some_name to your qsub arguments should resolve it.
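For example (an untested sketch that simply forwards the existing name entry from config.yaml to qsub), the invocation would become something like:
snakemake -j 20 --cluster-config ./config.yaml --cluster "qsub -A {cluster.account} -l walltime={cluster.time} -q {cluster.queue} -l nodes=1:ppn={cluster.nCPUs},mem={cluster.memory} -N {cluster.name}" -p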
I work with SQL Server 2019 on a server, and I face an issue when I try to read an Excel file from a shared path using Python 3.10.
SQL Server is installed on server 7.7, and the shared files I need to access and read are on the same server.
When I read the Excel file from the local path D:\ExportExcel\testData.xlsx, it works.
But when I try to read the Excel file from a shared path, as below:
EXECUTE sp_execute_external_script
    @language = N'Python',
    @script = N'import pandas as pd
df = pd.read_excel(r"\\192.168.7.7\ExportExcel\testData.xlsx", sheet_name = "Sheet1")
print(df)';
I get an error:
Msg 39004, Level 16, State 20, Line 48
A 'Python' script error occurred during execution of 'sp_execute_external_script' with HRESULT 0x80004004.
Msg 39019, Level 16, State 2, Line 48
An external script error occurred:
Error in execution. Check the output for more information.
Traceback (most recent call last):
File "<string>", line 5, in <module>
File "D:\ProgramData\MSSQLSERVER\Temp-PY\Appcontainer1\9D383F5D-F77E-444E-9A82-B8839C8801E3\sqlindb_0.py", line 31, in transform
df = pd.read_excel(r"\\192.168.7.7\ExportExcel\testData.xlsx", sheet_name = "Sheet1")
File "D:\SQL Data\MSSQL15.MSSQLSERVER\PYTHON_SERVICES\lib\site-packages\pandas\util\_decorators.py", line 178, in wrapper
return func(*args, **kwargs)
File "D:\SQL Data\MSSQL15.MSSQLSERVER\PYTHON_SERVICES\lib\site-packages\pandas\util\_decorators.py", line 178, in wrapper
return func(*args, **kwargs)
File "D:\SQL Data\MSSQL15.MSSQLSERVER\PYTHON_SERVICES\lib\site-packages\pandas\io\excel.py", line 307, in read_excel
io = ExcelFile(io, engine=engine)
File "D:\SQL Data\MSSQL15.MSSQLSERVER\PYTHON_SERVICES\lib\site-packages\pandas\io\excel.py", line 394, in __init__
Msg 39019, Level 16, State 2, Line 48
An external script error occurred:
self.book = xlrd.open_workbook(self.io)
File "D:\SQL Data\MSSQL15.MSSQLSERVER\PYTHON_SERVICES\lib\site-packages\xlrd\__init__.py", line 111, in open_workbook
with open(filename, "rb") as f:
PermissionError: [Errno 13] Permission denied: '\\192.168.7.7\ExportExcel\testData.xlsx'
SqlSatelliteCall error: Error in execution. Check the output for more information.
STDOUT message(s) from external script:
SqlSatelliteCall function failed. Please see the console output for more information.
Traceback (most recent call last):
File "D:\SQL Data\MSSQL15.MSSQLSERVER\PYTHON_SERVICES\lib\site-packages\revoscalepy\computecontext\RxInSqlServer.py", line 605, in rx_sql_satellite_call
rx_native_call("SqlSatelliteCall", params)
File "D:\SQL Data\MSSQL15.MSSQLSERVER\PYTHON_SERVICES\lib\site-packages\revoscalepy\RxSerializable.py", line 375, in rx_native_call
ret = px_call(functionname, params)
RuntimeError: revoscalepy function failed.
How can I solve the issue above?
What I tried:
I tried to open the shared path from Run; I can open it, create a new file, and read and write in the same path.
I tried to use another tool for reading, OPENROWSET:
select *
from OPENROWSET('Microsoft.ACE.OLEDB.12.0', 'Excel 12.0 Xml;Database=\\192.168.7.7\ExportExcel\testData.xlsx;HDR=YES','select * FROM [Sheet1$]')
and it read the Excel file successfully.
The folder path and the file have full permissions: NETWORK SERVICE, the owner, Administrator, Authenticated Users, ALL APPLICATION PACKAGES, and Everyone all have full control.
What could be the issue, please?
I also tried running the Python script from PyCharm; reading from the shared path succeeds there.
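One thing worth checking (a diagnostic sketch, not a fix): which Windows account the in-database Python actually runs under. The Appcontainer1 path in the error suggests the script runs as one of the SQL Server Launchpad worker accounts rather than as the interactive user used in Run or PyCharm, and that account may lack rights on the network share. Something like the following would show the effective user and whether it can read the file:
EXECUTE sp_execute_external_script
    @language = N'Python',
    @script = N'import getpass, os
print("running as:", getpass.getuser())
print("can read share:", os.access(r"\\192.168.7.7\ExportExcel\testData.xlsx", os.R_OK))';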
Previously my Trimmomatic shell command included the following trimmers:
ILLUMINACLIP:adapters.fa:2:30:10 LEADING:3 TRAILING:3 SLIDINGWINDOW:4:15 MINLEN:36
For the Snakemake wrapper for Trimmomatic, only LEADING:3 and MINLEN:36 work in params: trimmer:
rule trimming:
    input:
        r1 = lambda wildcards: getHome(wildcards.sample)[0],
        r2 = lambda wildcards: getHome(wildcards.sample)[1]
    output:
        r1 = os.path.join(dirs_dict["TRIM_DIR"],config["TRIM_TOOL"],"{sample}_R1_trim_paired.fastq.gz"),
        r1_unpaired = os.path.join(dirs_dict["TRIM_DIR"],config["TRIM_TOOL"],"{sample}_R1_trim_unpaired.fastq.gz"),
        r2 = os.path.join(dirs_dict["TRIM_DIR"],config["TRIM_TOOL"],"{sample}_R2_trim_paired.fastq.gz"),
        r2_unpaired = os.path.join(dirs_dict["TRIM_DIR"],config["TRIM_TOOL"],"{sample}_R2_trim_unpaired.fastq.gz")
    log: os.path.join(dirs_dict["LOG_DIR"],config["TRIM_TOOL"],"{sample}.log")
    threads: 32
    params:
        # list of trimmers (see manual)
        trimmer=["LEADING:3", "MINLEN:36"],
        # optional parameters
        extra="",
        compression_level="-9"
    resources:
        mem = 1000,
        time = 120
    message: """--- Trimming FASTQ files with Trimmomatic."""
    wrapper:
        "0.64.0/bio/trimmomatic/pe"
When I try to use any of the other trimmers (ILLUMINACLIP:adapters.fa:2:30:10, TRAILING:3, SLIDINGWINDOW:4:15), it fails.
For example, trying only TRAILING:3:
rule trimming:
    input:
        r1 = lambda wildcards: getHome(wildcards.sample)[0],
        r2 = lambda wildcards: getHome(wildcards.sample)[1]
    output:
        r1 = os.path.join(dirs_dict["TRIM_DIR"],config["TRIM_TOOL"],"{sample}_R1_trim_paired.fastq.gz"),
        r1_unpaired = os.path.join(dirs_dict["TRIM_DIR"],config["TRIM_TOOL"],"{sample}_R1_trim_unpaired.fastq.gz"),
        r2 = os.path.join(dirs_dict["TRIM_DIR"],config["TRIM_TOOL"],"{sample}_R2_trim_paired.fastq.gz"),
        r2_unpaired = os.path.join(dirs_dict["TRIM_DIR"],config["TRIM_TOOL"],"{sample}_R2_trim_unpaired.fastq.gz")
    log: os.path.join(dirs_dict["LOG_DIR"],config["TRIM_TOOL"],"{sample}.log")
    threads: 32
    params:
        # list of trimmers (see manual)
        trimmer=["TRAILING:3"],
        # optional parameters
        extra="",
        compression_level="-9"
    resources:
        mem = 1000,
        time = 120
    message: """--- Trimming FASTQ files with Trimmomatic."""
    wrapper:
        "0.64.0/bio/trimmomatic/pe"
Results in the following error:
Building DAG of jobs...
Using shell: /usr/bin/bash
Provided cores: 1 (use --cores to define parallelism)
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 qc_before_align_r1
1
[Mon Sep 14 13:42:08 2020]
Job 0: --- Quality check of raw data with FastQC before alignment.
Activating conda environment: /home/moldach/wrappers/.snakemake/conda/975fb1fd
Activating conda environment: /home/moldach/wrappers/.snakemake/conda/975fb1fd
Skipping ' 2> logs/fastqc/before_align/MTG324_R1.log' which didn't exist, or couldn't be read
Failed to process file MTG324_R1_trim_paired.fastq.gz
uk.ac.babraham.FastQC.Sequence.SequenceFormatException: Ran out of data in the middle of a fastq entry. Your file is probably truncated
at uk.ac.babraham.FastQC.Sequence.FastQFile.readNext(FastQFile.java:179)
at uk.ac.babraham.FastQC.Sequence.FastQFile.next(FastQFile.java:125)
at uk.ac.babraham.FastQC.Analysis.AnalysisRunner.run(AnalysisRunner.java:77)
at java.base/java.lang.Thread.run(Thread.java:834)
mv: cannot stat ‘/tmp/tmpsnncjthh/MTG324_R1_trim_paired_fastqc.html’: No such file or directory
Traceback (most recent call last):
File "/home/moldach/wrappers/.snakemake/scripts/tmpp34b98yj.wrapper.py", line 47, in <module>
shell("mv {html_path:q} {snakemake.output.html:q}")
File "/home/moldach/anaconda3/envs/snakemake/lib/python3.7/site-packages/snakemake/shell.py", line 205, in __new__
raise sp.CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'set -euo pipefail; mv /tmp/tmpsnncjthh/MTG324_R1_trim_paired_fastqc.html qc/fastQC/before_align/MTG324_R1_trim_paired_fastqc.html' returned non-zero exit status $
[Mon Sep 14 13:45:16 2020]
Error in rule qc_before_align_r1:
jobid: 0
output: qc/fastQC/before_align/MTG324_R1_trim_paired_fastqc.html, qc/fastQC/before_align/MTG324_R1_trim_paired_fastqc.zip
log: logs/fastqc/before_align/MTG324_R1.log (check log file(s) for error message)
conda-env: /home/moldach/wrappers/.snakemake/conda/975fb1fd
RuleException:
CalledProcessError in line 181 of /home/moldach/wrappers/Trim:
Command 'source /home/moldach/anaconda3/bin/activate '/home/moldach/wrappers/.snakemake/conda/975fb1fd'; set -euo pipefail; python /home/moldach/wrappers/.snakemake/scripts/tmpp34b98yj.wrapper.py' retu$
File "/home/moldach/anaconda3/envs/snakemake/lib/python3.7/site-packages/snakemake/executors/__init__.py", line 2189, in run_wrapper
File "/home/moldach/wrappers/Trim", line 181, in __rule_qc_before_align_r1
File "/home/moldach/anaconda3/envs/snakemake/lib/python3.7/site-packages/snakemake/executors/__init__.py", line 529, in _callback
File "/home/moldach/anaconda3/envs/snakemake/lib/python3.7/concurrent/futures/thread.py", line 57, in run
File "/home/moldach/anaconda3/envs/snakemake/lib/python3.7/site-packages/snakemake/executors/__init__.py", line 515, in cached_or_run
File "/home/moldach/anaconda3/envs/snakemake/lib/python3.7/site-packages/snakemake/executors/__init__.py", line 2201, in run_wrapper
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
When I run a query on a table with around 20 million records, the job succeeds in other environments, but when I run the same query on the Windows command line I get an error saying "You have encountered a bug in the BigQuery CLI."
Query:
bq --quiet --format=csv --project_id="proj_id" query --max_rows=999999999 "select ids from dataset.table group each by id"
Does the execution change according to the OS? The same query works for 5 million records.
Error log:
== UTC timestamp ==
2015-02-19 08:33:16
== Error trace ==
File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\platform/bq\bq.py", line 719, in RunSafely
return_value = self.RunWithArgs(*args, **kwds)
File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\platform/bq\bq.py", line 1086, in RunWithArgs
max_rows=self.max_rows)
File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\platform\bq\bigquery_client.py", line 846, in ReadSchemaAndJobRows
return reader.ReadSchemaAndRows(start_row, max_rows)
File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\platform\bq\bigquery_client.py", line 2200, in ReadSchemaAndRows
rows.append(self._ConvertFromFV(schema, row))
========================================
The issue I'm having is that I'm trying to build an Angstrom image from scratch using bitbake (since Angstrom is now Yocto compatible), but I've run into an error the moment I run bitbake systemd-image:
Traceback (most recent call last):
File "/usr/bin/bitbake", line 234, in <module>
ret = main()
File "/usr/bin/bitbake", line 197, in main
server = ProcessServer(server_channel, event_queue, configuration)
File "/usr/lib/pymodules/python2.7/bb/server/process.py", line 78, in __init__
self.cooker = BBCooker(configuration, self.register_idle_function)
File "/usr/lib/pymodules/python2.7/bb/cooker.py", line 76, in __init__
self.parseConfigurationFiles(self.configuration.file)
File "/usr/lib/pymodules/python2.7/bb/cooker.py", line 510, in parseConfigurationFiles
data = _parse(os.path.join("conf", "bitbake.conf"), data)
TypeError: getVar() takes exactly 3 arguments (2 given)
ERROR: Error evaluating '${TARGET_OS}:${TRANSLATED_TARGET_ARCH}:build-${BUILD_OS}:pn-${PN}:${MACHINEOVERRIDES}:${DISTROOVERRIDES}:${CLASSOVERRIDE}:forcevariable${@bb.utils.contains("TUNE_FEATURES", "thumb", ":thumb", "", d)}${@bb.utils.contains("TUNE_FEATURES", "no-thumb-interwork", ":thumb-interwork", "", d)}'
Traceback (most recent call last):
File "/usr/lib/pymodules/python2.7/bb/data_smart.py", line 116, in expandWithRefs
s = __expand_var_regexp__.sub(varparse.var_sub, s)
File "/usr/lib/pymodules/python2.7/bb/data_smart.py", line 60, in var_sub
var = self.d.getVar(key, 1)
File "/usr/lib/pymodules/python2.7/bb/data_smart.py", line 260, in getVar
return self.expand(value, var)
File "/usr/lib/pymodules/python2.7/bb/data_smart.py", line 132, in expand
return self.expandWithRefs(s, varname).value
File "/usr/lib/pymodules/python2.7/bb/data_smart.py", line 117, in expandWithRefs
s = __expand_python_regexp__.sub(varparse.python_sub, s)
TypeError: getVar() takes exactly 3 arguments (2 given)
ERROR: Error evaluating '${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE'),d)[0] or 'defaultpkgname'}'
Traceback (most recent call last):
File "/usr/lib/pymodules/python2.7/bb/data_smart.py", line 117, in expandWithRefs
s = __expand_python_regexp__.sub(varparse.python_sub, s)
File "/usr/lib/pymodules/python2.7/bb/data_smart.py", line 76, in python_sub
value = utils.better_eval(codeobj, DataContext(self.d))
File "/usr/lib/pymodules/python2.7/bb/utils.py", line 387, in better_eval
return eval(source, _context, locals)
File "PN", line 1, in <module>
TypeError: getVar() takes exactly 3 arguments (2 given)
I've been at this for a while now, searching on different sites. Originally I tried following the guide in the developer section of the Angstrom site, but once I ran into some errors (prior to the one I'm posting here), I found Derek Molloy's site http://derekmolloy.ie/building-angstrom-for-beaglebone-from-source/, which solved those errors and gave a little more detail on the process.
Eventually I stumbled onto another forum post which described my problem, but unfortunately the answers weren't really clear (to me anyway): http://comments.gmane.org/gmane.linux.distributions.angstrom.devel/7431. I'm at a loss as to what could be wrong, and I'm pretty much new to the Yocto Project, so I'm unsure whether there are steps missing or something implicit that I have overlooked. I would deeply appreciate anyone who could point me in the right direction on this.
As a side note, I've been wondering whether it could have something to do with the environment-angstrom-... file I have: mine is environment-angstrom-v2013.12, and all the other examples use previous versions, so I'm wondering if there's a new step involved when working with this one.
Is there a reason why you are using a system-wide bitbake instead of the one that is compatible with that release of Angstrom?
Don't use a system-wide bitbake, as the bitbake API can and does change over time. Use the corresponding bitbake for that release of Angstrom.
(This is breaking because your bitbake requires getVar() to take three arguments, but your Angstrom layers are only passing two.)
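Purely as an illustration of the mismatch (a simplified, hypothetical class, not the actual bitbake source): the installed bitbake's getVar() still requires an explicit expand argument, while the newer metadata calls it with the variable name only:
# illustrative sketch only; not the real bitbake implementation
class DataSmart:
    def getVar(self, var, expand):     # older API: 'expand' has no default value
        return "..."

d = DataSmart()
d.getVar("PN", True)   # old-style call: works
d.getVar("PN")         # newer-style call used by the metadata ->
                       # TypeError: getVar() takes exactly 3 arguments (2 given)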
I had run an overnight job using a script.
It worked for a lot of tables, and then about 4 hours ago (roughly 7 am IST) it started behaving strangely.
Now even single commands give the same error:
bq load --max_bad_records=10 tbl163.a_V3_14Jun2012 a_V3_14Jun2012.log.gz ../schema/analyze.schema
Error:
BigQuery error in load operation: Could not connect with BigQuery server, http
response status: 502
Update: I have just received the following error:
You have encountered a bug in the BigQuery CLI. Please send an email to bigquery-team@google.com to report this, with the following information:
========================================
== Platform ==
CPython:2.7.3:Linux-3.2.0-25-virtual-x86_64-with-Ubuntu-12.04-precise
== bq version ==
v2.0.6
== Command line ==
['/usr/local/bin/bq', 'load', '--max_bad_records=10', 'vizvrm299.analyze_VIZVRM299_26Jun2012', 'analyze_VIZVRM299_26Jun2012.log.gz', '../schema/analyze.schema']
== Error trace ==
File "build/bdist.linux-x86_64/egg/bq.py", line 614, in RunSafely
self.RunWithArgs(*args, **kwds)
File "build/bdist.linux-x86_64/egg/bq.py", line 791, in RunWithArgs
job = client.Load(table_reference, source, schema=schema, **opts)
File "build/bdist.linux-x86_64/egg/bigquery_client.py", line 1473, in Load
upload_file=upload_file, **kwds)
File "build/bdist.linux-x86_64/egg/bigquery_client.py", line 1228, in ExecuteJob
job_id=job_id)
File "build/bdist.linux-x86_64/egg/bigquery_client.py", line 1214, in RunJobSynchronously
upload_file=upload_file, job_id=job_id)
File "build/bdist.linux-x86_64/egg/bigquery_client.py", line 1208, in StartJob
projectId=project_id).execute()
File "build/bdist.linux-x86_64/egg/bigquery_client.py", line 184, in execute
return super(BigqueryHttp, self).execute(**kwds)
File "build/bdist.linux-x86_64/egg/apiclient/http.py", line 644, in execute
_, body = self.next_chunk(http)
File "build/bdist.linux-x86_64/egg/apiclient/http.py", line 708, in next_chunk
raise ResumableUploadError("Failed to retrieve starting URI.")
========================================
Unexpected exception in load operation: Failed to retrieve starting URI.
Just to close out this question, are you still observing these errors? This looks like a network configuration issue, not directly an issue with the bq command line tool. However, AFAIK, bq doesn't provide functions for resuming job insertion if there is a problem with the network.
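If the 502s are transient network failures, one workaround (a rough sketch only; the table, file, schema, and retry count below are placeholders taken from the command above, and it reruns the whole load rather than resuming it) is to wrap the bq load call in a retry loop:
import subprocess
import time

cmd = [
    "bq", "load", "--max_bad_records=10",
    "tbl163.a_V3_14Jun2012",        # destination table from the question
    "a_V3_14Jun2012.log.gz",        # source file
    "../schema/analyze.schema",     # schema file
]

for attempt in range(5):            # arbitrary retry count
    if subprocess.run(cmd).returncode == 0:
        break
    time.sleep(30 * (attempt + 1))  # simple linear backoff between retries
else:
    raise SystemExit("bq load still failing after retries")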