Maximum length of command parameters in the Windows 8 Task Scheduler

I have a console application that takes command line arguments as input. I have to schedule it as a task on Windows 8 using Win32.TaskScheduler. The problem is that whenever the combined length of my command line arguments exceeds 450 characters, I get a warning like this:
Task registered task "TaskName", but not all specified triggers will start the task. User Action: Ensure all the task triggers are valid as configured. Additional Data: Error Value: 2147944183.
And eventually the task does not run at its scheduled time.
Is there any limit on command line arguments' length?

I encountered a similar issue with scheduled tasks and command line arguments that exceeded 260 characters. We got the same useless error that you did.
Eventually we had to shorten the command line arguments (by reducing the length of the file paths in the argument list).
We had this issue on Windows Server 2012 R2 64-bit; I'm not sure whether the bitness has anything to do with it...
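If shortening the arguments isn't an option, a common workaround is to write the long argument list to a file and pass only that file's path on the command line. Here is a minimal sketch in Python, assuming your console application can be changed to read its real arguments from such a file; the task name, paths, payload and schedule are all made up:
import json
import subprocess
import tempfile

# Write the long argument list to a file; the scheduled command then only
# carries one short path instead of hundreds of characters of arguments.
payload = {"input": r"C:\data\some\very\long\path\input.csv", "mode": "full"}  # hypothetical payload
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(payload, f)
    args_path = f.name

# Register the task via schtasks; /TR now stays well under any length limit.
subprocess.run([
    "schtasks", "/Create", "/TN", "MyTask",
    "/TR", rf'C:\apps\MyConsoleApp.exe "{args_path}"',
    "/SC", "DAILY", "/ST", "09:00",
], check=True)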

Related

DBT: How to fix Database Error Expecting Value?

I ran into trouble today while running Airflow and airflow-dbt-python. I tried to debug using the logs, and the error shown there was this one:
[2022-12-27, 13:53:53 CET] {functions.py:226} ERROR - 12:53:53.642186 [error] [MainThread]: Encountered an error:
Database Error
Expecting value: line 2 column 5 (char 5)
Quite a weird one.
Check the credentials file that allows DBT to run queries on your database (we run DBT against BigQuery); in our case the credentials file was empty, and "Expecting value" is the message a JSON parser emits when it hits empty or malformed input. We even tried running DBT directly on the worker instead of through Airflow, and got exactly the same error. Unfortunately this error is not very explicit.
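A quick sanity check is to parse the credentials file yourself before blaming DBT. A minimal sketch, with a made-up keyfile path:
import json

keyfile = "/path/to/service-account.json"  # hypothetical path; use the keyfile from your profiles.yml
with open(keyfile) as f:
    # An empty or truncated file raises json.JSONDecodeError with the same
    # "Expecting value: line X column Y (char Z)" wording seen in the DBT log.
    creds = json.load(f)
print(creds.get("project_id"))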

Snakemake in cluster mode with --no-shared-fs: How to set cluster-status

I'm running Snakemake in a cluster environment and would like to use S3 as shared file system for writing output files.
Options --default-remote-provider, --default-remote-prefix and --no-shared-fs are set accordingly. The cluster uses UGE as scheduler, so setting --cluster is straightforward, but how do I set --cluster-status, whose use is enforced when using --no-shared-fs?
My best guess was a naive --cluster-status "qstat -j", which resulted in
subprocess.CalledProcessError: Command 'qstat Your job 2 ("snakejob.bwa_map.1.sh") has been submitted' returned non-zero exit status 1.
So I guess my question is, how do I get the actual jobid in there?
Thanks!
Andreas
EDIT 1:
I found https://groups.google.com/forum/#!topic/snakemake/7cyqAIfgeq4, which says cluster-status has to be a script. So I wrote a Python script that can parse the line above; however, Snakemake still fails with:
/bin/sh: -c: line 0: syntax error near unexpected token `('
/bin/sh: -c: line 0: `/home/ec2-user/clusterstatus.py Your job 2 ("snakejob.bwa_map.1.sh") has been submitted'
...
subprocess.CalledProcessError: Command '/home/ec2-user/clusterstatus.py
Your job 2 ("snakejob.bwa_map.1.sh") has been submitted' returned non-zero exit status 1.
To answer my own question:
First, I needed the -terse option for qsub (which I had not added at first; Snakemake somehow remembered the wrong cluster command).
Second, the --cluster-status argument needs to point to a script that can look up the job status (with the job id as its only argument) and print "failed", "running" or "success".
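A minimal sketch of such a status script for UGE/SGE follows; qstat and qacct output formats vary between sites, so treat the parsing as an assumption to adapt:
#!/usr/bin/env python3
# Snakemake --cluster-status script for UGE/SGE: takes a job id and
# prints "running", "success" or "failed".
import subprocess
import sys

jobid = sys.argv[1]

# qstat -j <jobid> exits 0 while the job is still pending or running.
if subprocess.run(["qstat", "-j", jobid],
                  stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL).returncode == 0:
    print("running")
    sys.exit(0)

# Once the job has left the queue, ask qacct for its exit status.
acct = subprocess.run(["qacct", "-j", jobid], capture_output=True, text=True)
for line in acct.stdout.splitlines():
    if line.startswith("exit_status"):
        print("success" if line.split()[1] == "0" else "failed")
        break
else:
    print("failed")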

Upload job fails on the same file that was uploaded successfully before

I'm running a regular upload job that loads CSV into BigQuery. The job runs every hour. According to a recent failure log, it says:
Error: [REASON] invalid [MESSAGE] Invalid argument: service.geotab.com [LOCATION] File: 0 / Offset:268436098 / Line:218637 / Field:2
Error: [REASON] invalid [MESSAGE] Too many errors encountered. Limit is: 0. [LOCATION]
I went to line 218638 (the original CSV has a header line, so I assume 218638 is the actual failed line; let me know if I'm wrong) but it seems all right. I checked the corresponding table in BigQuery, and it has that line too, which means I actually uploaded this line successfully before.
Then why does it cause failures now?
project id: red-road-574
Job ID: Job_Upload-7EDCB180-2A2E-492B-9143-BEFFB36E5BB5
This indicates that there was a problem with the data in your file, where it didn't match the schema.
The error message says it occurred at File: 0 / Offset:268436098 / Line:218637 / Field:2. This means the first file (it looks like you just had one), and then the chunk of the file starting at 268436098 bytes from the beginning of the file, then the 218637th line from that file offset.
The reason for the offset portion is that BigQuery processes large files in parallel across multiple workers. Each file worker starts at an offset from the beginning of the file. The offset that we include is the offset that the worker started from.
From the rest of the error message, it looks like the string service.geotab.com showed up in the second field, but the second field was a number, and service.geotab.com isn't a valid number. Perhaps there was a stray newline?
You can see what the lines looked like around the error by doing:
cat <yourfile> | tail -c +268436098 | tail -n +218636 | head -3
This will print out three lines... the one before the error (since I used -n +218636 instead of +218637), the one that had the error, and the next line as well.
Note that if this is just one line in the file that has a problem, you may be able to work around the issue by specifying maxBadRecords.
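For reference, here is a minimal sketch of such a load with the current google-cloud-bigquery Python client; the bucket, dataset and table names are placeholders:
from google.cloud import bigquery

client = bigquery.Client(project="red-road-574")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,  # the file has a header line
    max_bad_records=1,    # tolerate one malformed row instead of failing the job
)

load_job = client.load_table_from_uri(
    "gs://your-bucket/your-file.csv",        # placeholder URI
    "red-road-574.your_dataset.your_table",  # placeholder table
    job_config=job_config,
)
load_job.result()  # waits, and raises if the job still fails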

tcl tcltest unknown option -run

When I run ANY test I get the same message. Here is an example test:
package require tcltest
namespace import -force ::tcltest::*
test foo-1.1 {save 1 in variable name foo} {} {
set foo 1
} {1}
I get the following output:
WARNING: unknown option -run: should be one of -asidefromdir, -constraints, -debug, -errfile, -file, -limitconstraints, -load, -loadfile, -match, -notfile, -outfile, -preservecore, -relateddir, -singleproc, -skip, -testdir, -tmpdir, or -verbose
I've tried multiple tests and nothing seems to work. Does anyone know how to get this working?
Update #1:
The above error was my fault; it was caused by the way it was run in my script. However, if I run the following at a command line, I get no output:
[root@server1 ~]$ tcl
tcl>package require tcltest
2.3.3
tcl>namespace import -force ::tcltest::*
tcl>test foo-1.1 {save 1 in variable name foo} {expr 1+1} {2}
tcl>echo [test foo-1.1 {save 1 in variable name foo} {expr 1+1} {2}]
tcl>
How do I get it to output pass or fail?
You don't get any output from the test command itself (as long as the test passes, as in the example: if it fails, the command prints a "contents of test case" / "actual result" / "expected result" summary; see also the remark on configuration below). The test statistics are saved internally: you can use the cleanupTests command to print the Total/Passed/Skipped/Failed numbers (that command also resets the counters and does some cleanup).
(When you run runAllTests, it runs test files in child processes, intercepting the output from each file's cleanupTests and adding them up to a grand total.)
The internal statistics collected during testing are available in (AFAICT undocumented) namespace variables like ::tcltest::numTests. If you want to work with the statistics yourself, you can access them before calling cleanupTests, e.g.
parray ::tcltest::numTests
array set myTestData [array get ::tcltest::numTests]
set passed $::tcltest::numTests(Passed)
Look at the source for tcltest in your library to see what variables are available.
The amount of output from the test command is configurable, and you can get output even when the test passes if you add p / pass to the -verbose option. This option can also let you have less output on failure, etc.
You can also create a command called ::tcltest::ReportToMaster which, if it exists, will be called by cleanupTests with the pertinent data as arguments. Doing so seems to suppress both output of statistics and at least most resetting and cleanup. (I didn't go very far in investigating that method.) Be aware that messing about with this is more likely to create trouble than solve problems, but if you are writing your own testing software based on tcltest you might still want to look at it.
Oh, and please use the newer syntax for the test command. It's more verbose, but you'll thank yourself later on if you get started with it.
Obligatory-but-fairly-useless (in this case) documentation link: tcltest

ASE isql output to file is occasionally empty or blank

Given this Unix script, which is a scheduled batch run:
isql -U$USR -S$SRVR -P$PWD -w2000 < $SCRIPTS/sample_report.sql > $TEMP_DIR/sample_report.tmp_1
sed 's/-\{3,\}//g' $TEMP_DIR/sample_report.tmp_1 > $TEMP_DIR/sample_report.htm_1
uuencode $TEMP_DIR/sample_report.htm_1 sample_report.xls > $TEMP_DIR/sample_report.mail_1
mailx -s "Daily Sample Report" email@example.com < $TEMP_DIR/sample_report.mail_1
There are occasional cases where the sample_report.xls attached to the mail is empty: zero lines.
I have ruled out the following:
Not a command processing timeout - by adding -t30 to isql, I get the xls and it contains the error, not empty.
Not a SQL error - by forcing an error in the SQL, I get the xls and it contains the error, not empty.
Not sure about a login timeout - by adding -l1 it does not time out, but I can't specify a value lower than 1 second, so I can't say.
I cannot reproduce this, as I do not know the cause. Has anyone else experienced this, or have a way to address it? Any suggestions on how to find the cause? Is it Unix or Sybase isql?
I found the cause. This job is scheduled, and this particular report takes a long time to generate. Other scheduled scripts, I found, have this line of code:
rm -f $TEMP_DIR/*
If this long-running report overlaps with one of the scheduled scripts containing the line above, the .tmp_1 file can be deleted mid-run, hence blank by the time it is mailed. I replicated this by manually deleting the .tmp_1 file while the report was still writing SQL output into it.
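One way to avoid the race is to give each run its own unique working directory, so a blanket rm -f $TEMP_DIR/* from another job can't touch its files. A minimal sketch in Python, assuming the report pipeline can be pointed at an arbitrary directory (the wrapper script name is hypothetical):
import shutil
import subprocess
import tempfile

# A unique per-run directory keeps this report's temp files out of the
# shared $TEMP_DIR that other scheduled scripts wipe with rm -f.
workdir = tempfile.mkdtemp(prefix="sample_report.")
try:
    # Hypothetical wrapper around the isql/sed/uuencode/mailx pipeline,
    # rewritten to take its working directory as an argument.
    subprocess.run(["/usr/local/bin/run_sample_report.sh", workdir], check=True)
finally:
    shutil.rmtree(workdir)  # remove only this run's files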