5001 "Conversion Failed" when trying to compress PDF using ConvertAPI - pdf

I am trying to compress my PDF using ConvertAPI.
It works fine whenever the file name contains no "_" or "-", but as soon as I put in an "_" the conversion fails most of the time with
error code: 5001 and
error "Conversion Failed"
The file is 4.4 MB in size.
Update
I am facing this error even when there are no special characters in my file name; it seems to happen randomly, with no apparent reason.
Stack trace:
at ConvertApiDotNet.ConvertApi.<>c__DisplayClass6_0.<ConvertAsync>b__1(Task`1 t)
at System.Threading.Tasks.ContinuationResultTaskFromResultTask`2.InnerInvoke()
at System.Threading.Tasks.Task.Execute()
--- End of stack trace from previous location ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at ConvertApiDotNet.ConvertApi.<ConvertAsync>d__6.MoveNext()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at ConvertApiDotNet.ConvertApi.<ConvertAsync>d__4.MoveNext()
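For context, the frames above come from the library's ConvertAsync method. A minimal sketch of the kind of call that produces such a trace, based on ConvertApiDotNet's documented usage rather than the asker's actual code; the secret and paths are placeholders:

var convertApi = new ConvertApi("your-api-secret");             // placeholder secret
var result = await convertApi.ConvertAsync("pdf", "compress",
    new ConvertApiFileParam("File", @"C:\docs\my_report.pdf")); // hypothetical path
await result.SaveFilesAsync(@"C:\docs\out");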

Related

How to decipher Erlang runtime error message

When I run an Erlang function using IntelliJ's "Run Configuration", I get the following error message. It contains a lot of nested brackets; please help me understand it.
"C:\Program Files\Erlang OTP\bin\erl.exe" -pa F:/1TB/P/workspace-IntelliJ-Erlang2/netconfClient/out/production/netconfClient -pa F:/1TB/P/workspace-IntelliJ-Erlang2/netconfClient -eval netconfManager:open2(). -s init stop -noshell
init terminating in do_boot ({badarg,[{ets,select,[ct_attributes,[_]],[{_}]},{ct_config,get_key_from_name,1,[{_},{_}]},{ct_util,does_connection_exist,3,[{_},{_}]},{ct_gen_conn,do_start,4,[{_},{_}]},{ct_netconfc,open,4,[{_},{_}]},{erl_eval,do_apply,7,[{_},{_}]},{init,start_it,1,[{_},{_}]},{init,start_em,1,[{_},{_}]}]})
Crash dump is being written to: erl_crash.dump...{"init terminating in do_boot",{badarg,[{ets,select,[ct_attributes,[{{ct_conf,'$1','_','_','_',undefined,'_'},[],['$1']}]],[{error_info,#{cause=>id,module=>erl_stdlib_errors}}]},{ct_config,get_key_from_name,1,[{file,"ct_config.erl"},{line,578}]},{ct_util,does_connection_exist,3,[{file,"ct_util.erl"},{line,577}]},{ct_gen_conn,do_start,4,[{file,"ct_gen_conn.erl"},{line,281}]},{ct_netconfc,open,4,[{file,"ct_netconfc.erl"},{line,424}]},{erl_eval,do_apply,7,[{file,"erl_eval.erl"},{line,744}]},{init,start_it,1,[{file,"init.erl"},{line,1234}]},{init,start_em,1,[{file,"init.erl"},{line,1220}]}]}}
done
To reproduce: right-click on a function in a .erl file and click on "Run …".
The error message consists of an error code and a stack trace.
The error code here is badarg. Refer to Exit Reasons in the Erlang documentation for the list of error codes.
The stack trace contains one entry per function call. Each entry gives the module, the function, its arity, and (in the full form) the source file and line number. For example,
{init,start_em,1,[{file,"init.erl"},{line,1220}]} indicates the
function start_em of arity 1 in module init, at line 1220 of init.erl.
After manual indentation, the stack trace is easier to read. The first block below is the abbreviated form printed to the console; the second is the full form written to the crash dump:
{badarg,[
{ets,select,[ct_attributes,[_]],[{_}]},
{ct_config,get_key_from_name,1,[{_},{_}]},
{ct_util,does_connection_exist,3,[{_},{_}]},
{ct_gen_conn,do_start,4,[{_},{_}]},
{ct_netconfc,open,4,[{_},{_}]},
{erl_eval,do_apply,7,[{_},{_}]},
{init,start_it,1,[{_},{_}]},
{init,start_em,1,[{_},{_}]}
]}
{badarg,[
{ets,select,[ct_attributes,[{{ct_conf,'$1','_','_','_',undefined,'_'},[],['$1']}]],[{error_info,#{cause=>id,module=>erl_stdlib_errors}}]},
{ct_config,get_key_from_name,1,[{file,"ct_config.erl"},{line,578}]},
{ct_util,does_connection_exist,3,[{file,"ct_util.erl"},{line,577}]},
{ct_gen_conn,do_start,4,[{file,"ct_gen_conn.erl"},{line,281}]},
{ct_netconfc,open,4,[{file,"ct_netconfc.erl"},{line,424}]},
{erl_eval,do_apply,7,[{file,"erl_eval.erl"},{line,744}]},
{init,start_it,1,[{file,"init.erl"},{line,1234}]},
{init,start_em,1,[{file,"init.erl"},{line,1220}]}
]}
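To inspect such a stack trace programmatically instead of reading it out of a crash dump, Erlang/OTP 21+ lets you bind it in a try/catch. A sketch, not from the original answer; the ct_attributes table only exists inside a Common Test run, so outside one this call simply reproduces a badarg:

%% Catch the badarg and print the reason and stack trace directly.
try ets:select(ct_attributes, [])
catch
    error:Reason:Stacktrace ->
        io:format("reason: ~p~nstack trace: ~p~n", [Reason, Stacktrace])
end.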

Can I get just the raised error from a subprocess?

I am using this to execute a command:
import subprocess

try:
    subprocess.check_output(COMMAND)
except subprocess.CalledProcessError as e:
    print(e.output)
Inside COMMAND an exception is raised, after a lot of standard output. What gets returned in e.output is in fact ALL the output plus the thrown error.
Is there a method to return only the raised error?
Inside COMMAND, the message is raised like this:
try:
    for i in range(3):
        print("i is {0}.".format(i))
        x = 1/0
        print("x is {0}".format(x))
except Exception as e:
    print("Message")
    raise
I only want the "Message" returned.
Thanks in advance.
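One way to get just the error, not taken from the original post: keep the two streams separate and have the child write its message to stderr (print("Message", file=sys.stderr)) instead of stdout. A minimal sketch, assuming Python 3.7+ and that COMMAND runs the child script above:

import subprocess

result = subprocess.run(COMMAND, capture_output=True, text=True)
if result.returncode != 0:
    # result.stdout holds the loop's prints; result.stderr holds whatever
    # the child wrote to stderr plus the traceback of the re-raised error.
    print(result.stderr)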

Camel: How to split and then aggregate when number of item is less than batch size

I have a Camel route that reads a file from S3 and then processes the input file as follows:
Parse each row into a POJO (Student) using Bindy
Split the output by body()
Aggregate by an attribute of the body (.semester) and a batch size of 2
Invoke the persistence service to upload to DB in given batches
The problem is that with a batch size of 2 and an odd number of records, there is always one record that does not get saved.
The code provided is Kotlin, but it should not be very different from the equivalent Java code (bar the backslash in front of "\${simple expression}" and the lack of semicolons to terminate statements).
If I set the batch size to 1 then every record is saved, otherwise the last record never gets saved.
I have checked the message-processor documentation a few times, but it doesn't seem to cover this particular scenario.
I have also set completionTimeout and completionInterval in addition to completionSize, but it makes no difference.
Has anyone encountered this problem before?
val csvDataFormat = BindyCsvDataFormat(Student::class.java)

from("aws-s3://student-12-bucket?amazonS3Client=#amazonS3&delay=5000")
    .log("A new Student input file has been received in S3: '\${header.CamelAwsS3BucketName}/\${header.CamelAwsS3Key}'")
    .to("direct:move-input-s3-object-to-in-progress")
    .to("direct:process-s3-file")
    .to("direct:move-input-s3-object-to-completed")
    .end()

from("direct:process-s3-file")
    .unmarshal(csvDataFormat)
    .split(body())
        .streaming()
        .parallelProcessing()
        .aggregate(simple("\${body.semester}"), GroupedBodyAggregationStrategy())
            .completionSize(2)
            .bean(persistenceService)
    .end()
With an input CSV file including seven (7) records, this is the output generated (with some added debug logging):
WARN 19540 --- [student-12-move] c.a.s.s.internal.S3AbortableInputStream : Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior. Request only the bytes you need via a ranged GET or drain the input stream after use.
INFO 19540 --- [student-12-move] student-workflow-main : A new Student input file has been received in S3: 'student-12-bucket/inbox/foo.csv'
INFO 19540 --- [student-12-move] move-input-s3-object-to-in-progress : Moving S3 file 'inbox/foo.csv' to 'in-progress' folder...
INFO 19540 --- [student-12-move] student-workflow-main : Moved input S3 file 'in-progress/foo.csv' to 'in-progress' folder...
INFO 19540 --- [student-12-move] pre-process-s3-file-records : Start saving to database...
DEBUG 19540 --- [read #7 - Split] c.b.i.d.s.StudentPersistenceServiceImpl : Saving record to database: Student(id=7, name=Student 7, semester=2nd, javaMarks=25)
DEBUG 19540 --- [read #7 - Split] c.b.i.d.s.StudentPersistenceServiceImpl : Saving record to database: Student(id=5, name=Student 5, semester=2nd, javaMarks=81)
DEBUG 19540 --- [read #3 - Split] c.b.i.d.s.StudentPersistenceServiceImpl : Saving record to database: Student(id=6, name=Student 6, semester=1st, javaMarks=15)
DEBUG 19540 --- [read #3 - Split] c.b.i.d.s.StudentPersistenceServiceImpl : Saving record to database: Student(id=2, name=Student 2, semester=1st, javaMarks=62)
DEBUG 19540 --- [read #2 - Split] c.b.i.d.s.StudentPersistenceServiceImpl : Saving record to database: Student(id=3, name=Student 3, semester=2nd, javaMarks=72)
DEBUG 19540 --- [read #2 - Split] c.b.i.d.s.StudentPersistenceServiceImpl : Saving record to database: Student(id=1, name=Student 1, semester=2nd, javaMarks=87)
INFO 19540 --- [student-12-move] device-group-workflow-main : End pre-processing S3 CSV file records...
INFO 19540 --- [student-12-move] move-input-s3-object-to-completed : Moving S3 file 'in-progress/foo.csv' to 'completed' folder...
INFO 19540 --- [student-12-move] device-group-workflow-main : Moved S3 file 'in-progress/foo.csv' to 'completed' folder...
If you need to immediately complete your message, then you can specify a completion predicate which is based on the exchange properties set by the splitter. I've not tried this, but I think
.completionPredicate( simple( "${exchangeProperty.CamelSplitComplete}" ) )
would process the last message.
My other concern is that you've set parallelProcessing on your splitter, which may mean that the messages aren't processed in order. Is it really the splitter you want the parallel processing applied to, or actually the aggregator? You don't seem to do anything with the split records except aggregate them and then process them, so it might be better to move the parallelProcessing instruction to the aggregator.
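A sketch of how the processing route might look with both changes (untested, per the caveat above; note the Kotlin \$ escaping):

from("direct:process-s3-file")
    .unmarshal(csvDataFormat)
    .split(body())
        .streaming()
        .aggregate(simple("\${body.semester}"), GroupedBodyAggregationStrategy())
            .completionSize(2)
            // also fire when the splitter flags the last split exchange
            .completionPredicate(simple("\${exchangeProperty.CamelSplitComplete}"))
            .parallelProcessing() // moved here from the splitter
            .bean(persistenceService)
    .end()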

What is the meaning of pig exit codes?

Pig exits with exit code 7 after printing these three lines:
2014-07-16 21:57:37,271 [main] INFO org.apache.pig.Main - Apache Pig version 0.11.0-cdh4.6.0 (rexported) compiled Feb 26 2014, 03:01:22
2014-07-16 21:57:37,272 [main] INFO org.apache.pig.Main - Logging error messages to: ..../pig_1405562257268.log
2014-07-16 21:57:37,627 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file /home/sam/.pigbootup not found
What does this mean?
The INFO messages are normal
The only unusual bit is the exit code (7, see above)
The pig_*.log file does not exist
Is this documented somewhere?
EDIT: the problem was eliminated when I removed the semicolon from the end of the %declare line.
go figure...
You may take a look at the return codes in the source code.
The book Programming Pig also contains a list of their meanings in chapter two.
I copy them here for reference:
0: Success
1: Retriable failure
2: Failure
3: Partial failure; used with multiquery, see "Nonlinear Data Flows"
4: Illegal arguments passed to Pig
5: IOException thrown; would usually be thrown by a UDF
6: PigException thrown; usually means a Python UDF raised an exception
7: ParseException thrown (can happen after parsing if variable substitution is being done)
8: Throwable thrown (an unexpected exception)
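If you invoke Pig from a wrapper, the exit code can be mapped back to these meanings. A sketch in Python; the script name is hypothetical:

import subprocess

# Exit-code meanings from the table above (Programming Pig, chapter two).
PIG_EXIT_CODES = {
    0: "success", 1: "retriable failure", 2: "failure",
    3: "partial failure", 4: "illegal arguments", 5: "IOException",
    6: "PigException", 7: "ParseException", 8: "Throwable",
}

rc = subprocess.call(["pig", "myscript.pig"])  # hypothetical script name
print(rc, PIG_EXIT_CODES.get(rc, "unknown"))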

How to catch the error on screen into variable in TCL using catch

To grab the error shown on the terminal through catch, e.g.:
puts $c
# error on terminal: can't read "c": no such variable
catch {puts $c} err
puts $err  ;# value of err: 1
Is there any way to catch the actual error message in Tcl, apart from the signal in the variable err?
Yes. Read the ::errorInfo or ::errorCode global variables to get the stack trace and a machine-parsable three-element "POSIX error" list, respectively.
Since Tcl 8.5, it is also possible to pass the name of a second variable to catch, after the variable that receives the result; it will be populated with a dictionary containing much of what the "classic" error variables described above provide, and more.
This is all explained in the catch manual page.
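A minimal sketch of the Tcl 8.5+ form; the failing command is just an example:

catch {puts $c} msg opts
puts $msg                          ;# can't read "c": no such variable
puts [dict get $opts -errorinfo]   ;# stack trace, as in $::errorInfo
puts [dict get $opts -errorcode]   ;# machine-parsable, as in $::errorCode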