I'm getting some strange errors in /var/log/exim_mainlog when someone tries to send an email. I can't solve this on my own, so I'll try here.
2012-10-29 00:35:54 DBD::SQLite::db prepare failed: database is locked at /etc/exim_greylist_sqlite.pl line 1013, <HAN1> line 66.
2012-10-29 00:35:54 H=valid_hostname [valid_ip]:5555 F=<mail@example.com> temporarily rejected RCPT <mail@example.com>: failed to expand ACL string "${perl{greylist}}": Can't call method "execute" on an undefined value at /etc/exim_greylist_sqlite.pl line 1014, <HAN1> line 66.
2012-10-29 00:35:54 SMTP connection from valid_hostname [valid_ip]:5555 closed by QUIT
Some lines from /etc/exim_greylist_sqlite.pl:
1012 my $query = "select strftime('%s', block_expires, 'utc')-strftime('%s','now') from relaytofrom where rcpt_to='$lp' and mail_from='$sender_addr'";
1013 $sth = $isp->prepare($query) || print FILE "$query\n";
1014 $sth->execute || print FILE "$query\n";
1015 my @status_array = $sth->fetchrow_array;
1016 $sth->finish;
I don't even know where to start to solve this. I have searched the cPanel forums and tried Google in multiple ways, but with no result :(
These seem applicable to your case:
Why does SQLite give a "database is locked" for a second query in a transaction when using Perl's DBD::SQLite?
How can I UPDATE rows returned by a SELECT in a loop?
It sounds like one process is in the middle of a select while some other process is doing something that is trying to update the data. Find out what else is accessing that SQLite database and has it locked. If I'm right, everything after the first line is just blowback from the root cause: table lock contention.
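If it helps, something like this will show which processes currently have the database file open and might be holding the lock (the path is only a guess; check exim_greylist_sqlite.pl for the real location):
fuser -v /var/spool/exim/greylist.db    # lists the PIDs that have the file open
lsof /var/spool/exim/greylist.db        # same information with a bit more detail per process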
Related
I was running into trouble today while running Airflow and airflow-dbt-python. I tried to debug a bit using the logs, and the error shown there was this one:
[2022-12-27, 13:53:53 CET] {functions.py:226} ERROR - [0m12:53:53.642186 [error] [MainThread]: Encountered an error:
Database Error
Expecting value: line 2 column 5 (char 5)
Quite a weird one.
Check the credentials file that allows dbt to run queries on your DB (in our case we run dbt with BigQuery); our credentials file was empty. We even tried running dbt directly on the worker instead of through Airflow, and got exactly the same error. Unfortunately this error is not very explicit.
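If you want a quick sanity check of the keyfile dbt is pointed at, something like this will catch an empty or malformed file (the path is just an example; use whatever your profiles.yml references):
KEYFILE=/path/to/bigquery-keyfile.json                    # example path; substitute your own
[ -s "$KEYFILE" ] || echo "keyfile is empty or missing"   # an empty file produces exactly this kind of JSON parse error
python3 -m json.tool "$KEYFILE" > /dev/null && echo "keyfile parses as valid JSON"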
I installed Oracle DB version 19c in my Docker environment and set up a database filled with dummy data. However, when I try to run a very large query I get the error:
SP2-0341: line overflow during variable substitution (>3000 characters at line 1).
I tried splitting it up with linebreaks but depending on how I split it I get all kinds of errors like:
ERROR at line 2: ORA-00933: SQL command not properly ended
or
ERROR at line 2:
SP2-0341: line overflow during variable substitution (>3000 characters at line 3)
The query is formatted as
SELECT AA.n_name AS AA_n_name, AA.n_nationkey AS ...
FROM nation AS AA FULL OUTER JOIN supplier...
WHERE (AC.p_partkey = ... AND...) OR((AC.p_partkey = ...)); -- The where part is over 5000 characters long--
Is there an alternative or a solution for tackling this on the command line? I tried running the query from a .sql file as well and hit a 4999 limit. I am on an Ubuntu server, if that helps, and any assistance would be appreciated.
It depends on the environment that you're working in, but generally you are able to continue a command onto the next line by ending the line with a backslash (\).
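For example, in a POSIX shell a long command line can be wrapped like this (the command and paths are only placeholders, and this is shell-level continuation rather than a change to how SQL*Plus buffers lines):
sqlplus -s scott/tiger@ORCLPDB1 \
        @/path/to/big_query.sql \
        > /tmp/report.out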
I have a problem with ksh in that a while loop is failing to obey the "while" condition. I should add now that this is ksh88 on my client's Solaris box. (That's a separate problem that can't be addressed in this forum. ;) I have seen Lance's question and some similar but none that I have found seem to address this. (Disclaimer: NO I haven't looked at every ksh question in this forum)
Here's a very cut down piece of code that replicates the problem:
#!/usr/bin/ksh
#
go=1
set -x
tail -0f loop-test.txt | while [[ $go -eq 1 ]]
do
    read lbuff
    set $lbuff
    nwords=$#
    printf "Line has %d words <%s>\n" $nwords "${lbuff}"
    if [[ "${lbuff}" = "0" ]]
    then
        printf "Line consists of %s; time to absquatulate\n" $lbuff
        go=0    # Violate the WHILE condition to get out of loop
    fi
done
printf "\nLooks like I've fallen out of the loop\n"
exit 0
The way I test this is:
Run loop-test.sh in background mode
In a different window I run commands like "echo some nonsense >>loop-test.txt" (w/o the quotes, of course)
When I wish to exit, I type "echo 0 >>loop-test.txt"
What happens? It indeed sets go=0 and displays the line:
Line consists of 0; time to absquatulate
but does not exit the loop. To break out I append one more line to the txt file. The loop does NOT process that line and just falls out of the loop, issuing that "fallen out" message before exiting.
What's going on with this? I don't want to use "break" because in the actual script, the loop is monitoring the log of a database engine and the flag is set when it sees messages that the engine is shutting down. The actual script must still process those final lines before exiting.
Open to ideas, anyone?
Thanks much!
-- J.
OK, that flopped pretty quick. After reading a few other posts, I found an answer given by dogbane that sidesteps my entire pipe-to-while scheme. His is the second answer to a question (from 2013) where I see neeraj is using the same scheme I'm using.
What was wrong? The pipe-to-while has always worked for input that will end, like a file or a command with a distinct end to its output. However, from a tail command, there is no distinct EOF. Hence, the while-in-a-subshell doesn't know when to terminate.
Dogbane's solution: Don't use a pipe. Applying his logic to my situation, the basic loop is:
while read line
do
# put loop body here
done < <(tail -0f ${logfile})
No subshell, no problem.
Caveat about that syntax: there must be a space between the two < operators; otherwise it looks like a here-document with bad syntax.
Er, one more catch: The syntax did not work in ksh, not even in the mksh (under cygwin) which emulates ksh93. But it did work in bash. So my boss is gonna have a good laugh at me, 'cause he knows I dislike bash.
So thanks MUCH, dogbane.
-- J
After articulating the problem and sleeping on it, the reason for the described behavior came to me: After setting go=0, the control flow of the loop still depends on another line of data coming in from STDIN via that pipe.
And now that I have realized the cause of the weirdness, I can speculate on an alternative way of reading from the stream. For the moment I am thinking of the following solution:
Open the input file as STDIN (Need to research the exec syntax for that)
When the condition occurs, close STDIN (Again, need to research the syntax for that)
It should then be safe to use the more intuitive "while read lbuff" at the top of the loop; a rough sketch follows.
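Something like this, untested, with the log file name just a placeholder:
exec 0< engine.log          # step 1: make the log file this script's STDIN
while read lbuff
do
    # loop body goes here, same as before
    if [[ "${lbuff}" = "0" ]]
    then
        exec 0<&-           # step 2: close STDIN; the next read fails, ending the loop
    fi
done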
I'll test this out today and post the result. I hope someone else can benefit from the method (if it works).
I have a Perl program that we have run successfully every day for almost the past 2 years, but which crashes today with the error message:
FATAL ERR: Can't do PRAGMA cache_size = 1000000: attempt to write a readonly database
The SQLite database in question is readonly, and always has been, and the code has always used PRAGMA cache_size = 1000000 on it immediately after opening its readonly connection.
Setting cache_size is not a write operation, and it does not fail if I access the db directly through the DBI, like this:
$dbh->do("PRAGMA cache_size = 1000000")
However, the code makes SqliteH::db a subclass of DBI::db, then calls this function from the subclass:
$self->SUPER::do("PRAGMA cache_size = 1000000")
and it now dies with "DBD::SQLite::db do failed: attempt to write a readonly database at /local/ifs_projects/prok/function/src/lib/SqliteH.pm line 329."
The code worked with CentOS 5, Perl 5.10.1, DBD::SQLite 1.29, and DBI 1.611.
It does not work CentOS 6, Perl 5.16, DBD::SQLite 1.39, and DBI 1.627.
I am however mystified that it /did/ work last week on CentOS 6 and Perl 5.16. IT may have upgraded DBD::SQLite or DBI over the weekend.
Please do not change the title to "Suddenly getting error on program that has worked for months" again. That is an unhelpful and nonspecific title.
TL;DR - if transactions are on, then any command attempts to write to the transaction log. Remove AutoCommit=>0 from the dbh connection flags if the database is read-only [You shouldn't have any ->begin_work() or INSERT/UPDATE calls either, but that never worked on a read-only db :-) ].
As it turns out, I had exactly the same problem today after updating SQLite, DBI and DBD::SQLite (so I don't know exactly which of them caused the problem), but in my case, on a select (which made it even more baffling).
It turned out that transactions were turned on in the original connect string:
my $dbh=DBI->connect('dbi:SQLite:file.db','','', {PrintError=>1,RaiseError=>1,AutoCommit=>0});
and, after tracing the code, I noticed that it was actually crashing trying to start a transaction.
DB<4> $dbh->trace(15)
DBI::db=HASH(0x18b9c38) trace level set to 0x0/15 (DBI # 0x0/0) in DBI 1.627-ithread (pid 15740)
DB<5> $sth= $dbh->prepare("SELECT key,value FROM annotation where accession=?")
...
DB<6> $sth->execute('D3FET3')
-> execute for DBD::SQLite::st (DBI::st=HASH(0x18ba340)~0x18ba178 'D3FET3') thr#10cd010
sqlite trace: bind into 0x18ba268: 1 => D3FET3 (0) pos 0 at dbdimp.c line 1232
sqlite trace: executing SELECT key,value FROM annotation where accession=? at dbdimp.c line 660
sqlite trace: bind 0 type 3 as D3FET3 at dbdimp.c line 677
sqlite trace: BEGIN TRAN at dbdimp.c line 774
sqlite error 8 recorded: attempt to write a readonly database at dbdimp.c line 79
!! ERROR: '8' 'attempt to write a readonly database' (err#1)
<- execute= ( undef ) [1 items] at (eval 15)[/usr/local/packages/perl-5.16.1/lib/5.16.1/perl5db.pl:646] line 2 via at -e line 1
DBD::SQLite::st execute failed: attempt to write a readonly database at (eval 15)[/usr/local/packages/perl-5.16.1/lib/5.16.1/perl5db.pl:646] line 2.
...
Removing the AutoCommit=>0 flag in the connect() call fixed my problem.
Given this Unix script, which is run as a scheduled batch job:
isql -U$USR -S$SRVR -P$PWD -w2000 < $SCRIPTS/sample_report.sql > $TEMP_DIR/sample_report.tmp_1
sed 's/-\{3,\}//g' $TEMP_DIR/sample_report.tmp_1 > $TEMP_DIR/sample_report.htm_1
uuencode $TEMP_DIR/sample_report.htm_1 sample_report.xls > $TEMP_DIR/sample_report.mail_1
mailx -s "Daily Sample Report" email@example.com < $TEMP_DIR/sample_report.mail_1
There are occasional cases where the sample_report.xls attached to the mail is empty: zero lines.
I have ruled out the following:
It is not a command-processing timeout - by adding -t30 to isql, I get the .xls and it contains the error message, not an empty file.
It is not a SQL error - by forcing an error in the SQL, I get the .xls and it contains the error message, not an empty file.
I am not sure about a login timeout - by adding -l1 it does not time out, but I can't specify a number lower than 1 second, so I can't say.
I cannot reproduce this, as I do not know the cause. Has anyone else experienced this, or does anyone have a way to address it? Any suggestions on how to find the cause? Is it Unix or the Sybase isql?
I found the cause. This report is scheduled and takes a long time to generate. Other scheduled scripts, I found, have this line of code:
rm -f $TEMP_DIR/*
If this long-running report overlaps with one of the scheduled scripts containing the line above, the .tmp_1 file can be deleted, and is therefore blank by the time it is mailed. I replicated this by manually deleting the .tmp_1 while the report was still writing the SQL output into it.
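One way to avoid the collision is to give each run its own private work area instead of writing into the shared $TEMP_DIR (variable names below follow the original script; mktemp is assumed to be available on the box):
WORK_DIR=$(mktemp -d /tmp/sample_report.XXXXXX)    # per-run directory that other jobs never touch
isql -U$USR -S$SRVR -P$PWD -w2000 < $SCRIPTS/sample_report.sql > $WORK_DIR/sample_report.tmp_1
sed 's/-\{3,\}//g' $WORK_DIR/sample_report.tmp_1 > $WORK_DIR/sample_report.htm_1
uuencode $WORK_DIR/sample_report.htm_1 sample_report.xls > $WORK_DIR/sample_report.mail_1
mailx -s "Daily Sample Report" email@example.com < $WORK_DIR/sample_report.mail_1
rm -rf "$WORK_DIR"    # clean up only this run's files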