PostgreSQL job scheduler using pgAgent giving error with update query - sql

I have created a job scheduler using pgAgent in PostgreSQL.
What I did is shown in the screenshots below.
I set it up like this to update a name field in my database at a certain time, but when I check it, the job is failing.
The failed status is as follows:
What did I do wrong? How can I correct it?

I faced exactly the same problem. By trial and error, I changed the Connection Type from Local to Remote and used the following connection string
user=some_user password=some_password host=localhost port=5432 dbname=some_database
in the properties of the Step, and it worked. So the trick is to treat even the local server as a remote server.
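For reference, a minimal sketch of what the step body might contain once the Remote connection type and the string above are set; the table, column, and values here are placeholders, not taken from the original question:
-- hypothetical step body: the scheduled update run against some_database
UPDATE some_table
SET name = 'new_value'   -- the column the question wants updated on a schedule
WHERE id = 1;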

How to increase the timeout value for DataGrip's connection to Google BigQuery?

I currently connect JetBrains' DataGrip IDE to Google BigQuery to run my queries. However, I get the following error: [Simba][BigQueryJDBCDriver](100034) The job has timed out on the server. Try increasing the timeout value. This of course happens when I run a query that may take some time to execute.
I can execute queries that take a short amount of time to complete, so the connection does work.
I looked at this question (SQL Workbench/J and BigQuery), but I still did not fully understand how to change the timeout value.
The error is seen below in this screenshot:
This works well also:
Datasource Properties | Advanced | Timeout : 3600
Please open the data source properties and add this to the very end of the connection URL: ;Timeout=3600; (note that it is case sensitive). Try increasing the value until the error is gone.
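For illustration only, the end of a Simba BigQuery JDBC URL with the parameter appended might look like the line below; the project ID, OAuth settings, and the rest of the URL are placeholders rather than values from the question:
jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;ProjectId=my-project;OAuthType=1;Timeout=3600;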

Read access violation related to input(variable,anydtdtm.);

Somebody tell me I'm not crazy. I have SAS on a server, and I'm running the following code:
data wtf;
a=".123456 1 1";
b=input(a,anydtdtm.);
run;
If I run this on my local computer, no problem. If I run this on the server, I get:
ERROR: An exception has been encountered.
Please contact technical support and provide them with the following traceback information:
The SAS task name is [DATASTEP]
ERROR: Read Access Violation DATASTEP
Exception occurred at (04E0AB8C)
Task Traceback
Address Frame (DBGHELP API Version 4.0 rev 5)
0000000004E0AB8C 0000000009C4EC20 sasxdtu:tkvercn1+0x9B4C
0000000004E030D9 0000000009C4F100 sasxdtu:tkvercn1+0x2099
0000000005FF14BE 0000000009C4F108 uwianydt:tkvercn1+0x47E
0000000002438026 0000000009C4F178 tkmk:tkBoot+0x162E6
Does anyone else get this error???
This is an internal bug that cannot be resolved by the user. You'll need to send this information, your environment description, and the exact steps to recreate the bug over to SAS Technical Support to open up an investigation and determine a workaround.
If your server is a database rather than a library of .sas7bdat files, the problem may be the SAS/ACCESS engine attempting to translate the function into something the server's language can understand but failing to do so properly; that is, it may think it is translating correctly, but it is not. There are special cases where this can occur, and you may have discovered one.
If you are in fact querying some other database, try adding this before running the data step:
options sastrace=',,,d' sastraceloc=saslog;
This will show all of the steps as SAS sends data & functions to and from the server, and may help give some insight.
I am getting the same error on a Linux system running SAS 9.4:
AUTOMATIC SYSSCP LIN X64
AUTOMATIC SYSSCPL Linux
AUTOMATIC SYSVER 9.4
AUTOMATIC SYSVLONG 9.04.01M3P062415
AUTOMATIC SYSVLONG4 9.04.01M3P06242015
Until SAS fixes the informat, you probably need to add additional checks in your code to exclude strange values like that, for example along the lines of the sketch below.
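A minimal sketch of such a guard, assuming you only need to skip strings that begin with a period (the variable name is the one from the question; the rule itself is hypothetical):
data wtf_safe;
    a = ".123456 1 1";
    /* hypothetical guard: do not pass period-leading strings to the ANYDTDTM. informat */
    if substr(strip(a), 1, 1) = '.' then b = .;
    else b = input(a, anydtdtm.);
    format b datetime20.;
run;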

Adding a new parameter throws an error in Pentaho Report Designer

I have designed a report which works well. I am trying to add another SQL query to it. Previewing the SQL query works without any hiccups.
But when I try to add the parameter for the same query, I get two different errors depending on which mysql-connector I am using.
Earlier I was using mysql-connector-java-5.0.8-bin and the error was:
org.pentaho.reporting.engine.classic.core.ReportDataFactoryException: Failed at query: (a few lines down)
Caused by: java.sql.SQLException: Stopped by user.
Then I changed the connector to mysql-connector-java-5.1.36-bin. The error changed to:
org.pentaho.reporting.engine.classic.core.ReportDataFactoryException: Failed at query:
Caused by: com.mysql.jdbc.exceptions.MySQLTimeoutException: Statement cancelled due to timeout or client request
Any suggestions would be helpful. I am using Pentaho 5.3.0.0-213 on Windows 8.1 Pro, although the same problem exists when I run it on Ubuntu 14.04.
Thanks

Error: Record cannot be modified right now. This cron task is currently being executed and may not be modified, please try again in a few minutes

While configuring the incoming mail server in OpenERP 7, I get the following error:
Error: Record cannot be modified right now .This cron task is
currently being executed and may not be modified, please try again in
a few minutes.
If the job keeps running, you won't get a chance to change the configuration of the cron job. I ran into the same issue and found a way to solve it.
There is a DB lock on that row.
If you run the following SQL query to check current processes:
select * from pg_stat_activity where query like '%ir_cron%';
you will see a query like this (in the query field of the result):
select * from ir_cron where id = 100 for update nowait;
Get the pid from the query result and terminate it with pg_terminate_backend. The lock will come back soon, so it's better to do the terminating and the updating in one query, such as:
update ir_cron set active = false where PG_TERMINATE_BACKEND(57078) and id = 100;
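If you prefer not to copy the pid by hand, here is a sketch that looks the blocking backend up and terminates it in the same statement; it assumes PostgreSQL 9.2+ column names (pid, query) in pg_stat_activity and reuses the hypothetical cron id 100 from above:
update ir_cron
set active = false
where id = 100
  and pg_terminate_backend((
        select pid
        from pg_stat_activity
        where query like '%ir_cron%for update nowait%'
          and pid <> pg_backend_pid()   -- exclude this session, whose own query text also mentions ir_cron
        limit 1
      ));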
I understand that the original asker may not be interested anymore, but for the sake of others:
I faced the same error while updating a module under development.
The cron job related to my module had to be manually deleted from the scheduler first:
Settings -> Scheduler -> Scheduler Actions
Delete the cron job you were trying to modify, and then update the module again.
First, set the scheduler for fetching mail to inactive. Its interval is 5 minutes, so make it inactive, then edit the incoming mail server.
I had a similar issue that kept me from upgrading a module. I solved it by stopping the Odoo server, restarting PostgreSQL, and then starting Odoo again. This gave me time to both mark the cron job as inactive and upgrade the module:
sudo service odoo-server stop
sudo service postgresql restart
sudo service odoo-server start

IBM DB2 - Can't Set Schema

I am trying to use the SET SCHEMA command. However, it does not appear to be working; I get an error message. I am able to use the schema if I qualify tables as Schema.Tablename, but this is tedious. I am connected to the database, and all the schema properties appear in my schemas folder.
The error message is below:
------------------------------ Commands Entered ------------------------------
SET SCHEMA RSBALANCE;
------------------------------------------------------------------------------
SET SCHEMA RSBALANCE
DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL0805N Package "NULLID.SQLC2H20 0X41414141415A425A" was not found.
SQLSTATE=51002
SQL0805N Package "NULLID.SQLC2H20 0X41414141415A425A
The syntax for DB2 is (Info Center link):
SET SCHEMA = 'YOUR_SCHEMA'
If you're using the Command Line Processor (which it appears you are, judging by the error message), you have to use double quotes (it does matter!):
SET SCHEMA = "YOUR_SCHEMA"
Information Center has documentation on the SQL0805N error.
This is the relevant course of action:
If the DB2 utility programs need to be rebound to the database, the
database administrator can accomplish this by issuing one of the
following CLP commands from the bnd subdirectory of the instance, while
connected to the database:
For the DB2 utilities:
db2 bind #db2ubind.lst blocking all grant public
For CLI:
db2 bind #db2cli.lst blocking all grant public
It turns out that my machine was missing an update from IBM. Installing it allowed the command from bhamby to work properly.
Thank you all for your input.