I'm trying to insert a flowfile from NiFi into a PostgreSQL database. The flowfile is a JSON whose keys are: id, timestamp, metric1, metric2, ..., and I'm having problems with the timestamp data.
This is the SQL INSERT statement:
INSERT INTO egillor.metrics (id, consumo_red, generacion_pv, timestamp) VALUES (?, ?, ?, ?)
And this is the error:
Unable to execute SQL select query INSERT INTO egillor.metrics (id, consumo_red, generacion_pv, timestamp)
VALUES (?, ?, ?, ?) for StandardFlowFileRecord ... due to
java.sql.SQLDataException: The value of the sql.args.4.value is '2021-05-22T00:00:00+02:00', which cannot
be converted to a timestamp; routing to failure:
In the middle of the log, I found the following clue, but I don't know how to fix it:
Caused by: java.text.ParseException: Unparseable date: "2021-05-22T00:00:00+02:00"
So, I have to deal with java.text.ParseException from inside a PutDatabaseRecord processor...
Has anyone had this problem in the past? Does anyone know how I can fix it?
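The ParseException is thrown by NiFi's record reader before anything reaches PostgreSQL, so the database is not rejecting the value; one likely fix is to set the reader's Timestamp Format property to a pattern that includes the offset, such as yyyy-MM-dd'T'HH:mm:ssXXX (an assumption about your reader configuration, not something visible in the log). PostgreSQL itself accepts the literal directly, which a quick sketch can verify (the metric values below are made up):

-- PostgreSQL parses ISO 8601 timestamps with offsets natively:
SELECT '2021-05-22T00:00:00+02:00'::timestamptz;

-- Illustrative insert, assuming numeric metric columns and a timestamptz column:
INSERT INTO egillor.metrics (id, consumo_red, generacion_pv, timestamp)
VALUES (1, 0.0, 0.0, '2021-05-22T00:00:00+02:00'::timestamptz);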
Related
I am running a script to populate a table on an SQL Server 2017 database using SQLAlchemy. The script ran flawlessly a few weeks ago.
Essentially, the server does not seem to be populating default values on non-nullable fields.
For example, when I run the following statement (as rendered by SQLA):
INSERT INTO concept
(
retired,
short_name,
description,
form_text,
datatype_id,
class_id,
is_set,
creator,
date_created,
version,
changed_by,
date_changed,
retired_by,
date_retired,
retire_reason,
uuid,
note_regex
)
VALUES
(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)

[parameters: (0, 'ck', 'creatine phosphokinase', None, 14, 49, 0, 0,
None, None, None, None, None, None, None,
'cf6443ff-f2a1-49ab-96e3-c5d6fac362ed', None)]
I receive the error:
Cannot insert the value NULL into column 'date_created',
table 'myositis_longetudinal_cohort.dbo.concept'; column does not allow
nulls. INSERT fails. (515)
This is confusing to me, since the date_created field has a default value of getdate().
Here is the constraint statement:
ALTER TABLE [dbo].[concept]
ADD CONSTRAINT [DF__concept__date_cr__62AFA012] DEFAULT (Getdate()) FOR
[date_created]
I am new to SQL Server and am not sure what I may be missing. The server was updated on 11/15/18, but I did not see anything in the update that could explain the change.
Thanks in advance for any help!
The problem is that your INSERT explicitly lists date_created and binds NULL to it, so the default is never used:
INSERT INTO concept
(
retired,
short_name,
description,
form_text,
datatype_id,
class_id,
is_set,
creator,
date_created, <--SPECIFYING HERE MEANS DEFAULT WON'T BE USED
version,
changed_by,
date_changed,
retired_by,
date_retired,
retire_reason,
uuid,
note_regex
)
And the parameters:
[parameters: (
0, <-- retired,
'ck', <-- short_name,
'creatine phosphokinase', <-- description,
None, <-- form_text,
14, <-- datatype_id,
49, <-- class_id,
0, <-- is_set,
0, <-- creator,
None, <-- date_created (provided with NULL)
None,
None,
None,
None,
None,
None,
'cf6443ff-f2a1-49ab-96e3-c5d6fac362ed',
None
)]
The same problem will occur for any other column in your table that combines a non-NULL DEFAULT with a NOT NULL constraint.
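To let the default fire, either leave the column out of the INSERT entirely, or pass the DEFAULT keyword in place of a parameter. A minimal sketch (the other NOT NULL columns from your table are abbreviated away):

-- Option 1: omit date_created so the server applies getdate()
INSERT INTO concept (retired, short_name, uuid)
VALUES (0, 'ck', 'cf6443ff-f2a1-49ab-96e3-c5d6fac362ed');

-- Option 2: request the default explicitly with the DEFAULT keyword
INSERT INTO concept (retired, short_name, date_created, uuid)
VALUES (0, 'ck', DEFAULT, 'cf6443ff-f2a1-49ab-96e3-c5d6fac362ed');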
As for why it has suddenly started happening, some possibilities:
SQLAlchemy never used to generate an insert that specified this column
The NOT NULL constraint was added
A trigger supplying values in the case of a null being passed has been removed, re-coded or disabled
Thanks to everyone.
It turns out that it was an SQLAlchemy issue.
I had to add 'autoload': True to my __table_args__ instead of manually defining each Column. Apparently, if you do not allow SQLA to introspect the table, it will pass NULL as a value instead of skipping the column when generating the INSERT statement, as suggested by @larnu and others.
I'm mediocre with Perl and new to SQL, so excuse any lameness.
I've got a Perl script I'm working on that interacts with a database to keep track of users on IRC. From tutorials I've found, I've been able to create the db, create a table within, INSERT a new record, UPDATE a field in that record, and SELECT/find records based on a field.
The problem is, I can only figure out how to UPDATE one field at a time and that seems inefficient.
The code to create the table:
my $sql = <<'END_SQL';
CREATE TABLE seenDB (
id INTEGER PRIMARY KEY,
date VARCHAR(10),
time VARCHAR(8),
nick VARCHAR(30) UNIQUE NOT NULL,
rawnick VARCHAR(100),
channel VARCHAR(32),
action VARCHAR(20),
message VARCHAR(380)
)
END_SQL
$dbh->do($sql);
And to insert a record using values I've determined elsewhere:
$dbh->do('INSERT INTO seenDB (nick, rawnick, channel, action, message, date, time) VALUES (?, ?, ?, ?, ?, ?, ?)', undef, $nickString, $rawnickString, $channelString, $actionString, $messageString, $dateString, $timeString);
So, when the script needs to update, I'd like to update all of these fields at once, but right now the only thing that works is one at a time, using syntax I got from the tutorial:
$dbh->do('UPDATE seenDB SET time = ? WHERE nick = ?',
undef,
$timeString,
$nickString);
I've tried the following syntaxes for multiple fields, but they fail:
$dbh->do('UPDATE seenDB (rawnick, channel, action, message, date, time) VALUES (?, ?, ?, ?, ?, ?)', undef, $rawnickString, $channelString, $actionString, $messageString, $dateString, $timeString);
and
$dbh->do('UPDATE seenDB SET rawnick=$rawnickString channel=$channelString action=$actionString message=$messageString date=$dateString time=$timeString WHERE nick=$nickString');
Is there a better way to do this?
You can update several fields at once in pretty much the same way as you update a single field; just list them comma-separated in the UPDATE, something like:
$dbh->do('UPDATE seenDB SET rawnick=?, channel=?, action=?, message=?, date=?, time=? WHERE nick=?',
undef,
$rawnickString,
$channelString,
$actionString,
$messageString,
$dateString,
$timeString,
$nickString
);
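If the database behind this is SQLite (the INTEGER PRIMARY KEY column suggests it, though that is an assumption), version 3.24+ also supports an upsert that folds the INSERT and UPDATE into one statement, keyed off the UNIQUE constraint on nick. A sketch:

-- Insert a new sighting, or update the existing row for this nick
INSERT INTO seenDB (nick, rawnick, channel, action, message, date, time)
VALUES (?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(nick) DO UPDATE SET
  rawnick = excluded.rawnick,
  channel = excluded.channel,
  action  = excluded.action,
  message = excluded.message,
  date    = excluded.date,
  time    = excluded.time;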
I am inserting a large string into a CLOB column. The string is (in this instance) 3190 characters long - but can be much larger.
The string consists of XML data; sometimes the data will commit, sometimes I get the error. The error occurs roughly 50% of the time.
Even strings which contain over 5000 characters will sometimes commit with no problem.
I'm unsure where to go next, as I am under the impression that CLOB is the best data type for this data.
I have tried LONG and LONG RAW.
Someone suggested using XMLTYPE; however, that does not exist in my version of Oracle (11g - 11.2.0.2.0).
My insert statement:
INSERT INTO MYTABLE(InterfaceId, SourceSystem, Description, Type, Status, StatusNotes, MessageData, CreatedDate, ChangedDate, Id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
MessageData is the CLOB column where the error is occurring; I have tried committing without this data populated and it works.
Error
ORA-01461: can bind a LONG value only for insert into a LONG column
ALTER TABLE MYTABLE
ADD (XML_COL XMLTYPE);
and then:
SQL> INSERT INTO MYTABLE(..., XML_COL) VALUES (..., XMLTYPE('<root>example</root>'));
The key is to use an XMLTYPE column and then the XMLTYPE() constructor to convert your string. (ORA-01461 typically appears when a character bind longer than 4000 bytes is sent as a plain VARCHAR2, which would also explain why shorter payloads commit and longer ones fail intermittently.)
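If you later need the document back as text, XMLType's getClobVal() member function (available in 11g) returns the content as a CLOB. A hypothetical read-back against the column added above:

SELECT t.XML_COL.getClobVal()
FROM MYTABLE t;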
I got a table with 4 fields:
id, int(11), auto increment
email, varchar(32)
pass, varchar(32)
date_created, date
My question is: what should my query look like?
I mean, I don't need to insert the first value for id because it's auto-increment, but I have to insert all of the other values.
First of all, I hope you're using PreparedStatements.
Assuming you have a Connection object named conn and two strings email and password...
PreparedStatement stmt = conn.prepareStatement("INSERT INTO table_name(email, pass, date_created) VALUES (?, ?, ?)");
stmt.setString(1, email);
stmt.setString(2, password);
stmt.setDate(3, new java.sql.Date(System.currentTimeMillis())); // setDate() takes java.sql.Date, not java.util.Date
stmt.executeUpdate();
In SQL you can specify which columns you want to set in the INSERT statement:
INSERT INTO table_name(email, pass, date_created) VALUES(?, ?, ?)
You can insert using the format:
INSERT INTO YourTable (Your Columns) VALUES (Your Values)
So, for example:
INSERT INTO Test_Table (email, pass, date_created) VALUES ('john@blah.com', 'pass', STR_TO_DATE(string, format))
Using parameters (T-SQL); it is better to pass values as parameters rather than as strings:
INSERT INTO [YourTableName] (email, pass, date_created)
VALUES (@email, @pass, @date_created)
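A hypothetical end-to-end sketch of that parameterized form, assuming SQL Server 2008+ (table name, variable names, and values are illustrative):

DECLARE @email varchar(32) = 'john@blah.com',
        @pass varchar(32) = 'pass',
        @date_created date = GETDATE();

INSERT INTO [YourTableName] (email, pass, date_created)
VALUES (@email, @pass, @date_created);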
I am attempting to run a query that uses bind variables against a MySQL database engine. I am wondering how I can tell the engine to "reset" the bind variable assignments. I'm sure an example will explain much better than my poor brain.
Here is the query:
INSERT INTO site_support_docs
(
ASSET_ID,
TIME_STAMP,
SITE_NAME,
DOCUMENT_NAME,
DOCUMENT_LOCATION,
DOCUMENT_CONTENT,
DOCUMENT_LAST_MODIFIED
)
VALUES (?, ?, ?, ?, ?, ?, STR_TO_DATE(?, '%M %e, %Y %r'))
ON DUPLICATE KEY UPDATE asset_id = ?,
time_stamp = ?,
site_name = ?,
document_name = ?,
document_location = ?,
document_content = ?,
document_last_modified =
STR_TO_DATE(?, '%M %e, %Y %r')
My problem is that the eighth "?" is interpreted as a new bind variable when there are only seven. Anyway, I guess I can revert to using the actual values... but, I'm sure there is a better way.
Matt
MySQL offers a VALUES() function that provides the value which would have been inserted had the duplicate-key conflict not existed, so you don't need to repeat the placeholders.
INSERT INTO t VALUES (?) ON DUPLICATE KEY UPDATE x = VALUES(x);
http://dev.mysql.com/doc/refman/5.0/en/miscellaneous-functions.html#function_values
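Applied to the statement above, only the original seven placeholders remain. A sketch, assuming asset_id is the key the duplicate conflict fires on (so it is left out of the update list):

INSERT INTO site_support_docs
  (ASSET_ID, TIME_STAMP, SITE_NAME, DOCUMENT_NAME,
   DOCUMENT_LOCATION, DOCUMENT_CONTENT, DOCUMENT_LAST_MODIFIED)
VALUES (?, ?, ?, ?, ?, ?, STR_TO_DATE(?, '%M %e, %Y %r'))
ON DUPLICATE KEY UPDATE
  time_stamp             = VALUES(time_stamp),
  site_name              = VALUES(site_name),
  document_name          = VALUES(document_name),
  document_location      = VALUES(document_location),
  document_content       = VALUES(document_content),
  document_last_modified = VALUES(document_last_modified);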