Can I reuse a bind variable in an 'INSERT ... ON DUPLICATE KEY UPDATE ...' statement? - sql

I am attempting to run a query that uses bind variables against a MySQL database engine, and I am wondering how I can tell the engine to "reset" the bind variable assignments. I'm sure an example will explain this much better than my poor brain can.
Here is the query:
INSERT INTO site_support_docs
(
ASSET_ID,
TIME_STAMP,
SITE_NAME,
DOCUMENT_NAME,
DOCUMENT_LOCATION,
DOCUMENT_CONTENT,
DOCUMENT_LAST_MODIFIED
)
VALUES (?, ?, ?, ?, ?, ?, STR_TO_DATE(?, '%M %e, %Y %r'))
ON DUPLICATE KEY UPDATE asset_id = ?,
time_stamp = ?,
site_name = ?,
document_name = ?,
document_location = ?,
document_content = ?,
document_last_modified =
STR_TO_DATE(?, '%M %e, %Y %r')
My problem is that the eighth "?" is interpreted as a new bind variable, when there are only seven distinct values I want to bind. Anyway, I guess I could revert to using the actual values... but I'm sure there is a better way.
Matt

MySQL offers a VALUES() function that provides the value which would have been inserted had the duplicate key conflict not occurred, so you don't need to repeat the placeholder.
INSERT INTO t VALUES (?) ON DUPLICATE KEY UPDATE x = VALUES(x);
http://dev.mysql.com/doc/refman/5.0/en/miscellaneous-functions.html#function_values
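Applied to the statement above, the rewrite would look something like this (a sketch reusing the asker's columns; only the seven placeholders in the VALUES list remain):
INSERT INTO site_support_docs
(
ASSET_ID,
TIME_STAMP,
SITE_NAME,
DOCUMENT_NAME,
DOCUMENT_LOCATION,
DOCUMENT_CONTENT,
DOCUMENT_LAST_MODIFIED
)
VALUES (?, ?, ?, ?, ?, ?, STR_TO_DATE(?, '%M %e, %Y %r'))
ON DUPLICATE KEY UPDATE
-- VALUES(col) evaluates to the value that would have been inserted,
-- so document_last_modified picks up the already-converted date
asset_id = VALUES(asset_id),
time_stamp = VALUES(time_stamp),
site_name = VALUES(site_name),
document_name = VALUES(document_name),
document_location = VALUES(document_location),
document_content = VALUES(document_content),
document_last_modified = VALUES(document_last_modified)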

Related

NiFi Caused by: java.text.ParseException: Unparseable date

I'm trying to insert a flowfile from NiFi into a PostgreSQL database. The flowfile is a JSON whose keys are: id, timestamp, metric1, metric2, ..., and I am having problems with the timestamp data.
This is the value of the SQL insert statement:
INSERT INTO egillor.metrics (id, consumo_red, generacion_pv, timestamp) VALUES (?, ?, ?, ?)
And this is the error:
Unable to execute SQL select query INSERT INTO egillor.metrics (id, consumo_red, generacion_pv, timestamp)
VALUES (?, ?, ?, ?) for StandardFlowFileRecord ... due to
java.sql.SQLDataException: The value of the sql.args.4.value is '2021-05-22T00:00:00+02:00', which cannot
be converted to a timestamp; routing to failure:
In the middle of the log I have the following clue, but I don't know how to fix it:
Caused by: java.text.ParseException: Unparseable date: "2021-05-22T00:00:00+02:00"
So, I have to deal with java.text.ParseException from inside a PutDatabaseRecord processor...
Has anyone had this problem in the past? Does anyone know how I can fix it?
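One workaround worth sketching (an assumption, not a verified NiFi fix): if the insert can go through a processor that lets you control the statement text, such as PutSQL, the ISO-8601 value can be bound as plain text and parsed by PostgreSQL itself, which understands the offset notation natively:
-- sketch: PostgreSQL parses '2021-05-22T00:00:00+02:00' on its own,
-- so bind the value as text and cast it inside the statement
INSERT INTO egillor.metrics (id, consumo_red, generacion_pv, timestamp)
VALUES (?, ?, ?, CAST(? AS timestamptz));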

nHibernate is trying to SQL UPDATE an entity I just created, rather than INSERT?

A ProjectParticipant may be a member of many Groups, and a Group may have many ProjectParticipants.
I'm finding that when I create 2 ProjectParticipants (this works), then populate the Project.Groups collection with 3 new groups and add the participants to the relevant groups (Group A has participant 1, Group B has participant 2, Group C has participants 1 and 2), I encounter a "StaleStateException - Batch update returned unexpected row count from update; actual row count: 0, expected: 3". I would expect nHibernate to INSERT the new groups, but it's running an UPDATE query and balking that they don't exist. It doesn't get as far as assigning the participants to the groups.
Here are the mappings:
//ProjectMap: A Project..
Id(x => x.Id).GeneratedBy.GuidComb().UnsavedValue(Guid.Empty);
HasMany(x => x.Participants)
.Table("ProjectParticipants")
.KeyColumn("ProjectId")
.ApplyFilter(DeletedDateFilter.FilterName)
.Cascade.AllDeleteOrphan()
.Inverse();
HasMany(x => x.Groups)
.Table("ProjectGroups")
.KeyColumn("ProjectId")
.Cascade.AllDeleteOrphan()
.Inverse();
//ProjectParticipantMap: A ProjectParticipant…
Id(x => x.Id).GeneratedBy.GuidComb().UnsavedValue(Guid.Empty);
References(x => x.Project)
.Column("ProjectId")
.LazyLoad(Laziness.Proxy);
HasManyToMany(x => x.Groups)
.Table("ProjectGroupParticipants")
.ParentKeyColumn("ProjectParticipantId")
.ChildKeyColumn("ProjectGroupId");
//GroupMap: A Group...
Id(e => e.Id).GeneratedBy.Assigned().UnsavedValue(Guid.Empty);
References(e => e.Project)
.Column("ProjectId")
.LazyLoad(Laziness.Proxy);
HasManyToMany(x => x.Participants)
.Table("ProjectGroupParticipants")
.ParentKeyColumn("ProjectGroupId")
.ChildKeyColumn("ProjectParticipantId")
.ApplyChildFilter(DeletedDateFilter.FilterName);
The tables are:
[ProjectParticipants] 1-->M [ProjectGroupParticipants] M<--1 [ProjectGroups]
         M                                                       M
          \---------------->1 [Project] 1<----------------------/
Here are the SQLs being run by nHibernate:
--presume this is adding the first participant - I find him in the db
INSERT INTO ProjectParticipants (CreatedDate, ModifiedDate, DeletedDate, FirstName, LastName, Timezone, Email, Pseudonym, Role, ProjectId, UserId, MobileNumber, Id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
--presume this is adding the second participant - I find her in the DB
INSERT INTO ProjectParticipants (CreatedDate, ModifiedDate, DeletedDate, FirstName, LastName, Timezone, Email, Pseudonym, Role, ProjectId, UserId, MobileNumber, Id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
--not sure what this is doing
UPDATE Projects SET CreatedDate = ?, ModifiedDate = ?, LogoUrl = ?, LogoFilename = ?, Client = ?, Name = ?, Description = ?, LastAccessedDate = ? WHERE Id = ?
--not sure what this operation is for, but at this point in time NO GROUPS EXIST for this project ID
SELECT … FROM ProjectGroups groups0_ WHERE groups0_.ProjectId=?
--not sure what this is for either?
UPDATE Projects SET CreatedDate = ?, ModifiedDate = ?, LogoUrl = ?, LogoFilename = ?, Client = ?, Name = ?, Description = ?, LastAccessedDate = ? WHERE Id = ?
-- I've no idea why this is an UPDATE instead of an INSERT, but it will certainly update 0 rows instead of X, because no groups exist
UPDATE ProjectGroups SET CreatedDate = ?, ModifiedDate = ?, DeletedDate = ?, Name = ?, ProjectId = ? WHERE Id = ?
Exception thrown: 'NHibernate.StaleStateException' in NHibernate.dll
Batch update returned unexpected row count from update; actual row count: 0; expected: 3
[ UPDATE ProjectGroups SET CreatedDate = #p0, ModifiedDate = #p1, DeletedDate = #p2, Name = #p3, ProjectId = #p4 WHERE Id = #p5 ]
So why did nHibernate come to think that its local entity had already been saved and was hence available to UPDATE? The SQL generated should be an INSERT, but I'm not sure how it manages sync between the local cache and the DB to know whether entities already exist or not.
Slightly puzzled that this used to work in NH 2.x, but since an upgrade to the latest (5.x) this exception has started appearing.
"Slightly puzzled, that this used to work in NH 2.x"
Handling of unsaved-value was indeed changed in 5.2 with this pull request.
If I understand correctly, this PR fixed some cases where the provided unsaved-value mapping was ignored for assigned identifiers.
So it seems you have an incorrect unsaved-value mapping for your entities with assigned identifiers. From the given data it's unclear how you expect NHibernate to determine whether an entity is transient. With your mapping, if Id is not equal to Guid.Empty, NHibernate will trigger an UPDATE statement for all cascaded entities, and it seems that's exactly the behavior you see.
If you want it to check the database when the entity is not present in the session, set it to "undefined" instead:
Id(x => x.Id).GeneratedBy.GuidComb().UnsavedValue("undefined");
If you want it to always save the entity, set it to "any".
Read the spec for explanations of all the other possible values. Also check this similar issue.

SQL Server suddenly not respecting default values

I am running a script to populate a table on a SQL Server 2017 database using SQLAlchemy. The script ran flawlessly a few weeks ago.
Essentially, the server does not seem to be populating default values on non-nullable fields.
For example, when I run the following statement (as rendered by SQLA):
INSERT INTO concept
(
retired,
short_name,
description,
form_text,
datatype_id,
class_id,
is_set,
creator,
date_created,
version,
changed_by,
date_changed,
retired_by,
date_retired,
retire_reason,
uuid,
note_regex
)
VALUES
(
?,
?,
?,
?,
?,
?,
?,
?,
?,
?,
?,
?,
?,
?,
?,
?,
?
)'] [parameters: (0,
'ck',
'creatine phosphokinase',
None,
14,
49,
0,
0,
None,
None,
None,
None,
None,
None,
None,
'cf6443ff-f2a1-49ab-96e3-c5d6fac362ed', None)]
I receive the error:
Cannot insert the value NULL into column 'date_created',
table 'myositis_longetudinal_cohort.dbo.concept'; column does not allow
nulls. INSERT fails. (515)
This is confusing to me, since the date_created field has a default value of getdate().
Here is the constraint statement:
ALTER TABLE [dbo].[concept]
ADD CONSTRAINT [DF__concept__date_cr__62AFA012] DEFAULT (Getdate()) FOR
[date_created]
I am new to SQL Server and am not sure what I may be missing. The server was updated on 11/15/18, but I did not see anything in the update that could explain the change.
Thanks in advance for any help!
INSERT INTO concept
(
retired,
short_name,
description,
form_text,
datatype_id,
class_id,
is_set,
creator,
date_created, <--SPECIFYING HERE MEANS DEFAULT WON'T BE USED
version,
changed_by,
date_changed,
retired_by,
date_retired,
retire_reason,
uuid,
note_regex
)
And the parameters:
[parameters: (
0, <-- retired,
'ck', <-- short_name,
'creatine phosphokinase', <-- description,
None, <-- form_text,
14, <-- datatype_id,
49, <-- class_id,
0, <-- is_set,
0, <-- creator,
None, <-- date_created (provided with NULL)
None,
None,
None,
None,
None,
None,
'cf6443ff-f2a1-49ab-96e3-c5d6fac362ed',
None
)]
The same problem will occur for any other column in your table that has a non-NULL default plus a NOT NULL constraint.
As to why it's suddenly started happening, some possibilities:
SQLAlchemy never used to generate an insert that specified this column
The NOT NULL constraint was added
A trigger supplying values in the case of a null being passed has been removed, re-coded or disabled
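Whichever of those applies, the fix on the SQL side is to stop sending NULL for the column. A sketch built from the asker's own parameter list (assuming the columns omitted here are nullable or have defaults): either leave date_created out of the column list entirely, or name it and pass the DEFAULT keyword instead of NULL.
-- sketch: omit date_created so the GETDATE() default fires
INSERT INTO concept (retired, short_name, description, datatype_id, class_id, is_set, creator, uuid)
VALUES (0, 'ck', 'creatine phosphokinase', 14, 49, 0, 0, 'cf6443ff-f2a1-49ab-96e3-c5d6fac362ed');
-- or keep the column in the list but hand it DEFAULT instead of NULL
INSERT INTO concept (retired, short_name, description, datatype_id, class_id, is_set, creator, date_created, uuid)
VALUES (0, 'ck', 'creatine phosphokinase', 14, 49, 0, 0, DEFAULT, 'cf6443ff-f2a1-49ab-96e3-c5d6fac362ed');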
Thanks to everyone.
It turns out that it was a SQLAlchemy issue.
I had to add autoload: true to my __table_args__ instead of manually defining each Column. Apparently, if you do not allow SQLA to introspect the table, it will pass NULL as a value instead of skipping the column when generating the insert statement, as suggested by @larnu and others.

Using DBI in Perl to update multiple fields in one line?

I'm mediocre with Perl and new to SQL, so excuse any lameness.
I've got a Perl script I'm working on that interacts with a database to keep track of users on IRC. From tutorials I've found, I've been able to create the db, create a table within it, INSERT a new record, UPDATE a field in that record, and SELECT/find records based on a field.
The problem is, I can only figure out how to UPDATE one field at a time and that seems inefficient.
The code to create the table:
my $sql = <<'END_SQL';
CREATE TABLE seenDB (
id INTEGER PRIMARY KEY,
date VARCHAR(10),
time VARCHAR(8),
nick VARCHAR(30) UNIQUE NOT NULL,
rawnick VARCHAR(100),
channel VARCHAR(32),
action VARCHAR(20),
message VARCHAR(380)
)
END_SQL
$dbh->do($sql);
And to insert a record using values I've determined elsewhere:
$dbh->do('INSERT INTO seenDB (nick, rawnick, channel, action, message, date, time) VALUES (?, ?, ?, ?, ?, ?, ?)', undef, $nickString, $rawnickString, $channelString, $actionString, $messageString, $dateString, $timeString);
So, when the script needs to update, I'd like to update all of these fields at once, but right now the only thing that works is one at a time, using syntax I got from the tutorial:
$dbh->do('UPDATE seenDB SET time = ? WHERE nick = ?',
undef,
$timeString,
$nickString);
I've tried the following syntaxes for multiple fields, but they fail:
$dbh->do('UPDATE seenDB (rawnick, channel, action, message, date, time) VALUES (?, ?, ?, ?, ?, ?)', undef, $rawnickString, $channelString, $actionString, $messageString, $dateString, $timeString);
and
$dbh->do('UPDATE seenDB SET rawnick=$rawnickString channel=$channelString action=$actionString message=$messageString date=$dateString time=$timeString WHERE nick=$nickString');
Is there a better way to do this?
You can update several fields at once in pretty much the same way as you update a single field; just list them comma-separated in the UPDATE, something like:
$dbh->do('UPDATE seenDB SET rawnick=?, channel=?, action=?, message=?, date=?, time=? WHERE nick=?',
undef,
$rawnickString,
$channelString,
$actionString,
$messageString,
$dateString,
$timeString,
$nickString
);

ORA-01461: can bind a LONG value only for insert into a LONG column - when inserting into CLOB

I am inserting a large string into a CLOB column. The string is (in this instance) 3190 characters long, but it can be much larger.
The string consists of XML data. Sometimes the data will commit, sometimes I get the error; it occurs roughly 50% of the time.
Even strings which contain over 5000 characters will sometimes commit with no problem.
I'm unsure where to go next, as I am under the impression that CLOB is the best data type for this data.
I have tried LONG and LONG RAW.
Someone suggested using XMLTYPE; however, that does not exist in my version of Oracle (11g - 11.2.0.2.0).
My insert statement:
INSERT INTO MYTABLE(InterfaceId, SourceSystem, Description, Type, Status, StatusNotes, MessageData, CreatedDate, ChangedDate, Id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
MessageData is the CLOB column where the error is occurring; I have tried committing without this data populated and it works.
Error
ORA-01461: can bind a LONG value only for insert into a LONG column
ALTER TABLE MYTABLE
ADD XML_COL XMLTYPE;
AND THEN
SQL> INSERT INTO MYTABLE(..., XML_COL) VALUES (..., XMLTYPE('<root>example</root>'));
The key is to use an XMLTYPE column and then the XMLTYPE() function to convert your string to XMLTYPE.
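Put together with the asker's original statement, that would look something like this (a sketch: the XML string goes into the new XMLTYPE column in place of the CLOB MessageData):
-- sketch: bind the XML string as the seventh parameter; XMLTYPE()
-- converts it on the way in
INSERT INTO MYTABLE (InterfaceId, SourceSystem, Description, Type, Status, StatusNotes, XML_COL, CreatedDate, ChangedDate, Id)
VALUES (?, ?, ?, ?, ?, ?, XMLTYPE(?), ?, ?, ?);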