DbUnit does not update PostgreSQL sequences on insert

I am using DbUnit to run some tests on a PostgreSQL database. In order to run my tests, I bring the database into a well-known state by repopulating the database tables before each test with a clean insert. For that I use the FlatXmlDataSet definition below (compare with the attached SQL schema).
However, if I run the testCreateAvatar() test case, I get an exception because of a status code mismatch, which is caused by a failed SQL insert due to an already existing primary key (the id field). A look into my database shows that inserting the test datasets does not update the corresponding avatars_id_seq and users_id_seq sequences, which are used to generate the id fields (PostgreSQL's mechanism for generating auto-increment values).
That means the auto-increment value is not updated when I define static IDs in the FlatXmlDataSet definition. So my question is how I can change this behavior, or set the auto-increment value myself (using DbUnit).
Avatar creation test case
@Test
public void testCreateAvatar() throws Exception {
    // Set up the request url.
    final HttpPost request = new HttpPost(
            "http://localhost:9095/rest/avatars");
    // Set up the JSON blob, ...
    JSONObject jsonAvatar = new JSONObject();
    jsonAvatar.put("imageUrl", "images/dussel.jpg");
    // ... add it to the post request ...
    StringEntity input = new StringEntity(jsonAvatar.toString());
    input.setContentType("application/json");
    request.setEntity(input);
    // ... and execute the request.
    final HttpResponse response = HttpClientBuilder.create().build()
            .execute(request);
    // Verify the result.
    assertThat(response.getStatusLine().getStatusCode(),
            equalTo(HttpStatus.SC_CREATED));
    // Fetch dussel duck from the database ...
    Avatar dussel = getServiceObjDao().queryForFirst(
            getServiceObjDao().queryBuilder().where()
                    .eq("image_url", "images/dussel.jpg")
                    .prepare());
    // ... and verify that the object was created correctly.
    assertThat(dussel, notNullValue());
    assertThat(dussel.getImageUrl(), equalTo("images/dussel.jpg"));
}
The DbUnit dataset
<?xml version='1.0' encoding='UTF-8'?>
<dataset>
    <!-- Avatars -->
    <avatars
        id="1"
        image_url="images/donald.jpg" />
    <avatars
        id="2"
        image_url="images/daisy.jpg" />
    <!-- Users -->
    <users
        id="1"
        name="Donald Duck"
        email="donald.duck@entenhausen.de"
        password="quack" />
    <users
        id="2"
        name="Daisy Duck"
        email="daisy.duck@entenhausen.de"
        password="flower" />
</dataset>
The users and avatars table schema
CREATE TABLE avatars (
    id        BIGSERIAL PRIMARY KEY,
    cdate     TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    mdate     TIMESTAMP,
    image_url VARCHAR(200),
    UNIQUE (image_url)
);

CREATE TABLE users (
    id        BIGSERIAL PRIMARY KEY,
    cdate     TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    mdate     TIMESTAMP,
    name      VARCHAR(160) NOT NULL,
    email     VARCHAR(355) UNIQUE NOT NULL,
    password  VARCHAR(30) NOT NULL,
    avatar_id BIGINT,
    UNIQUE (name),
    CONSTRAINT user_avatar_id FOREIGN KEY (avatar_id)
        REFERENCES avatars (id) MATCH SIMPLE
        ON UPDATE NO ACTION ON DELETE NO ACTION
);
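For context, a BIGSERIAL column is just shorthand for a plain BIGINT column backed by a sequence; the DEFAULT only fires when no explicit id is supplied, which is why inserting rows with static ids leaves the sequence untouched. Roughly, for the id column of avatars:

-- Sketch of what BIGSERIAL expands to behind the scenes (id column only):
CREATE SEQUENCE avatars_id_seq;
CREATE TABLE avatars (
    id BIGINT NOT NULL DEFAULT nextval('avatars_id_seq') PRIMARY KEY,
    image_url VARCHAR(200) UNIQUE
);
ALTER SEQUENCE avatars_id_seq OWNED BY avatars.id;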

The function below finds all sequences in a database, extracts the name of the corresponding table from each sequence name, and finally updates the current value of each sequence based on the maximum id value in the corresponding table. As there has been no better solution yet, this seems to be the way to go. Hope this helps someone.
Simple solution based on harmic's suggestion
@Before
public void resetSequence() {
    Connection conn = null;
    try {
        // Establish a database connection.
        conn = DriverManager.getConnection(
                this.props.getProperty("database.jdbc.connectionURL"),
                this.props.getProperty("database.jdbc.username"),
                this.props.getProperty("database.jdbc.password"));
        // Select all sequence names ...
        Statement seqStmt = conn.createStatement();
        ResultSet rs = seqStmt.executeQuery(
                "SELECT c.relname FROM pg_class c WHERE c.relkind = 'S';");
        // ... and update each sequence to match max(id)+1.
        while (rs.next()) {
            String sequence = rs.getString("relname");
            // Strip the "_id_seq" suffix (7 characters) to get the table name.
            String table = sequence.substring(0, sequence.length() - 7);
            Statement updStmt = conn.createStatement();
            // Note: the table name must not be wrapped in single quotes.
            updStmt.executeQuery("SELECT SETVAL('" + sequence
                    + "', (SELECT MAX(id)+1 FROM " + table + "));");
        }
    } catch (SQLException e) {
        e.printStackTrace();
    } finally {
        try {
            conn.close();
        } catch (SQLException e) {
        }
    }
}
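If you would rather not derive the table name from the sequence name by string manipulation, PostgreSQL can look up the sequence behind a serial column for you via pg_get_serial_sequence. A minimal sketch of the same reset, one statement per table (table and column names taken from the schema above; COALESCE also covers empty tables):

-- Reset each sequence from the owning table's current MAX(id).
SELECT SETVAL(pg_get_serial_sequence('avatars', 'id'),
              COALESCE((SELECT MAX(id) FROM avatars), 0) + 1, false);
SELECT SETVAL(pg_get_serial_sequence('users', 'id'),
              COALESCE((SELECT MAX(id) FROM users), 0) + 1, false);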

You can set the value of a sequence using setval, for example
SELECT SETVAL('sequence_name', 1000);
Where sequence_name is the name of the sequence, visible in psql using \d on the table, and 1000 is the value you want to set it to. You would probably want to set it to the maximum value of id in the table.
What I don't really know is how to get DbUnit to emit this SQL.


hibernate.hbm2ddl.auto does not link sequence to id column

Question
Why do I get a NULL not allowed for column "ID" exception when I execute INSERT INTO PUBLIC.MY_ENTITY (name) VALUES ('test name');?
Setup
I'm using Spring Boot and Hibernate. Spring Boot is launched with properties:
hibernate.hbm2ddl.auto=update
spring.jpa.hibernate.ddl-auto=update
I have an entity:
@Entity
@Table(name = "MY_ENTITY")
public class MyEntity {

    @Id
    @SequenceGenerator(sequenceName = "MY_ENTITY_SEQ", name = "MyEntitySeq")
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "MyEntitySeq")
    private Long id;

    @Column(unique = true, nullable = false)
    private String name;

    // getters & setters
    // ...
}
Table has been generated on application start.
I can prove that the sequence has been created with the following query:
SELECT * FROM INFORMATION_SCHEMA.SEQUENCES WHERE SEQUENCE_NAME = 'MY_ENTITY_SEQ'
P.S.
For some reason Hibernate does not link the sequence to the id auto-generation. I can solve the problem with the query below. But how can I make Hibernate generate it?
ALTER TABLE PUBLIC.MY_ENTITY ALTER COLUMN ID BIGINT DEFAULT (NEXT VALUE FOR PUBLIC.MY_ENTITY_SEQ) NOT NULL NULL_TO_DEFAULT SEQUENCE PUBLIC.MY_ENTITY_SEQ;
INSERT INTO PUBLIC.MY_ENTITY (name) VALUES ('test name');
Give the @SequenceGenerator an allocationSize: @SequenceGenerator(sequenceName = "MY_ENTITY_SEQ", name = "MyEntitySeq", allocationSize = 1)
Check the dialect you are using
Set "hibernate.id.new_generator_mappings" to "true"

Multiple SQL statements using Groovy

Delete multiple entries from DB using Groovy in SoapUI
I am able to execute one SQL statement, but when I do a few it just hangs.
How can I delete multiple rows?
def sql = Sql.newInstance('jdbc:oracle:thin:@jack:1521:test1', 'test', 'test', 'oracle.jdbc.driver.OracleDriver')
log.info("SQL connected")
sql.connection.autoCommit = false
try {
    log.info("inside try")
    log.info("before")
    String Que =
"""delete from table name where user in (select user from user where ID= '123' and type= 262);
delete from table name where user in (select user from user where ID= '1012' and type= 28)
delete from table name where user in (select user from user where ID= '423' and type= 27)
"""
    log.info(Que)
    def output = sql.execute(Que);
    log.info(sql)
    log.info(output)
    log.info("after")
    sql.commit()
    println("Successfully committed")
} catch (Exception ex) {
    sql.rollback()
    log.info("Transaction rollback" + ex)
}
sql.close()
Here is what you are looking for.
If you want to delete a bulk of records, the following is a more effective way:
Create a map of the data, i.e., the id and type values that need to be removed, as key/value pairs.
Use a closure to execute the query, iterating through the map.
Comments are added inline.
//Closure to execute the query with parameters
def runQuery = { entry ->
    def output = sql.execute("delete from table name where user in (select user from user where ID=:id and type=:type)", [id: entry.key, type: entry.value])
    log.info(output)
}

//Create the data that you want to remove in the form of a map of id and type
def deleteData = ['123': 262, '1012': 28, '423': 27]

def sql = Sql.newInstance('jdbc:oracle:thin:@jack:1521:test1', 'test', 'test', 'oracle.jdbc.driver.OracleDriver')
log.info("SQL connected")
sql.connection.autoCommit = false
try {
    log.info(sql)
    log.info("inside try")
    log.info("before")
    //Call the above closure and pass each key/value pair in turn
    deleteData.each { runQuery(it) }
    log.info("after")
    sql.commit()
    println("Successfully committed")
} catch (Exception ex) {
    sql.rollback()
    log.info("Transaction rollback" + ex)
}
sql.close()
If you are only after a way to execute multiple queries in one call, then you may look here; I am not sure whether your database supports it.
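Alternatively, if the goal is simply fewer round trips, the three deletes can often be collapsed into a single statement. A sketch assuming Oracle's multi-column IN syntax, treating the table name/user names from the question as placeholders:

-- One DELETE covering all three (ID, type) pairs.
DELETE FROM table_name
WHERE user IN (SELECT user FROM user
               WHERE (ID, type) IN (('123', 262), ('1012', 28), ('423', 27)));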

ADO.NET Increment ID on add new row

I'm trying to add new values to my GridView, which are later passed to the Cache, the DataSet, and the underlying SQL database.
Here is my code, but I can't figure out what to type on the line "dataRow["ID"] =", as you can see. Everything else works fine, and the other values are added to the database if I just give "ID" any number that doesn't exist yet.
protected void insertStudent_Click(object sender, EventArgs e)
{
    DataSet dataSet = (DataSet)Cache["DATASET"];
    //DataRow dataRow = dataSet.Tables["Students"].Rows.Find(e.Keys["ID"]);
    dataSet.Tables["Students"].PrimaryKey = new DataColumn[] { dataSet.Tables["Students"].Columns["ID"] };
    DataRow dataRow = dataSet.Tables["Students"].NewRow();
    dataRow["ID"] = // <-- this is the line I can't figure out
    dataRow["FirstName"] = ((TextBox)GridView1.FooterRow.FindControl("txtFirstName")).Text;
    dataRow["LastName"] = ((TextBox)GridView1.FooterRow.FindControl("txtLastName")).Text;
    dataRow["Gender"] = ((DropDownList)GridView1.FooterRow.FindControl("DropDownListGender")).SelectedValue;
    dataRow["Course"] = ((DropDownList)GridView1.FooterRow.FindControl("DropDownListCourse")).SelectedValue;
    dataRow["Grade"] = ((DropDownList)GridView1.FooterRow.FindControl("DropDownListGrade")).SelectedValue;
    Cache.Insert("DATASET", dataSet, null, DateTime.Now.AddHours(24), System.Web.Caching.Cache.NoSlidingExpiration);
    dataSet.Tables["Students"].Rows.Add(dataRow);
    GridView1.DataSource = (DataSet)Cache["DATASET"];
    GridView1.DataBind();
}
As per Andrei in the comment above, set up your ID column in the table as:
CREATE TABLE sample (
    ID INT IDENTITY(1,1) NOT NULL,
    FirstName VARCHAR(50)  -- and the rest of the columns
)
No need to add a value to ID; it will increment by itself. Insert the other values, and when you read from the database the ID column will already be populated.
P.S. Do not include the ID column when inserting the other values into the table.
Google 'SQL INCREMENT' for more information.
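A minimal sketch against the sample table above: omit ID from the INSERT and, if you need the generated value, read it back with SCOPE_IDENTITY():

-- ID is not in the column list; IDENTITY assigns it.
INSERT INTO sample (FirstName) VALUES ('John');
-- Returns the ID generated by the last insert in this scope.
SELECT SCOPE_IDENTITY();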
The answer to this question is to use AutoIncrement on the ID Column in your Cached DataSet. Then when you save to DB, the added rows will get their correct ID in the DB.
dataSet.Tables["Students"].Columns["ID"].AutoIncrement = true;

error, string or binary data would be truncated when trying to insert

I am running data.bat file with the following lines:
Rem This batch file will populate tables
cd\program files\Microsoft SQL Server\MSSQL
osql -U sa -P Password -d MyBusiness -i c:\data.sql
The contents of the data.sql file are:
insert Customers
(CustomerID, CompanyName, Phone)
Values('101','Southwinds','19126602729')
There are 8 more similar lines for adding records.
When I run this with start > run > cmd > c:\data.bat, I get this error message:
1>2>3>4>5>....<1 row affected>
Msg 8152, Level 16, State 4, Server SP1001, Line 1
string or binary data would be truncated.
<1 row affected>
<1 row affected>
<1 row affected>
<1 row affected>
<1 row affected>
<1 row affected>
Also, I am a newbie obviously, but what do Level # and State # mean, and how do I look up error messages such as the one above (8152)?
From @gmmastros's answer
Whenever you see the message....
string or binary data would be truncated
Think to yourself... The field is NOT big enough to hold my data.
Check the table structure for the customers table. I think you'll find that the length of one or more fields is NOT big enough to hold the data you are trying to insert. For example, if the Phone field is a varchar(8) field, and you try to put 11 characters in to it, you will get this error.
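One quick way to see the declared lengths is to query the information schema; a sketch using the Customers table named in the question:

-- List each character column of the table with its declared maximum length.
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Customers'
  AND CHARACTER_MAXIMUM_LENGTH IS NOT NULL;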
I had this issue although the data length was shorter than the field length.
It turned out that the problem was another log table (for the audit trail), filled by a trigger on the main table, where the column size also had to be changed.
In one of the INSERT statements you are attempting to insert a string that is too long into a string (varchar or nvarchar) column.
If it's not obvious which INSERT is the offender from a mere look at the script, you can count the <1 row affected> lines that occur before the error message. That number plus one gives you the statement number. In your case it seems to be the second INSERT that produces the error.
Just to contribute additional information: I had the same issue, and it was because the field wasn't big enough for the incoming data; this thread helped me solve it (the top answer clarifies it all).
BUT it is very important to know the possible reasons that may cause it.
In my case I was creating the table with a field like this:
SELECT '' AS Period, * INTO #NewTable FROM Transactions
Therefore the field "Period" had a length of zero, causing the INSERT operations to fail. I changed it to 'XXXXXX', which is the length of the incoming data, and it then worked properly (because the field now had a length of 6).
I hope this helps anyone with the same issue :)
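A slightly cleaner variant of that fix is to give the placeholder column an explicit type up front instead of padding the literal (same table names as above; pick a length that fits your data):

-- CAST defines Period as VARCHAR(10) regardless of the literal's length.
SELECT CAST('' AS VARCHAR(10)) AS Period, *
INTO #NewTable
FROM Transactions;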
Some of your data cannot fit into your database column (it's too small). It is not easy to find what is wrong. If you use C# and Linq2Sql, you can list the fields which would be truncated:
First create helper class:
public class SqlTruncationExceptionWithDetails : ArgumentOutOfRangeException
{
    public SqlTruncationExceptionWithDetails(System.Data.SqlClient.SqlException inner, DataContext context)
        : base(inner.Message + " " + GetSqlTruncationExceptionWithDetailsString(context))
    {
    }

    /// <summary>
    /// Part of the code is from the following link:
    /// http://stackoverflow.com/questions/3666954/string-or-binary-data-would-be-truncated-linq-exception-cant-find-which-fiel
    /// </summary>
    /// <param name="context"></param>
    /// <returns></returns>
    static string GetSqlTruncationExceptionWithDetailsString(DataContext context)
    {
        StringBuilder sb = new StringBuilder();
        foreach (object update in context.GetChangeSet().Updates)
        {
            FindLongStrings(update, sb);
        }
        foreach (object insert in context.GetChangeSet().Inserts)
        {
            FindLongStrings(insert, sb);
        }
        return sb.ToString();
    }

    public static void FindLongStrings(object testObject, StringBuilder sb)
    {
        foreach (var propInfo in testObject.GetType().GetProperties())
        {
            foreach (System.Data.Linq.Mapping.ColumnAttribute attribute in propInfo.GetCustomAttributes(typeof(System.Data.Linq.Mapping.ColumnAttribute), true))
            {
                if (attribute.DbType.ToLower().Contains("varchar"))
                {
                    // Parse the declared length out of e.g. "NVarChar(50)".
                    string dbType = attribute.DbType.ToLower();
                    int numberStartIndex = dbType.IndexOf("varchar(") + 8;
                    int numberEndIndex = dbType.IndexOf(")", numberStartIndex);
                    string lengthString = dbType.Substring(numberStartIndex, (numberEndIndex - numberStartIndex));
                    int maxLength = 0;
                    int.TryParse(lengthString, out maxLength);
                    string currentValue = (string)propInfo.GetValue(testObject, null);
                    if (!string.IsNullOrEmpty(currentValue) && maxLength != 0 && currentValue.Length > maxLength)
                    {
                        // string is too long
                        sb.AppendLine(testObject.GetType().Name + "." + propInfo.Name + " " + currentValue + " Max: " + maxLength);
                    }
                }
            }
        }
    }
}
Then prepare the wrapper for SubmitChanges:
public static class DataContextExtensions
{
    public static void SubmitChangesWithDetailException(this DataContext dataContext)
    {
        // http://stackoverflow.com/questions/3666954/string-or-binary-data-would-be-truncated-linq-exception-cant-find-which-fiel
        try
        {
            // this can fail on data truncation
            dataContext.SubmitChanges();
        }
        catch (SqlException sqlException) // when (sqlException.Message == "String or binary data would be truncated.")
        {
            if (sqlException.Message == "String or binary data would be truncated.") // only for EN windows - if you are running a different windows language, invoke sqlException.getMessage on a thread with EN culture
                throw new SqlTruncationExceptionWithDetails(sqlException, dataContext);
            else
                throw;
        }
    }
}
Prepare global exception handler and log truncation details:
protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    string message = ex.Message;
    //TODO - log to file
}
Finally use the code:
Datamodel.SubmitChangesWithDetailException();
Another situation in which you can get this error is the following:
I had the same error, and the reason was that in an INSERT statement that received data from a UNION, the order of the columns was different from the original table. If you change the order in #table3 to a, b, c, you will fix the error, as spelled out below.
select a, b, c into #table1
from #table0
insert into #table1
select a, b, c from #table2
union
select a, c, b from #table3
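The corrected tail of that script; listing the target columns explicitly in the INSERT also guards against this class of mistake:

-- order corrected in the #table3 branch; an explicit column list
-- makes any remaining mismatch obvious
insert into #table1 (a, b, c)
select a, b, c from #table2
union
select a, b, c from #table3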
On SQL Server you can use SET ANSI_WARNINGS OFF, like this:
using (SqlConnection conn = new SqlConnection("Data Source=XRAYGOAT\\SQLEXPRESS;Initial Catalog='Healthy Care';Integrated Security=True"))
{
    conn.Open();
    using (var trans = conn.BeginTransaction())
    {
        try
        {
            // Note: with ANSI_WARNINGS OFF the data is silently truncated
            // instead of raising an error.
            using (var cmd = new SqlCommand("", conn, trans))
            {
                cmd.CommandText = "SET ANSI_WARNINGS OFF";
                cmd.ExecuteNonQuery();
                cmd.CommandText = "YOUR INSERT HERE";
                cmd.ExecuteNonQuery();
                cmd.Parameters.Clear();
                cmd.CommandText = "SET ANSI_WARNINGS ON";
                cmd.ExecuteNonQuery();
                trans.Commit();
            }
        }
        catch (Exception)
        {
            trans.Rollback();
        }
    }
    conn.Close();
}
I had the same issue. The length of my column was too short.
What you can do is either increase the length or shorten the text you want to put in the database.
I also had this problem occurring at the web application level.
Eventually I found that the same error message was coming from a SQL UPDATE statement on a specific table.
It turned out that the column definition in the related history table(s) did not match the original table's column length for nvarchar types in some specific cases.
I had the same problem, even after increasing the size of the problematic columns in the table.
tl;dr: The length of the matching columns in corresponding Table Types may also need to be increased.
In my case, the error was coming from the Data Export service in Microsoft Dynamics CRM, which allows CRM data to be synced to an SQL Server DB or Azure SQL DB.
After a lengthy investigation, I concluded that the Data Export service must be using Table-Valued Parameters:
You can use table-valued parameters to send multiple rows of data to a Transact-SQL statement or a routine, such as a stored procedure or function, without creating a temporary table or many parameters.
As you can see in the documentation above, Table Types are used to create the data ingestion procedure:
CREATE TYPE LocationTableType AS TABLE (...);

CREATE PROCEDURE dbo.usp_InsertProductionLocation
    @TVP LocationTableType READONLY
Unfortunately, there is no way to alter a Table Type, so it has to be dropped & recreated entirely. Since my table has over 300 fields (😱), I created a query to facilitate the creation of the corresponding Table Type based on the table's columns definition (just replace [table_name] with your table's name):
SELECT 'CREATE TYPE [table_name]Type AS TABLE (' + STRING_AGG(CAST(field AS VARCHAR(max)), ',' + CHAR(10)) + ');' AS create_type
FROM (
SELECT TOP 5000 COLUMN_NAME + ' ' + DATA_TYPE
+ IIF(CHARACTER_MAXIMUM_LENGTH IS NULL, '', CONCAT('(', IIF(CHARACTER_MAXIMUM_LENGTH = -1, 'max', CONCAT(CHARACTER_MAXIMUM_LENGTH,'')), ')'))
+ IIF(DATA_TYPE = 'decimal', CONCAT('(', NUMERIC_PRECISION, ',', NUMERIC_SCALE, ')'), '')
AS field
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = '[table_name]'
ORDER BY ORDINAL_POSITION) AS T;
After updating the Table Type, the Data Export service started functioning properly once again! :)
When I tried to execute my stored procedure, I had the same problem because the size of the column where I needed to add data was shorter than the data I wanted to add.
You can increase the size of the column data type or reduce the length of your data.
A 2016/2017 update will show you the bad value and column.
A new trace flag will swap the old error for a new error 2628 that prints out the column and the offending value. Trace flag 460 is available in the latest cumulative updates for 2016 and 2017:
https://support.microsoft.com/en-sg/help/4468101/optional-replacement-for-string-or-binary-data-would-be-truncated
Just make sure that after you've installed the CU you enable the trace flag, either globally/permanently on the server, or with DBCC TRACEON:
https://learn.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-traceon-trace-flags-transact-sql?view=sql-server-ver15
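For reference, turning the flag on for all sessions is a one-liner (see the linked docs for version specifics):

-- Enable trace flag 460 globally (-1 = all sessions); truncation failures
-- are then reported as error 2628, naming the column and offending value.
DBCC TRACEON (460, -1);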
Another situation in which this error may occur is in SQL Server Management Studio. If you have "text" or "ntext" fields in your table, the error can appear no matter what kind of field you are updating (for example a bit or an integer).
It seems that the Studio does not load entire "ntext" fields and also updates ALL fields instead of just the modified one.
To solve the problem, exclude "text" or "ntext" fields from the query in Management Studio.
This error occurs when the length of any of your values is greater than the column length specified in the SQL Server table structure.
To overcome this, either reduce the length of the value or increase the length of the table column.
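Widening a column is a single statement; a sketch reusing the Customers/Phone example from earlier in this thread:

-- Increase the declared length so the incoming values fit.
ALTER TABLE Customers ALTER COLUMN Phone VARCHAR(20);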
If someone is encountering this error in a C# application, I have created a simple way of finding the offending fields by:
Getting the column width of all the columns of the table we're trying to insert into/update. (I'm getting this info directly from the database.)
Comparing the column widths to the width of the values we're trying to insert/update.
Assumptions/Limitations:
The column names of the table in the database match the C# entity fields. For example, if you have a SourceData column in the database, your entity needs a property with the same name:
public class SomeTable
{
    // Other fields
    public string SourceData { get; set; }
}
You're inserting/ updating 1 entity at a time. It'll be clearer in the demo code below. (If you're doing bulk inserts/ updates, you might want to either modify it or use some other solution.)
Step 1:
Get the column width of all the columns directly from the database:
// For this, I took help from the Microsoft docs website:
// https://learn.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlconnection.getschema?view=netframework-4.7.2#System_Data_SqlClient_SqlConnection_GetSchema_System_String_System_String___
private static Dictionary<string, int> GetColumnSizesOfTableFromDatabase(string tableName, string connectionString)
{
    var columnSizes = new Dictionary<string, int>();

    using (var connection = new SqlConnection(connectionString))
    {
        // Connect to the database, then retrieve the schema information.
        connection.Open();

        // You can specify the Catalog, Schema, Table Name, Column Name to get the specified column(s).
        // You can use four restrictions for Column, so you should create a 4-member array.
        String[] columnRestrictions = new String[4];

        // For the array, 0-member represents Catalog; 1-member represents Schema;
        // 2-member represents Table Name; 3-member represents Column Name.
        // Now we specify the Table_Name and Column_Name of the columns we want schema information for.
        columnRestrictions[2] = tableName;

        DataTable allColumnsSchemaTable = connection.GetSchema("Columns", columnRestrictions);

        foreach (DataRow row in allColumnsSchemaTable.Rows)
        {
            var columnName = row.Field<string>("COLUMN_NAME");
            //var dataType = row.Field<string>("DATA_TYPE");
            var characterMaxLength = row.Field<int?>("CHARACTER_MAXIMUM_LENGTH");

            // I'm only capturing columns whose datatype is "varchar" or "char",
            // i.e. their CHARACTER_MAXIMUM_LENGTH won't be null.
            if (characterMaxLength != null)
            {
                columnSizes.Add(columnName, characterMaxLength.Value);
            }
        }

        connection.Close();
    }

    return columnSizes;
}
Step 2:
Compare the column widths with the width of the values we're trying to insert/ update:
public static Dictionary<string, string> FindLongBinaryOrStringFields<T>(T entity, string connectionString)
{
    var tableName = typeof(T).Name;
    Dictionary<string, string> longFields = new Dictionary<string, string>();
    var objectProperties = GetProperties(entity);
    //var fieldNames = objectProperties.Select(p => p.Name).ToList();
    var actualDatabaseColumnSizes = GetColumnSizesOfTableFromDatabase(tableName, connectionString);

    foreach (var dbColumn in actualDatabaseColumnSizes)
    {
        var maxLengthOfThisColumn = dbColumn.Value;
        var currentValueOfThisField = objectProperties.Where(f => f.Name == dbColumn.Key).First()?.GetValue(entity, null)?.ToString();

        if (!string.IsNullOrEmpty(currentValueOfThisField) && currentValueOfThisField.Length > maxLengthOfThisColumn)
        {
            longFields.Add(dbColumn.Key, $"'{dbColumn.Key}' column cannot take the value of '{currentValueOfThisField}' because the max length it can take is {maxLengthOfThisColumn}.");
        }
    }

    return longFields;
}

public static List<PropertyInfo> GetProperties<T>(T entity)
{
    // The DeclaredOnly flag makes sure you only get properties of the object,
    // not from the classes it derives from.
    var properties = entity.GetType()
        .GetProperties(System.Reflection.BindingFlags.Public
            | System.Reflection.BindingFlags.Instance
            | System.Reflection.BindingFlags.DeclaredOnly)
        .ToList();

    return properties;
}
Demo:
Let's say we're trying to insert someTableEntity of SomeTable class that is modeled in our app like so:
public class SomeTable
{
    [Key]
    public long TicketID { get; set; }
    public string SourceData { get; set; }
}
And it's inside our SomeDbContext like so:
public class SomeDbContext : DbContext
{
    public DbSet<SomeTable> SomeTables { get; set; }
}
This table in the DB has the SourceData field defined as varchar(16).
Now we'll try to insert a value that is longer than 16 characters into this field and capture this information:
public void SaveSomeTableEntity()
{
    var connectionString = "server=SERVER_NAME;database=DB_NAME;User ID=SOME_ID;Password=SOME_PASSWORD;Connection Timeout=200";
    using (var context = new SomeDbContext(connectionString))
    {
        var someTableEntity = new SomeTable()
        {
            SourceData = "Blah-Blah-Blah-Blah-Blah-Blah"
        };
        context.SomeTables.Add(someTableEntity);
        try
        {
            context.SaveChanges();
        }
        catch (Exception ex)
        {
            if (ex.GetBaseException().Message == "String or binary data would be truncated.\r\nThe statement has been terminated.")
            {
                var badFieldsReport = "";
                List<string> badFields = new List<string>();
                // YOU GOT YOUR FIELDS RIGHT HERE:
                var longFields = FindLongBinaryOrStringFields(someTableEntity, connectionString);
                foreach (var longField in longFields)
                {
                    badFields.Add(longField.Key);
                    badFieldsReport += longField.Value + "\n";
                }
            }
            else
                throw;
        }
    }
}
The badFieldsReport will have this value:
'SourceData' column cannot take the value of
'Blah-Blah-Blah-Blah-Blah-Blah' because the max length it can take is
16.
Kevin Pope's comment under the accepted answer was what I needed.
The problem, in my case, was that I had triggers defined on my table that would insert update/insert transactions into an audit table, but the audit table had a data type mismatch: a column that was VARCHAR(MAX) in the original table was stored as VARCHAR(1) in the audit table. So my triggers failed whenever I inserted anything longer than VARCHAR(1) in the original table column, and I got this error message.
I used a different tactic: the source fields are allocated 8K in some places, but only about 50-100 characters of that are actually used, so I cut the values down with LEFT() while loading them.
declare @NVPN_list as table (
    nvpn            varchar(50)
   ,nvpn_revision   varchar(5)
   ,nvpn_iteration  INT
   ,mpn_lifecycle   varchar(30)
   ,mfr             varchar(100)
   ,mpn             varchar(50)
   ,mpn_revision    varchar(5)
   ,mpn_iteration   INT
    -- ...
)

INSERT INTO @NVPN_list
SELECT left(nvpn, 50) as nvpn
      ,left(nvpn_revision, 5) as nvpn_revision
      ,nvpn_iteration
      ,left(mpn_lifecycle, 30)
      ,left(mfr, 100)
      ,left(mpn, 50)
      ,left(mpn_revision, 5)
      ,mpn_iteration
      ,left(mfr_order_num, 50)
FROM [DASHBOARD].[dbo].[mpnAttributes] (NOLOCK) mpna
I wanted speed, since I have 1M total records, and load 28K of them.
This error may occur when the field size is smaller than the data you entered.
For example, if you have the data type nvarchar(7) and your value is 'aaaaddddf', then the error shown is:
string or binary data would be truncated
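A minimal repro sketch of that case:

-- nvarchar(7) column, 9-character value: fails with the truncation error.
DECLARE @t TABLE (v NVARCHAR(7));
INSERT INTO @t VALUES (N'aaaaddddf');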
You simply can't beat SQL Server on this.
You can insert into a new table like this:
select foo, bar
into tmp_new_table_to_dispose_later
from my_table
and compare the table definition with the real table you want to insert the data into.
Sometimes it's helpful, sometimes it's not.
If you try inserting into the final/real table from that temporary table, it may just work (because data conversion can behave differently than in SSMS, for example).
Another alternative is to insert the data in chunks: instead of inserting everything at once, insert with TOP 1000 and repeat the process until you find a chunk with an error. At least you then have better visibility into what doesn't fit into the table; see the sketch below.
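A sketch of that chunked approach (OFFSET/FETCH requires SQL Server 2012+; the table names reuse the example above, and my_table and the ORDER BY column are assumptions to adapt to your schema):

-- Copy rows in batches of 1000; bump the OFFSET by 1000 each run.
-- The first failing batch narrows down where the offending values live.
INSERT INTO my_table (foo, bar)
SELECT foo, bar
FROM tmp_new_table_to_dispose_later
ORDER BY foo
OFFSET 0 ROWS FETCH NEXT 1000 ROWS ONLY;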

Transactions in NHibernate - UPDATE then INSERT. What am I doing wrong?

In this sample console app I want to update a row in a table, and then insert another row in the same table.
The table is like this
CREATE TABLE [dbo].[Basket2](
[Id] [int] IDENTITY(1,1) NOT NULL,
[UserId] [int] NULL
) ON [PRIMARY]
CREATE UNIQUE NONCLUSTERED INDEX [IX_Basket] ON [dbo].[Basket2]
(
[UserId] ASC
)
So basically a user cannot have 2 baskets.
For reasons beyond this post, baskets must not be deleted from the table. Therefore, when a user needs a new basket, the old one is just set to a unique number (id * -1).
The following code is a sample app that simulates the flow - and fails
private static void Main(string[] args)
{
    ISessionFactory sessionFactory = CreateSessionFactory();
    int userId = new Random().Next();
    int basketId;

    using (var session = sessionFactory.OpenSession())
    {
        using (var tx = session.BeginTransaction(IsolationLevel.ReadUncommitted))
        {
            var newBasket = new Basket { UserId = userId };
            basketId = (int)session.Save(newBasket);
            tx.Commit();
        }

        using (var tx = session.BeginTransaction(IsolationLevel.ReadUncommitted))
        {
            var basket = session.Get<Basket>(basketId);
            basket.UserId = basket.Id * -1;
            session.Save(basket);
            // comment in this line to make it work:
            //session.Flush();
            var newBasket = new Basket { UserId = userId };
            session.Save(newBasket);
            tx.Commit();
        }
    }
}
The error is:
Unhandled Exception: NHibernate.Exceptions.GenericADOException: could not insert: [ConsoleApplication1.Basket][SQL: INSERT INTO [Basket] (UserId) VALUES (?); select SCOPE_IDENTITY()] ---> System.Data.SqlClient.SqlException: Cannot insert duplicate key row in object 'dbo.Basket' with unique index 'IX_Basket'.
If I Flush the session (the commented-out line) it works, but why is this necessary?
I would prefer not having to Flush my session and to let Commit() handle it.
You don't need to Save / Update / SaveOrUpdate any entities which are already in the session.
But you are reusing the same unique UserId value again, so make sure the session is flushed before the new row is inserted:
using (var tx = session.BeginTransaction(IsolationLevel.ReadUncommitted))
{
    var basket = session.Get<Basket>(basketId);
    basket.UserId = basket.Id * -1;
    // no save needed: the entity is already tracked by the session
    //session.Save(basket);

    // flush the change on the unique field
    session.Flush();

    var newBasket = new Basket { UserId = userId };
    // save the new item, which is not in the session yet
    session.Save(newBasket);
    tx.Commit();
}
This is because you add the same unique value again. Of course you change the existing value beforehand, but that change is not stored to the database until the session is flushed.
The session is flushed when:
you call Flush
before queries (except for Get and Load)
on commit (unless you use your own ADO connection)
It is a common misunderstanding that NH performs the update or insert on the database when you call Save or Update. This is not the case. Inserts and updates are performed when the session is flushed. (There are some exceptions to that, e.g. when using native ids.)