What's the point of HibernateTemplate's bulkUpdate? - sql

Is HibernateTemplate's bulkUpdate actually doing a bulk update? I looked at the code, and it doesn't seem to be. Or am I missing something?
public int bulkUpdate(final String queryString, final Object... values) throws DataAccessException {
return executeWithNativeSession(new HibernateCallback<Integer>() {
public Integer doInHibernate(Session session) throws HibernateException {
Query queryObject = session.createQuery(queryString);
prepareQuery(queryObject);
if (values != null) {
for (int i = 0; i < values.length; i++) {
queryObject.setParameter(i, values[i]);
}
}
return queryObject.executeUpdate();
}
});
}
whereas JdbcTemplate's batchUpdate (it looks like) really is doing a batch update:
public int[] batchUpdate(final String[] sql) throws DataAccessException {
Assert.notEmpty(sql, "SQL array must not be empty");
if (logger.isDebugEnabled()) {
logger.debug("Executing SQL batch update of " + sql.length + " statements");
}
class BatchUpdateStatementCallback implements StatementCallback<int[]>, SqlProvider {
private String currSql;
public int[] doInStatement(Statement stmt) throws SQLException, DataAccessException {
int[] rowsAffected = new int[sql.length];
if (JdbcUtils.supportsBatchUpdates(stmt.getConnection())) {
for (String sqlStmt : sql) {
this.currSql = sqlStmt;
stmt.addBatch(sqlStmt);
}
rowsAffected = stmt.executeBatch();
}
else {
for (int i = 0; i < sql.length; i++) {
this.currSql = sql[i];
if (!stmt.execute(sql[i])) {
rowsAffected[i] = stmt.getUpdateCount();
}
else {
throw new InvalidDataAccessApiUsageException("Invalid batch SQL statement: " + sql[i]);
}
}
}
return rowsAffected;
}
public String getSql() {
return this.currSql;
}
}
return execute(new BatchUpdateStatementCallback());
}

Yes, it is doing a bulk update. As you can see, the DELETE and INSERT queries executed by the bulkUpdate method can each affect multiple rows; that's why they are called bulk operations. Note the difference from JdbcTemplate's batchUpdate: a bulk operation is a single statement that affects many rows, whereas a batch update sends many statements to the database in one round trip.
The point is to have a handy method that executes an update query and returns the number of rows affected by the bulk operation. Additionally, it wraps exceptions in DataAccessException.
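To make the difference concrete, here is a minimal sketch of both styles (the entity, table, and column names below are illustrative assumptions, not taken from the original code):
// Bulk update: ONE statement that may affect many rows.
// Assumes a configured SessionFactory and a mapped Account entity.
HibernateTemplate hibernate = new HibernateTemplate(sessionFactory);
int deleted = hibernate.bulkUpdate(
        "delete from Account a where a.active = ?", Boolean.FALSE);

// Batch update: MANY statements sent to the database in one JDBC batch.
JdbcTemplate jdbc = new JdbcTemplate(dataSource);
int[] counts = jdbc.batchUpdate(new String[] {
        "update accounts set active = 0 where last_login < '2012-01-01'",
        "delete from audit_log where created < '2012-01-01'"
});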

Related

SCALA: executing complete .sql file

I have .sql script files and need to execute a complete .sql file at once. I have found many related answers where the programmer reads the file and executes the queries one by one.
I need to run the .sql file all at once, like picking up the file and executing it within Scala.
Currently I am doing it with the following code, copied from https://gist.github.com/joe776/831762. My lead asked me to do it in a simple way; instead of executing the script line by line, it should be executed as a .sql file.
import java.io.IOException;
import java.io.LineNumberReader;
import java.io.PrintWriter;
import java.io.Reader;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
public class ScriptRunner {
private static final String DEFAULT_DELIMITER = ";";
private static final String DELIMITER_LINE_REGEX = "(?i)DELIMITER.+";
private static final String DELIMITER_LINE_SPLIT_REGEX = "(?i)DELIMITER";
private final Connection connection;
private final boolean stopOnError;
private final boolean autoCommit;
private PrintWriter logWriter = new PrintWriter(System.out);
private PrintWriter errorLogWriter = new PrintWriter(System.err);
private String delimiter = DEFAULT_DELIMITER;
private boolean fullLineDelimiter = false;
/**
* Default constructor.
*
* @param connection
* @param autoCommit
* @param stopOnError
*/
public ScriptRunner(Connection connection, boolean autoCommit, boolean stopOnError) {
this.connection = connection;
this.autoCommit = autoCommit;
this.stopOnError = stopOnError;
}
/**
* @param delimiter
* @param fullLineDelimiter
*/
public void setDelimiter(String delimiter, boolean fullLineDelimiter) {
this.delimiter = delimiter;
this.fullLineDelimiter = fullLineDelimiter;
}
/**
* Setter for logWriter property.
*
* @param logWriter
* - the new value of the logWriter property
*/
public void setLogWriter(PrintWriter logWriter) {
this.logWriter = logWriter;
}
/**
* Setter for errorLogWriter property.
*
* @param errorLogWriter
* - the new value of the errorLogWriter property
*/
public void setErrorLogWriter(PrintWriter errorLogWriter) {
this.errorLogWriter = errorLogWriter;
}
/**
* Runs an SQL script (read in using the Reader parameter).
*
* @param reader
* - the source of the script
* @throws SQLException
* if any SQL errors occur
* @throws IOException
* if there is an error reading from the Reader
*/
public void runScript(Reader reader) throws IOException, SQLException {
try {
boolean originalAutoCommit = connection.getAutoCommit();
try {
if (originalAutoCommit != autoCommit) {
connection.setAutoCommit(autoCommit);
}
runScript(connection, reader);
} finally {
connection.setAutoCommit(originalAutoCommit);
}
} catch (IOException e) {
throw e;
} catch (SQLException e) {
throw e;
} catch (Exception e) {
throw new RuntimeException("Error running script. Cause: " + e, e);
}
}
/**
* Runs an SQL script (read in using the Reader parameter) using the connection passed in.
*
* @param conn
* - the connection to use for the script
* @param reader
* - the source of the script
* @throws SQLException
* if any SQL errors occur
* @throws IOException
* if there is an error reading from the Reader
*/
private void runScript(Connection conn, Reader reader) throws IOException, SQLException {
StringBuffer command = null;
try {
LineNumberReader lineReader = new LineNumberReader(reader);
String line = null;
while ((line = lineReader.readLine()) != null) {
if (command == null) {
command = new StringBuffer();
}
String trimmedLine = line.trim();
if (trimmedLine.startsWith("--")) {
println(trimmedLine);
} else if (trimmedLine.length() < 1 || trimmedLine.startsWith("//")) {
// Skip blank lines and // comments
} else if (!fullLineDelimiter && trimmedLine.endsWith(getDelimiter())
|| fullLineDelimiter && trimmedLine.equals(getDelimiter())) {
Pattern pattern = Pattern.compile(DELIMITER_LINE_REGEX);
Matcher matcher = pattern.matcher(trimmedLine);
if (matcher.matches()) {
setDelimiter(trimmedLine.split(DELIMITER_LINE_SPLIT_REGEX)[1].trim(),
fullLineDelimiter);
line = lineReader.readLine();
if (line == null) {
break;
}
trimmedLine = line.trim();
}
command.append(line.substring(0, line.lastIndexOf(getDelimiter())));
command.append(" ");
Statement statement = conn.createStatement();
println(command);
boolean hasResults = false;
if (stopOnError) {
hasResults = statement.execute(command.toString());
} else {
try {
statement.execute(command.toString());
} catch (SQLException e) {
e.fillInStackTrace();
printlnError("Error executing: " + command);
printlnError(e);
}
}
if (autoCommit && !conn.getAutoCommit()) {
conn.commit();
}
ResultSet rs = statement.getResultSet();
if (hasResults && rs != null) {
ResultSetMetaData md = rs.getMetaData();
int cols = md.getColumnCount();
for (int i = 0; i < cols; i++) {
String name = md.getColumnLabel(i);
print(name + "\t");
}
println("");
while (rs.next()) {
for (int i = 1; i <= cols; i++) {
String value = rs.getString(i);
print(value + "\t");
}
println("");
}
}
command = null;
try {
if (rs != null) {
rs.close();
}
} catch (Exception e) {
e.printStackTrace();
}
try {
if (statement != null) {
statement.close();
}
} catch (Exception e) {
e.printStackTrace();
// Ignore to workaround a bug in Jakarta DBCP
}
} else {
Pattern pattern = Pattern.compile(DELIMITER_LINE_REGEX);
Matcher matcher = pattern.matcher(trimmedLine);
if (matcher.matches()) {
setDelimiter(trimmedLine.split(DELIMITER_LINE_SPLIT_REGEX)[1].trim(),
fullLineDelimiter);
line = lineReader.readLine();
if (line == null) {
break;
}
trimmedLine = line.trim();
}
command.append(line);
command.append(" ");
}
}
if (!autoCommit) {
conn.commit();
}
} catch (SQLException e) {
e.fillInStackTrace();
printlnError("Error executing: " + command);
printlnError(e);
throw e;
} catch (IOException e) {
e.fillInStackTrace();
printlnError("Error executing: " + command);
printlnError(e);
throw e;
} finally {
conn.rollback();
flush();
}
}
private String getDelimiter() {
return delimiter;
}
private void print(Object o) {
if (logWriter != null) {
logWriter.print(o);
}
}
private void println(Object o) {
if (logWriter != null) {
logWriter.println(o);
}
}
private void printlnError(Object o) {
if (errorLogWriter != null) {
errorLogWriter.println(o);
}
}
private void flush() {
if (logWriter != null) {
logWriter.flush();
}
if (errorLogWriter != null) {
errorLogWriter.flush();
}
}
}
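For reference, a minimal sketch of how this ScriptRunner is driven (the JDBC URL, credentials, and file name are placeholders):
import java.io.FileReader;
import java.sql.Connection;
import java.sql.DriverManager;

public class RunScript {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "password")) {
            // autoCommit = false, stopOnError = true
            ScriptRunner runner = new ScriptRunner(conn, false, true);
            runner.runScript(new FileReader("myfile.sql"));
        }
    }
}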
Why did you tag "scala"?
To execute a whole SQL file at once:
sql < myFile.sql
Edit: as Cyrille Corpet pointed out, you can run this command directly from Scala:
import sys.process._
import java.io.File

// sys.process does not interpret shell redirection, so "<" has to be
// expressed with the #< operator instead:
("sql" #< new File("myfile.sql")).!

SQL query into JTable

I've found one totally working query, which gets columns and their data from an Oracle DB and prints the output to the console.
I've spent 3 hours trying to display this data in a Swing JTable.
When I try to bind the data to a JTable:
jTable1.setModel(new javax.swing.table.DefaultTableModel(
data, header
));
it keeps telling me that the constructor is invalid. That's true, because I need a [] array and a [][] array to make that call. Any ideas how this can be implemented?
Here is the original query:
package com.javacoderanch.example.sql;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
public class MetadataColumnExample {
private static final String DRIVER = "oracle.jdbc.OracleDriver";
private static final String URL = "jdbc:oracle:thin:@//XXX";
private static final String USERNAME = "XXX";
private static final String PASSWORD = "XXX";
public static void main(String[] args) throws Exception {
Connection connection = null;
try {
//
// As the usual ritual, load the driver class and get connection
// from database.
//
Class.forName(DRIVER);
connection = DriverManager.getConnection(URL, USERNAME, PASSWORD);
//
// In the statement below we'll select all records from users table
// and then try to find all the columns it has.
//
Statement statement = connection.createStatement();
ResultSet resultSet = statement.executeQuery("select *\n"
+ "from booking\n"
+ "where TRACKING_NUMBER = 1000001741");
//
// The ResultSetMetaData is where all metadata related information
// for a result set is stored.
//
ResultSetMetaData metadata = resultSet.getMetaData();
int columnCount = metadata.getColumnCount();
//
// To get the column names we do a loop for a number of column count
// returned above. And please remember a JDBC operation is 1-indexed
// so every index begin from 1 not 0 as in array.
//
ArrayList<String> columns = new ArrayList<String>();
for (int i = 1; i <= columnCount; i++) { // <= so the last column is not skipped
String columnName = metadata.getColumnName(i);
columns.add(columnName);
}
//
// Later we use the collected column names to get the value of the
// column it self.
//
while (resultSet.next()) {
for (String columnName : columns) {
String value = resultSet.getString(columnName);
System.out.println(columnName + " = " + value);
}
}
} catch (SQLException e) {
e.printStackTrace();
} finally {
connection.close();
}
}
}
OK, I found a slightly better way of putting an SQL query into a JTable, without even using the ArrayList. Maybe it will be helpful for someone; it's completely working:
import java.awt.*;
import javax.swing.*;
import java.sql.*;
import java.awt.image.BufferedImage;
public class report extends JFrame {
PreparedStatement ps;
Connection con;
ResultSet rs;
Statement st;
JLabel l1;
String bn;
int bid;
Date d1, d2;
int rows = 0;
Object data1[][];
JScrollPane scroller;
JTable table;
public report() {
Container cp = getContentPane();
cp.setLayout(new BorderLayout());
setSize(600, 600);
setLocation(50, 50);
setLayout(new BorderLayout());
setTitle("Library Report");
try {
Class.forName("oracle.jdbc.OracleDriver");
con = DriverManager.getConnection("jdbc:oracle:thin:@XXXXX", "XXXXX", "XXXXX");
} catch (Exception e) {
e.printStackTrace(); // don't swallow connection errors silently
}
try {
/*
* JDBC 2.0 provides a way to retrieve a rowcount from a ResultSet without having to scan through all the rows
* So we add TYPE_SCROLL_INSENSITIVE & CONCUR_READ_ONLY
*/
st = con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY); //Creating Statement Object
} catch (SQLException sqlex) {
System.out.println("!!!###");
}
try {
rs = st.executeQuery("select TRACKING_NUMBER, INCO_TERM_CODE, MODE_OF_TRANSPORT\n"
+ "from salog.booking\n"
+ "where rownum < 5");
// Counting rows
rs.last();
int rows = rs.getRow();
rs.beforeFirst();
System.out.println("cc " + rows);
ResultSetMetaData metaData = rs.getMetaData();
int colummm = metaData.getColumnCount();
System.out.println("colms =" + colummm);
Object[] Colheads = {"BookId", "BookName", "rtyry"};
if (Colheads.length != colummm) {
// System.out.println("EPT!!");
JOptionPane.showMessageDialog(rootPane, "Incorrect Column Headers quantity listed in array! The program will now exit.", "System Error", JOptionPane.ERROR_MESSAGE);
System.exit(0);
}
data1 = new Object[rows][Colheads.length];
for (int i1 = 0; i1 < rows; i1++) {
rs.next();
for (int j1 = 0; j1 < Colheads.length; j1++) {
data1[i1][j1] = rs.getString(j1 + 1);
}
}
JTable table = new JTable(data1, Colheads);
JScrollPane jsp = new JScrollPane(table);
getContentPane().add(jsp);
} catch (Exception e) {
e.printStackTrace(); // don't swallow query errors silently
}
setVisible(true);
setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
}
public static void main(String args[]) {
JFrame frm = new report();
frm.setSize(600, 600);
frm.setLocation(50, 50);
BufferedImage image = null;
frm.setIconImage(image);
frm.setVisible(true);
}
}
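As a side note: instead of counting rows and sizing an Object[][] up front, DefaultTableModel also has a constructor that takes Vectors, so the model can be built in a single pass over the ResultSet. A minimal sketch (the helper class and method names are mine):
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.util.Vector;
import javax.swing.table.DefaultTableModel;

public final class ResultSetModels {
    // Builds a table model straight from a ResultSet; no row count needed.
    public static DefaultTableModel fromResultSet(ResultSet rs) throws SQLException {
        ResultSetMetaData md = rs.getMetaData();
        int cols = md.getColumnCount();
        Vector<String> header = new Vector<String>();
        for (int i = 1; i <= cols; i++) { // JDBC columns are 1-indexed
            header.add(md.getColumnLabel(i));
        }
        Vector<Vector<Object>> data = new Vector<Vector<Object>>();
        while (rs.next()) {
            Vector<Object> row = new Vector<Object>(cols);
            for (int i = 1; i <= cols; i++) {
                row.add(rs.getObject(i));
            }
            data.add(row);
        }
        return new DefaultTableModel(data, header);
    }
}
Usage would then be a one-liner: jTable1.setModel(ResultSetModels.fromResultSet(rs));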

With Lucene 4.3.1, how to get all terms which occur in a sub-range of all docs

Suppose a Lucene index with the fields date and content.
I want to get all term values and their frequencies from docs whose date is yesterday. The date field is a keyword field; the content field is analyzed and indexed.
Please help me with sample code.
My solution source is as follows...
/**
*
*
* @param searcher
* @param fromDateTime
* - yyyymmddhhmmss
* @param toDateTime
* - yyyymmddhhmmss
* @return a query string built from the top-10 terms
*/
static public String top10(IndexSearcher searcher, String fromDateTime,
String toDateTime) {
String top10Query = "";
try {
Query query = new TermRangeQuery("tweetDate", new BytesRef(
fromDateTime), new BytesRef(toDateTime), true, false);
final BitSet bits = new BitSet(searcher.getIndexReader().maxDoc());
searcher.search(query, new Collector() {
private int docBase;
@Override
public void setScorer(Scorer scorer) throws IOException {
}
@Override
public void setNextReader(AtomicReaderContext context)
throws IOException {
this.docBase = context.docBase;
}
@Override
public void collect(int doc) throws IOException {
bits.set(doc + docBase);
}
@Override
public boolean acceptsDocsOutOfOrder() {
return false;
}
});
//
Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_43,
EnglishStopWords.getEnglishStopWords());
//
HashMap<String, Long> wordFrequency = new HashMap<>();
for (int wx = 0; wx < bits.length(); ++wx) {
if (bits.get(wx)) {
Document wd = searcher.doc(wx);
//
TokenStream tokenStream = analyzer.tokenStream("temp",
new StringReader(wd.get("content")));
// OffsetAttribute offsetAttribute = tokenStream
// .addAttribute(OffsetAttribute.class);
CharTermAttribute charTermAttribute = tokenStream
.addAttribute(CharTermAttribute.class);
tokenStream.reset();
while (tokenStream.incrementToken()) {
// int startOffset = offsetAttribute.startOffset();
// int endOffset = offsetAttribute.endOffset();
String term = charTermAttribute.toString();
if (term.length() < 2)
continue;
Long wl;
if ((wl = wordFrequency.get(term)) == null)
wordFrequency.put(term, 1L);
else {
wl += 1;
wordFrequency.put(term, wl);
}
}
tokenStream.end();
tokenStream.close();
}
}
analyzer.close();
// sort
List<String> occurterm = new ArrayList<String>();
for (String ws : wordFrequency.keySet()) {
occurterm.add(String.format("%06d\t%s", wordFrequency.get(ws),
ws));
}
Collections.sort(occurterm, Collections.reverseOrder());
// make query string by top 10 words
int topCount = 10;
for (String ws : occurterm) {
if (topCount-- == 0)
break;
String[] tks = ws.split("\\t");
top10Query += tks[1] + " ";
}
top10Query = top10Query.trim(); // trim() returns a new string; the result must be assigned
} catch (IOException e) {
e.printStackTrace();
}
// return top10 word string
return top10Query;
}
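For reference, a hypothetical call site for the helper above (the index path and date strings are placeholders):
import java.io.File;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

// Open the index and collect the top-10 terms for one day:
Directory dir = FSDirectory.open(new File("/path/to/index"));
IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(dir));
String top10Words = top10(searcher, "20130401000000", "20130402000000");
System.out.println(top10Words);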

How to create a new table in a database from a program that uses Entity Framework?

I have a piece of software out in the field and I need to add a new table to it. I have the table in my Entity Framework model, and new clients get the table. How do I update the others?
ADDED: To make it clear: my development machine and new clients have the table. How to update the older clients' databases is the question.
Since it is in my model, it seems I should just be able to call a create method and everything should happen under the hood.
_context.NewTable.CreateTable();
But I think I will have to write a SQL command string that checks whether the table exists and, if it doesn't, creates it.
IDVisitorEntities _context = new IDVisitorEntities();
String cmd = @"IF NOT EXISTS ( SELECT [name]
FROM sys.tables
WHERE [name] = 'NewTable' )
CREATE TABLE NewTable (
ID int IDENTITY,
NAME VARCHAR(40))";
_context.ExecuteStoreCommand(cmd); // ObjectContext.ExecuteStoreCommand runs raw SQL against the store
I only want to run this once, when the table doesn't exist, so that doesn't solve the problem. I really don't know what to do.
ADDED 5/6/2013
I'm thinking that EF has the property collection for each table, and that might hold a clue. I might have to use ICustomTypeDescriptor ... Does anyone else have any thoughts?
Added 7/15/2013
I started building it at my new job; here is a sample. I need to create a file (or files) with the partial classes and have the abstract class and interface applied to them. It has a long way to go, but this is a start...
namespace DataModelMAXFM
{
public abstract class ATable
{
private ArrayList _columns = new ArrayList();
private ArrayList colsToAdd = new ArrayList();
public ArrayList Columns
{
get
{
return _columns;
}
}
public bool TableCreate(SqlConnection sqlConn)
{
//assuming table needs to be created (already checked)
//get column list
//use for loop to create query string
// run command
ITable thisItable;
if (Columns.Count <= 0) //generate column list if not already created
{
if (!ColumnList())
return false;
}
if (this is ITable)
{
thisItable = (ITable) this;
}
else
{
throw new Exception("");
}
StringBuilder sb = new StringBuilder("CREATE TABLE " + thisItable.GetTableName() + " (");
bool flgFirst = true; // to allow for proper comma placement
foreach (PropertyInfo prop in Columns)
{
String propType = this.GetDataType(prop);
if (propType == String.Empty)//check to make sure datatype found a match, EF to SQL
{
return false;
}
if (!flgFirst)
{
sb.Append(", ");
}
else
{
flgFirst = false;
}
sb.Append(prop.Name + " " + propType);
}
// add right parentheses
sb.Append(")");
//now run query created above
SqlCommand com;
try
{
com = new SqlCommand(sb.ToString(), sqlConn);
com.ExecuteNonQuery();
}
catch (Exception e)
{
Console.WriteLine("TableCreate e:" + e.ToString());
return false;
}
return true;
}
public bool TableExists(SqlConnection sqlConn)
{
SqlDataReader sdr = null;
SqlCommand com;
ITable thisItable;
try
{
//create and execute command
if (this is ITable)
thisItable = (ITable)this;
else
{
throw new Exception("");
}
com = new SqlCommand("Select * from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = " + thisItable.GetTableName(), sqlConn);
sdr = com.ExecuteReader();
if (!sdr.HasRows)//ie table does not exist
{
return false;
}
}
catch (Exception e)
{
Console.WriteLine("TableCreate e:" + e.ToString());
return false;
}
//close datareader
try
{
sdr.Close();
}
catch (Exception e)
{
Console.WriteLine("close sqldatareader TableExists e: " + e.ToString());
}
return true;
}
public bool ColumnList()
{
bool flgListCreated = false;
PropertyInfo[] propList = this.GetType().GetProperties(); // reflect over the runtime entity type, not a hardcoded one
foreach (PropertyInfo prop in propList)
{
if (prop.CanRead && prop.CanWrite)
{
MethodInfo mget = prop.GetGetMethod(false);
MethodInfo mset = prop.GetSetMethod(false);
// Get and set methods have to be public
if (mget == null)
{
continue;
}
if (mset == null)
{
continue;
}
Columns.Add(prop);
if (!flgListCreated)
{
flgListCreated = true;
}
}
}
return flgListCreated;
}
public bool ColumnsExist(SqlConnection sqlConn)
{
ITable thisItable;
if (Columns.Count <= 0)
{
if (!ColumnList())
return false;
}
//2013-07-10 create basic connection and data reader
if (this is ITable)
thisItable = (ITable)this;
else
{
throw new Exception("");
}
SqlDataReader sdr = null;
SqlCommand com;
foreach (PropertyInfo prop in Columns)
{
try
{
//create and execute command
com = new SqlCommand("Select * from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = " + thisItable.GetTableName() + " and COLUMN_NAME = " + prop.Name, sqlConn);
sdr = com.ExecuteReader();
//if no rows returned to datareader == column does not exist, add to ArrayList of columns to add
if (!sdr.HasRows)
colsToAdd.Add(prop);
}
catch (Exception e)
{
Console.WriteLine(e.ToString());
return false;
}
}
//close datareader
try
{
sdr.Close();
}
catch (Exception e)
{
Console.WriteLine("close sqldatareader ColumnsExist e: " + e.ToString());
}
if (colsToAdd.Count == 0)
return false;
//returns true only if method worked and found columns to add to DB
return true;
}
public bool ColumnsCreate(SqlConnection sqlConn)
{
ITable thisItable;
StringBuilder sb = new StringBuilder();
if (colsToAdd.Count <= 0)
{
if (!ColumnsExist(sqlConn)) //2013-07-08 - MAXIMUS\58398(BJH) //if no columns, attempt to create list
return false; // if Column list was not created, return false
}
// add a array of the alter table
if (this is ITable)
thisItable = (ITable)this;
else
{
throw new Exception();
}
sb.Append("ALTER TABLE " + thisItable.GetTableName() + " ADD ( ");
bool flgFirst = true; // allows for no leading comma on first entry
String propType;
foreach (PropertyInfo prop in colsToAdd)
{
//make sure SQL datatype can be determined from EF data
propType = this.GetDataType(prop);
if (propType == String.Empty)
throw new Exception("no datatype match found " + prop.Name + " " + prop.PropertyType.ToString());
if (!flgFirst)
{
sb.Append(", ");
}
else
{
flgFirst = false;
}
sb.Append(prop.Name + " " + propType);
}
sb.Append(" )");
SqlCommand com;
try
{
com = new SqlCommand(sb.ToString(), sqlConn);
com.ExecuteNonQuery();
}
catch (Exception e)
{
Console.WriteLine(e.ToString());
return false;
}
return true;
}
public bool ColumnsUpdate(SqlConnection sqlConn)
{
if (ColumnsExist(sqlConn))
return ColumnsCreate(sqlConn);
else
return false;
}
//method to convert from EF to SQL datatypes, see noted issues
public String GetDataType(PropertyInfo pi)
{
String s = "";
String pistr = pi.PropertyType.Name; // short type name ("Int32", "Byte[]", ...) to match the cases below
switch (pistr)
{
case "Byte[]":
s = "binary";
break;
case "Boolean":
s = "bit";
break;
case "String Char[]": // also maps to other options such as nchar, ntext, nvarchar, text, varchar
s = "char";
break;
case "DateTime":
s = "datetime";
break;
case "DateTimeOffset":
s = "datetimeoffset";
break;
case "Decimal":
s = "decimal";
break;
case "Double":
s = "float";
break;
case "Int16":
s = "smallint";
break;
case "Int32":
s = "int";
break;
case "Int64":
s = "bigint";
break;
case "Single":
s = "real";
break;
case "TimeSpan":
s = "time";
break;
case "Byte":
s = "tinyint";
break;
case "Guid":
s = "uniqueidentifier";
break;
case "Xml":
s = "xml";
break;
default:
Console.WriteLine("No datatype match found for " + pi.ToString() + " " + pi.PropertyType.ToString());
return String.Empty;
}
return s;
}
}
public interface ITable
{
int ID
{
get;
}
bool TableUpdate(SqlConnection sqlConn);
bool TableStoredProceduresCreate(SqlConnection sqlConn);
bool TableStoredProceduresDrop(SqlConnection sqlConn);
bool TableCreateTriggers(SqlConnection sqlConn);
bool TableCreateViews(SqlConnection sqlConn);
DataTable GetDataTable();
DateTime dtTableImplemented
{
get;
}
String GetTableName();
}
}
How do I update the others?
And how do you upgrade the others' code to require that table? Probably by using some patch or upgrade package, and that package should create your table as well.
Since it is in my model, it seems I should just be able to call a create method and everything should happen under the hood.
No. EF itself has nothing to modify the database schema (except creating an SQL script for creating the whole database). What you are looking for is possible with EF Migrations, but that is a feature of EF 4.3 and newer when using code first and the DbContext API.

NHibernate does not seem to be doing bulk inserting into PostgreSQL

I am interfacing with a PostgreSQL database with NHibernate.
Background
I made some simple tests... it seems it's taking 2 seconds to persist 300 records.
I have a Perl program with identical functionality that issues direct SQL instead; it takes only 70% of the time.
I am not sure if this is expected. I thought C#/NHibernate would be faster, or at least on par.
Questions
One of my observations is that (with show_sql turned on) NHibernate issues a few hundred individual INSERTs instead of one bulk INSERT that takes care of multiple rows. Note that I am assigning the primary key myself, not using the "native" generator.
Is that expected? Is there any way I could make it issue bulk INSERT statements instead? It seems to me that this could be one area where I could speed up performance.
As stachu correctly found out: NHibernate does not have a *BatchingBatcher(Factory) for PostgreSQL (Npgsql).
As stachu asks: did anybody manage to force NHibernate to do batch inserts to PostgreSQL?
I wrote a Batcher that doesn't use any Npgsql batching stuff, but instead manipulates the SQL string "oldschool style" (INSERT INTO [..] VALUES (...),(...), ...):
using System;
using System.Collections;
using System.Data;
using System.Diagnostics;
using System.Text;
using Npgsql;
namespace NHibernate.AdoNet
{
public class PostgresClientBatchingBatcherFactory : IBatcherFactory
{
public virtual IBatcher CreateBatcher(ConnectionManager connectionManager, IInterceptor interceptor)
{
return new PostgresClientBatchingBatcher(connectionManager, interceptor);
}
}
/// <summary>
/// Summary description for PostgresClientBatchingBatcher.
/// </summary>
public class PostgresClientBatchingBatcher : AbstractBatcher
{
private int batchSize;
private int countOfCommands = 0;
private int totalExpectedRowsAffected;
private StringBuilder sbBatchCommand;
private int m_ParameterCounter;
private IDbCommand currentBatch;
public PostgresClientBatchingBatcher(ConnectionManager connectionManager, IInterceptor interceptor)
: base(connectionManager, interceptor)
{
batchSize = Factory.Settings.AdoBatchSize;
}
private string NextParam()
{
return ":p" + m_ParameterCounter++;
}
public override void AddToBatch(IExpectation expectation)
{
// Execute directly (non-batched) when the expectation cannot be batched
// or the statement is not an INSERT ... VALUES we know how to merge.
if (!expectation.CanBeBatched || !(CurrentCommand.CommandText.StartsWith("INSERT INTO") && CurrentCommand.CommandText.Contains("VALUES")))
{
//NonBatching behavior
IDbCommand cmd = CurrentCommand;
LogCommand(CurrentCommand);
int rowCount = ExecuteNonQuery(cmd);
expectation.VerifyOutcomeNonBatched(rowCount, cmd);
currentBatch = null;
return;
}
totalExpectedRowsAffected += expectation.ExpectedRowCount;
log.Info("Adding to batch");
int len = CurrentCommand.CommandText.Length;
int idx = CurrentCommand.CommandText.IndexOf("VALUES");
int endidx = idx + "VALUES".Length + 2;
if (currentBatch == null)
{
// begin new batch.
currentBatch = new NpgsqlCommand();
sbBatchCommand = new StringBuilder();
m_ParameterCounter = 0;
string preCommand = CurrentCommand.CommandText.Substring(0, endidx);
sbBatchCommand.Append(preCommand);
}
else
{
//only append Values
sbBatchCommand.Append(", (");
}
//append values from CurrentCommand to sbBatchCommand
string values = CurrentCommand.CommandText.Substring(endidx, len - endidx - 1);
//get all values
string[] split = values.Split(',');
ArrayList paramName = new ArrayList(split.Length);
for (int i = 0; i < split.Length; i++ )
{
if (i != 0)
sbBatchCommand.Append(", ");
string param = null;
if (split[i].StartsWith(":")) //first named parameter
{
param = NextParam();
paramName.Add(param);
}
else if(split[i].StartsWith(" :")) //other named parameter
{
param = NextParam();
paramName.Add(param);
}
else if (split[i].StartsWith(" ")) //other fix parameter
{
param = split[i].Substring(1, split[i].Length-1);
}
else
{
param = split[i]; //first fix parameter
}
sbBatchCommand.Append(param);
}
sbBatchCommand.Append(")");
//rename & copy parameters from CurrentCommand to currentBatch
int iParam = 0;
foreach (NpgsqlParameter param in CurrentCommand.Parameters)
{
param.ParameterName = (string)paramName[iParam++];
NpgsqlParameter newParam = /*Clone()*/new NpgsqlParameter(param.ParameterName, param.NpgsqlDbType, param.Size, param.SourceColumn, param.Direction, param.IsNullable, param.Precision, param.Scale, param.SourceVersion, param.Value);
currentBatch.Parameters.Add(newParam);
}
countOfCommands++;
//check for flush
if (countOfCommands >= batchSize)
{
DoExecuteBatch(currentBatch);
}
}
protected override void DoExecuteBatch(IDbCommand ps)
{
if (currentBatch != null)
{
//Batch command now needs its terminator
sbBatchCommand.Append(";");
countOfCommands = 0;
log.Info("Executing batch");
CheckReaders();
//set prepared batchCommandText
string commandText = sbBatchCommand.ToString();
currentBatch.CommandText = commandText;
LogCommand(currentBatch);
Prepare(currentBatch);
int rowsAffected = 0;
try
{
rowsAffected = currentBatch.ExecuteNonQuery();
}
catch (Exception e)
{
if(Debugger.IsAttached)
Debugger.Break();
throw;
}
Expectations.VerifyOutcomeBatched(totalExpectedRowsAffected, rowsAffected);
totalExpectedRowsAffected = 0;
currentBatch = null;
sbBatchCommand = null;
m_ParameterCounter = 0;
}
}
protected override int CountOfStatementsInCurrentBatch
{
get { return countOfCommands; }
}
public override int BatchSize
{
get { return batchSize; }
set { batchSize = value; }
}
}
}
I also found that NHibernate is not doing batch inserts into PostgreSQL.
I identified two possible reasons:
1) the Npgsql driver does not support batch inserts/updates (see forum)
2) NHibernate does not have a *BatchingBatcher(Factory) for PostgreSQL (Npgsql). I tried using the Devart dotConnect driver with NHibernate (I wrote a custom NHibernate driver for it), but it still did not work.
I suppose this driver should also implement the IEmbeddedBatcherFactoryProvider interface, but that seems non-trivial to me (using the one for Oracle did not work ;) )
Did anybody manage to force NHibernate to do batch inserts to PostgreSQL, or can you confirm my conclusion?