Copy data between databases using Entity Framework, LINQ and MVC

I have a problem. I have an old database with some data, and on the other side a new database with a new structure.
I need the best way (ideas) to copy data from one table to another. The problem is that some tables have at most 1,000 records, some 32,000, some 640,000, and copying 5,000+ records already takes a really long time.
Any best practices? Sample code below ...
public ActionResult ImportTable1()
{
    var oldTable1 = context.OLDTABLE.ToList();
    foreach (var item in oldTable1)
    {
        try
        {
            // Look up an existing row in the new database; insert or update accordingly
            var cTable = contextNew.NEWTABLE.Where(p => p.field1 == item.field1).FirstOrDefault();
            if (cTable == null)
            {
                NEWTABLE nTable = new NEWTABLE
                {
                    field1 = item.field1,
                    field2 = item.field2
                };
                contextNew.NEWTABLE.Add(nTable);
            }
            else
            {
                cTable.field1 = item.field1;
                cTable.field2 = item.field2;
                contextNew.Entry(cTable).State = EntityState.Modified;
            }
            contextNew.SaveChanges(); // one round-trip per record: this is what makes it slow
        }
        catch (DbEntityValidationException dbEx)
        {
            foreach (var validationErrors in dbEx.EntityValidationErrors)
            {
                foreach (var validationError in validationErrors.ValidationErrors)
                {
                    _progresLog = "Property: " + validationError.PropertyName + " Error: " + validationError.ErrorMessage;
                }
            }
        }
    }
    return PartialView();
}
... so now the bulk attempt:
public void ExperimentalPartsBulk()
{
    string msisDatabase = ConfigurationManager.ConnectionStrings["old"].ToString();
    string newDatabase = ConfigurationManager.ConnectionStrings["new"].ToString();

    // Read from the source database
    SqlConnection sourceConnection = new SqlConnection(msisDatabase);
    sourceConnection.Open();
    SqlCommand cmd = new SqlCommand("SELECT * FROM ELEMENTS");
    cmd.Connection = sourceConnection;
    SqlDataReader reader = cmd.ExecuteReader();

    // Connect to the destination database and bulk-copy straight from the reader
    SqlConnection destinationConnection = new SqlConnection(newDatabase);
    destinationConnection.Open();
    SqlBulkCopy bulkCopy = new SqlBulkCopy(destinationConnection);
    bulkCopy.DestinationTableName = "ELEMENTSNEW";
    bulkCopy.ColumnMappings.Clear();
    bulkCopy.ColumnMappings.Add("fielString1", "newString1");
    bulkCopy.ColumnMappings.Add("fielString2", "newStrin2");
    bulkCopy.ColumnMappings.Add("fielFloat1", "newINT1");
    bulkCopy.WriteToServer(reader);

    reader.Close();
    sourceConnection.Close();
    destinationConnection.Close();
}
The problem now is the differences between the two tables:
fielString1 can be null, but newString1 can't be;
fielFloat1 is a nullable float, but newINT1 is a non-nullable int.
How do I import with some conditions, or into fields of different types?

Siwek,
Any loop as shown in the first code sample will fail due to performance issues... as you pointed out!
The right approach here is a SQL approach. The idea is to "flush" all data to the new DB. Flush means that ALL records (5,000 or 500,000) are stored to the new DB with one action! Avoid any loops while extracting, filtering, editing and saving the data, because 640,000 loop iterations take a long time...
Bulk copy is one possibility. The issue with bulk copy is that it's hard to filter and edit data in this object.
Use an ADO.NET DataSet to get the data from the old DB, filter it, edit it in memory, and flush it to the new DB. A DataSet takes one step per action (extracting, filtering, editing, etc.); no per-record round-trips.
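A minimal sketch of that DataSet idea, using SqlBulkCopy for the final flush. It is untested and assumes the table and column names from your question, plus conversion rules you will want to adjust (empty string where the NOT NULL column would get a NULL, rounding for the float-to-int column); it needs System.Data, System.Data.SqlClient and System.Configuration:
string oldCs = ConfigurationManager.ConnectionStrings["old"].ToString();
string newCs = ConfigurationManager.ConnectionStrings["new"].ToString();

// One action: pull the source rows into memory
DataTable source = new DataTable();
using (var da = new SqlDataAdapter("SELECT fielString1, fielString2, fielFloat1 FROM ELEMENTS", oldCs))
{
    da.Fill(source);
}

// Shape the data to the new schema entirely in memory (no SQL round-trip per row)
DataTable target = new DataTable();
target.Columns.Add("newString1", typeof(string));
target.Columns.Add("newString2", typeof(string));
target.Columns.Add("newINT1", typeof(int));

foreach (DataRow row in source.Rows)
{
    DataRow t = target.NewRow();
    t["newString1"] = row.IsNull("fielString1") ? "" : row["fielString1"];   // NULL -> NOT NULL default
    t["newString2"] = row.IsNull("fielString2") ? "" : row["fielString2"];
    t["newINT1"] = row.IsNull("fielFloat1") ? 0 : (int)Math.Round((double)row["fielFloat1"]);  // nullable float -> int
    target.Rows.Add(t);
}

// One action: flush everything to the new DB
using (var bulkCopy = new SqlBulkCopy(newCs))
{
    bulkCopy.DestinationTableName = "ELEMENTSNEW";
    bulkCopy.ColumnMappings.Add("newString1", "newString1");
    bulkCopy.ColumnMappings.Add("newString2", "newString2");
    bulkCopy.ColumnMappings.Add("newINT1", "newINT1");
    bulkCopy.WriteToServer(target);
}
The foreach here runs purely in memory, so it is cheap; what kills performance in the first sample is the per-record SaveChanges() round-trip.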
Or try SQL replication. Replication is the SQL Server mechanism for copying data from DB "A", table "oneTable", to another DB "B" with a table "AnotherTable" that has a different schema and rules. Try it; I can say more if you think it's a reasonable solution for you. No code needed: it can be created with a wizard in SQL Server Management Studio and run whenever needed (via a SQL Server Agent job).

You should seriously consider SSIS or bcp. Otherwise you are looking at a scenario where you're pulling data from the source server all the way down to the client box where the .NET code is executing, then pushing all of that data up to the destination server. Think of the bandwidth being consumed. If you can instead do an SSIS export into the destination, at least you'd be eliminating an extra layer of concern.
If you absolutely must pull data down to the client, consider writing the data into bcp-formatted files and then bulk-copying them into the destination server.
I'm pretty sure you'll find that both of these paths are significantly faster than plain old ADO.NET approaches.
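For reference, a hedged sketch of that bcp round-trip, shelled out from C# (server, database, table and file names are placeholders; -n exports in SQL Server's native format, -T uses a trusted connection):
using System.Diagnostics;

static void RunBcp(string arguments)
{
    // Invokes the bcp utility that ships with the SQL Server client tools
    var psi = new ProcessStartInfo("bcp", arguments) { UseShellExecute = false };
    using (Process p = Process.Start(psi))
    {
        p.WaitForExit();
    }
}

// Export from the source server, then import into the destination
RunBcp("OldDb.dbo.ELEMENTS out elements.bcp -n -S oldServer -T");
RunBcp("NewDb.dbo.ELEMENTSNEW in elements.bcp -n -S newServer -T");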

Related

ResultSet coming as empty after executing query

I have a query:
SELECT instance_guid FROM service_instances WHERE service_template_guid='E578F99360A86E4EE043C28DE50A1D84' AND service_family_name='TEST'
Executing this directly returns:
4FEFDE7671A760A8DC8FC63CFBFC8316
F2F9DF641D8E2CACC03175A7A628D51D
Now I am trying the same query from JDBC.
PreparedStatement ps = null;
ResultSet rs = null;
try {
    conn = executionContext.getConnection();
    if (conn != null) {
        ps = (PreparedStatement) conn.prepareStatement(query);
        if (params == null) params = new Object[0];
        for (int i = 0; i < params.length; i++) {
            if (params[i] instanceof Integer) {
                ps.setInt(i + 1, ((Integer) params[i]).intValue());
            } else if (params[i] instanceof java.util.Date) {
                ((PreparedStatement) ps).setDATE(i + 1, new oracle.sql.DATE((new java.sql.Timestamp(((Date) params[i]).getTime()))));
                //ps.setObject(i+1, new oracle.sql.DATE(new Time(((Date)params[i]).getTime())));
            } else {
                if (params[i] == null) params[i] = "";
                ps.setString(i + 1, params[i].toString());
            }
        }
        rs = ps.executeQuery();
I see params[0] = E578F99360A86E4EE043C28DE50A1D84 and params[1] = TEST.
But the ResultSet is empty and I'm not getting the result. I debugged, but it didn't help much.
Can you please let me know if I'm doing this right?
In Java it's defined as below:
final static private String INSTANCE_GUID_BY_TEMPLATE_GUID =
    "SELECT instance_guid FROM service_instances WHERE service_template_guid=? AND service_family_name=? ";
SERVICE_FAMILY_NAME NOT NULL VARCHAR2(256)
SERVICE_TEMPLATE_GUID NOT NULL RAW(16 BYTE)
First and foremost, this breaks every SQL mapping pattern I have ever seen.
String sql = "SELECT instance_guid FROM service_instances WHERE service_template_guid=? AND service_family_name=?";
PreparedStatement ps = null;
ResultSet rs = null;
try {
conn = executionContext.getConnection();
ps = conn.prepareStatement(sql);
ps.setString(1,guid);
ps.setString(2,family);
rs = ps.executeQuery();
while(rs.next(){...}
...
}
You should not be dynamically figuring out the data types as they come in, unless you are trying to write some code to port from database X to database Y.
UPDATE
I see you are using RAW as a datatype, from this post:
As described in the Oracle JDBC Developer's Guide and Reference 11g, when using a RAW column you can treat it as a BINARY or VARBINARY JDBC type, which means you can use the JDBC standard methods getBytes() and setBytes(), which return or accept a byte[]. The other option is to use the Oracle driver specific extensions getRAW() and setRAW(), which return or accept an oracle.sql.RAW. Using these two will require you to unwrap and/or cast to the specific Oracle implementation class.
Further, from a code readability standpoint, your solution makes it painful for a new developer to take over. Far too often I see people making SQL "dynamic" when in reality 99% of the time you don't need that level of dynamic query building. It sounds good in most people's heads, but it just causes pain and suffering in the SDLC.

How to insert image in postgresql?

I am wondering how to insert an image into one of the fields in my PostgreSQL table. I cannot find an appropriate tutorial on this matter. The data type of the field is oid. Has anyone tried this? Thanks!
// All LargeObject API calls must be within a transaction
conn.setAutoCommit(false);
// Get the Large Object Manager to perform operations with
LargeObjectManager lobj = ((org.postgresql.PGConnection)conn).getLargeObjectAPI();
//create a new large object
int oid = lobj.create(LargeObjectManager.READ | LargeObjectManager.WRITE);
//open the large object for write
LargeObject obj = lobj.open(oid, LargeObjectManager.WRITE);
// Now open the file
File file = new File("myimage.gif");
FileInputStream fis = new FileInputStream(file);
// copy the data from the file to the large object
byte buf[] = new byte[2048];
int s, tl = 0;
while ((s = fis.read(buf, 0, 2048)) > 0)
{
    obj.write(buf, 0, s);
    tl += s;
}
// Close the large object
obj.close();
//Now insert the row into imagesLO
PreparedStatement ps = conn.prepareStatement("INSERT INTO imagesLO VALUES (?, ?)");
ps.setString(1, file.getName());
ps.setInt(2, oid);
ps.executeUpdate();
ps.close();
fis.close();
I found that sample code here. A really good set of SQL operation examples.
To quote this site,
PostgreSQL database has a special data type to store binary data
called bytea. This is a non-standard data type. The standard data type
in databases is BLOB.
You need to write a client to read the image file, for example:
File img = new File("woman.jpg");
fin = new FileInputStream(img);
con = DriverManager.getConnection(url, user, password);
pst = con.prepareStatement("INSERT INTO images(data) VALUES(?)");
pst.setBinaryStream(1, fin, (int) img.length());
pst.executeUpdate();
You can use either the bytea type or the large objects facility. Note, however, that depending on your use case it might not be a good idea to put your images in the DB, because of the additional load it can put on the DB server.
Rereading your question, I notice you mentioned you have a field of type oid. If this is an application you are modifying, that suggests it is using large objects. Those objects get an oid, which you then need to store in another table to keep track of them.

SQL ERROR: The connection was not closed. The connection's current state is open

EDIT
After staring at this for 2 days, I do see one issue. I was still opening the original connection, so I changed the inner Open statements to conn2.Open(). Then I changed the second inner query so that all its variables were numbered 3 instead of 2, making them completely different from the previous query. At that point, I got the error:
There is already an open DataReader associated with this Command which must be closed first.
I took out the inner connections, thinking I could use the outer connection, and took out the inner .Close() lines, but that also returned an error saying the connection was not closed.
END EDIT
I am writing a script that updates user information with data pulled from other tables, where a user may appear multiple times for purchases made.
So first, the "outside" SQL query pulls some data from the items table, which contains purchaser information as well as category information. For each item, it is going to check its purchaser's information.
Second, the first "inner" SQL query pulls category information from the user table. Some code is then run to see if the user is already marked as purchasing from the category of the "outside" query. If not, it adds the category to a string variable.
Lastly, the second "inner" SQL query updates the user table for the current user with the new category list.
I've asked about how to perform queries like this before, but was always given a solution of combining the queries into one. That worked for the other queries, but I cannot do that here: I must iterate through each record of the outer query to perform the necessary functions inside of it. My issue is that I get a SQL error saying that the connection was not closed, and it points to the catch of the outer query (for 'conn').
I had tried to give my two inner queries different connection variables (conn2 and conn3) and different strSQL variables, but that didn't help. I'm still a newbie when it comes to SQL Server, having programmed against MySQL until this project. Any help would be greatly appreciated.
using (SqlConnection conn = new SqlConnection(System.Configuration.ConfigurationManager.ConnectionStrings["connectionName"].ToString()))
using (SqlCommand strSQL = conn.CreateCommand())
{
    strSQL.CommandText = "SELECT field FROM itemsTable";
    try
    {
        conn.Open();
        using (SqlDataReader itemReader = strSQL.ExecuteReader())
        {
            while (itemReader.Read())
            {
                // {Do some stuff here}
                using (SqlConnection conn2 = new SqlConnection(System.Configuration.ConfigurationManager.ConnectionStrings["connectionName"].ToString()))
                using (SqlCommand strSQL2 = conn2.CreateCommand())
                {
                    strSQL2.CommandText = "SELECT fields FROM userTable";
                    try
                    {
                        conn2.Open();
                        using (SqlDataReader itemReader2 = strSQL2.ExecuteReader())
                        {
                            while (itemReader2.Read())
                            {
                                // {Do stuff here}
                            }
                            itemReader2.Close();
                        }
                    }
                    catch (Exception e3)
                    {
                        throw new Exception(e3.Message);
                    }
                    finally
                    {
                        conn2.Close();
                    }
                }
                // {Do some more stuff here}
                using (SqlConnection conn2 = new SqlConnection(System.Configuration.ConfigurationManager.ConnectionStrings["connectionName"].ToString()))
                using (SqlCommand strSQL2 = conn2.CreateCommand())
                {
                    strSQL2.CommandText = "UPDATE userTable SET field='value'";
                    try
                    {
                        conn2.Open();
                        strSQL2.ExecuteNonQuery();
                    }
                    catch (Exception e2)
                    {
                        throw new Exception(e2.Message);
                    }
                    finally
                    {
                        conn2.Close();
                    }
                }
                // {Do even more stuff here.}
            }
            itemReader.Close();
        }
    }
    catch (Exception e1)
    {
        throw new Exception(e1.Message);
    }
    finally
    {
        conn.Close();
    }
}
There's some unusual logic going on with conn.Open(). I see it used several times, but I think you mean to use conn2.Open() in the inner using statements after the first call.
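Two common ways around nested-reader errors like this, sketched with the table and column names from the question (both are assumptions on my part, not drop-in fixes): enable MARS (Multiple Active Result Sets) on the connection, or buffer the outer result set into a list so no reader is held open while the inner queries run.
string connectionString = System.Configuration.ConfigurationManager.ConnectionStrings["connectionName"].ToString();

// Option 1: allow several active commands/readers on one connection by
// appending ";MultipleActiveResultSets=True" to the connection string.

// Option 2: buffer the outer query, then loop with no reader held open
List<string> items = new List<string>();
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("SELECT field FROM itemsTable", conn))
{
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
            items.Add(reader.GetString(0));
    } // reader disposed here, the connection is free again
}

foreach (string field in items)
{
    // run the SELECT/UPDATE against userTable for this item, each on a
    // short-lived connection as in the original code
}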

Partial replace on SQL image data column

This question is related to another one I posted earlier.
To recap, I need to fix an issue with an ancient legacy app where people messed up data storage by re-installing the software the wrong way.
The application stores data by saving a record in a SQL DB. Each record holds a reference to a file on disk whose filename auto-increments.
By re-installing the app, the filename auto-increment was reset, so the DB now holds multiple unrelated records which reference the same filename, and I have two directories with files which I obviously cannot merge because of these identical filenames. The files hold no reference to the DB data, so the only course of action that remains is to filter the DB records on date created and try to rename "EXED" to "IXED" or something like that.
The DB is relatively simple with one table containing a column that holds data of type "Image".
An example content of this image data is as follows:
0x3200001000000000000000200B0000000EFF00000300000031340000000070EC0100002C50000004000000C90000005D010000040000007955B63F4D01000004000000F879883E4F01000004000000BC95563E98010000040000009A99993F4A01000004000000000000004B01000004000000000000009101000004000000000000004E01000004000000721C83425101000004000000D841493F5E01000004000000898828414101000004000000F2D2BD3F4201000004000000FCA9B13F40010000040000007574204244010000040000000000204345010000040000007DD950414601000004000000000000004701000004000000000000009201000004000000000000008701000004000000D2DF13426A0100000400000000005C42740100000400000046B68F40500100000400000018E97A3F7901000004000000FB50CF3C7A01000004000000E645703F99010000040000000000E0404C010000040000008716593F8601000004000000000006439A0100000400000000008040700100000400000063D887449E01000004000000493CBA3E9C0100000400000069699D429B01000004000000DD60CA3F9D0100000400000035DE3C44B4010000040000008B5C744433000000040000003D0ABB4134000000040000000AFF7C44350000000400000093CB3942750400000400000054A69F41BA010000040000002635C64173040000040000008367C24100000080690100002B5000003101000032000010000000000000002009000000000000000100000000000000F00000000000000080080100000100000010000000540100000100000021F0AA42270000000200000010000000540100000200000021F0AA42280000000300000010000000540100000300000059C9E6432900000004000000100000005401000004000000637888442A00000005000000100000005401000005000000DFEF87442B00000006000000100000005401000006000000000000002C00000007000000100000005401000007000000000000002D00000008000000100000005401000008000000000000002D000000090000001000000054010000090000002F353D442D0000000A00000010000000540100000A00000035DE3C44340000000B00000010000000540100000B0000008B5C7444240000009D50000010000000CDCCCC3E2C513B41F65D5F3F2C51BB419E50000010000000CCBA2C3FE17C8C411553B13F83F32142000000403700000000FE0000090000004558454434386262002D50000008000000447973706E6F65008E5000000E00000056454C442052414D502033363000000000F000000000
The data is apparently hex; it mostly encodes meaningless crap, but it also holds the names of physical files on the filesystem (towards the end of the data field) that are linked to the SQL records:
??#7???????????EXED48bb?-P??????Dyspnoe??P??????VELD RAMP 360
I'm interested in the EXED part.
There is no clear regularity in the offset at which the filename appears and the filename is of variable length (so I do not know beforehand how long the substring will be).
I can call up all records with SQL like this:
SELECT COUNT(*) as "Number of EXED Files after critical date"
FROM [ZAN].[dbo].[zanu]
WHERE udata is not null
and SUBSTRING(udata, 1 , 2147483647) like '%EXED%'
and [udatum] > 0
and CONVERT(date,[udatum]) > CONVERT(date,'20100629')
What I would like to know now is how to replace this EXED substring with something else (e.g. IXED).
I'm unfamiliar with SQL, and Googling so far has yielded very little information on my options here.
I also have no other info on the original code that generated this data, the data format, the encoding, or anything else.
It's a mess, really.
Any help is welcome!
An update on this:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Data.Linq;
using System.Text;
using System.Data.SqlClient;
using System.Threading;

namespace ZANLinq
{
    class Program
    {
        static void Main(string[] args)
        {
            try
            {
                DataContext zanDB = new DataContext(@"Data Source=.\,1433;database=ZAN;Integrated Security=true");
                string strSQL = @"SELECT
                                    Idnr,
                                    Udatum,
                                    Uzeit,
                                    Unr,
                                    Uart,
                                    Ubediener,
                                    Uzugriff,
                                    Ugr,
                                    Uflags,
                                    Usize,
                                    Udata
                                  FROM Zanu
                                  WHERE (Udata IS NOT null AND SubString(Udata, 1, 2147483647) LIKE '%EXED%')
                                    AND (Idnr = ' 2')";
                var zanQuery = zanDB.ExecuteQuery<Zanu>(strSQL);
                List<Zanu> list = zanQuery.ToList<Zanu>();
                foreach (Zanu zanTofix in list)
                {
                    string strOriginal = ASCIIEncoding.ASCII.GetString(zanTofix.Udata);
                    string strFixed = strOriginal.Replace("EXED", "IXED");
                    zanTofix.Udata = ASCIIEncoding.ASCII.GetBytes(strFixed);
                }
                zanDB.SubmitChanges();
                //Console.WriteLine(zanResults.Count<Zanu>().ToString());
            }
            catch (SqlException e)
            {
                Console.WriteLine(e.Message);
            }
        }
    }
}
It finds the records I'm interested in and I can easily manipulate the data, but the commit doesn't work. I'm stumped: there are no exceptions and no indication the code is wrong.
Anybody have ideas?
UPDATE:
I think the above does not work because my table appears to have a composite PK (I cannot change this).
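For what it's worth, LINQ to SQL only change-tracks ExecuteQuery<T> results when T is a mapped entity with a primary key; without a key mapping, SubmitChanges() has nothing to track and silently does nothing. A hypothetical sketch of an attribute mapping with a composite key (the key columns are my guess; adjust to the real PK):
using System.Data.Linq.Mapping;

[Table(Name = "Zanu")]
public class Zanu
{
    // Every column of the composite PK must be marked IsPrimaryKey = true;
    // otherwise SubmitChanges() ignores modifications to the entity.
    [Column(IsPrimaryKey = true)]
    public string Idnr { get; set; }

    [Column(IsPrimaryKey = true)]
    public int Unr { get; set; }

    // image columns cannot be compared in the UPDATE ... WHERE clause,
    // so exclude them from the optimistic concurrency check.
    [Column(UpdateCheck = UpdateCheck.Never)]
    public byte[] Udata { get; set; }
}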
Since I could not debug this (no info anywhere, no exceptions, just a silent fail of SubmitChanges()) I decided to use another approach and abandon LINQ to SQL altogether:
try
{
    SqlConnection thisConnection = new SqlConnection(@"Network Library=DBMSSOCN;Data Source=.\,1433;database=ZAN;Integrated Security=SSPI");
    DataSet zanDataSet = new DataSet();
    SqlDataAdapter zanDa;
    SqlCommandBuilder zanCmdBuilder;

    thisConnection.Open();

    // Initialize the SqlDataAdapter object by specifying a Select command
    // that retrieves data from the sample table.
    zanDa = new SqlDataAdapter(@"SELECT
                                   Idnr,
                                   Udatum,
                                   Uzeit,
                                   Unr,
                                   Uart,
                                   Ubediener,
                                   Uzugriff,
                                   Ugr,
                                   Uflags,
                                   Usize,
                                   Udata
                                 FROM Zanu
                                 WHERE (Udata IS NOT null AND SubString(Udata, 1, 2147483647) LIKE '%IXED%')
                                   AND (Idnr = ' 2')
                                   AND (Uzeit = '13:21')", thisConnection);

    // Initialize the SqlCommandBuilder object to automatically generate and initialize
    // the UpdateCommand, InsertCommand, and DeleteCommand properties of the SqlDataAdapter.
    zanCmdBuilder = new SqlCommandBuilder(zanDa);

    // Populate the DataSet by running the Fill method of the SqlDataAdapter.
    zanDa.Fill(zanDataSet, "Zanu");
    Console.WriteLine("Records that will be affected: " + zanDataSet.Tables["Zanu"].Rows.Count.ToString());

    foreach (DataRow record in zanDataSet.Tables["Zanu"].Rows)
    {
        string strOriginal = ASCIIEncoding.ASCII.GetString((byte[])record["Udata"]);
        string strFixed = strOriginal.Replace("IXED", "EXED");
        record["Udata"] = ASCIIEncoding.ASCII.GetBytes(strFixed);
        //string strPostMod = ASCIIEncoding.ASCII.GetString((byte[])record["Udata"]);
    }

    zanDa.Update(zanDataSet, "Zanu");
    thisConnection.Close();
    Console.ReadLine();
}
catch (SqlException e)
{
    Console.WriteLine(e.Message);
}
This seems to work, but any input on why the LINQ approach does not work, and on whether my second solution is efficient/optimal, is still very much appreciated.

Problems executing SQL-script using Firebird.NET 2.5 (Error Code = -104)

Sorry for my English, first of all. I have a problem and need help.
I have a simple tool I made myself in C#. The tool connects to a local or remote Firebird server (v2.5) and can create a specified .fdb file (a database) somewhere on the server.
I also have a file with SQL statements (create table, triggers and so on). I want to execute this file after the database is created; executing it will build the structure of the user database: no data, only structure.
But when I try to execute my SQL script, the Firebird server returns a
SQL error code = -104 Token unknown line xxx column xxx.
The line points at a CREATE TABLE statement, for example:
CREATE TABLE tb1
(
col1 INTEGER NOT NULL,
col2 VARCHAR(36)
);
/* This next create statement causes an error */
CREATE TABLE tb2
(
col1 INTEGER NOT NULL,
col2 VARCHAR(36)
);
If I leave only one create statement in the file, all is good... I don't know how well I'm explaining this; in other words, why can't I execute a full script with many create statements in one transaction? Here is my main method which executes the query:
public static string Do(string conString, string query)
{
    using (FbConnection conn = new FbConnection())
    {
        try
        {
            conn.ConnectionString = conString;
            conn.Open();
            FbTransaction trans = conn.BeginTransaction();
            FbCommand cmd = new FbCommand(query, conn, trans);
            cmd.ExecuteNonQuery();
            trans.Commit();
        }
        catch (Exception ex)
        {
            System.Windows.MessageBox.Show(ex.ToString());
            return "Transaction Fail";
        }
    }
    return "Transaction Commited";
}
The query passed in is the contents of my SQL file.
As Victor already stated in his final comment, you can use the FbScript class for batch execution.
I was just confronted with the same task. This question pointed me in the right direction, but I had to do some further digging.
In this example, the source of the statements is an external script file:
private void ExecuteScript(FbConnection myConnection, string scriptPath)
{
    if (!File.Exists(scriptPath))
        throw new FileNotFoundException("Script not found", scriptPath);

    FileInfo file = new FileInfo(scriptPath);
    string script = file.OpenText().ReadToEnd();

    // use FbScript to parse all statements
    FbScript fbs = new FbScript(script);
    fbs.Parse();

    // execute all statements
    FbBatchExecution fbe = new FbBatchExecution(myConnection, fbs);
    fbe.Execute(true);
}
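A minimal usage sketch (the connection string and script path are placeholders; adjust to your server):
using (FbConnection conn = new FbConnection(
    @"DataSource=localhost;Port=3050;Database=C:\data\user.fdb;User=SYSDBA;Password=masterkey"))
{
    conn.Open();
    ExecuteScript(conn, @"C:\scripts\structure.sql"); // runs every parsed statement
}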
This will work fine, but you may wonder why the whole thing isn't surrounded by a transaction. Actually, there is no support for "binding" FbBatchExecution to a transaction directly.
The first thing I tried was this (it will not work):
private void ExecuteScript(FbConnection myConnection, string scriptPath)
{
    using (FbTransaction myTransaction = myConnection.BeginTransaction())
    {
        if (!File.Exists(scriptPath))
            throw new FileNotFoundException("Script not found", scriptPath);

        FileInfo file = new FileInfo(scriptPath);
        string script = file.OpenText().ReadToEnd();

        // use FbScript to parse all statements
        FbScript fbs = new FbScript(script);
        fbs.Parse();

        // execute all statements
        FbBatchExecution fbe = new FbBatchExecution(myConnection, fbs);
        fbe.Execute(true);

        myTransaction.Commit();
    }
}
This will result in an exception stating: "Execute requires the Command object to have a Transaction object when the Connection object assigned to the command is in a pending local transaction. The Transaction property of the Command has not been initialized."
This means nothing more than that the commands executed by FbBatchExecution are not assigned to our local transaction surrounding the code block. What helps here is that FbBatchExecution provides the event CommandExecuting, where we can intercept every command and assign our local transaction, like this:
private void ExecuteScript(FbConnection myConnection, string scriptPath)
{
    using (FbTransaction myTransaction = myConnection.BeginTransaction())
    {
        if (!File.Exists(scriptPath))
            throw new FileNotFoundException("Script not found", scriptPath);

        FileInfo file = new FileInfo(scriptPath);
        string script = file.OpenText().ReadToEnd();

        // use FbScript to parse all statements
        FbScript fbs = new FbScript(script);
        fbs.Parse();

        // execute all statements
        FbBatchExecution fbe = new FbBatchExecution(myConnection, fbs);
        fbe.CommandExecuting += delegate(object sender, CommandExecutingEventArgs args)
        {
            args.SqlCommand.Transaction = myTransaction;
        };
        fbe.Execute(true);

        // myTransaction.Commit();
    }
}
Note that I have commented out the myTransaction.Commit() line. I was a little bit surprised by this behavior, but if you keep that line the transaction will throw an exception stating that it has already been committed. The bool parameter of fbe.Execute(true) is named "autoCommit", but changing it to false seems to have no effect.
I would like some feedback on whether you see any potential issues with assigning the local transaction this way, or whether it has any benefits at all or could just as well be omitted.
The error probably comes from launching two create statements in one batch. Would it work if you broke it into separate queries? Does it work in your SQL tool?