SqlServerCE version 3.5 SP2 - Database file is larger than configured maximum - .net-4.0

Here is some sample code showing how I am creating, connecting to, and working with my database:
string connection = @"Data Source='C:\test.sdf';Max Database Size=4000;"
+ "Max Buffer Size=4096;";
File.Delete(@"C:\test.sdf");
using (var engine = new SqlCeEngine(connection))
{
engine.CreateDatabase();
engine.Compact("Data Source=; Case Sensitive=True; Max Database Size=4000;");
}
using (var dbConn = new SqlCeConnection(connection))
{
// Create tables, indexes, etc, and insert loads of data here
// Somewhere in the loading of data I get
// the "Database file is larger..." exception
}
Here is my question: the database file size at the point of the exception is a mere 368 MB (386,879,488 bytes to be exact, according to the file properties). Do I need to add the Max Database Size string to the Compact statement as well?
Any other ideas on what could be wrong?

The default value for Max Database Size is 256 MB, so yes, you need to add this setting to the connection string if the file size grows beyond that.
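A minimal sketch of that (the path and sizes simply mirror the question's values); the point is that Max Database Size travels with every connection string the engine sees, including the one handed to Compact:
using System.Data.SqlServerCe;

// Any connection string that omits Max Database Size falls back to the 256 MB default.
string connString = @"Data Source='C:\test.sdf';Max Database Size=4000;Max Buffer Size=4096;";

using (var engine = new SqlCeEngine(connString))
{
    engine.CreateDatabase();
    // The empty Data Source compacts into the same file; the size limit is restated here too.
    engine.Compact("Data Source=;Max Database Size=4000;");
}

using (var dbConn = new SqlCeConnection(connString))
{
    dbConn.Open();
    // create tables and bulk-load data as before
}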

As ErikEJ said, this is how the connection string has to look:
"Data Source=MyData.sdf;Max Database Size=256;Persist Security Info=False;"
where you can replace 256 with the size you need.

Related

Exception "Data source name not found" while connecting to an existing DSN

I have created ODBC connections (both 32-bit and 64-bit) with the configuration given below:
Microsoft SQL Server ODBC Driver Version 10.00.14393
Data Source Name: ODBCMSSQL
Data Source Description:
Server: .\SQLEXPRESS
Database: MedicalMarketting
Language: (Default)
Translate Character Data: Yes
Log Long Running Queries: No
Log Driver Statistics: No
Use Regional Settings: No
Prepared Statements Option: Drop temporary procedures on disconnect
Use Failover Server: No
Use ANSI Quoted Identifiers: Yes
Use ANSI Null, Paddings and Warnings: Yes
Data Encryption: No
I want to connect to the local MS SQL server as shown in the code snippet below:
string connectionString = "Data Source=ODBCMSSQL;Initial Catalog=MedicalMarketting;Integrated Security=True";
con = new OdbcConnection(connectionString);
cmd = new OdbcCommand();
cmd.Connection = con;
cmd.CommandType = CommandType.Text;
try
{
this.con.Open();
this.tr = con.BeginTransaction();
this.cmd.Transaction = tr;
}
catch (Exception ex)
{
this.RollBack();
}
This throws an exception with the following error message:
ERROR [IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified
Sorry if this is too basic, but I had to post for a clue because the same configuration works perfectly for other ODBC connections.
I found a quick fix to the problem by changing the connection string to:
string connectionString = "DSN=ODBCMSSQL"; // best practice is to store this in a separate config file.
In fact, the other attributes specified (Initial Catalog, Integrated Security) are not ODBC connection string attributes and are therefore ignored. A full list of ODBC connection string attributes can be found here:
https://msdn.microsoft.com/en-us/library/ee275047(v=bts.10).aspx
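If you also want to name the database and use Windows authentication explicitly, a small sketch using ODBC-style keywords (the DSN and database name come from the question; the rest is illustrative):
using System.Data.Odbc;

// ODBC uses DSN / Database / Trusted_Connection rather than
// Initial Catalog / Integrated Security.
string connectionString = "DSN=ODBCMSSQL;Database=MedicalMarketting;Trusted_Connection=Yes;";

using (var con = new OdbcConnection(connectionString))
using (var cmd = con.CreateCommand())
{
    con.Open();
    cmd.CommandText = "SELECT @@VERSION";
    object version = cmd.ExecuteScalar();
}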

Data is not properly stored to hsqldb when using pooled data source by dbcp

I'm using HSQLDB to create cached tables and indexed tables.
The data is stored at a pretty high frequency, so I need to use a connection pool.
Also, because there is a lot of data, I do not call checkpoint on every commit, but rather expect the data to be flushed after 50,000 rows are inserted.
The thing is that I can see the .data file growing, but when I connect with an HSQLDB client I don't see the tables or the data.
So I ran two simple tests: one inserted a single row and one inserted 60,000 rows into a new table. In both cases I couldn't see the result in any HSQLDB client.
(Note that I use shutdown=true.)
When I add a checkpoint after each commit, it solves the problem.
Specifying in the connection string to use the log also solves the problem (I don't want the log in production, though), as does not using a pooled connection, and, lastly, so does using the pooled data source but explicitly closing it before shutdown.
So I guess that some connections in the connection pool are not being closed, preventing the DB from committing the changes and making them available to the client. But then, why couldn't I see the result even with 60,000 rows?
I would also expect the pool to be closed automatically...
What am I doing wrong? What is happening behind the scenes?
The code to get the data source looks like this:
Class.forName("org.hsqldb.jdbcDriver");
String url = "jdbc:hsqldb:" + m_dbRoot + dbName + "/db" + ";hsqldb.log_data=false;shutdown=true;hsqldb.nio_data_file=false";
ConnectionFactory connectionFactory = new DriverManagerConnectionFactory(url, user, password);
GenericObjectPool connectionPool = new GenericObjectPool();
KeyedObjectPoolFactory stmtPool = new GenericKeyedObjectPoolFactory(null);
new PoolableConnectionFactory(connectionFactory, connectionPool, stmtPool, null, false, true);
DataSource ds = new PoolingDataSource(connectionPool);
And I'm using this pooled data source to create a table:
Connection c = m_dataSource.getConnection();
Statement st = c.createStatement();
String script = String.format("CREATE CACHED TABLE IF NOT EXISTS %s (id %s NOT NULL, entity %s NOT NULL, PRIMARY KEY (id));", m_tableName, m_idGenerator.getIdType(), TABLE_ENTITY_TYPE);
st.execute(script);
st.close();
c.close();
And to insert rows:
Connection c = m_dataSource.getConnection();
c.setAutoCommit(false);
PreparedStatement stmt = c.prepareStatement(m_sqlInsert);
stmt.setObject(1, id);
stmt.setBinaryStream(2, Serializer.Helper.serialize(m_serializer, entity));
stmt.executeUpdate();
c.commit();
stmt.close();
c.close();
The above seems to add data, but it cannot be seen.
When I explicitly called
connectionPool.close();
then and only then could I see the results.
I also tried using JDBCDataSource, and it worked as well.
So what is going on? And what is the right way to do this?
Your method of accessing the database from outside your application process is simply wrong.
Only one Java process is supposed to connect to a file: database.
In order to achieve your aim, launch an HSQLDB server within your application, using exactly the same JDBC URL. Then connect to this server from the external client.
See the Guide:
http://www.hsqldb.org/doc/2.0/guide/listeners-chapt.html#lsc_app_start
Update: The OP commented that the external client was used after the application had stopped. Because you have turned the log off with hsqldb.log_data=false, nothing is persisted permanently. You need to perform an explicit CHECKPOINT or SHUTDOWN when your application completes its work. You cannot rely on shutdown=true at all, even without connection pooling.
See the Guide:
http://www.hsqldb.org/doc/2.0/guide/deployment-chapt.html#dec_bulk_operations

mysql Command timeout error

I am converting my database from SQL Server 2005 to MySQL using ASP.NET MVC.
I have bulk data in SQL Server (400k records), but I am hitting a command timeout (waiting for CommandTimeout) error. From what I found on Google, CommandTimeout can be given 65535 as its highest value, or 0 if it should wait indefinitely.
Neither of these works for me. I have also set ConnectTimeout to 180; should I change that too? If anybody has faced this problem or has confirmed knowledge, please share.
For me, increasing the CommandTimeout fixed the issue.
Code sample:
//time in seconds
int timeOut = 300;
//create command
MySqlCommand myCommand = new MySqlCommand(stringSQL);
//set timeout
myCommand.CommandTimeout = timeOut;
Try sending commands in batches of 100-500 rows; then there should be no need to change the command timeout. Hope it works for you.
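A rough sketch of that batching idea (the connection string, table, columns, and the LoadRowsFromSqlServer helper are made up for illustration; only the chunk-and-commit pattern is the point):
using System.Collections.Generic;
using System.Linq;
using MySql.Data.MySqlClient;

string connectionString = "server=localhost;database=target_db;uid=user;pwd=pass;";
List<(int Id, string Name)> rows = LoadRowsFromSqlServer(); // assumed to exist

const int batchSize = 500;
using (var conn = new MySqlConnection(connectionString))
{
    conn.Open();
    for (int offset = 0; offset < rows.Count; offset += batchSize)
    {
        // Commit in chunks so no single command runs long enough to hit CommandTimeout.
        using (var tx = conn.BeginTransaction())
        using (var cmd = new MySqlCommand(
            "INSERT INTO target_table (id, name) VALUES (@id, @name)", conn, tx))
        {
            cmd.Parameters.Add("@id", MySqlDbType.Int32);
            cmd.Parameters.Add("@name", MySqlDbType.VarChar);
            foreach (var row in rows.Skip(offset).Take(batchSize))
            {
                cmd.Parameters["@id"].Value = row.Id;
                cmd.Parameters["@name"].Value = row.Name;
                cmd.ExecuteNonQuery();
            }
            tx.Commit();
        }
    }
}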

SELECT through oledbcommand in vb.net not picking up recent changes

I'm using the following code to work out the next unique order number in an Access database. serverDB is a System.Data.OleDb.OleDbConnection.
Dim command As New OleDb.OleDbCommand("", serverDB)
command.CommandText = "SELECT max (ORDERNO) FROM WORKORDR"
iOrder = command.ExecuteScalar()
NewOrderNo = (iOrder + 1)
If I subsequently create a WORKORDR (using a different DB connection), the code will not pick up the new "next order number."
e.g.
iFoo = NewOrderNo
CreateNewWorkOrderWithNumber(iFoo)
iFoo2 = NewOrderNo
will return the same value to both iFoo and iFoo2.
If I close and then reopen serverDB as part of the NewOrderNo function, then it works: iFoo and iFoo2 will be correct.
Is there any way to force a System.Data.OleDb.OleDbConnection to refresh the database in this situation without closing and reopening the connection?
e.g. Is there anything equivalent to serverDB.Refresh or serverDB.FlushCache?
How I create the order.
I wondered if this could be caused by not updating my transactions after creating the order. I'm using an XSD for the order creation, and the code I use to create the record is ...
Sub CreateNewWorkOrderWithNumber(ByVal iNewOrder As Integer)
Dim OrderDS As New CNC
Dim OrderAdapter As New CNCTableAdapters.WORKORDRTableAdapter
Dim NewWorkOrder As CNC.WORKORDRRow = OrderDS.WORKORDR.NewWORKORDRRow
NewWorkOrder.ORDERNO = iNewOrder
NewWorkOrder.name = "lots of fields filled in here."
OrderDS.WORKORDR.AddWORKORDRRow(NewWorkOrder)
OrderAdapter.Update(NewWorkOrder)
OrderDS.AcceptChanges()
End Sub
From MSDN:
Microsoft Jet has a read-cache that is updated every PageTimeout milliseconds (default is 5000 ms = 5 seconds). It also has a lazy-write mechanism that operates on a separate thread to main processing and thus writes changes to disk asynchronously. These two mechanisms help boost performance, but in certain situations that require high concurrency, they may create problems.
If you possibly can, just use one connection.
Back in VB6 you could force the connection to refresh itself using ADO. I don't know whether it's possible with VB.NET. My Google-fu seems to be weak today.
You can change the PageTimeout value in the registry, but that will affect all programs on the computer that use the Jet engine (i.e. programmatic use of Access databases).
I always throw away a connection object after I have used it. Thanks to connection pooling, getting a new connection is cheap.
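A small C# sketch of that throwaway-connection pattern (the table and column come from the question; the same API is available from VB.NET):
using System;
using System.Data.OleDb;

static int NextOrderNo(string connectionString)
{
    // Open a fresh connection for each call and dispose it; with connection
    // pooling, opening is cheap.
    using (var conn = new OleDbConnection(connectionString))
    using (var cmd = new OleDbCommand("SELECT MAX(ORDERNO) FROM WORKORDR", conn))
    {
        conn.Open();
        object result = cmd.ExecuteScalar();
        int current = result == DBNull.Value ? 0 : Convert.ToInt32(result);
        return current + 1;
    }
}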

why would a SQLCLR proc run slower than the same code client side

I am writing a stored procedure that, when completed, will be used to scan staging tables for bogus data on a column-by-column basis.
Step one in the exercise was just to scan the table, which is what the code below does. The issue is that this code runs in 5 minutes 45 seconds; however, the same code run as a console app (changing the connection string, of course) runs in about 44 seconds.
using (SqlConnection sqlConnection = new SqlConnection("context connection=true"))
{
sqlConnection.Open();
string sqlText = string.Format("select * from {0}", source_table.Value);
int count = 0;
using (SqlCommand sqlCommand = new SqlCommand(sqlText, sqlConnection))
{
SqlDataReader reader = sqlCommand.ExecuteReader();
while (reader.Read())
count++;
SqlDataRecord record = new SqlDataRecord(new SqlMetaData("rowcount", SqlDbType.Int));
SqlContext.Pipe.SendResultsStart(record);
record.SetInt32(0, count);
SqlContext.Pipe.SendResultsRow(record);
SqlContext.Pipe.SendResultsEnd();
}
}
However, the same code (different connection string, of course) runs in a console app in about 44 seconds, which is closer to what I was expecting on the client side.
What am I missing on the SP side that would cause it to run so slowly?
Please note: I fully understand that if I wanted a count of rows, I should use the count(*) aggregate; that's not the purpose of this exercise.
The type of code you are writing is highly susceptible to SQL injection. Rather than processing the reader like you are, you could just use the RecordsAffected property to find the number of rows in the reader.
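On the injection point specifically, one option is to bracket-quote the table name before concatenating it, the way T-SQL's QUOTENAME does (QuoteIdentifier below is a hypothetical helper, not part of the original procedure):
// Escape closing brackets and wrap the name in brackets so a value such as
// "WORKTABLE]; DROP TABLE x; --" cannot break out of the identifier.
static string QuoteIdentifier(string name)
{
    return "[" + name.Replace("]", "]]") + "]";
}

// Used in place of string.Format when building the query text:
string sqlText = "SELECT * FROM " + QuoteIdentifier(source_table.Value);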
EDIT:
After doing some research, it appears the difference you are seeing is a by-design difference between the context connection and a regular connection. Peter Debetta blogged about this and writes:
"The context connection is written such that it only fetches a row at a time, so for each of the 20 million some odd rows, the code was asking for each row individually. Using a non-context connection, however, it requests 8K worth of rows at a time."
http://sqlblog.com/blogs/peter_debetta/archive/2006/07/21/context-connection-is-slow.aspx
Well, it would seem the answer is in the connection string after all.
context connection=true
versus
server=(local); database=foo; integrated security=true
For some bizarre reason, using the "external" connection the SP runs almost as fast as the console app (still not as fast, mind you: 55 seconds).
Of course, now the assembly has to be deployed as External rather than Safe, and that introduces more frustration.