Upgrading an old project from NHibernate 1.2 to 3.3

We have an old project that was originally written in .NET 2.0 with VS2005 and later moved to VS2008. It uses NHibernate 1.2 for data access. As part of our upgrade we moved to .NET 4.0 and VS2010, but we are having some problems with the move from NHibernate 1.2 to 3.3.
The main problem we are having is querying a table that is linked to another table. The query we are running is as follows:
IQuery query = base.Session.CreateSQLQuery("select t from Transaction t inner join Order o where TransactionDate >= ? && TransactionDate <= ? order by TransactionDate desc");
We get two different errors: either t.Transaction or t.Orders does not exist in the database. We know these tables exist; I have checked multiple times, and I know there is data in there.
I have seen the question "What to be aware of when upgrading from NHibernate 1.2 to 3.2", and it mentions that we may need to modify our mapping files, but it does not say what needs to be changed. Is there something that will look at our mapping files and tell us what needs to change? I will admit this is my first time using NHibernate at the lower level (actually talking to the DB); up till this point, all the database work was already "done". It is only now, with the upgrade, that the problems have occurred.

Since CreateSQLQuery, as the name implies, executes raw SQL, the only explanation I can think of is that you are connecting to the wrong database.
Considering you're using ? for the parameter placeholders, I know you're not using SQL Server... so it's probably a DB that requires data source configuration outside of the connection string.
That opens the option of something I've seen before: 32bit and 64bit drivers using different configuration files/registry keys.
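For what it's worth, the query as posted also mixes HQL-style syntax ("select t from Transaction t", "&&") with raw SQL, so it would fail even against the right database. A rough sketch of the two forms follows; the entity and column names are taken from the question, while the join condition and parameter names are only assumptions about your schema and mappings:

// Raw SQL via CreateSQLQuery: must be valid SQL for your database,
// including an explicit join condition and SQL comparison operators.
var sqlQuery = session.CreateSQLQuery(
        @"select t.* from Transaction t
          inner join Order o on o.TransactionId = t.Id
          where t.TransactionDate >= :from and t.TransactionDate <= :to
          order by t.TransactionDate desc")
    .AddEntity("t", typeof(Transaction))
    .SetDateTime("from", fromDate)
    .SetDateTime("to", toDate);

// Or HQL via CreateQuery: written against the mapped entities, not tables.
var hqlQuery = session.CreateQuery(
        @"from Transaction t
          where t.TransactionDate >= :from and t.TransactionDate <= :to
          order by t.TransactionDate desc")
    .SetDateTime("from", fromDate)
    .SetDateTime("to", toDate);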

Related

ALTER TABLE DROP COLUMN fails in SSDT because of a dependency on a nonclustered index

I have created an SSDT project for a SQL Server 2012 database. Since the database already exists in the SQL Server database engine, I used the import feature to import all the objects into SSDT. Everything worked fine, but I am now facing two problems:
1) One of the tables uses a HIERARCHYID column (Col1) as its data type, and there is a computed column based on that HIERARCHYID column. The definition of the computed column is something like CASE WHEN Col1 = hierarchyid::GetRoot() THEN NULL ELSE someexpression END. After importing the table script into SSDT, "unresolved reference" errors started coming up.
If I change the definition to something like CASE WHEN hierarchyid::GetRoot() = Col1 THEN NULL ELSE someexpression END (note that Col1 is now at the end), it works fine.
2) If I keep the above workaround (i.e. keep Col1 after the =), then at publish time SSDT has to drop the column on the production server and recreate it. Since there is an index that depends on this column, the deployment fails every time with an error like "ALTER TABLE DROP COLUMN failed because another object accesses it". I have no control over how SSDT designs/publishes the script, and if I have to keep an eye on dropping every dependent object myself before publishing the database project, then I think there is little point in using it.
Please suggest how I can resolve this.
Thanks
Atul
I was able to reproduce the reference resolution problem you described. I would suggest submitting that issue to Microsoft via Connect here: https://connect.microsoft.com/SQLServer/feedback/CreateFeedback.aspx
I was not able to reproduce the publish failure. Which version of SSDT does the Visual Studio Help > About dialog show is installed? The most recent version ends with 40403.0. If you're not using the most recent version, I would suggest installing it to see if that fixes the publish failure. You can use Tools > Extensions and Updates to download SSDT updates.
If you do have the most recent version, could you provide an example schema that demonstrates the problem?
Compare your project to a production dacpac and have it generate scripts to make the changes. Then if need be, you can edit the scripts before they get applied to production. This is how my dev teams do it.
I had been running into the same issue for a number of days. After finding your post confirming the issue was in SSDT, I realized it might be fixed in a later version than the one we are currently using: 12.0.50730.0 (VS 2013, the version this project uses).
I also have version 14.0.3917.1 installed from VS 2017. I just attempted the deployment with that version and had no issues, so the solution is to upgrade your SSDT version.
Please ignore that solution; it appears my success last night was anomalous. When attempting to repeat it today after restoring a database with the issue, the deployment again failed to account for at least one index.
EDIT:
I have posted about this on User Voice: https://feedback.azure.com/forums/908035-sql-server/suggestions/33850309-computed-column-indexes-are-ignored-with-dacpac-de
Also, so that this remains at least a workable answer of sorts: the workaround I am implementing is to drop and recreate the missed indexes myself using pre- and post-deployment scripts.
This is not an ideal solution if the dacpac was meant to update various versions of the database that could have different levels of drift from the model. However, it works for us, as we have tight control over all instances and can expect roughly the same delta to be generated each release for each database instance.

Issue with System.Data.OracleClient and ODP.NET 11g used together in a .NET 2.0 web site

In our .NET Framework 2.0 based application we were using System.Data.OracleClient and are now migrating to ODP.NET. The project is too large to migrate in one go, so as of now the application uses both providers, System.Data.OracleClient and ODP.NET.
We are also changing our OS from Windows XP 32-bit to Windows 7 64-bit. While doing so, we observed the following:
1) A query executes in < 1 sec using System.Data.OracleClient and ODP.NET 10g 64-bit (Oracle.DataAccess.dll version 2.102.2.20),
and the same query executes in < 1 sec in Oracle SQL Developer v1.5.
2) However, the same query takes 2-3 minutes to execute using System.Data.OracleClient with ODP.NET 11g 64-bit (Oracle.DataAccess.dll version 2.112.3.0).
There is a remarkable performance degradation in point 2). We have to use System.Data.OracleClient with ODP.NET 11g 64-bit (Oracle.DataAccess.dll version 2.112.3.0) on the Windows 7 64-bit OS, but we cannot live with that degradation, and we cannot convert all of the code that uses System.Data.OracleClient to ODP.NET very quickly.
Can anyone explain why we see such a remarkable performance degradation in point 2), and what we can do to resolve this problem?
Regards
Sanjib Harchowdhury
Adding the following to your config will send ODP.NET tracing info to a log file:
<oracle.dataaccess.client>
  <settings>
    <add name="TraceFileName" value="c:\temp\odpnet-tests.trc"/>
    <add name="TraceLevel" value="63"/>
  </settings>
</oracle.dataaccess.client>
This will probably only be helpful if you can find a large gap in time. Chances are rows are actually coming in, just at a slower pace.
Try adding "enlist=false" to your connection string (a sketch of the change follows the quote below). I don't consider this a solution, since it effectively disables distributed transactions, but it should help you isolate the issue. You can get a little bit more information from an Oracle forums post:
From an ODP perspective, all we can really point out is that the
behavior occurs when OCI_ATR_EXTERNAL_NAME and OCI_ATR_INTERNAL_NAME
are set on the underlying OCI connection (which is what happens when
distrib tx support is enabled).
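If it helps, a minimal sketch of the connection-string change with the ODP.NET provider (the data source and credentials here are placeholders):

using Oracle.DataAccess.Client; // ODP.NET 11g provider

// "Enlist=false" stops this connection from auto-enlisting in distributed
// transactions, which is the scenario described in the quote above.
var conn = new OracleConnection(
    "Data Source=MYDB;User Id=scott;Password=secret;Enlist=false");
conn.Open();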
I'd guess what you're not seeing is that the execution plan is actually different between the ODP.NET call and the SQL Developer call (meaning the actual performance hit is occurring on the server). Have your DBA trace the connection and obtain execution plans from both the ODP.NET call and the call straight from SQL Developer (or with the enlist=false parameter).
If you confirm different execution plans, or if you want to take a preemptive shot in the dark, update the statistics on the related tables. In my case this corrected the issue, indicating that execution plan generation doesn't really follow different rules for the different types of connections, but that the cost analysis is just slightly more pessimistic when a distributed transaction might be involved. Query hints to force an execution plan are also an option, but only as a last resort.
Finally, it could be a network issue. If your ODP.NET install is using a fresh Oracle home (which I would expect, unless you did some post-install configuring), then the tnsnames.ora could be different. Host names might not be fully qualified, creating more delays resolving the server. I'd only expect the first attempt (and not subsequent attempts) to be slow in this case, so I don't think it's the issue, but I thought it should be mentioned.
Please refer to this link, or just replace the 64-bit ODP.NET component with the 32-bit one. Since we are using ASP.NET, we could easily configure our application to run using the 32-bit component on Windows 7 (x64).

Entity Framework migration with already deployed SQL compact database

My situation: EF 4.3, private install of SQL Compact 4, .NET Framework 4, C# WinForms.
Problem: after the application was deployed, the inevitable change request came in that required me to add a new field to the only table in the SQL Compact database. During app install, the SQL Compact db is placed in the user's Application Data folder so that it can be written to successfully. I updated the program and redeployed, but the following behaviors are occurring:
uninstalling the old version of the app does not uninstall the SQL compact db in the user's Application Data folder
installing the updated version of the application does not overwrite the old SQL Compact db in the user's Application Data folder.
Since the new database with the added column doesn't get copied over, the application breaks when the user runs it. My research indicates that I should be using "automatic EF migrations" to solve my problem. In my situation, my desired strategy would be to do a quick check of the db table and add the missing column if necessary.
I am using EF in the database-first way. I am having a hard time finding a good example that fits my situation and my desired strategy for fixing this; a rough sketch of what I have in mind is below.
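Something along these lines is what I have in mind (the table and column names are just placeholders, and it would run once at startup before the EF context is used):

using System.Data.SqlServerCe; // SQL Server Compact 4 provider

static void EnsureColumnExists(string connectionString)
{
    using (var conn = new SqlCeConnection(connectionString))
    {
        conn.Open();

        // Check whether the new column is already there.
        using (var check = new SqlCeCommand(
            @"SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
              WHERE TABLE_NAME = 'MyTable' AND COLUMN_NAME = 'NewColumn'", conn))
        {
            if ((int)check.ExecuteScalar() > 0)
                return; // database already upgraded
        }

        // Add the missing column (type and nullability are placeholders).
        using (var alter = new SqlCeCommand(
            "ALTER TABLE MyTable ADD NewColumn NVARCHAR(100) NULL", conn))
        {
            alter.ExecuteNonQuery();
        }
    }
}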
Any help would be greatly appreciated! :-)
I don't think EF Migrations have anything to do with your problem. According to this, if you follow ClickOnce tutorials step by step, it should work. As suggested by that SO answer, you should check this page. Good luck!

Handling database migrations when using Entity Framework

We are building an app in C# which uses Entity Framework with SQL Server 2008. We design the model using the designer in Visual Studio and auto-generate entities from this.
We're working on version 1.0. When we release 2.0, we'll need to make changes to the model and underlying database structure. I guess we need what's called "database migrations".
Traditionally, I've had a table in the database called something like 'version'. Whenever I've created a new version of my software, I've created database upgrade scripts containing ALTER TABLE statements. My software has checked the version table and run the upgrade scripts needed to upgrade the database to the 'software version'.
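For reference, the hand-rolled approach I am describing looks roughly like this (the SchemaVersion table and the way scripts are stored are just placeholders):

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;

static void UpgradeDatabase(SqlConnection conn, IDictionary<int, string> upgradeScripts)
{
    // Read the version the database is currently at.
    int current;
    using (var readCmd = new SqlCommand("SELECT MAX(Version) FROM SchemaVersion", conn))
        current = (int)readCmd.ExecuteScalar();

    // Run every newer upgrade script, in order, and record each step.
    foreach (var step in upgradeScripts.Where(s => s.Key > current).OrderBy(s => s.Key))
    {
        using (var scriptCmd = new SqlCommand(step.Value, conn))
            scriptCmd.ExecuteNonQuery();

        using (var recordCmd = new SqlCommand(
            "INSERT INTO SchemaVersion (Version) VALUES (@v)", conn))
        {
            recordCmd.Parameters.AddWithValue("@v", step.Key);
            recordCmd.ExecuteNonQuery();
        }
    }
}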
Is there some better way of handling this? It would be nice if I didn't have to write the ALTER TABLE scripts myself and write my own software to upgrade the database structure.
What I used to do when I did model-first was point my model at a database that was purely for schema (so I had a myapp database, which was where my app ran, but my EF4 model was output to a myapp_schema database). When myapp_schema was updated, I used Db Source Tools to generate the update scripts and make the myapp database schema the same as myapp_schema.
Check this post out. It's about CTP4 of EF4, but I think this is what you need.
http://blogs.msdn.com/b/efdesign/archive/2010/10/22/code-first-database-evolution-aka-migrations.aspx
Unfortunately this isn't yet available. CTP5 was released a few days ago, and as far as I know this is not yet included.

FluentNHibernate: Getting the Examples.FirstProject to work

I'm trying to get the most basic of examples to run in FNH. I started with Examples.FirstProject. However, I did not use the SQLite configuration. Instead, I set the configuration to SQL 2005 and created the tables as diagrammed in the example.
When stepping through the code, there appear to be no problems when creating the session factory. However, I receive an error when the code reaches the transaction.Commit line. The error reads:
Could not insert collection: [Examples.FirstProject.Entities.Store.Products#5][SQL:SQL not available]
I'm wondering if there was an issue with the way the tables were created in SQL Server. The IDs were of type int and the names and such were varchar(50). I set the PK of Store, Product, and Employee to its respective ID field. I also made each ID increment automatically by 1 (the Identity Specification column property in SQL Server). StoreProduct is the many-to-many link table and is also there per the diagram.
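For reference, the Store mapping I am running is the one from the sample; reproduced roughly from memory here (so the details may differ slightly from the shipped code), the important bit being that the StoreProduct link table and its key columns have to match it exactly:

// Fluent NHibernate mapping for Store, as in Examples.FirstProject (approximate).
public class StoreMap : ClassMap<Store>
{
    public StoreMap()
    {
        Id(x => x.Id);
        Map(x => x.Name);

        // Many-to-many to Product via the StoreProduct link table.
        HasManyToMany(x => x.Products)
            .Cascade.All()
            .Table("StoreProduct");

        // One-to-many to Employee.
        HasMany(x => x.Staff)
            .Inverse()
            .Cascade.All();
    }
}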
Any help would be appreciated. Thanks.
Have you modified the sample in any way other than changing the database provider? Have you been able to save any entities from the sample (i.e. if you remove the Products code and just save the Store)?
I developed this sample against SQLExpress, so I would imagine there wouldn't be any incompatibilities with SQL 2005.
Also, this question would probably be better suited to the Fluent NHibernate mailing list, as Stack Overflow isn't great for this kind of investigatory posting.
Thank you, James. I'll look at using the Fluent NHibernate mailing list. As a solution to my issue, I simplified the example a bit and found that rebuilding the tables helped. In the previous attempt I built the tables in the database diagram tool, and that is where I think something was a little off. Just now I rebuilt them using the menus and still made the foreign key connections with the Database Diagram section. Worked like a charm. Thanks again and keep up the good work with FNH.
First thing to check: are you sure that you have really created the tables correctly in SQL Server, and that the schema is correct? You can verify this by using SQL Server Management Studio Express to view the database.