How to inject parameter with EF6 / Code First? - sql

I'm trying to figure out how to inject a parameter into Entity Framework 6 when using MapToStoredProcedures. Is this even possible?
I just want to pass my currently logged in username from the application to the stored procedure, but I can't seem to figure out WHERE EF6 does the actual call.
EDIT : A bit more information
Ok, so WITHOUT MapToStoredProcedures (aka letting EF6 just use tables directly) I can do the following in my overridden SaveChangesAsync method:
For Each Entry As DbEntityEntry In Me.ChangeTracker.Entries().Where(Function(o) o.State = EntityState.Deleted)
    If TypeOf Entry.Entity Is ISoftDelete Then
        'Implements Soft Delete interface, so let's do what needs doing.
        Select Case Entry.Entity.GetType()
            Case GetType(OS)
                Dim _thisOS As OS = TryCast(Entry.Entity, OS)
                Using db As New AppRegistrationContext
                    _thisOS = Await db.OSSet.Include("OSType").FirstOrDefaultAsync(Function(o) o.ID = _thisOS.ID)
                End Using
                If Not _thisOS Is Nothing Then
                    Try
                        Entry.Reference("OSType").CurrentValue = _thisOS.OSType
                    Catch ex As Exception
                        Debug.Print(ex.ToString)
                    End Try
                End If
            Case GetType(Server)
            Case Else
                'Do nothing - only filling in extra information for those that we need to
        End Select
        'Set the archival bits
        Entry.Property("Archive").CurrentValue = True
        Entry.Property("ArchiveDate").CurrentValue = Date.Now
        Entry.Property("ArchiveBy").CurrentValue = HttpContext.Current.User.Identity.Name.ToString()
        'Mark it modified
        Entry.State = EntityState.Modified
    End If
Next
Return Await MyBase.SaveChangesAsync()
Alright, that works great with direct-table manipulation on EF's behalf.
What I want to do instead is handle all of this in stored procedures - but I need to pass HttpContext.Current.User.Identity.Name.ToString() WITH my delete stored procedure to set the ArchiveBy parameter.
Hopefully this better illustrates what I am attempting to do.

Life could not be easier for you. Run something like the following:
In your repository add the following:
public void ExecuteSqlCommand(string sql, params object[] parameters)
{
    DbContext.Database.ExecuteSqlCommand(sql, parameters);
}
and use it just like the following:
public void DoSomething(int officeId)
{
    var sqlParam = new SqlParameter("p0", officeId);
    var parameters = new object[] { sqlParam };
    ((GenericRepository)Repository).ExecuteSqlCommand("EXEC dbo.myProc @p0", parameters);
}
Or simply just call
DbContext.Database.ExecuteSqlCommand
as I showed above based on your needs.
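For the scenario in the question - passing the logged-in user's name to an archiving delete procedure - a direct call might look roughly like this (the procedure name, parameter names and entityId are illustrative assumptions, not taken from the question):
// Hypothetical sketch only: dbo.spr_ArchiveEntity and its parameters are made up.
using (var context = new MyDbContext())
{
    var id = new SqlParameter("@ID", entityId);
    var archiveBy = new SqlParameter("@ArchiveBy", HttpContext.Current.User.Identity.Name);

    // Runs the procedure directly, outside the MapToStoredProcedures pipeline.
    context.Database.ExecuteSqlCommand("EXEC dbo.spr_ArchiveEntity @ID, @ArchiveBy", id, archiveBy);
}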
Update 1: You want a stored procedure to take care of the CRUD business.
Suppose your context is called MyDbContext.
Then declare something like the following in a partial MyDbContext class:
public partial class MyDbContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);

        modelBuilder
            .Entity<SomeCustomEntity>()
            .MapToStoredProcedures(agent =>
            {
                agent.Insert(i => i.HasName("spr_MyInsert"));
                agent.Update(u => u.HasName("spr_MyUpdate"));
                agent.Delete(d => d.HasName("spr_MyDelete"));
            });
    }
}
Now every time you perform a CRUD operation, it will run through the stored procedure you have mapped, and you don't need to worry about passing anything to the stored procedure yourself:
using (var context = new MyDbContext())
{
    context.SomeCustomEntity.Add(new SomeCustomEntity
    {
        Name = "Jack Something",
        Phone = "999"
    });
    context.SaveChanges();
}

Related

Returning distinct data for a dropdown list box with SelectListItem

I have a field in my database with duplicates. I want to use it in a dropdown list, which has to return distinct data.
Here is the method that I created to do this:
public IEnumerable<SelectListItem> GetBranches(string username)
{
    using (var objData = new BranchEntities())
    {
        IEnumerable<SelectListItem> objdataresult = objData.ABC_USER.Select(c => new SelectListItem
        {
            Value = c.BRANCH_CODE.ToString(),
            Text = c.BRANCH_CODE
        }).Distinct(new Reuseablecomp.SelectListItemComparer());
        return objdataresult;
    }
}
Here is the class I am using:
public static class Reuseablecomp
{
    public class SelectListItemComparer : IEqualityComparer<SelectListItem>
    {
        public bool Equals(SelectListItem x, SelectListItem y)
        {
            return x.Text == y.Text && x.Value == y.Value;
        }

        public int GetHashCode(SelectListItem item)
        {
            int hashText = item.Text == null ? 0 : item.Text.GetHashCode();
            int hashValue = item.Value == null ? 0 : item.Value.GetHashCode();
            return hashText ^ hashValue;
        }
    }
}
Nothing is returned and I get the error below. When I try a basic query without Distinct, everything works fine.
{"The operation cannot be completed because the DbContext has been disposed."}
System.Exception {System.InvalidOperationException}
Inner exception = null
How can I return distinct data for my dropdown?
Technically, your problem can be solved simply by appending .ToList() after your Distinct(...) call. The problem is that LINQ queries use deferred execution - they are only evaluated "just in time". In other words, until the actual data the query represents is needed, the query is not actually sent to the database. Calling ToList is one such operation that requires the actual data, and therefore causes the query to be evaluated immediately.
However, the root cause of your problem is that you are doing this within a using statement. When the method exits, the query has not yet been evaluated, but you have now disposed of your context. Therefore, when it comes time to actually evaluate that query, there's no context to do it with and you get that exception. You should really never use a database context in conjunction with using. It's just a recipe for disaster. Your context should ideally be request-scoped and you should use dependency injection to feed it to whatever objects or methods need it.
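A minimal sketch of that first fix - materializing the results while the context is still alive (AsEnumerable() is added here as an assumption, because LINQ to Entities generally cannot translate a custom comparer, so the Distinct runs in memory):
public IEnumerable<SelectListItem> GetBranches(string username)
{
    using (var objData = new BranchEntities())
    {
        return objData.ABC_USER
            .Select(c => new SelectListItem
            {
                Value = c.BRANCH_CODE.ToString(),
                Text = c.BRANCH_CODE
            })
            .AsEnumerable()                                        // switch to LINQ to Objects
            .Distinct(new Reuseablecomp.SelectListItemComparer())  // comparer runs in memory
            .ToList();                                             // evaluated before objData is disposed
    }
}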
Also, for what it's worth, you can do the Distinct inside the query itself and drop the custom IEqualityComparer entirely: project down to the value you actually care about, call Distinct on that, and only then shape it into SelectListItems. For example:
var objdataresult = objData.ABC_USER
    .Select(c => c.BRANCH_CODE)
    .Distinct()
    .Select(code => new SelectListItem
    {
        Value = code,
        Text = code
    });
Order of operations matters here. Putting Distinct inside the query makes it part of the SQL sent to the database, whereas calling it after the results are materialized, as you're doing, runs it on the in-memory collection. The in-memory version then needs custom logic to decide what makes two SelectListItem instances equal, which isn't necessary for the database-side version.

Returning the class which is a foreign key in the database

I want to ask this question because I've searched for a while without finding a concrete answer.
I have made a database and used LINQ2SQL to auto-generate the classes needed.
I have set the serialization mode to unidirectional to make sure the classes are serialized and the data members are generated.
Now, what I want to know is how I can send the references to the other classes (which were generated through LINQ2SQL).
For example, I have a class called Scheduler which references Reservation and Seat, because Reservation and Seat have foreign keys.
You can see the dbml here:
http://imgur.com/rR6OxDi
The dbml file. This is the model of our database
Also you can see that when I run the WCF test client it does not return the objects of Seats and Reservation.
http://imgur.com/brxNBz7
Hopefully you can all help.
UPDATE
Here is the snippet of the code provided by LINQ2SQL.
This is the fields for the scheduler
[global::System.Data.Linq.Mapping.TableAttribute(Name="dbo.Scheduler")]
[global::System.Runtime.Serialization.DataContractAttribute()]
public partial class Scheduler : INotifyPropertyChanging, INotifyPropertyChanged
{
private static PropertyChangingEventArgs emptyChangingEventArgs = new PropertyChangingEventArgs(String.Empty);
private int _SchID;
private System.Nullable<System.DateTime> _Date;
private System.Nullable<System.TimeSpan> _Starttime;
private System.Nullable<int> _MovieID;
private System.Nullable<int> _HallID;
private EntitySet<Seat> _Seats;
private EntitySet<Reservation> _Reservations;
private EntityRef<Hall> _Hall;
private EntityRef<Movie> _Movie;
private bool serializing;
And here is the snippet part of the code where it references to Reservation and Seat:
[global::System.Data.Linq.Mapping.AssociationAttribute(Name="Scheduler_Seat", Storage="_Seats", ThisKey="SchID", OtherKey="SchedulerID")]
[global::System.Runtime.Serialization.DataMemberAttribute(Order=6, EmitDefaultValue=false)]
public EntitySet<Seat> Seats
{
get
{
if ((this.serializing
&& (this._Seats.HasLoadedOrAssignedValues == false)))
{
return null;
}
return this._Seats;
}
set
{
this._Seats.Assign(value);
}
}
[global::System.Data.Linq.Mapping.AssociationAttribute(Name="Scheduler_Reservation", Storage="_Reservations", ThisKey="SchID", OtherKey="SchedulerID")]
[global::System.Runtime.Serialization.DataMemberAttribute(Order=7, EmitDefaultValue=false)]
public EntitySet<Reservation> Reservations
{
get
{
if ((this.serializing
&& (this._Reservations.HasLoadedOrAssignedValues == false)))
{
return null;
}
return this._Reservations;
}
set
{
this._Reservations.Assign(value);
}
}
Update 2
Here is the Reservation class which LINQ2SQL made:
Here are the fields:
[global::System.Data.Linq.Mapping.TableAttribute(Name="dbo.Reservation")]
[global::System.Runtime.Serialization.DataContractAttribute()]
public partial class Reservation : INotifyPropertyChanging, INotifyPropertyChanged
{
private static PropertyChangingEventArgs emptyChangingEventArgs = new PropertyChangingEventArgs(String.Empty);
private int _ResID;
private System.Nullable<int> _CustomerID;
private System.Nullable<int> _SchedulerID;
private string _Row;
private string _Seat;
private EntityRef<Customer> _Customer;
private EntityRef<Scheduler> _Scheduler;
And here is the Scheduler reference part of the class
[global::System.Data.Linq.Mapping.AssociationAttribute(Name="Scheduler_Reservation", Storage="_Scheduler", ThisKey="SchedulerID", OtherKey="SchID", IsForeignKey=true, DeleteRule="SET DEFAULT")]
public Scheduler Scheduler
{
get
{
return this._Scheduler.Entity;
}
set
{
Scheduler previousValue = this._Scheduler.Entity;
if (((previousValue != value)
|| (this._Scheduler.HasLoadedOrAssignedValue == false)))
{
this.SendPropertyChanging();
if ((previousValue != null))
{
this._Scheduler.Entity = null;
previousValue.Reservations.Remove(this);
}
this._Scheduler.Entity = value;
if ((value != null))
{
value.Reservations.Add(this);
this._SchedulerID = value.SchID;
}
else
{
this._SchedulerID = default(Nullable<int>);
}
this.SendPropertyChanged("Scheduler");
}
}
}
All of this should let me get the objects like this:
Scheduler[] schedulers = client.GetAllSchedulers();
Reservation reservation = schedulers[0].Reservations.First();
But I get the error below because WCF is not sending the objects (which you can see in the first picture):
Description: An unhandled exception occurred during the execution of
the current web request. Please review the stack trace for more
information about the error and where it originated in the code.
Exception Details: System.InvalidOperationException: Sequence contains
no elements
UPDATE 3:
Ok so it appears that it works somehow.
I just had to make a join between the Scheduler and Reservation.
Also whenever I debug the code I can see the variables are there. (Due to my reputation I can not post links).
But some of you might recognize the following whenever you try to view a result in debug mode:
"expanding the results view will enumerate the ienumerable c#"
Whenever I do this, it works, but not if I run it in release mode.
It looks like only the object types (Reservation, Seat) have null values.
I'm guessing either you are missing DataContract/DataMember attributes on your complex types, or you might need to include a KnownTypeAttribute.
It'd be easier to tell if you could provide some code.
EDIT
What you are talking about later is deferred loading. See this blog for more information on deferred vs. immediate loading.
When you expand the IEnumerable in debug mode, that is what triggers the request to retrieve/load the objects.
What you probably want is to load your Reservation and Seat objects along with your Scheduler object. Something like the following:
using (YourDatabaseContext database = new YourDatabaseContext())
{
    DataLoadOptions options = new DataLoadOptions();
    options.LoadWith<Scheduler>(sch => sch.Reservations);
    options.LoadWith<Scheduler>(sch => sch.Seats);
    database.LoadOptions = options;
}
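A rough usage sketch under the same assumptions (the Schedulers property name on the context is assumed from the dbml, not shown in the question):
using (YourDatabaseContext database = new YourDatabaseContext())
{
    DataLoadOptions options = new DataLoadOptions();
    options.LoadWith<Scheduler>(sch => sch.Reservations);
    options.LoadWith<Scheduler>(sch => sch.Seats);
    database.LoadOptions = options; // must be assigned before the first query on this context

    // Reservations and Seats are now loaded together with each Scheduler,
    // so they are populated before serialization instead of relying on deferred loading.
    Scheduler[] schedulers = database.Schedulers.ToArray();
}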
See DataLoadOptions for more details.
If you want to understand deferred execution. See this article for more details.
Quote from the article:
By default LINQ uses deferred query execution. This means when you write a LINQ query it doesn't execute. LINQ queries execute when you 'touch' the query results. This means you can change the underlying collection and run the same query subsequent times in the same scope. Touching the data means accessing the results, for instance in a for loop or by using an aggregate operator like Average or AsParallel on the results.
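A tiny LINQ-to-Objects illustration of that point (made-up values):
var numbers = new List<int> { 1, 2, 3 };
var query = numbers.Where(n => n > 1);   // nothing is evaluated yet

numbers.Add(4);                          // change the underlying collection

var result = query.ToList();             // evaluated here: yields 2, 3, 4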

BuildCommands argument in Rob Conery's Massive

I am using Rob Conery's Massive.
The method List<DbCommand> BuildCommands(params object[] things), according to the method's comments, is supposed to take objects that "can be POCOs, Anonymous, NameValueCollections, or Expandos". But this:
var x = new { Id = new Guid("0F66CDCF-C219-4510-B81A-674CE126DD8C"), Name = "x", DisplayName = "y" };
myTable.BuildCommands(x);
Results in an InvalidCastException. This is reasonable, since in Massive.cs a cast from the passed-in anonymous type to an ExpandoObject is attempted.
Why does the comment state you can pass in anything? Is there some other way to build commands from non-ExpandoObjects?
Here's some more code:
public static void ThisFails()
{
    DynamicModel myTable = new DynamicModel("myConnectionString", tableName: "dbo.MyTable", primaryKeyField: "Id");
    var updateMe = new { Id = new Guid("DF9A2F1B-3556-4EAC-BF2B-40E6821F3394"), Name = "abcx", DisplayName = "x" };
    var commands = myTable.BuildCommands(updateMe); // This fails
    myTable.Execute(commands);
}

public static void ThisSucceeds()
{
    DynamicModel myTable = new DynamicModel("myConnectionString", tableName: "dbo.MyTable", primaryKeyField: "Id");
    dynamic updateMe = new ExpandoObject();
    updateMe.Id = new Guid("DF9A2F1B-3556-4EAC-BF2B-40E6821F3394");
    updateMe.Name = "abcx";
    updateMe.DisplayName = "x";
    var commands = myTable.BuildCommands(updateMe);
    myTable.Execute(commands);
}
The code that fails results in:
Unable to cast object of type '<>f__AnonymousType0`3[System.Guid,System.String,System.String]' to type 'System.Collections.Generic.IDictionary`2[System.String,System.Object]'.
It's thrown from the first line in your method
public virtual DbCommand CreateUpdateCommand(dynamic expando, object key)
{
    var settings = (IDictionary<string, object>)expando;
    ...
To me it looks like there should be a call to your extension method ToExpando before CreateUpdateCommand is called?
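For reference, that conversion looks roughly like this - a simplified sketch of what Massive's ToExpando extension does, not the exact source:
public static class ObjectExtensions
{
    // Copy an arbitrary object's public properties onto an ExpandoObject,
    // which does implement IDictionary<string, object>.
    public static ExpandoObject ToExpando(this object o)
    {
        var result = new ExpandoObject();
        var dict = (IDictionary<string, object>)result;
        foreach (var prop in o.GetType().GetProperties())
        {
            dict[prop.Name] = prop.GetValue(o, null);
        }
        return result;
    }
}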
I think this is why people make methods private and public :). You're not supposed to call BuildCommands directly (though the code you have here still should work). I have a feeling there might be a bug that was committed in a patch.
That said - I believe this will work if you call myTable.Update() or myTable.Insert().
This last part answers the question - in terms of a possible "issue" - let's take that to Github.
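For completeness, the working path through the public surface would look something like this (a sketch, assuming Massive's Update(object, key) overload, which converts the anonymous object internally):
DynamicModel myTable = new DynamicModel("myConnectionString", tableName: "dbo.MyTable", primaryKeyField: "Id");

// Update() builds and runs the command itself instead of exposing BuildCommands.
myTable.Update(
    new { Name = "abcx", DisplayName = "x" },
    new Guid("DF9A2F1B-3556-4EAC-BF2B-40E6821F3394"));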

NHibernate database versioning: object level schema and data upgrades

I would like to approach database versioning and automated upgrades in NHibernate from a different direction than most of the strategies proposed out there.
As each object is defined by an XML mapping, I would like to take size and checksum for each mapping file/ configuration and store that in a document database (raven or something) along with a potential custom update script. If no script is found, use the NHibernate DDL generator to update the object schema. This way I can detect changes, and if I need to make DML changes in addition to DDL, or perform a carefully ordered transformation, I can theoretically do so in a controlled, testable manner. This should also maintain a certain level of persistence-layer agnosticism, although I'd imagine the scripts would still necessarily be database system-specific.
The trick would be, generating the "old" mapping files from the database and comparing them to the current mapping files. I don't know if this is possible. I also don't know if I'm missing anything else that would make this strategy prohibitively impractical.
My question, then: how practical is this strategy, and why?
What I did to solve just that problem:
1. Version the database in a table called SchemaVersion.
2. Query the table to see if the schema is up to date (the required version is stored in the DAL); if yes, go to 6.
3. Get the update script with version == versionFromDb from resources/webservices/...
4. Run the script, which also updates SchemaVersion to the new version.
5. Go to 2.
6. Run the app.
To generate the scripts I have used two options:
- support one RDBMS: run SchemaUpdate to export to a file and add DML statements manually
- support multiple RDBMSs: use the NHibernate Table class to generate DDL at runtime to add/alter/delete tables, plus code which uses a session for the DML
Update:
"What method did you use to store the current version?"
A small example - something like this:
public static class Constants
{
    public static readonly Version DatabaseSchemaVersion = new Version(1, 2, 3, 4);
}

public class DBMigration
{
    private IDictionary<Version, Action> _updates = new Dictionary<Version, Action>();
    private Configuration _config;
    private Dialect _dialect;
    private IList<Action<ISession>> _actions = new List<Action<ISession>>(16);
    private string _defaultCatalog;
    private string _defaultSchema;

    private void CreateTable(string name, Action<Table> configuretable)
    {
        var table = new Table(name);
        configuretable(table);
        string createTable = table.SqlCreateString(_dialect, _config.BuildMapping(), _defaultCatalog, _defaultSchema);
        _actions.Add(session => session.CreateSQLQuery(createTable).ExecuteUpdate());
    }

    private void UpdateVersionTo(Version version)
    {
        _actions.Add(session => { session.Get<SchemaVersion>(1).Value = version; session.Flush(); });
    }

    private void WithSession(Action<ISession> action)
    {
        _actions.Add(action);
    }

    public void Execute(Configuration config)
    {
        _actions.Clear();
        _defaultCatalog = config.Properties[NH.Environment.DefaultCatalog];
        _defaultSchema = config.Properties[NH.Environment.DefaultSchema];
        _config = config;
        _dialect = Dialect.GetDialect(config.Properties);

        using (var sf = _config.BuildSessionFactory())
        using (var session = sf.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            Version dbVersion = session.Get<SchemaVersion>(1).Value;
            while (dbVersion < Constants.DatabaseSchemaVersion)
            {
                _actions.Clear();
                _updates[dbVersion].Invoke(); // init migration, TODO: error handling
                foreach (var action in _actions)
                {
                    action.Invoke(session);
                }
                tx.Commit();
                session.Clear();
                dbVersion = session.Get<SchemaVersion>(1).Value;
            }
        }
    }

    public DBMigration()
    {
        _updates.Add(new Version(1, 0, 0, 0), UpdateFromVersion1);
        _updates.Add(new Version(1, 0, 1, 0), UpdateFromVersion1);
        ...
    }

    private void UpdateFromVersion1()
    {
        CreateTable("Users", table => table.AddColumn(...));
        WithSession(session => session.CreateSQLQuery("INSERT INTO ...").ExecuteUpdate());
        UpdateVersionTo(new Version(1, 0, 1, 0));
    }

    ...
}
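A possible way to wire this up at application startup (a sketch; it assumes the NHibernate configuration is read from the usual config file):
// Run any pending migrations once at startup, before the session factory is used for real work.
var cfg = new Configuration().Configure(); // reads hibernate.cfg.xml / app.config
new DBMigration().Execute(cfg);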

Execute a SQL stored procedure before every query generated by EntityFramework

I need to execute a SQL stored procedure every time before I query my ObjectContext. What I want to achieve is setting the CONTEXT_INFO to a value which will be later on used with most of my queries.
Has anyone done that? Is that possible?
[EDIT]
Currently I'm achieving this by opening the connection and executing the stored procedure in my ObjectContext constructor like this:
public partial class MyEntitiesContext
{
    public MyEntitiesContext(int contextInfo) : this()
    {
        if (Connection.State != ConnectionState.Open)
        {
            Connection.Open(); // open connection if not already open
        }

        var connection = ((EntityConnection)Connection).StoreConnection;
        using (var cmd = connection.CreateCommand())
        {
            // run stored procedure to set CONTEXT_INFO to contextInfo
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandText = "[dbo].[SetContextInfo]";
            cmd.Parameters.Add(new SqlParameter("@ci", contextInfo));
            cmd.ExecuteNonQuery();
        }
        // leave the connection open to reuse later
    }
}
Then in my integration test:
[TestMethod]
public void TestMethod1()
{
    using (var ctx = new MyEntitiesContext(1))
    {
        Assert.AreEqual(2, ctx.Roles.ToList().Count);
        Assert.AreEqual(2, ctx.Users.ToList().Count);
    }
}
But this requires me to leave the connection open - this is error prone since I will always need CONTEXT_INFO, and another developer might easily do:
[TestMethod]
public void TestMethod2()
{
    using (var ctx = new MyEntitiesContext(1))
    {
        // do something here
        // ... more here :)

        ctx.Connection.Close(); // then out of the blue comes Close();

        // do something here
        Assert.AreEqual(2, ctx.Roles.ToList().Count);
        Assert.AreEqual(2, ctx.Users.ToList().Count); // this fails since the where
                                                      // clause will be:
                                                      // WHERE ColumnX = CAST(CAST(CONTEXT_INFO() AS BINARY(4)) AS INT)
                                                      // and CONTEXT_INFO is empty - there are no users with ColumnX set to 0
                                                      // while there are 2 users with it set to 1 so this test should pass
    }
}
The above means that I can write the code like in my test and everything is green (YAY!), but then my colleague uses the code from TestMethod2 somewhere in his business logic and it's all f'd up - and nobody knows where and why, since all tests are green :/
[EDIT2]
This blog post certainly does not answer my question, but it actually solves my problem. Maybe NHibernate will be better suited to my purpose :)
We have used this pattern.
But the way we did it was to call the stored procedure as the first operation inside each db context.
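One hedged sketch of making that pattern survive a stray Close(): replace the constructor from the question with one that re-runs the procedure whenever the store connection is (re)opened, using the standard DbConnection.StateChange event (this goes beyond what the answer spells out and is only an assumption about how it could be wired up):
public partial class MyEntitiesContext
{
    private readonly int _contextInfo;

    public MyEntitiesContext(int contextInfo) : this()
    {
        _contextInfo = contextInfo;
        var storeConnection = ((EntityConnection)Connection).StoreConnection;

        // Whenever the underlying connection is opened (or reopened), set CONTEXT_INFO again.
        storeConnection.StateChange += (sender, e) =>
        {
            if (e.CurrentState == ConnectionState.Open)
            {
                SetContextInfo((DbConnection)sender);
            }
        };
    }

    private void SetContextInfo(DbConnection connection)
    {
        using (var cmd = connection.CreateCommand())
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandText = "[dbo].[SetContextInfo]";
            cmd.Parameters.Add(new SqlParameter("@ci", _contextInfo));
            cmd.ExecuteNonQuery();
        }
    }
}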
Finally I found the answer. I can wrap the connection using the EFProvider wrapper toolkit from EFProviderWrappers.
To do this I mostly have to derive from EFProviderWrapperConnection and override the DbConnection.Open() method. I already tried it with the Tracing provider and it works fine. Once I test it with my solution I will add more information.