I have written the following tests to compare the performance of LINQ to SQL and NHibernate, and I find the results somewhat strange. The mappings are straightforward and identical for both, and both run against a live DB. I'm not deleting Campaigns in the LINQ case, but that shouldn't affect performance by more than 10 ms.
Linq:
[Test]
public void Test1000ReadsWritesToAgentStateLinqPrecompiled()
{
    Stopwatch sw = new Stopwatch();
    Stopwatch swIn = new Stopwatch();
    sw.Start();
    for (int i = 0; i < 1000; i++)
    {
        swIn.Reset();
        swIn.Start();
        ReadWriteAndDeleteAgentStateWithLinqPrecompiled();
        swIn.Stop();
        Console.WriteLine("Run ReadWriteAndDeleteAgentState: " + swIn.ElapsedMilliseconds + " ms");
    }
    sw.Stop();
    Console.WriteLine("Total Time: " + sw.ElapsedMilliseconds + " ms");
    Console.WriteLine("Average time to execute queries: " + sw.ElapsedMilliseconds / 1000 + " ms");
}
private static readonly Func<AgentDesktop3DataContext, int, EntityModel.CampaignDetail>
    GetCampaignById =
        CompiledQuery.Compile<AgentDesktop3DataContext, int, EntityModel.CampaignDetail>(
            (ctx, sessionId) => (from cd in ctx.CampaignDetails
                                 join a in ctx.AgentCampaigns on cd.CampaignDetailId equals a.CampaignDetailId
                                 where a.AgentStateId == sessionId
                                 select cd).FirstOrDefault());
private void ReadWriteAndDeleteAgentStateWithLinqPrecompiled()
{
    int id = 0;
    using (var ctx = new AgentDesktop3DataContext())
    {
        EntityModel.AgentState agentState = new EntityModel.AgentState();
        var campaign = new EntityModel.CampaignDetail { CampaignName = "Test" };
        var campaignDisposition = new EntityModel.CampaignDisposition { Code = "123" };
        campaignDisposition.Description = "abc";
        campaign.CampaignDispositions.Add(campaignDisposition);
        agentState.CallState = 3;
        campaign.AgentCampaigns.Add(new AgentCampaign
        {
            AgentState = agentState
        });
        ctx.CampaignDetails.InsertOnSubmit(campaign);
        ctx.AgentStates.InsertOnSubmit(agentState);
        ctx.SubmitChanges();
        id = agentState.AgentStateId;
    }
    using (var ctx = new AgentDesktop3DataContext())
    {
        var dbAgentState = ctx.GetAgentStateById(id);
        Assert.IsNotNull(dbAgentState);
        Assert.AreEqual(dbAgentState.CallState, 3);
        var campaignDetails = GetCampaignById(ctx, id);
        Assert.AreEqual(campaignDetails.CampaignDispositions[0].Description, "abc");
    }
    using (var ctx = new AgentDesktop3DataContext())
    {
        ctx.DeleteSessionById(id);
    }
}
NHibernate (the loop is the same):
private void ReadWriteAndDeleteAgentState()
{
    var id = WriteAgentState().Id;
    StartNewTransaction();
    var dbAgentState = agentStateRepository.Get(id);
    Assert.IsNotNull(dbAgentState);
    Assert.AreEqual(dbAgentState.CallState, 3);
    Assert.AreEqual(dbAgentState.Campaigns[0].Dispositions[0].Description, "abc");
    var campaignId = dbAgentState.Campaigns[0].Id;
    agentStateRepository.Delete(dbAgentState);
    NHibernateSession.Current.Transaction.Commit();
    Cleanup(campaignId);
    NHibernateSession.Current.BeginTransaction();
}
Results:
NHibernate:
Total Time: 9469 ms
Average time to execute 13 queries: 9 ms
Linq:
Total Time: 127200 ms
Average time to execute 13 queries: 127 ms
Linq lost by a factor of 13.5! Even with precompiled queries (both read queries are precompiled).
This can't be right. I expected NHibernate to be faster, but this is just too big a difference, considering the mappings are identical and NHibernate actually executes more queries against the DB.
Update. I have refactored a project to use NHibernate instead of Linq2Sql, and the performance gain there is a lot smaller (about 20-30%) than in this test, which runs on the same mappings. Does anyone have some real-world examples of their own?
Run a profiler, both on the .NET code and on the SQL Server database, and identify which SQL statements are being run under the covers in each scenario. Where is the time being lost for LINQ to SQL? If the underlying SQL statements are different, why? It's very likely you can tweak both ORMs to be faster; they should be in the same ballpark, performance-wise, for simple tests like this. It feels like a configuration problem.
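One cheap way to see what LINQ to SQL actually sends is the DataContext.Log property, which accepts any TextWriter. A minimal sketch using the context from the question:

using (var ctx = new AgentDesktop3DataContext())
{
    ctx.Log = Console.Out;                          // echo every generated SQL statement
    var campaignDetails = GetCampaignById(ctx, 42); // 42: any existing AgentStateId
}

On the NHibernate side, setting show_sql to true in the configuration produces the equivalent trace, so the two statement streams can be compared directly.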
_context.Update(v);
_context.SaveChanges();
When I use this code, SQL Server adds a new record instead of updating the existing one.
[HttpPost]
public IActionResult PageVote(List<string> Sar)
{
    string name_voter = ViewBag.getValue = TempData["Namevalue"];
    int count = 0;
    foreach (var item in Sar)
    {
        count = count + 1;
    }
    if (count == 6)
    {
        Vote v = new Vote()
        {
            VoteSarparast1 = Sar[0],
            VoteSarparast2 = Sar[1],
            VoteSarparast3 = Sar[2],
            VoteSarparast4 = Sar[3],
            VoteSarparast5 = Sar[4],
            VoteSarparast6 = Sar[5],
        };
        var voter = _context.Votes.FirstOrDefault(u => u.Voter == name_voter && u.IsVoted == true);
        if (voter == null)
        {
            v.IsVoted = true;
            v.Voter = name_voter;
            _context.Add(v);
            _context.SaveChanges();
            ViewBag.Greeting = "رای شما با موفقیت ثبت شد"; // "Your vote was registered successfully"
            return RedirectToAction(nameof(end));
        }
        v.IsVoted = true;
        v.Voter = name_voter;
        _context.Update(v);
        _context.SaveChanges();
        return RedirectToAction(nameof(end));
    }
    else
    {
        return View(_context.Applicants.ToList());
    }
}
You need to tell the DbContext about your entity. If you do var vote = new Vote(), vote has no Id. The DbContext sees this and thinks you want to add a new entity, so it simply does that. The DbContext tracks all the entities that you load from it, but since this is just a new instance, it knows nothing about it.
To actually perform an update, you have two options:
1 - Load the Vote from the database in some way; if you have an Id, use it to find the entity.
// Loads the current vote by its id (or whatever other field...)
var existingVote = context.Votes.Single(p => p.Id == id_from_param);
// Perform the changes you want...
existingVote.SomeField = "NewValue";
// Then call save normally.
context.SaveChanges();
2 - Or, if you don't want to load it from the Db, you have to manually tell the DbContext what to do:
// create a new "vote"...
var vote = new Vote
{
    // Since it's an update, you must have the Id somehow, so you must set it manually
    Id = id_from_param,
    // do the changes you want. Be careful, because this can cause data loss!
    SomeField = "NewValue"
};
// This is you telling the DbContext: Hey, I control this entity.
// I know it exists in the DB and it's modified
context.Entry(vote).State = EntityState.Modified;
// Then call save normally.
context.SaveChanges();
Either of those two approaches should fix your issue, but I suggest you read a bit more about how Entity Framework works; this is crucial for the success (and performance) of your apps. Option 2 in particular can cause many issues. There's a reason the DbContext keeps track of entities so you don't have to: it's very complicated, and things can go south fast.
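If you go the option-2 route but only want certain columns in the UPDATE, you can also attach the entity and then change just those properties. A minimal sketch, assuming Vote's key is Id and the entity isn't tracked yet:

var vote = new Vote { Id = id_from_param };
context.Attach(vote);    // starts tracking in the Unchanged state
vote.IsVoted = true;     // only the properties you actually change...
vote.Voter = name_voter; // ...are marked Modified
context.SaveChanges();   // the UPDATE sets only IsVoted and Voter

This avoids the data-loss risk of marking the whole entity Modified, because untouched columns are left alone.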
Some links for you:
ChangeTracker in Entity Framework Core
Working with Disconnected Entity Graph in Entity Framework Core
I am currently updating a backend project to .NET Core and I'm having performance issues with my LINQ queries.
Main Queries:
var queryTexts = from text in _repositoryContext.Text
                 where text.KeyName.StartsWith("ApplicationSettings.")
                 where text.Sprache.Equals("*")
                 select text;
var queryDescriptions = from text in queryTexts
                        where text.KeyName.EndsWith("$Descr")
                        select text;
var queryNames = from text in queryTexts
                 where !(text.KeyName.EndsWith("$Descr"))
                 select text;
var queryDefaults = from defaults in _repositoryContext.ApplicationSettingsDefaults
                    where defaults.Value != "*"
                    select defaults;
After getting these IQueryables I run a foreach loop in another context to build my DTO model:
foreach (ApplicationSettings appl in _repositoryContext.ApplicationSettings)
{
    var applDefaults = queryDefaults.Where(c => c.KeyName.Equals(appl.KeyName)).ToArray();
    description = queryDescriptions
        .Where(d => d.KeyName.Equals("ApplicationSettings." + appl.KeyName + ".$Descr"))
        .FirstOrDefault()?
        .Text1 ?? "";
    var name = queryNames.Where(n => n.KeyName.Equals("ApplicationSettings." + appl.KeyName)).FirstOrDefault()?.Text1 ?? "";
    // Do some stuff with data and return DTO Model
}
In my old project this part executed in about 0.45 s; by now it takes about 5-6 s.
I thought about using compiled queries, but I saw they don't support returning IEnumerable yet. I also tried to avoid the Contains() method, but that didn't improve performance either.
Could you take a short look at my queries and maybe refactor them or give some hints on how to make one of them faster?
Note that _repositoryContext.Text has by far the most entries (about 50,000) of the tables involved, because of translations.
queryNames, queryDefaults, and queryDescriptions are all queries, not collections, and you are running them inside a loop. Try loading them outside of the loop.
E.g., load queryNames into a dictionary:
var queryNames = from text in queryTexts
                 where !(text.KeyName.EndsWith("$Descr"))
                 select text;
var queryNamesByName = queryNames.ToDictionary(n => n.KeyName);
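The same goes for the other two. A sketch of the whole loop, assuming KeyName is unique per text entry (ToLookup is used for the defaults, where a key may repeat or be absent):

// Run each query once and index the results in memory
var namesByKey = queryNames.ToDictionary(n => n.KeyName);
var descriptionsByKey = queryDescriptions.ToDictionary(d => d.KeyName);
var defaultsByKey = queryDefaults.ToLookup(c => c.KeyName);

foreach (ApplicationSettings appl in _repositoryContext.ApplicationSettings)
{
    // Dictionary/lookup probes instead of one SQL round trip per iteration
    var applDefaults = defaultsByKey[appl.KeyName].ToArray();
    descriptionsByKey.TryGetValue("ApplicationSettings." + appl.KeyName + ".$Descr", out var descrEntry);
    var description = descrEntry?.Text1 ?? "";
    namesByKey.TryGetValue("ApplicationSettings." + appl.KeyName, out var nameEntry);
    var name = nameEntry?.Text1 ?? "";
    // Do some stuff with data and return DTO Model
}

That is four queries in total instead of three per ApplicationSettings row.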
One can write queries like the one below:
var Profile = "developer";
var LstUserName = alreadyUsed.Where(x => x.Profile == Profile).ToList();
You can also use foreach, like below:
lstUserNames.ForEach(x =>
{
    // do your stuff
});
I'm using a SqlDataReader to retrieve the results of a SELECT query from a DBMS.
So far I read the rows in the result set one by one with SqlDataReader.Read(), and I process them one by one. When the result set is huge (millions of rows times hundreds of columns), iterating with .Read() is very slow. I'm asking: is there a way to do a "block" read from SqlDataReader, i.e. something like SqlDataReader.Read(100) that gives me an array of the next 100 rows in the result set?
I thought about doing something like DataTable.Load() to load the whole result set into memory, but since the table has a size of several gigabytes, it wouldn't fit in memory.
What would you recommend?
Thanks a lot
Example code:
TdConnection conn;
TdCommand cmd;
TdDataReader reader;
IAsyncResult res;
conn = new TdConnection(@"xxxx;");
conn.Open();
cmd = new TdCommand(q, conn);
res = cmd.BeginExecuteReader();
while (!res.IsCompleted) ;
reader = cmd.EndExecuteReader(res);
if (reader.HasRows)
{
    string sb;
    string currentout = @"C:\file";
    string[] row = new string[reader.FieldCount];
    sb = "";
    for (int i = 0; i < reader.FieldCount; i++)
        row[i] = reader.GetName(i);
    sb = String.Concat(sb, String.Join("\t", row), "\r\n");
    File.WriteAllText(currentout, sb);
    sb = "";
    /* With a test query, the following commented-out "while" takes 5 minutes
       to iterate over a dataset with 639967 rows x 63 columns (about 300MB):
           while (reader.Read());
       With the same test query, the "while" block below takes 6 minutes to
       iterate over the same dataset AND write it to a text file. I conclude
       that I/O to the text file is fast, and .Read() is very slow. */
    while (reader.Read())
    {
        for (int i = 0; i < reader.FieldCount; i++)
            row[i] = reader.GetValue(i).ToString();
        sb = String.Concat(sb, String.Join("\t", row), "\r\n");
        if (sb.Length > 100000)
        {
            File.AppendAllText(currentout, sb);
            sb = "";
        }
    }
    File.AppendAllText(currentout, sb);
}
reader.Close();
reader.Dispose();
cmd.Dispose();
conn.Close();
The "Td" components are the Teradata DBMS interface for .NET (but they behave just like "SQL" components).
What's slow here is the quadratic cost of the string concatenation in the loop:
sb = String.Concat(sb,String.Join("\t",row),"\r\n");
Since this is such a glaring perf problem, I'm submitting it as an answer; it probably solves your problem.
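A minimal sketch of the fix, changing only the accumulation and keeping the rest of the question's loop (sb becomes a System.Text.StringBuilder):

var sb = new StringBuilder();
while (reader.Read())
{
    for (int i = 0; i < reader.FieldCount; i++)
        row[i] = reader.GetValue(i).ToString();
    sb.Append(String.Join("\t", row)).Append("\r\n");
    if (sb.Length > 100000)
    {
        File.AppendAllText(currentout, sb.ToString());
        sb.Clear();   // reuse the buffer instead of rebuilding the string
    }
}
File.AppendAllText(currentout, sb.ToString());

Append copies only the new fragment, so the cost grows linearly with the output instead of quadratically with the size of each 100 KB chunk.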
If your app is slow, profile it to see what is slow.
Unfortunately, ADO.NET is indeed quite CPU-heavy when reading data. There is nothing you can do about it.
I have a very strange error with Dapper:
there is already an open DataReader associated with this Command which must be closed first
But I don't use a DataReader! I just run a select query from my server application and take the first result:
// How I run the query:
public static T SelectVersion(IDbTransaction transaction = null)
{
    return DbHelper.DataBase.Connection.Query<T>(
        "SELECT * FROM [VersionLog] WHERE [Version] = (SELECT MAX([Version]) FROM [VersionLog])",
        null, transaction, commandTimeout: DbHelper.CommandTimeout).FirstOrDefault();
}
// And how I call this method (it is called on the server, from a client):
public Response Upload(CommitRequest message)
{
    // Preparing data from CommitRequest
    using (var tr = DbHelper.DataBase.Connection.BeginTransaction(IsolationLevel.Serializable))
    {
        int v = SelectQueries<VersionLog>.SelectVersion(tr) != null
                    ? SelectQueries<VersionLog>.SelectVersion(tr).Version
                    : 0; // Call my query here
        int newVersion = v + 1; // update version
        // Saving changes from CommitRequest to the db.
        // The updated version is saved to the base too; maybe that is the problem?
        return new Response
        {
            Message = String.Empty,
            ServerBaseVersion = versionLog.Version,
        };
    }
}
Saddest of all, this exception appears at random times. I think the problem is concurrent access to the server from two clients.
Please help.
This sometimes happens if the model and the database schema do not match and an exception is raised inside Dapper.
If you really want to get to the bottom of this, the best way is to include the Dapper source in your project and debug.
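A purely hypothetical illustration of the kind of mismatch meant here (the real VersionLog schema isn't shown in the question): if a column can be NULL but the mapped property is a non-nullable value type, Dapper throws while materializing the row, and a half-consumed reader can then surface as exactly this error on the next command.

public class VersionLog
{
    // If [Version] were nullable in the table, a NULL row would make Dapper
    // throw here during mapping; declaring it as int? would avoid that.
    public int Version { get; set; }
}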
I'm using the following method for all data insert and update processes in my application; I just need to pass an array of SQL queries to the method. Are there any disadvantages to using one common method like this? Does it cause any performance reduction in the application?
public int ExecuteCommand(string[] sqls)
{
    numberOfRecordsAffected = 0;
    IngresConnection ingresConnection = new IngresConnection(ConnStr);
    IngresTransaction ingresTransaction = null;
    try
    {
        ingresConnection.Open();
        ingresTransaction = ingresConnection.BeginTransaction();
        foreach (string sql in sqls)
        {
            IngresCommand ingresCommand = new IngresCommand(sql, ingresConnection, ingresTransaction);
            ingresCommand.CommandTimeout = 0;
            numberOfRecordsAffected += ingresCommand.ExecuteNonQuery();
        }
        ingresTransaction.Commit();
    }
    catch
    {
        if (ingresTransaction != null)
            ingresTransaction.Rollback();
        throw;
    }
    finally
    {
        if (ingresConnection != null)
            ingresConnection.Close();
    }
    return numberOfRecordsAffected;
}
See this opinionated article about dynamic SQL. You ask specifically about performance, which indeed suffers a lot, because your queries can't be cached by the database and each of them has to be parsed. The real worry should be security, though: it's easy to get it wrong or incomplete at one point or another, and even harder to test whether it has been messed up somewhere.
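The usual fix is to keep the statement text constant and ship the values as parameters. A rough sketch against the same Ingres provider, using only the standard ADO.NET surface (the table and column names are hypothetical, and I'm assuming the provider's ODBC-style '?' markers):

using (var ingresConnection = new IngresConnection(ConnStr))
using (var ingresCommand = new IngresCommand(
    "UPDATE app_user SET profile = ? WHERE user_name = ?", ingresConnection))
{
    // The statement text never changes, so the server can reuse the parsed
    // plan, and the values cannot alter the SQL itself.
    var profileParam = ingresCommand.CreateParameter();
    profileParam.Value = "developer";
    ingresCommand.Parameters.Add(profileParam);

    var nameParam = ingresCommand.CreateParameter();
    nameParam.Value = "john";
    ingresCommand.Parameters.Add(nameParam);

    ingresConnection.Open();
    ingresCommand.ExecuteNonQuery();
}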