I am building a database in Access 2007. We don't even have any data yet, but the database is constantly freezing. I used the built-in performance analyzer and it said everything was fine, but I am worried that the database will be unusably slow if I don't fix it soon.
Here is why I think it may be slow:
We have 300+ queries saved in the database, all of which need to run weekly.
We have 4 main reports and a subreport for nearly all of the queries above. Why? Because the 4 main reports need information from all of the queries, and we are using subreports as the source.
A few of our queries are pulling information from at least 15 other subqueries.
Other than this, I don't know why it could be slow, unless it's just my computer. Could someone please give me some insight into what might be wrong, how I might improve our database's performance, and whether this number of queries and subreports is abnormally high?
Thanks,
Links to tables on a network share, or even a default printer that is on the network, can cause many delays. One often-used solution is to force a persistent connection to stay open. During development you can simply open, in the front end, any linked table (one that is linked to the back end) and then minimize it. This often solves those delays. The list of other things to check can be found here:
http://www.granite.ab.ca/access/performancefaq.htm
If the above persistent connection helps, you also want to ensure that your startup code opens a connection to the back end into a global database variable, or perhaps opens a table into a global recordset.
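A minimal VBA sketch of that startup code (the path \\Server\Share\BackEndData.accdb and the table name tblAnyLinkedTable are placeholders, not from your database):

Option Explicit

' Globals that stay set for the life of the application
Public dbBackEnd As DAO.Database
Public rstKeepOpen As DAO.Recordset

Public Sub OpenPersistentConnection()
    ' Open the back-end file directly and keep the reference alive
    Set dbBackEnd = DBEngine.OpenDatabase("\\Server\Share\BackEndData.accdb")
    ' And/or keep a recordset open on any linked table
    Set rstKeepOpen = CurrentDb.OpenRecordset("tblAnyLinkedTable", dbOpenSnapshot)
End Sub

Call OpenPersistentConnection from your startup form's Open event (or an AutoExec macro) and do not close these objects until the application shuts down.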
I run a heavy query on IBM i. The first time it takes a long time; subsequent runs are much faster. It seems to be creating a temporary index. How can I remove this index so I can re-test as if it were the first run?
Use the Visual Explain (VE) tool in the Run SQL Scripts component of ACS to see the differences between runs.
If the issue is indeed a system-maintained temporary index (MTI), you can track it down via the Schemas tooling in ACS and delete it if you so desire.
However, an MTI only gets deleted by the system when the system reboots (IPL).
So if you are seeing differences without rebooting the server, I suspect the differences are caused by pseudo-closing. By default, once the DB sees the same query a few times (3 is the default), instead of hard-closing its cursors, it will pseudo-close them.
Again, VE will show "hard opens" and "pseudo opens".
To get the pseudo-closed cursors to hard-close, simply disconnect and reconnect.
I have a new idea and a question about it that I would like to ask you.
We have an on-premises / in-house CRM application that we use more or less 24x7. We also run billing and payroll on the same CRM database, which is OLTP, and the same goes for SSRS reports.
It looks like whenever we perform an operation in the front end that inserts and updates a couple of entities at the same time, our application freezes until that process finishes, e.g. extracting payroll for 500 employees for their activities during the last 2 weeks. Basically it summarizes total working hours, pulls those numbers from the database, and writes/updates the record that marks the extract as complete. So for 500 employees we are looking at around 40K-50K rows of Insert/Select/Update statements combined.
Nobody can do anything while this process runs! We are considering the following options to take care of this issue.
Running this process in off-hours
OR making a copy of the Dynamics CRM database and doing these operations (extracting thousands of records and running multiple reports) on the copy.
My questions are:
First of all, how do we create the copy, and where should we create it (best practices)?
How do we make it synchronize in real time?
If we only run SELECT statements on the copy DB, that's fine; but if we do any insert/update on the copy, how do we reflect that in the actual live DB? In short, how do we make sure the original and the copy are synchronized with each other in real time?
I know I've asked a lot of questions, but I'm a SQL person stepping into the CRM team and offering suggestions, so you know what I am trying to say.
Thanks in advance, folks, for any suggestions.
Well, to answer your question regarding a live "copy" of a database, a good solution is an AlwaysOn availability group.
https://blogs.technet.microsoft.com/canitpro/2013/08/19/step-by-step-creating-a-sql-server-2012-alwayson-availability-group/
Though I don't think that is what you are going to want in this situation. AlwaysOn availability groups are typically for database instances that require very short failover windows. For example, if the primary DB server in the cluster goes down, it fails over to a secondary within a second or two at most, and the end users only notice a slight hiccup.
What I think you would find better is to look at the insert statements that are hitting your database server and see why they are preventing you from pulling data. If they are truly locking the table, changing a large portion of your reads to NOLOCK reads might help remedy your situation.
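For what it's worth, here is a rough .NET sketch of what a NOLOCK read looks like; the table and column names (dbo.ActivityLog, EmployeeId, Hours, ActivityDate) are invented for illustration, not taken from your CRM schema:

Imports System
Imports System.Data.SqlClient

Module ReportReads
    Sub PrintTwoWeekHours(connectionString As String)
        ' WITH (NOLOCK) reads without taking shared locks (a dirty read),
        ' so the report is not blocked by the payroll inserts/updates.
        Dim sql As String = "SELECT EmployeeId, SUM(Hours) AS TotalHours "
        sql &= "FROM dbo.ActivityLog WITH (NOLOCK) "
        sql &= "WHERE ActivityDate >= DATEADD(week, -2, GETDATE()) "
        sql &= "GROUP BY EmployeeId"

        Using cn As New SqlConnection(connectionString)
            cn.Open()
            Using cmd As New SqlCommand(sql, cn)
                Using rdr As SqlDataReader = cmd.ExecuteReader()
                    While rdr.Read()
                        Console.WriteLine(rdr("EmployeeId").ToString() & ": " & rdr("TotalHours").ToString())
                    End While
                End Using
            End Using
        End Using
    End Sub
End Module

Keep in mind that NOLOCK can return uncommitted data, so it is only appropriate where approximate results are acceptable; READ COMMITTED SNAPSHOT isolation is another option worth researching.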
It would also be helpful to know what kind of resources you have allocated and whether you have proper indexing on the core tables of your DB. If you don't have proper indexing, then a lot of the queries can take longer than normal, causing the locking you're seeing.
Finally, I would recommend table partitioning if the tables you are pulling against are too large. This can potentially help with a lot of disk-speed issues and also help optimize your queries if you partition by time segment (i.e. make a new partition every X months, so when a query pulls from one time segment it only pulls from that one data file).
https://msdn.microsoft.com/en-us/library/ms190787.aspx
I would say you need to focus on efficiency more than on a "copy database", as from the sounds of it your volumes aren't high enough to need anything like that. I currently have a SQL Server transaction database taking 10 million+ inserts a day, and I still have live reports hitting it. You just need the resources and proper indexing to accommodate it.
I have a split MS Access database. Most of the data is populated through SQL queries run through VBA. When I first connect to the back end data, it takes a long time and the back end file (.accdc file) locks and unlocks 3 or 4 times. It's not the same number of locks every time, but this locking and unlocking corresponds to taking a while to open. When I first open the front end, it does not connect to the back end. This step is done very quickly. The first time that I connect to the back end, it can take a while, though.
Any suggestions on things to look into to speed this up and make it happen more reliably on the first try? This is a multi-user file, and I would rather not make any changes to the registry, since that would require making the same update for everyone in my department. I'm mostly concerned about it taking a while to open, but I thought the locking and unlocking seemed peculiar and might be contributing to, or be a symptom of, something else going on.
In most cases if you use a persistent connection, then the slow process you note only occurs once at startup.
This and some other performance tips can be found here:
http://www.fmsinc.com/MicrosoftAccess/Performance/LinkedDatabase.html
Nine times out of ten, the above will thus fix the "delays" when running the application. For testing, you can simply open any linked table, minimize it, and then try running your code or startup form - note how the delays are gone.
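For example, a tiny VBA sketch of the "open a linked table and minimize it" approach in the startup form (the table name tblAnyLinkedTable is a placeholder):

Private Sub Form_Open(Cancel As Integer)
    ' Open any linked table read-only, then minimize it so it stays open
    DoCmd.OpenTable "tblAnyLinkedTable", acViewNormal, acReadOnly
    DoCmd.Minimize
End Sub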
We use SQL Server and have a WinForms application. In our product, the records sometimes exceed 50,000 in a single transaction, and we face performance issues there.
When we have a huge amount of data, we generally split it across multiple database calls. So in one of our import functions we update the server in batches of 1,000 rows: if we have 5,000 records, then while processing them (in a for loop) we update the first 1,000 rows and keep processing until we have another 1,000 rows to update. This performs better, but honestly I feel it is not the best in terms of performance.
But in other import/export functionality we have seen that updating the database every 5,000 rows gives better results than every 1,000. So we are facing a lot of confusion, and the code does not look the same across our applications.
Can anyone give me an idea of what makes this happen? You don't have the sample data, database schema, etc., and yes, I agree. But are there any scenarios that should be taken care of / considered while working with the database? And why do different batch sizes give us better results - is there something we are ignoring? I am not a database champ, more of a .NET programming guy. I will be happy to hear your suggestions.
Not sure if this is helpful: our data generally contains employee details like payroll information, personal details, accrual benefits, compensation, etc. Data is fed from an Excel file, and we also generate a lot of data in our internal processes. Let me know if you need more information. Thanks!
The more database callouts you have, the more connection management you will need (open connection, use connection, clean up and close, are we using connection pooling, etc.). You're sending the same amount of data over the wire, but you are opening and closing the taps more often, which brings overhead.
The downside of fewer, larger calls is that the amount of data held in a single transaction is greater.
However, if I may make a suggestion, you might want to consider achieving this in a different way: load all the data into the database as fast as possible (into interim tables where the constraints are deactivated and with transactional management turned off, if possible) and then let the database carry out the task of checking and validating the data.
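One hedged sketch of that approach from .NET, using SqlBulkCopy to load a staging table and then a single set-based statement to apply the data; the table and column names (staging.EmployeeImport, dbo.Employee, PayrollHours, EmployeeId) are invented for illustration:

Imports System.Data
Imports System.Data.SqlClient

Module BulkImport
    Sub ImportBatch(connectionString As String, rows As DataTable)
        Using cn As New SqlConnection(connectionString)
            cn.Open()

            ' 1) Fast load into a constraint-free staging table
            Dim bulk As New SqlBulkCopy(cn)
            bulk.DestinationTableName = "staging.EmployeeImport"
            bulk.BatchSize = 5000       ' rows per round trip to the server
            bulk.WriteToServer(rows)
            bulk.Close()

            ' 2) One set-based statement validates/applies the data on the server
            Dim sql As String = "UPDATE e SET e.PayrollHours = s.PayrollHours "
            sql &= "FROM dbo.Employee e JOIN staging.EmployeeImport s ON s.EmployeeId = e.EmployeeId"
            Using cmd As New SqlCommand(sql, cn)
                cmd.ExecuteNonQuery()
            End Using
        End Using
    End Sub
End Module

The right batch size still depends on row width, indexes, and log activity, which is why 1,000 vs 5,000 behaves differently across your applications - it is worth measuring rather than standardizing blindly.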
Since you are using SQL Server, you can just turn on SQL Profiler, define an appropriate event filter, and watch what happens under different loads.
I have a problem that seems to be the result of a deadlock situation.
We are now searching for the root of the problem, but in the meantime we wanted to restart the server and get the customer going again.
And now, every time we start the program, it just says "SqlConnection does not support parallel transactions". We have not changed anything in the program; it's compiled and on the customer's server, but after the "possible deadlock" situation it won't go online again.
We have 7 clients (computers) running the program. Each client talks to a web service on a local server, and the web service talks to the SQL Server (same machine as the web server).
We have restarted both SQL Server and IIS, but we have not rebooted the machine because of other important services running on it, so that is the last thing we will do.
We can see no locks or anything in the management tab.
So my question is: why does the "SqlConnection does not support parallel transactions" error appear from one moment to the next without anything changing in the program, and why does it survive a SQL Server restart?
It seems to happen on the first DB request the program makes when it starts.
If you need more information, just ask. I'm puzzled...
More information:
I don't think I have "long-running" transactions. The scenario is often that I have a dataset with 20-100 rows (ContractRows) on which I'll do an .Update on the TableAdapter. I also loop through those 20-100 rows, and for some of them I'll create ad hoc SQL queries (for example, if a rented product is marked as returned, I create a SQL query to mark the product as returned in the database).
So, very simplified, I do this:
Create objTransactionObject
Create objTableAdapter(objTransactionObject)

For Each row In contractDS.ContractRows
    If row.IsReturned Then
        strSQL &= "update product set instock=1 where prodid=" & row.ProductId & vbCrLf
    End If
Next

objTableAdapter.Update(contractDS)
objData.ExecuteQuery(strSQL, objTransactionObject)

If successful Then
    objTransactionObject.Commit()
Else
    objTransactionObject.Rollback()
End If

objTransactionObject.Dispose()
And then I commit or roll back depending on whether it went well or not.
Edit: None of the answers has solved the problem, but thank you for the good troubleshooting pointers.
The "SqlConnection does not support parallel transactions" error disappeared suddenly, and now SQL Server just "goes down" 4-5 times a day. I guess a deadlock is causing that, but I don't have the right knowledge to find out and am short on SQL experts who can monitor this for me at the moment. I just restart SQL Server and everything works again. One time in ten I also have to restart the computer. It's really bugging me (and my customers, of course).
Anyone who knows a person with good knowledge of analyzing deadlocks or other SQL problems in Sweden (or anywhere in the world, English-speaking) is free to contact me. I know this isn't a contact site, but I'm taking my chance and asking because I have run out of options. I have spent three days and nights optimizing the clients to make sure we close connections and don't do too many stupid things there. Without luck.
It seems that you are sharing connections and creating new transactions on the same open connection (this is the "parallel" part of the exception you are seeing).
Your example seems to support this, as it makes no mention of how you acquire the connection.
You should review your code and make sure that you only open a connection where needed and dispose of it when you are done (and by all means, use the Using statement to make sure the connection is closed), as it seems like you are leaving one open somewhere.
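As an illustration only (the method name and the way the SQL is passed in are placeholders, not your code), the pattern looks roughly like this:

Imports System.Data.SqlClient

Module ContractSave
    Sub SaveContract(connectionString As String, strSQL As String)
        ' One connection, one transaction, both guaranteed to be cleaned up
        Using cn As New SqlConnection(connectionString)
            cn.Open()
            Using tx As SqlTransaction = cn.BeginTransaction()
                Try
                    Using cmd As New SqlCommand(strSQL, cn, tx)
                        cmd.ExecuteNonQuery()
                    End Using
                    tx.Commit()
                Catch
                    tx.Rollback()
                    Throw
                End Try
            End Using
        End Using   ' the connection is closed here even if an exception escapes
    End Sub
End Module

If your TableAdapter and the ad hoc queries must share one transaction, give them the same SqlConnection and SqlTransaction explicitly; never start a second transaction on a connection that already has one open, which is exactly what the "parallel transactions" message complains about.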
Yours doesn't appear to be an unusual problem. Google found a lot of hits when I pasted your error string into the query box.
Reading past answers, it sounds like it has something to do with improperly interleaved transactions or the isolation level.
How long are connections held open? Do you have long-running transactions?
Do you have implicit transactions turned on somewhere, so that there are some transactions where you wouldn't have expected them? Have you opened Activity Monitor to see if there are any unexpected transactions?
Have you tried doing a backup of your transaction log? That might clear it out as well if I remember a previous, similar experience correctly.