MySQL trigger in order to cache results?

I have a query that takes about a second to execute pre-cache. Post-cache is fine. Here's a description of the problem: MySQL: nested set is slow?
If I can't find a solution to my problem, would creating a trigger that loops through and executes all the possible queries that table might have to serve (i.e. if there are 100 records in that table, it would execute 100 queries) be a good idea? That way, when my application does execute such a query, it can rely on cached results.
It feels like a bad solution, but I really can't afford a 1 second response time from that query.

Since you are using a MyISAM table, you can try preloading the table's indexes into the key cache.
http://dev.mysql.com/doc/refman/5.0/en/cache-index.html
http://dev.mysql.com/doc/refman/5.0/en/load-index.html
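For example, something along these lines should warm the indexes ahead of time (a sketch: hot_cache and nested_set_table are placeholder names, and the buffer size needs tuning to the size of your indexes):
-- create and size a dedicated key cache, assign the table's MyISAM indexes
-- to it, and preload them so the first query is already served from memory
SET GLOBAL hot_cache.key_buffer_size = 64 * 1024 * 1024;
CACHE INDEX nested_set_table IN hot_cache;
LOAD INDEX INTO CACHE nested_set_table;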


A simple SQL statement is timing out

This is a very simple SQL statement:
Update FosterHomePaymentIDNo WITH (ROWLOCK) Set FosterHomePaymentIDNo=1296
But it's timing out when I execute it from the context of an ASP.NET WebForms application.
Could this have something to do with the rowlock? How can I make sure that this SQL statement runs in a reasonable amount of time without compromising the integrity of the table? I suspect removing the rowlock would help, but then we could run into different users updating the table at the same time.
And yes, this is a "next ID" table that contains only one column and only one row; I don't know why it was set up this way instead of using an identity column or even select max(id) + 1!
If an UPDATE of a one-row table takes a long time, it's probably blocked by another session that updated it in a transaction that hasn't committed yet. This is why you don't want to generate IDs like this.
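To confirm whether blocking is the cause, a quick check along these lines (a sketch assuming SQL Server 2005 or later) will list blocked requests together with the sessions blocking them:
-- show requests that are currently waiting on another session
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
If a row shows up for your UPDATE, track down the blocking session and make sure its transaction commits or rolls back promptly.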

Sql server bulk load into a table

I need to update a table in SQL Server 2008 along the lines of the MERGE statement - deletes, inserts, updates. The table has 700k rows and I need users to still have read access to it, assuming an isolation level of read committed.
I tried things like ALTER TABLE table SET (LOCK_ESCALATION=DISABLE) to no avail. I tested by running a select top 50000 * from another window; obviously read uncommitted worked :). Is there any way around this without changing the users' isolation level, while retaining an 'all or nothing' transaction behaviour?
My current solution of a cursor that commits in batches of n may let users keep working, but it loses the transactional behaviour. Perhaps I could just make the bulk update fast enough to always take less than 30 seconds (the timeout). The problem is that the users' target DBs are on very slow machines with only 512 MB of RAM. I'm not sure of the processor, but I assume it is really slow, and I don't have access to the machines at this time!
I created a test that causes an update statement to need to run against all 700k rows:
I tried an update with a left join on my dev box (quite fast) and it was 17 seconds
The merge statement was 10 seconds
The FORWARD ONLY cursor was slower than both
These figures are acceptable on my machine but I would feel more comfortable if I could get the query time down to less than 5 seconds before allowing locks.
Any ideas on preventing any locking on the table/rows or making it faster still?
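For context, the kind of MERGE being timed above looks roughly like this; the table and column names below are placeholders, not the poster's actual statement:
-- update matching rows, insert missing ones, delete rows no longer in the source
MERGE dbo.TargetTable AS t
USING dbo.StagingTable AS s
    ON t.Id = s.Id
WHEN MATCHED THEN
    UPDATE SET t.Name = s.Name, t.Amount = s.Amount
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Name, Amount) VALUES (s.Id, s.Name, s.Amount)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;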
It sounds like this table may be queried a lot but not updated much. If it really is a true read-only table for everyone else, but you want to update it extremely quickly, you could create a script that uses this method (could even wrap it in a transaction, I believe, although it's probably unnecessary):
1. Make a copy of the table named TABLENAME_COPY
2. Update the copy table
3. Rename the original table to TABLENAME_ORIG
4. Rename the copy table to TABLENAME
You would only experience downtime in between steps 3 and 4, and a scripted table rename would probably be quicker than an update of so many rows.
This does all assume that no one else can update the table while your process is running, but they will still be able to read it fully at any point except between steps 3 and 4.
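A sketch of the copy-and-swap, with TABLENAME as a placeholder (note that SELECT INTO does not copy indexes or constraints, so those would need to be re-created on the copy):
-- steps 1-2: build and update the copy
SELECT * INTO dbo.TABLENAME_COPY FROM dbo.TABLENAME;
-- ...apply the MERGE / UPDATE / DELETE work to dbo.TABLENAME_COPY here...

-- steps 3-4: swap the tables; keep this window as short as possible
BEGIN TRANSACTION;
    EXEC sp_rename 'dbo.TABLENAME', 'TABLENAME_ORIG';
    EXEC sp_rename 'dbo.TABLENAME_COPY', 'TABLENAME';
COMMIT TRANSACTION;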

SQL stored procedure failing in large database

I have a SQL file in which I copy all contents from one table in a database to another table in another database.
Traditional INSERT statements are used to perform the operation. However, this table has 8.5 million records and the script fails. The queries succeed with a smaller database.
Also, when I run a select * query for that particular table, SQL Query Express shows an out-of-memory exception.
In particular, there is one table with that many records, and this table alone is what I want to copy from the old DB to the new DB.
What are alternate ways to achieve this?
Is there any quick workaround by which we can avoid this exception and make the queries succeed?
Let me put it this way: why would this operation fail when there are a lot of records?
I don't know if this counts as "traditional INSERT", but have you tried "INSERT INTO"?
http://www.w3schools.com/sql/sql_select_into.asp
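If both databases live on the same SQL Server instance, a single set-based copy along these lines (database, table, and column names are placeholders) avoids running millions of individual INSERT statements:
-- copy every row from the old database into the new one in one statement
INSERT INTO NewDb.dbo.BigTable (Id, Col1, Col2)
SELECT Id, Col1, Col2
FROM OldDb.dbo.BigTable;
If a single transaction is still too large for the log, the same statement can be run in key-range batches.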

Debug PostgreSQL sql query

Is there a way to configure PostgreSQL so that when I run a "delete from table_a;"
it outputs some information about how many entries were cascade-deleted?
I'm running my queries in the CLI application.
I found a solution. It was good enough for me, though I wanted estimated statistics on how many rows were affected.
This will output a list of all the constraints triggered by the query:
EXPLAIN ANALYZE DELETE FROM table_a;
You could use a plpgsql trigger function on tables to increment a sequence on a delete to get an exact count.
You would need to reset the sequence before issuing the delete. You could use a different sequence per table to get per table statistics.
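A minimal sketch of that idea, assuming a child table named table_b whose rows are cascade-deleted when table_a is emptied (every name here is a placeholder):
-- counter sequence plus a row-level trigger that bumps it on every delete
CREATE SEQUENCE table_b_delete_count;

CREATE OR REPLACE FUNCTION count_table_b_deletes() RETURNS trigger AS $$
BEGIN
    PERFORM nextval('table_b_delete_count');
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER table_b_delete_counter
AFTER DELETE ON table_b
FOR EACH ROW EXECUTE PROCEDURE count_table_b_deletes();

-- reset, run the delete, then read the count (rows deleted = currval - 1)
SELECT setval('table_b_delete_count', 1);
DELETE FROM table_a;
SELECT currval('table_b_delete_count') - 1 AS cascaded_deletes;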

Proc SQL Delete takes WAY too long

I'm running the following SAS command:
Proc SQL;
Delete From Server003.CustomerList;
Quit;
Which is taking over 8 minutes... when it takes only a few seconds to read that file. What could cause a delete to take so long, and what can I do to make it go faster?
(I do not have access to drop the table, so I can only delete all rows)
Thanks,
Dan
Edit: I also apparently cannot Truncate tables.
This is NOT regular SQL. SAS' Proc SQL does not support the Truncate statement. Ideally, you want to figure out what's going on with the performance of the delete from; but if what you really need is truncate functionality, you could always just use pure SAS and not mess with SQL at all.
data Server003.CustomerList;
set Server003.CustomerList (obs=0);
run;
This effectively performs and operates like a TRUNCATE would. It maintains the dataset/table structure but does not populate it with data (due to the OBS=0 option).
Are there a lot of other tables which have foreign keys to this table? If those tables don't have indexes on the foreign key column(s), then it could take a while for SQL to determine whether or not it's safe to delete the rows, even if none of the other tables actually has a value in the foreign key column(s).
Try adding this to your LIBNAME statement:
DIRECT_EXE=DELETE
According to SAS/ACCESS(R) 9.2 for Relational Databases: Reference,
Performance improves significantly by using DIRECT_EXE=, because the SQL delete statement is passed directly to the DBMS, instead of SAS reading the entire result set and deleting one row at a time.
I would also mention that, in general, SQL commands run slower in SAS PROC SQL. Recently I did a project and moved the TRUNCATE TABLE statements into a stored procedure to avoid the penalty of having them inside SAS and being handled by its SQL optimizer and surrounding execution shell. In the end this increased the performance of the TRUNCATE TABLE substantially.
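The database-side piece of that approach can be as small as a wrapper like the following (a sketch assuming SQL Server; the procedure and table names are placeholders), which SAS then invokes via SQL pass-through instead of issuing the TRUNCATE itself:
-- wrap the truncate so the client only has to call one procedure
CREATE PROCEDURE dbo.usp_TruncateCustomerList
AS
BEGIN
    TRUNCATE TABLE dbo.CustomerList;
END;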
It might be slower because disk writes are typically slower than reads.
As for a way around it without dropping/truncating, good question! :)
You also could consider the elegant:
proc sql; create table libname.tablename like libname.tablename; quit;
It will produce a new table with the same name and the same metadata as your previous table, and delete the old one in the same operation.