How to improve performance of VFP application using SQL as backend - sql

I have a VFP application and it is very complex. I am using SQL Server as the database, and the application responds very slowly in some scenarios.
How can I improve performance in a VFP application?
Any tips?

There are a few things you need to consider.
Firstly, when you run your query in a tool such as SQL Server Management Studio, how long does the query take to run?
If it takes 5 minutes to run, then it will take a minimum of 5 minutes to get your data back into VFP.
Secondly, if the data you are retrieving is large, then all of that data has to be transferred from SQL Server to VFP before you can do anything with it.
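For example, you can time the server-side portion directly in SSMS before blaming VFP. This is only a minimal sketch; the table and column names are placeholders, not taken from the original question:

-- Run in SSMS; shows elapsed/CPU time and logical reads for the statement.
SET STATISTICS TIME ON;
SET STATISTICS IO ON;
SELECT OrderID, OrderDate, CustomerID   -- select only the columns VFP actually needs
FROM dbo.Orders                         -- hypothetical table name
WHERE OrderDate >= '2014-01-01';
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;

If the server-side time is already acceptable, the remaining cost is the transfer of the result set to the client, so narrowing the SELECT list and WHERE clause is where to look next.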
Not much else can be said without an example.

Related

how to increase Oracle SQL database or web service performance?

I was given a task to improve the performance of an Oracle SQL database or the web service that uses it. The web service requires billions of rows from the Oracle database and needs to load them at each startup. The data is mostly read-only and very rarely needs to be updated or written.
It is a very old codebase, which is why the original solution loads all the data into memory to improve runtime performance, and why it is slowing down development: the first launch takes 30+ minutes, and if the in-memory cache becomes corrupted for any reason, I have to reload everything from the database, which means another 30+ minutes of waiting.
My task is to improve this process. I have the flexibility to change the SQL database to something else if that would help speed things up. Do you have any suggestions? Thanks in advance!
You could try MySQL. As far as I know, MySQL has no built-in limit on database size. There are published comparisons between MySQL and Oracle that are worth looking at.

Performance hit on DB2 transactional database after linking to SQL Server 2005

We have an AS400 mainframe running our DB2 transactional database. We also have a SQL Server setup that gets loaded nightly with data from the AS400. The SQL Server setup is for reporting.
I can link the two database servers, BUT there's concern about how big a performance hit DB2 might take from queries coming from SQL Server.
Basically, the fear is that if we start hitting DB2 with queries from SQL Server we'll bog down the transactional system and screw up orders and shipping.
Thanks in advance for any knowledge that can be shared.
Anyone who has a pat answer for a performance question is wrong :-) The appropriate answer is always 'it depends.' Performance tuning is best done by measuring, changing one variable, and repeating.
DB2 for i shouldn't even notice if someone executes a 1,000 row SELECT statement. Take Benny's suggestion and run one while the IBM i side watches. If they want a hint, have them use WRKACTJOB and sort on the Int column, which represents interactive response time. I'd guess that the query will be complete before they have time to notice it was active.
If that seems unacceptable to the management, then perhaps offer to test it before or after hours, where it can't possibly impact interactive performance.
As an aside, the RPG guys can create Excel spreadsheets on the fly too. Scott Klement published some RPG wrappers over the Java POI/HSSF classes. Also, Giovanni Perrotti at Easy400.net has some examples of providing an Excel spreadsheet from a web page.
I'd mostly agree with Buck, a 1000 row result set is no big deal...
Unless of course the system is looking through billions of rows across hundreds of tables to get the 1000 rows you are interested in.
Assuming a useful index exists, 1,000 rows shouldn't be a big deal. If you have IBM i Access for Windows installed, there's a component of System i Navigator called "Run SQL Scripts" that includes "Visual Explain", which gives you a visual representation of the query execution plan. With that you can confirm that an index is being used.
One key thing: make sure the work is being done on the i. When using a standard linked table, MS SQL Server will attempt to pull back all the rows and then apply its own WHERE clause:
select * from MYLINK.MYIBMI.MYLIB.MYTABLE where MYKEYFLD = '00335';
Whereas this format sends the statement to the remote server for processing and just gets back the results:
select * from openquery(MYLINK, 'select * from mylib.mytable where MYKEYFLD = ''00335''');
Alternately, you could ask the i guys to build you a stored procedure that you can call to get back the results you are looking for. Personally, that's my preferred method.
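For illustration, a rough sketch of what the call from the SQL Server side could look like; MYLINK and MYLIB come from the examples above, but the procedure name GETORDER is made up, and the linked server needs the 'RPC Out' option enabled for EXEC ... AT to work:

-- Hypothetical procedure; the i guys would create GETORDER in MYLIB so that it
-- returns a result set for the key value passed in.
EXEC ('CALL MYLIB.GETORDER(?)', '00335') AT MYLINK;

This keeps all the row selection on the IBM i side and only the final result set crosses the link.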
Charles

SQL Server queries taking time

I am facing some issues with my stored procedures.
I have one stored procedure for a stacked bar graph, showing one month's data.
Earlier, on my local server, it was taking more than 40 seconds, so I made some changes and now it takes 4 seconds. The same query, when I run it on my live server, takes more than 40 seconds.
The record counts are the same as on my local server.
Can anybody tell me what I should do to make it faster on my live server?
A good start is to run SQL Server Management Studio (SSMS), load up the query, and switch on 'Include Actual Execution Plan'; this will show you exactly what SQL Server is doing with your query. It will also show you a relative '%cost' for each step in the query, which helps to identify which table, join, or aggregate is causing the query to take so long.
I also believe that in the latest version of SSMS it advises which indexes should be added.
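If your SSMS version doesn't surface those suggestions, a rough alternative is to query the missing-index DMVs yourself. This is only a sketch, it needs VIEW SERVER STATE permission, and the suggestions should be treated as hints rather than gospel:

-- Lists index suggestions SQL Server has recorded since the last service restart.
SELECT d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
  ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
  ON s.group_handle = g.index_group_handle
ORDER BY s.avg_user_impact DESC;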
Hope this helps.
Rich.
It's a complicated question; a lot of parameters can change the execution time:
CPU speed
RAM
Indexes
Assuming the live server is at least as powerful in terms of processing power and RAM, indexes are what you should look into first, as in the sketch below.
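As a hedged illustration only (the table and column names here are invented, since the actual procedure wasn't posted), an index that covers the columns the procedure filters and returns is the typical fix:

-- Hypothetical example; substitute the columns your stored procedure actually uses.
CREATE NONCLUSTERED INDEX IX_Sales_DateStatus
    ON dbo.Sales (SaleDate, Status)
    INCLUDE (Amount, Region);   -- covers the SELECT list so no key lookups are needed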
Use indexes on your SQL Server tables. It may also be down to your hosting server's performance; the server may be under heavy load or experiencing downtime.

Really slow schema information queries on SQL Server 2005

I have a database with a rather large number of tables, about 3500, and an application that needs to access a table list.
On a particular server this takes over 2.5 min to return.
EXEC sp_tables @table_type = "'TABLE'"
I know there are faster ways to do that but sadly I'm not in a position to modify the application and need to find a way to push it below 30 seconds so the application doesn't throw timeout errors.
So: what, if anything, can I do to improve the performance of this stored procedure within SQL Server?
I have seen these stored procedures run slowly if you do not have the VIEW DEFINITION permission granted on your user account. From what I read, this causes an extra security check to occur, slowing down the query.
Maybe a SQL guru can comment on why, if this does help your problem.
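If you want to test that theory, granting the permission is a one-liner; the names below are placeholders for your database and the account the application connects as:

USE YourDatabase;                      -- placeholder database name
GRANT VIEW DEFINITION TO app_login;    -- placeholder database user/role name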
Well, sp_tables is system code and can't be changed (you could work around that in SQL Server 2000, but not in SQL Server 2005+).
Your options are
Change the SQL
Change command timeout
Bigger server
You've already said "no" to the obvious solutions...
You need to approach this just like any other performance problem. Why is it slow? Namely, where does it block? Disk IO? CPU? Network? Lock contention? The scientific method is to use a methodology like Waits and Queues, or its newer SQL 2008 equivalent Troubleshooting Performance Problems in SQL Server 2008. The lazy way is to simply check the wait_type, wait_time and wait_resource columns in sys.dm_exec_requests for the session executing the sp_tables call. Once you find out what is blocking the execution, you can proceed accordingly.
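For example, from a second session you could run something like this while sp_tables is executing (the session_id value is just a placeholder for the connection making the slow call):

-- Shows what the slow session is currently waiting on, and who is blocking it.
SELECT session_id, status, wait_type, wait_time, wait_resource, blocking_session_id
FROM sys.dm_exec_requests
WHERE session_id = 53;   -- replace 53 with the session_id running sp_tables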
If I had to venture a guess, you'll discover contention as the cause: other sessions are locking table metadata exclusively and thus blocking the execution of sp_tables, which has to wait until all the operations ahead of it finish.

MS Access 2007 and SQL Server 2000

I've recently been upgraded to Office 2007. I have several Access databases (that I've kept in the Access 2000 format for several reasons) that are linked to SQL Server 2000 databases. I have dozens of queries in these databases that I use often. I create new queries daily, sorting, summarizing and generally analyzing the data.
Since the upgrade, some queries take an extremely long time to complete (minutes rather than seconds), and one new one I've tried to run doesn't complete at all; I have to End Task on Access. It's a rather simple query: it joins 3 tables and sorts on one of the fields. I do this ALL THE TIME, and now it appears I can't.
I've searched for discussions of similar problems, but haven't seen specific recommendations.
Any ideas?
I would suggest deleting all your ODBC linked tables and recreating them from scratch as a starting point.
If your queries do not need to make any changes to the data, you may find that converting them to SQL pass-through queries speeds them up considerably. Note that these queries are not parsed by the Jet DB engine but are sent directly to the server, bypassing any linked tables.
You will have to use MS SQL syntax and lose the QBE grid, though, and the result will be read-only.
If you need to update data, then maybe stored procedures are the way to go.
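As a rough sketch of that idea (the procedure, table, and column names are invented), the server-side piece might look like this; an Access pass-through query would then just contain the EXEC call shown in the comment:

-- A pass-through query in Access would contain: EXEC dbo.usp_UpdateOrderStatus 12345, 'Shipped'
CREATE PROCEDURE dbo.usp_UpdateOrderStatus
    @OrderID int,
    @Status  varchar(20)
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE dbo.Orders
    SET    Status = @Status
    WHERE  OrderID = @OrderID;
END;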
When I converted to SQL Server backend, I used SQL Server Migration Assistant. I recommend it highly. It's very good at what it does.
Having said that, I assume you're using linked tables in your FE. I convert slow-running queries by copying the SQL from Access and pasting it into a new query window in SQL Server Management Studio. Then, working through the syntax changes one at a time, I convert the query to T-SQL and save it as a view with the same name as the query in Access.
I have a little routine that then renames the Access query with a "Local_" prefix and creates a linked table entry pointing to the view on SQL Server. You'll find that a query that used to run for minutes will run in seconds this way. You can, of course, do this manually.
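Purely as an illustration (the names are made up, not from any real database), a converted Access query typically ends up as a plain T-SQL view like this, which the Access FE then links to; any sorting can stay in the Access side:

-- Hypothetical view; the joins and columns depend on the original Access query.
CREATE VIEW dbo.qryOpenInvoices
AS
SELECT c.CustomerName,
       i.InvoiceNumber,
       i.InvoiceDate,
       i.Amount
FROM dbo.Invoices  AS i
JOIN dbo.Customers AS c
  ON c.CustomerID = i.CustomerID
WHERE i.Paid = 0;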
SQL Server Migration Assistant, by the way, will convert many queries (it doesn't try to convert action queries, only select queries...) with little or no intervention.
I would use Access Data Projects with SQL Server 2000. It works great when your SQL backend is that old.