VB.NET: Display SQL Server Table Row Count in Real Time?

I've got an app at work I support that uses a SQL Server 2008 DB (vendor created/supported app). One of the things this app does is load records into ETL tables in the DB all day to be moved to a data warehouse.
Unfortunately, the app is having lots of problems with the ETL tables right now and the vendor has no monitoring solution. I have no access to the DB to add a stored procedure or anything, but I can run a COUNT(*) on the ETL tables to see if things are getting out of hand.
I have managed to write a VB.NET app that returns the COUNT of rows in these ETL tables so I can keep an eye on things, but it only returns the counts when I fire a button event.
I've never written an app that runs/updates "in real time" before, and I'm looking for some guidance on how I can create an app that would update these COUNT values in as close to real time as possible.
Any guidance would be greatly appreciated!

You could achieve that by writing a console application, since you seem used to .NET.
The console application runs and you can read the values by using Console.WriteLine() and Console.ReadLine() in your Program.cs (or the equivalent VB.NET module). Or you could update the record counts in a table or send an email.
As for real time: the console application can be scheduled to run, e.g. by creating a task in Task Scheduler or a SQL Agent job, or it can be run by launching the exe directly. As a rough example, you could send yourself an email every 10 minutes by creating a task that launches the console app every 10 minutes.
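A minimal sketch of such a console app (the table name dbo.EtlStaging and the connection string are placeholders, not from the original question):

Imports System
Imports System.Data.SqlClient

Module Program
    Sub Main()
        ' Hypothetical connection string and table name - replace with your own.
        Dim connStr As String = "Server=myServer;Database=myDb;Integrated Security=true"
        Using conn As New SqlConnection(connStr)
            conn.Open()
            Using cmd As New SqlCommand("SELECT COUNT(*) FROM dbo.EtlStaging", conn)
                Dim rowCount As Integer = CInt(cmd.ExecuteScalar())
                Console.WriteLine("{0:u}  dbo.EtlStaging: {1} rows", DateTime.UtcNow, rowCount)
            End Using
        End Using
    End Sub
End Module

Schedule the exe every few minutes and redirect the output to a log file, or extend Main to send the email instead.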

If you're using a Windows Forms app, just add in a Timer object that fires the SQL query off. As an added bonus, you could include fields on the form to control how often the timer fires to get the resolution that's right for you.
You can use a timer in console apps too, of course (System.Timers.Timer rather than the WinForms Timer control, since a console app has no message loop).
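A rough sketch of the Timer approach, assuming a form with a designer label named lblCount and the same placeholder table/connection string as the console example above:

Imports System.Data.SqlClient

Public Class CountForm
    ' WinForms Timer; Interval is in milliseconds.
    Private WithEvents tmrRefresh As New System.Windows.Forms.Timer()

    Private Sub CountForm_Load(sender As Object, e As EventArgs) Handles MyBase.Load
        tmrRefresh.Interval = 10000 ' refresh every 10 seconds
        tmrRefresh.Start()
    End Sub

    Private Sub tmrRefresh_Tick(sender As Object, e As EventArgs) Handles tmrRefresh.Tick
        ' Hypothetical connection string and table name - replace with your own.
        Using conn As New SqlConnection("Server=myServer;Database=myDb;Integrated Security=true")
            conn.Open()
            Using cmd As New SqlCommand("SELECT COUNT(*) FROM dbo.EtlStaging", conn)
                lblCount.Text = String.Format("{0} rows at {1:T}", cmd.ExecuteScalar(), DateTime.Now)
            End Using
        End Using
    End Sub
End Class

Wiring the interval to a NumericUpDown on the form would give you the adjustable resolution mentioned above.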

Related

Select Query from web application times out but completes with an error?

Background:
Two nights ago the old-as-hell and very poorly designed website for the company I work for was attacked by a bot that submitted 5,000+ phony orders. In the course of deleting all of those false orders from the database, SQL Management Studio crashed, and the application had to be stopped via Task Manager and restarted. After that I was getting optimistic concurrency control errors when trying to delete some of the fake records, and had to complete the cleanup via DELETE statement.
(Yes, I KNOW it's generally bad practice to delete records from the results pane, but for people like me who aren't actually programmers but get stuck with the IT work because we're the only ones who know how to find the on switch, it makes me less paranoid about deleting a record I didn't mean to.)
Ever since then, there is a specific page in the admin section of the site that takes a VERY long time to perform a SELECT query for a specific range. The query will complete if you sit there long enough, but here's a screenshot of the ColdFusion error box that comes up with it:
[Screenshot: ColdFusion error message]
I suspect that between the bot attack and Studio Express crashing in the middle of a DELETE query, part of the table is corrupted, which is why it exceeds the allowable time limit. I don't know if our webhost has a backup of the database (I've been in contact with them the last couple days).
What tools can I use to check for and repair errors on that table?

SQL Server Locking

I have an application that connects with a SQL Server database and cycles through batches of records to perform various tasks and then updates the database accordingly (i.e. "success", "error", etc...).
The potential problem I'm running into is that since it takes roughly a minute or so to get through each record (long story), if I have more than one user running the application there's a high chance of "data collisions", i.e. users trying to process the same records at the same time, which cannot happen if the process is to execute properly.
Initially, I thought about adding a LOCKED column to help the application determine whether a record was already opened by another user; however, if the app were to crash or be exited without completing the record it was on, that record would show as opened by another user indefinitely... right? Or am I missing an easy solution here?
Anyway, what would be ideal is if it were possible to have the application SELECT 100 records at a time, and "lock them out" on the database while the application processes them AND so that other users can run the application and SELECT a different set of 100 so as not to overlap. Is that possible? I've tried to do some research on the matter, but to be honest my experience in SQL Server is very limited. Thanks for any and all help!
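Not from this thread, but one common pattern for exactly this is to claim a batch atomically with UPDATE ... OUTPUT and the READPAST hint, so concurrent users skip rows that are already claimed. A sketch, assuming a hypothetical dbo.WorkQueue table with Status, ClaimedBy, and ClaimedAt columns:

Imports System.Data
Imports System.Data.SqlClient

Module BatchClaim
    ' Claims up to 100 unprocessed rows for this user in one atomic statement.
    ' READPAST makes the statement skip rows locked by other claimers, so two
    ' users can never pull the same record.
    Function ClaimBatch(conn As SqlConnection, userName As String) As DataTable
        Dim sql As String = _
            "UPDATE TOP (100) dbo.WorkQueue WITH (ROWLOCK, READPAST) " & _
            "SET Status = 'InProgress', ClaimedBy = @user, ClaimedAt = GETUTCDATE() " & _
            "OUTPUT inserted.* " & _
            "WHERE Status = 'Pending'"
        Using cmd As New SqlCommand(sql, conn)
            cmd.Parameters.AddWithValue("@user", userName)
            Dim batch As New DataTable()
            Using reader As SqlDataReader = cmd.ExecuteReader()
                batch.Load(reader)
            End Using
            Return batch
        End Using
    End Function
End Module

The ClaimedAt column also answers the crash concern: a cleanup job can reset rows stuck in 'InProgress' longer than some threshold back to 'Pending'.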

Stopping SQL code execution

We have a huge Oracle database and I frequently fetch data using SQL Navigator (v5.5). From time to time, I need to stop code execution by clicking on the Stop button because I realize that there are missing parts in my code. The problem is, after clicking on the Stop button, it takes a very long time to complete the stopping process (sometimes it takes hours!). The program says Stopping... at the bottom bar and I lose a lot of time till it finishes.
What is the rationale behind this? How can I speed up the stopping process? Just in case, I'm not an admin; I'm a limited user who uses some views to access the database.
Two things need to happen to stop a query:
The actual Oracle process has to be notified that you want to cancel the query
If the query has made any modification to the DB (DDL, DML), the work needs to be rolled back.
For the first point, the Oracle process that is executing the query checks from time to time whether it should cancel. Even when it is doing a long task (a big HASH JOIN, for example), I think it checks every 3 seconds or so (I'm looking for the source of this info; I'll update the answer if I find it). Now, is your software able to communicate correctly with Oracle? I'm not familiar with SQL Navigator, but I suppose the cancel mechanism should work as with any other tool, so I'm guessing you're waiting on the second point:
Once the process has been notified to stop working, it has to undo everything it has already accomplished in this query (all statements are atomic in Oracle, they can't be stopped in the middle without rolling back). Most of the time in a DML statement the rollback will take longer than the work already accomplished (I see it like this: Oracle is optimized to work forward, not backward). If you are in this case (big DML), you will have to be patient during rollback, there is not much you can do to speed up the process.
If your query is a simple SELECT and your tool won't let you cancel, you could kill your session (needs admin rights from another session) -- this should be instantaneous.
When you cancel a query, the Oracle client should send OCIBreak(), but this isn't implemented on Windows servers; that could be the cause.
Also, have your DBA check the value of SQLNET.EXPIRE_TIME.

Start SQL Server Jobs when field = specific value

I don't know if this is even possible, so I would appreciate any ideas, even those outside of SQL Server 2005, on how this might be accomplished. I have a linked server set up to a remote mainframe, and I have a simple import job that runs overnight. The problem is that the table on the mainframe that the import comes from is just a temporary report file that gets overwritten each time a user runs that report, sometimes with different parameters, so the data is always changing. One request was that the SQL job would run only when a specific user runs the report. The user who ran the report is stored as a field in the same mainframe report table that the import comes from. Setting up a scheduled run on the mainframe is not an option, since we don't control it, and having the owners set it up would be costly (don't ask me why).
Any ideas that would keep me from forcing the user to run the mainframe report at a specific time would be helpful.
Well, the only thing you could do from this side is to poll periodically and detect changes. You could set up a job that queries only the report version, timestamp, and author. The job runs every 5 minutes and triggers the import job when it detects a change. Not elegant, but it may be good enough.
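A rough sketch of that polling check, with every name hypothetical (linked server MAINFRAME, report table REPORTLIB..ORDERS, watermark table dbo.ImportLog, Agent job 'Mainframe Import'):

Imports System
Imports System.Data.SqlClient

Module PollReport
    ' Scheduled every 5 minutes (e.g. via Task Scheduler). Checks who last wrote
    ' the mainframe report and how fresh it is, and starts the import job only
    ' when the right user has produced a new copy.
    Sub Main()
        Using conn As New SqlConnection("Server=myServer;Database=myDb;Integrated Security=true")
            conn.Open()

            Dim author As String = Nothing
            Dim stamp As DateTime
            ' MAINFRAME.REPORTLIB..ORDERS is a hypothetical linked-server path.
            Using cmd As New SqlCommand( _
                "SELECT TOP 1 RPT_AUTHOR, RPT_TIMESTAMP FROM MAINFRAME.REPORTLIB..ORDERS", conn)
                Using reader As SqlDataReader = cmd.ExecuteReader()
                    If Not reader.Read() Then Return
                    author = reader.GetString(0)
                    stamp = reader.GetDateTime(1)
                End Using
            End Using

            ' dbo.ImportLog (hypothetical) holds the timestamp of the last import;
            ' the import job itself is expected to append a row to it.
            Dim lastImport As DateTime
            Using cmd As New SqlCommand( _
                "SELECT ISNULL(MAX(ImportedAt), '19000101') FROM dbo.ImportLog", conn)
                lastImport = CDate(cmd.ExecuteScalar())
            End Using

            If author = "TARGETUSER" AndAlso stamp > lastImport Then
                ' Kick off the existing SQL Agent import job.
                Using cmd As New SqlCommand( _
                    "EXEC msdb.dbo.sp_start_job @job_name = N'Mainframe Import'", conn)
                    cmd.ExecuteNonQuery()
                End Using
            End If
        End Using
    End Sub
End Module

Not elegant, as noted, but it turns "run when a specific user runs the report" into a 5-minute-latency trigger without touching the mainframe.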

WinForms ReportViewer: slow initial rendering

UPDATE 2.4.2010
Yeah, this is an old question but I thought I would give an update. So, I'm working with the ReportViewer again and it's still rendering slowly on the initial load. The only difference is that the SQL database is on the reporting server.
UPDATE 3.16.2009
I have done profiling and it's not the SQL that is making the ReportViewer render slowly on the first call. On the first call, the ReportViewer control locks up the UI thread and makes the program unresponsive. After about 5 seconds the ReportViewer will unlock the UI thread and display "Report is being generated" and then finally show the report. I know 5 seconds is not much but this shouldn't be happening. My coworker does the same thing in a program of his and the ReportViewer immediately displays the "Report is being generated" upon any request.
The only difference is that the reporting server is on one server and the data is on another server. However, when I am developing the reports within SSRS, there is no delay.
UPDATE
I have noticed that only the first load of the ReportViewer takes a long time; each subsequent load of the same or different reports loads fast.
I have a WinForms ReportViewer that I'm using in Remote processing mode that can take up to 30 seconds to render when the ReportViewer.RefreshReport() method is called. However, the report itself runs fast.
This is the code to setup my ReportViewer:
rvReport.ProcessingMode = ProcessingMode.Remote
rvReport.ShowParameterPrompts = False
rvReport.ServerReport.ReportServerUrl = New Uri(_reportServerURL)
rvReport.ServerReport.ReportPath = _reportPath
This is where the ReportViewer can take up to 30 seconds to render:
rvReport.RefreshReport()
I found the answer on other forums. MSDN explains that a DLL is searching for some VeriSign web server and it takes forever... There are two ways to turn it off: one is a checkbox in Internet Explorer, and the other is adding some lines to the app.config file of the app.
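For reference, the app.config change usually cited for this delay (the CLR's Authenticode publisher-evidence check trying to reach the certificate authority online) looks like the following; this is a sketch of the standard fix, not necessarily the exact lines the poster used:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <runtime>
    <!-- Skip the Authenticode publisher-evidence check at startup, which
         otherwise tries to contact the certificate authority online. -->
    <generatePublisherEvidence enabled="false"/>
  </runtime>
</configuration>

The Internet Explorer counterpart is unticking "Check for publisher's certificate revocation" under Advanced settings.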
You can pull a report in two modes, local and server. If you're running in local mode, it's going to pull both the data and the report definition onto your machine, then render them both. In server mode, it's going to just let SSRS do all the work, then pull back the information to render.
If you're using local mode, it could be a hardware issue. If you've got a huge dataset, that's a lot of data to store in memory.
Other than that, that's not a lot of info to go on...
Update: since you've noticed it's only the first call that takes a while, have you done any profiling to determine if the bulk of the work is done on the backend SQL calls or is spent in the actual report render?
If it's faster on subsequent calls, it's possible you're (incidentally) caching at one level or another. You can cache reports (http://www.sqlservercurry.com/2007/12/configure-report-to-be-cached-ssrs-2005.html) or it could be that the execution plan to return the data is being cached deep in SQL Server.
In summary of the various ideas already presented, it could be
startup time for the report viewer infrastructure on the client
cache loading time on the client
query execution time at the server
report rendering time at the server
Try running the report, closing down the client, restarting the client and running the report again. If the report is much faster the second time, repeat this experiment but load, run and unload another large application in between report runs.
If the second report run continues to be much quicker, then the difference you are seeing has more to do with the SQL Server's I/O cache than what's happening on the client. You can further test this by deliberately displacing the MSSQL cache by running a query that pulls a lot of data from tables that aren't used in the report.
All of the above is interesting but unimportant. If you want to ensure snappy report response Reporting Services provides extensive support for scheduled generation of reports, so that when the consumer requests the report, the only delay is network delivery.
If your users insist on reporting on up-to-the-minute (live) data, they'll either have to specify tighter constraint parameters or get used to waiting.
ReportServer always takes a while to wake up because it's running under IIS. There is a process timeout on each AppPool. We have the same issue with our ASP.NET application's report viewer. You could try increasing the AppPool keep-alive (idle timeout) settings in IIS.
See here:
http://www.sqlreportingservices.net/Ask/5536.aspx
http://www.developmentnow.com/g/115_2005_9_0_0_597422/First-run-of-reports-is-SLOW.htm
I'm assuming you're running SQL2005 SSRS of course.
One option is to upgrade to 2008 where SSRS no longer depends on IIS.
Thinking way out of the box: is the report server on a different machine from the one running the application? The network could be taking a long time to resolve "reportServerURL". Once resolved, the name would be cached, and hence subsequent calls would be quicker.
I have had this problem before with badly configured DNS servers. Try replacing "reportServerUrl" with "reportServerIPAddress" and see if the initial call to ReportViewer is any faster.
I was having this same problem.
I found out that changing the default printer (slow network here) fixed the problem.
The ReportViewer gets some information from the default printer, and since the network here is very slow, I was seeing about 10 seconds of delay.
Hope it helps.
Regarding the update that only the first load of the ReportViewer takes a long time, while subsequent loads of the same or different reports load fast:
You are set to run in server mode, which means the SSRS server needs to do the rendering, so the first time there will be a delay for one or all of the following reasons (these are the slowest of the bunch; there are others, but they are quicker):
DNS resolution: The URL needs to be resolved to an IP address. Once this is done it is cached locally which speeds it up.
ASP.NET/IIS needs time to warm up. There are all kinds of compilation and initial loading that must occur - once loaded, it will remain in the server's memory until you restart IIS or the default cleanup time occurs.
Reporting Services needs time to warm up in the same way as ASP.NET/IIS does.
To test for this, use a network monitor such as Netmon (if you are a Microsoft fan) or Wireshark (my recommendation) and watch the traffic from your machine to the server. You'll see the DNS request, then the HTTP requests go out, and the delay will be in the returning data. On the second call you will see the speed is vastly different in both the response and the DNS checks.
What you could do to prevent this is a warm-up script - I don't know of one for SSRS, but here is a link to a SharePoint one which would not be hard to change, since it has the exact same issues.
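A minimal warm-up along those lines (not the SharePoint script mentioned above, just an illustration): fire a throwaway HTTP request at the report server when your app starts, so IIS/SSRS spin up before the first real report.

Imports System.Net

Module WarmUp
    ' Best-effort request to wake up IIS/SSRS before the first report.
    ' Pass the same URL you later assign to ServerReport.ReportServerUrl.
    Sub WarmUpReportServer(reportServerUrl As String)
        Try
            Dim request As HttpWebRequest = CType(WebRequest.Create(reportServerUrl), HttpWebRequest)
            request.UseDefaultCredentials = True
            request.Timeout = 30000 ' don't hang forever if the server is down
            Using response As WebResponse = request.GetResponse()
                ' We only care that the server answered; the body is discarded.
            End Using
        Catch ex As WebException
            ' Warm-up is best effort; let the real report call surface errors.
        End Try
    End Sub
End Module

Call it (ideally on a background thread) as early as possible in application startup.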
It seems as though you are going after the SSRS report directly. You may want to hit the SSRS web service instead. That may improve your performance.
Here is a possible resolution for your problem:
Try to access the first report from the web before accessing any report with the application.
If the problem doesn't appear, you could make an application that "preloads" the first report, in order to allow Reporting Services to do their start-up.
I've seen this kind of solution in some demo applications from Microsoft. The applications were using Analysis Services and Reporting Services.
Good luck otherwise
To my knowledge, it's a problem Microsoft is finding tough to resolve.
The ReportViewer is only slow at first-time rendering of a report; subsequent reports load normally (a bit faster).
To help counter this, place a startup form with a label (Label1) and a Timer (Timer1) control. Set Label1.Text = "Please, wait (about 15 secs)" and Timer1.Interval = 3.
In the Form_Load event of the startup form, call Timer1.Start().
In the Tick event of Timer1, place: frmMyReportForm.reportViewer1.SetDisplayMode(Microsoft.Reporting.WinForms.DisplayMode.Normal)
"frmMyReportForm" is any form in your project containing a ReportViewer control.
All the delays will be caught here so that when you generate the actual report, there will be no delays.
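Pulling those steps together into a sketch (Label1, Timer1, and frmMyReportForm as named above; note the WinForms Timer interval is in milliseconds, so 3000 rather than the 3 given in the post is assumed here):

Imports Microsoft.Reporting.WinForms

Public Class StartupForm
    Private Sub StartupForm_Load(sender As Object, e As EventArgs) Handles MyBase.Load
        Label1.Text = "Please, wait (about 15 secs)"
        Timer1.Interval = 3000 ' milliseconds
        Timer1.Start()
    End Sub

    Private Sub Timer1_Tick(sender As Object, e As EventArgs) Handles Timer1.Tick
        Timer1.Stop()
        ' Touching a ReportViewer forces its one-time initialization here,
        ' so the first real report does not pay the startup cost.
        frmMyReportForm.reportViewer1.SetDisplayMode(DisplayMode.Normal)
        Me.Close()
    End Sub
End Class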
I hope this might be helpful to my fellow developers.