cshtml load time issue - asp.net-mvc-4

I'm investigating a performance issue...
I have a log statement just before the View("Index", Model) call to capture timing/latency information.
When I compare that log entry's timestamp with the IIS log, there is always a difference of a few seconds.
That means the time is being spent rendering the view.
What are some pointers for debugging this?
The .cshtml doesn't make any server calls; it just loops over the model data (30 items) and shows 1-2 lines of information for each.

This may be caused by having multiple view engines configured for your application. By default, ASP.NET MVC loads both WebFormViewEngine and RazorViewEngine, with WebFormViewEngine taking priority. So MVC first searches your Views directory structure for WebFormViewEngine matches and, only if it doesn't find any, searches again with RazorViewEngine. If you aren't using any Web Forms views, this is wasted work. You can disable it by running this code in your application's bootstrap (e.g. Global.asax):
ViewEngines.Engines.Clear();                    // remove all registered engines, including WebFormViewEngine
ViewEngines.Engines.Add(new RazorViewEngine()); // keep only Razor, so .aspx/.ascx locations are never probed
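For context, here is a minimal sketch of where those two lines typically go in an MVC 4 Global.asax.cs. RouteConfig is the name used by the default project template; adjust to whatever your bootstrap already registers.

using System.Web.Mvc;
using System.Web.Routing;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        RouteConfig.RegisterRoutes(RouteTable.Routes);

        // Drop the default engines (WebFormViewEngine + RazorViewEngine) and
        // register Razor only, so view lookup stops probing .aspx/.ascx locations.
        ViewEngines.Engines.Clear();
        ViewEngines.Engines.Add(new RazorViewEngine());
    }
}

You can confirm the effect by forcing a missing view: the "searched locations" error page should now list only the Razor paths.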

Related

VB.Net web app Delay in breakpoint hit

I am working on a VB.Net web application. If I place a breakpoint in the Page_Init event of a page and hit reload in the browser, it takes around 5-6 seconds to hit that breakpoint, whereas in my other application it hits almost instantly.
Any help on this would be greatly appreciated. Thanks in advance.
I found the problem. My application uses a Microsoft Access DB. The database files (.mdb files) were placed inside the bin folder because one of them is required by a third-party DLL to sit in the same location as the DLL. I placed a copy of the .mdb file in App_Data and changed the connection string accordingly, and got a tremendous speed improvement.
This behavior was really strange: it wasn't the DB call and result that were slow, it was taking time just to hit the server method.
Anyway, this solution made my day. I'm posting it here in case someone else faces the same issue and this helps.
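For anyone making the same change, here is a hedged sketch of the connection-string adjustment, shown in C# for brevity (the VB.Net equivalent is a straight translation). The file name is a placeholder; the point is simply to resolve App_Data at runtime instead of pointing at bin.

// MyDatabase.mdb is a placeholder name; MapPath resolves the App_Data folder at runtime.
string dbPath = System.Web.Hosting.HostingEnvironment.MapPath("~/App_Data/MyDatabase.mdb");
string connectionString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + dbPath + ";";

using (var connection = new System.Data.OleDb.OleDbConnection(connectionString))
{
    connection.Open();
    // ... run the same queries as before
}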

SSRS 2008: How to generate multiple reports immediately?

I'm building a site that brings up SSRS reports by opening new windows with the report URL and report parameters. I can currently open a window for each report the users want to run.
However, they also want the option to save the reports to a file share or SharePoint site of their choice, instead of having a bunch of browser pop-up windows for each report.
I understand I can use the SSRS web services to set up a schedule (to run a couple of minutes from the time of the request) that saves those files to a file share (or SharePoint), but that seems like a hack for a one-time generation of reports onto a file share or SharePoint.
Is there any other way to generate a bunch of reports one time, immediately, without having to set them up on a scheduler that runs a couple of minutes after the request?
"Note, they DO NOT want one report that has all the reports in it, these are seperate reports that are already built, and they want one file/window per report."
Not sure what you want when you say you want them all at once but one file window/per report? What presentation layer is showing this? You can make three seperate web calls at the same time to the webservice instead of the hosting site:
http://(servername)/(ReportServer)/PathtoReport1
http://(servername)/(ReportServer)/PathtoReport2
http://(servername)/(ReportServer)/PathtoReport3
instead of
http://(servername)/(Reports)
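If those calls need to be made programmatically rather than from a browser window, a rough C# sketch using SSRS URL access is below. The server name, report paths, output location, and the choice of PDF are all placeholders; rs:Command=Render and rs:Format are standard URL-access parameters.

using System.Net;

class ReportExporter
{
    static void Main()
    {
        string[] reportUrls =
        {
            // rs:Format asks the report server to render straight to the named format.
            "http://servername/ReportServer?/PathtoReport1&rs:Command=Render&rs:Format=PDF",
            "http://servername/ReportServer?/PathtoReport2&rs:Command=Render&rs:Format=PDF",
            "http://servername/ReportServer?/PathtoReport3&rs:Command=Render&rs:Format=PDF"
        };

        using (var client = new WebClient())
        {
            client.UseDefaultCredentials = true; // assumes Windows auth against the report server

            for (int i = 0; i < reportUrls.Length; i++)
            {
                // One file per report, written wherever the user asked for them.
                client.DownloadFile(reportUrls[i], @"\\fileshare\reports\Report" + (i + 1) + ".pdf");
            }
        }
    }
}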
If you just mean 'separate pages' in an Excel workbook, you can do that with one report nesting other subreports: build a master report that uses rectangle objects (with page-break properties set) and place a subreport inside each rectangle.
Or you could make an HTML page that references the calls three separate times in 'form' objects, each doing a 'post':
<form id="SSRSRender" action="http://(servername)/(ReportServer)/(report)" method="post" target="_self">
"However, they also want the option to save the reports to a file share or Sharepoint of their choice, instead of having a bunch of browser window pop-ups for each report.
I understand I can use SSRS web services to setup a schedule (to run in a couple minutes from the time of request) which can save those files to a file share (or Sharepoint) but that seems like a hack to get a one time generating of reports onto a file share or sharepoint."
That's not a hack; using the built-in web service scheduler is the preferred way to save a file. Once a report is hosted on a server running SSRS, it can have configurations set up for SMTP send-outs, file saves, and snapshots.
If that is not enough, you can create your own proxy classes in C# or VB.NET and build your own front end that talks to SSRS through SOAP requests to the web service.
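A very rough sketch of that route, assuming a proxy generated from ReportExecution2005.asmx (the class and member names below come from that generated proxy and may differ slightly in your project; server name, report paths, and the output share are placeholders):

using System.IO;
using System.Net;

class ReportSoapExporter
{
    static void Main()
    {
        var rs = new ReportExecutionService
        {
            Url = "http://servername/ReportServer/ReportExecution2005.asmx",
            Credentials = CredentialCache.DefaultCredentials
        };

        string[] reports = { "/Reports/Report1", "/Reports/Report2", "/Reports/Report3" };

        foreach (string reportPath in reports)
        {
            // Each report gets its own execution; the header carries the execution ID.
            var execHeader = new ExecutionHeader();
            rs.ExecutionHeaderValue = execHeader;

            ExecutionInfo execInfo = rs.LoadReport(reportPath, null);
            execHeader.ExecutionID = execInfo.ExecutionID;

            string extension, mimeType, encoding;
            Warning[] warnings;
            string[] streamIds;

            // Render immediately (no schedule involved) and write the result out.
            byte[] result = rs.Render("PDF", null,
                out extension, out mimeType, out encoding, out warnings, out streamIds);

            File.WriteAllBytes(@"\\fileshare\reports\" + reportPath.Replace('/', '_') + "." + extension, result);
        }
    }
}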

A process monitor based on periodic sql selects - does this exist or do I need to build it?

I need a simple tool to visualize the status of a series of processes (ETL processes, but that shouldn't matter). This process monitor needs to be customizable with color coding for different status codes. The plan is to place the monitor on a big screen in the office, making any faults instantly visible to everyone.
Today I can check the status of these processes by running SQL statements against the underlying tables in our Oracle database. The output of these queries is the above-mentioned status code for each process. I'm imagining using these SQL statements, run periodically (say, every minute or so), as the input to this monitor.
I've considered writing a simple web interface for doing this, but I'm thinking something like this should exist out there already. Anyone have any suggestions?
If you're just displaying on one workstation, another option is SQL Developer Custom Reports. You would still have to fire up SQL Developer and start the report, but custom reports have a setting so they can be refreshed at a specified interval (5-120 seconds). Depending on the 'richness' of the output you want, you can either:
- Create a simple table report (Style = Table) and paste in one of the queries you already use as a starting point, or
- Create a PL/SQL block that outputs HTML via DBMS_OUTPUT.PUT_LINE statements (Style = plsql-dbms_output) and get as creative as you like with formatting, colors, etc. using HTML tags in the output. I have used this to create bar graphs showing the progress of v$Long_Operations. A full description and screenshots are available here: Creating a User Defined HTML Report in SQL Developer.
If you just want to get some output moving you can forgo SQL Developer, schedule a process that uses your PL/SQL block to write HTML output to a file, and use a browser to display the generated output on your big screen. Alternatively, make the file available via a web server so others in your office can bring it up. Periodically regenerate the file, and make sure to add a refresh meta tag to the page so browsers will periodically reload it.
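Here is a minimal sketch of that "regenerate an HTML file and let the browser refresh it" idea. The answer above uses a PL/SQL block; this version is in C# only because that is the language used elsewhere in this thread, and the table name, column names, connection string, output path, and the ODP.NET managed driver are all assumptions.

using System.IO;
using System.Text;
using Oracle.ManagedDataAccess.Client;

class StatusPageGenerator
{
    static void Main()
    {
        var html = new StringBuilder();
        // The refresh meta tag makes browsers reload the generated page every 60 seconds.
        html.Append("<html><head><meta http-equiv=\"refresh\" content=\"60\"></head><body><table>");

        using (var conn = new OracleConnection("User Id=monitor;Password=...;Data Source=mydb"))
        using (var cmd = new OracleCommand("SELECT process_name, status_code FROM etl_process_status", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    string name = reader.GetString(0);
                    string status = reader.GetString(1);

                    // Color-code each row by status; the mapping here is illustrative only.
                    string color = status == "OK" ? "#c0ffc0" : "#ffc0c0";
                    html.AppendFormat("<tr style=\"background:{0}\"><td>{1}</td><td>{2}</td></tr>",
                        color, name, status);
                }
            }
        }

        html.Append("</table></body></html>");

        // Write where a web server (or the big-screen browser) can pick it up;
        // run this on a schedule, e.g. every minute.
        File.WriteAllText(@"\\webserver\share\etl_status.html", html.ToString());
    }
}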
Oracle Application Express is probably the best tool for this.
I would say roll your own dashboard. It depends on your skill set, but I'd do a basic web app in Java (Spring or some MVC framework; I'm not a web developer, but I know enough to create a basic, functional dashboard). Since you already know the SQL needed, it shouldn't be difficult to put together, and you can modify it as needed in the future. I'd keep it simple: you don't need middleware, single sign-on, or fancy views/charts.

Nhibernate Profiler - Shows no information other than "session"?

So I am having problems getting NHibernate integrated into my MVC project. I therefore installed NHProf and initialized it in the Global.asax.cs file (NhibernateProfiler.Initialize();).
However, all I can see in NHProf is a session # and the time it took to come up. Selecting it, or doing anything else, doesn't show me any information about the connection to the database, or any information at all in the other windows: Statements, Entities, Session Usage.
The Session Factory Statistics window only shows the start time and execution time, and that's it.
Any thoughts?
Do you have any custom log4net configuration? I'm just thinking that it might be overwriting NHProf's log4net listener after startup. If you refresh the page (and hence start another session*), does NHProf display another session start? Also verify that your HibernatingRhinos.Profiler.Appender.dll (or HibernatingRhinos.Profiler.Appender.v4.0.dll if you're on .NET 4) is the one shipped with your current version of NHProf.
* I'm assuming you're using session-per-request since this is a web app.
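In case the ordering is the culprit, here is a minimal sketch of one arrangement to try in Application_Start, assuming the standard appender class HibernatingRhinos.Profiler.Appender.NHibernate.NHibernateProfiler; treat it as a guess to test, not a confirmed fix.

protected void Application_Start()
{
    // Configure log4net first so a later Configure() call cannot replace the profiler's appender.
    log4net.Config.XmlConfigurator.Configure();

    // Initialize NHProf before any ISessionFactory is built so it can hook NHibernate's logging.
    HibernatingRhinos.Profiler.Appender.NHibernate.NHibernateProfiler.Initialize();

    // ... build the session factory and register routes after this point.
}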

Cache data in SQL CE database

Background
I have a SQL CE database that is constantly updated (every second).
I have a (web) application that lets a user look at the data in real time. At some point the user can click a "take a snapshot" button, and the snapshot opens in a different window.
That window has "print" and "download" buttons that either generate a page for printing or stream the data as a CSV file; the same data snapshot has to be used for both, i.e. I can't go back to the DB to get the latest data.
Details
The SQL CE database is exposed through a WCF web service.
A snapshot consists of up to 500 records, 10 columns each. An expiration time of 2 hours on the snapshot is sufficient.
It is a low-traffic application, so I don't expect more than a few (5) connections at the same time.
Losing a snapshot is not a big deal; the user can simply generate a new one.
The database is accessed by a self-hosted WCF web service using LINQ to SQL.
The web site is ASP.NET MVC hosted on UltiDev Cassini.
The database and web site will most likely be on the same box when deployed. The entire app is intranet bound.
Problem
I need to cache the snapshot of the data at the moment the user presses the "take a snapshot" button, so that I can use the same data to generate the print page or the file for download.
Solution 1:
Each time there is a need for a snapshot, I will create a table in the database. Since there are no temp tables in SQL CE, I will need to clean it up myself.
Solution 2:
Cache the snapshot in memory on either the DB server or the web server.
Question:
Is there anything wrong with the proposed solutions? Any different suggestions?
One consideration is the typical usage pattern: do most snapshots eventually get printed or exported (or both)?
If so, we might as well "get it in memory" (temporarily) in the form of a non-blocking (asynchronous) select from the device to the server. That way the data will already be there, or well on its way, when the user decides to use it.
If, on the other hand, many snapshots end up not being used, Solution #1 seems quite OK (maybe the table could be named after the account/user, guaranteeing "self clean-up" based on the number of snapshots a user can maintain at a given time; though it seems to be just one here, with even some tolerance for losing it).
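If Solution #2 (an in-memory cache on the web server) is chosen instead, a minimal sketch using MemoryCache with the 2-hour expiration mentioned in the question is below; the class name and key scheme are illustrative, not from the original post.

using System;
using System.Runtime.Caching;

public static class SnapshotCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static string Store(object snapshot)
    {
        string key = "snapshot:" + Guid.NewGuid().ToString("N");
        Cache.Add(key, snapshot, DateTimeOffset.Now.AddHours(2)); // expire after 2 hours
        return key; // hand this key to the snapshot window; print/download pass it back
    }

    public static object Retrieve(string key)
    {
        return Cache.Get(key); // null once expired; the user simply takes a new snapshot
    }
}

Losing the cache on an app-pool recycle matches the "losing a snapshot is not a big deal" constraint, so no persistence is needed.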
500 rows by 10 columns isn't really very large at all. For the sake of simplicity in this case, I might just generate the CSV data at the same time I generate the initial snapshot page, and then place the CSV data in a hidden field in the snapshot page. The "Print" and "Download CSV" buttons would then POST the form that contains the CSV data to a Print page that generates the printable version from the posted CSV data, or a page that streams the CSV directly back to the client's browser, respectively. This way, at least, you wouldn't have any clean-up issues to deal with, and you avoid having to cache something on the server (either in the cache proper or in the database) that might well end up never being used at all.
If you cached the CSV data in a hidden field client-side, you could even handle both the printing and the CSV display completely client-side with JavaScript, although I don't know whether that's worth the trouble.
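For the server-side part of the hidden-field approach, a minimal ASP.NET MVC sketch is below; the controller, action, and field names are illustrative, not from the original post.

using System.Text;
using System.Web.Mvc;

public class SnapshotController : Controller
{
    // Receives the CSV that was stored in the hidden field on the snapshot page
    // (e.g. <input type="hidden" name="csvData" ... />) and streams it back as a download.
    [HttpPost]
    public ActionResult DownloadCsv(string csvData)
    {
        return File(Encoding.UTF8.GetBytes(csvData ?? string.Empty), "text/csv", "snapshot.csv");
    }
}

The print page would be a second action that takes the same posted field and renders it into a printable view.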