Has anyone tested UsePhysicalViewsIfNewer on a big system with lots of Views (thousands)?
If it has to check the disk each time, doesn't that make it slower? Does switching it off have any appreciable impact?
It's designed to be a dev-time feature (in particular if you're using RazorGenerator.MsBuild). It's meant to stop you from having to rebuild the application every time you change a view. I haven't tried using it in a real-world application, but turning it off in a production app should be the right thing to do.
I want to compare the performance of some web frameworks (Ruby on Rails and ASP MVC3), but I don't know how to get started. Should I measure how fast each framework runs a 10k-iteration loop, or how fast it renders 10k lines of HTML? Are there maybe programs that can help you with this? Also, how can the server load be monitored? Any help is appreciated!
Thijs
With respect, this is an unanswerable question. Is a Porsche faster than a Prius? Well, no, not when the Porsche is in the shop :-).
The answer depends on what you're trying to accomplish, how you do it, and how you code it. For example, Rails goes out of its way to transparently cache as much as it can, and then makes it trivially easy to cache stuff on your command. Of course there's a way to do the same in ASP MVC3, but is it as easy?
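To give a flavor of what I mean by "trivially easy", here's a rough Rails 3 sketch with made-up controller and model names (the ASP.NET MVC counterpart would be the OutputCache attribute):

    # Rough sketch; ProductsController and Product are hypothetical names.
    # Requires config.action_controller.perform_caching = true (on by default in production).
    class ProductsController < ApplicationController
      # Action caching: the rendered response for #show is stored and served
      # from the cache on later requests until it expires.
      caches_action :show, :expires_in => 10.minutes

      def show
        @product = Product.find(params[:id])
      end

      def stats
        # Low-level caching: compute the expensive value once, reuse until expiry.
        @summary = Rails.cache.fetch("products/summary", :expires_in => 5.minutes) do
          Product.group(:category).count
        end
      end
    end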
Can you find, hire, and train a suitable team that knows how to use the framework? What's the culture of the organization (Windows or Unix)? I could write a really fast application in MS-Access and the same application poorly in Rails against a high-performance database, and the MS-Access app would win. It's far from a given that an application will be written well, optimized, or whatever.
These days, a well-written application is typically performance bound on data I/O, and if this is the case, then it's which database you use that might matter. The loop-test you propose would test almost nothing, unless you're writing an application that calculates pi to the billionth place, or something.
I am sure there are published benchmarks of application frameworks available, but again, they need to make assumptions about what the application actually has to do.
The reality is that any reasonable framework (which includes both of the two you mention) is likely to be as fast as necessary for most scenarios, and again, what you do, and how you architect and implement it are the far more likely culprits for performance problems.
Once you do choose, there's a great (awesome) tool called NewRelic RPM which works with several frameworks -- I use it with Rails, and it gives you internal metrics at a level of detail that is beyond belief.
I don't mean to be glib, or unhelpful. But this is a little bit of a sore spot for me -- in so many cases people say "we should use foo instead of bar because foo's faster", and weeks go by as bar is replaced by foo. And then there are little incompatibilities. And an unexpected bug. And then, well, for some reason the new one is a little slower. And then after it gets optimized, it's finally just as fast.
I'll step down from my soapbox now :-)
I cached my control, but there is still a long pause. What could I do to make this go even quicker?
What are some general tips for improving Silverlight application loading speed?
If you use Silverlight Toolkit themes, remove them from the project and measure the improvement. In our case, we noticed a major improvement in load-time performance just by doing that. I subsequently copied some of the styles into our own resource files rather than referencing the theming toolkit. Apologies if this does not apply in your case.
I'm developing a Rails 3 app deployed on Heroku which I would like to optimize. I've explored different solutions such as query_reviewer and New Relic.
I couldn't make query_reviewer work with Rails 3.0.1 (also, I had to switch to MySQL, because PostgreSQL is not supported).
Regarding New Relic, it looks like a great free tool, but it works only in production. I first need to improve many DB queries in development before getting to tune the app in production.
So none of these tools fits my needs.
Any advice? Maybe I should just rely on log traces and reduce the number of SQL queries?
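For example, looking at the development log, I suspect a lot of the cost comes from N+1 queries that eager loading could collapse; a hypothetical sketch (Post and Comment are made-up models):

    # Before: one query for the posts, then one extra query per post for its comments (N+1).
    posts = Post.limit(20)
    posts.each { |post| puts post.comments.size }

    # After: eager-load the association so all comments come back in one additional query.
    posts = Post.includes(:comments).limit(20)
    posts.each { |post| puts post.comments.size }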
You want to find out which activities aren't absolutely necessary and would save a good amount of time if you could "prune" them?
Forgive me for being a one-track answerer, but there's an easy way to do that, and it's easy to demonstrate.
While the code is running slowly and making you wait, manually interrupt it with Ctrl-C or whatever, and examine the stack trace. Do this a few times.
Anything you see it doing on more than one stack trace is responsible for a substantial percentage of the time, and it doesn't really matter exactly how much. If it's something you could prune, pruning it saves you roughly that fraction of the wait.
If the efficacy of this method seems doubtful because it's low-tech, that's understandable, but in fact it can quickly find any problem any profiler can find.
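If interrupting by hand is awkward (say, inside a Rails server process), the same idea can be wired up with a signal handler; a rough sketch, with the signal chosen arbitrarily:

    # Poor man's profiler: send the process SIGQUIT a few times while a slow
    # request is in flight (kill -QUIT <pid>) and see which frames keep showing up.
    Signal.trap("QUIT") do
      $stderr.puts "---- stack sample at #{Time.now} ----"
      $stderr.puts caller.join("\n")   # backtrace at the point the handler fired
    end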
I found that New Relic has a Development mode, which looks like an ideal setup for optimizing an application in development phase: http://support.newrelic.com/kb/docs/developer-mode
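In case it helps someone else, the setup is roughly this (a sketch; the exact config keys and the /newrelic page depend on the agent version):

    # Gemfile: make the agent available in development so its local page works.
    group :development do
      gem 'newrelic_rpm'
    end

    # config/newrelic.yml (excerpt), as plain YAML:
    #   development:
    #     developer_mode: true
    #
    # With developer mode on, older agent versions serve per-request timings
    # at /newrelic in the running development app.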
I tried AIR for a few applications such as TweetDeck and eBay; however, they are prohibitively slow. I'm using an Ubuntu 8.04 system.
Is this a common issue?
My first thought was that AIR should be faster than web browsing:
AIR only needs to fetch the "data" and can store the "format" locally,
and the dynamic effects are handled by Flex rather than JavaScript, which should be faster.
What's your opinion?
It's a known issue: Flash/Flex for Linux is not as fast as the Windows version, and AIR can't help much with this; it's just a shell for executing Flex content.
This question is really for SuperUser - I vote for moving it there.
Well, there are numerous reasons you could be wrong:
Time to load the AIR environment on your system may be slow.
Browser viewing is typically optimised; different parts are downloaded independently.
AIR may do miscellaneous things that slow it down (writing to disk, etc.).
Rendering in AIR may be more complex, and hence slower.
It's really hard to make a judgement like this though, because like anything, it depends on how well the implementation is written.
Obviously though, if it's slow for you then it's just slow. Go with whatever you want.
I'm not quite sure what your question is.
I've been with my current company for about four months now and I've noticed how several of our RnD scopes/documents use the term "lifecycle testing."
I've always thought that this term would mean the entire testing phase of a project, but the context of the term suggests that it instead is when the software is tested with "live" or "real" data in a staging environment as close to the production environment as possible.
This has led me to wonder if I have misunderstood the meaning of the phrase, in which case, can somebody explain what lifecycle testing is supposed to be or mean?
The lifecycle of software is its behaviour in the following situations:
Startup. Does it load correctly? Is it fast at startup? (This depends on what kind of software it is.)
Mid-life. Does it use much memory? Does it clean up memory? Does it do what it ought to do? (A sketch of this kind of check follows the list.)
Exiting. Does it clean up resources correctly? Does it close everything down properly?
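As a concrete illustration of the mid-life checks, a soak test can be as crude as polling the app's memory while it runs for hours; a rough Ruby sketch for a Unix system, where the process name and threshold are placeholders:

    # Crude soak check: poll the app's resident memory once a minute and flag growth.
    # "myapp" and the 500 MB threshold are placeholders for your own process and limits.
    pid = `pgrep -f myapp`.to_i
    abort "process not found" if pid.zero?

    loop do
      rss_kb = `ps -o rss= -p #{pid}`.to_i        # resident set size in kilobytes
      puts "#{Time.now}  RSS: #{rss_kb / 1024} MB"
      warn "possible leak: memory above 500 MB" if rss_kb > 500 * 1024
      sleep 60
    end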
Lifecycle testing is very important for server applications, where it's especially focused on the "mid-life" part (not an official term, by the way). Server apps must never crash while doing something important, and if they do, they shouldn't bring down the complete system.
The reading of "lifecycle" as "live" or "real" data isn't quite right; it's more about the software being "alive" than about "live" data.
For instance, I've built a Flash client application, a "billboard" displayed on a large screen, and I lifecycle-test it:
Graphics: does everything show up correctly? Not just in the first few minutes, but even after 12 hours without restarting the app.
Auto-update, does that work?
etc.