Roslyn Recompile Slow - asp.net-core

I have seen many videos of code changes to a controller and then refreshing the page showing the update very quickly (1-2 seconds) and everyone is always talking about how fast Roslyn is.
I have just installed Visual Studio "14" CTP3, created a web application, hit Run, and then edited the message that the Contact action method returns.
When I hit refresh in my browser, the page takes a few seconds to load (the first time; after that it's instant). This is the app pool starting back up and recompiling the code, but it seems a lot slower than what I have seen others experiencing.
Is anyone else seeing this? Could it be doing a full recompile rather than a partial recompile each time? Does anyone know how I can find out what is causing the slowness?
Thanks

I think most of the time the speed of Roslyn is compared to the speed of the previous source-to-IL compiler. In previous versions of .NET you always precompiled everything to IL (a .NET DLL), which will always be somewhat slower than not precompiling. This performance loss might be mitigated by performance gains in the IL-to-native engine, which is currently also being optimized. Depending on how much needs to be compiled, JIT compilation might still be too slow for your situation, so you might need to precompile some files and/or libraries.
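To make the comparison concrete, here is a minimal sketch (not the actual ASP.NET vNext pipeline, just the underlying idea) of an in-memory source-to-IL compilation using the public Roslyn APIs; the assembly name "demo" and the source string are placeholders:

    // Minimal sketch: compile C# source text to IL in memory with Roslyn.
    // Assumes a reference to the Microsoft.CodeAnalysis.CSharp package.
    using System;
    using System.IO;
    using Microsoft.CodeAnalysis;
    using Microsoft.CodeAnalysis.CSharp;

    class RoslynDemo
    {
        static void Main()
        {
            var tree = CSharpSyntaxTree.ParseText(
                "public class C { public static int Answer() { return 42; } }");

            var compilation = CSharpCompilation.Create(
                "demo", // placeholder assembly name
                new[] { tree },
                new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) },
                new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

            using (var ms = new MemoryStream())
            {
                // Source goes straight to IL in memory; nothing hits the disk.
                var result = compilation.Emit(ms);
                Console.WriteLine(result.Success);
            }
        }
    }

Because the IL never round-trips through a DLL on disk, an edit-and-refresh loop can be much faster than the old build-then-run cycle.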
Looking at recently merged pull requests, and given that Roslyn is still in beta, you might see considerable performance gains between the current alpha release and RTM.
https://github.com/aspnet/KRuntime/pull/522
https://github.com/aspnet/KRuntime/issues/498

Related

New to app updating and debugging deprecated code, what is the best thought process for making changes and updates?

I am extremely new to mobile development and have recently been put in charge of updating a React Native app using Expo that hasn't been touched in about 18 months.
I am familiar with creating web applications or websites from scratch, and with debugging from a creation standpoint using up-to-date, non-deprecated code, but the idea of updating old code is foreign to me.
Can someone help share their thought process when it comes to this?
The mobile app itself runs alright, but it crashes and malfunctions from time to time.
Is the idea to go in and see what code is deprecated and which dependencies need to be updated? And/Or is the idea to try and find what is making the code crash and malfunction and leave deprecated code and old dependencies?
Thanks for your patience with what seems like a silly question.
And/Or is the idea to try and find what is making the code crash and malfunction and leave deprecated code and old dependencies?
It is typically recommended that crashes and malfunctions be resolved before updating libraries. Some libraries may introduce further crashes and errors after an upgrade due to breaking changes and deprecations, and it would then be difficult to separate code bugs from errors caused by the library upgrades.
If some of the bugs and crashes cannot be resolved without a library update, then fix as much as you can before updating the library.
Is the idea to go in and see what code is deprecated and which dependencies need to be updated?
This question is subjective. It depends on how much time can be spared for the upgrade process. Updating libraries can have other small benefits, like improving performance or reducing code size, but if a library that is used extensively in the code (e.g. react-navigation) has not been updated for many major releases, then updating, refactoring, and testing will take a long time, and it may not be worth that time if nothing is broken.
Furthermore, it is easier to update libraries regularly and frequently than to wait for the current version to be deprecated. That way you avoid having to deal with tons of breaking changes in one go.

Automatic rebuild/recompile of .Net Core 2.2 Web API after publishing

Whenever I deploy a newer version of my .NET Core 2.2 Web API, the first time the API is called by the consuming client (with a GET, for example), it takes a while to reply. Subsequent calls to the API are fast. I believe this is because the first time a .NET Core web app is called after its files have been updated, it has to do a quick re-build/re-compile (not sure what the correct term is).
Is there a way to get the API to be automatically re-built/re-compiled after publishing it? I'm using Visual Studio 2019 Community.
Thanks
There is no rebuild/recompile here. The published app is already compiled. You haven't given us any information about your hosting situation, but some means of hosting will take longer than others to restart. For example, if you're running in IIS, the App Pool literally shuts down and restarts, which takes some period of time. The app itself also has a startup period, namely running everything in Program and Startup. That should be relatively quick, but it can be slower depending on what you're doing there. For example, if you're migrating your DB at startup (an antipattern, but one many people use), that's obviously going to add some time to the startup.
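As an illustration of that antipattern (a sketch only; AppDbContext is a hypothetical EF Core context, not something from your app):

    // Program.cs, ASP.NET Core 2.x style.
    // using Microsoft.AspNetCore; using Microsoft.AspNetCore.Hosting;
    // using Microsoft.EntityFrameworkCore; using Microsoft.Extensions.DependencyInjection;
    public class Program
    {
        public static void Main(string[] args)
        {
            var host = CreateWebHostBuilder(args).Build();

            using (var scope = host.Services.CreateScope())
            {
                var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
                db.Database.Migrate(); // runs pending migrations before any request is served
            }

            host.Run();
        }

        public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
            WebHost.CreateDefaultBuilder(args).UseStartup<Startup>();
    }

Everything in Main runs before the host starts listening, so any work you put there is paid for on that slow first call.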
Also, .NET Core is a JITted runtime. The compilation process actually produces what's known as IL code. This IL code is then run on the runtime just in time, hence JIT. However, the IL code can be constructed to optimize for different run scenarios. What's good for startup speed isn't necessarily good for steady-state performance and vice versa. The runtime takes a balanced approach, optimizing for reasonable performance both at startup and in the steady state. Starting with .NET Core 2.1, the idea of tiered compilation was introduced. It's complicated, but it essentially amounts to compiling the application twice: once for optimal startup and once for optimal steady state. Then the different compilations are swapped in or out depending on the status of the application. This enables faster startup and better steady-state performance. It has to be turned on with a tag in your project file, though, if you want to use it:
<TieredCompilation>true</TieredCompilation>
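For context, that tag lives in a PropertyGroup of the project file; a sketch for a 2.2 project (the TargetFramework line is just surrounding context, not part of the feature):

    <PropertyGroup>
      <TargetFramework>netcoreapp2.2</TargetFramework>
      <TieredCompilation>true</TieredCompilation>
    </PropertyGroup>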
Finally, .NET Core 3.0 has made greater strides here, both in improving tiered compilation and in introducing the ability to compile to native. Native compilation removes the runtime entirely, so everything runs right on the metal. That's obviously going to give you the best performance, but it's also the most persnickety, as you have to compile for the exact destination, down to the architecture, OS, and even version. However, it's not yet available for ASP.NET Core apps. Still, it's something to keep on your radar.

Managing LOH in vb.net

I have a VB.NET application I distribute to my analysts. We assign perhaps a hundred 200MB images at a time. The app sequentially opens each large JPG image using GDI+, and the image is placed on the LOH. I scan each pixel looking for data; when done, I dispose the image and call GC.Collect(). But this does not clear the LOH, and as a result the LOH keeps growing until the app crashes. A workaround is to chop the assignment into 25-image chunks, but this is risky, as our analysts often do this late at night, perhaps after a beer or two.
The C# construct is
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce
but GCSettings does not appear to be available in VB.NET.
My VB.NET code is
loadedImage.Dispose()
MasterImage.Dispose()
GC.Collect()
Finalize()
But I cannot find a VB.NET method to force LOH compaction when I am done. Can you help?
GCSettings.LargeObjectHeapCompactionMode was added in .NET 4.5.1. It exists in VB.NET as well as C#. You're probably targeting a lower version of the .NET runtime. If you want access to this feature you will need to compile against a framework version of 4.5.1 or higher.
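Once you target 4.5.1 or higher, the construct works in VB.NET with essentially the same syntax; a minimal sketch based on the code in the question (loadedImage and MasterImage are the question's own variables):

    ' Requires targeting .NET Framework 4.5.1 or later.
    Imports System.Runtime

    ' ...
    loadedImage.Dispose()
    MasterImage.Dispose()

    ' Ask the GC to compact the LOH on the next blocking Gen 2 collection.
    ' The setting automatically resets to Default once that collection runs.
    GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce
    GC.Collect()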
This likely won't solve the underlying problem, however. Your leak may not even be where you think it is. Profiling your application with an allocation profiler is the best way to track down resource leaks. Without a Minimal, Complete, and Verifiable example, it is difficult to guess where your application may be going wrong.

Separate 'debug' and 'release' builds?

I think it's better to release the version of the software which your developers actually tested; I therefore tend to delete the 'debug' target from the project/makefile, so that there's only one version that can be built (and tested, and debugged, and released).
For a similar reason, I don't use 'assertions' (see also Are assertions always bad? ...).
One person there argued that the reason for a 'debug' version is that it's easier to debug; but I counter-argued that you may eventually want to support and debug whatever it is you released, so you need to build a release which you can, if necessary, debug ... this may mean enabling debug symbols and disabling some optimizations, even in the 'release' build.
Someone else said that "this is such a bad idea"; it's a policy I evolved some years ago, having been burned by:
Some developers' testing their debug but not release versions
Some developers' writing bugs which show up only in the release version
The company's releasing the release version after inadequate testing (is it ever entirely adequate?)
Being called on to debug the release version
Since then I've seen more than one other development shop follow this practice (i.e. not have separate debug and release builds).
What's your policy?
Having separate debug and release builds is a good idea, because it does make development easier.
But debug builds should be for development only, not for testing. You test release builds only. And you don't use developers to test those builds, you use testers.
It's a simple policy that gives the best of both worlds, IMO.
Edit: In response to a comment, I think it's obvious that debug and release builds (can) generate different code. Think "-DDEBUG" vs. "-DNDEBUG", and "#if defined(DEBUG)", etc.
So it's vital that you test the code that you end up shipping. If you do generate different code in debug and release builds, that means testing twice - regardless of whether or not it's tested by the same person.
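In .NET terms, a small sketch of how one source file yields different shipped code per configuration (the method and its names are made up for illustration):

    // Debug.Assert is marked [Conditional("DEBUG")], so the call is removed
    // entirely from builds that don't define the DEBUG symbol.
    using System;
    using System.Diagnostics;

    public static class PortParser
    {
        public static int ParsePort(string s)
        {
    #if DEBUG
            Console.WriteLine("ParsePort(\"" + s + "\")"); // Debug builds only
    #endif
            Debug.Assert(!string.IsNullOrEmpty(s), "port must not be empty"); // stripped in Release
            return int.Parse(s);
        }
    }

Test only the Debug configuration and you have never exercised the exact instructions the Release build will run.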
Debug symbols are not that big an issue, however. Always build with debugging symbols, keep a copy of the unstripped binary, but release a stripped binary. As long as you tag each binary with a build number somehow, you should always be able to identify which unstripped binary corresponds to the stripped binary that you have to debug...
How to strip binaries and load symbols in your debugger from an external source is platform-dependent.
This might be minor, but it adds to what others have said here. One of the advantages of having QA test release builds is that, over time, the built-in debugging and logging capabilities of your software will advance due to the needs of developers who have to figure out why things are going wrong in QA.
The more the developers need to debug release builds, the better tools you'll have later when customers start having issues.
Of course, no reason for developers to work on release builds as part of the development cycle.
Also, I don't know of any software company with cycles long enough to afford the overhead of switching QA from debug to release builds halfway through a version's testing period. Getting even one full QA cycle in is all too often a rarity.
Our policy is to have developers work on Debug builds, but EVERYONE else (QA, BAs, sales etc) runs the release version.
Yesterday I had to fix a bug that only showed up in the release build; it was obvious what was happening simply BECAUSE it only showed up in release.
It's the first one of those in this shop, and I've been here 18 months or so.
Where things get hairy is when the Release build does different things to the debug build - Yes, I have been to Hell and seen this in some very old, very ropy production code.
I see no reason why not to have both if the only difference between the configurations are debug symbols and optimisations.
so you need to build a release which you can if necessary debug ... this may mean enabling debug symbols, and disabling some optimizations, even in the 'release' build.
Ummm... it sounds like you're doing a debug build to me... right?
The part where you went wrong is this statement:
I think it's better to release the version of the software which your developers actually tested
Developers don't test code. Tests test code.
Your unit tests should test ALL build configurations. Do not make your developers work with one hand tied behind their back - let them use all the debugging tools they have at their disposal. A Debug build is one of these.
Regarding asserts: the use of assertions greatly depends on whether or not you program by contract. If you do, then assertions merely check the contract in a debug build.
As per my answer in the linked thread, we also use the same build for debug and release, for very similar reasons. The 10%-20% performance gains from the optimiser tend to be very minor compared to manual optimisations at the algorithm level. A single build removes many potential bugs. Specifically:
Uninitialised variables and small buffer overflows may end up with very different results in debug and optimised release builds.
Even with the symbolic information available, debugging an optimised release can be difficult as the object doesn't match the source, e.g. variables may have been optimised out and code may have been re-arranged. Thus bugs reported in tested release builds can be more difficult, and hence time-consuming, to track down.
Having compared unoptimised and optimised builds under automated regression tests, the performance gains provided by the optimisation don't provide enough extra value to justify two builds in my case. It may be worth noting that the software I develop is very CPU hungry (e.g. creating and manipulating large surface models).
When developing with Java, I hate non-debug versions. When an exception is thrown, you get no line information, which makes it hard or even impossible to track bugs down. Also, the runtime difference between debug and non-debug is around 5% with Java 5 or later, so this is really not an issue, and with today's hard disks, size doesn't matter anymore.
On the plus side of using debug versions:
Stack traces contain all the information you need
Variables can be examined
If you have a problem in production, you can simply attach to the running process without having to stop the server first to install a debug version.
You won't get caught by clever optimization bugs
The build is more simple (just one artifact)
Developers work with debug builds, QA and everyone else uses the release version, which we call "production". The main advantage to this is that in the debug build, we can add lots of extra code and assertions. Some objects contain extra pieces of information that have no use except when viewing code in the debugger. Some objects validate themselves periodically to make sure that all the state information is consistent. These things make the debug version much slower, but they have helped us find no end of bugs that would have been hell to find in the production build.
As I said, all of our QA and performance testing uses production builds, and we do occasionally run into problems that show up in production but not in debug. But they're relatively rare, and as a developer, the advantages of debugging a debug build rather than a production build far outweigh that problem.
I think it depends on the project size and what type of build system and testing that you are using.
If you have an automated build system in place, and it's simple to run unit and functional tests on a given build, then you should never have any problems with multiple build types.
I've always subscribed to the "Ship what you debug, so you can debug what you ship" approach, for all the reasons you list in your question.
In my opinion this discussion is missing a very important point:
It really depends upon what kind of project it is!
If you create a native (C/C++) project you will in effect be forced to create debug builds, simply because compiler optimizations can make debugging near impossible in some cases.
If you create web applications you might rather wish to simply have one build (although "build" is rather misleading for some web applications) that can enable logging features during runtime.
Although a native C++ project and a PHP web application are obviously not the only kinds of project that exist, I hope my point got across.
P.S.: When developing in C#, you run into a border case: although using a debug build disables compiler optimizations, in my experience you will not run into nearly as many differences as with C++.
Here we develop in debug mode and do all unit testing in release mode. We are a small shop with just a few (under 12) applications to support, ranging from Classic ASP to ASP.NET, VB.NET, and C#.
We also have a dedicated person to handle all testing, debugged problems are thrown back to the developers.
We always build both; we never even considered not doing so. Enabling debug options increases your code size and slows performance, which is possibly not an issue with your type of software when testing, but what if the customer is running your code plus 5 other apps?
The issues with testing can be sorted out by using automated testing, so your release build can be effortlessly tested when you think you're ready to release. The failure of your developers or company to properly test release builds is not a failure in the idea of release and debug builds, but a failure in your developers and/or company.
On your last point, I have never been called upon to debug a release build, just to fix it...
It's a tradeoff. Given that CPU cycles are cheap and getting cheaper while human cycles remain expensive, it makes a lot of sense to maintain only a single version of a large, complex program -- the debug(gable) version.
Always using assertions is a safer policy than never using them. If producing separate debug and release versions, re-enable whatever #defined symbols you need to guarantee that assertions are enabled in the release version as well.
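In .NET, for example, one way to keep assertions alive in release builds is Trace.Assert, which is compiled under the TRACE symbol that Visual Studio defines for both Debug and Release configurations by default (a sketch; the Average method is made up):

    using System.Diagnostics;

    public static class Stats
    {
        public static double Average(int[] xs)
        {
            // TRACE is defined in Debug AND Release by default, so this check ships.
            Trace.Assert(xs != null && xs.Length > 0, "Average requires a non-empty array");

            // DEBUG only; stripped from Release builds.
            Debug.Assert(xs != null && xs.Length > 0);

            long sum = 0;
            foreach (var x in xs) sum += x;
            return (double)sum / xs.Length;
        }
    }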
I think the tradeoff is simple: yes, with only a release build, you really test what's actually being shipped. On the other hand, you do pay a price in ease of debugging for your developers and/or performance for the user, so it's up to you to check both cases.
On most medium- to large-size projects, ease of debugging will ensure a better product for your users in the end.
See this answer to What's your most controversial programming opinion?
Quote:
Opinion: Never ever have different code between "debug" and "release" builds.
The main reason being that release code almost never gets tested. Better to have the same code running in test as it is in the wild.
By removing the "debug target", you are forcing developers to debug on the release version of the software. What that probably means in practice is two things:
1) "release builds" will have optimizations disabled (otherwise developers can't use a debugger)
2) No builds will have special PREPROCESSOR macros altering their execution.
So what you will really be doing is merging the release and debug configurations rather than eliminating just the "debug" mode.
I personally have done this with iOS development with no ill effects. The amount of time spent in our own written code is less than 1% of what is really happening, so the optimizations were not significant contributors. In this case, they really did seem to cause an increase in bugs, but even if they hadn't, the idea of testing one way and then giving QA different code introduces just one more factor to consider when issues come up.
On the other hand, there are cases where the optimizations are necessary, where they are useful, and even where there is enough time for testing both. Usually, the changes between debug and release are so minor that it doesn't cause anyone any issues at all.
If you've got a real QA group who can be counted on to fully test the thing, I'd say make debug builds until you get close to the release, and then make sure a full QA cycle is done on the same build that's going out the door.
Although in at least one case we released something that still had some debug code in it. The only consequence was it ran a tiny bit slower and the log files were pretty damn big.
In my company we have both Debug and Release.
- The developers use the debug version to properly find and fix bugs.
- We use TDD, and so we have a big test suite that we run on our server against both the debug and release build configurations, as well as the 64-bit and 32-bit builds we have.
So if using the "debug" configuration helps a developer find a bug faster, there is no reason not to use it; when the code goes to the server (to be further tested) or gets reviewed, we use the "Release" one.
I learned to build the release version with .PDB files long ago so that I could debug the release version. What a lot of programmers tend to forget is that when you run the debug version, with all the optimizations turned off, you are debugging a different program altogether. It may behave like the release build (for the most part), but it is still a different program than the release build.
In addition, debugging the release build is not that difficult. And if you get a crash dump, you have to be able to do it anyway.
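For what it's worth, a sketch of asking MSBuild for an optimized build that still emits full symbol files, using the standard Optimize/DebugType properties:

    <PropertyGroup Condition="'$(Configuration)' == 'Release'">
      <Optimize>true</Optimize>
      <!-- emit a .pdb alongside the optimized binary -->
      <DebugType>pdbonly</DebugType>
    </PropertyGroup>

Archive the .pdb files with each release and you can symbolicate crash dumps from the exact binaries your customers are running.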

What's the best way of finding ALL your memory when developing on the Compact Framework?

I've used the CF Remote Performance Monitor, but it seems to only track memory initialised in the managed world, as opposed to the unmanaged world. Well, I can only presume this, as the numbers listed in the profiler are way short of the maximum allowed (32MB on CE 5). Profiling a particular app with the RPM showed me that the total usage of all the caches only manages to get to about 12MB and then slowly shrinks as (I assume) something unmanaged starts to claim more memory.
The memory slider in System also shows that the device is very short on memory. If I kill the process the slider shows all the memory coming back. So it must (?) be this managed process that is swallowing the memory.
Is there any simple(ish) way to track unmanaged memory usage, in some fashion that might enable me to match it up with the corresponding P/Invoke calls?
EDIT: To all you re-taggers it isn't .NET, tagging the question like this confuses things. It's .NETCF / Compact Framework. I know they appear to be similar but they're different because .NET rocks whereas CF is basically just a wrapper around NotImplementedException.
Try enabling Interop logging.
Also, if you have access to the code of the native dll you are using, check this out: http://msdn.microsoft.com/en-us/netframework/bb630228.aspx
I've definitely been fighting with unmanaged issues in a C# managed app for a while -- it's not easy.
What I've found to be most helpful is to have regular output to a text log file. For example, you can print the output of GlobalMemoryStatus every couple of minutes, along with logging every time you load a new form. From there you can at least see whether memory gradually erodes, or whether a huge chunk of memory disappears at specific times of the day.
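A sketch of what that periodic logging might look like on the Compact Framework; the MEMORYSTATUS layout follows the Win32 declaration, and on Windows CE the export lives in coredll.dll:

    // P/Invoke sketch for periodic memory logging on Windows CE / .NET CF.
    using System;
    using System.IO;
    using System.Runtime.InteropServices;

    [StructLayout(LayoutKind.Sequential)]
    public struct MEMORYSTATUS
    {
        public uint dwLength;
        public uint dwMemoryLoad;
        public uint dwTotalPhys;
        public uint dwAvailPhys;
        public uint dwTotalPageFile;
        public uint dwAvailPageFile;
        public uint dwTotalVirtual;
        public uint dwAvailVirtual;
    }

    public static class MemoryLogger
    {
        [DllImport("coredll.dll")]
        static extern void GlobalMemoryStatus(out MEMORYSTATUS ms);

        // Call from a timer every couple of minutes, and on every form load.
        public static void Log(string path)
        {
            MEMORYSTATUS ms;
            GlobalMemoryStatus(out ms); // the function fills in dwLength itself
            using (StreamWriter w = File.AppendText(path))
            {
                w.WriteLine(DateTime.Now + " avail phys: " + (ms.dwAvailPhys / 1024) + " KB");
            }
        }
    }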
For us, we found a gradual memory loss throughout the day as long as the device was being used. From there we eventually found that the barcode-scanning device was being initialized for no particular reason in our Form base class (I blame the previous developer! :-)
Setting up this logging may be a small hassle, but for us it paid huge dividends in the long run especially with the device in live use we can get real data, instrumentation, stack traces from exceptions, etc.
Ok, I'm using C++ on CE, not C# so this may not be helpful, but...
I use a package called Entrek Toolbox, which monitors memory and resource usage, leaks, and exceptions under Windows CE. Pretty much a lightweight CE version of BoundsChecker. Does the trick most times.