How often are Fakes assemblies generated? - microsoft-fakes

Sometimes when I build my test project it takes significantly longer than usual. I've noticed that whenever it does take a long time, it's because the *.Fakes.dll assemblies are being regenerated. But why doesn't it regenerate the Fakes on every rebuild of the project? What determines whether the Fakes assemblies will be generated?

Related

Roslyn slower than CodeDom

I am using Roslyn to generate a DLL at run time from a number of VB files on disk.
I first create syntax trees from all of those files, then use them to create a VisualBasicCompilation, and then emit the assembly. The whole process takes ~10 min, and creating the syntax trees takes up a significant portion of that time.
If I compile the same files using CodeDom, it takes ~3 min.
Is there a way to make Roslyn compile the files faster?
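One thing that often helps is parallelizing the parse step, since each file's syntax tree can be built independently. Below is a minimal C# sketch of the idea; the source directory and assembly name are hypothetical, and this is a starting point rather than a drop-in replacement for the asker's build code:

    using System;
    using System.IO;
    using System.Linq;
    using Microsoft.CodeAnalysis;
    using Microsoft.CodeAnalysis.VisualBasic;

    class ParallelCompile
    {
        static void Main()
        {
            // Hypothetical source directory; adjust to your layout.
            var files = Directory.GetFiles(@"C:\src", "*.vb", SearchOption.AllDirectories);

            // Parsing is CPU-bound and each file is independent, so use PLINQ.
            var trees = files
                .AsParallel()
                .Select(f => VisualBasicSyntaxTree.ParseText(File.ReadAllText(f), path: f))
                .ToList();

            var compilation = VisualBasicCompilation.Create(
                "Generated",                        // hypothetical assembly name
                trees,
                new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) },
                new VisualBasicCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

            using (var output = File.Create("Generated.dll"))
            {
                var result = compilation.Emit(output);
                if (!result.Success)
                    Console.WriteLine(string.Join(Environment.NewLine, result.Diagnostics));
            }
        }
    }

Parsing parallelizes cleanly because Roslyn syntax trees are immutable; reusing the same MetadataReference objects across compilations can also shave time off.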

MSBuild - Assemblies differ slightly after each clean+build

I'm trying to work with an existing home-grown implementation of ClickOnce. Currently we manually update the manifest for assemblies that we actually changed. I'm attempting to make that automatic, based on a binary comparison of the existing assemblies and the newly built ones. Unfortunately, each time I run clean + build (via an automated build script) there are small differences in the assemblies, essentially invalidating our ClickOnce solution altogether. I'm guessing these differences are caused by some sort of GUID generation or something along those lines. Is there any way to prevent the differences in the assemblies?
And unfortunately, due to our branching/CI strategy, I don't have the option of not cleaning, because each release is built from a new branch.
Otherwise, any suggestions on how I can compare two assemblies to see if any code has changed, without having access to the source code?
Thanks,
David
Typically, autobuild systems check the filesystem timestamps of the binary vs. the source files (or object files vs. source files, depending on the language). If the source is newer than the binary/object, a rebuild is triggered. This strategy may work better for you than actually diffing binaries.
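For what it's worth, the small differences you're seeing are expected: compilers of this era embed a fresh module version GUID (MVID) and a PE-header timestamp on every compile, so byte-identical rebuilds are effectively impossible. A minimal C# sketch of the timestamp strategy, with hypothetical paths and file patterns:

    using System;
    using System.IO;
    using System.Linq;

    static class RebuildCheck
    {
        // Returns true when any source file is newer than the built assembly,
        // i.e. the assembly's code may actually have changed.
        public static bool IsOutOfDate(string assemblyPath, string sourceDir)
        {
            if (!File.Exists(assemblyPath))
                return true;

            DateTime built = File.GetLastWriteTimeUtc(assemblyPath);
            return Directory
                .EnumerateFiles(sourceDir, "*.cs", SearchOption.AllDirectories) // adjust pattern
                .Any(src => File.GetLastWriteTimeUtc(src) > built);
        }
    }

The manifest step would then update only the assemblies for which IsOutOfDate returns true.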
I found BitDiffer, a tool from www.BitWidgets.com, that compares what has changed in an assembly. While this runs more slowly than a binary comparison, it removes the need to have MSBuild produce an identical assembly.
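If a tool dependency is undesirable, a coarse home-grown version of the same idea can be sketched with reflection. This only notices signature-level changes (added, removed, or renamed public members), not method-body edits, and ReflectionOnlyLoadFrom needs the assemblies' dependencies to be resolvable, so treat it as a starting point rather than a BitDiffer replacement:

    using System;
    using System.Linq;
    using System.Reflection;

    class AssemblyDiff
    {
        static void Main(string[] args)
        {
            string[] before = DescribePublicSurface(args[0]);
            string[] after = DescribePublicSurface(args[1]);

            foreach (string member in after.Except(before))
                Console.WriteLine("Added:   " + member);
            foreach (string member in before.Except(after))
                Console.WriteLine("Removed: " + member);
        }

        // Builds a sorted list of "Type: member signature" strings for the
        // assembly's exported types. Metadata-only load avoids running code.
        static string[] DescribePublicSurface(string path)
        {
            Assembly assembly = Assembly.ReflectionOnlyLoadFrom(path);
            return assembly.GetExportedTypes()
                .SelectMany(t => t.GetMembers().Select(m => t.FullName + ": " + m))
                .OrderBy(s => s, StringComparer.Ordinal)
                .ToArray();
        }
    }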

Does MSBuild know if a project needs to be recompiled?

First, I have a base assumption from watching Visual Studio compile things with its default .*proj files that, if you build the same solution twice in a row, it detects that nothing has changed and seems to fly through the solution build. Does this mean it knows that nothing was changed in a project and doesn't have to make a new DLL output?
If that's the case, I have a question. Say I have a solution with multiple class libraries, and an MSBuild task in each project that automatically increments the build's version by modifying AssemblyInfo.cs. Thing is (if my previous assumption is correct), it does this every time and so triggers a rebuild of every class library. Is there a target or property in MSBuild that can tell whether a project needs recompilation, so I can skip my versioning step when it doesn't?
I ask because let's say I update project A, but not project B in a solution. If I run a build on the solution, I want it to update the version on project A, but since project B hasn't changed, I want to leave it alone.
Found something: http://msdn.microsoft.com/en-us/library/ms171483.aspx
MSBuild can compare the timestamps of the input files with the timestamps of the output files and determine whether to skip, build, or partially rebuild a target. In the following example, if any file in the @(CSFile) item collection is newer than the hello.exe file, MSBuild will run the target; otherwise it will be skipped:

    <Target Name="Build"
            Inputs="@(CSFile)"
            Outputs="hello.exe">
        <Csc
            Sources="@(CSFile)"
            OutputAssembly="hello.exe"/>
    </Target>
...that worked. But then it got me thinking: what if someone pulls the code down from source control without the assemblies (which is how we do it)? Since there is no output to compare against, it'll compile and increment the version anyway. I think the complexities might lead me to abandon this approach.
It doesn't really matter if you increment on a developer's box - what's important is that your daily/CI build is only incremented when needed. So, what I've done in the past is have a small XML file contain the next build number, and have an MSBuild task take this XML file and create a file called Version.cs (containing the versioning attributes you'd usually find in AssemblyInfo.cs).
Version.cs is never checked into your source control - it's generated by the build.
Developers will sync the current XML file, build their binaries, and get the current version number. The continuous integration build may also do the same thing. But a daily/official build will check out the XML file, increment the version information, and then check it in. From that moment on, the version number has officially changed.
There are variations on this theme, but the general idea works.
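A minimal C# sketch of the generation step, assuming a hypothetical BuildVersion.xml layout; in practice this would run as a custom MSBuild task or pre-build step:

    using System.IO;
    using System.Xml.Linq;

    class GenerateVersionCs
    {
        static void Main()
        {
            // Hypothetical format: <Version Major="1" Minor="2" Build="345"/>
            XElement v = XElement.Load("BuildVersion.xml");
            string version = string.Format("{0}.{1}.{2}.0",
                (string)v.Attribute("Major"),
                (string)v.Attribute("Minor"),
                (string)v.Attribute("Build"));

            // Version.cs is regenerated on every build and never checked in.
            File.WriteAllText("Version.cs",
                "using System.Reflection;\r\n" +
                "[assembly: AssemblyVersion(\"" + version + "\")]\r\n" +
                "[assembly: AssemblyFileVersion(\"" + version + "\")]\r\n");
        }
    }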

DLL and LIB files - what and why?

I know very little about DLLs and LIBs other than that they contain vital code required for a program to run properly - libraries. But why do compilers generate them at all? Wouldn't it be easier to just include all the code in a single executable? And what's the difference between DLLs and LIBs?
There are static libraries (LIB) and dynamic libraries (DLL) - but note that .LIB files can be either static libraries (containing object files) or import libraries (containing symbols to allow the linker to link to a DLL).
Libraries are used because you may have code that you want to use in many programs. For example if you write a function that counts the number of characters in a string, that function will be useful in lots of programs. Once you get that function working correctly you don't want to have to recompile the code every time you use it, so you put the executable code for that function in a library, and the linker can extract and insert the compiled code into your program. Static libraries are sometimes called 'archives' for this reason.
Dynamic libraries take this one step further. It seems wasteful to have multiple copies of the library functions taking up space in each of the programs. Why can't they all share one copy of the function? This is what dynamic libraries are for. Rather than building the library code into your program when it is compiled, it can be run by mapping it into your program as it is loaded into memory. Multiple programs running at the same time that use the same functions can all share one copy, saving memory. In fact, you can load dynamic libraries only as needed, depending on the path through your code. No point in having the printer routines taking up memory if you aren't doing any printing. On the other hand, this means you have to have a copy of the dynamic library installed on every machine your program runs on. This creates its own set of problems.
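To illustrate the "load only as needed" point, here is a small C# sketch that maps a printing DLL into the process only when printing is actually requested. PrintRoutines.dll and its PrintReport export are made up for the example; the LoadLibrary/GetProcAddress pattern itself is the standard Win32 mechanism:

    using System;
    using System.Runtime.InteropServices;

    class LazyDllLoad
    {
        [DllImport("kernel32.dll", CharSet = CharSet.Ansi, SetLastError = true)]
        static extern IntPtr LoadLibrary(string fileName);

        [DllImport("kernel32.dll", CharSet = CharSet.Ansi)]
        static extern IntPtr GetProcAddress(IntPtr module, string procName);

        delegate int PrintReportDelegate(int reportId);

        static void Main()
        {
            // Nothing printing-related occupies memory until this call.
            IntPtr module = LoadLibrary("PrintRoutines.dll");    // hypothetical DLL
            if (module == IntPtr.Zero)
                throw new DllNotFoundException("PrintRoutines.dll not found");

            IntPtr proc = GetProcAddress(module, "PrintReport"); // hypothetical export
            var printReport = (PrintReportDelegate)
                Marshal.GetDelegateForFunctionPointer(proc, typeof(PrintReportDelegate));
            printReport(42);
        }
    }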
As an example, almost every program written in C will need functions from a library called the 'C runtime library', though few programs will need all of its functions. The C runtime comes in both static and dynamic versions, so you can choose which version your program uses depending on its particular needs.
Another aspect is security (obfuscation). Once a piece of code is extracted from the main application and put into a separate dynamic-link library, it is easier to attack and analyse (reverse-engineer), since it has been isolated. When the same piece of code is kept in a LIB and linked into the target application, it is much harder to isolate (differentiate) that piece of code from the rest of the target binaries.
One important reason for creating a DLL/LIB rather than just compiling the code into an executable is reuse and relocation. The average Java or .NET application (for example) will most likely use several 3rd party (or framework) libraries. It is much easier and faster to just compile against a pre-built library, rather than having to compile all of the 3rd party code into your application. Compiling your code into libraries also encourages good design practices, e.g. designing your classes to be used in different types of applications.
A DLL is a library of functions that are shared among other executable programs. Just look in your windows/system32 directory and you will find dozens of them. When your program creates a DLL, it normally also creates a .lib file so that applications (*.exe) can resolve symbols that are declared in the DLL.
A .lib is a library of functions that are statically linked to a program - they are NOT shared by other programs. Each program that links with a *.lib file has all the code in that file. If you have two programs, A.exe and B.exe, that link with C.lib, then A and B will each contain the code from C.lib.
How you create DLLs and LIBs depends on the compiler you use. Each compiler does it differently.
One other difference lies in performance.
Because a DLL is loaded at run time and its code is shared between processes, calls into it go through an extra level of indirection, so it performs slightly worse than statically linked code.
A .lib, on the other hand, is linked statically at compile time into every executable that uses it. The code lives in the program's own image and calls are direct, which makes the process marginally faster at the cost of duplicated code.

Compiling Massive VB.NET Project

To compile my current project (one EXE with ~90,000 LOC plus ~100 DLLs) takes about half an hour or more, depending on the speed of the workstation.
The build process is one of running devenv from Powershell scripts. This works very well with no problems.
The problem is that it is slow. I want to speed up this build process.
MSBuild (using VS 2005) is one option, but there's a bug in specifying icons to the VB compiler/linker on the command line, such that it won't successfully link.
What other options are there to "make" VB.NET programs?
(Faster workstation is not an option.)
Do you absolutely have to compile the whole solution every time? With that many assemblies it seems unlikely that they all need to be built unless they actually change. If your solution is made up of multiple projects, you might consider creating multiple solutions in your build environment: one master solution containing all the projects, and another that includes only the ones that change most often. You can then configure your build process to focus on the projects that have changed. Depending on the source control system you use, you may be able to query it to determine which projects have changed since the last build, and only build those.
There's NAnt, and CruiseControl.NET for continuous builds.
You mentioned that getting a faster PC is not an option, but how much memory do you have? 2GB should be the minimum for a developer machine. Also, using a fast 10K RPM hard disk makes a big difference.
Have you tried disabling any virus scanner during your build?
If you can, upgrade to the 3.5 version of MSBuild. It can build solution files directly and adds multiprocessor support (also available when hosting the build engine yourself), enabling it to build projects in parallel.
The caveat is that you need to be using project references so it knows what to build.
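A typical parallel invocation might look like the following; the solution name is illustrative, and /m with no value uses all available cores:

    msbuild BigSolution.sln /m /p:Configuration=Release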
Also, how long is it taking now? Have you looked at the CPU/Memory Usage (using something like PerfMon) to see if it is a bottleneck?
There's not much you can do to make the build process any faster short of adding more cores, CPU power, and memory to your machine, but that isn't an option in your case.
Most large projects are not self-contained in a single EXE. More often, logical units are moved into separate assemblies, which can be either DLLs or EXEs. The end result is a whole bunch of little assemblies instead of one enormous one.
To cite one example, one project that I worked on was enormous, consisting of 700+ forms and tens of thousands of classes. Functionally related forms, such as those for printing, report generation, or user interrogation, were self-contained in their own EXEs. If I was working on the reports, I'd exclude all projects not related to reports from the build process, which brought the compilation time down from half an hour to a few seconds.
This programming style can be tricky, but when it's done right, it simply works, and works flawlessly.
If you have a large number of projects, you should try to reduce it; you can always split the code into separate DLLs later. The fewer projects there are, the faster the build, especially if the projects have to be built in a certain order.
Breaking the solution into smaller solutions is also an option.