What do I copy to have a safe backup of a VS solution in progress? It seems to be more complicated than just
everything in the solution directory (the .sln file, .csproj, .csproj.user, *.xaml, *.cs, ...).
While testing a backup, I was told I could ignore ./bin and ./obj as they are rebuilt, so as a test (after backing up) I moved them away. A clean and rebuild without those went badly wrong, with over 2,000 errors.
Starting a new C# WPF project and copying in the .xaml and .cs files, then clearing up a handful of errors (such as a reference to MySQL), seemed to work OK, but that is less than ideal.
So is there a sensible list of things to back up, for example to send a project to someone else?
In a huge SSDT database solution with lots of projects and references, I'm adding a reference from my project to the system databases (master, msdb). It works well, and the build is successful.
Then after some time I start receiving errors about an incorrect reference. I go to the references section and see this: https://pasteboard.co/JqzDSDh.png
I tried removing the second reference and the errors were gone, but then the issue comes back and I see two identical references again.
Thank you!
Most probably something is wrong with your project's .sqlproj file. Search for the master.dacpac keyword there and make sure there are no multiple entries. Also make sure that the dacpac path is not fully hardcoded, but uses the $(DacPacRootPath) variable.
This is an example of how the reference should look (make sure you have the right SQL Server version defined in the path; mine is 140 here):
<ArtifactReference Include="$(DacPacRootPath)\Extensions\Microsoft\SQLDB\Extensions\SqlServer\140\SqlSchemas\master.dacpac">
  <HintPath>$(DacPacRootPath)\Extensions\Microsoft\SQLDB\Extensions\SqlServer\140\SqlSchemas\master.dacpac</HintPath>
  <SuppressMissingDependenciesErrors>False</SuppressMissingDependenciesErrors>
  <DatabaseVariableLiteralValue>master</DatabaseVariableLiteralValue>
</ArtifactReference>
If that doesn't help, try running "Clean Solution", then delete all *.jfm and *.dbmdl files as well as the bin and obj folders, and re-build the project.
I have a client who needs to ensure that the system cannot be compromised by a 'disgruntled employee', i.e. by someone taking a copy of the 'front end' (the data is not a problem; it is the actual workings and coding that need to be secured).
The current system is too large to make an ACCDE file (40,000 KB).
I have tried reducing, compacting, etc. No joy.
I also tried creating a brand new copy and re-importing all modules and objects. No joy.
I then tried creating a blank database with only 1 form and some code to undertake the importing of the objects. The code worked fine.
The file with just the form and code worked in creating the ACCDE file, but the code then failed to run because it is 'transferring objects', and ACCDE files will not allow that.
Are there any alternatives to solve the original problem?
Thank you
When you are creating the ACCDE file, be sure to check that the project is compilable (Debug > Compile in the VBA editor).
After this, you'll see the errors that are preventing you from creating the ACCDE file.
Another point: if you are using other files as source libraries (through References), then you should compile them as well. An ACCDE will only accept other ACCDE files as Access library sources.
Anyway, try the first suggestion; I think it should help you, because 40 MB is not a problematic size for an ACCDE. The only limitations I know of are no more than 1,000 objects in the database, and no more than a 2 GB database size for 32-bit Access.
Van Ng - Thank you.
I missed the obvious - after checking the project using 'Compile Project...' I found some old, redundant code that I had left in, with a variable that had not been declared.
I have fixed that and it compiled.
Problem Solved - many thanks.
I am working on a project that will copy files to a database every time something is added to a specific directory. The program works fine when I'm testing with a small set of data, but I was wondering if someone could explain how the FileSystemWatcher.Created event works.
My main concern is that when I use this on a larger scale, the program may slow down when it handles 100,000+ files.
If this is an issue, could anyone explain whether there is some sort of workaround to polling the original folder, let's call that "C:\folder", such as polling a temp folder instead?
I have not tested the watcher with 100,000 files. However, in most cases you should not have that many files sitting in a single folder awaiting processing. I recommend a structure like:
C:\folder
C:\folder\processing
C:\folder\archive
C:\folder\error
As soon as you begin working on a given file, move it into processing. If you successfully process it, move the file again to archive. If there is an error while processing a file, instead move it into error.
This will make it easier for you to keep the files organized and diagnose problems that occur in production.
With that file structure, you will not run into issues with large numbers of files in the folder you are watching, unless you receive files in incredibly large bursts compared to the speed with which they can be moved into the processing state.
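To make that concrete, here is a minimal C# sketch of a watcher built around this folder structure. The folder names are the ones suggested above, and ProcessFile is a hypothetical stand-in for the actual database copy logic:

using System;
using System.IO;

class FolderWatcher
{
    private const string Root = @"C:\folder";

    static void Main()
    {
        // Create the working folders described above if they do not exist yet.
        foreach (var sub in new[] { "processing", "archive", "error" })
            Directory.CreateDirectory(Path.Combine(Root, sub));

        using (var watcher = new FileSystemWatcher(Root))
        {
            watcher.Created += OnCreated; // raised for each file created in Root
            watcher.EnableRaisingEvents = true;
            Console.WriteLine("Watching; press Enter to quit.");
            Console.ReadLine();
        }
    }

    private static void OnCreated(object sender, FileSystemEventArgs e)
    {
        // Move the new file into 'processing' straight away so the watched
        // folder stays close to empty. The writer may still have the file
        // locked at this point; production code would retry this move briefly.
        string inProcessing = Path.Combine(Root, "processing", e.Name);
        File.Move(e.FullPath, inProcessing);

        try
        {
            ProcessFile(inProcessing); // hypothetical: copy the file to the database
            File.Move(inProcessing, Path.Combine(Root, "archive", e.Name));
        }
        catch (Exception)
        {
            File.Move(inProcessing, Path.Combine(Root, "error", e.Name));
        }
    }

    private static void ProcessFile(string path)
    {
        // Placeholder for the real work.
    }
}

One caveat: FileSystemWatcher queues events in a fixed-size internal buffer (see its InternalBufferSize property), so an extremely large burst of files can overflow it and drop events. Handling the Error event and falling back to an occasional full scan of the watched folder is a common safety net.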
I've got a question about building a deployment script using SSDT.
Could anyone tell me if it's possible to build a deployment script using SQLPackage.exe where the source file is NOT a dacpac file, but uses the .sql files instead?
To give some background, I've created a project in Visual Studio 2012 for my database schema. This works great, and SSDT builds the folder structure without a problem (functions, stored procedures, etc., which contain all the .sql files).
Here's the problem: the database in question is from a legacy system and is riddled with errors. Most of these errors we don't care about anymore, and it's not practical or safe to fix them all, so for years we've basically ignored them. However, it means we can't build the project and therefore can't generate the dacpac file. Now, this doesn't prevent us from doing the schema compare and syncing the database with the file system (a local Mercurial repository). However, it does seemingly prevent us from building a deployment script.
What I'm looking for is a way of building the deployment script using SQLPackage.exe without having to generate the dacpac file; I need to use the .sql files in the file system instead. Visual Studio will produce a script of the differences without building the dacpac, which makes me think it must be possible with SQLPackage.exe using one of its parameters.
Here's an example SQLPackage.exe call which I'd like to adapt to use the .sql files instead of the dacpac:
sqlpackage.exe /Action:Script /SourceFile:"E:\SourceControl\Project\Database\test_SSDTProject\bin\Debug\test_SSDTProject.dacpac" /TargetConnectionString:"Data Source=local;Initial Catalog=TestDB;User ID=abc;Password=abc" /OutputPath:"C:\temp\hbupdate.sql" /OverwriteFiles:true /p:IgnoreExtendedProperties=True /p:IgnorePermissions=True /p:IgnoreRoleMembership=True /p:DropObjectsNotInSource=True
This works fine because it uses the dacpac file. However I need to point it at the folder structure where the .sql files are instead.
Any help would be great.
As has been suggested in comments, I think that biting the bullet and fixing the errors is the way ahead. You say
it's not practical or safe to fix them all,
but I think you should give this a bit more thought. I have recently been in a similar situation to you, and the key to emerging from it is to realise that the operational risk associated with dropping procedures and functions that will throw an exception as soon as they are called is zero.
Note that this does not apply if the reason these objects won't build is that they contain cross-database or cross-server references that are present in production but not in your project; this is a separate problem altogether, but also a solvable one.
Nor am I in favour of "exclude from build" as an alternative to "delete". A while ago I saw a project where this technique had been deployed extensively; it makes it harder to see what does what from the source files, and I am now of the opinion that "Build Action=None" is simply "commenting out the bits that don't work" for the Snapchat generation.
The key to all of this, of course, is source control. This addresses the residual risk that one day you might indeed want to implement a working version of one of your currently non-working procedures, using the non-working code as a starting point. It also obviates the need to keep stuff hanging around in the solution using Build Action=None, as one can simply summon an earlier revision of the code that contained the offending objects.
If my experience is any guide, 60 build errors is nothing; these could easily be caused by references to three or four objects that no longer exist, and they can be consigned to the dustbin of source control with some enthusiastic use of the "Delete" key.
Do you have a copy of SQL Compare at your disposal? If not, it might be worth downloading the trial to see if it will work in your scenario.
Here are the available switches:
http://documentation.red-gate.com/display/SC10/Switches+used+in+the+command+line
At the very least you'll need to specify the following:
/scripts1:
/server2:
/database2:
/ScriptFile:
I just joined a team that has no CI process in place (not even an overnight build) and some sketchy development practices. There's a desire to change that, so I've now been tasked with creating an overnight build. I've followed along with this series of articles to: create a master solution that contains all our projects (some web apps, a web service, some Windows services, and a couple of tools that compile to command-line executables); create an MSBuild script to automatically build, package, and deploy our products; and create a .cmd file to do it all in one click. Here's a task that I'm trying to accomplish now as part of all this:
The team currently has a practice of keeping the web.config and app.config files out of source control, and instead putting into source control files called web.template.config and app.template.config. The intention is that the developer copies the .template.config file to .config in order to get all of the standard configuration values, and can then edit the values in the .config file to whatever he needs for local development/testing. For obvious reasons, I would like to automate the process of renaming the .template.config file to .config. What would be the best way to do this?
Is it possible to do this in the build script itself, without having to stipulate within the script every individual file that needs to be renamed (which would require maintaining the script any time a new project is added to the solution)? Or might I have to write some batch file that I simply run from the script?
Furthermore, is there a better development solution that I can suggest that will make this entire process unnecessary?
After a lot of reading about Item Groups, Targets, and the Copy task, I've figured out how to do what I need.
<ItemGroup>
  <FilesToCopy Include="..\**\app.template.config">
    <NewFilename>app.config</NewFilename>
  </FilesToCopy>
  <FilesToCopy Include="..\**\web.template.config">
    <NewFilename>web.config</NewFilename>
  </FilesToCopy>
  <FilesToCopy Include="..\Hibernate\hibernate.cfg.template.xml">
    <NewFilename>hibernate.cfg.xml</NewFilename>
  </FilesToCopy>
</ItemGroup>

<Target Name="CopyFiles"
        Inputs="@(FilesToCopy)"
        Outputs="@(FilesToCopy->'%(RootDir)%(Directory)%(NewFilename)')">
  <Message Text="Copying *.template.config files to *.config"/>
  <Copy SourceFiles="@(FilesToCopy)"
        DestinationFiles="@(FilesToCopy->'%(RootDir)%(Directory)%(NewFilename)')"/>
</Target>
I create an item group that contains the files that I want to copy. The ** operator tells it to recurse through the entire directory tree to find every file with the specified name. I then add a piece of metadata to each of those files called "NewFilename". This is what I will be renaming each file to.
This snippet adds every file in the directory structure named app.template.config and specifies that I will be naming the new file app.config:
<FilesToCopy Include="..\**\app.template.config">
  <NewFilename>app.config</NewFilename>
</FilesToCopy>
I then create a target to copy all of the files. This target was initially very simple, only calling the Copy task in order to always copy and overwrite the files. I pass the FilesToCopy item group as the source of the copy operation, and I use transforms to specify the output filenames, combining my NewFilename metadata with the well-known item metadata.
The following snippet will e.g. transform the file c:\Project\Subdir\app.template.config to c:\Project\Subdir\app.config and copy the former to the latter:
<Target Name="CopyFiles">
<Copy SourceFiles="#(FilesToCopy)"
DestinationFiles="#(FilesToCopy->'%(RootDir)%(Directory)%(NewFileName)')"/>
</Target>
But then I noticed that a developer might not appreciate having his customized web.config file over-written every time the script is run. However, the developer probably should get his local file over-written if the repository's web.template.config has been modified and now has new values in it that the code needs. I tried doing this a number of different ways (setting the Copy task's "SkipUnchangedFiles" attribute to true, using the "Exists()" condition) to no avail.
The solution to this was building incrementally. This ensures that files will only be over-written if the app.template.config is newer. I pass the names of the files as the target input, and I specify the new file names as the target output:
<Target Name="CopyFiles"
Input="#(FilesToCopy)"
Output="#(FilesToCopy->'%(RootDir)%(Directory)%(NewFileName)')">
...
</Target>
This has the target check to see if the current output is up-to-date with respect to the input. If it isn't, i.e. the particular .template.config file has more recent changes than its corresponding .config file, then it will copy the web.template.config over the existing web.config. Otherwise, it will leave the developer's web.config file alone and unmodified. If none of the specified files needs to be copied, then the target is skipped altogether. Immediately after a clean repository clone, every file will be copied.
The above turned out to be a satisfying solution, as I've only just started using MSBuild and I'm surprised by its powerful capabilities. The only thing I don't like about it is that I had to repeat the exact same transform in two places. I hate duplicating any kind of code, but I couldn't figure out how to avoid it here. If anyone has a tip, it'd be greatly appreciated. Also, while I think the development practice that necessitates this totally sucks, it does help in mitigating that suck factor.
Short answer:
Yes, you can (and should) automate this. You should be able to use the MSBuild Move task to rename files.
Long answer:
It is great that there is a desire to change from a manual process to an automatic one. There are usually very few real reasons not to automate. Your build script will act as living documentation of how the build and deployment actually work. In my humble opinion, a good build script is worth a lot more than static documentation (although I am not saying you should not have documentation - the two are not mutually exclusive, after all). Let's address your questions individually.
What would be the best way to do this?
I don't have a full understanding of what configuration you are storing in those files, but I suspect a lot of that configuration can be shared across the development team.
I would suggest raising the following questions:
Which of the settings are developer-specific?
Is there any way to standardise local developer machines so that settings could be shared?
Is it possible to do this in the build script itself, without having to stipulate within the script every individual file that needs to be renamed?
Yes, have a look at the MSBuild Move task. You should be able to use it to rename files.
...which would require maintenance to the script any time a new project is added to the solution?
This is inevitable - your build scripts must evolve together with your solution. Accept this as a fact, and include in your estimates the time needed to make changes to your build scripts.
Furthermore, is there a better development solution that I can suggest that will make this entire process unnecessary?
I am not aware of all the requirements, so it is hard to recommend something very specific, but I can suggest this:
Create a shared build script for your solution
Automate manual tasks as much as possible (within reason)
If you are struggling to automate something, it could be an indicator of an area that needs to be rethought/redesigned
Make sure your teammates understand how the build works and are able to make changes to it themselves - don't "own" the build and become a bottleneck
Bear in mind that going from no build script to full automation is not an overnight process. Be patient and first focus on automating areas that are causing the most pain.
If I have misinterpreted any of your questions, please let me know and I will update the answer.