I'm an extreme newbie to managing TFS, so please bear with me and know I'll need baby steps. I'll try to be as specific as possible.
I recently inherited an MVC ASP.net website written by a former colleague. Generally he would work directly in the production environment and commit changes as he went along. Obviously that's not good practice, so when I received it I decided to set it up in TFS along with a proper testing and development environment. I created the team project collection, added the existing solution to the collection, set up branching and branch hierarchy, and mapped the work environments. From what I can tell it's set up just like our other site that was configured in TFS before I came on (the person who set it up is long gone).
The issue I'm seeing now is that checking in changes doesn't seem to affect the actual code behind the site. Whether I make the changes in the test branch and then check in and merge the changeset to the production branch, or make the changes directly in production, saving and checking in changes doesn't actually affect the site. If I go into Solution Explorer and look at the files I just edited, my checked-in changes are not there. Same if I edit a web.config or something: I can open it up afterwards in another text editor and my changes are nowhere to be found.
I followed Microsoft's instructions as closely as I could, but clearly I missed something; I just have no idea what.
I use Bamboo regularly as a QA tester to deploy pull requests and feature branches/release branches, but I'm not a developer and have a layman's understanding of how it works.
Our Bamboo configuration is set up to remove inactive branches after a certain amount of time (2 weeks) which happens pretty regularly with longer-term projects, unfortunately. (When that happens, I do know how to configure a new plan and run a new build.) Often, with these larger projects, they've been deployed manually many times over the course of the project, resulting in a large list of possible "release" versions when I go to "Promote existing release to this environment."
Now I have a brand-new build of a brand-new plan for a project I've been working on, off and on, for a year. I would like to delete all the old builds (releases?) that show up in the dropdown when I just want to deploy the current version of the current new build, but I can't figure out where to do it. (Neither can the devs I've asked, but it's NBD to them, whereas this is a constant annoyance for me.)
All the advice I can find online says things like "all builds are automatically deleted when the branch expires ...." and that doesn't seem to be true, because these are definitely from old expired plan branches. They also explain how to delete things manually .... from an existing plan branch, which I don't have, because the older plan branches expired and were removed.
Am I using the wrong terminology here and these aren't "builds" and there's a separate way to delete them? Do we have a setup that's failing to delete them when it should? Do devs need to do something different with their branches? I obviously don't have access to global settings but I could put in a request if that's what needs to change.
To be clear, I'm talking about going to deployment preview, selecting "promote existing release to this environment," entering in the jira number/beginning of the branch name, and seeing a million of these (which all look identical because our branch names are hella long):
[deployment preview screenshot]
I have read through all the Bamboo documentation relating to plans, builds, branches, and deployment, and Googled various combinations of relevant keywords and haven't found a solution. I've also asked devs I work with and they don't know either.
I have a couple of Composite C1 CMS websites.
Currently, to edit them I use the web-based CMS on the live site.
However, I would like to update the code and content locally in Visual Studio, then sync to the web. The problem is that if my local copy is older than the one online (e.g. a non-techy client has edited something on the live site) and I Web Deploy, it will go over the top of the newer file on the server.
I need a solution that works out which change is newest. I can't find anything on Google or in the C1 docs.
How can I sync, preferably using Web Deploy? Do I need some kind of version control?
Is there a best practice for this? Editing the live site through the web interface seems a bit dicey, and is slow.
The general answer to this type of scenario seems to be to use the Package Creator. With that you can develop locally, add the files you've changed to a package, and install that package on a live site. This solution doesn't cover all the parts of your question, though, and has certain limitations:
You cannot selectively add content to a package. It's all pages or no pages.
Adding datatypes is easy, but updating them later requires you to delete the datatype (and its data) and recreate it.
In my experience, packages work well for incremental site updates if you limit the package contents to front-end stuff like CSS, images, and such.
You say you need a solution that works out the newest changes - I believe the only solution to this is yourself, with the aid of some tooling. I don't think there's a silver bullet solution here.
Should you use a version control system? Yes! By all means. Even if you are not sharing your code with anyone, a VCS is a great way to get to know Composite C1 from a file-system perspective, as you can carefully track which files change on disk as you develop. This knowledge is crucial when you want to continuously add features to a website that is already alive and kicking - you need to know what to deploy, and what not to touch.
Make sure you read the docs on how Composite fits in VCS: http://docs.composite.net/Configuration/C1-and-Version-Control
I assume that your sites are using the XML data storage (if you were using the SQL Data Store, your content would not be overridden upon sync).
This means that your entire web application lives in one folder on disk on the web server, which can be an advantage here.
I'll try to outline a solution that could work for you, although I must stress that I've never tried this - I'm making it up as I type.
Let's say you're using git: download the site in its entirety from the production web server, and commit the whole damned thing* to your master branch.
Then you create a new feature branch from that commit and start making the changes you want to deploy later, carefully committing your work to the feature branch as you go along and making sure you only commit the changes that are needed for your feature to work.
Now you are ready to deploy. Switch back to the master branch, and again download the entire site and commit it to master.
You then merge your feature branch into the master branch, and have git do all the hard work of stitching your changes in with the changes from the live site. There are bound to be merge conflicts, and that is where you will have to jump in and decide for yourself what content needs to go live.
After this is done and tested, you can web deploy the site up to the production environment.
Changes to the live site might have occurred while you were merging, so consider closing the site, or parts of it, during this process.
If you are using the SQL Data Store, I suggest paying for a tool like Red Gate's SQL Compare and SQL Data Compare, or SQL Delta, to compare your dev database to the production database and hand-pick SQL scripts that can be applied to the production database along with your feature deployment.
* Do consider using a .gitignore file to avoid committing certain files - refer to the docs for more info.
I suppose you should use the Package Creator.
Also have a look here: http://docs.composite.net/Configuration/C1-and-Version-Control
I am in the process of building a completely fresh version of an application that has been in existence for a good many years. I can look back with horror now at some of the things I had done, but the whole point of life is to learn as we go along. The nice thing now is that I have a clean slate from which to work, and it's because of that that I thought that I would seek some advice from you all.
User settings are great for those things that each individual user would naturally want to and ought to be able to change, a theme or visual style for example. Application settings should quite obviously apply to the entire application irrespective of whoever uses it.
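To be concrete about that distinction, here is how it plays out in the standard .NET settings designer model (a small sketch in C# for brevity: "Theme" and "ServiceUrl" are invented setting names, and the usual designer-generated Settings class is assumed):

    private void ApplySettings()
    {
        // User-scoped setting: each Windows user gets their own value,
        // persisted to that user's user.config.
        Properties.Settings.Default.Theme = "Dark";
        Properties.Settings.Default.Save();

        // Application-scoped setting: one value for everyone,
        // read-only at runtime.
        string url = Properties.Settings.Default.ServiceUrl;
    }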
Somewhere in the middle, though, is a set of settings that I would like to give the system administrator the opportunity to change (default work periods, appointment time slots, the currency the company wants to use as its main trading one, etc.). These can't be user settings, because individual users should not be able to change them; nor should they be application settings, because I as the developer have no idea what the end user (or, to be more exact, the senior end user) would want to set them to.
Many years ago I might have considered writing such settings to the registry, or an INI file. I could perhaps (as this is an application that is tightly integrated with its own custom database) create a one-off settings table and read in the relevant settings at program startup. I could perhaps opt for a separate 'universal settings' XML configuration file stored in the all-users directory. Clearly there are a number of options.
What I would like to establish, though, is the most efficient way to approach this. What is the best trade-off between file read and write operations as against reading everything into a set of public constants at application start-up? These are not settings that will be referred to only occasionally, so efficiency is going to be key.
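To make the settings-table option concrete, this is roughly what I have in mind (sketched in C# for brevity, though the real code will be VB.NET; the AppSetting table and all names are invented):

    // Rough sketch (invented names): a one-off AppSetting table (Name, Value)
    // read once at start-up into an in-memory dictionary, so ordinary reads
    // never touch disk or the database.
    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;

    public static class SiteSettings
    {
        private static Dictionary<string, string> _cache =
            new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

        // Call once at application start-up.
        public static void Load(string connectionString)
        {
            var cache = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("SELECT Name, Value FROM AppSetting", conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        cache[reader.GetString(0)] = reader.GetString(1);
                    }
                }
            }
            _cache = cache; // swap the whole dictionary in with one assignment
        }

        public static string Get(string name)
        {
            return _cache[name];
        }

        public static decimal GetDecimal(string name)
        {
            return decimal.Parse(_cache[name]);
        }
    }

The administrator's change would then be an UPDATE to the table followed by another Load(), so the per-read cost at runtime is just a dictionary lookup.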
Just so that there is no ambiguity as to what the application will be: traditional WinForms, using VS 2012 as the development IDE and VB.NET as the code base, on .NET 4.5 and EF 5.0. Back-end data to be stored in either SQL Express or full SQL Server. Target operating system for end users will be Windows 7 or above (so due respect for UAC will be required).
I'd welcome any suggestions that you might have.
We are using TFS 2010 (Visual Studio) for our deployments and have client code projects (.csproj files) and database projects (.dbproj files). We understand that when our developers add files to our application, there is a corresponding reference to those files in the project file. If I push a changeset from Dev to QA that includes the project file, and the project file contains a reference to a newly added file that is not in the changeset, I will receive a build error.
Once we started pushing just changesets (as opposed to performing full builds), this quickly became our number-one bottleneck in doing TFS builds. I would deploy the database project and there would be 20 errors. The only way I could correct them was to navigate down the entire Solution Explorer tree and exclude each orphaned reference individually. This proved far too time-consuming, and on the advice of our lead programmer we have returned to doing full builds of QA and UAT.
We are in the early stages of this product, and therefore we will be adding many files for some time. We need a better solution for this problem. Neither the manual exclusions nor asking developers not to check in code until it is ready for QA will suffice for us. Has anybody out there had any experience with this problem, and if so, how did you deal with it? Thanks!
Jon
Pushing changesets to QA selectively is known as cherry-picking, and it causes the sorts of issues that you are experiencing. This is not the recommended practice. Instead, set up the QA build so that a successful build is part of the check-in (a gated check-in). That way, if part of a fix is missed (as it may be when a fix spans multiple changesets), the build will fail and the check-in cannot be completed.
Second, have the developers do the second check-in to QA themselves, or merge the dev changesets to QA, and have the team lead coordinate changes to project files, either by turning on "notify changes made by others" or by setting a policy for the dev team. Full builds should always be done, as partial builds do not always pick up the complete dependency graph.
I have set up a RavenDB instance for evaluation. I wrote some code which pushed some documents into it. I then have a web site which renders those documents.
Throughout the day, I used the Raven Studio to modify some text in those documents, so that I could see the changes come through in my web site.
Problem: It seems that after going home for the night, when I come in the next day my database has changed - my documents have reverted to the 'pre-changed' versions... what's going on??
I've looked through the Raven console output, and there were no update commands issued on my developer machine overnight (nor would I expect there to be!!)
Note: this is just running on my development machine.
As far as I know, RavenDB has no code in it that would automatically undo committed write operations, and honestly, that would really scare me. Altogether this sounds really weird, and I can't think of a scenario where it could actually happen. I suggest you send the log files to RavenDB support if it happens again, because this would be a really serious issue.
My colleague had this very problem with updates being reverted. The update we made was to add a property, and also a document-specific value for this property, to all the documents. We called SaveConfiguration() and saw the change being made in the Raven Studio. A while later, some of the documents had lost their new property.
I decided to turn on logging and therefore added an NLog.config file; to get the logging started I touched the web.config. This of course restarted the application, and voila, the updates appeared in the Raven Studio again.
After a while they disappeared from the Raven Studio, so I assumed that this was a Studio problem. I therefore tried to retrieve the objects from the database in a test controller; unfortunately the objects were lacking the property value here too, so it wasn't just a Studio problem.
With logging turned on, we updated the documents of the specific type again, and according to the logs, and also the Studio, we actually updated the documents. Not long thereafter, the documents reverted, losing their added property yet again (my colleague started crying at this point - true story).
Later I came to realize that this was all because our live web application still had the old version of the class. When a document was read in the web application, the data was returned without the extra property. Because of this, it seems our DocumentSession thought that the object had changed (in all fairness), so when we called SaveChanges, even these objects were written to the database - without the extra property.
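In code, the failure mode looks roughly like this (a sketch with invented names - Order, Comment, the document id - not our actual model):

    // The live web app still runs the OLD version of the class, which knows
    // nothing about the property that was added later.
    using Raven.Client;
    using Raven.Client.Document;

    public class Order
    {
        public string Id { get; set; }
        public string Customer { get; set; }
        // public string Comment { get; set; } // added later; missing in the live app
    }

    public static class Program
    {
        public static void Main()
        {
            using (IDocumentStore store =
                new DocumentStore { Url = "http://localhost:8080" }.Initialize())
            using (var session = store.OpenSession())
            {
                // Comment is silently dropped when the stored JSON is
                // deserialized into the old class...
                var order = session.Load<Order>("orders/1");

                // ...and since the re-serialized object no longer matches the
                // JSON the session originally received, the session sees it as
                // changed and writes it back - without Comment.
                session.SaveChanges();
            }
        }
    }

So any live code that round-trips documents through the old model can silently strip the new property.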
Is my conclusion correct? What is the solution to this problem? I'm thinking CQRS, because then we will never call "SaveChanges()" on the DocumentSession for reads.
Adam,
Just making sure, did you call SaveChanges() after you made your modifications?
There is absolutely nothing in RavenDB that would cause this behavior.
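For reference, a session only sends modifications to the server when SaveChanges() is called - roughly like this (MyDoc and the id are placeholders):

    using (var session = store.OpenSession()) // store: your initialized DocumentStore
    {
        var doc = session.Load<MyDoc>("mydocs/1");
        doc.Text = "updated";
        session.SaveChanges(); // without this line, nothing reaches the server
    }

If you modified the documents through code and skipped that call, the changes would only ever exist in memory.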