I'd like to be able to deploy my GitHub 'dev' and 'production' branches to different AppHarbor-hosted applications. To achieve this I've set up an application slug on GitHub. The two applications now deploy correctly based on their related branches, but the problem is that they share the same RavenHQ database.
What needs to be done so that each application has its own database (for testing purposes)?
Note: I'm using the bronze/free plan for both AppHarbor and RavenHQ.
If you're running two separate AppHarbor applications, just add the RavenDB add-on to both and you should be fine.
I am on a development team that has just about finished developing a system for a client. It involves an MVC4 web site, a WCF service platform and a Windows Store app which communicates with both the web site and the service.
We are running Continuous Integration practices for the Web & Service solutions which include automated deployment to dev, test, acctest and production environments. Building, testing, configuring and deploying to production is one click and five minutes away.
The one huge pitfall in this project was that we chose to develop the app as a Windows Store app without investigating deployment possibilities that do not involve publishing the application to the Windows Store. This is a process called sideloading, and I will not go deep into the technical requirements which Microsoft imposes to enable it.
Our client will be using the application on ~20 Surface Pro tablets, and we are investigating an automated release/deploy process for the application. As of this moment, we are using OneDrive to manage build artifacts and let the customer's IT admin download the artifact from there to manually install the app on all clients. In the future, however, it is very possible that the organization which ordered the system will deploy it worldwide, and there will be a requirement to deploy the application to hundreds, if not thousands, of clients.
We spent entire weeks investigating whether Windows Intune could be a good platform for automated deployment of the application. If an organization installs the Intune platform, its clients get the Company Portal, which is like a private Store where we could upload the app and future updates to it. There was, however, one big minus with the Company Portal: it has NO update management for Store apps. That is, releasing a new version of our application to the Company Portal does not work like releasing a patch or update of your app to the Windows Store - there is no notification that there is a new version, and the application does not update itself. It is basically a new application that needs to be downloaded and installed after the previous version has been uninstalled.
Has anyone developed Windows Store line-of-business applications that had to be sideloaded to multiple clients, and if so, which solution did you choose for update/patch management?
I am experiencing the exact same problem. Intune is indeed limited and, at the same time, too complex for many scenarios. Another option to "deploy" LOB Windows Store apps is described here: http://msdn.microsoft.com/en-us/library/windows/apps/jj657971.aspx. This covers the well-known PowerShell deployment, which is not very practical.
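For reference, a minimal sketch of that PowerShell deployment; the file names are hypothetical, and it assumes sideloading is already enabled on the device and that you run it from an elevated prompt:

```powershell
# Trust the certificate the package was signed with (one-time step per device).
# MyLobApp.cer is a hypothetical file name.
Import-Certificate -FilePath .\MyLobApp.cer -CertStoreLocation Cert:\LocalMachine\TrustedPeople

# Install the app package together with its framework dependencies.
Add-AppxPackage -Path .\MyLobApp.appx -DependencyPath .\Dependencies\Microsoft.VCLibs.appx
```

Note that Add-AppxPackage installs per user, which is part of why this approach scales so poorly to hundreds of clients.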
However, I have found an early-stage, unofficial POC project on CodePlex which I am currently investigating. You might want to take a look at it: https://bootybay.codeplex.com/
I have a question about whether cloud vendors have an interoperable mechanism. For example, I am developing a WCF service and hosting it in Azure successfully. After a prolonged time using Azure, can I use the same code to deploy it to AWS? Will it be possible? Do the deployment APIs of both match? If not, what extra care is needed for hosting the same service when switching over to other cloud vendors like Salesforce.com, OpenStack, etc.?
In general, you can't just take what you develop for one Cloud platform and put it on another: they have different functionality sets and expose different APIs. However, the more low-level you make your code, the more likely it is that you'll find another vendor with a very similar API, since virtualizing infrastructure is simpler (and closer to standardized) than virtualizing a CMS application.
If you're using just IaaS, you can probably port fairly rapidly, but you have to do more work to build your application in the first place. If you're using PaaS (or SaaS!) then you're more locked in, but you get more support for developing rapidly: that support platform is both the value-add and the lock-in, and you won't get one without the other.
If you're using an Azure web role for hosting your WCF service, then from a deployment point of view you will not have many problems with AWS. You'll simply use the facilities offered by the AWS SDK for .NET (aka Publish to AWS CloudFormation). For sure you'll have to change the logging part if you've used Azure Diagnostics, and replace all Azure services with the related AWS services. We did this multiple times in the last year and it works.
For worker roles it's not so simple, because in Azure they are deployed as easily as web roles, but in AWS you don't have direct deployment from Visual Studio, so you have to do some manual work using Windows Services or something else.
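To illustrate that manual work: a minimal sketch of registering the compiled worker logic as a Windows Service on an EC2 instance. The service name and path are hypothetical, and the executable itself must be written as a service (e.g. by deriving from ServiceBase, or by using a wrapper such as Topshelf):

```powershell
# Register the former worker-role host as a Windows Service (run elevated).
New-Service -Name "MyWorker" `
            -BinaryPathName "C:\Services\MyWorker.exe" `
            -DisplayName "My Worker Role Host" `
            -StartupType Automatic

# Start it immediately rather than waiting for the next reboot.
Start-Service -Name "MyWorker"
```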
I have the following scenario:
We are developing a Silverlight 4 app for our customers that will be used as an out-of-browser app. The app works offline, i.e. app and database are on the user's local machine. The app uses WCF RIA Services to connect to the local database. The database will be SQL Server Express, SQL Server CE or MySQL. We are using MVVM Light and MEF.
An external webserver is only used for updating the app from time to time or adding new modules to it. To achieve this we do something similar to what is shown in Jeremy Likness' blog (http://www.wintellect.com/CS/blogs/jlikness/archive/2010/05/25/silverlight-out-of-browser-dynamic-modules-in-offline-mode.aspx ).
The reasons for this scenario are complex, but to keep a long story short: it is mainly for compatibility with a later online version, and we don't want to use WPF. So we need to get this working with Silverlight and WCF RIA Services.
Ok, that's the scenario and here's the question:
Do we need a local webserver in this scenario? The app is programmatically installed as out-of-browser, and the database is local and connected via WCF RIA Services.
If so, which webserver would be sufficient? It should be installed and configured by an initial setup that is executed by the customer. The customer should not have to configure the webserver at all.
Any other ideas or comments on this scenario? Any other possible solutions for this?
Thanks for your help
Dirk
Silverlight wasn't meant to be used this way, I think. It would be like developing an app in Visual Studio and using Cassini to see the result - everything runs locally, but you still need a web server. Maybe there's more info here - http://www.infoq.com/news/2010/06/WPF-vs-Silverlight
I'm not able to provide a full answer to your problem, as we are currently facing the same one (WPF not being cross-platform, very specific hardware on some clients).
But I can share some of our thoughts on our type of thick Silverlight client:
To keep deployment etc. simple, we use a self-hosting process (installed as a background process).
We cannot use RIA, as the background process has to run on the Mono VM (but for a Microsoft-only solution, see 'Can WCF RIA Services be self hosted?').
Architectural thoughts on standalone "Clients":
Depending on your requirements, implementing a server for each client that communicates with the "main" server by messages (NServiceBus) may be overkill. But if you want to use a client database while offline and Silverlight for the UI, you should consider an event-driven architecture.
There is a slideshow on combining event-driven architecture and CQRS with Silverlight. I would not use it as a blueprint, though, more as an inspiration:
http://www.slideshare.net/dennisdoomen/cqrs-and-event-sourcing-an-alternative-architecture-for-ddd
I have a scenario where I have to set up a test environment, and I want to be able to tell NAnt or another build tool to create a new IIS web application, put the latest binaries in it, and email me the address and port where the new application can be reached. Is this possible, and how? Which tool?
There are several ways to approach this:
Set up a continuous integration (CI) server on the test environment. This is a viable option if your test environment machine doesn't change often and it's a single machine.
Push the installation from your development machine using tools like PsExec
Combination of the two: you have a CI build server which pushes the installation to (multiple) test environments.
Of course, you also need a good build script which will set up the IIS application (NAnt offers tasks for this). Emailing you can be done by the CI server (CruiseControl.NET Email Publisher, Hudson...).
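NAnt's IIS tasks cover the setup step; as an alternative, here is a minimal PowerShell sketch of the same deploy-and-notify flow, assuming IIS 7+ with the WebAdministration module and purely hypothetical names, paths and addresses:

```powershell
Import-Module WebAdministration

# Hypothetical values - adjust to your environment.
$appName    = "MyService-Build42"
$deployPath = "C:\Deployments\$appName"

# Copy the latest binaries into place.
New-Item -ItemType Directory -Force -Path $deployPath | Out-Null
Copy-Item -Path ".\build\output\*" -Destination $deployPath -Recurse -Force

# Create the IIS web application under an existing site.
New-WebApplication -Site "Default Web Site" -Name $appName -PhysicalPath $deployPath

# Mail the new address; under "Default Web Site" the port is typically 80.
Send-MailMessage -From "build@example.com" -To "dev@example.com" `
                 -Subject "Deployed $appName" `
                 -Body "New application at http://testserver/$appName" `
                 -SmtpServer "smtp.example.com"
```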
I suggest taking some time to read this excellent article series: Automation for the people: Deployment-automation patterns
Our CruiseControl.NET build server does exactly this as part of its NAnt build-script process...
Once the code is retrieved from source control, it's all built/compiled in turn. Web projects are then handled slightly differently from normal .dlls, as they are deployed to a particular folder (either on the current machine or otherwise) from which IIS (also set up by the script) serves the pages.
Admittedly, we're using Virtual Directories instead of creating and disposing of new website instances on the server, as otherwise we'd have to manage the port numbers for each website.
NAnt is capable of doing all of this IIS work, as well as all of the email work too - I'd certainly recommend looking down this avenue of enquiry to solve your problem. Plus, you also get the continuous integration aspect as a side benefit in your case!
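For the virtual-directory variant described above, the PowerShell equivalent (again IIS 7+ with the WebAdministration module, and hypothetical names/paths) might be:

```powershell
Import-Module WebAdministration

# Expose the build output as a virtual directory under the existing site,
# which avoids having to manage a separate port number per website.
New-WebVirtualDirectory -Site "Default Web Site" -Name "Build42" `
                        -PhysicalPath "C:\Builds\Build42"
```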
I am in the process of setting up some IIS hosted WCF projects for continuous integration and am stuck trying to find the best and simplest way to get deployment automated.
Right now, I have the build and deploy working with CC.NET, MSBUILD and a batch file that copies the necessary files to the deployment folder, but I think there must be a better way.
Ideally, I'd like something like web deployment projects, but for WCF.
I would settle for a nice PowerShell script to copy all the necessary files and exclude all the fluff.
Well, there isn't anything stopping you from using a web deployment project for hosting your WCF class library. The SVC file will be picked up by IIS and routed appropriately. We use a standard deployment project and a custom action to create the IIS vroot, so that we have finer control over the settings, but a standard web deployment project will do the job as well.
Unless you are running under IIS7, as far as IIS is concerned it's just standard content that has its own handler. When you get to Windows Server 2008 / Windows 7 Beta, things change a bit, as those versions have a very different handler model.
I've found this post to be really helpful: http://msdn.microsoft.com/en-us/library/bb332338.aspx
This depends very much on the technologies you are using. On a previous project we used TFS with Team Build. The result was that the WCF projects were built into a folder structure that matched their deployment structure. Additional tasks in the MSBuild script triggered a deployment script (written in Perl, I think). This took care of all deployment tasks, from deleting old folders, creating new ones, and creating databases and populating them with reference data, to deploying the service and web sites, and finally running installation verification scripts and publishing the results to a web site.
On the other hand, if all you've got is a hammer, then hammer away.
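Since the asker mentioned settling for a PowerShell copy script, here is a minimal sketch that copies build output while excluding typical fluff. The paths and exclusion patterns are hypothetical, and it assumes PowerShell 3+:

```powershell
# Hypothetical source and target paths.
$source = "C:\Build\MyWcfService"
$target = "\\testserver\wwwroot\MyWcfService"

# Copy everything except debug symbols and source files, preserving the
# folder structure under the target path.
Get-ChildItem -Path $source -Recurse -File |
    Where-Object { $_.Extension -notin ".pdb", ".cs", ".csproj" } |
    ForEach-Object {
        $dest = Join-Path $target $_.FullName.Substring($source.Length).TrimStart('\')
        New-Item -ItemType Directory -Force -Path (Split-Path $dest) | Out-Null
        Copy-Item -Path $_.FullName -Destination $dest -Force
    }
```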