How to configure the publish profiles to use NTLM authentication - msbuild

In Visual Studio 2012, using publish profiles along with Web Deploy simplifies deployments quite a bit. However, it is still missing a few things, or maybe I just don't know how to use them yet.
I prefer to use NTLM authentication without storing the username and (especially) the password in the publish profiles. How can this be done? If I leave the username and password empty, I am prompted for them. Is there a way to do this, such as manually modifying the .pubxml files?
Why is the username/password stored in PublishProfileName.pubxml, which I have checked into source control, and not in PublishProfileName.pubxml.user, which is local to each user? I could at least save the username, but I obviously don't want it checked in.
The Configuration itself is not part of PublishProfileName.pubxml but is stored in PublishProfileName.pubxml.user as LastUsedBuildConfiguration.
The same applies to the Platform as in the last point.
I am also missing support for multi-server deployments. I am currently forced to use batch files in addition to Publish Profiles.
EDIT
The command line that works fine for publishing is
MSBuild.Exe MyProject.sln /p:Configuration=QA /p:DeployOnBuild=true;PublishProfile=PublishToQA;AllowUntrustedCertificate=true /p:authType=NTLM /p:UserName=
In this, I would like to omit /p:Configuration=QA if the configuration could become part of the publish profile itself.

Some answers to your questions.
I prefer to use NTLM authentication without storing the username and (especially) the password in the publish profiles. How can this be done? If I leave the username and password empty, I am prompted for them. Is there a way to do this, such as manually modifying the .pubxml files?
Your authentication is typically driven by how Web Deploy is hosted. By default, if you are using the Web Management Service (WMSVC), then you are using IIS users for authentication. With IIS users you can control which users have permissions to specific sites/apps. You can configure WMSVC to use Windows auth as well, though. If you have issues using VS for those scenarios, let me know.
If you are using the Remote Agent service to host Web Deploy, then you'll be using Windows auth.
Why is the username/password stored in PublishProfileName.pubxml, which I have checked into source control, and not in PublishProfileName.pubxml.user, which is local to each user? I could at least save the username, but I obviously don't want it checked in.
We have another mechanism for you to determine what information is private/shared. With the exception of the password, all publish info is shared (and checked in by default). In order to simplify the design, you can either have a publish profile which is shared, or one which is not shared at all. There is no in-between in which some fields of a profile are shared and others are not. The password is a special case and is encrypted on a per-user/per-machine basis in the .pubxml.user file.
If you'd like to have a private publish profile, simply don't check in the .pubxml file which corresponds to it. Profiles are stored under Properties\PublishProfiles (or My Project\PublishProfiles for VB); just exclude them from the project and don't check the files in. The publish dialog looks for the profiles on disk, not just the ones which are in the project, so everything should continue to work.
We don't support the concept of selectively storing values in the .pubxml.user file; the publish dialog will only store a fixed set of values in that file.
The Configuration itself is not part of PublishProfileName.pubxml but is stored in PublishProfileName.pubxml.user as LastUsedBuildConfiguration. The same applies to the Platform.
This was a mistake; it should have been stored in the .pubxml file, not the .pubxml.user file. We have since fixed this, but haven't had a chance to release the update yet.
The Configuration property cannot be set in the publish profile. The Configuration property is a core part of the build process. To be more specific, the reason why we didn't call this property Configuration is that the .pubxml file is imported into the definition of the .csproj/.vbproj during a build and publish. Since other properties are defined based on Configuration, you cannot change its value once it has been set. I just blogged with way too much detail on this subject at http://sedodream.com/2012/10/27/MSBuildHowToSetTheConfigurationProperty.aspx. This limitation is an MSBuild thing, not a publish limitation. On the command line you should specify Configuration in the following way:
msbuild.exe myproj.csproj /p:...(other properties)... /p:Configuration=
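To illustrate the evaluation-order problem, here is a minimal sketch (not taken from the actual web publishing targets; the property names and paths are examples only). Once a property such as OutputPath has been computed from Configuration, importing a profile that redefines Configuration would not re-evaluate it:
<PropertyGroup>
  <Configuration Condition="'$(Configuration)' == ''">Debug</Configuration>
  <!-- OutputPath is evaluated right here, using the current value of Configuration -->
  <OutputPath>bin\$(Configuration)\</OutputPath>
</PropertyGroup>
<!-- The publish profile is imported later in the evaluation; redefining Configuration
     inside the .pubxml at this point would not change OutputPath or anything else
     already derived from it. -->
<Import Project="Properties\PublishProfiles\PublishToQA.pubxml" />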
I am also missing support for multi-server deployments. I am currently forced to use batch files in addition to Publish Profiles.
We don't have direct support for this, but if you expand on your needs I may be able to help. FYI, I have an extension which you may be interested in; I have posted a 5-minute video at http://sedodream.com/2012/03/14/PackageWebUpdatedAndVideoBelow.aspx.

You are free (and encouraged) to manually edit your pubxml files, so feel free to remove the password.
To switch to NTLM, change AuthType to NTLM in the first PropertyGroup.
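As a sketch of what such a .pubxml might look like (the server name, site path, and surrounding properties here are illustrative placeholders, not taken from the question; the relevant parts are AuthType and the empty UserName):
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <WebPublishMethod>MSDeploy</WebPublishMethod>
    <MSDeployPublishMethod>RemoteAgent</MSDeployPublishMethod>
    <MSDeployServiceURL>qa-server</MSDeployServiceURL>
    <DeployIisAppPath>Default Web Site/MyApp</DeployIisAppPath>
    <AuthType>NTLM</AuthType>
    <!-- Leave UserName empty so the current Windows credentials are used -->
    <UserName></UserName>
    <AllowUntrustedCertificate>True</AllowUntrustedCertificate>
  </PropertyGroup>
</Project>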
Platform and Configuration remain part of the build configuration; the user file just stores them so Visual Studio knows which configuration you last deployed.
By multi-server, do you mean a web farm? If so, you might try looking at the Web Farm Framework, which basically performs MSDeploy syncs from the primary server to the others.
Alternatively, you could switch to the command line and use postSync to upload and execute a batch file on the remote server that triggers the other deployments from there.

Related

How to manage database credentials for a Mule project

I am using the database connector component, with the vault component to store the database credentials. As per the documentation of both components, I have created a different properties file for each environment to store that environment's encrypted credentials.
The problem with this structure is that I have to build a new deployable zip file whenever I have to update the database credentials for any environment.
I need a solution where I can keep all credentials encrypted and centralized, and where I don't have to create a build every time the credentials are updated. We can afford to restart the server, but building a new zip and redeploying is really cumbersome.
The second problem with this approach is that a developer needs to know the production DB credentials to update them in the properties file, which is also a security issue.
Please suggest an alternative approach to credentials management for Mule projects.
I'm going to recommend that you do NOT try to change the secure solution provided to you by MuleSoft. To alleviate the need for packaging and deployment, you would have to move the properties files outside of the deployment, and this would be a huge risk. Regardless of where you store the property files within the deployment, if you change the files you have to package and redeploy. The only solution to your problem that I see is moving the files outside of the deployment and storing them securely. Mule has provided a solution; while it may be cumbersome, they are securing these files first with encryption and secondly within the server container. You can move the property files out, but you have to provide a custom implementation, and you will be assuming great risk to your protected resources.
Set a VM argument, e.g. environment.type=local, for your local machine in Anypoint Studio.
Read this variable wherever you load your properties file, so that the environment type is resolved dynamically; for example, in the Secure Property Placeholder configuration use location="classpath:properties/sample-app-${environment.type}.properties" (a fuller sketch follows these steps).
In order to set the environment type on your production server (or wherever you are running the Mule runtime), open conf\wrapper.conf in your Mule installation and add the argument wrapper.java.additional.<n>=-Denvironment.type=production. If the file already contains wrapper.java.additional entries, you may need to set the value of n appropriately, for example 13 or 14.
This way you don't need to generate different deployment artifacts for different environments, because the correct properties file is picked via the environment-specific VM argument.
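As a rough sketch, the placeholder element might look like the following (the element name and key attribute follow the standard Mule secure property placeholder module; the key and file names are illustrative assumptions, not taken from the question's project):
<!-- Sketch only: the properties file is chosen by the environment.type
     system property (-Denvironment.type=qa, -Denvironment.type=production, etc.) -->
<secure-property-placeholder:config name="secureProps"
    key="${secure.key}"
    location="classpath:properties/sample-app-${environment.type}.properties"
    doc:name="Secure Property Placeholder"/>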

Handling local user content with MS WebDeploy

What is the best practice approach to local user generated content when using Microsoft WebDeploy and Team City to deploy fixes to a site?
Using the deployment process described by Troy Hunt:
http://www.troyhunt.com/2010/11/you-deploying-it-wrong-teamcity_26.html
When changes are made to a site, the WebDeploy agent updates the site, including removing old files that are no longer needed, which is great. However, in the case where a site contains user-generated data (say users can upload an image which is stored as a file on disk, or a simple CMS where page content files can be updated by the user), what is the best practice to prevent these files being deleted by the deployment agent?
Is there an ignore flag for certain folders?
Should the user files be stored outside the root of the deployed website (Is this a security risk)?
You basically need to use MSDeploy's skip rules. These tell MSDeploy to ignore certain files, folders, or subfolders.
The exact syntax depends on where you implement them, but you have the following options:
If you're publishing through VS.NET using a publish profile, you can include skip rules there (I've taken this approach and seen it work fine; see the sketch after this list). This SO question should point you in the right direction: MSDeploy skip rules when using MSBuild PublishProfile with Visual Studio 2012.
If you're using a VS.NET web solution (website / web application), I later found out you can also implement skip rules in the web.config. Although the following article is a bit old, the approach may still be viable: How to write skip and replace rules for MSDeploy (I haven't used or tested this approach).
Last, but not least, you could use an MSDeploy skip rule on the command line itself. Assuming you execute msdeploy directly (as opposed to via MSBuild), you would need to append a -skip parameter with the relevant attributes you require. Further information can be found in Demystifying MSDeploy skip rules or Web Deploy Operation Settings (look for the skip command reference, about 2/3 of the way down the page). Using publish profiles with MSBuild ultimately makes this call for you; I've seen it working via the first approach above.
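For the first option, a skip rule can be added to the .pubxml itself. The sketch below follows the pattern from the linked SO question; the target name, the AfterTargets hook, and the "uploads" folder are assumptions you would adapt to your project:
<!-- Added inside the <Project> element of the .pubxml. Sketch only: tells Web Deploy
     not to delete the user-content folder on the destination during a sync. -->
<Target Name="AddCustomSkipRules" AfterTargets="AddIisSettingAndFileContentsToSourceManifest">
  <ItemGroup>
    <MsDeploySkipRules Include="SkipUploadsFolder">
      <SkipAction>Delete</SkipAction>
      <ObjectName>dirPath</ObjectName>
      <AbsolutePath>uploads</AbsolutePath>
    </MsDeploySkipRules>
  </ItemGroup>
</Target>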
Hope that helps!

Getting configuration strings from Weblogic

This question is related to Weblogic 12c.
I have an EAR file that I want to deploy in various environments (dev, QA, pre-prod and prod). However, my application requires a username and a password (to connect to another server) and they're not the same across the four environments. I don't want to package 4 different property files in 4 different EAR files; I want a single generic EAR file. Besides, I don't want to handle the prod password during packaging.
Ideally, I'd like the admin of each environment to provide the appropriate username and password for that environment. Unlike Tomcat, Jetty or JBoss(?), I think it's not possible for a WebLogic admin to specify this information in a way that makes it available under the java:comp/env JNDI context.
How can an application obtain some admin-defined configuration strings from Weblogic?
BTW, it's not a username/password for a JDBC connection.
From what I understand, you need to change parameters based on the environment you are using, right?
If you would like to override parameters on the fly, you can use the WebLogic deployment plan concept.
Did you mean that you need to provide a username/password to start up the application?
If so, you may accomplish that by creating a script with WLST: http://docs.oracle.com/cd/E15051_01/wls/docs103/config_scripting/using_WLST.html
As far as I know, the WebLogic way is to:
Define your username/password as env-entry elements in the deployment descriptor (sketched below)
Deploy your application together with a plan.xml, where each environment admin maintains his own environment-specific version of the plan.xml
That way you get them into java:comp/env.
More details here: http://docs.oracle.com/cd/E11035_01/wls100/deployment/config.html
The only drawback known to me: plan.xml will always contain the unencrypted password, but as the admin knows the password anyway and this is "his" file on "his" machine, that should be fine.
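A minimal sketch of the env-entry part, assuming a web module; the entry names and default values are illustrative placeholders that a deployment plan would override per environment:
<!-- In WEB-INF/web.xml (or the corresponding EJB descriptor) -->
<env-entry>
  <env-entry-name>remote/username</env-entry-name>
  <env-entry-type>java.lang.String</env-entry-type>
  <env-entry-value>dev-user</env-entry-value>
</env-entry>
<env-entry>
  <env-entry-name>remote/password</env-entry-name>
  <env-entry-type>java.lang.String</env-entry-type>
  <env-entry-value>changeit</env-entry-value>
</env-entry>
<!-- The application reads these via JNDI, e.g. java:comp/env/remote/username -->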

Where are the best locations to write an error log in Windows?

Where would you write an error log file, say ErrorLog.txt, in Windows? Keep in mind the path would need to be open to basic users for file write permissions.
I know the eventlog is a possible location for writing errors, but does it work for "user" level permissions?
EDIT: I am targeting Windows 2003, but I was posing the question in such a way as to have a "General Guideline" for where to write error logs.
As for the EventLog, I have had issues before in an ASP.NET application where I wanted to log to the Windows event log, but I had security issues causing me heartache. (I do not recall the issues I had, but remember having them.)
Have you considered logging to the event log instead? If you want to write your own log, I suggest the user's local application data directory. Make a product directory under there. It's different on different versions of Windows.
On Vista, you cannot put files like this under c:\program files. You will run into a lot of problems with it.
In .NET, you can find out this folder with this:
Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData)
And the Event Log is fairly simple to use too:
http://msdn.microsoft.com/en-us/library/system.diagnostics.eventlog.aspx
Text files are great for a server application (you did say Windows 2003). You should have a separate log file for each server application; the location is really a matter of convention to agree with administrators. E.g. for ASP.NET apps I've often seen them placed on a separate disk from the application, under a folder structure that mimics the virtual directory structure.
For client apps, one disadvantage of text files is that a user may start multiple copies of your application (unless you've taken specific steps to prevent this). So you have the problem of contention if multiple instances attempt to write to the same log file. For this reason I would always prefer the Windows Event Log for client apps. One caveat is that you need to be an administrator to create an event log - this can be done e.g. by the setup package.
If you do use a file, I'd suggest using the folder Environment.SpecialFolder.LocalApplicationData rather than SpecialFolder.ApplicationData as suggested by others. LocalApplicationData is on the local disk: you don't want network problems to stop you from logging when the user has a roaming profile. For a WinForms application, use Application.LocalUserAppDataPath.
In either case, I would use a configuration file to decide where to log, so that you can easily change it. E.g. if you use Log4Net or a similar framework, you can easily configure whether to log to a text file, event log, both or elsewhere (e.g. a database) without changing your app.
The standard location(s) are:
C:\Documents and Settings\All Users\Application Data\MyApp
or
C:\Documents and Settings\%Username%\Application Data\MyApp
(aka %UserProfile%\Application Data\MyApp) which would match your user level permission requirement. It also separates logs created by different users.
Using the .NET runtime, these can be built as:
AppDir = System.Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData)
or
AppDir = System.Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData)
followed by:
MyAppDir = IO.Path.Combine(AppDir, "MyApp")
(which, hopefully, maps to Vista profiles too).
Personally, I would suggest using the Windows event log; it's great. If you can't, then write the file to the ApplicationData directory or the ProgramData (Application Data for all users on Windows XP) directory.
The Windows event log is definitely the way to go for logging of errors. You're not limited to the "Application" log as it's possible to create a new log target (e.g. "My Application"). That may need to be done as part of setup as I'm not sure if it requires administrative privileges or not. There's a Microsoft example in C# at http://support.microsoft.com/kb/307024.
Windows 2008 also has Event Log Forwarding which can be quite handy with server applications.
I agree with Lou on this, but I prefer to set this up in a configuration file like Joe said. You can use
<file value="${APPDATA}/Test/log-file.txt" />
("Test" could be whatever you want, or removed entirely) in the configuration file, which causes the log file to be written to "/Documents and Settings/LoginUser/Application Data/Test" on Windows XP and to "/Users/LoginUser/AppData/Roaming/Test" on Windows Vista (a fuller configuration sketch appears at the end of this answer).
I am just adding this as I spent way too much time figuring out how to make this work on Windows Vista...
This works as-is with Windows applications. To use logging in web applications, I found Phil Haack's blog entry on this to be a great resource:
http://haacked.com/archive/2005/03/07/ConfiguringLog4NetForWebApplications.aspx
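For context, a minimal log4net configuration using that file element might look like the sketch below; the appender name, folder, and rolling settings are illustrative:
<log4net>
  <appender name="RollingFile" type="log4net.Appender.RollingFileAppender">
    <!-- ${APPDATA} expands to the per-user Application Data folder -->
    <file value="${APPDATA}/Test/log-file.txt" />
    <appendToFile value="true" />
    <rollingStyle value="Size" />
    <maximumFileSize value="1MB" />
    <maxSizeRollBackups value="5" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
    </layout>
  </appender>
  <root>
    <level value="ERROR" />
    <appender-ref ref="RollingFile" />
  </root>
</log4net>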
%TEMP% is always a good location for logs I find.
Going against the grain here - it depends on what you need to do. Sometimes you need to manipulate the results, so log.txt is the way to go. It's simple, mutable, and easy to search.
Take an example from Joel. Fogbugz will send a log / dump of error messages via http to their server. You could do the same and not have to worry about the user's access rights on their drive.
I personally don't like to use the Windows event log where I am right now, because we do not have access to the production servers, so we would need to request access every time we wanted to look at the errors. It is not a speedy process, unfortunately, so your troubleshooting is completely halted while you wait for someone else. I also don't like that the entries kind of get lost among those from other applications. Sure, you can sort, but it's just a bit of a nuisance scrolling down. What you use will end up being a combination of personal preference and the limitations of the environment you are working in (log file, event log, or database).
Put it in the directory of the application. The users will need access to the folder to run and execute the application, and you can check write access on application startup.
The event log is a pain to use for troubleshooting, but you should still post significant errors there.
EDIT - You should look into the MS Application Blocks for logging if you are using .NET. They really make life easy.
Jeez Karma-killers. Next time I won't even offer a suggestion when the poster puts up an incomplete post.

Understanding IIS6 permissions, ACL, and identity--how can I restrict access?

When an ASP.NET application is running under IIS6.0 in Windows 2003 Server with impersonation, what user account is relevant for deciding file read/write/execute access privileges? I have two scenarios where I am trying to understand what access to grant/revoke. I thought the most relevant user is probably the identity specified in the Application Pool, but that doesn't seem to be the whole story.
The first issue concerns executing a local batch file via System.Diagnostics.Process.Start()--I can't do so when the AppPool is set to IWAM_WIN2K3WEB user, but it works fine if it is set to the Network Service identity. I of course made sure that the IWAM user has execute rights on the file.
The second involves writing to a file on the local hard drive. I'd like to be able to prevent this via the access control list in the folder properties, but even when I set all users on the folder to "read" (no users/groups with "write" at all), our ASP.NET app still writes out the file with no problem. How can it if it doesn't have write access?
Google search turns up bits and pieces but never the whole story.
what user account is relevant for [..] file read/write/execute access
As a rule: Always the user account the application/page runs under.
The IWAM account is pretty limited. I don't think it has permissions to start an external process. File access rights are irrelevant at this point.
If a user account (Network Service in your case) owns a file (i.e. has created it), it can do anything to this file, even if not explicitly allowed. Check who owns your file.
Process Monitor from Microsoft is a great tool to track down subtleties like this one.
A bit more searching reveals that the IWAM user isn't that well documented and we should stick with NETWORK SERVICE or a manually-supplied identity if we want to specify permissions for that user.