Azure Cache not persisting Session State across VIP swaps? - asp.net-mvc-4

As a follow-up to this post: Enabling co-located Session Caching in an Azure Cloud Service - I'm trying to store session state in Azure Cache to persist sessions between VIP swaps. Quoted from the answer:
To fix this problem, I'd like you to try the new Cache Service
(Preview). This way you create a dedicated cache for your subscription
so that you can use it across cloud service deployments, virtual
machines and websites.
I've set up an Azure Cache (Preview) instance, used its endpoint and primary access key in my web.config, and deployed to my Azure Cloud Service Staging slot.
I then logged in using Forms auth, and redeployed to the same slot. My credentials were persisted! This was great to see.
But then I VIP swapped to Production, logged in the same way on the production instance, redeployed to Staging, VIP swapped again, and refreshed, expecting to remain logged in. It didn't work - my session was lost on both Production and Staging.
I've followed the instructions found here:
http://www.windowsazure.com/en-us/manage/services/cache/net/how-to-in-role-cache/#getting-started-cache-role-instance
What could be causing this? No exceptions are thrown, and my access key works (verified by substituting a bogus key and getting an exception). I'm not sure what's going on. Config sections in web.config:
<sessionState mode="Custom" customProvider="AFCacheSessionStateProvider" xdt:Transform="Insert">
<providers>
<add name="AFCacheSessionStateProvider" type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache" cacheName="default" dataCacheClientName="default" applicationName="AFCacheSessionState"/>
</providers>
</sessionState>
And:
<dataCacheClient name="default">
<autoDiscover isEnabled="true" identifier="mysite.cache.windows.net" />
<securityProperties mode="Message" sslEnabled="false">
<messageSecurity authorizationInfo="{my key}" />
</securityProperties>
</dataCacheClient>
As for the timeout policy - I have it set to never expire, with eviction enabled. I'm one of a handful of users and the cache is only storing small session data in 128MB of space, so I don't think it's related to expiry.
I also noticed that the docs have no entry for the applicationName attribute I use above. I tried removing it and re-testing, to no avail - my Production session is still lost upon VIP swap.
What am I doing wrong?
Update:
From a Microsoft forum post:
I was able to reproduce the issue. I am investigating.

Forms authentication is not based on session state. It relies only on client-side cookies. Cookies are encrypted and validated with keys specified in the machineKey section of web.config.
The default config is:
<machineKey validationKey="AutoGenerate,IsolateApps"
decryptionKey="AutoGenerate,IsolateApps"
validation="SHA1" decryption="Auto" />
AutoGenerate means that each physical machine gets its own decryptionKey, so cookies generated by the production VM will not be accepted by the staging VM.
After a VIP swap, all cookies set by the old production VM will be rejected by the new production VM (the ex-staging VM), causing all users to be logged out.
You need to specify the machineKey values explicitly to force Forms Auth to generate cookies that are valid for both the new and old production VMs (see the Web Farm Deployment Considerations section of How To: Configure MachineKey).
Check this online tool for machineKey section generation: http://aspnetresources.com/tools/machineKey.
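For illustration, an explicit entry might look like the sketch below - the key values are obvious placeholders for layout only, so generate real ones with the tool above and deploy the same values to every instance:
<!-- placeholder keys, for illustration only: generate your own and use identical
     values in both the Staging and Production deployments -->
<machineKey validationKey="0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF"
            decryptionKey="0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF"
            validation="SHA1" decryption="AES" />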
UPD: There is a related note in Manage Deployments in Windows Azure/Managing ASP.NET machine keys for IIS:
Windows Azure automatically manages the ASP.NET machineKey for
services deployed using IIS. If you routinely use the VIP Swap
deployment strategy, you should manually configure the ASP.NET machine
keys.

Related

Are "normal" environment variables more secure than IIS environment variables?

I have an ASP.NET Core website running on IIS. I need to store some passwords that the site needs to access in production. No paid password-storage systems are available to me, so I chose to store my passwords in environment variables. On the production machine I have:
a service account my_prod_service_account
an application pool MyProdAppPool that runs under my_prod_service_account
a website MyDotNetCoreSite that runs in the MyProdAppPool
Approach 1: Normal Environment Variables
I log in to the production machine as my_prod_service_account and set environment variables for that user in PowerShell:
[Environment]::SetEnvironmentVariable("Pwd1", "MyPrecioussss1", "User");
[Environment]::SetEnvironmentVariable("Pwd2", "MyPrecioussss2", "User");
After this MyDotNetCoreSite can read these environment variables.
Approach 2: system.webServer\aspNetCore Environment Variables
Something similar can be achieved with %WINDIR%\system32\inetsrv\config\applicationHost.config (the IIS configuration file) on the production machine. It can be edited manually or through the UI, but in the end it looks like this:
<configuration>
  <location path="MyDotNetCoreSite">
    <system.webServer>
      <aspNetCore>
        <environmentVariables>
          <environmentVariable name="Pwd1" value="MyPrecioussss1" />
          <environmentVariable name="Pwd2" value="MyPrecioussss2" />
        </environmentVariables>
      </aspNetCore>
    </system.webServer>
  </location>
</configuration>
After an iisreset, MyDotNetCoreSite can read these values as environment variables.
Question
I want to change my password storage method from Approach 1 to Approach 2. The former sets environment variables per user, the latter per site (which I think is neater). But I can't find enough documentation to judge whether Approach 2 has the same level of security as Approach 1. Setting a "normal" environment variable stores it in the registry at HKEY_Users\my_prod_service_account SID\Environment\Pwd1. Accessing the registry usually requires elevated permissions, and if someone breaks into it, we will have bigger problems than hackers knowing Pwd1. Is applicationHost.config as secure as the registry? Can I confidently store a password in it?
I can only offer some head-scratching questions/concerns:
Out of curiosity, have you run set at a command line and checked whether any of the passwords are listed in the output in plaintext?
Also, if you store all the passwords in one file as in Approach 2, that file makes a nice honeypot. I do not know how well the encryption that Lex Li mentioned works.

Which matcher should I use for a service hosted on localhost:{port} on a local Service Fabric cluster?

I have a question regarding Service Fabric and Traefik.
I have managed to successfully deploy the Traefik application to a local cluster (and actually out in my own Azure infrastructure too). This is alongside a service (MyService) in another application that I am trying to have Traefik (the reverse proxy, RP) sit in front of.
I can see the Traefik dashboard, and I can see a backend (seemingly indicating that it has successfully called the SF management API for my application and service).
I can also see an accompanying frontend, with some routing rules (matchers). However, for the life of me, I can't get a simple request through the RP to my service.
I can hit my service directly. Service Fabric (SF) says it's in a good state also.
My local SF cluster isn't secured, so that simplifies things somewhat with the .toml setup, etc.
My service is hosted on localhost:9025 (the endpoint is exposed in the service manifest, and Kestrel in the API is set up to use the same port).
Traefik is set up on port 5000 (as opposed to 80 - see below).
To hit a simple version check, explicitly, I would use http://localhost:9025/myservice/myaction/v1/version
Doing http://localhost:5000/myservice/myaction/v1/version gets me either a 404 or 503 (depending on what I'm doing with matcher/modifier).
I have modified the Traefik endpoint from port 80 to 5000 too, just to switch it up and avoid any port conflicts (I don't have any IIS sites up as it stands). Netstat confirms that nothing else is using the port either.
The matcher in the Service Manifest looks like this:
<Extensions>
  <Extension Name="Traefik">
    <Labels xmlns="http://schemas.microsoft.com/2015/03/fabact-no-schema">
      <Label Key="traefik.frontend.rule">PathPrefix:/myservice</Label>
      <Label Key="traefik.enable">true</Label>
    </Labels>
  </Extension>
</Extensions>
One last thing that would have really helped would be the ability to see the "resolved" requests - that is, a request that comes into the RP and is then matched or modified, so that I can see what the RP actually resolves the request to. Perhaps this already exists, but tweaking various logging settings didn't yield this info.
OK, so there is nothing wrong with the service manifest as far as Traefik is concerned; rather, it is the way the Endpoint is exposed in the manifest that Traefik does not understand.
This won't work:
<Endpoint Name="MyService" Protocol="http" Type="Input" Port="9025" />
However, this will:
<Endpoint Name="MyService" UriScheme="http" Port="9025" />
(The other attributes I omitted can still be added, but this would seem to be the minimum needed for Traefik to enumerate it as a viable backend - see the sketch below for where it sits in the manifest.)
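For context, assuming the standard ServiceManifest.xml layout, the endpoint declaration above lives under Resources/Endpoints; the name and port here are the ones from the question:
<Resources>
  <Endpoints>
    <!-- UriScheme plus Port is what lets Traefik build a usable backend URI -->
    <Endpoint Name="MyService" UriScheme="http" Port="9025" />
  </Endpoints>
</Resources>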
A clear indication of the wiring now appears in the Traefik logs (this was previously absent):
Wiring frontend frontend-fabric:/MyApp/MyService to entryPoint http
And in the UI, the server URI is now displayed for the backend; again, it was not before.
Forgive me if this is documented somewhere, but I couldn't find anything - other than that I did consider the missing server URI to be an issue, based on a screenshot on the setup website for Service Fabric and Traefik.
Another symptom is that the backend, if not wired up correctly, is displayed in red; when correctly configured, it is green.
As I say, this is all probably very obvious, but I lost many hours on this simple amendment.

Missing configuration for the issuer of security tokens error

I inherited an existing project without its development environment. I have UAT code and a backup of the Production database. I can run the site locally via Visual Studio, but I have hit an authentication problem trying to set up a fresh standalone DEV server on AWS (single server, no load balancer). The documentation indicates the Prod server is a dual-server setup with a load balancer.
The front end site pages do display, although some search is not working. On trying to log into the backend pages, Chrome returns "The xxx page isn't working. xxx redirected you too many times." Using developer tools, I can see the page redirects back and forth between SWT?realm=... and sitefinity?wrap_defalted=true&wrap_access_token... On the second redirect response header there is "X-Authentication-Error:Missing configuration for the issuer of security tokens 'https://xxx/Sitefinity/Authenticate/SWT' "
I tried different values in the web.config lines:
<federatedAuthentication>
<wsFederation passiveRedirectEnabled="true" issuer="http://localhost" realm="http://localhost" requireHttps="true"/>
<cookieHandler requireSsl="false"/>
</federatedAuthentication>
but that actually made things worse so I have reverted.
I checked all the settings mentioned in http://docs.sitefinity.com/administration-switch-to-claims-based-authentication and they seem to be set correctly. I don't really know what else I can check to get this working.
I found http://docs.sitefinity.com/administration-configure-security, but it does not seem like these settings are set (I don't have access to the Prod server, so I can't confirm whether it is actually set up with load balancing). I am currently using a 30-day trial license, so I am not sure whether this is contributing to the problem. The official license is in the process of being transferred by the client. The domain name associated with the official license is different from the domain my new server is currently running on.
I am also running version 8 code on a version 9 install of Sitefinity. I wanted to get it working before I tried to upgrade the code. I think there was also an assembly-load manifest mismatch when I tried upgrading my local version.
Found the solution: Don't mess with the SecurityConfig.config file.
<securityTokenIssuers>
<add key="B886AA7BFB5515BA63F577A44BBEB5C7AE674035514D128BC397346B11F4C97A" encoding="Hexadecimal" membershipProvider="Default" realm="http://localhost" />
</securityTokenIssuers>
<relyingParties>
<add key="B886AA7BFB5515BA63F577A44BBEB5C7AE674035514D128BC397346B11F4C97A" encoding="Hexadecimal" realm="http://localhost" />
</relyingParties>
Even though it is running on a server, the above lines should still point to localhost. It seems like these only need to be edited if you have a multi-server setup with an entirely separate STS.
I initially changed it to match the new domain name, but after some experimentation with adding localhost and HTTP variations, it seems to work best with just localhost.
Even when I changed the web.config entry above to use the new domain as the issuer instead of localhost, and the SecurityConfig.config to specify only the new domain as the realm, it didn't seem to work. I guess the authentication must try to hit localhost specifically.

Why does my session ID change several times on a farm server?

I have a web application (MVC 4, .NET 4.5) on a web farm server, and one thing is confusing me: my session ID changes without reason, and I lose all the user data I stored in session state. It works fine on my local machine.
I use this config in my web.config:
<sessionState mode="StateServer" customProvider="DefaultSessionProvider"
              cookieName="abcd" timeout="120">
  <providers>
    <add name="DefaultSessionProvider"
         type="System.Web.Providers.DefaultSessionStateProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
         connectionStringName="DefaultConnection" />
  </providers>
</sessionState>
And my machine key is this:
<machineKey compatibilityMode="Framework45"
validationKey="702C65CF39B1ED514AC4B92326C3A84B3D88990DDF784AA0895659B528ED95F8CA0A9CD1AF5ED92A2599362684CB8D204AC30D07E6BF0CF65194A5129"
decryptionKey="1C49E6BA2F9423387FBC91389A0C5C8D06B61875BCE4916A40474ED"
validation="SHA1" decryption="AES" />
My session timeout is 120 minutes, and I cannot find out why this is happening to my web application. I use my logging class to see what is happening, and I'm sure the session ID is changing.
For example, when a user goes to another area, or when a user wants to pay via an online bank payment, I redirect them to the bank's page; when the user is redirected back to my site from the bank in the same window (I do not open another tab or window for this), the session ID has changed.
I store small data, like the user ID, in my session.
I use this syntax to store session data:
HttpContext.Current.Session[System.Web.HttpContext.Current.Session.SessionID] = "abc";
And I read it with this syntax:
var myval = HttpContext.Current.Session[System.Web.HttpContext.Current.Session.SessionID];
It is as if the server does not use my config and does its own thing.
I want to know: is it possible that some configuration is set on my farm server that causes it to ignore my config and work on its own?
To extend on the comment, it looks like you have to configure your web apps correctly as per the following Microsoft Support article:
http://support.microsoft.com/kb/325056
With server-side state management, if a client switches servers in the middle of the session, the new server does not necessarily have access to the client's state information (because it is stored on a different server). You can use multiple servers with server-side state management, but you need either intelligent load balancing (to always forward requests from a client to the same server) or centralized state management (where state is stored in a central database to which all web servers have access).
Make sure you have the same MachineKey in all your web servers or else they can't share session data.
The objects you store in the session need to be serializable.
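For what it's worth, if sticky load balancing isn't an option, the centralized state management the article mentions can be as simple as switching to SQL Server session state. A minimal sketch, assuming the ASPState database has been created with aspnet_regsql.exe and with MySessionDbServer as a placeholder for your own server:
<sessionState mode="SQLServer"
              sqlConnectionString="Data Source=MySessionDbServer;Integrated Security=SSPI;"
              cookieName="abcd" timeout="120" />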

GlassFish load balancer principle of operation

I have configured a cluster with two instances on GlassFish 3.1.1, with iPlanet Web Server as a load balancer (on the same machine). For the test application provided with GlassFish everything works OK (and that application has session replication enabled).
But when I try to make my own application work, the following situation takes place: it responds when I send requests to the ports of the particular instances (that is, 28080 and 28081), but when I send a request through the load balancer (port 81) I get a 404 error. My application does not have session replication enabled yet, but I can still connect to each instance directly, which creates a separate session on each. I would like to get a similar effect through the load balancer.
So I would like to determine:
Is session replication strictly required for the load balancer to work?
Does anyone know any other reasons for this error?
Message from iPlanet log:
[23/Aug/2012:05:44:16] failure ( 4120) myHost: for host 127.0.0.1 trying to GET /myApp/login.jsp, service-j2ee reports: PWC6117: File "c:/webserver7/https-myHost/docs/myApp/login.jsp" not found
Additional conclusions (81 is the http-listener port on iPlanet):
When I send GET http://localhost:81/testApp, the load balancer passes it to GlassFish and returns the correct site. But when I try the same with my own application, GET http://localhost:81/myApp, iPlanet looks for the site in its own resources (the docs directory, as in the log above).
fragment of myHost-obj.conf:
<Object name="default">
AuthTrans fn="match-browser" browser="*MSIE*" ssl-unclean-shutdown="true"
NameTrans fn="name-trans-passthrough" name="lbplugin" config-file="C:/WebServer7/https-myHost/config/loadbalancer.xml"
NameTrans fn="assign-name" name="perf" from="/.perf"
NameTrans fn="ntrans-j2ee" name="j2ee"
NameTrans fn="pfx2dir" from="/mc-icons" dir="C:/WebServer7/lib/icons" name="es-internal"
PathCheck fn="uri-clean"
PathCheck fn="check-acl" acl="default"
PathCheck fn="find-pathinfo"
PathCheck fn="find-index-j2ee"
PathCheck fn="find-index" index-names="index.html,home.html,index.jsp"
ObjectType fn="type-j2ee"
ObjectType fn="type-by-extension"
ObjectType fn="force-type" type="text/plain"
Service method="(GET|HEAD)" type="magnus-internal/directory" fn="index-common"
Service method="(GET|HEAD|POST)" type="*~magnus-internal/*" fn="send-file"
Service method="TRACE" fn="service-trace"
Error fn="error-j2ee"
AddLog fn="flex-log"
</Object>
First, if you are running the Load Balancer plugin, then you may have a support contract (a GlassFish license is required before you put the plugin into production). If so, calling support is a good option.
To answer your first question, session replication is not required for the Load Balancer to work.
As a shameless plug, I have a five-part YouTube series on setting this up. You can skip the videos on downloading and installing and go straight to setup/configuration/testing. Based on what you describe, I suspect the issue isn't the plugin itself but the loadbalancer.xml configuration. Look at loadbalancer.xml and see if myApp is configured there - a rough sketch of what that entry can look like is below.
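Roughly (the instance names and health-checker values here are illustrative; the hosts, ports and context root are taken from the question):
<loadbalancer>
  <cluster name="cluster1">
    <instance name="instance1" enabled="true" disable-timeout-in-minutes="60" listeners="http://localhost:28080"/>
    <instance name="instance2" enabled="true" disable-timeout-in-minutes="60" listeners="http://localhost:28081"/>
    <!-- if the context root is not listed here, the plugin does not claim /myApp
         and iPlanet falls back to serving it from its own docroot -->
    <web-module context-root="myApp" enabled="true" disable-timeout-in-minutes="30" error-url=""/>
    <health-checker url="/" interval-in-seconds="30" timeout-in-seconds="10"/>
  </cluster>
</loadbalancer>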
Hope this helps.