Azure container shared access signature expiring - azure-storage

I'm having trouble with Azure blobs and shared access signatures when they expire. I need to grant access to a blob for longer than one hour (a year), so I'm using a named container policy, but unfortunately the signature still expires after one hour.
SharedAccessPolicy sharedAccessPolicy = new SharedAccessPolicy();
sharedAccessPolicy.Permissions = SharedAccessPermissions.Read;
sharedAccessPolicy.SharedAccessStartTime = DateTime.UtcNow;
// No need to set SharedAccessExpiryTime here; it is supplied when the SAS is generated.
BlobContainerPermissions blobContainerPermissions = new BlobContainerPermissions();
blobContainerPermissions.SharedAccessPolicies.Add("default", sharedAccessPolicy);
container.SetPermissions(blobContainerPermissions);

Console.WriteLine("Press any key to continue....");
Console.ReadLine();

CloudBlob blob = container.GetBlobReference(path);
string sas = blob.GetSharedAccessSignature(new SharedAccessPolicy()
{
    // Add the expiry time only when creating the signed URL.
    SharedAccessExpiryTime = DateTime.UtcNow.AddDays(7),
}, "default");
Console.WriteLine(blob.Uri.AbsoluteUri + sas);
Process.Start(new ProcessStartInfo(blob.Uri.AbsoluteUri + sas));

Console.WriteLine("Press any key to continue....");
Console.ReadLine();
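The key point in the code above is that when a SAS references a stored access policy by its identifier, the expiry is either carried by the policy or supplied at signing time; it is not baked into an ad-hoc short-lived token. As a much-simplified illustration of how a SAS signature is produced, here is a Python sketch; the field order in the string-to-sign is hypothetical (the real order is fixed per storage service version), and the account key is a dummy:

```python
import base64
import hashlib
import hmac

def sign_sas(account_key_b64: str, string_to_sign: str) -> str:
    """HMAC-SHA256 over the string-to-sign with the base64-decoded account key,
    which is how Azure Storage SAS signatures are produced in general."""
    key = base64.b64decode(account_key_b64)
    sig = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(sig).decode("utf-8")

# Hypothetical, simplified field order -- see the service docs for the real one.
def build_string_to_sign(permissions, start, expiry, canonical_resource, policy_id):
    return "\n".join([permissions, start, expiry, canonical_resource, policy_id])

# With a stored access policy, the expiry field may be blank in the URL because
# the policy named by `policy_id` carries it on the server side.
sts = build_string_to_sign("r", "", "", "/myaccount/mycontainer", "default")
token = sign_sas(base64.b64encode(b"dummy-account-key").decode(), sts)
```

This is only meant to show why moving the expiry into the stored policy works: the server resolves "default" to the policy and applies its expiry when validating the signature.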

Related

What is the strategy of data-protection key rotation with multiple pods?

I used services.AddDataProtection().PersistKeysToFileSystem(path).ProtectKeysWithAzureKeyVault(authData) to encrypt the data-protection keys. In the 24 hours since deployment, no new data-protection key has been generated. This means that until the current data-protection key expires, no encryption is in place.
Now, to force data-protection key generation I can delete the latest key and restart the pods, but this leads to the race condition described here: https://github.com/dotnet/aspnetcore/issues/28475, so I would need to restart them again. Will users whose cookies were encrypted with the now-deleted key be logged out?
This also bothers me: what exactly happens when a data-protection key is rotated every 180 days? Users' cookies are encrypted with it, so if they are signed in, would their cookies no longer be valid?
Additionally, if one of, say, 6 pods generates a new data-protection key, when do the rest sync up? Is it possible to fetch a form from one pod and then submit it to another while they are using different data-protection keys?
How should I deal with all of that?
This issue is still open; there is a meta issue that links to other open issues on the subject:
https://github.com/dotnet/aspnetcore/issues/36157
I had the same problem, but instead of pods I have AWS Lambda functions.
I solved the problem by disabling automatic key generation:
services.AddDataProtection()
    .DisableAutomaticKeyGeneration();
And managing the keys myself. I keep at least two keys:
The default key. It expires 190 days after activation and is the default key for 180 days.
The next key. It activates 10 days before the current key expires, expires 190 days after its own activation, and will then be the default key for 180 days.
This is the code I execute before deploying the lambda function, and then once a month:
public class KeyringUpdater
{
    private readonly ILogger<KeyringUpdater> logger;
    private readonly IKeyManager keyManager;

    public KeyringUpdater(IKeyManager keyManager, ILogger<KeyringUpdater> logger)
    {
        this.logger = logger;
        this.keyManager = keyManager;
    }

    // The default key: already activated, not yet expired, not revoked.
    private IKey? GetDefaultKey(IReadOnlyCollection<IKey> keys)
    {
        var now = DateTimeOffset.UtcNow;
        return keys.FirstOrDefault(x => x.ActivationDate <= now && x.ExpirationDate > now && x.IsRevoked == false);
    }

    // The next key: activates while the given key is still valid and outlives it.
    private IKey? GetNextKey(IReadOnlyCollection<IKey> keys, IKey key)
    {
        return keys.FirstOrDefault(x => x.ActivationDate > key.ActivationDate && x.ActivationDate < key.ExpirationDate && x.ExpirationDate > key.ExpirationDate && x.IsRevoked == false);
    }

    public void Update()
    {
        var keys = this.keyManager.GetAllKeys();
        logger.LogInformation("Found {Count} keys", keys.Count);
        var defaultKey = GetDefaultKey(keys);
        if (defaultKey == null)
        {
            logger.LogInformation("No default key found");
            var now = DateTimeOffset.UtcNow;
            defaultKey = this.keyManager.CreateNewKey(now, now.AddDays(190));
            logger.LogInformation("Default key created. ActivationDate: {ActivationDate}, ExpirationDate: {ExpirationDate}", defaultKey.ActivationDate, defaultKey.ExpirationDate);
            keys = this.keyManager.GetAllKeys();
        }
        else
        {
            logger.LogInformation("Found default key. ActivationDate: {ActivationDate}, ExpirationDate: {ExpirationDate}", defaultKey.ActivationDate, defaultKey.ExpirationDate);
        }

        var nextKey = GetNextKey(keys, defaultKey);
        if (nextKey == null)
        {
            logger.LogInformation("No next key found");
            nextKey = this.keyManager.CreateNewKey(defaultKey.ExpirationDate.AddDays(-10), defaultKey.ExpirationDate.AddDays(180));
            logger.LogInformation("Next key created. ActivationDate: {ActivationDate}, ExpirationDate: {ExpirationDate}", nextKey.ActivationDate, nextKey.ExpirationDate);
        }
        else
        {
            logger.LogInformation("Found next key. ActivationDate: {ActivationDate}, ExpirationDate: {ExpirationDate}", nextKey.ActivationDate, nextKey.ExpirationDate);
        }
    }
}
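The selection rules in GetDefaultKey and GetNextKey can be sketched language-neutrally to show why the 10-day activation overlap leaves no rotation gap. The Python model below mirrors the same predicates; the key names and dates are illustrative, not from a real keyring:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Key:
    name: str
    activation: datetime
    expiration: datetime
    revoked: bool = False

def default_key(keys, now):
    # Mirrors GetDefaultKey: activated, not expired, not revoked.
    return next((k for k in keys if k.activation <= now < k.expiration and not k.revoked), None)

def next_key(keys, current):
    # Mirrors GetNextKey: activates during the current key's lifetime and outlives it.
    return next((k for k in keys
                 if current.activation < k.activation < current.expiration
                 and k.expiration > current.expiration and not k.revoked), None)

t0 = datetime(2024, 1, 1)
current = Key("current", t0, t0 + timedelta(days=190))
# The next key activates 10 days before the current one expires.
upcoming = Key("next", current.expiration - timedelta(days=10),
               current.expiration - timedelta(days=10) + timedelta(days=190))
keys = [current, upcoming]

# Just before the current key expires, both lookups succeed -> no gap in coverage.
now = current.expiration - timedelta(days=1)
```

Because the next key is created well before it is needed, every pod (or Lambda instance) has time to observe it in the shared repository before it becomes the default, which is what avoids the race condition linked above.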

Use Custom CNG provider to get Private key from the HSM

I have our own CNG provider. I'm using C# with .NET Framework 4.6.1 on Windows 7, together with the CLR Security (clrsecurity) library.
string fp = "223298a5c7c9f78a42d83a5ffbxxxxxxxx";
//string fp = "331ffa497d90d19446171f85xxxxxxxx"; // MS
// Load the certificate with the specified serial number.
X509Store store = new X509Store(StoreLocation.CurrentUser);
store.Open(OpenFlags.ReadOnly);
X509Certificate2Collection certificates = store.Certificates.Find(X509FindType.FindBySerialNumber, fp, false);
// Check that exactly one certificate has been found.
if (certificates.Count != 1)
{
    throw new Exception("The certificate with the serial number " + fp + " could not be found.");
}
X509Certificate2 cert = certificates[0];
CngKey cngKey = null;
if (cert.HasCngKey())
{
    //rsa = cert.GetRSAPrivateKey();
    cngKey = cert.GetCngPrivateKey();
}
Properties of cngKey: (a screenshot of the CngKey properties was attached here.)
The problem is that I am not able to set the provider name on the CngKey object.
So how can I use the clrsecurity DLL with a non-Microsoft KSP?

Azure Pack REST API Authentication

After hours of searching Microsoft's messy API documentation for its products, I am still nowhere on how to authenticate a REST API request in the Windows Azure Pack distribution.
Primarily I want to create an API that automates deploying a virtual machine, but I can't find any documentation on how to acquire the authentication token needed to access the resources.
Some documentation mentions ADFS, but provides no reference to the ADFS REST API for authentication.
And I don't want to use ADFS in the first place; I want to authenticate against the Azure Pack tenant and admin interfaces.
In conclusion, if anyone can provide any help on REST API authentication, it will make my day.
Thanks in advance.
You can use the following PowerShell to acquire an access token.
Add-Type -Path 'C:\Program Files\Microsoft Azure Active Directory Connect\Microsoft.IdentityModel.Clients.ActiveDirectory.dll'
$tenantID = "<the tenant id of you subscription>"
$authString = "https://login.windows.net/$tenantID"
# It must be an MFA-disabled admin.
$username = "<the username>"
$password = "<the password>"
# The resource can be https://graph.windows.net/ if you are using graph api.
# Or, https://management.azure.com/ if you are using ARM.
$resource = "https://management.core.windows.net/"
# This is the common client id.
$client_id = "1950a258-227b-4e31-a9cf-717495945fc2"
$creds = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.UserCredential" `
-ArgumentList $username,$password
$authContext = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext" `
-ArgumentList $authString
$authenticationResult = $authContext.AcquireToken($resource,$client_id,$creds)
# An Authorization header can be formed like this.
$authHeader = $authenticationResult.AccessTokenType + " " + $authenticationResult.AccessToken
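Under the hood, AcquireToken with a username and password performs an OAuth2 resource-owner-password-credentials request against the Azure AD (v1) token endpoint. A sketch of the equivalent raw request in Python; the tenant id and credentials are placeholders, and the request body is built but deliberately not sent here:

```python
from urllib.parse import urlencode

tenant_id = "<the tenant id of your subscription>"
token_endpoint = f"https://login.windows.net/{tenant_id}/oauth2/token"

# Same parameters the ADAL call supplies: resource, client id, user credentials.
body = urlencode({
    "grant_type": "password",
    "resource": "https://management.core.windows.net/",
    "client_id": "1950a258-227b-4e31-a9cf-717495945fc2",
    "username": "<the username>",
    "password": "<the password>",
})
# POST `body` to `token_endpoint` with Content-Type
# application/x-www-form-urlencoded; the JSON response carries access_token and
# token_type, which combine into the Authorization header exactly as the
# PowerShell above does.
```

As with the PowerShell version, this only works for an account without MFA, since the password grant has no interactive step.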
I have been doing a similar job.
static string GetAspAuthToken(string authSiteEndPoint, string userName, string password)
{
    var identityProviderEndpoint = new EndpointAddress(new Uri(authSiteEndPoint + "/wstrust/issue/usernamemixed"));
    var identityProviderBinding = new WS2007HttpBinding(SecurityMode.TransportWithMessageCredential);
    identityProviderBinding.Security.Message.EstablishSecurityContext = false;
    identityProviderBinding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;
    identityProviderBinding.Security.Transport.ClientCredentialType = HttpClientCredentialType.None;

    var trustChannelFactory = new WSTrustChannelFactory(identityProviderBinding, identityProviderEndpoint)
    {
        TrustVersion = TrustVersion.WSTrust13,
    };
    // This line is only needed if the installation uses self-signed certificates.
    trustChannelFactory.Credentials.ServiceCertificate.SslCertificateAuthentication =
        new X509ServiceCertificateAuthentication() { CertificateValidationMode = X509CertificateValidationMode.None };
    trustChannelFactory.Credentials.SupportInteractive = false;
    trustChannelFactory.Credentials.UserName.UserName = userName;
    trustChannelFactory.Credentials.UserName.Password = password;

    var channel = trustChannelFactory.CreateChannel();
    var rst = new RequestSecurityToken(RequestTypes.Issue)
    {
        AppliesTo = new EndpointReference("http://azureservices/TenantSite"),
        TokenType = "urn:ietf:params:oauth:token-type:jwt",
        KeyType = KeyTypes.Bearer,
    };
    RequestSecurityTokenResponse rstr = null;
    SecurityToken token = channel.Issue(rst, out rstr);
    var tokenString = (token as GenericXmlSecurityToken).TokenXml.InnerText;
    var jwtString = Encoding.UTF8.GetString(Convert.FromBase64String(tokenString));
    return jwtString;
}
The parameter "authSiteEndPoint" is your tenant authentication site URL; the default port is 30071.
You can find some resources here:
https://msdn.microsoft.com/en-us/library/dn479258.aspx
The sample program "SampleAuthApplication" there should answer your question.

How to specify Server side encryption with S3 pre signed urls?

This is an S3 issue, so I am posting it here and not on the Salesforce Stack Exchange.
Basically, my Salesforce code generates pre-signed URLs for S3, which the front end consumes to upload and download files.
This works perfectly. Now we need to specify SSE (server-side encryption).
Based on the documentation, SSE-S3 does not work with pre-signed URLs:
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
So I have to use SSE with customer-provided keys (SSE-C):
http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
The text there says:
When creating a presigned URL, you must specify the algorithm using the x-amz-server-side-encryption-customer-algorithm in the signature calculation.
This is how I am calculating my signature, and it works fine for uploading and downloading files to the bucket, sans encryption:
public string getStringToSign() {
    // ==== CONSTRUCT THE STRING TO SIGN ====
    string stringToSign = S3Connection.AWS_HEADER_ENCRYPTION_SCHEME + '\n'
        + this.dateStampISO + '\n'
        + this.dateStampYYYYMMDD + '/' + this.awsRegion + '/s3/aws4_request' + '\n'
        + EncodingUtil.convertToHex(Crypto.generateDigest(S3Connection.AWS_ENCRYPTION_SCHEME, blob.valueOf(this.getRequestString())));
    return stringToSign;
}

public blob getSigningKey() {
    // ==== GENERATE THE SIGNING KEY ====
    Blob dateKey = Crypto.generateMac('hmacSHA256', Blob.valueOf(this.dateStampYYYYMMDD), Blob.valueOf('AWS4' + this.accessSecret));
    Blob dateRegionKey = Crypto.generateMac('hmacSHA256', Blob.valueOf(this.awsRegion), dateKey);
    Blob dateRegionServiceKey = Crypto.generateMac('hmacSHA256', blob.valueOf(this.awsServiceName), dateRegionKey);
    Blob signingKey = Crypto.generateMac('hmacSHA256', blob.valueOf('aws4_request'), dateRegionServiceKey);
    //Blob signingKey2 = Crypto.generateMac('hmacSHA256', blob.valueOf('x-amz-server-side-encryption-customer-algorithm'), signingKey);
    return signingKey;
}

public string getSignature() {
    // ==== GENERATE THE SIGNATURE ====
    return this.generateSignature(this.getStringToSign(), this.getSigningKey());
}

public string generateSignature(string stringToSign, blob signingKey) {
    blob signatureBlob = Crypto.generateMac('hmacSHA256', blob.valueOf(stringToSign), signingKey);
    return EncodingUtil.convertToHex(signatureBlob);
}
So my question is: how do I add x-amz-server-side-encryption-customer-algorithm to this signature calculation?
Thanks in advance!
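For what it's worth, in Signature Version 4 the SSE-C header is not mixed into the signing key at all (the commented-out signingKey2 line above would be wrong). It belongs in the canonical request, and in X-Amz-SignedHeaders for a pre-signed URL, which is then hashed into the string to sign. Here is a Python sketch of the same key derivation, checked against the worked example in the AWS SigV4 documentation; the header lines at the end are illustrative:

```python
import hashlib
import hmac

def hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def signing_key(secret: str, datestamp: str, region: str, service: str) -> bytes:
    # Same chain as getSigningKey() above: date -> region -> service -> aws4_request.
    k_date = hmac_sha256(("AWS4" + secret).encode("utf-8"), datestamp)
    k_region = hmac_sha256(k_date, region)
    k_service = hmac_sha256(k_region, service)
    return hmac_sha256(k_service, "aws4_request")

# Worked example from the AWS SigV4 docs ("Example: deriving a signing key").
key = signing_key("wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY",
                  "20150830", "us-east-1", "iam")

# The SSE-C header goes into the canonical request, not the key derivation:
canonical_headers = ("host:examplebucket.s3.amazonaws.com\n"
                     "x-amz-server-side-encryption-customer-algorithm:AES256\n")
signed_headers = "host;x-amz-server-side-encryption-customer-algorithm"
# canonical_request = method + URI + query + canonical_headers + signed_headers
# + payload hash, and its SHA-256 becomes the last line of the string to sign.
```

With the header included there, the final signature is still one HMAC over the string to sign, exactly as generateSignature() does; the client must then actually send the x-amz-server-side-encryption-customer-* headers on the upload or download request.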

Why isn't Opc.Ua.UserIdentity sending the password cleanly to the OPC server?

I have a problem with the UserIdentity(user, password) constructor.
My password is 4 characters long, but when it arrives at the server it is 36 characters long: the first 4 characters are my password, and the rest looks like random garbage.
The Opc.Ua.Client.dll and Opc.Ua.Core.dll have version 1.0.238.1.
What is causing this, and what can I do to send the password correctly?
UPDATE
ApplicationConfiguration configuration = Helpers.CreateClientConfiguration();
X509Certificate2 clientCertificate = configuration.SecurityConfiguration.ApplicationCertificate.Find();
configuration.CertificateValidator.CertificateValidation += new CertificateValidationEventHandler(CertificateValidator_CertificateValidation);

EndpointDescription endpointDescription = Helpers.CreateEndpointDescription(Url);
EndpointConfiguration endpointConfiguration = EndpointConfiguration.Create(configuration);
endpointConfiguration.OperationTimeout = 300000;
endpointConfiguration.UseBinaryEncoding = true;

ConfiguredEndpoint endpoint = new ConfiguredEndpoint(null, endpointDescription, endpointConfiguration);
BindingFactory bindingFactory = BindingFactory.Create(configuration);
if (endpoint.UpdateBeforeConnect)
{
    endpoint.UpdateFromServer(bindingFactory);
    endpointDescription = endpoint.Description;
    endpointConfiguration = endpoint.Configuration;
}

SessionChannel channel = SessionChannel.Create(
    configuration,
    endpointDescription,
    endpointConfiguration,
    bindingFactory,
    clientCertificate,
    null);
m_Session = new Session(channel, configuration, endpoint);
m_Session.ReturnDiagnostics = DiagnosticsMasks.All;
m_Session.KeepAlive += new KeepAliveEventHandler(Session_KeepAlive);
m_Session.Notification += new NotificationEventHandler(m_Session_Notification);

UserIdentity identity;
if (userName == null || userName.Length == 0)
{
    identity = new UserIdentity();
}
else
{
    identity = new UserIdentity(userName, password);
}
m_Session.Open("ATF UA client", identity);
log.Debug("Connect ok");
log.Debug("Connect ok");
The rest is not garbage at all. It should be the same ServerNonce the server sent to the OPC UA client in the CreateSessionResponse.
According to the OPC UA specification, the decrypted UserIdentityToken has this format:
Length - Byte[4] => the length of your password
TokenData - Byte[*] => your password
ServerNonce - Byte[*]
The password appears to be 36 bytes long because OPC UA servers mostly use a 32-byte ServerNonce and your password is 4 bytes long.
You should also verify that the ServerNonce carried in that UserIdentityToken is the same as the one you provided in your CreateSessionResponse, and strip it off before using the password.
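The byte layout described above can be sketched as follows. This is an illustrative model of the token plaintext only, not a use of the real OPC UA stack; a little-endian length prefix is assumed (as in OPC UA binary encoding), and it is assumed here to cover the password plus the nonce:

```python
import struct

def build_token_plaintext(password: bytes, server_nonce: bytes) -> bytes:
    # Length (4 bytes, little-endian) + TokenData (password) + ServerNonce.
    # Assumption: the length prefix covers both the password and the nonce.
    data = password + server_nonce
    return struct.pack("<I", len(data)) + data

def split_token_plaintext(blob: bytes, nonce_len: int):
    # What a correct server does: read the length, then strip the trailing nonce.
    (length,) = struct.unpack_from("<I", blob, 0)
    data = blob[4:4 + length]
    return data[:-nonce_len], data[-nonce_len:]

nonce = bytes(32)                      # servers commonly use a 32-byte nonce
token = build_token_plaintext(b"1234", nonce)
password, received_nonce = split_token_plaintext(token, len(nonce))
# A server that forgets to strip the nonce would "see" a 36-byte password,
# which matches the symptom in the question (4 password bytes + 32 nonce bytes).
```

So the client in the question is behaving correctly; the server side needs to compare the trailing nonce against the one it issued and remove it before interpreting the password.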