I deleted a key vault that was used by a storage account for encryption.
Now when I try to change anything in the Encryption section of the storage account (such as changing the encryption type or using a new key), I get:
The operation failed because the specified key vault key 'https://dev-certs2.vault.azure.net/keys/<my-previous-key/xxxxxxxxxxxxxxxx' was not found
Is there a way to change it without having to create a new storage account?
By default, soft delete is enabled when you create a key vault, with a default retention period of 90 days. If your key vault was deleted within the last 90 days, you can follow the steps below; if more than 90 days have passed, there seems to be no way to do this without creating a new storage account (not 100% sure, you may need to contact Azure support).
1. Use PowerShell to check whether the key vault is in the Removed state; if there is no output, the 90-day retention period has been exceeded.
Get-AzKeyVault -VaultName joyk -Location <the same location as the storage account> -InRemovedState
2. Use PowerShell to recover the previously deleted key vault.
Undo-AzKeyVaultRemoval -VaultName joyk -ResourceGroupName <group-name> -Location <the same location as the storage account>
3. Navigate to the storage account in the portal -> Encryption; you will now be able to change the encryption type or use a new key. After configuring it, you can delete the key vault again.
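The same change can also be scripted; below is a minimal PowerShell sketch, assuming placeholder names for the account, vault, and key, and assuming the storage account's identity still has access to the new vault:
#Point the storage account at a new key vault key (all names are placeholders)
$vault = Get-AzKeyVault -VaultName '<new-vault-name>'
Set-AzStorageAccount -ResourceGroupName '<group-name>' -Name '<storage-account-name>' -KeyvaultEncryption -KeyName '<key-name>' -KeyVersion '<key-version>' -KeyVaultUri $vault.VaultUri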
I am trying to clone a SQL VM to another resource group.
Cloning a normal VM is simple:
Create disk snapshots (OS & data disks)
Create disks from the snapshots
Create the VM from the managed disks
The image I have is (image: SQL Server 2019 Standard on Windows Server 2022 - Gen2). Following the above steps only creates a VM, not a SQL virtual machine.
Please let me know if anyone knows the correct steps or any documentation.
Thanks in advance.
You can perform this activity through PowerShell. Please go through the steps below.
To create a snapshot using the Azure portal, complete these steps (you can skip this if you have already created a snapshot):
1. In the Azure portal, select Create a resource.
2. Search for and select Snapshot.
3. In the Snapshot window, select Create. The Create snapshot window appears.
4. For Resource group, select an existing resource group or enter the name of a new one.
5. Enter a Name, then select a Region and Snapshot type for the new snapshot. If you would like to store your snapshot in zone-resilient storage, you need to select a region that supports availability zones. For a list of supporting regions, see Azure regions with availability zones.
6. For Source subscription, select the subscription that contains the managed disk to be backed up.
7. For Source disk, select the managed disk to snapshot.
8. For Storage type, select Standard HDD, unless you require zone-redundant storage or high-performance storage for your snapshot.
9. If needed, configure settings on the Encryption, Networking, and Tags tabs. Otherwise, default settings are used for your snapshot.
10. Select Review + create.
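If you would rather create the snapshot with PowerShell as well, here is a minimal sketch (the disk, group, and snapshot names are placeholders):
#Create a snapshot of a managed disk (names are placeholders)
$disk = Get-AzDisk -ResourceGroupName '<group-name>' -DiskName '<disk-name>'
$snapshotConfig = New-AzSnapshotConfig -SourceUri $disk.Id -Location $disk.Location -CreateOption Copy
New-AzSnapshot -Snapshot $snapshotConfig -SnapshotName '<snapshot-name>' -ResourceGroupName '<group-name>'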
Moving a snapshot of a SQL Virtual Machine to a different resource group in another subscription
PowerShell:
#Provide the subscription Id of the subscription where snapshot exists
$sourceSubscriptionId='yourSourceSubscriptionId'
#Provide the name of your resource group where snapshot exists
$sourceResourceGroupName='yourResourceGroupName'
#Provide the name of the snapshot
$snapshotName='yourSnapshotName'
#Set the context to the subscription Id where snapshot exists
Select-AzSubscription -SubscriptionId $sourceSubscriptionId
#Get the source snapshot
$snapshot= Get-AzSnapshot -ResourceGroupName $sourceResourceGroupName -Name $snapshotName
#Provide the subscription Id of the subscription where snapshot will be copied to
#If snapshot is copied to the same subscription then you can skip this step
$targetSubscriptionId='yourTargetSubscriptionId'
#Name of the resource group where snapshot will be copied to
$targetResourceGroupName='yourTargetResourceGroupName'
#Set the context to the subscription Id where snapshot will be copied to
#If snapshot is copied to the same subscription then you can skip this step
Select-AzSubscription -SubscriptionId $targetSubscriptionId
#Store your snapshots in Standard storage to reduce cost. Please use Standard_ZRS in regions where zone redundant storage (ZRS) is available, otherwise use Standard_LRS
#Please check out the availability of ZRS here: https://docs.microsoft.com/en-us/Az.Storage/common/storage-redundancy-zrs#support-coverage-and-regional-availability
$snapshotConfig = New-AzSnapshotConfig -SourceResourceId $snapshot.Id -Location $snapshot.Location -CreateOption Copy -SkuName Standard_LRS
#Create a new snapshot in the target subscription and resource group
New-AzSnapshot -Snapshot $snapshotConfig -SnapshotName $snapshotName -ResourceGroupName $targetResourceGroupName
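Note that the copied snapshot only gives you the disks; a VM built from them will still show up as a plain VM, which matches the behaviour you describe. To surface it as a SQL virtual machine again, you can register the new VM with the SQL VM resource provider after creating it. A minimal sketch, assuming the Az.SqlVirtualMachine module and placeholder names:
#Register the newly created VM with the SQL IaaS Agent extension (names are placeholders)
New-AzSqlVM -Name '<vm-name>' -ResourceGroupName '<target-resource-group>' -Location '<location>' -LicenseType 'PAYG'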
I've configured Always Encrypted for my SQL installation; that is, I've got a CMK pointing to a Windows keystore key, which in turn is used to decrypt the CEK.
Now I'm trying to think of some nice backup solutions for the CMK.
Currently I have the exact same RSA key configured in Azure, I've confirmed both keys to work (Windows Keystore key and Azure) by encrypting with the first and decrypting with the latter.
But the problem I'm having is that if I lose the Windows keystore key, I lose the ability to decrypt the Always Encrypted keys.
The Azure key doesn't "expose" the key, meaning I can encrypt and decrypt with the key, but I can't export it.
When configuring key rotation in SQL you need the "original key".
I've tried to simply make a new CMK in SQL which points to the Azure environment by using "ALTER COLUMN ENCRYPTION KEY", but I get an error when I try to access the data.
My guess is that the CEK contains some metadata linking it to the key that is Windows based.
My question then is, is there a way to manually decrypt the column encryption key using a valid RSA key?
Yes, you can manually decrypt the column encryption key and master key using Always Encrypted with secure enclaves, but this feature is only available on the DC-series hardware configuration together with Microsoft Azure Attestation, which are offered in only a few locations. So you need to select a location (an Azure region) that supports both DC-series hardware and Microsoft Azure Attestation.
Note: DC-series is available in the following regions: Canada Central, Canada East, East US, North Europe, UK South, West Europe, West US.
Choose DC-series while deploying the SQL database by following the steps below.
Make sure the SQL server is deployed in a DC-series supported location, then click Configure database.
Select the hardware configuration.
Select DC-series, click OK, then Apply, and deploy the database.
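The same hardware selection can be made when creating the database from PowerShell; a minimal sketch, assuming the Az.Sql module and placeholder names:
#Create a database on DC-series hardware (names are placeholders)
New-AzSqlDatabase -ResourceGroupName '<group-name>' -ServerName '<server-name>' -DatabaseName '<database-name>' -Edition 'GeneralPurpose' -VCore 2 -ComputeGeneration 'DC'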
Now create an attestation provider using the Azure portal. Search for attestation in the search bar and select Microsoft Azure Attestation.
On the Overview tab for the attestation provider, copy the value of the Attest URI property to the clipboard and save it in a file. This is the attestation URL; you will need it in later steps.
Select Policy on the resource menu on the left side of the window or on the lower pane.
Set Attestation Type to SGX-IntelSDK.
Select Configure on the upper menu.
Set Policy Format to Text. Leave Policy options set to Enter policy.
In the Policy text field, replace the default policy with the policy below.
version= 1.0;
authorizationrules
{
    [ type=="x-ms-sgx-is-debuggable", value==false ]
    && [ type=="x-ms-sgx-product-id", value==4639 ]
    && [ type=="x-ms-sgx-svn", value>= 0 ]
    && [ type=="x-ms-sgx-mrsigner", value=="e31c9e505f37a58de09335075fc8591254313eb20bb1a27e5443cc450b6e33e5"]
    => permit();
};
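If you prefer to script this step, the provider can also be created and its attestation URL read with PowerShell; a minimal sketch, assuming the Az.Attestation module and placeholder names:
#Create the attestation provider and read its Attest URI (names are placeholders)
New-AzAttestationProvider -Name '<provider-name>' -ResourceGroupName '<group-name>' -Location '<location>'
(Get-AzAttestationProvider -Name '<provider-name>' -ResourceGroupName '<group-name>').AttestUri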
Configure your database connection in SSMS. Click Options and enter the attestation URL which you copied and saved earlier.
Using the SSMS instance from the previous step, in Object Explorer, expand your database and navigate to Security > Always Encrypted Keys.
Provision a new enclave-enabled column master key:
Right-click Always Encrypted Keys and select New Column Master Key....
Select a name for your column master key, e.g. CMK1. Make sure you select either Windows Certificate Store (Current User or Local Machine) or Azure Key Vault.
Select Allow enclave computations.
Now simply encrypt your column. See the example below.
ALTER TABLE [HR].[Employees]
ALTER COLUMN [SSN] [char] (11) COLLATE Latin1_General_BIN2
ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = [CEK1], ENCRYPTION_TYPE = Randomized, ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL
WITH
(ONLINE = ON);
ALTER TABLE [HR].[Employees]
ALTER COLUMN [Salary] [money]
ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = [CEK1], ENCRYPTION_TYPE = Randomized, ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL
WITH
(ONLINE = ON);
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
Verify the encrypted data.
To decrypt a column (remove its encryption) in place, see the example below.
ALTER TABLE [HR].[Employees]
ALTER COLUMN [SSN] [char](11) COLLATE Latin1_General_BIN2
WITH (ONLINE = ON);
GO
Is it possible to create a database scoped credential for Azure Blob Storage in Synapse?
I tried this scenario:
CREATE DATABASE SCOPED CREDENTIAL <credential-name>
WITH IDENTITY = 'SHARED ACCESS SIGNATURE', SECRET = '<your SAS secret without the leading ?>';
CLOSE MASTER KEY; -- only necessary if you need to close the master key context (it will close when the session/query closes)
But it is failing.
A SAS-based database scoped credential for an Azure Storage account is supported only in Azure SQL Database, not in Synapse.
You need to use storage account keys in Synapse instead.
In Synapse you would otherwise get the error below:
Secret provided can not contain more than 120 characters. Please provide a valid credential.
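If you switch to the storage account key route, the key itself can be pulled with PowerShell and used as the SECRET of the credential; a minimal sketch with placeholder names:
#Fetch a storage account key to use as the credential secret (names are placeholders)
$keys = Get-AzStorageAccountKey -ResourceGroupName '<group-name>' -Name '<storage-account-name>'
$keys[0].Value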
I have SQL Server 2014 running on one of our servers. We're in the process of implementing security measures for our databases, and I've encrypted a column in one of the tables in a database on that server. The issue is that when I restore a backup on my local SQL Server and run a query to decrypt the column data, I get NULL values, while decrypting the same column data on the main server works fine. I found a thread on this forum which says to do the following when restoring an encrypted database on a different server.
USE [master];
GO
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'StrongPassword';
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;
GO
select File_Name
, CONVERT(nvarchar,DECRYPTBYKEY(File_Name))
from [test].[dbo].[Orders_Customer]
I tried doing the above, still no luck.
Can anybody point me in the right direction? Any help is greatly appreciated.
Thanks
You've opened the master key (in your example) in the master DB.
Change the first line to:
USE [test];
The OPEN MASTER KEY statement works in the context of the current database. You opened it while in master, but then selected the data from the test DB.
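Put together, the corrected sequence can be run from PowerShell; a minimal sketch using Invoke-Sqlcmd from the SqlServer module, with the server name as a placeholder and the database name and password taken from your example:
#Run the master key fix in the context of the restored database, not master
Invoke-Sqlcmd -ServerInstance '<local-server>' -Query @"
USE [test];
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'StrongPassword';
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;
"@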
Does Amazon provide a way to copy a bucket from one account to a different account? I am uploading several GB of files to my own bucket for a client app for development purposes, but when handing off the code I'm going to want to switch the bucket to their account (so I am no longer paying for the storage). Uploading is taking quite a while because there are many small files, and I would like to avoid the same arduous process later when I move the files into the other bucket.
You could use CrossFTP (http://www.crossftp.com/) to do a server-to-server transfer from one account to another, but you will still have to pay for the traffic.
Another solution would be: http://gallery.bucketexplorer.com/displayimage-93.html
boto works well. According to the creator of boto:
Assuming you have account A and account B and all of the objects are currently stored in a bucket owned by account A, you should be able to grant account B read access to the existing bucket(s) and then, using the account B credentials, COPY the objects into a bucket(s) owned by account B. I think something like this should work:
conn = S3Connection(...)
source_bucket = conn.lookup(source_bucket_name)
dest_bucket = conn.lookup(dest_bucket_name)
source_key = source_bucket.lookup(key_name)
dest_key = source_key.copy(dest_bucket, key_name, preserve_acl=True)
This would create a key of the same name in the destination bucket and would also preserve the source key's ACL and metadata.
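The snippet above copies a single key, so for a whole bucket you would loop over every object. As an alternative sketch using AWS Tools for PowerShell instead of boto (bucket names are placeholders; run it under account B credentials that have read access to the source bucket):
#Server-side copy of every object; the data is not re-uploaded (bucket names are placeholders)
Get-S3Object -BucketName 'source-bucket' | ForEach-Object {
    Copy-S3Object -BucketName 'source-bucket' -Key $_.Key -DestinationBucket 'dest-bucket' -DestinationKey $_.Key
}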