I want to set up a custom domain for an Azure storage account (v2, not classic).
With this answer I managed to use PowerShell to set it up for one domain and one storage account.
For another domain and another storage account I thought I had it configured correctly, but when I try to configure it now I get this error:
Set-AzureRmStorageAccount -ResourceGroupName "ExampleGroup" -Name "test" -CustomDomainName test.example.com -UseSubDomain $true
Set-AzureRmStorageAccount : CustomDomainNameAlreadySet: Custom domain name is already set. Current value must be cleared before setting a new value.
+ Set-AzureRmStorageAccount -ResourceGroupName "ExampleGroup" -Name " ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : CloseError: (:) [Set-AzureRmStorageAccount], CloudException
+ FullyQualifiedErrorId : Microsoft.Azure.Commands.Management.Storage.SetAzureStorageAccountCommand
The only answer I've found implies that one should use the classic portal, which is not an option as v2 storage accounts do not show up there.
How can I clear the CustomDomainName value?
At the moment, if you have a custom domain name set and want to replace it, you have to unregister it first. To unregister it, set CustomDomainName to an empty string and don't pass UseSubDomain.
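As a sketch, using the same AzureRm cmdlet as in the question (the resource group and account names are the question's own examples; substitute your own), the clear-then-set sequence looks like this:

```powershell
# Clear the existing custom domain first: empty string, and no -UseSubDomain
Set-AzureRmStorageAccount -ResourceGroupName "ExampleGroup" -Name "test" -CustomDomainName ""

# Then register the new custom domain
Set-AzureRmStorageAccount -ResourceGroupName "ExampleGroup" -Name "test" -CustomDomainName "test.example.com" -UseSubDomain $true
```

This requires a live Azure subscription and the DNS CNAME record already in place, so it is shown here only as the order of operations.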
Related
I need to update the backend pool (Maintenance) used by an existing routing rule in Azure Frontdoor to a different existing backend pool (Maintenance2). Here is the UI screen from where it can be done. Can someone advise on how to do this via PowerShell. I have tried via the cmdlets (https://learn.microsoft.com/en-us/powershell/module/az.frontdoor/set-azfrontdoor?view=azps-9.0.1 ) but unable to get the correct set of commands.
To update the backend pool (Pool1) used by an existing routing rule in Azure Front Door to a different existing backend pool (Pool2), I created a Front Door environment with backend pools Pool1 and Pool2, each pointing to a routing rule:
Pool1 -> Rule1 and Pool2 -> Rule2
Workaround:
Log in to PowerShell and set the current subscription where the Front Door was created, using the command below:
az account set --subscription "******-****-****-****-*********"
Verify the backend pools on the Front Door using this command:
az network front-door backend-pool list --front-door-name "FrontDoorName" --resource-group "ResourceGroupName"
Update the backend pool for Rule1 from Pool1 to Pool2 using the command below:
az network front-door routing-rule update --front-door-name "Front Door Name" --name "Rule Name" --resource-group "Resource Group Name" --backend-pool "New Backend Pool"
Example:
az network front-door routing-rule update --front-door-name "testfrontdoor" --name "Rule1" --resource-group "rg-testdemo" --backend-pool "pool2"
Output:
Rule1 now points to backend pool "Pool2" instead of the original "Pool1".
Thank you Swarna. The solution provided is in the CLI, and the question was for PowerShell.
I was able to figure out how to do this in PowerShell. It requires three Az cmdlets: Get-AzFrontDoor, New-AzFrontDoorRoutingRuleObject and Set-AzFrontDoor. Behind the scenes, when an update is performed on a routing rule, the rule is deleted and recreated with the changes. To do this via PowerShell, we get the existing Front Door properties and routing rule properties, put the changes into New-AzFrontDoorRoutingRuleObject, and finally use Set-AzFrontDoor to apply the changes to the Front Door.
$subscription='Sub1'
Select-AzSubscription $subscription
$frontdoorName='Frontdoor1'
$resourcegroupname='fdrrg'
$MaintenanceBackPool='Maintenance2'
$PrimaryBackPool='Maintenance1'
$RoutingRuleName='Route1'
#get the current frontdoor property object
$frontdoorobj=Get-AzFrontDoor -ResourceGroupName $resourcegroupname -Name $frontdoorName
#get the Routing Rules and filter the one which needs to be modified
$RoutingRuleUpdate=$frontdoorobj.RoutingRules
$RoutingRuleUpdate2=$RoutingRuleUpdate|Where-Object {$_.Name -eq $RoutingRuleName}
#get the list of all frontendendpointIds as an array (this is required to account for more than 1 frontends/domains associated with the routing rule)
#Perform string manipulation to get the frontend/domain name from the ID
[String[]] $frontdoorHostnames=$RoutingRuleUpdate2.FrontendEndpointIds | ForEach-Object {"$PSItem" -replace '.*/'}
#get the position of the Routing Rule (to be modified) in the Routing Rules collection
$i=[array]::indexof($RoutingRuleUpdate.Name,$RoutingRuleName)
#Update the Routing Rule object with the changes needed- in this case a different backendpool
$updatedRouteObj=New-AzFrontDoorRoutingRuleObject -Name $RoutingRuleUpdate[$i].Name -FrontDoorName $frontDoorName -ResourceGroupName $resourcegroupname -FrontendEndpointName $frontdoorHostnames -BackendPoolName $MaintenanceBackPool
$RoutingRuleUpdate[$i]=$updatedRouteObj
#Finally update the frontdoor object with the change in Routing Rule
Set-AzFrontDoor -InputObject $frontdoorobj -RoutingRule $RoutingRuleUpdate
Write-Output "Successfully Updated RoutingRule:$RoutingRuleName to backendpool:$MaintenanceBackPool"
I have a PowerShell script that uses the Az PowerShell modules to retrieve properties of all web apps within a resource group. Now I also need to fetch the MinTlsVersion property, as shown below. Can I do it using one of the Az modules?
When the script calls Get-AzWebApp, a request is sent to the /subscriptions/<s>/resourceGroups/<rg>/providers/Microsoft.Web/sites endpoint. The response object has the siteConfig property set to null. Is there a way to call Get-AzWebApp such that the property is not null, so I can use the minTlsVersion sub-property under the siteConfig object?
If there's no way to do the above:
I see that the client receives minTlsVersion by sending a GET request to the /subscriptions/<s>/resourceGroups/<rg>/providers/Microsoft.Web/sites/<st>/config/web endpoint. Can we hit the same endpoint using one of the Az PowerShell modules? I would prefer a request that returns minTlsVersion of all web apps in a resource group in a single call.
You need to iterate through each app. Try the command below; it works on my side.
$groupName = "<resource-group-name>"
$apps = Get-AzWebApp -ResourceGroupName $groupName
$names = $apps.Name
foreach($name in $names){
    # Fetching the app individually by name populates SiteConfig, including MinTlsVersion
    $tls = (Get-AzWebApp -ResourceGroupName $groupName -Name $name).SiteConfig.MinTlsVersion
    Write-Host "minTlsVersion of web app" $name "is" $tls
}
I have been using the Azure PowerShell module and I use this cmdlet to obtain either published or unpublished image details:
Get-AzureVMImage | where-object { $_.Label -like "$ImageName" }
I need to move to the Az module. The replacement cmdlet seems to be Get-AzVMImage. And that does not seem to provide a way to list unpublished images.
So, how do you obtain a list of unpublished images and their details?
As I understand it, you want to get the custom image. If so, you can use the command Get-AzImage. For example:
Connect-AzAccount -Subscription "your subscription id" -Tenant "your tenant id"
Get-AzImage -ImageName "<image-name>" -ResourceGroupName "<resource-group-name>"
I wish to check the content of one database on a server I'm able to log into by means of Windows Authentication. Sounds really simple, and many examples are provided on the Internet.
I tried a few examples and each fails on my machine. I suspect there might be a problem during credentials conversion.
My code (shortened) is as follows:
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO")
$User=[System.Security.Principal.WindowsIdentity]::GetCurrent().Name
$credentials = Get-Credential $saUser | Select-Object *
$Pwd = $credentials.Password | ConvertFrom-SecureString
$targetConn = New-Object ('Microsoft.SqlServer.Management.Common.ServerConnection') ('myServer', $User, $Pwd)
$targetServer = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $targetConn
Up to this point there is no error message.
When I type $targetServer, I don't see any objects listed (no Databases either).
When I tried to check $targetServer.Databases, I received:
The following exception was thrown when trying to enumerate the collection: "Failed to connect to server mmyServer."
ConvertFrom-SecureString converts a secure string into an "encrypted standard string" (an encrypted textual representation, intended for storing the string in text format; it is not a usable password). So you're providing that encrypted string ($Pwd) as the password argument when creating the $targetConn object, which is invalid.
You can get the plaintext password from the PSCredential object $credentials this way:
$Pwd = $credentials.GetNetworkCredential().Password
However, according to the documentation for the constructors of the ServerConnection class, you can also provide a secure string as the password argument. So it should work if you simply leave out the | ConvertFrom-SecureString, i.e.
$Pwd = $credentials.Password
That's probably the better idea, since it's a little more secure. If you use the first method to get the plaintext password, there's a possibility that the memory holding the $Pwd variable will be paged out while the script is running, resulting in the plaintext password being written to disk.
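Putting that together, a corrected sketch of the question's connection code (same server name and SMO types as in the question; the Get-Credential prompt replaces the mismatched $User/$saUser variables):

```powershell
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO")

# Prompt for the SQL login; the password stays a SecureString throughout
$credentials = Get-Credential

# ServerConnection accepts a SecureString password directly -- no ConvertFrom-SecureString
$targetConn = New-Object Microsoft.SqlServer.Management.Common.ServerConnection('myServer', $credentials.UserName, $credentials.Password)
$targetServer = New-Object Microsoft.SqlServer.Management.Smo.Server($targetConn)

# List the databases to confirm the connection works
$targetServer.Databases | Select-Object Name
```

This needs the SMO assemblies installed and a reachable SQL Server instance, so treat it as the shape of the fix rather than a drop-in script.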
Trying to precompile my assets in a Rails app and sync them with Amazon S3 storage, it fails with this message. Any feedback appreciated:
Expected(200) <=> Actual(400 Bad Request)
response => #<Excon::Response:0x00000007c45a98 #data={:body=>"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>InvalidArgument</Code><Message>Authorization header is invalid -- one and only one ' ' (space) required</Message><ArgumentValue>AWS [\"AKIAINSIQYCZLWYSROWQ\", \"7RAxhY5nLkbACICMqjDlee5pCaEhf4LKgSpJ+R9k\"]:LakbTXVMX6I72MViNie/fe+79qU=</ArgumentValue><ArgumentName>Authorization</ArgumentName><RequestId>250C76936044E6D5</RequestId><HostId>j2jK/dv0xTnNddtSFHuVicGv5wWjXl4zXuhOyPcO6+2WWlAYWSkn0CHPwdtnOPet</HostId></Error>", :headers=>{"x-amz-request-id"=>"250C76936044E6D5", "x-amz-id-2"=>"j2jK/dv0xTnNddtSFHuVicGv5wWjXl4zXuhOyPcO6+2WWlAYWSkn0CHPwdtnOPet", "Content-Type"=>"application/xml", "Transfer-Encoding"=>"chunked", "Date"=>"Tue, 20 Aug 2013 13:28:36 GMT", "Connection"=>"close", "Server"=>"AmazonS3"}, :status=>400, :remote_ip=>"205.251.235.165"}, #body="<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>InvalidArgument</Code><Message>Authorization header is invalid -- one and only one ' ' (space) required</Message><ArgumentValue>AWS [\"AKIAINSIQYCZLWYSROWQ\", \"7RAxhY5nLkbACICMqjDlee5pCaEhf4LKgSpJ+R9k\"]:LakbTXVMX6I72MViNie/fe+79qU=</ArgumentValue><ArgumentName>Authorization</ArgumentName><RequestId>250C76936044E6D5</RequestId><HostId>j2jK/dv0xTnNddtSFHuVicGv5wWjXl4zXuhOyPcO6+2WWlAYWSkn0CHPwdtnOPet</HostId></Error>", #headers={"x-amz-request-id"=>"250C76936044E6D5", "x-amz-id-2"=>"j2jK/dv0xTnNddtSFHuVicGv5wWjXl4zXuhOyPcO6+2WWlAYWSkn0CHPwdtnOPet", "Content-Type"=>"application/xml", "Transfer-Encoding"=>"chunked", "Date"=>"Tue, 20 Aug 2013 13:28:36 GMT", "Connection"=>"close", "Server"=>"AmazonS3"}, #status=400, #remote_ip="205.251.235.165">
I've had an error with the same message twice now, and both times it was due to pasting an extra space at the end of the access key or secret key in the config file.
Check where you're setting the aws_access_key_id to use with your asset syncer.
This should be something that looks like AKIAINSIQYCZLWYSROWQ, whereas it looks like you've set it to a 2-element array of both your access key id and the secret access key.
Furthermore, given that you've now placed those credentials in the public domain you should revoke them immediately.
An extra space at the end of the access key is one issue; copying from the Amazon IAM UI adds the extra space.
Another is when configuration in the ~/.aws/credentials file (or other configuration) conflicts with environment values. This happened to me when configuring CircleCI and Docker machines.
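If you suspect a pasted space, a plain-Ruby sanity check (the key below is a made-up example, not a real credential) shows how String#strip removes the stray whitespace before the value reaches your S3 configuration:

```ruby
# Simulate an access key pasted with a trailing space and newline
pasted_key = "AKIAEXAMPLEKEY \n"

# Strip leading/trailing whitespace before using it in configuration
access_key = pasted_key.strip

puts access_key.inspect  # => "AKIAEXAMPLEKEY"
```

Applying .strip (or validating against /\A\S+\z/) when reading keys from config or ENV makes this whole class of 400 errors impossible.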
This error also happens if you haven't enabled the GET/POST methods in CloudFront and try to issue GET/POST requests to an API hosted behind CloudFront.
A 400 error can occur in more than 20 cases. Here is a document that describes them all: List of AWS S3 Error Codes