(NetcfgInvalidSubnet) Subnet 'mySubnet' is not valid in virtual network 'myVnet'

I have a weird issue while trying to create an Azure container instance that references an existing virtual network and subnet.
For that I am using the following command, described in the Microsoft documentation, run from the Azure CLI:
az container create
--name mycontainer --resource-group myresourcegroup --image crazlabjira01.azurecr.io/jira-servicemanagement:4.19.0 --vnet myVnet --vnet-address-prefix 172.27.0.0/16 --subnet mySubnet --subnet-address-prefix 172.27.14.0/24
My subnet is in the range of the VNet, so why does the command return the following error?
(NetcfgInvalidSubnet) Subnet 'mySubnet' is not valid in virtual network 'myVnet'
Please note that if I create the container using the UI and the network defined above, it works without any trouble.
Thanks for the help!

This is because you are adding a new subnet to an existing VNet that already contains a subnet whose address space is the same as (or overlaps) the new subnet's.
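To see which address prefixes are already taken, you can list the existing subnets; a sketch reusing the resource group and VNet names from the question:
az network vnet subnet list --resource-group myresourcegroup --vnet-name myVnet --query "[].{name:name, prefix:addressPrefix}" --output table
Any overlap between an existing prefix and the new subnet's prefix will trigger this kind of error.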

I tested in my environment and it is working fine for me.
az container create --name myconatainer98044 -g v-ra****-***tr**e --image mycontainer7723.azurecr.io/atlassian/jira-servicemanagement:latest --vnet RG-VNET --vnet-address-prefix 10.0.0.0/16 --subnet TestSubnet --subnet-address-prefix 10.0.1.0/24
When you create a virtual network you can configure the address space. By default, I was given an address space of 10.0.0.0/16 which allows addresses from 10.0.0.0 to 10.0.255.255.
By changing the subnet to a valid value for a 10.0.0.0/16 address space, like 10.0.1.0/24, you will likely be successful.
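Translated to the question's 172.27.0.0/16 address space, the same idea is to pick a prefix that does not collide with any existing subnet; mySubnet2 and 172.27.15.0/24 below are hypothetical, non-conflicting values:
az container create --name mycontainer --resource-group myresourcegroup --image crazlabjira01.azurecr.io/jira-servicemanagement:4.19.0 --vnet myVnet --vnet-address-prefix 172.27.0.0/16 --subnet mySubnet2 --subnet-address-prefix 172.27.15.0/24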
In your case, make sure the subnet you are trying to use is not in use by another resource provider. Here you might be trying to use the subnet 172.27.14.0/24, which is already in use.
For example, if you use a subnet to create a SQL instance and then use the same subnet to create a Container Instance, it will throw exactly the error you are getting. Microsoft doesn't support using the same subnet to create resources from a different resource provider.
In my case, my TestSubnet is delegated to Microsoft.ContainerInstance.
Please refer to this document to get a clear understanding.
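As a quick check, you can inspect whether the subnet is already delegated to some service; subnet and VNet names reused from the question:
az network vnet subnet show --resource-group myresourcegroup --vnet-name myVnet --name mySubnet --query "delegations[].serviceName" --output tsv
If this prints anything other than Microsoft.ContainerInstance/containerGroups (or nothing), the subnet cannot be used for a container group.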

Related

"flag provided but not defined: -enable_iam_login" when try to start google cloud sql proxy

I'm trying to connect to a public Cloud SQL instance using Cloud SQL IAM database authentication.
I have enabled the "cloudsql_iam_authentication" flag and created an IAM service account, granting the necessary role.
I followed this documentation: https://cloud.google.com/sql/docs/mysql/authentication
I used this command to connect to the instance:
cloud_sql_proxy -instances=my-project:us-central1:my-db-name=tcp:3306 -enable_iam_login
When I try to connect to the instance, I get the following error:
flag provided but not defined: -enable_iam_login
Make sure you're using the latest version.
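You can check which version is installed; note that the v1 and v2 binaries use different flag styles:
cloud_sql_proxy -version
cloud-sql-proxy --version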
Also, note that there's a new v2 version of the proxy which has a much easier-to-use interface. In v2 this becomes:
cloud-sql-proxy --auto-iam-authn --port 3306 my-project:us-central1:my-db
The README has a lot more information including sample invocations.

Azure route table not showing effective routes

I have a VNet and subnet, and I created a route table and associated it with the subnet. However, under Effective routes it is empty. I also have a VNet gateway with a VPN to on-prem; the gateway is learning BGP routes, but they are not listed here. I do have a NIC in the subnet as well, and the NIC's menu shows all the routes. Why is that? Here is a screenshot: the route table is empty here.
The route table is associated to this subnet
Here is the NIC in the subnet and it shows all the routes
Thanks!
Difan
I tried to reproduce the same in my environment and it works fine; this issue may be caused by missing permissions or a failed connection.
Go to the Azure portal -> your VM -> Settings -> Networking -> select your network interface -> click Effective routes.
The effective routes are then displayed as shown below:
Otherwise, try it via PowerShell.
To get the effective route table on a network interface:
Get-AzEffectiveRouteTable -NetworkInterfaceName "MyNetworkInterface" -ResourceGroupName "MyResourceGroup"
To get the effective routes for a network interface and format them as a table:
Get-AzEffectiveRouteTable `
-NetworkInterfaceName myVMNic1 `
-ResourceGroupName myResourceGroup `
| Format-Table
For your reference:
az network nic | Microsoft Docs
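If you prefer the Azure CLI over PowerShell, the equivalent call (reusing the NIC and resource-group names from above) would be:
az network nic show-effective-route-table --name myVMNic1 --resource-group myResourceGroup --output table
Note that effective routes are only computed for a NIC attached to a running VM, which is also why the route table blade alone can look empty.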

Connect pgAdmin4 to cloud SQL

I'm trying to connect a PostgreSQL instance in Cloud SQL to my pgAdmin, and I'm totally confused.
How can I do that?
When you create your Postgres instance, you have to allow access from the IP address where the Postgres client is running.
Create your PostgreSQL instance.
In the Create a PostgreSQL instance window, give the instance ID and a password for your postgres user in the "Default user password" section.
Click on "Show configuration options" and locate "Set connectivity". There you have to give access to your PC's IP address: in the "Authorized networks" section under "Public IP", click "Add network", enter the IP into the "Network" box, and click "Done". You can check your client IP address in the link.
Once you are done with the configuration, click Create.
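If you prefer the command line, the same "Authorized networks" step can be done with gcloud; the instance name and client IP below are hypothetical:
gcloud sql instances patch my-postgres-instance --authorized-networks=203.0.113.5/32
Be aware that --authorized-networks replaces the whole existing list rather than appending to it.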
Now, to verify the connectivity from the client to the Cloud SQL instance, I recommend doing it the first time from the command-line console.
On your PC, launch the command-line console and execute: psql -h [postgres instance ip address] -U postgres (note that the username flag is an uppercase -U).
You can follow the official documentation for "Connecting psql Client Using Public IP" in the link.
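Once psql connects, registering the server in pgAdmin uses the same values; roughly:
Host name/address: [postgres instance ip address]
Port: 5432
Maintenance database: postgres
Username: postgres
Password: the one you set during instance creation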

Google cloud dataproc failing to create new cluster with initialization scripts

I am using the below command to create a Dataproc cluster:
gcloud dataproc clusters create informetis-dev \
--initialization-actions "gs://dataproc-initialization-actions/jupyter/jupyter.sh,gs://dataproc-initialization-actions/cloud-sql-proxy/cloud-sql-proxy.sh,gs://dataproc-initialization-actions/hue/hue.sh,gs://dataproc-initialization-actions/ipython-notebook/ipython.sh,gs://dataproc-initialization-actions/tez/tez.sh,gs://dataproc-initialization-actions/oozie/oozie.sh,gs://dataproc-initialization-actions/zeppelin/zeppelin.sh,gs://dataproc-initialization-actions/user-environment/user-environment.sh,gs://dataproc-initialization-actions/list-consistency-cache/shared-list-consistency-cache.sh,gs://dataproc-initialization-actions/kafka/kafka.sh,gs://dataproc-initialization-actions/ganglia/ganglia.sh,gs://dataproc-initialization-actions/flink/flink.sh" \
--image-version 1.1 --master-boot-disk-size 100GB --master-machine-type n1-standard-1 --metadata "hive-metastore-instance=g-test-1022:asia-east1:db_instance" \
--num-preemptible-workers 2 --num-workers 2 --preemptible-worker-boot-disk-size 1TB --properties hive:hive.metastore.warehouse.dir=gs://informetis-dev/hive-warehouse \
--worker-machine-type n1-standard-2 --zone asia-east1-b --bucket info-dev
But Dataproc failed to create the cluster, with the following errors in the failure file:
+ mysql -u hive -phive-password -e ''
ERROR 2003 (HY000): Can't connect to MySQL server on 'localhost' (111)
+ mysql -e 'CREATE USER '\''hive'\'' IDENTIFIED BY '\''hive-password'\'';'
ERROR 2003 (HY000): Can't connect to MySQL server on 'localhost' (111)
Does anyone have any idea what is behind this failure?
It looks like you're missing the --scopes sql-admin flag as described in the initialization action's documentation, which will prevent the CloudSQL proxy from being able to authorize its tunnel into your CloudSQL instance.
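A minimal sketch of that fix, reusing the cluster name from the question (all other flags unchanged and omitted here for brevity):
gcloud dataproc clusters create informetis-dev --scopes sql-admin [...rest of the original flags...]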
Additionally, aside from just the scopes, you need to make sure the default Compute Engine service account has the right project-level permissions in whichever project holds your CloudSQL instance. Normally the default service account is a project editor in the GCE project, so that should be sufficient when combined with the sql-admin scopes to access a CloudSQL instance in the same project, but if you're accessing a CloudSQL instance in a separate project, you'll also have to add that service account as a project editor in the project which owns the CloudSQL instance.
You can find the email address of your default compute service account under the IAM page for the project deploying Dataproc clusters, with the name "Compute Engine default service account"; it should look something like <number>-compute@developer.gserviceaccount.com.
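For the cross-project case, granting that account the editor role could look like this; the project ID and service-account email below are placeholders:
gcloud projects add-iam-policy-binding cloudsql-project-id --member=serviceAccount:123456789-compute@developer.gserviceaccount.com --role=roles/editor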
I am assuming that you already created the Cloud SQL instance with something like this, correct?
gcloud sql instances create g-test-1022 \
--tier db-n1-standard-1 \
--activation-policy=ALWAYS
If so, then it looks like the error is in how the argument for the metadata is formatted. You have this:
--metadata "hive-metastore-instance=g-test-1022:asia-east1:db_instance”
Unfortuinately, the zone looks to be incomplete (asia-east1 instead of asia-east1-b).
Additionally, with that many initialization actions running, you'll want to provide a pretty generous initialization action timeout so the cluster does not assume something has failed while your actions take a while to install. You can do that by specifying:
--initialization-action-timeout 30m
That will allow the cluster to give the initialization actions 30 minutes to bootstrap.
Around the time you reported this, an issue was detected with the cloud-sql-proxy initialization action. It is quite probable that that issue affected you.
Nowadays, it shouldn't be an issue.

Trying to connect codeanywhere with google compute engine via ssh

Codeanywhere provides a public SSH key. I've entered it into the metadata of the VM instance in Google Compute Engine, but every time I try to connect from the Codeanywhere interface I get an authorization failure, and the logs in the instance console show:
yadayada: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
The instance was set up using the Cloud Launcher with Node. I have not done anything else to this instance; it's brand spanking new.
Thoughts?
Ugh, never mind. I changed the username in the Codeanywhere interface to codeanywhere-ssh-key and it worked.
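For anyone hitting the same thing: GCE parses the login username out of the metadata entry itself, so the key has to be stored in the username:key format, e.g. (key truncated):
codeanywhere-ssh-key:ssh-rsa AAAAB3NzaC1yc2E... codeanywhere-ssh-key
which is why connecting as any other username fails.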