Azure: Error while using Azure virtual network; it says subnet is not valid in virtual network

$rg1="firstyear-rg-01"
$loc="eastasia"
New-AzResourceGroup -name $rg1 -location $loc
$ec1 = New-AzVirtualNetworkSubnetConfig -Name "ec-lab-sn-01" -AddressPrefix "10.0.0.0/27"
$cs1 = New-AzVirtualNetworkSubnetConfig -Name "cs-lab-sn-01" -AddressPrefix "10.0.1.0/27"
$it1 = New-AzVirtualNetworkSubnetConfig -Name "it-lab-sn-01" -AddressPrefix "10.0.2.0/27"
$mc1 = New-AzVirtualNetworkSubnetConfig -Name "mech-lab-sn-01" -AddressPrefix "10.0.3.0/27"
$vn1 = New-AzVirtualNetwork -Name "firstyear-vn-01" -ResourceGroupName $rg1 -Location $loc -AddressPrefix "10.0.0.0/25" -Subnet $ec1,$cs1,$it1,$mc1
The above is the exact code I tried, but it fails with this error:
New-AzVirtualNetwork: Subnet 'cs-lab-sn-01' is not valid in virtual
network 'firstyear-vn-01'. StatusCode: 400 ReasonPhrase: Bad Request
ErrorCode: NetcfgInvalidSubnet ErrorMessage: Subnet 'cs-lab-sn-01' is
not valid in virtual network 'firstyear-vn-01'. OperationID :
c5bd59de-a637-45ec-99a7-358372184e98
What am I doing wrong?

If you are using a virtual network with the address range 10.0.0.0/25, each subnet's AddressPrefix must be contained in that range. 10.0.0.0/25 covers 10.0.0.0 through 10.0.0.127, so you can assign the subnets address prefixes like 10.0.0.0/27, 10.0.0.32/27, 10.0.0.64/27 and 10.0.0.96/27, which an IP calculator will confirm.
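Put together, a corrected version of the script from the question would look something like this (same names and region; only the subnet prefixes change):

$rg1 = "firstyear-rg-01"
$loc = "eastasia"
New-AzResourceGroup -Name $rg1 -Location $loc

# All four /27 prefixes now fall inside 10.0.0.0/25 (10.0.0.0 - 10.0.0.127)
$ec1 = New-AzVirtualNetworkSubnetConfig -Name "ec-lab-sn-01" -AddressPrefix "10.0.0.0/27"
$cs1 = New-AzVirtualNetworkSubnetConfig -Name "cs-lab-sn-01" -AddressPrefix "10.0.0.32/27"
$it1 = New-AzVirtualNetworkSubnetConfig -Name "it-lab-sn-01" -AddressPrefix "10.0.0.64/27"
$mc1 = New-AzVirtualNetworkSubnetConfig -Name "mech-lab-sn-01" -AddressPrefix "10.0.0.96/27"

$vn1 = New-AzVirtualNetwork -Name "firstyear-vn-01" -ResourceGroupName $rg1 -Location $loc -AddressPrefix "10.0.0.0/25" -Subnet $ec1,$cs1,$it1,$mc1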

I ran into this issue when setting up a subnet in Azure using Terraform.
When I run terraform apply, I get the error below:
module.subnet_private_1.azurerm_subnet.subnet: Creating...
╷
│ Error: creating Subnet: (Name "my-private-1-dev-subnet" / Virtual Network Name "my-dev-vnet" / Resource Group "MyDevRG"): network.SubnetsClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="NetcfgInvalidSubnet" Message="Subnet 'my-private-1-dev-subnet' is not valid in virtual network 'my-dev-vnet'." Details=[]
│
│ with module.subnet_private_1.azurerm_subnet.subnet,
│ on ../../../modules/azure/subnet/main.tf line 1, in resource "azurerm_subnet" "subnet":
│ 1: resource "azurerm_subnet" "subnet" {
Here's how I fixed it:
The issue was that I was assigning subnet_address_prefixes that were already assigned to another subnet.
I had already assigned ["10.1.1.0/24"] to an existing subnet, and a mistake in my module assigned the same prefix to the new subnet I was creating.
All I had to do was use a different subnet_address_prefixes value, ["10.1.2.0/24"], and everything worked fine.
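In Terraform terms, the fix boils down to something like this sketch (the resource names here are hypothetical, not my actual module; the point is that each subnet gets its own non-overlapping prefix inside the VNet's address space):

resource "azurerm_subnet" "private_1" {
  name                 = "my-private-1-dev-subnet"
  resource_group_name  = "MyDevRG"
  virtual_network_name = "my-dev-vnet"
  address_prefixes     = ["10.1.1.0/24"]
}

resource "azurerm_subnet" "private_2" {
  name                 = "my-private-2-dev-subnet"
  resource_group_name  = "MyDevRG"
  virtual_network_name = "my-dev-vnet"
  address_prefixes     = ["10.1.2.0/24"] # must not collide with private_1
}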

In your case, this is because the IP ranges you chose for the subnets are (mostly) not part of the virtual network's IP range.
Generally this error can occur because a subnet with the same name already exists, because the chosen subnet IP range is not part of the virtual network's IP range, or because the chosen subnet IP ranges overlap.
When you are not sure about the boundaries of your IP ranges, you can use an IP range calculator.
As you can see there, your virtual network 10.0.0.0/25 spans 10.0.0.0 through 10.0.0.127. Therefore only the first of your subnets fits into that range, as you used:
"10.0.0.0/27", "10.0.1.0/27", "10.0.2.0/27", "10.0.3.0/27"
and the last three lie outside the /25.
Depending on the size you need, you can go for a configuration as suggested by @Nancy Xiong:
Virtual network: 10.0.0.0/25
Subnets: 10.0.0.0/27, 10.0.0.32/27, 10.0.0.64/27, 10.0.0.96/27

Related

How to use EKS with suitable volumes and resolve the insufficient subnet IP issue on AWS?

I deployed an application in EKS. The deployment was always pending; when I checked the events, I found these issues.
$ kubectl get events
LAST SEEN TYPE REASON OBJECT MESSAGE
89s Warning FailedScheduling pod/awx-demo-111111111-122222 running PreBind plugin "VolumeBinding": binding volumes: provisioning failed for PVC "awx-demo-projects-claim"
49m Warning FailedDeployModel ingress/awx-demo-ingress Failed deploy model due to InvalidSubnet: Not enough IP space available in subnet-031f9c702bc474e8f. ELB requires at least 8 free IP addresses in each subnet.
status code: 400, request id: 11111111-2222-3333-4444-555555555555
32m Warning FailedDeployModel ingress/awx-demo-ingress Failed deploy model due to InvalidSubnet: Not enough IP space available in subnet-01322i912fas0123na. ELB requires at least 8 free IP addresses in each subnet.
status code: 400, request id: 11111111-2222-3333-4444-555555555515
15m Warning FailedDeployModel ingress/awx-demo-ingress Failed deploy model due to InvalidSubnet: Not enough IP space available in subnet-031f9c702bc474e8f. ELB requires at least 8 free IP addresses in each subnet.
status code: 400, request id: 11111111-2222-3333-4444-555555555525
89s Normal WaitForPodScheduled persistentvolumeclaim/awx-demo-projects-claim waiting for pod awx-demo-111111111-122222 to be scheduled
21m Warning ProvisioningFailed persistentvolumeclaim/awx-demo-projects-claim Failed to provision volume with StorageClass "gp2": invalid AccessModes [ReadWriteMany]: only AccessModes [ReadWriteOnce] are supported
It seems there are a volume issue and a subnet issue. I created the EKS cluster and node group with these configurations:
resource "aws_eks_cluster" "this" {
encryption_config {
resources = ["secrets"]
provider {
key_arn = aws_kms_key.this.arn
}
}
enabled_cluster_log_types = ["api", "authenticator", "audit", "scheduler", "controllerManager"]
name = local.cluster_name
version = "1.20"
role_arn = aws_iam_role.eks_cluster.arn
vpc_config {
subnet_ids = [
data.aws_ssm_parameter.private_subnet_0_id.value,
data.aws_ssm_parameter.private_subnet_1_id.value,
]
security_group_ids = [aws_security_group.this.id]
endpoint_public_access = true
}
depends_on = [
aws_iam_role_policy_attachment.eks_cluster_policy,
aws_iam_role_policy_attachment.eks_vpc_resource_controller,
aws_iam_role_policy_attachment.eks_service_policy,
]
tags = merge(
local.tags,
)
}
resource "aws_eks_node_group" "this" {
cluster_name = local.cluster_name
node_group_name = local.node_group_name
node_role_arn = aws_iam_role.eks_nodes.arn
instance_types = ["m5.2xlarge"]
subnet_ids = [
data.aws_ssm_parameter.private_subnet_0_id.value,
data.aws_ssm_parameter.private_subnet_1_id.value,
]
scaling_config {
desired_size = 2
max_size = 2
min_size = 2
}
lifecycle {
ignore_changes = [scaling_config[0].desired_size]
}
depends_on = [
aws_iam_role_policy_attachment.eks_worker_node_policy,
aws_iam_role_policy_attachment.eks_cni_policy,
aws_iam_role_policy_attachment.ec2_container_register_readonly,
]
tags = merge(
local.tags,
)
}
I didn't define the volume type for EBS, so maybe it's using the default settings. How can I fix the issue?
For the insufficient-IP-addresses issue: if I create a new subnet for EKS to use, is it necessary to delete the EKS cluster or node group?
By the way, the deployment I used was https://raw.githubusercontent.com/ansible/awx-operator/0.13.0/deploy/awx-operator.yaml.
The install followed https://github.com/ansible/awx-operator#basic-install.
@miantian, continuing our discussion from the comments:
A subnet's size cannot simply be increased. If you change the subnet size, the subnet will be recreated, but since the EKS cluster is already using it, the recreation will fail. So I would say: start fresh. Delete everything and start over.
Regarding the volume issue: by default, EKS only supports the ReadWriteOnce access mode. This is due to a technical limitation of AWS, where an EBS volume can only be attached to one EC2 instance at a time. If you want to use the ReadWriteMany access mode, you need to use EFS.
If you want to use EFS, look up the NFS/EFS client provisioner for EKS. There are a few steps you need to follow to create an EFS provisioner in EKS; then you can start using the ReadWriteMany access mode, as sketched below.
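A rough sketch of where that ends up with the AWS EFS CSI driver (assumptions: the driver is already installed in the cluster, and fs-12345678 is a placeholder for your EFS filesystem ID):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-12345678   # placeholder: your EFS filesystem ID
  directoryPerms: "700"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: awx-demo-projects-claim
spec:
  accessModes:
    - ReadWriteMany            # works with EFS; EBS-backed gp2 only allows ReadWriteOnce
  storageClassName: efs-sc
  resources:
    requests:
      storage: 8Gi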

resolv.conf (generated) wrong order? (2 routers)

I have 2 routers in my network.
A) The one issued by my ISP (limited settings; I even had to ask to get port-forwarding settings), which is also my modem.
B) My own router (where I set up my DHCP etc.).
Now the generated resolv.conf on Raspbian and Arch Linux lists:
domain local
nameserver <IP of A>
nameserver <IP of B>
As I understand it, this is the order in which the resolvers will be tried when resolving names, but here it should try my internal B before trying to resolve via A.
PS: Both subnet masks are 255.255.255.0.
Router A has 192.168.0.1.
Router B has 192.168.1.1.
All devices are in the 192.168.1.### range.
PPS: Arch Linux is set up to use NetworkManager, not a manually configured dhcpcd.
NetworkManager may use dnsmasq for DHCP and to handle DNS lookups.
I noticed that dnsmasq reverses the order of the nameservers. Look at your logs; the behaviour shows up more clearly in the log if we also set dnsmasq to query the DNS servers in parallel:
#/etc/dnsmasq.conf
#all-servers
#/etc/dnsmasq.d/laptop.conf
all-servers
log-queries=extra
log-async=100
log-dhcp
#/etc/dnsmasq.d/servers.conf
server=66.187.76.168
server=162.248.241.94
server=165.227.22.116
/var/log/dnsmasq.log:
Mar 14 02:14:20 dnsmasq[3216]: 71700 127.0.0.1/38951 cached firefox.settings.services.mozilla.com is <CNAME>
Mar 14 02:14:20 dnsmasq[3216]: 71700 127.0.0.1/38951 forwarded firefox.settings.services.mozilla.com to 165.227.22.116
Mar 14 02:14:20 dnsmasq[3216]: 71700 127.0.0.1/38951 forwarded firefox.settings.services.mozilla.com to 162.248.241.94
Mar 14 02:14:20 dnsmasq[3216]: 71700 127.0.0.1/38951 forwarded firefox.settings.services.mozilla.com to 66.187.76.168
...the order of the calls is reversed in the log lines!
I got rid of systemd-resolved to rely on dnsmasq.
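If you would rather keep NetworkManager in charge and have it spawn dnsmasq itself, one common setup is a drop-in like this (the file name is arbitrary, and this assumes your NetworkManager build has dnsmasq support):

#/etc/NetworkManager/conf.d/dns.conf
[main]
dns=dnsmasq

Extra dnsmasq options (such as the server= lines above) can then go into /etc/NetworkManager/dnsmasq.d/.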

How to declare a variable and reuse it in icinga2 hosts section?

For now I use the config below to get the Icinga2 host to work:
vars.health_check["my_module1"]={
host = "HEALTH_CHECK_SERVER_URL"
module = "my_module1"
}
vars.health_check["my_module2"]={
host = "HEALTH_CHECK_SERVER_URL"
module = "my_module2"
}
The problem, as you can see, is that I have to redeclare the same host address. When I put the host address outside of the service definition as below, it does not work and reloading Icinga2 fails:
end_url = "HEALTH_CHECK_SERVER_URL"
vars.health_check["my_module1"]={
host = "$end_url$"
module = "my_module1"
}
vars.health_check["my_module2"]={
host = "$end_url$"
module = "my_module2"
}
I even tried to use vars.end_url, but with the same result. How should I declare a variable in Icinga2?
You can use the host's address with $address$, so if the host's address is what the URL resolves to, it should work like this:
end_url = "HEALTH_CHECK_SERVER_URL"
vars.health_check["my_module1"]={
host = "$address$"
module = "my_module1"
}
vars.health_check["my_module2"]={
host = "$address$"
module = "my_module2"
}
Have you looked into Icinga Director? It's handy, and host configs are more easily managed. Also, monitoring-portal.org is a good resource for the Icinga community.
If you use Director, you can make a clone of the command, set its arguments to variables like $end_url$, and then create the field. Then you can add the field to your template (import) and enter the value once there.
For example, we use this method for SNMP community strings. We have a field for $snmp_community$ attached to our templates, so in any command where we need the community we just use this variable. This is how Icinga2 knows all of our LAN distros' community strings, and if we need to change one we just change it once.
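If you stay with plain config files instead of Director, a global constant gives you the same single point of definition. A minimal sketch (the constant name here is made up for illustration):

#in /etc/icinga2/constants.conf
const HealthCheckUrl = "HEALTH_CHECK_SERVER_URL"

#in the host or apply definition
vars.health_check["my_module1"] = {
  host = HealthCheckUrl
  module = "my_module1"
}
vars.health_check["my_module2"] = {
  host = HealthCheckUrl
  module = "my_module2"
}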

How to api-query for the default vhost

The RabbitMQ documentation states:
Default Virtual Host and User
When the server first starts running, and detects that its database is uninitialised or has been deleted, it initialises a fresh database with the following resources:
a virtual host named /
The API has endpoints like:
/api/exchanges/{vhost}/{name}/bindings
where {name} is a specific exchange name.
However, what does one put in for {vhost} for the default vhost?
As written here: http://hg.rabbitmq.com/rabbitmq-management/raw-file/3646dee55e02/priv/www-api/help.html
As the default virtual host is called "/", this will need to be encoded as "%2f".
so:
/api/exchanges/%2f/{exchange_name}/bindings/source
In full:
http://localhost:15672/api/exchanges/%2f/test_ex/bindings/source
with this result:
[{"source":"test_ex","vhost":"/","destination":"test_queue","destination_type":"queue","routing_key":"","arguments":{},"properties_key":"~"}]

tftp retry timeout exceeded

My issue is that the retry count is exceeded when I download a kernel image to an Econa processor board (Econa is an ARM-based processor) via TFTP, as shown below:
CNS3000 # tftp 0x4000000 bootpImage.cns3420.uclibc
MAC PORT 0 : Initialize bcm53115M
MAC PORT 2 : Initialize RTL8211
TFTP from server 192.168.0.219; our IP address is 192.168.0.112
Filename 'bootpImage.cns3420.uclibc'.
Load address: 0x4000000
Loading: T T T T T T T T T T
Retry count exceeded; starting again
The following points may help in finding the cause of this error.
The ping response is OK:
CNS3000 # ping 192.168.0.219
MAC PORT 0 : Initialize bcm53115M
MAC PORT 2 : Initialize RTL8211
host 192.168.0.219 is alive
To verify that the TFTP server is running, I placed a small file in /tftpboot:
# echo "Hello, embedded world" > /tftpboot/hello.txt
Then I fetched it via localhost:
# tftp localhost
tftp> get hello.txt
Received 23 bytes in 0.1 seconds
tftp> quit
Please note that there is no firewall or SELinux on my machine.
Please verify that the locations of these files are OK: I have placed the kernel image file bootpImage.cns3420.uclibc in /tftpboot, and the TFTP service file is located in /etc/xinetd.d/tftp.
My TFTP service file is:
service tftp
{
    socket_type = dgram
    protocol    = udp
    wait        = yes
    user        = root
    server      = /usr/sbin/in.tftpd
    server_args = -s /tftpboot -b 512
    disable     = no
    per_source  = 11
    cps         = 100 2
    flags       = ipv4
}
The printenv output in U-Boot is:
CNS3000 # printenv
bootargs=root=/dev/mtdblock0 mem=256M console=ttyS0
baudrate=38400
ethaddr=00:53:43:4F:54:54
netmask=255.255.0.0
tftp_bsize=512
udp_frag_size=512
mmc_init=mmcinit
loading=fatload mmc 0 0x4000000 bootpimage-82511
running=go 0x4000000
bootcmd=run mmc_init;run loading;run running
serverip=192.168.0.219
ipaddr=192.168.0.112
bootdelay=5
port=1
bootfile=/tftpboot/bootpImage.cns3420.uclibcl
stdin=serial
stdout=serial
stderr=serial
verify=n
Environment size: 437/4092 bytes
Loading: T T T T T T T T T T
means there is no transfer at all. This can be caused by a wrong interface setting, i.e.
U-Boot is configured for 100 Mbit full duplex and you try to connect via half duplex or 10 Mbit (or some mix of the two). Another point is the MTU size: it should be 1500, as U-Boot cannot handle packet fragmentation.
A hint for Windows/VMware users:
TFTP timeouts from U-Boot are caused by Windows IP forwarding.
1) If you have a home network: switch it off.
2) If you are running the Routing and Remote Access service: shut the service down.
3) Check the registry for IP forwarding:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\IPEnableRouter
Set the value to 0 (and maybe reboot), for example as shown below.
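From an elevated command prompt, that registry change can be made like this (a sketch; value data 0 disables IP forwarding):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v IPEnableRouter /t REG_DWORD /d 0 /f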