I have created a Virtual Network and connected API Management to it.
I am planning to host my REST API in Azure Container Instances inside my VNET and then expose those APIs through Azure API Management by configuring the IP address of the Azure Container Instance REST API as the web service URL in API Management.
I have one doubt: is this the right way of doing it?
I am wondering, if the Azure Container Instance gets restarted and its IP address changes, whether my API exposed in API Management will be broken. Does the IP address change if an Azure Container Instance gets restarted for some reason?
There are some limitations with Azure Container Instances.
The IP address of a container won't typically change between updates, but it's not guaranteed to remain the same. As long as the container group is deployed to the same underlying host, the container group retains its IP address. Although rare, and while Azure Container Instances makes every effort to redeploy to the same host, there are some Azure-internal events that can cause redeployment to a different host. To mitigate this issue, always use a DNS name label for your container instances.
Terminated or deleted container groups can't be updated. Once a container group has stopped (is in the Terminated state) or has been deleted, the group is deployed as new.
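For the non-VNet case, attaching a DNS name label with the Azure CLI looks roughly like this (resource names below are placeholders); the resulting FQDN of the form myapi.<region>.azurecontainer.io stays stable across restarts:

# create the container group with a stable DNS name label
# (resource names are placeholders)
az container create \
  --resource-group myRG \
  --name myapi \
  --image myregistry.azurecr.io/myapi:latest \
  --ports 80 \
  --dns-name-label myapi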
However, it is rare for Azure Container Instances to be redeployed to a different host. Also, if you have a container instance in a VNet, you're unable to directly set a --dns-name-label value, and the instance can only be reached via its private IP address from within the virtual network (including other container groups), not from the outside world. Note: containers in a group are not discoverable through DNS. They can only be accessed through 'localhost', in combination with their exposed ports. You could get more references from "More about networking" in this blog.
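In the VNet case, a rough sketch of deploying into an existing subnet and reading back the private IP to plug into the API Management web service URL (resource names are placeholders):

# deploy the container group into an existing VNet/subnet
# (resource names are placeholders)
az container create \
  --resource-group myRG \
  --name myapi \
  --image myregistry.azurecr.io/myapi:latest \
  --vnet myVNet \
  --subnet mySubnet

# read back the private IP to configure in API Management
az container show \
  --resource-group myRG \
  --name myapi \
  --query ipAddress.ip --output tsv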
I have 2 services running in a single private VPC subnet (same availability zone). Each service is based on a container from here: https://github.com/spring-petclinic/spring-petclinic-microservices .
I've set up Route53 service endpoints for both services.
When I run my tasks (each within its own service), service A times out calling service B over service B's Route53 endpoint. Using localhost doesn't work because these containers are in separate services.
When I create a container for my task definition, I assign the port that my container is using (via the port mapping field). However, I notice in the console there is this note: "Host port mappings are not valid when the network mode for a task definition is host or awsvpc. To specify different host and container port mappings, choose the Bridge network mode."
Since I'm using Fargate, I am using awsvpc mode. So is this telling me my port mapping setting isn't doing anything? Is that why my services are timing out?
Then when I google bridge mode, this seems to tell me that the awsvpc networking mode supports service discovery: https://aws.amazon.com/about-aws/whats-new/2018/05/amazon-ecs-service-discovery-supports-bridge-and-host-container-/
So how does "bridge mode" work here? Why does the port mapping field not work for awsvpc?
Edit:
I read this: How to communicate between Fargate services on AWS ECS? and the answer just says, "I created a new service and things started working." That's a bit disheartening.
Edit2:
Yes, my VPC has DNS resolution enabled.
As it turns out, the security group on my service was only allowing HTTP on port 80. Those are the inbound rules of the default SG that the service wizard gives you. I updated it to allow traffic on my container ports, and the services seem to be talking to each other now.
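For reference, the fix boils down to an inbound rule like this (an AWS CLI sketch; the group ID and port are placeholders, and the rule is self-referencing so tasks in the same SG can reach each other):

# allow traffic on the container port from tasks in the same security group
# (group ID and port are placeholders)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 8080 \
  --source-group sg-0123456789abcdef0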
I have a project where I need to load data into a MySQL database. I need to whitelist an IP address in order to do the insertion, and I was planning on using Azure Container Instances to load the data into the database. But Azure containers change their IP after every run. So, are you able to bind a static outbound public IP address to containers? I do not want to use an Azure Firewall gateway since that's too expensive, and while I got the process working on a Linux VM, a container would suit my needs better since I just need to run a script several times a day.
Our company's laptops have a WCF sync client installed which communicates with the server.
The data transfer works as long as they are not connected to the VPN.
When they connect to the VPN, I can make the WCF client sync again if I add the "proxyaddress" parameter to the .config file.
Question: how can I make it work in both scenarios? Is there a way for the WCF client to make a "smart selection" among multiple endpoints?
This issue relates more to networking and routing than to WCF.
When we connect to a VPN, an extra virtual network interface is created on the local machine. At the same time, the local routing table is changed, which causes the issue that the internal network address can no longer be reached. We can work around this by setting up a proxy address; a more general way is to add a static route on the local machine.
route add -p 172.17.10.0 mask 255.255.255.0 172.17.16.1
The first address is the destination network address; the last address is a local gateway that is reachable through a local network interface. This causes packets sent to the destination network to be routed out through the specified gateway's interface.
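After adding it, you can check that the entry is present, and remove it again later if needed (the addresses match the placeholder example above):

:: confirm the persistent route is now in the local routing table
route print 172.17.*
:: to undo the change later
route delete 172.17.10.0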
Here is a related link.
https://docs.oracle.com/cd/E53394_01/html/E54745/gmyag.html
Feel free to let me know if there is anything I can help with.
We have a SQL Azure database and enabled a VNET service endpoint. The service endpoint is listed in our VNET, and the Azure SQL server lists our VNET. According to the documentation found here, connections from applications inside our VNET should use the Azure backbone and not travel over the public internet.
There was another Stack Overflow post asking a similar question, but I still didn't see an answer (maybe I missed it). That post is here.
This is great, but I don't see how to build the connection string to utilize this internal network path, since the only name available is the public DNS name (which we can still use with SSMS to manage the server from our on-premises location).
Is Azure smart enough to know that this public DNS name is routed differently when used inside the VNET versus when it's used from our on-premises site?
Is Azure smart enough to know that this public DNS name is routed differently when used inside the VNET versus when it's used from our on-premises site?
Yes. And that doesn't even require a VNET service endpoint. Connections within Azure, even across regions, never leave Microsoft's private network.
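So you keep building the connection string with the public DNS name exactly as before; a typical ADO.NET string (server, database, and user names are placeholders) would be:

Server=tcp:myserver.database.windows.net,1433;Initial Catalog=mydb;User ID=myuser;Password={your_password};Encrypt=True;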
A Virtual Network Service Endpoint is mostly just a firewall rule on your SQL Instance, so you can cut off all public IP access if you want.
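If you go that route, a rough Azure CLI sketch for adding the VNET rule (resource names are placeholders):

# allow only the VNET subnet through the logical server's firewall
# (resource names are placeholders)
az sql server vnet-rule create \
  --resource-group myRG \
  --server myserver \
  --name allow-my-subnet \
  --vnet-name myVNet \
  --subnet mySubnet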
I'm specifically trying to do this with Apache Storm (1.0.2), but it's relevant to any service that is secured with Kerberos. I'm trying to run a secured Storm cluster in Docker. There are a number of out-of-the-box Docker images out there for Storm, and they work great unsecured. I'm using https://github.com/Baqend/docker-storm. I also have Storm running securely on RHEL VMs.
However, my understanding is that Kerberos ties hostnames to principals, so if I'm making service foobar available to clients, I need to create a principal of foobar/hostname@REALM. Then a client service might connect to hostname with principal foobar; Kerberos will look up foobar/hostname@REALM in its database, find that it's there (because we created a principal with exactly that name), and everything will work.
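(With MIT Kerberos, for example, that creation step is roughly the following; the names are placeholders.)

# register the host-bound service principal in the KDC (names are placeholders)
kadmin -q "addprinc -randkey foobar/hostname.example.com@EXAMPLE.COM"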
In my case, it's described here: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_installing_manually_book/content/configure_kerberos_for_storm.html. The nimbus authenticates as storm/<nimbus host>@REALM, and the supervisors and outside clients authenticate as storm@REALM. Everything works.
But here in 2017 we have containers, and hostnames are no longer static. So how would I Kerberize a service that runs in Docker Data Center (or Kubernetes, etc.)? I have to attach an unknown hostname to the server authentication. I imagine I could create a principal for every possible hostname and dynamically pick the right one at startup based on where the container lives, but that's kludgy.
Am I misunderstanding how Kerberos works? Is there a solution here that I don't see? I see multiple examples online of people running Storm in Docker, but I can't imagine that nobody's clusters are secure.
I don't know Apache Storm or Docker, but based on previous work with JBoss in a cluster, where an inbound client could be connecting to any one of a number of different hosts, you would simply assign a virtual name to the entire pool at the load balancer and Kerberize the service according to the virtual name instead of the individual host names. So if you're making service foobar available to clients, you need to create a service principal name (SPN) of foobar/virtualhostname@REALM in your directory to Kerberize the service with. You assign that SPN to a user account (not a computer account) to give it the flexibility to work with any Kerberized service which uses that SPN. If you are using Active Directory, you must create a keytab with the SPN inside of it and place the keytab on each host running the Kerberized service instance foobar/virtualhostname@REALM.
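With Active Directory, generating that keytab looks roughly like this (the account, host, and realm names are placeholders):

:: map the virtual-host SPN to a service account and export a keytab
:: (account, host, and realm names are placeholders)
ktpass /princ foobar/virtualhostname@EXAMPLE.COM /mapuser EXAMPLE\svc-foobar ^
  /crypto AES256-SHA1 /ptype KRB5_NT_PRINCIPAL /pass * /out foobar.keytab

The same foobar.keytab then gets copied to every host that can end up serving the virtual name.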