I was asked to build a cloud architecture plan on Azure that is serverless, yet able to transition to Kubernetes later. I was thinking of using Azure Container Instances (ACI), since Azure Functions is not really a container deployment model. From my limited understanding, ACI seems to lack a lot, e.g. health checks, scaling, auto-healing, etc.
My question: what is the best practice for ACI, and is there a workaround to support these capabilities? Looking at the Microsoft website it looks promising, but it is hard to dig out the exact recommended design.
Combine ACI with the ACI Logic Apps connector, Azure queues and Azure Functions to build robust infrastructure that can elastically scale out containers on demand. With Azure Container Instances, you can run complex tasks that are capable of responding to events.
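As a sketch of that pattern, a queue-triggered Azure Function could spin up a short-lived container group per message through the fluent management SDK. This is only an illustration under assumptions, not official guidance: the queue name, resource group, region and image are placeholders, and the exact fluent chain varies by SDK version.

```csharp
using System;
using Microsoft.Azure.Management.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent.Core;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ScaleOutWorker
{
    // One container group per queued job: ACI bills per second, so groups
    // are cheap to create and can be deleted once the job completes.
    [FunctionName("ScaleOutWorker")]
    public static void Run([QueueTrigger("jobs")] string jobId, ILogger log)
    {
        var credentials = SdkContext.AzureCredentialsFactory
            .FromFile(Environment.GetEnvironmentVariable("AZURE_AUTH_LOCATION"));
        var azure = Azure.Configure()
            .Authenticate(credentials)
            .WithDefaultSubscription();

        var group = azure.ContainerGroups
            .Define($"job-{jobId}")                      // hypothetical naming
            .WithRegion(Region.EuropeWest)
            .WithExistingResourceGroup("myRG")           // placeholder
            .WithLinux()
            .WithPublicImageRegistryOnly()
            .WithoutVolume()
            .DefineContainerInstance("worker")
                .WithImage("myregistry/worker:latest")   // placeholder
                .WithExternalTcpPort(80)
                .WithCpuCoreCount(1.0)
                .WithMemorySizeInGB(1.5)
                .Attach()
            .Create();

        log.LogInformation($"Started container group {group.Name} for job {jobId}");
    }
}
```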
As for Azure Container Instances itself, its main benefit is being the fastest and simplest option compared with a VM, AKS, a Web App and so on. But you do not have much control over it, and its main aim is to let you test whether your image runs as you expect.
Azure Logic Apps or Azure Functions just help you create and delete the container instance at the time you want, or fetch its state or messages from it, and not much more. So if you want to combine Azure Container Instances with other services such as Logic Apps, you need to know exactly what each of them can do for you.
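On the health-check and auto-heal points from the question: ACI can declare a liveness probe and a restart policy in a YAML deployment, which covers part of what an orchestrator would otherwise do. A rough sketch, assuming an HTTP health endpoint (image, port and probe path are placeholders; check the apiVersion against the current ACI YAML reference), deployed with `az container create --resource-group myRG --file container.yaml`:

```yaml
apiVersion: 2019-12-01
location: westeurope
name: mycontainergroup
properties:
  osType: Linux
  restartPolicy: Always       # auto-heal: restart the container when it exits
  containers:
  - name: myapp
    properties:
      image: myregistry.azurecr.io/myapp:latest
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.5
      livenessProbe:          # repeated probe failures restart the container
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 15
        periodSeconds: 10
```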
If you have any other questions about this, please let me know and I will do my best to help you.
Related
A simple question about scalability. I have been studying scalability and I think I understand the basic concept behind it. You use an orchestrator like Kubernetes to manage the automatic scaling of a system, so that as a particular microservice gets an increased demand of calls, the orchestrator creates new instances of it to deal with the demand. Now, in our case, we are building a microservice architecture similar to the example in Microsoft's "eShopOnContainers":
Now, here each microservice has its own database to manage, just like in our application. My question is: when scaling this system up by creating new instances of a certain microservice, let's say the "Ordering" microservice in the example above, wouldn't that create a new set of databases? In the case of our application, we are using SQLite, so each microservice has its own copy of the database. I would assume that in order to be able to scale such a system up, each microservice would need to connect to an external SQL Server. But if that were the case, wouldn't that be a bottleneck? I mean, having multiple instances of a microservice to serve more demand for a particular service, but with all those instances still accessing a single database server?
In the case of our application, we are using SQLite, so each microservice has its own copy of the database.
One of the most important aspects of services that scale out is that they are stateless; services on Kubernetes should be designed according to the 12-factor principles. This means that service instances cannot each have their own copy of the database, unless it is a cache.
I would assume that in order to be able to scale such a system up, each microservice would need to connect to an external SQL Server.
Yes, if you want to be able to scale out, you need to use a database that lives outside the instances and is shared between them.
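For instance, in a 12-factor style setup every replica reads the same connection string from its environment instead of shipping its own SQLite file. A minimal sketch with EF Core, assuming the .NET 6+ minimal hosting model; the ORDERING_DB variable and OrderingContext are hypothetical names:

```csharp
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

// All instances of this service share one external SQL Server; the
// connection string comes from the environment, not from the image.
var connectionString = Environment.GetEnvironmentVariable("ORDERING_DB")
    ?? throw new InvalidOperationException("ORDERING_DB is not set");

builder.Services.AddDbContext<OrderingContext>(
    options => options.UseSqlServer(connectionString));

builder.Build().Run();

// Hypothetical per-service DbContext; the real entity model goes here.
public class OrderingContext : DbContext
{
    public OrderingContext(DbContextOptions<OrderingContext> options)
        : base(options) { }
}
```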
But if that were the case, wouldn't that be a bottleneck?
This depends very much on how you design your system. Comparing microservices to monoliths: a monolith typically uses one big database for everything, whereas with microservices it is easier to use multiple different databases, so it should be much easier to scale out the databases this way.
I mean, having multiple instances of a microservice to serve more demand for a particular service, but with all those instances still accessing a single database server?
There are also many ways to scale a database system itself, e.g. caching read operations (but be careful with invalidation). This is a large topic in itself and depends very much on what you do and how you do it.
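To make the caching suggestion concrete, here is a hedged cache-aside sketch in C# with StackExchange.Redis (the type and key names are made up for illustration): reads hit the shared cache first and only fall through to the single database server on a miss, which takes read pressure off it.

```csharp
using System;
using System.Threading.Tasks;
using StackExchange.Redis;

public class CachedOrderReader
{
    private readonly IDatabase _cache;
    private readonly Func<string, Task<string>> _loadFromDb;

    public CachedOrderReader(IConnectionMultiplexer redis,
                             Func<string, Task<string>> loadFromDb)
    {
        _cache = redis.GetDatabase();
        _loadFromDb = loadFromDb;
    }

    public async Task<string> GetOrderJsonAsync(string orderId)
    {
        var key = $"order:{orderId}";

        var cached = await _cache.StringGetAsync(key);
        if (cached.HasValue)
            return cached;                // served from cache, no DB round-trip

        var fromDb = await _loadFromDb(orderId); // the single shared SQL Server

        // A short TTL keeps stale reads bounded; invalidation is the hard part.
        await _cache.StringSetAsync(key, fromDb, TimeSpan.FromMinutes(5));
        return fromDb;
    }
}
```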
We are developing a Web API using .NET Core. To perform background tasks we have used Hosted Services.
The system is hosted in an AWS Elastic Beanstalk environment with a load balancer, so based on the load, Beanstalk creates/removes instances of the system.
Our problem is this:
Since the background services also run inside the API, when the load balancer increases the number of instances, the number of background services increases as well, and the same task may be executed multiple times. Ideally there should be only one instance of the background services.
One way to tackle this is to stop executing the background services when in a load-balanced environment, and to have a dedicated, non-load-balanced, single-instance environment that runs only the background services.
That is a bit of an ugly solution, so:
1) Is there a better solution for this?
2) Is there a way to identify the primary instance in a load-balanced environment? If so, I could register the hosted services conditionally.
Any help is really appreciated.
Thanks
I am facing the same scenario and am thinking of implementing a custom service architecture that runs normally on all of the instances but takes advantage of a pub/sub broker and a distributed memory service, so that those small services can contact each other and coordinate what needs to be done. It's complicated to develop, yes, but a very robust solution IMO.
You'll "have to" use a distributed lock system. For example, a distributed memory cache that sets a lock when a node of your cluster starts working on a background task; if another node tries to do the same job before the work is done, it will be blocked by the first node's lock.
What I mean is: if your nodes don't share some kind of "sync handler", you can't handle this situation. It could be a SQL application lock, a distributed memory cache, or something else.
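As a hedged illustration of the cache-based lock with StackExchange.Redis (the key name and TTL are arbitrary; production code would also verify ownership before releasing, e.g. via the Redlock pattern):

```csharp
using System;
using System.Threading.Tasks;
using StackExchange.Redis;

public static class JobLock
{
    // SET key value NX EX <ttl>: only one node at a time can hold the lock.
    public static async Task<bool> TryRunExclusiveAsync(
        IConnectionMultiplexer redis, string jobName, Func<Task> job)
    {
        var db = redis.GetDatabase();
        var lockKey = $"lock:{jobName}";
        var owner = Environment.MachineName; // identifies this node

        // When.NotExists: only the first node acquires the lock; the TTL
        // ensures a crashed node cannot hold the lock forever.
        var acquired = await db.StringSetAsync(
            lockKey, owner, TimeSpan.FromMinutes(5), When.NotExists);

        if (!acquired)
            return false; // another instance is already running the job

        try { await job(); }
        finally
        {
            // Simplified release; a real lock would compare the owner token
            // first, so an expired lock held by another node isn't deleted.
            await db.KeyDeleteAsync(lockKey);
        }
        return true;
    }
}
```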
There is something called a Mutex, but even that won't control this in a multi-instance environment. However, there are ways to control it to some level (maybe even 100%). One way is to keep a tracker in the database: e.g. if the job has to run daily, then before starting the job in the background service, query the database for an entry for today; if there is none, insert an entry and start the job.
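A hedged sketch of that tracker idea with SQL Server (the JobRuns table and job name are hypothetical; a unique index on (JobName, RunDate) is what makes the claim race-free):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;

public static class DailyJobTracker
{
    // Assumes: CREATE TABLE JobRuns (JobName nvarchar(100), RunDate date,
    //          CONSTRAINT UQ_JobRuns UNIQUE (JobName, RunDate));
    public static async Task<bool> TryClaimTodayAsync(
        string connectionString, string jobName)
    {
        const string sql =
            "INSERT INTO JobRuns (JobName, RunDate) VALUES (@job, @today);";

        using var conn = new SqlConnection(connectionString);
        await conn.OpenAsync();

        using var cmd = new SqlCommand(sql, conn);
        cmd.Parameters.AddWithValue("@job", jobName);
        cmd.Parameters.AddWithValue("@today", DateTime.UtcNow.Date);

        try
        {
            await cmd.ExecuteNonQueryAsync();
            return true;  // we inserted today's row, so we own today's run
        }
        catch (SqlException e) when (e.Number == 2627 || e.Number == 2601)
        {
            return false; // unique-key violation: another instance claimed it
        }
    }
}
```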
Say I have around 1000 VMs running different services built with different technologies like Python, .NET and Java, and different middleware like RabbitMQ, Redis, etc.
How can I dynamically handle the interactions between the services and provide scalability?
For example, say I have service A pushing data to RabbitMQ, where it is processed by service B, which in turn fetches additional data from service C. You see, in the end I have a decentralized system that is pulling data from somewhere and pushing it somewhere else... a total mess! Scale it up to 2000 microservices, omg XD.
The moment I change one thing, a lot of other systems are affected.
Do you know of something, maybe like an ESB, where I can couple two services together with a message-transform adapter in the middle, and where I can change dependencies at runtime? For example, so that the stream no longer ends in service F but in service G instead?
I think microservices are a good idea because they can be stateless, can scale, and can easily be deployed as containers. But I don't know of a good tool for managing the data flow. RabbitMQ alone doesn't support enough enterprise integration patterns. Do you have any advice?
How can I dynamically handle the interactions?
See if an existing EIP (Enterprise Integration Patterns) pattern solves your problem of implementing the logistics.
Depending on how your design shapes up, you may need to use distributed lock management.
Or maybe your application is simple enough to use a Consul K/V store as a semaphore and a simple Mosquitto topic-based bus.
Provide scalability
What is the solution you are trying to scale? AMQP, Consul and "microservices" are in themselves very scalable and distributed.
However, to scale your thought process and devops, you need to find a way to see things as patterns that help you split the problem and tackle the complexity.
Do you know of something, maybe like an ESB, where I can couple two services together with a message-transform adapter in the middle, and where I can change dependencies at runtime?
Read up on EIP. ESBs are just one of many ways to solve your problem. RTFM, and get some perspective.
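As a tiny example of the EIP route without a full ESB, here is a hedged sketch of a standalone "message translator" adapter in C# (queue names are hypothetical; the calls follow the RabbitMQ.Client 6.x API). It consumes from one queue, transforms the payload, and republishes to another, so re-pointing the stream from service F to service G means changing only this adapter:

```csharp
using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class TransformAdapter
{
    const string IN_QUEUE = "service-a.out";   // hypothetical names
    const string OUT_QUEUE = "service-g.in";

    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using var connection = factory.CreateConnection();
        using var channel = connection.CreateModel();

        channel.QueueDeclare(IN_QUEUE, durable: true, exclusive: false, autoDelete: false);
        channel.QueueDeclare(OUT_QUEUE, durable: true, exclusive: false, autoDelete: false);

        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (_, ea) =>
        {
            // The transform step: here just uppercasing, in reality e.g.
            // mapping one service's message schema onto another's.
            var transformed = Encoding.UTF8
                .GetString(ea.Body.ToArray())
                .ToUpperInvariant();

            channel.BasicPublish(exchange: "", routingKey: OUT_QUEUE,
                basicProperties: null, body: Encoding.UTF8.GetBytes(transformed));
            channel.BasicAck(ea.DeliveryTag, multiple: false);
        };

        channel.BasicConsume(IN_QUEUE, autoAck: false, consumer: consumer);
        Console.ReadLine(); // keep the adapter running
    }
}
```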
But I don't know of a good tool for managing the data flow.
Ask yourself whether your problem is really distributed workflow management, or whether a data pipeline is what you are actually looking for.
Look at Spark, Storm, Luigi and Airflow; they all serve different purposes, but you will know what to do with them once you have read up on everything else in this post ;)
I've been needing a new VM host for some time now, and from working with/on AWS at work, "The Cloud" seems to be a good idea.
I've done some math, and no matter how I count, it's going to be cheaper to do it myself than colo or anything else. Plus, I really like lots of blinking lights :D
A year or so ago, I heard about OpenStack and have been looking cursorily at it since then. It seems big and complex (and scary!), and some friends who have been trying to deploy it at work for a year without quite finishing/succeeding indicate that it is what it seems :)
However, I like tormenting myself, so I've decided to give it a try. It does provide all the functionality I need, and then some. Theoretically I could go with Vagrant, but that's not quite halfway to what I want/need.
So, I've been looking at https://en.wikipedia.org/wiki/OpenStack#Components and from that came to the following conclusion:
Required: (Nova, Glance, Horizon, Cinder)
These seem to be the "core" services. I need all of them.
Nova
Compute fabric controller
Glance
Image service (for templates)
Horizon
Dashboard
Cinder
Block storage devices (can work with ZoL w/ 3rd party driver)
Less important: (Barbican, Trove, Designate)
I really don't need any of these; they're more "could be nice to have at some point".
Barbican
REST API designed for the secure storage, provisioning and management of secrets
Trove
Database-as-a-service provisioning relational and non-relational database engine
Designate
DNS as a Service
Possibly not needed: (Neutron, Keystone)
These I don't know if I need. I have DHCP, VLAN, VPN, DNS, LDAP and Kerberos services on the network that work just fine, and I'm not replacing them!
Neutron (previously Quantum)
Network management (DHCP, VLAN)
Keystone
Identity service (can work with existing LDAP servers)
Not needed: (Swift, Ceilometer, Ironic, Zaqar, Searchlight, Sahara, Heat, Manila)
Meh! I'm doing this for me, for my basement and for my own development and enjoyment, so I don't need these. It would be nice to go with fully object-based storage, but that's not feasible for me at this time.
Swift
Object storage system
Ceilometer
Telemetry Service (billing)
Ironic
Bare metal provisioning instead of virtual machines
Zaqar
Multi-tenant cloud messaging service for web developers (similar to SQS)
Searchlight
Advanced and consistent search capabilities across various OpenStack cloud services
Sahara
Easily and rapidly provision Hadoop clusters (for storing and managing vast amounts of data cheaply and efficiently)
Heat
Orchestration layer (store the requirements of a cloud application in a file that defines what resources are necessary for that application)
Manila
Shared File System Service (manage shares in a vendor agnostic framework)
If we don't count storage (I already have my own block storage, which I can use with Cinder and some 3rd party plugins/modules) and compute nodes (everything that's left over will become compute nodes), can I run all this on one machine? With a hot standby/failover?
Everything is going to be connected to the same power jack, same rack, same [outgoing] network cable, so more redundancy than that is overkill. I don't even need that, but "why not" :)
The basic recommendation I've heard is four to six machines, and after a lot of pestering of the people who said that, it turns out they mean "two storage, two controller, two compute". Which is what I was thinking as well: running the controllers on two machines should be enough. They're basically only going to run Glance, Horizon and Cinder, and possibly Neutron and Keystone.
None of those services seems to be very resource-heavy.
Is there something I'm missing?
Oh, and none of this is going to face the 'Net! It's all just for me.
Though it is theoretically possible to bring up OpenStack without Keystone, in practice it is almost impossible, and it makes the system pretty inconvenient to use.
You can definitely run full OpenStack on a single machine (or even in a VM). Check out DevStack (http://docs.openstack.org/developer/devstack/); you just run a shell script to bring up a full working OpenStack setup.
As long as you are not worried about availability and your workload is minimal, a single-node deployment is a pretty good way to get your feet wet.
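For reference, a minimal DevStack run looks roughly like this (passwords are placeholders, and the clone URL may differ from the docs linked above):

```bash
git clone https://opendev.org/openstack/devstack
cd devstack

# Minimal local.conf; DevStack installs and configures everything else.
cat > local.conf <<'EOF'
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
EOF

./stack.sh   # grab a coffee; this takes a while
```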
I've deployed a single micro-instance Redis server on Compute Engine using the (very convenient) click-to-deploy feature.
I would now like to update this configuration to have a couple of instances, so that I can benchmark how this increases performance.
Is it possible to modify the config while it's running?
The other option would be to add a whole new Redis deployment, bleed traffic onto it over time and eventually shut down the old one. Not only does this sound like a pain in the butt, but I also can't see any way in the web UI to click-to-deploy multiple clusters.
I've only just got my learner's license with all this, so I would also appreciate any general "good-to-knows".
I'm on the Google Cloud team working on this feature and wanted to chime in. Sorry no one replied to this for so long.
We are working on some of the features you describe that would surely make the service more useful and powerful. Stay tuned on that.
I admit that to date there really is no good solution for modifying an existing deployment, other than launching a new cluster, migrating your data over, and redirecting reads and writes to the new cluster. This is a limitation we are working to fix.
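If you go the migrate-and-redirect route, one hedged sketch with plain Redis replication (IP addresses are placeholders; Redis 5+ uses REPLICAOF instead of SLAVEOF):

```bash
# On the new instance: replicate from the old one until it is in sync.
redis-cli -h 10.0.0.2 slaveof 10.0.0.1 6379

# Watch for master_link_status:up, then pause writes on the old instance.
redis-cli -h 10.0.0.2 info replication

# Promote the new instance to a standalone master and repoint clients.
redis-cli -h 10.0.0.2 slaveof no one
```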
As a workaround for creating two deployments using Click to Deploy with Redis, you could create a separate project.
Also, if you want to migrate to your own template using the Deployment Manager API (https://cloud.google.com/deployment-manager/overview), keep in mind that Deployment Manager does not have this limitation: you can create multiple deployments from the same template in the same project.
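For example (the deployment names and template file are hypothetical), instantiating the same template twice in one project looks roughly like this with the gcloud CLI:

```bash
# Two independent Redis deployments from one template, in the same project.
gcloud deployment-manager deployments create redis-a --config redis.yaml
gcloud deployment-manager deployments create redis-b --config redis.yaml
```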
Chris