Can somebody suggest how I can do load balancing for a Rasa chatbot? There is not much documentation available, and few tutorials.
If you deploy your Rasa bot via Rasa X, you can do so on Kubernetes or OpenShift. Both of these cluster environments provide native load balancing and the ability to scale pods when you have more traffic. https://rasa.com/docs/rasa-x/installation-and-setup/openshift-kubernetes/
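To make "scale pods when you have more traffic" concrete, here is a minimal sketch using the official Kubernetes Python client. The deployment name "rasa-production" and namespace "rasa" are placeholders for whatever your install created; the cluster's Service object then load-balances requests across the replicas.

```python
# Minimal sketch: scale a Rasa deployment up by hand.
# The names "rasa-production" and "rasa" are assumptions; substitute your own.
# Requires: pip install kubernetes
from kubernetes import client, config

config.load_kube_config()  # authenticate with your local kubeconfig

apps = client.AppsV1Api()

# Raise the replica count; the Kubernetes Service in front of these pods
# spreads incoming traffic across all of them.
apps.patch_namespaced_deployment_scale(
    name="rasa-production",
    namespace="rasa",
    body={"spec": {"replicas": 3}},
)
```

In practice you would more likely configure a HorizontalPodAutoscaler so the cluster adjusts the replica count automatically based on CPU or request load, rather than patching it by hand.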
Related
We have a web application developed with Angular and .NET, which is deployed on an Azure cloud platform, let's say External A-Cloud.
We need to lift the same application and host it on a different internal cloud platform, let's say Internal B-Cloud.
How can we achieve this? Please share some thoughts on the groundwork needed to start the process.
Warm Regards
KdM
Migrate an externally hosted, cloud-based application to our own cloud platform.
We have both AWS and Azure, but the externally hosted application is on the Azure cloud platform.
We can move from any cloud to any cloud, but we need to understand a few points first.
How are the Angular and .NET apps hosted in Azure?
If they are hosted on simple virtual machines, then we can create a virtual machine in AWS and migrate or host the apps there (yes, we definitely need to consider the AWS foundations, like VPCs, AWS's equivalent of Azure VNets; hopefully that's already done).
If the Angular and .NET apps hosted in Azure are Kubernetes and Docker based:
We need to create an EKS cluster in AWS, and since everything is Docker based, the same manifest files etc. would work in EKS as well, with minor changes (see the sketch below).
We can look at migration tools as well if the apps are Windows VM based.
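As a concrete illustration of those "minor changes", here is a minimal Python sketch that rewrites a PersistentVolumeClaim manifest's storage class from an AKS default to an EKS one. The storage class names and file names are assumptions; check what your clusters actually provide.

```python
# Minimal sketch: adapt an AKS manifest for EKS.
# "managed-premium" (AKS) and "gp2" (EKS) are illustrative defaults.
# Requires: pip install pyyaml
import yaml

with open("pvc.yaml") as f:
    manifest = yaml.safe_load(f)

# Azure managed disks do not exist in AWS, so point the claim at an
# EBS-backed storage class instead.
if manifest.get("spec", {}).get("storageClassName") == "managed-premium":
    manifest["spec"]["storageClassName"] = "gp2"

with open("pvc-eks.yaml", "w") as f:
    yaml.safe_dump(manifest, f)
```

Image registry references (ACR vs. ECR), ingress annotations, and load balancer annotations typically need the same kind of one-line substitutions.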
I was taking a look at Hub (the dataset format for AI) and noticed that Hub integrates with GCP and AWS. I was wondering if it also supports integration with MinIO.
I know that Hub allows you to directly stream datasets from cloud storage to ML workflows, but I'm not sure which ML workflows it integrates with.
I would like to use MinIO over S3 since my team has a self-hosted MinIO instance (i.e. it's free for us).
Hub allows you to load data from almost anywhere: it works locally, on Google Cloud, MinIO, and AWS, as well as on Activeloop storage (no servers needed!), and it can stream datasets directly from cloud storage to ML workflows.
You can find more information about storage authentication in the Hub docs.
Hub can then stream data to PyTorch or TensorFlow through simple dataset integrations, letting you work with a Hub dataset as if the data were local.
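Since MinIO speaks the S3 protocol, pointing Hub at it is mostly a matter of supplying an endpoint URL. A minimal sketch, assuming a Hub 2.x install and a dataset already stored in a MinIO bucket (the bucket name, endpoint, and credential values are placeholders):

```python
# Minimal sketch: load a Hub dataset from MinIO and stream it to PyTorch.
# Bucket, endpoint, and credentials below are placeholders, not real values.
# Requires: pip install hub torch
import hub

ds = hub.load(
    "s3://my-bucket/my-dataset",  # MinIO is S3-compatible
    creds={
        "aws_access_key_id": "minio-access-key",
        "aws_secret_access_key": "minio-secret-key",
        "endpoint_url": "http://minio.internal:9000",  # your MinIO server
    },
)

# Stream batches straight from object storage into a PyTorch DataLoader.
dataloader = ds.pytorch(batch_size=32, num_workers=2, shuffle=True)
for batch in dataloader:
    pass  # training / evaluation loop goes here
```

Check the storage-authentication page of the Hub docs for the exact set of keys the creds dict accepts in your version.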
I am currently gathering some information for a blog post on Octopus roles, environments, and targets, and I have a question. While I was thinking of all the possible roles, I thought of a load balancer.
I have never seen a Tentacle installed on the machine hosting a software load balancer. Has anyone in this group seen such a scenario?
Pardon me if my question is silly, but my knowledge of load balancers is not great.
Thanks in advance.
Regards Tarun
There's nothing special about a machine that runs a load balancer that would preclude putting a Tentacle on it. The only question is: what are you going to deploy to the machine hosting the load balancer?
New load balancer software might make sense, or frameworks and other supporting software (.NET, Node.js, whatever). But ordinarily you would not put web applications on this machine; the load-balancing machine would not itself host the web applications it balances. (You could run web apps on the same machine, since the load balancer just needs the URL to reach the app on its own host, but it's not a usual configuration.)
And most other management tasks are easily accomplished with tools like Ansible, Chocolatey, etc. So unless you're already well invested in Octopus Deploy, installing a Tentacle on the load-balancing machine seems questionable.
We are currently following the default deployment instructions for Spinnaker, which state that the instance type should be m4.xlarge.
http://www.spinnaker.io/v1.0/docs/creating-a-spinnaker-instance#section-amazon-web-services
We made an unsuccessful attempt to deploy it to an m4.large, but the services didn't start.
Has anyone tried something similar and succeeded?
It really depends on the size of your cloud.
There are four core services that you need: gate, deck, orca, and clouddriver. You can shut the other ones off if, say, you don't care about automated triggers, baking, or Jenkins integration.
I'm able to run this locally with the Docker images in about 8 GB of RAM, and it works. Using S3 instead of Cassandra also helps here.
You can play around with the settings in the baked Spinnaker image, but for internal demos and the like, I've been able to just spin up a VM, install Docker, and run the Docker Compose config on an m4.large.
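When the services don't start, it helps to check which of the four core services actually came up. Here is a minimal sketch that polls their health endpoints; the ports below are the usual Spinnaker defaults (an assumption, since your config may remap them):

```python
# Minimal sketch: check which Spinnaker core services are responding.
# Ports are assumed defaults: gate 8084, orca 8083, clouddriver 7002, deck 9000.
# Requires: pip install requests
import requests

SERVICES = {
    "gate": "http://localhost:8084/health",
    "orca": "http://localhost:8083/health",
    "clouddriver": "http://localhost:7002/health",
    "deck": "http://localhost:9000/",  # deck is a static UI; any 200 will do
}

for name, url in SERVICES.items():
    try:
        resp = requests.get(url, timeout=5)
        state = "up" if resp.ok else "HTTP %d" % resp.status_code
    except requests.exceptions.ConnectionError:
        state = "down (connection refused)"
    print(name + ": " + state)
```

In my experience clouddriver tends to be the most memory-hungry of the services, which would fit the symptoms on the smaller instance type.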
I'm trying to set up a production server for a Rails 3 app on a single Amazon EC2 instance, and I'm wondering what route to take.
I'm quite new to deploying Rails apps. Is there a pre-existing AMI I can use for Rails 3?
Any tips, wisdom, or advice appreciated. Thanks!
I'd suggest using verified EC2 AMIs, for instance those from RightScale. You don't need to use RightScale as a service, but their AMIs are pretty stable and reliable.
UPDATE: I'd advise using the Amazon Linux machine image, which is based on CentOS 6.
For reasonably priced EC2 management services, check out scalr.com.
As for gems that make it easier to deploy Rails to EC2, have a look at PoolParty and Rubber.
Ubuntu has a nice guide on EC2 and Ubuntu images: https://help.ubuntu.com/community/EC2StartersGuide
Also see http://alestic.com/. I just set up one of these images on my EC2 free usage tier with no issues.
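If you'd rather script the launch than click through the console, a minimal boto3 sketch might look like the following. The AMI ID and key pair name are placeholders, not real values; look up a current Ubuntu AMI for your region on the Alestic or Ubuntu pages above.

```python
# Minimal sketch: launch a free-tier instance from a chosen Ubuntu AMI.
# ImageId and KeyName are placeholders; substitute your own.
# Requires: pip install boto3, with AWS credentials configured
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",   # placeholder: a current Ubuntu AMI for your region
    InstanceType="t2.micro",  # free-usage-tier eligible
    KeyName="my-keypair",     # assumes an existing EC2 key pair
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```

From there you can SSH in and install your Rails stack, or use the gems mentioned above to automate the provisioning.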