How to utilize disk space of Amazon EBS attached to a DCOS agent machine

We have EBS volumes attached to our CentOS machines, which are used as DCOS agent machines. However, when a DCOS cluster is
created, the mounted EBS storage is not counted toward the total DCOS disk capacity.
Could you let me know if there is any way to include it? The DCOS cluster is otherwise working properly and we are able to run applications (ArangoDB, Spark) on it.
I've checked this link: https://dcos.io/docs/1.8/usage/storage/external-storage/, but it doesn't seem to solve my problem.

Mount Disk Resources are probably what you are looking for.
You can learn more about Mount/Path disks in the Mesos documentation.
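If it helps, here is a rough sketch of what that looks like on an agent. DCOS detects filesystems mounted at /dcos/volume0, /dcos/volume1, and so on as Mount disk resources; the device name /dev/xvdf and the agent unit name below are assumptions, so adjust them for your setup:

    # run on each agent; this destroys existing data on the volume
    sudo systemctl stop dcos-mesos-slave                # private agent (assumed unit name)
    sudo mkfs.ext4 /dev/xvdf                            # format the attached EBS volume
    sudo mkdir -p /dcos/volume0                         # mount points must be named /dcos/volumeN
    echo '/dev/xvdf /dcos/volume0 ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab
    sudo mount /dcos/volume0
    sudo rm -f /var/lib/mesos/slave/meta/slaves/latest  # clear stale agent resource state
    sudo systemctl start dcos-mesos-slave

After the agent re-registers, the volume should show up as a Mount disk resource in the cluster's total capacity.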

Related

Why can't I use the persistent disk's storage I just bought on Google Cloud Compute Engine?

I have just set up my first Google Cloud Compute Engine instance so I can run some Python scripts on large files. As part of the setup I added a 1TB persistent disk.
When I SSH into the virtual machine, I don't see the added storage, which means I can't download my dataset.
How do I access the persistent disk?
Thanks.
Adding an additional persistent disk makes the disk available to your Compute Engine instance, but you must then format it and mount it before use. This is similar to adding an additional physical disk to your desktop: the disk is there from a hardware perspective, but it must still be made known to the operating system.
There is documentation on the procedure here (Adding or resizing zonal persistent disks).
In summary:
Use sudo lsblk to find the device id.
Format the disk using sudo mkfs.ext4.
Use sudo mkdir to create a mount point.
Use sudo mount to mount the file system.
You can also edit /etc/fstab to mount the file system at boot time.
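Put together, the whole procedure looks roughly like this (the device name /dev/sdb and the mount point /mnt/disks/data are assumptions; use the device that lsblk actually reports):

    sudo lsblk                                          # identify the new disk, e.g. /dev/sdb
    sudo mkfs.ext4 -m 0 /dev/sdb                        # format it (destroys existing data)
    sudo mkdir -p /mnt/disks/data                       # create a mount point
    sudo mount -o discard,defaults /dev/sdb /mnt/disks/data
    # optionally mount it at boot time as well
    echo '/dev/sdb /mnt/disks/data ext4 discard,defaults,nofail 0 2' | sudo tee -a /etc/fstab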

Shared storage mount on AWS Windows EC2?

I'm just asking about options for mounting shared storage in Windows Server 2016 or higher.
I've found a lot on AWS EFS, EBS, and S3.
My problem is installing a piece of software's data on a shared volume.
EBS is just for local mounting; can I map it to other servers?
EFS is not for Windows but is my favourite choice; are there solutions to mount it as a volume in Windows, or should I mount it on Linux first?
Is S3 mountable so it can be used like a file system?
What can I use, or what is the best solution for sharing stored data using cloud technologies?
EBS is just for local mounting; can I map it to other servers?
Indeed, you can mount EBS only to a single EC2 instance. However, you can expose the mounted EBS volume as an NFS share from that instance.
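For instance, assuming the EBS volume is mounted at /mnt/ebs and your clients live in 10.0.0.0/16 (both assumptions), the export would look roughly like:

    sudo yum install -y nfs-utils                       # Amazon Linux / CentOS
    echo '/mnt/ebs 10.0.0.0/16(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
    sudo exportfs -ra                                   # reload the export table
    sudo systemctl enable --now nfs-server              # start NFS now and on boot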
EFS is not for Windows but is my favourite choice; are there solutions to mount it as a volume in Windows, or should I mount it on Linux first?
I don't think that matters (if I'm mistaken, please correct me).
Edit: I stand corrected; judging by AWS EFS from Windows Server 2012, mounting EFS on Windows doesn't work properly. You could mount the EFS on a Linux server and expose it as NFS.
Is S3 mountable so it can be used like a file system?
S3 is not intended to be mounted as a filesystem, and it's not mountable by default. If needed, there are third-party tools to do that (e.g. s3fs-fuse on Linux), but IMHO it's not the most efficient approach.
There are also solutions from AWS (AWS Storage Gateway) to migrate on-premises content to S3 and back.
I'd consider mounting S3 as a filesystem only when:
the files are created or read, not updated (S3 doesn't support updating part of an object);
the objects (files) need to be shared with other services or over the internet.
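If you do go the s3fs-fuse route, a minimal sketch looks like this (the bucket name, mount point, and placeholder credentials are assumptions):

    # credentials stored as ACCESS_KEY_ID:SECRET_ACCESS_KEY (placeholders here)
    echo 'AKIA...:SECRET...' > ~/.passwd-s3fs
    chmod 600 ~/.passwd-s3fs
    mkdir -p ~/s3
    s3fs my-bucket ~/s3 -o passwd_file=~/.passwd-s3fs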

Deploy Spinnaker components to Pivotal Cloud Foundry

I want to deploy Spinnaker components to a private cloud (PCF), and I want to know whether the following procedure works: download spring-cloud-spinnaker-1.0.0.BUILD-SNAPSHOT.jar (mentioned at https://cloud.spring.io/spring-cloud-spinnaker), run it (on a Linux machine), and then deploy the Spinnaker components to the required space from localhost.
If this procedure works, what are the requirements for my system? Otherwise, please describe the right way to deploy.
Yes, Spring Cloud Spinnaker is the proper way to install Spinnaker components into a PCF setup.
Each Spinnaker module is installed with custom settings, some involving resources (for example, clouddriver needs 4 GB RAM + 2 GB disk space), and Spring Cloud Spinnaker applies them.
Spring Cloud Spinnaker itself needs 8 GB RAM + 4 GB disk to operate properly, as documented at https://github.com/spring-cloud/spring-cloud-spinnaker#running-spring-cloud-spinnaker. When run locally, that probably won't be a problem; should you install it into PCF itself, it becomes a critical setting.
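Running the installer locally is just a matter of launching a Spring Boot jar; a sketch (the port below is the Spring Boot default and an assumption here, so check the project README):

    # needs a JDK and roughly 8 GB of free RAM, per the requirements above
    java -jar spring-cloud-spinnaker-1.0.0.BUILD-SNAPSHOT.jar
    # then open the installer UI in a browser, e.g. http://localhost:8080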
If you run into issues with the installer, you can reach out for assistance at http://join.spinnaker.io/ on the #general channel.

Is it possible to deploy Spinnaker to an instance smaller than m4.xlarge on AWS?

We are currently following the default deployment instructions for Spinnaker which states using m4.xlarge as the instance type.
http://www.spinnaker.io/v1.0/docs/creating-a-spinnaker-instance#section-amazon-web-services
We made an unsuccessful attempt to deploy it to m4.large, but the services didn't start.
Has anyone tried something similar and succeeded?
It really depends on the size of your cloud.
There are four core services that you need: gate, deck, orca, and clouddriver. You can shut the others off if, say, you don't care about automated triggers, baking, or Jenkins integration.
I'm able to run this locally with the Docker images on about 8 GB of RAM, and it works. Using S3 instead of Cassandra also helps here.
You can play around with the settings in the baked Spinnaker image, but for internal demos and the like, I've been able to just spin up a VM, install Docker, and run the Docker Compose config successfully on m4.large.
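As a very rough illustration of that trimmed-down local setup (the image names, tags, and ports here are assumptions based on Spinnaker's public Docker images, and the inter-service configuration is omitted entirely):

    docker run -d --name redis redis:3                  # shared backing store
    docker run -d --name clouddriver quay.io/spinnaker/clouddriver
    docker run -d --name orca quay.io/spinnaker/orca
    docker run -d --name gate -p 8084:8084 quay.io/spinnaker/gate
    docker run -d --name deck -p 9000:9000 quay.io/spinnaker/deck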

Amazon EC2 Windows AMI with shared S3 storage

I've currently got a base Windows 2008 Server AMI that I created on Amazon EC2. I use it to create 20-30 EBS-based EC2 instances at a time for processing large amounts of data into PDFs for a client. However, once the data processing is complete, I have to manually connect to each machine and copy off the files. This takes a lot of time and effort, and so I'm trying to figure out the best way to use S3 as a centralised storage for the outputted PDF files.
I've seen a number of third party (commercial) utilities that can map S3 buckets to drives within Windows, but is there a better, more sensible way to achieve what I want? Having not used S3 before, only EC2, I'm not sure of what options are available, and I've not been able to find anything online addressing the issue of using S3 as centralised storage for multiple EC2 Windows instances.
Update: Thanks for the suggestions of command line tools for using S3. I was hoping for something a little more integrated and less ad hoc. Seeing as EC2 is closely related to S3 (S3 used to be the default storage mechanism for AMIs, etc.), I thought there might be something neater/easier I could do. Perhaps even around Private Cloud Networks and EC2-backed S3 servers, or something (an area I know nothing about). Any other ideas?
I'd probably look for a command line tool. A quick search on Google led me to a .NET tool:
http://s3.codeplex.com/
And a Java one:
http://www.beaconhill.com/opensource/s3cp.html
I'm sure there are others out there as well.
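For what it's worth, the official AWS CLI now handles this on Windows too; a minimal sketch (the bucket name and local path are assumptions):

    # copy all generated PDFs from the instance up to the shared bucket
    aws s3 cp C:\output s3://my-pdf-bucket/output --recursive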
You could use an EC2 instance with an EBS volume exported through Samba, which can act as the centralised storage that the Windows instances map.
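Along those lines, a minimal Samba share on the Linux instance might look like this (the share name, path, and service name are assumptions; distros differ):

    # append a share definition for the EBS-backed directory
    sudo tee -a /etc/samba/smb.conf <<'EOF'
    [pdfshare]
       path = /mnt/ebs-share
       read only = no
       browseable = yes
    EOF
    sudo systemctl restart smb                          # service name varies by distro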
This sounds very much like a Hadoop/Amazon MapReduce job to me. Unfortunately, Hadoop is best deployed on Linux:
Hadoop on Windows Server
I assume the software you use for PDF processing is Windows-only?
If that's not the case, I'd seriously consider porting your solution to Linux.