SSH has a useful opt-in feature (connection multiplexing via ControlMaster) that allows you to reuse a connection between invocations:
ssh host 'echo example' # this opens the connection to host, and leaves it open
ssh host 'echo example2' # this reuses the connection from the previous command
Is there something similar for the AWS S3 command-line interface? For example:
aws s3 mv s3://bucketname/example1 s3://bucketname/example2
aws s3 mv s3://bucketname/example3 s3://bucketname/example4
It would be great if the first command opened a connection and left it open for the second one to take advantage of. This would speed up the AWS S3 CLI tremendously when running a ton of small commands.
You cannot. You could always use boto3 (the Python SDK) directly, or in fact any of the SDKs for the other supported languages. That would allow you to persist a connection, but the CLI does not support that sort of thing.
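For what it's worth, here is a minimal boto3 sketch of what that could look like, reusing a single client (and therefore its connection pool) across many small operations. The bucket and key names are placeholders, and the copy-then-delete pair is assumed as the equivalent of aws s3 mv:

import boto3

s3 = boto3.client("s3")  # created once, reused for every call below

def s3_mv(bucket, src_key, dst_key):
    # S3 has no native "move", so copy to the new key and delete the old one
    s3.copy_object(Bucket=bucket, Key=dst_key,
                   CopySource={"Bucket": bucket, "Key": src_key})
    s3.delete_object(Bucket=bucket, Key=src_key)

s3_mv("bucketname", "example1", "example2")
s3_mv("bucketname", "example3", "example4")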
Related
I've created a Redis database on Redis Cloud with AWS. So far I've added 5 hashes (key-value pairs) to the database, but I can't seem to find a way to view those hashes. Can anyone tell me how to do that?
You should be able to connect using the redis-cli command-line tool with the host, port and password for your Redis instance, then use the command hgetall <keyname> to see the contents of the hash stored at <keyname>.
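For example (the host, port, password and key name are placeholders for your own values):

redis-cli -h <your-redis-host> -p <your-port> -a <your-password>   # opens an interactive prompt
hgetall <keyname>                                                  # run inside that prompt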
Alternatively, download a copy of the graphical RedisInsight tool and connect it to the host, port and password your Redis instance is running on.
Dask-jobqueue seems to be a very nice solution for distributing jobs to PBS/Slurm-managed clusters. However, if I'm understanding its use correctly, you must create an instance of PBSCluster/SLURMCluster on the head/login node. Then, on the same node, you can create a client instance and start submitting jobs to it.
What I'd like to do is let jobs originate on a remote machine, be sent over SSH to the cluster head node, and then get submitted to dask-jobqueue. I see that Dask has support for sending jobs over SSH to a distributed.deploy.ssh.SSHCluster, but this seems to be designed for immediate execution after SSH, as opposed to taking the further step of putting them in the job queue.
To summarize, I'd like a workflow where jobs go remote --ssh--> cluster-head --slurm/jobqueue--> cluster-node. Is this possible with existing tools?
I am currently looking into this. My idea is to set up an SSH tunnel with paramiko and then use Pyro5 to communicate with the cluster object from my local machine.
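A related pattern, in case it helps, is to skip the custom RPC layer and connect Dask's own client through an SSH tunnel to the scheduler that the SLURMCluster starts on the head node. A rough sketch, where the port, resources and hostname are placeholders and scheduler_options is assumed to be supported by your dask-jobqueue version:

# --- on the cluster head node ---
from dask_jobqueue import SLURMCluster

cluster = SLURMCluster(
    cores=8,
    memory="16GB",
    scheduler_options={"port": 8786},  # fixed port so it can be tunnelled
)
cluster.scale(jobs=2)  # submits two worker jobs through the Slurm queue

# --- on the remote machine ---
# first forward the scheduler port: ssh -N -L 8786:localhost:8786 user@cluster-head
from dask.distributed import Client

client = Client("tcp://localhost:8786")  # reaches the head-node scheduler via the tunnel
print(client.submit(lambda x: x + 1, 41).result())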
I have an application written in Python that runs on a VPS server. It is a small application that writes to, reads from, and serves read requests for an SQLite database, through a TCP socket.
The downside is that the application only runs while the console (the SSH session) is open; closing the console, that is, the SSH session, also closes the application.
How should this be implemented, or do I even need to implement something myself? The server is an Ubuntu server.
nohup should help in your case:
in your SSH session, launch your Python app prefixed with nohup, as recommended here
exit your SSH session
The program should continue working even if its parent shell (the SSH session) is terminated.
There are (at least) two solutions:
1- The 'nohup' command; use it as follows: nohup python3 yourappname.py &
This will run your program in the background so it won't be killed when you terminate the SSH session. It'll also give you back a free prompt after running this command so you can continue your work.
2- Another GREAT option is the 'screen' command.
This gives you everything that nohup gives you, and in addition it allows you to check the output of your program (if any) on later logins. It may look a little complicated at first sight, but it's SUPER COOL! I highly recommend you learn it and enjoy it for the rest of your life!
A good explanation of it is available here
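For reference, a minimal screen workflow could look like this (the session and script names are just placeholders):

screen -S myapp            # start a new named session
python3 yourappname.py     # run your app inside it, then detach with Ctrl-A d
screen -r myapp            # on a later login, reattach to see its output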
I have:
a Jenkins server
two AWS ElastiCache Redis instances
Occasionally the developer team needs to issue FLUSHALL, and they'd like to do so from Jenkins so they don't need to hunt down a system administrator or fool around in a shell.
Optimally, I'd use the AWS CLI, but I don't see anything in the AWS CLI toolset to do this.
Is there a way to execute FLUSHALL from a shell script?
Thanks
You might want to look at using AWS Lambda for this in your AWS environment. This can be a starting point: https://docs.aws.amazon.com/lambda/latest/dg/vpc-ec.html
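As a rough sketch of that approach: a small Lambda function deployed into the same VPC, subnets and security group as the ElastiCache cluster can issue the FLUSHALL. This assumes the redis Python package is bundled with the function and that REDIS_HOST is an environment variable you set to the cluster endpoint:

import os
import redis

def handler(event, context):
    # Connect to the ElastiCache endpoint reachable from inside the VPC and flush it
    r = redis.Redis(host=os.environ["REDIS_HOST"], port=6379)
    r.flushall()
    return {"status": "flushed"}

Jenkins could then trigger it with something like aws lambda invoke --function-name <your-function-name> out.json, which avoids exposing the Redis instances outside the VPC.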
I want to make sure I'm not storing sensitive keys and credentials in source control or in Docker images. Specifically, I'd like to store my MySQL RDS application credentials and copy them in when the container/task starts. The documentation provides an example of retrieving the ecs.config file from S3, and I'd like to do something similar.
I'm using the Amazon ECS-optimized AMI with an Auto Scaling group that registers with my ECS cluster. I'm using the ghost Docker image without any customization. Is there a way to configure what I'm trying to do?
You can define a volume on the host and map it to the container with read-only privileges.
Please refer to the following documentation for configuring data volumes for an ECS task:
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_data_volumes.html
Even though the container does not have the config at build time, it will read the config as if it were available in its own file system.
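For example, a minimal task-definition fragment along these lines might look like the following, assuming the credentials were placed at /etc/ghost-config on the instance (the volume name, paths and memory value are placeholders):

{
  "family": "ghost",
  "volumes": [
    { "name": "app-config", "host": { "sourcePath": "/etc/ghost-config" } }
  ],
  "containerDefinitions": [
    {
      "name": "ghost",
      "image": "ghost",
      "memory": 512,
      "mountPoints": [
        {
          "sourceVolume": "app-config",
          "containerPath": "/etc/ghost-config",
          "readOnly": true
        }
      ]
    }
  ]
}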
There are many ways to secure the config on the host OS.
In my past projects, I have achieved the same by disabling SSH into the host and injecting the config at boot-up using cloud-init.