Rclone Compression | zip | rar and data transfer [closed]

I have been using rclone to back up Google Drive data to AWS S3 cloud storage. I have multiple Google Drive accounts that are backed up to S3, each containing a different number of documents.
I want to compress each account's documents into a single zip file and then copy that archive to S3.
Is there any way to achieve this?
I referred to the link below, but it doesn't give complete steps for accomplishing the task.
https://rclone.org/compress/
Any suggestion would be appreciated.

Rclone cannot create a zip or rar archive for you; its compress remote only gzips files individually. Instead, you can use a small script to zip or rar the files and then use rclone to copy the archive to AWS S3. A rough outline of that approach is sketched below.
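A minimal sketch, assuming an rclone remote for the Drive account (called gdrive: here), an S3 remote (called s3: here), and placeholder bucket and paths; adjust all of these names to your own setup:

#!/usr/bin/env bash
set -euo pipefail

# Placeholder remotes and paths -- adjust to your own rclone configuration.
SRC_REMOTE="gdrive:"                            # Google Drive remote configured in rclone
LOCAL_DIR="$HOME/gdrive-backup"                 # local staging directory
ARCHIVE="/tmp/gdrive-backup-$(date +%F).zip"    # single archive to upload
DEST_REMOTE="s3:my-backup-bucket/archives"      # S3 remote, bucket and prefix

# 1. Pull the Drive data down to local disk.
rclone copy "$SRC_REMOTE" "$LOCAL_DIR"

# 2. Compress everything into one zip archive.
(cd "$LOCAL_DIR" && zip -qr "$ARCHIVE" .)

# 3. Copy the archive to S3.
rclone copy "$ARCHIVE" "$DEST_REMOTE"

Repeat with a different source remote and archive name for each Google Drive account; rar works the same way if you prefer it over zip.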

Related

Cloud bucket virus security [closed]

I have implemented an antivirus system using ClamAV in one of my apps, which uses Google Cloud Storage for uploading files.
Currently I listen for bucket uploads, download each new object to one of my servers, scan it with ClamAV, and delete it if it is infected.
I am a newbie to this. Is it possible for the whole cloud bucket to get infected by a virus just from an upload?
That is, can a virus execute itself on the bucket (any cloud bucket) itself?
If yes, please suggest some other solution, as my current one would be ineffective in that case.
Object storage systems do not provide an execution framework, so an infected file cannot infect other files in the bucket.
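The download-scan-delete workflow described in the question remains a reasonable way to keep served files clean. As a minimal sketch, assuming gsutil and clamscan are installed on the scanning server, with a placeholder bucket name and an object name passed in by your upload listener:

#!/usr/bin/env bash
set -euo pipefail

# Placeholder names -- adjust to your own setup.
BUCKET="gs://my-upload-bucket"
OBJECT="$1"                                   # object name supplied by the upload listener
WORKDIR="$(mktemp -d)"
LOCAL="$WORKDIR/$(basename "$OBJECT")"

# Download the newly uploaded object to a scratch directory.
gsutil cp "$BUCKET/$OBJECT" "$LOCAL"

# Scan it with ClamAV; clamscan exits with status 1 when it finds an infection.
clamscan --no-summary "$LOCAL" && status=0 || status=$?
if [ "$status" -eq 1 ]; then
    echo "Infected: $OBJECT -- removing it from the bucket"
    gsutil rm "$BUCKET/$OBJECT"
fi

rm -rf "$WORKDIR"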

Can I open an Amazon S3 account in my country? [closed]

I have a question about Amazon S3 accounts: is an Amazon S3 account open to every country? I mean, can everyone open an account, like with Google Drive?
The AWS regional services list breaks down the services available in each region. It may help you out!
In order to use Amazon S3, you will require an Amazon Web Services account.
Yes, these accounts are available to people in any country.
You will be asked to supply a credit card number.
Once you have your account, you can start using AWS.

Multi stream SCP to transfer large amount of small files from EC2 [closed]

I am using scp to download millions of small files (100–1000 KB) from my EC2 instances. scp seems to transfer one file at a time and does not fully utilize my 1 Gbps connection.
Is there a more efficient way to download the files? For various technical reasons, archiving and then downloading is not an option.
Take a look at rsync. It can also work through ssh.
If you are still able to use tar, but not able to create a tarball on the remote host, you can try something like:
ssh ec2instance "tar cf - /path/to/source" | tar xf - -C /path/to/destination
You can add the v option to tar, or insert the pipe viewer (pv) into the pipeline, to get feedback on the transfer.
If the above is not an option either, try running several (a dozen or so) scp processes in parallel to reduce the per-file overhead of transferring many small files; a sketch follows below.
(Also make sure that the filesystem is not the bottleneck.)
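As a sketch of both suggestions, assuming key-based ssh access to a host named ec2instance and placeholder paths; note that each parallel scp opens its own connection and drops files flat into one local directory, so treat this as a starting point rather than a finished tool:

#!/usr/bin/env bash
set -euo pipefail

# Placeholder host and paths -- adjust to your own setup.
HOST="ec2instance"
REMOTE_DIR="/path/to/source"
LOCAL_DIR="/path/to/destination"

# Option 1: tar over ssh with progress feedback via pv (pipe viewer).
ssh "$HOST" "tar cf - $REMOTE_DIR" | pv | tar xf - -C "$LOCAL_DIR"

# Option 2: list the remote files and run a dozen scp processes in parallel.
ssh "$HOST" "find $REMOTE_DIR -type f" |
    xargs -P 12 -I{} scp -q "$HOST:{}" "$LOCAL_DIR/"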

Using amazon s3 with cloudfront as a CDN [closed]

I would like to serve user-uploaded content (pictures, videos, and other files) from a CDN. Using Amazon S3 with CloudFront seems like a reasonable way to go. My only question is about the speed of the file system. My plan was to host user media at a URI like cdn.mycompany.com/u/u/i/d/uuid.jpg.
I don't have any prior experience with S3 or CDNs, and I was wondering whether this strategy would scale well to handle a large amount of user-uploaded content, and whether there might be a more conventional way to accomplish this.
You will never have problems dealing with scale on CloudFront. It's an enterprise-grade beast.
Disclaimer: Not if you're Google.
It is an excellent choice. Especially for streaming video and audio, CloudFront is priceless.
My customers use my plugin to display private streaming video and audio; one of them even has 8,000 videos in a single bucket without problems.
My question stemmed from a misunderstanding of S3 buckets as a conventional file system. I was concerned that putting too many files in the same directory would create overhead in finding a file. However, it turns out that S3 buckets are implemented more like a hash map, so this overhead doesn't actually exist. See here for details: Max files per directory in S3

go through files and OCR pdf [closed]

Is there a free way to go through a bunch of image-only PDF files and folders (in different locations) and OCR them?
I would be really interested; please suggest something.
Try VietOCR, which monitors a watch folder for new input images. The program requires Ghostscript to recognize the PDF format.
I recommend OCRvision OCR PDF software. It has an OCR folder watch feature: you can configure any folder as a monitored folder, and the software will auto-OCR the PDF files there, converting any new scanned documents into searchable PDFs. You can download the software from the website.
PS: I work for OCRvision.