Can we configure MarkLogic database backup on an S3 bucket? - amazon-s3

I need to configure MarkLogic full/incremental backups to an S3 bucket. Is this possible? Can anyone share the documentation/steps to configure it?
Thanks!

Yes, you can back up to S3.
You will need to configure the S3 credentials so that MarkLogic is able to use S3 and read/write objects in your S3 bucket.
MarkLogic can't use S3 for journal archive paths, because S3 does not support file append operations. So if you want to enable journal archives, you will need to specify a custom path for that when creating your backups.
Backing Up a Database
The directory you specified can be an operating system mounted directory path, it can be an HDFS path, or it can be an S3 path. For details on using HDFS and S3 storage in MarkLogic, see Disk Storage Considerations in the Query Performance and Tuning Guide.
S3 Storage
S3 requires authentication with the following S3 credentials:
AWS Access Key
AWS Secret Key
The S3 credentials for a MarkLogic cluster are stored in the security database for the cluster. You can only have one set of S3 credentials per cluster. Once you have set up security access in S3, you can access any paths that those credentials are allowed to access. Because of the flexibility of how access can be set up in S3, any S3 account can grant access to any other account. So if you want the credentials you have set up in MarkLogic to access S3 paths owned by other S3 users, those users need to grant access to those paths to the AWS Access Key set up in your MarkLogic cluster.
To set up the AWS credentials for a cluster, enter the keys in the Admin Interface under Security > Credentials. You can also set up the keys programmatically using the following Security API functions (a short sketch follows the list):
sec:credentials-get-aws
sec:credentials-set-aws
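For example, a minimal XQuery sketch of setting the keys programmatically. The key values are placeholders, and you should verify the exact sec:credentials-set-aws signature against the docs for your MarkLogic version (some versions also accept a session token); run it as a user with security privileges, against the Security database.

xquery version "1.0-ml";
(: The Security library module that provides the sec:credentials-* functions :)
import module namespace sec = "http://marklogic.com/xdmp/security"
  at "/MarkLogic/security.xqy";

(: Placeholder keys -- use the Access Key and Secret Key of an IAM user
   that can read and write your backup bucket :)
sec:credentials-set-aws(
  "AKIAEXAMPLEACCESSKEY",
  "exampleSecretKey/ChangeMe"
)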
The credentials are stored in the Security database. Therefore, you cannot use S3 as the forest storage for a security database.
If you want to have journal archiving enabled, you will need to have the archives written to a different location. Journal archiving is not supported on S3.
The default location for journal archives is inside the backup directory, but when creating backups programmatically you can specify a different $journal-archive-path, as in the sketch below.
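As an illustration, a hedged XQuery sketch of starting a full backup to S3 while sending the journal archives to a locally mounted path. The bucket name and paths are placeholders, and you should verify the xdmp:database-backup argument order against the documentation for your MarkLogic version.

xquery version "1.0-ml";

(: Back up all forests of the Documents database to an S3 path, with journal
   archiving enabled and the archives written to a non-S3 path, since
   journals cannot live on S3. :)
let $forests := xdmp:database-forests(xdmp:database("Documents"))
let $backup-dir := "s3://my-backup-bucket/marklogic/backups/"  (: placeholder bucket :)
let $archive-dir := "/space/journal-archives/"                 (: must not be on S3 :)
return
  xdmp:database-backup(
    $forests,
    $backup-dir,
    fn:true(),      (: enable journal archiving :)
    $archive-dir
  )

The call returns a backup job ID, which you can pass to a status function such as xdmp:database-backup-status to monitor progress.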
S3 and MarkLogic
Storage on S3 has an 'eventual consistency' property, meaning that write operations might not be available immediately for reading, but they will be available at some point. Because of this, S3 data directories in MarkLogic have a restriction that MarkLogic does not create Journals on S3. Therefore, MarkLogic recommends that you use S3 only for backups and for read-only forests, otherwise you risk the possibility of data loss. If your forests are read-only, then there is no need to have journals.

Related

Automated RDS backups and details of their storage in S3

I am responsible for some RDS instances (MariaDB), all running in the same region (with no multi-AZ setup) and configured to use the automated backups provided by AWS. According to the AWS documentation, the automated backups are stored in S3, but it gives no detail on the storage class.
Business wants assurances that the backups are stored in multiple AZs (not all storage classes offer this!). I've tried to find out more about RDS backup storage through AWS support, but they've been rather unhelpful, claiming these are "internal details" and they can't tell me anything more than I read in the documentation.
So: is it known whether RDS automated backups are stored in multiple AZs, or should I just use AWS Backup with Cross-Region Backups?

How to set up AWS S3 bucket as persistent volume in on-premise k8s cluster

Since NFS has a single-point-of-failure issue, I am thinking of building a storage layer using S3 or Google Cloud Storage as a PersistentVolume in my local k8s cluster.
After a lot of Google searching, I still cannot find a way. I have tried using s3fs/FUSE to mount the bucket locally and then creating a PV by specifying a hostPath. However, a lot of my pods (for example Airflow, Jenkins) complained about having no write permission, or reported errors like "version being changed".
Could someone help figure out the right way to mount an S3 or GCS bucket as a PersistentVolume in a local cluster without using AWS or GCP?
S3 is not a file system and is not intended to be used in this way.
I do not recommend using S3 this way, because in my experience FUSE drivers are very unstable, and under I/O load you will easily wreck your mounted disk and end up in a "Transport endpoint is not connected" nightmare for you and your infrastructure users. It may also lead to high CPU usage and memory leaks.
Useful crosslinks:
How to mount S3 bucket on Kubernetes container/pods?
Amazon S3 with s3fs and fuse, transport endpoint is not connected
How stable is s3fs to mount an Amazon S3 bucket as a local directory

Apache Atlas and AWS S3

I am working on a project that has a requirement to store scientific data on AWS S3 as raw data for the beginning of a data lake. We are planning to use JSON for application data and S3 metadata to persist application metadata (JSON schema) and process metadata. At the moment, S3 is the only AWS cloud service that we have available to us on site.
The client would like a publish environment where they can get the raw data back as files. We would like to avoid building a custom catalog and security infrastructure.
I don't see anything about Apache Atlas that will connect directly to AWS S3. We could put Apache Hive on top of AWS S3 and then put Apache Atlas and Ranger on top of that, but I am not sure whether that lets us publish the raw data from S3, or whether it even works, as Hive is more of a processing environment.
Is it possible to use Apache Atlas and Ranger on top of AWS S3 directly?

ML backup and Amazon S3. How?

I would like to put the MarkLogic backup directory in the cloud. Are there any limitations on pushing full and incremental backups, with journal archiving enabled, to S3-compatible object storage? Is journal archiving supported on S3-compatible cloud storage? What would happen if I pointed a backup with journal archiving enabled at S3 storage? Will it eventually work, or will I get errors?
Also, please share documentation links for configuring MarkLogic to point to cloud storage.
You can back up to S3, but if you want to have journal archiving enabled, you will need to have the archives written to a different location. Journal archiving is not supported on S3.
The default location for journal archives is inside the backup directory, but when creating backups programmatically you can specify a different $journal-archive-path, as in the sketch below.
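To cover the incremental case from the question as well, here is a similarly hedged XQuery sketch. The bucket and paths are placeholders, and you should confirm that xdmp:database-incremental-backup and its argument order match the documentation for your MarkLogic version.

xquery version "1.0-ml";

(: Incremental backup of the Documents database to S3. A full backup must
   already exist under the same backup directory. Journal archives again
   go to a non-S3 path. :)
let $forests := xdmp:database-forests(xdmp:database("Documents"))
let $backup-dir := "s3://my-backup-bucket/marklogic/backups/"  (: placeholder bucket :)
let $archive-dir := "/space/journal-archives/"                 (: must not be on S3 :)
return
  xdmp:database-incremental-backup(
    $forests,
    $backup-dir,
    fn:true(),      (: enable journal archiving :)
    $archive-dir
  )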
Backing Up a Database
The directory you specified can be an operating system mounted directory path, it can be an HDFS path, or it can be an S3 path. For details on using HDFS and S3 storage in MarkLogic, see Disk Storage Considerations in the Query Performance and Tuning Guide.
S3 and MarkLogic
Storage on S3 has an 'eventual consistency' property, meaning that write operations might not be available immediately for reading, but they will be available at some point. Because of this, S3 data directories in MarkLogic have a restriction that MarkLogic does not create Journals on S3. Therefore, MarkLogic recommends that you use S3 only for backups and for read-only forests, otherwise you risk the possibility of data loss. If your forests are read-only, then there is no need to have journals.

Hadoop upload files from local machine to amazon s3

I am working on a Java MapReduce app that has to provide an upload service for pictures from the user's local machine to an S3 bucket.
The thing is, the app must run on an EC2 cluster, so I am not sure how I can refer to the local machine when copying the files. The method copyFromLocalFile(..) needs a path on the local machine, which in this case will be the EC2 cluster...
I'm not sure if I stated the problem correctly; can anyone understand what I mean?
Thanks
You might also investigate s3distcp: http://docs.amazonwebservices.com/ElasticMapReduce/latest/DeveloperGuide/UsingEMR_s3distcp.html
Apache DistCp is an open-source tool you can use to copy large amounts of data. DistCp uses MapReduce to copy in a distributed manner—sharing the copy, error handling, recovery, and reporting tasks across several servers. S3DistCp is an extension of DistCp that is optimized to work with Amazon Web Services, particularly Amazon Simple Storage Service (Amazon S3). Using S3DistCp, you can efficiently copy large amounts of data from Amazon S3 into HDFS where it can be processed by your Amazon Elastic MapReduce (Amazon EMR) job flow. You can also use S3DistCp to copy data between Amazon S3 buckets or from HDFS to Amazon S3.
You will need to get the files from the user's machine to at least one node before you will be able to use them in a MapReduce job.
The FileSystem and FileUtil functions refer to paths either on HDFS or on the local disk of one of the nodes in the cluster.
They cannot reference the user's local system. (Maybe if you did some SSH setup... maybe?)