I am trying to create a pipeline to export Qlik Sense app data to an AWS S3 bucket, but I'm not sure if I can do it directly.
Two options I tried:
1. Use the export API to export the data as a QVF to local disk, then connect to S3 via a Python script and push the data.
2. Store the data as CSV using a Qlik Sense load script locally, then push it to S3.
Basically, my ideal solution would be to use a single Python script to connect to Qlik Sense, read the data, convert it to CSV, and export it to S3.
Any ideas/code/approach would be helpful.
Qlik Sense is, in general, a tool to consume and display data, but when there is a need to get data out,
STORE MyTable INTO $(vFile) (txt);
is an easy option (here MyTable is the table you loaded and vFile is a variable holding the target file path). This way you will get a CSV file saved to disk. If you are running the Python script on the same machine it is easy, as you can use an EXECUTE statement to run a Python script that pushes the file to S3 using the boto3 library.
Otherwise, just use a network drive instead, and then have a cron job push the data from there to S3 via Python.
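Either way, the Python side of the push is just a boto3 upload. A minimal sketch, where the CSV path and bucket name are placeholders for your own:

# upload_csv.py - minimal sketch; the file path and bucket name are placeholders
import boto3

s3 = boto3.client("s3")  # credentials come from the environment or instance profile

# Upload the CSV that the Qlik STORE statement wrote to disk
s3.upload_file("C:/exports/mytable.csv", "my-target-bucket", "qlik/mytable.csv")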
The last option is to map the S3 bucket as a drive on the Qlik Sense server. There are a few tools for that, like TntDrive.
So much depends on your infrastructure.
I'm trying to set up a connection between a Google Drive folder and an S3 bucket, but I'm not sure where to start.
I've already created a sort of "Frankenstein process", but it's only easy for me to use, and sharing it with my co-workers is a pain.
I have a script that generates a plain text file and saves it into a Drive folder. To upload, I've installed Drive File Stream to sync it to my Mac, and then all I did was create a Python 3 script, using the boto3 library, that uploads the text file into different S3 buckets depending on the file name.
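For reference, the upload step described above looks roughly like this; the bucket names, paths and filename convention are made up for illustration:

# route_upload.py - sketch of the existing upload step; bucket mapping and paths are hypothetical
import os
import boto3

s3 = boto3.client("s3")

# Pick a target bucket based on the file name prefix
BUCKET_BY_PREFIX = {
    "sales_": "sales-bucket",
    "ops_": "ops-bucket",
}

def upload(path):
    name = os.path.basename(path)
    for prefix, bucket in BUCKET_BY_PREFIX.items():
        if name.startswith(prefix):
            s3.upload_file(path, bucket, name)
            return
    raise ValueError(f"No bucket configured for {name}")

upload("/Volumes/GoogleDrive/My Drive/exports/sales_report.txt")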
I was thinking that I could create a Lambda to process the file into the S3 buckets, but I can't work out how to create the connection between Drive and S3. I would appreciate it if someone could give me some advice on how to start with this.
Thanks
If you simply want to connect Google Drive and AWS S3, there is a service named Zapier which provides different types of integrations without writing a line of code:
https://zapier.com/apps/amazon-s3/integrations/google-drive
Check that link for more details.
I have seen docs on how to import data, but how do I do backups?
Is there tooling, is there an API, or is it safe to copy the files on the file system?
Exporting data is done via COPY TO. Exports can go to the local filesystem or to Amazon S3.
Every node containing data will export in parallel. You can export to network-attached storage or S3, or simply copy the exported files to another host using scp.
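To script this, the export statement can also be issued from Python. A rough sketch, assuming CrateDB (which the parallel per-node COPY TO behaviour suggests) and its crate client library; the table name, credentials and bucket are placeholders:

# export_to_s3.py - sketch only; table name, credentials and bucket are placeholders
from crate import client

conn = client.connect("http://localhost:4200")
cursor = conn.cursor()

# Each node holding data for my_table writes its part of the export in parallel
cursor.execute(
    "COPY my_table TO DIRECTORY 's3://ACCESS_KEY:SECRET_KEY@my-backup-bucket/my_table/'"
)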
I need to access some data during the map stage. It is a static file, from which I need to read some data.
I have uploaded the data file to S3.
How can I access that data while running my job in EMR?
If I just specify the file path as:
s3n://<bucket-name>/path
in the code, will that work?
Thanks
The s3n:// URL scheme is for Hadoop to read S3 files. If you want to read the S3 file in your map program, you either need to use a library that handles the s3:// URL format, such as JetS3t (https://jets3t.s3.amazonaws.com/toolkit/toolkit.html), or access the S3 objects via HTTP.
A quick search for an example program brought up this link.
https://gist.github.com/lucastex/917988
You can also access the S3 object through HTTP or HTTPS. This may require making the object public or configuring additional security. Then you can access it using the HTTP URL classes supported natively by Java.
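The equivalent in Python, just to show the idea, assuming the object has been made publicly readable (bucket and key are made up):

# fetch_public_object.py - sketch; assumes the object is publicly readable
import requests

url = "https://my-bucket.s3.amazonaws.com/path/file.txt"
response = requests.get(url)
response.raise_for_status()
data = response.text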
Another good option is to use S3DistCp as a bootstrap step to copy the S3 file to HDFS before your map step starts: http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/UsingEMR_s3distcp.html
What I ended up doing:
1) Wrote a small script that copies my file from S3 to the cluster:
hadoop fs -copyToLocal s3n://$SOURCE_S3_BUCKET/path/file.txt $DESTINATION_DIR_ON_HOST
2) Created a bootstrap step for my EMR job that runs the script from 1).
This approach doesn't require making the S3 data public.
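If you would rather keep that copy step in Python, a boto3-based equivalent of the small script is shown below; the bucket, key and destination path are placeholders:

# fetch_input.py - boto3 equivalent of the hadoop fs -copyToLocal step; names are placeholders
import boto3

s3 = boto3.client("s3")
s3.download_file("SOURCE_S3_BUCKET", "path/file.txt", "/home/hadoop/file.txt")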
I have a python web application deployed on Google App Engine.
I need to grab a log file stored on Amazon S3 and load it into Google Cloud Storage. Once it is in Google Cloud Storage I may need to perform some transformations and eventually import the data into BigQuery for analysis.
I tried using gsutil as some sort of proof of concept, since boto is under the hood of gsutil and I'd like to use boto in my project. This did not work.
I'd like to know if anyone has managed to transfer files directly between the two clouds. If possible, I'd like to see a simple example. In the end, this task has to be accomplished through code executing on GAE.
Per this thread, you can stream data from S3 to Google Cloud Storage using gsutil, but every byte still has to take two hops: from S3 to your local computer, and then from your computer to GCS. Since you're using App Engine, however, you should be able to pull from S3 and deposit into GCS directly. It's the same progression as above, except App Engine is the intermediary, i.e. every byte travels from S3 to your app and then on to GCS. You could use boto for the pull side and the Google Cloud Storage API for the push side.
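A minimal sketch of that pull-then-push flow, using boto3 and the google-cloud-storage client library as stand-ins for the boto / GCS API mentioned above; the bucket and object names are placeholders, and on App Engine you still need to respect its request and memory limits (so large files should be streamed or chunked rather than read in one go):

# s3_to_gcs.py - sketch of the S3 -> app -> GCS copy; bucket and object names are placeholders
import boto3
from google.cloud import storage

s3 = boto3.client("s3")
gcs = storage.Client()

# Pull the log file from S3 into memory (fine for small files; stream for large ones)
obj = s3.get_object(Bucket="my-s3-bucket", Key="logs/app.log")
body = obj["Body"].read()

# Push it into Google Cloud Storage
gcs.bucket("my-gcs-bucket").blob("logs/app.log").upload_from_string(body)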
Google allows you to import entire buckets from S3 to the storage service:
https://cloud.google.com/storage/transfer/getting-started
You can set file filters on the source bucket to import only the files you want, or a "directory" (i.e. anything with a certain prefix).
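If you want to drive it from code rather than the console, a rough sketch with the google-cloud-storage-transfer client might look like the following; the project, buckets, credentials and prefix are placeholders, and the field names mirror the Storage Transfer Service REST API, so check them against the current docs:

# create_transfer.py - rough sketch; project, buckets, credentials and prefix are placeholders
from google.cloud import storage_transfer

client = storage_transfer.StorageTransferServiceClient()

client.create_transfer_job({
    "transfer_job": {
        "project_id": "my-gcp-project",
        "status": storage_transfer.TransferJob.Status.ENABLED,
        "transfer_spec": {
            "aws_s3_data_source": {
                "bucket_name": "my-s3-bucket",
                "aws_access_key": {
                    "access_key_id": "AWS_ACCESS_KEY_ID",
                    "secret_access_key": "AWS_SECRET_ACCESS_KEY",
                },
            },
            "gcs_data_sink": {"bucket_name": "my-gcs-bucket"},
            # Only copy objects under this "directory" (prefix)
            "object_conditions": {"include_prefixes": ["exports/2020/"]},
        },
    },
})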
I'm not aware of any cloud provider that provides an API for transferring data to a competing cloud provider. Cloud providers have no incentive to help you move your data to the competition. You will almost certainly have to read the data to an intermediate machine then write it to Google.
GCP supports not only transfers from S3, but also any storage that exposes an S3-compatible API:
https://cloud.google.com/storage-transfer/docs/create-transfers
https://cloud.google.com/storage-transfer/docs/s3-compatible
Is there a way to transfer a file from the web directly to my Amazon S3 account?
For example, I want to transfer a large RDF file from www.data.gov directly to Amazon S3 without having to download the file to my local machine first.
You need a server somewhere that will execute the curl command. The easiest way is probably to use a tool that I wrote for AWS EC2: https://github.com/mjhm/cURLServer. You can check out the docs on a live version at http://ec2-204-236-157-181.us-west-1.compute.amazonaws.com/
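If you would rather not run a separate tool, the same thing can be done with a short Python script on any EC2 instance (or any other machine with good network access), streaming the download straight into S3 so the file never touches your own machine; the URL and bucket are placeholders:

# web_to_s3.py - stream a remote file into S3 without saving it locally; URL and bucket are placeholders
import boto3
import requests

url = "https://www.data.gov/path/to/large-file.rdf"
s3 = boto3.client("s3")

with requests.get(url, stream=True) as response:
    response.raise_for_status()
    # upload_fileobj reads the stream in chunks (multipart upload for large files)
    s3.upload_fileobj(response.raw, "my-bucket", "large-file.rdf")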