MIME type for VCF file to open using the Google Drive API - google-drive-android-api

What should be the MIME type for VCF files when using the Google Drive API?

Without knowing the specifics, here's a trick/hack I would use:
Go to drive.google.com and create or upload one.
Use the API explorer ("Try it!" at the bottom of the reference page) to fetch the file in question and look at its mimeType field.
Good luck!

Try getting the MIME type from the file extension; that worked for me.
Here is the code:
final MimeTypeMap mime = MimeTypeMap.getSingleton();
// Look up the type registered for the "vcf" extension
// (typically "text/x-vcard" on Android devices).
String tmptype = mime.getMimeTypeFromExtension("vcf");
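If it helps, here is a rough sketch of how that type could then be used when creating the file through the (since deprecated) Google Drive Android API; folder, googleApiClient and driveContents are placeholders assumed to exist elsewhere in the app:
String mimeType = MimeTypeMap.getSingleton().getMimeTypeFromExtension("vcf");
if (mimeType == null) {
    // Fall back to the common vCard type if the device has no mapping
    mimeType = "text/x-vcard";
}
MetadataChangeSet changeSet = new MetadataChangeSet.Builder()
        .setTitle("contacts.vcf")
        .setMimeType(mimeType)
        .build();
// folder is a DriveFolder and driveContents a DriveContents obtained earlier
folder.createFile(googleApiClient, changeSet, driveContents);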

Related

Google Drive - use WebViewLink vs thumbnailLink

I'm using the Google Drive API where I can gain access to 2 pieces of data that I need to display a jpg file in my program. WebViewLink is the "large" size image while thumbnailLink is the "thumb" smaller size of the same image.
I'm having an issue with downloading the WebViewLink that I do not have with the thumbnailLink. Part of my code calls either exif_imagetype($filename) or getimagesize($filename) so I can retrieve the type, height & width etc. for the $filename. This is successful for the thumbnailLink but not the WebViewLink...
code snippet...
$WebViewLink = "https://drive.google.com/a/treering.com/file/d/blablabla";
$type = exif_imagetype($WebViewLink);
--- results in the error
"PHP Warning: exif_imagetype(): stream does not support seeking..."
whereas...
$thumbnailLink = "https://lh6.googleusercontent.com/blablabla";
$type = exif_imagetype($thumbnailLink);
--- successful
where $type = 2 // a .jpg file
Not sure what I need to do to gain a usable WebViewLink... maybe use the "export" function to copy to a file on my server that is accessible, then use that exported file for the functions that fail above?
Thanks for any help.
John
I think you are using the wrong property to get the image of the file.
WebViewLink
A link for opening the file in a relevant Google editor or viewer in a browser.
thumbnailLink
A short-lived link to the file's thumbnail, if available. Typically lasts on the order of hours.
You can try using the iconLink property:
A static, unauthenticated link to the file's icon.
Sample image of thumbnailLink:
Sample image of an iconLink:
It will still show an image relevant to the file.
Hope it helps!
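As a rough workaround sketch for the original download problem (the link and file names are placeholders): copy the image bytes to a local temporary file first and run the image functions on that local path, which avoids the "stream does not support seeking" warning. Note this only works with a link that actually serves image bytes, such as a thumbnailLink; a webViewLink points at an HTML viewer page, not at the image itself.
$thumbnailLink = "https://lh6.googleusercontent.com/blablabla";
// Download to a seekable local file before inspecting it
$tmpFile = tempnam(sys_get_temp_dir(), 'gdrive_img_');
file_put_contents($tmpFile, file_get_contents($thumbnailLink));
$type = exif_imagetype($tmpFile);            // e.g. 2 for JPEG
list($width, $height) = getimagesize($tmpFile);
unlink($tmpFile);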

Adwords api report without download

I'm working with the AdWords API; I can already download reports such as all keywords with impressions, clicks, CTR, conversions, etc.
The problem is, I need to show this report in our web tool when the user sets a date range.
Right now I do this: the user selects Start Date 01/09/2014 and End Date 15/09/2014, I call the AdWords API, download the CSV, parse it, and then show the results on the screen. But this approach is not optimal, and I would like to know how to call the API and get the results "at the moment", as XML or JSON, without downloading a file.
Is it possible?
The only way I found was calling the CampaignService class, getting all campaigns, then for each campaign calling AdGroupService to get all ad groups, then keywords... It's really impractical.
How can I do it?
Thank you very much.
It appears that recent versions of the library added a getAsString() method, so we can achieve this:
$reportDownloader = new ReportDownloader($session);
$reportDownloadResult = $reportDownloader->downloadReport($reportDefinition);
//Normal way of downloading to file
//$reportDownloadResult->saveToFile($filePath);
//printf("Report with name '%s' was downloaded to '%s'.\n",
// $reportDefinition->getReportName(), $filePath);
//New way by calling getAsString();
$reportAsString = $reportDownloadResult->getAsString();
echo $reportAsString;
Old suggestions about passing null as the $filePath no longer work.
Yes, you can get XML without downloading a file. Just set $path to null when calling ReportUtils::DownloadReport() and it returns the response without saving it to a file.
The AdWords API library (as of v201506) allows you to set the download_format to one of CSVFOREXCEL, CSV, TSV, XML, GZIPPED_CSV, or GZIPPED_XML. Unfortunately it does not support JSON, even when you fetch the report data as a string rather than downloading it as a file.
$reportAsString = $reportDownloadResult->getAsString();
$xml = simplexml_load_string($reportAsString);
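Building on that, here is a rough sketch of turning the parsed XML into JSON for the web tool. The AdWords XML report puts each result in a <row> element whose attributes are the selected fields, so the attribute names below are only examples and depend on your report definition:
// $xml comes from simplexml_load_string() above
$rows = array();
foreach ($xml->table->row as $row) {
    $rows[] = array(
        'keyword'     => (string) $row['keyword'],
        'impressions' => (int) $row['impressions'],
        'clicks'      => (int) $row['clicks'],
    );
}
echo json_encode($rows); // JSON for the web tool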

Play 2.1 : Access files on server

Let's suppose I have a folder on my server. I want to be able to access this folder from a URL using Play 2.1.
I really don't know how and I have been searching a lot on the Internet, to no avail.
Here is the folder containing the files I want to access:
/home/user/myFiles
I'd like it so that, when I request the following URL,
localhost:9000/filesOnServer/filename
the file named "filename" in the folder myFiles is downloaded.
This is not what I want to do:
GET /filesOnServer/*file controllers.Assets.at(path="/anything", file)
Indeed, this way I can only access files inside the Play application directory.
Moreover, if I were to use dist, the files would be packaged into a .jar, and files could no longer be added to the application.
Thank you for your help.
Are you using Scala or Java?
I think you should look at Play.getFile() and SimpleResult.
Here is a sample controller method in Scala; it might not be the most efficient, but it seems to work!
import play.api.Play
import play.api.http.HeaderNames._
import play.api.libs.iteratee.Enumerator
import play.api.mvc._

def getFile() = Action {
  val file = Play.getFile("../../Images/img.png")(Play.current)
  // Stream the file's bytes as the response body
  val fileContent = Enumerator.fromFile(file)
  SimpleResult(
    header = ResponseHeader(200, Map(
      CONTENT_LENGTH -> file.length.toString,
      CONTENT_TYPE -> "image/png",
      CONTENT_DISPOSITION -> s"attachment; filename=${file.getName}")),
    body = fileContent)
}
Hope it will help you!
Note: you could also use new java.io.File(absolutePath)
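To serve files from an arbitrary directory such as /home/user/myFiles, a minimal (untested) sketch could look like the following; the controller name Files is made up, and in real code the file parameter should be validated against "../" path traversal:
# conf/routes
GET     /filesOnServer/*file        controllers.Files.serve(file: String)

// app/controllers/Files.scala
package controllers

import play.api.mvc._

object Files extends Controller {
  def serve(file: String) = Action {
    val f = new java.io.File("/home/user/myFiles", file)
    // Reject missing files (and, in real code, traversal attempts)
    if (f.exists && f.isFile) Ok.sendFile(f)
    else NotFound("No such file: " + file)
  }
}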

google big query: export table to own bucket results in unexpected error

I'm stuck trying to export a table to my Google Cloud Storage bucket.
Example job id: job_0463426872a645bea8157604780d060d
I tried the Cloud Storage target with a lot of different variations; all reveal the same error. If I try to copy the natality report, it works.
What am I doing wrong?
Thanks!
Daniel
It looks like the error says:
"Table too large to be exported to a single file. Specify a uri including a * to shard export." Try switching the destination URI to something like gs://foo/bar/baz*
Specify the file extension along with the pattern. For example:
gs://foo/bar/baz*.gz in the case of GZIP (compressed)
gs://foo/bar/baz*.csv in the case of CSV (uncompressed)
Here foo is the bucket name, and bar can be, for example, a date string generated on the fly.
I was able to do it with:
bq extract --destination_format=NEWLINE_DELIMITED_JSON myproject:mydataset.mypartition gs://mybucket/mydataset/mypartition/{*}.json
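For completeness, a rough sketch of the same sharded export through the google-cloud-bigquery Python client (project, dataset, table and bucket names are placeholders); the important part is the * wildcard in the destination URI:
from google.cloud import bigquery

client = bigquery.Client(project="myproject")
extract_job = client.extract_table(
    "myproject.mydataset.mytable",
    # The * tells BigQuery it may shard the export across multiple files
    "gs://mybucket/exports/mytable-*.csv",
)
extract_job.result()  # wait for the export job to finish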

Locally calculate dropbox hash of files

In the Dropbox REST API, the metadata function has a parameter named "hash": https://www.dropbox.com/developers/reference/api#metadata
Can I calculate this hash locally, without calling any remote REST API function?
I need to know this value to reduce upload bandwidth.
https://www.dropbox.com/developers/reference/content-hash explains how Dropbox computes their file hashes. A Python implementation of this is below:
import hashlib
import math
import os

DROPBOX_HASH_CHUNK_SIZE = 4 * 1024 * 1024

def compute_dropbox_hash(filename):
    file_size = os.stat(filename).st_size
    with open(filename, 'rb') as f:
        block_hashes = b''
        while True:
            chunk = f.read(DROPBOX_HASH_CHUNK_SIZE)
            if not chunk:
                break
            block_hashes += hashlib.sha256(chunk).digest()
        return hashlib.sha256(block_hashes).hexdigest()
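For example (the path below is a placeholder), the locally computed value can then be compared with the content_hash field that Dropbox reports in the file's metadata to decide whether an upload is needed:
# Path is a placeholder; compare with the "content_hash" field from
# the Dropbox API's file metadata.
print(compute_dropbox_hash('photos/vacation.jpg'))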
The "hash" parameter on the metadata call isn't actually the hash of the file, but a hash of the metadata. It's purpose is to save you having to re-download the metadata in your request if it hasn't changed by supplying it during the metadata request. It is not intended to be used as a file hash.
Unfortunately I don't see any way via the Dropbox API to get a hash of the file itself. I think your best bet for reducing your upload bandwidth would be to keep track of the hash's of your files locally and detect if they have changed when determining whether to upload them. Depending on your system you also likely want to keep track of the "rev" (revision) value returned on the metadata request so you can tell whether the version on Dropbox itself has changed.
This won't directly answer your question, but is meant more as a workaround: the Dropbox SDK ships a simple updown.py example that uses file size and modification time to check whether a file is up to date.
An abbreviated example taken from updown.py:
dbx = dropbox.Dropbox(api_token)
...
# returns a dictionary of name: FileMetaData
listing = list_folder(dbx, folder, subfolder)
# name is the name of the file
md = listing[name]
# fullname is the path of the local file
mtime = os.path.getmtime(fullname)
mtime_dt = datetime.datetime(*time.gmtime(mtime)[:6])
size = os.path.getsize(fullname)
if (isinstance(md, dropbox.files.FileMetadata) and
        mtime_dt == md.client_modified and size == md.size):
    print(name, 'is already synced [stats match]')
As far as I know, no, you can't.
The only way is using the Dropbox API, which is explained here.
The rclone Go program from https://rclone.org has exactly what you want:
rclone hashsum dropbox localfile
rclone hashsum dropbox localdir
It can't take more than one path argument but I suspect that's something you can work with...
t0|todd#tlaptop/p8 ~/tmp|295$ echo "Hello, World!" > dropbox-hash-demo/hello.txt
t0|todd#tlaptop/p8 ~/tmp|296$ rclone copy dropbox-hash-demo/hello.txt dropbox-ttf:demo
t0|todd#tlaptop/p8 ~/tmp|297$ rclone hashsum dropbox dropbox-hash-demo
aa4aeabf82d0f32ed81807b2ddbb48e6d3bf58c7598a835651895e5ecb282e77 hello.txt
t0|todd#tlaptop/p8 ~/tmp|298$ rclone hashsum dropbox dropbox-ttf:demo
aa4aeabf82d0f32ed81807b2ddbb48e6d3bf58c7598a835651895e5ecb282e77 hello.txt