How to access Firefox Sync bookmarks without Firefox

Firefox 4 syncs bookmarks and other settings to a host run by Mozilla.
How do I access my bookmarks there (without Firefox)?
Is there a documented API?
It seems https://developer.mozilla.org/en/Firefox_Sync should contain the necessary documentation, but all links except the first point to empty pages.
I found a script called weave.py at https://github.com/mozilla/weaveclient-python/blob/master/weave.py that is supposed to be able to access those bookmarks, but it cannot use my credentials: it seems to expect usernames without "@" characters.
Is there any documentation out there on how to access Firefox Sync data, preferably with examples?
Right now I don't even know the entry point to this supposed web service.
When I go to https://services.mozilla.com/ I can change my password and presumably remove everything.

If you look at https://wiki.mozilla.org/Services/Sync, I think that's the documentation you want. More detail is at https://wiki.mozilla.org/Labs/Weave/Sync/1.1/API.

Indeed, the username is the SHA-1 hash of the email address, base32-encoded. Python code:
import base64
import hashlib

# SHA-1 hash of the account e-mail, base32-encoded and lowercased
base64.b32encode(hashlib.sha1('myemail@gmail.com'.encode('utf-8')).digest()).lower()
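To show where that username fits, here is a minimal sketch (using the requests library) of pulling the still-encrypted bookmarks collection over the Sync 1.1 API from the wiki pages above. The auth host, paths, and Basic-auth scheme below are my reading of those docs rather than verified values, and each record's payload still has to be decrypted with keys derived from your Sync Key:

import base64
import hashlib
import requests

email = 'myemail@gmail.com'        # your Sync account e-mail (placeholder)
password = 'my-sync-password'      # your Sync account password (placeholder)

# Derive the hashed username exactly as above
username = base64.b32encode(hashlib.sha1(email.encode('utf-8')).digest()).lower().decode('ascii')

# Ask the user API which storage node holds this account's data (per the Sync 1.1 docs)
node = requests.get('https://auth.services.mozilla.com/user/1.0/%s/node/weave' % username).text.rstrip('/')

# Fetch the bookmarks collection as JSON WBOs; the payload of each record is
# still encrypted with keys derived from the Sync Key, so a decryption step follows
r = requests.get('%s/1.1/%s/storage/bookmarks?full=1' % (node, username), auth=(username, password))
print(r.json())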

The WeaveID returned by ID.get("WeaveID").username is indeed SHA-1 hashed and base32 encoded.
A nice way to do this in Java is to use Apache Commons Codec, which includes Base32 since version 1.5:
public String getWeaveID(String email) throws UnsupportedEncodingException {
    // SHA-1 digest of the account e-mail
    byte[] sha = DigestUtils.sha(email.getBytes("UTF-8"));
    // Base32-encode (standard alphabet, not hex) and lowercase, matching the Python snippet above
    Base32 b32 = new Base32(64, new byte[]{ }, false);
    return b32.encodeToString(sha).toLowerCase();
}

Related

Decrypting Cognito codes with KMS client from aws-sdk-v3

I am following these instructions to implement a custom message sender in Cognito: https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-lambda-custom-sms-sender.html
It all works well with code similar to this (I use TypeScript on AWS Lambda):
import {buildClient, CommitmentPolicy, KmsKeyringNode} from '@aws-crypto/client-node';
import b64 from 'base64-js';
const {decrypt} = buildClient(CommitmentPolicy.REQUIRE_ENCRYPT_ALLOW_DECRYPT);
const keyring = new KmsKeyringNode({keyIds: ["my-key-arn"]});
...
const {plaintext} = await decrypt(keyring, b64.toByteArray(event.request.code));
console.log(plaintext.toString()) // prints plain text exactly as I need
However, this library @aws-crypto/client-node makes my bundle really huge, almost 20MB! Probably because it depends on some older AWS libs...
I used to use modular libraries like @aws-sdk/xxx, which indeed give much smaller bundles.
I have found that for encrypt/decrypt I can use @aws-sdk/client-kms. But it doesn't work!
I am trying the following code:
import {KMSClient, DecryptCommand} from "@aws-sdk/client-kms";
import b64 from 'base64-js';
const client = new KMSClient({});
await client.send(new DecryptCommand({CiphertextBlob: b64.toByteArray(event.request.code), KeyId: 'my-key-arn'}))
Which gives me an error:
InvalidCiphertextException: UnknownError
at deserializeAws_json1_1InvalidCiphertextExceptionResponse (/projectdir/node_modules/@aws-sdk/client-kms/dist-cjs/protocols/Aws_json1_1.js:3157:23)
at deserializeAws_json1_1DecryptCommandError (/projectdir/node_modules/@aws-sdk/client-kms/dist-cjs/protocols/Aws_json1_1.js:850:25)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async /projectdir/node_modules/@aws-sdk/middleware-serde/dist-cjs/deserializerMiddleware.js:7:24
at async /projectdir/node_modules/@aws-sdk/middleware-signing/dist-cjs/middleware.js:14:20
at async StandardRetryStrategy.retry (/projectdir/node_modules/@aws-sdk/middleware-retry/dist-cjs/StandardRetryStrategy.js:51:46)
at async /projectdir/node_modules/@aws-sdk/middleware-logger/dist-cjs/loggerMiddleware.js:6:22
at async REPL7:1:33 {
'$fault': 'client',
'$metadata': {
httpStatusCode: 400,
requestId: '<uuid>',
extendedRequestId: undefined,
cfId: undefined,
attempts: 1,
totalRetryDelay: 0
},
__type: 'InvalidCiphertextException'
}
What am I doing wrong? Does this KMSClient support what I need?
I have also tried the AWS CLI aws kms decrypt --ciphertext-blob ... command, which gives me exactly the same response. Though if I encrypt and decrypt any random message like "hello world", it works like a charm.
What am I doing wrong, and what is so special about the Cognito code ciphertext that I have to decrypt it some other way?
Short answer: Cognito does not use KMS to encrypt the text; it uses the AWS Encryption SDK. So you cannot use KMS to decrypt Cognito ciphertext.
Longer answer: I spent the past day trying to get a Python email-sender-trigger function working against Cognito using boto3 and the KMS client, until I found another post (somewhere?) explaining that Cognito does not encrypt data using KMS but rather the Encryption SDK. Of course, these two encryption mechanisms are not compatible.
For JavaScript and Node.js applications, it looks like you have an alternative to including the entire crypto-client: https://www.npmjs.com/package/@aws-crypto/decrypt-node
If all you are doing is decrypting, the above package will let you decrypt using the Encryption SDK and it's only 159KB.
I have managed to solve my task. I have realized that indeed it does not simply use KMS to encrypt the text; the encryption/decryption process is much more complicated.
There is a reference page: https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/message-format.html
It describes what the message looks like, with all the headers and body, with the IV, AAD, keys, etc. I have written my own script to parse it all and decrypt properly, and it worked! It's probably too long to share, so I suggest using the reference instead. Hopefully in the future they will publish a proper modular version of the SDK.
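As a rough illustration of how mechanical that reference is (this is not the script mentioned above, and it is sketched in Python only to keep it short), the fixed-size prefix of a message in format version 1 can be pulled apart like this. The field sizes come from the message-format page; the real work (unwrapping the data key through KMS, HKDF key derivation, framed AES-GCM decryption) only starts after these fields:

import struct

def parse_header_prefix(blob: bytes):
    # Fixed-size prefix of an AWS Encryption SDK message, format version 1
    version, msg_type, alg_id = struct.unpack_from('>BBH', blob, 0)   # 1 + 1 + 2 bytes
    message_id = blob[4:20]                                           # 16-byte message ID
    (aad_len,) = struct.unpack_from('>H', blob, 20)                   # length of the AAD section
    return {
        'version': version,          # 0x01 for message format version 1
        'type': msg_type,            # 0x80 = customer authenticated encrypted data
        'algorithm_id': alg_id,      # which AES-GCM / HKDF suite was used
        'message_id': message_id.hex(),
        'aad_length': aad_len,       # the encryption context, encrypted data keys,
                                     # frame info, and body follow after this
    }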
The one from '@aws-crypto' didn't work for me; it probably doesn't support all the protocols properly. That might no longer be true by the time you are reading this.

Expo / React Native Basic Authorization

I'm new to React Native and Expo, but I have started to write my own app with it, using the same backend I used with my Cordova app.
Unfortunately I hit a roadblock trying to recreate the browser's btoa() function, which I use to authenticate users with Basic authorization.
No matter what I try, I can't seem to get the same result as I did with btoa. I tried researching the subject, but I can't find a solid answer on what the difference is between Base64.encode() and btoa().
I know I'm doing something wrong. When I try out the POST request with Postman, I get the correct Basic auth token with it. But when I do it in code with base64 encoding (I tried multiple libraries), the result differs.
Example:
test@test.com:asdasdasd
in Postman: "dGVzdEB0ZXN0LmNvbTphc2Rhc2Rhc2Q="
in the app (to UTF-8, then base64): "W29iamVjdCBBcnJheUJ1ZmZlcl0="
Relevant part of my code:
const utf8_enc = utf8.encode(email+':'+password);
const b64_enc = base64.encode(utf8_enc);
console.log(b64_enc);
Used libraries:
Base64- https://www.npmjs.com/package/base-64
UTF8 - https://github.com/mathiasbynens/utf8.js
Please tell me why the two are different, and how I can recreate the Postman version.
Thank you!
Ok, I see what's happening now. If you follow the docs for that utf8 package, it won't import correctly in React Native. You can see that it's not imported correctly by trying to access decode() or version, as both will give you undefined. I think the reason is that they don't support ES2015 modules (see this rejected PR). This package will, however, work fine in Node.js or in the browser.
Oddly enough, you do have access to encode() when you import. It just doesn't do what you think it does. When you attempt to use encode(), all it actually returns is the string: [object ArrayBuffer]. In fact, no matter what string you pass to it, it'll always return the same result. Now if you use btoa() on this string (with or without UTF-8 conversion since there's no difference in this case), you will see that you get that same output in the browser: W29iamVjdCBBcnJheUJ1ZmZlcl0=
So, how to get around this?
If all you expect are extended ASCII strings, then you don't need to encode it in UTF-8 as they will all be within the valid character set. So you can just do:
base64.encode(email+':'+password);
However, if you anticipate supporting all Unicode characters, then you have a few options to convert that string:
Fork the utf8 package to have it support modules/exporting.
Copy paste the entirety of the utf8 source and put it in your own local library and export the functions.
Write your own UTF-8 encoder/decoder using the method suggested here which itself is from the MDN Documentation.
So that there's a reference to a solution here, this is the relevant encode part of the code from the MDN documentation, turned into a function:
function utf8encode(str) {
  return encodeURIComponent(str).replace(/%([0-9A-F]{2})/g, function(match, p1) {
    // Convert each percent-encoded byte back into a raw character
    return String.fromCharCode(parseInt(p1, 16));
  });
}

Get a file from IVirtualImageProvider

I have a custom plugin for serving images through LDAP (IPlugin and IVirtualImageProvider). Now I am working on importing users from LDAP into our own system, and as such I need to import those images. I was wondering if there is any way to use the plugin I previously created to import those images, perhaps something like:
ImageResizer.ImageJob i = new ImageResizer.ImageJob("http://host/ad/A68986", "~/uploads/<guid>.<ext>", new ImageResizer.ResizeSettings(
"width=2000;height=2000;format=jpg;mode=max"));
But the first parameter (source) would be "resolved" by my LDAP plugin (ImageResizer API).
Edit: I figured out this is possible, since the source can be an IVirtualFile. That implies that I know in advance which one to create (in my case, my own LDAP one); it would be nice to pass the URL and somehow get the correct IVirtualFile.
Yes, ImageJob resolves any 'app-relative virtual paths' using installed IVirtualImageProviders. Such paths must begin with "~/", and match the path prefix and syntax you've designed, of course.
In your case, this might look like
var i = new ImageResizer.ImageJob("~/ad/A68986", "~/uploads/<guid>.<ext>",
new ImageResizer.ResizeSettings("width=2000;height=2000;format=jpg;mode=max"));
You can also call Config.Current.Pipeline.GetFile to get an IVirtualFile reference based on a path, if you just want the original data.

Handling of Thumbnails in Google Drive Android API (GDAA)

I've run into the following problem when porting an app from REST API to GDAA.
The app needs to download some of (thousands of) JPEG images based on user selection. The way this is solved in the app is by downloading a thumbnail version first, using this construct of the REST API:
private static InputStream getCont(String rsid, boolean bBig) {
    InputStream is = null;
    if (rsid != null) try {
        File gFl = bBig ?
            mGOOSvc.files().get(rsid).setFields("downloadUrl").execute() :
            mGOOSvc.files().get(rsid).setFields("thumbnailLink").execute();
        if (gFl != null) {
            GenericUrl url = new GenericUrl(bBig ? gFl.getDownloadUrl() : gFl.getThumbnailLink());
            is = mGOOSvc.getRequestFactory().buildGetRequest(url).execute().getContent();
        }
    } catch (UserRecoverableAuthIOException uraEx) {
        authorize(uraEx.getIntent());
    } catch (GoogleAuthIOException gauEx) {
    } catch (Exception e) {
    }
    return is;
}
It allows getting either a 'thumbnail' or a 'full-blown' version of an image based on the bBig flag. The user can select a thumbnail from a list, and the full-blown image download follows (all of this backed by a disk-based LRU cache, of course).
The problem is that GDAA does not have an option to ask for a reduced-size / thumbnail version of an object (AFAIK), so I have to resort to combining both APIs, which makes the code more convoluted than I like (bottom of the page). Needless to say, the 'Resource ID' needed by the REST API may not be immediately available.
So, the question is: Is there a way to ask GDAA for a 'thumbnail' version of a document?
Downloading thumbnails isn't currently available in the Drive Android API, and unfortunately I can't give a timeframe to when it will be available. Until that time, the Drive Java Client Library is the best way to get thumbnails on Android.
We'd appreciate it if you go ahead and file a feature request in our issue tracker: https://code.google.com/a/google.com/p/apps-api-issues/
That gives requests more visibility to our teams internally, and issues will be marked resolved when we release updates.
Update: I had an error in the discussion of the request fields.
As Ofir says, you can't get thumbnails with the Drive Android API, but you can get them with the Drive Java Client Library. This page is a really good primer for getting started:
https://developers.google.com/drive/v3/web/quickstart/android
Oddly, I couldn't get the fields portion of the request to work as it is shown in that quickstart. In my experience, you have to request the fields a little differently.
Since you're doing a custom field request, you have to be sure to add the other fields you want as well. Here is how I've gotten it to work:
Drive.Files.List request = mService.files()
.list()
.setFields("files/thumbnailLink, files/name, files/mimeType, files/id")
.setQ("Your file param and/or mime query");
FileList files = request.execute();
files.getFiles(); //Each File in the collection will have a valid thumbnailLink
A sample query might be:
"mimeType = 'image/jpeg' or mimeType = 'video/mp4'"
Hope this helps!

Locally reading S3 files through Spark (or better: pyspark)

I want to read an S3 file from my (local) machine, through Spark (pyspark, really). Now, I keep getting authentication errors like
java.lang.IllegalArgumentException: AWS Access Key ID and Secret
Access Key must be specified as the username or password
(respectively) of a s3n URL, or by setting the fs.s3n.awsAccessKeyId
or fs.s3n.awsSecretAccessKey properties (respectively).
I looked everywhere here and on the web, tried many things, but apparently S3 has been changing over the last year or so, and all methods failed but one:
pyspark.SparkContext().textFile("s3n://user:password@bucket/key")
(note the s3n [s3 did not work]). Now, I don't want to use a URL with the user and password because they can appear in logs, and I am also not sure how to get them from the ~/.aws/credentials file anyway.
So, how can I read locally from S3 through Spark (or, better, pyspark) using the AWS credentials from the now standard ~/.aws/credentials file (ideally, without copying the credentials there to yet another configuration file)?
PS: I tried os.environ["AWS_ACCESS_KEY_ID"] = … and os.environ["AWS_SECRET_ACCESS_KEY"] = …, it did not work.
PPS: I am not sure where to "set the fs.s3n.awsAccessKeyId or fs.s3n.awsSecretAccessKey properties" (Google did not come up with anything). However, I did try many ways of setting these: SparkContext.setSystemProperty(), sc.setLocalProperty(), and conf = SparkConf(); conf.set(…); conf.set(…); sc = SparkContext(conf=conf). Nothing worked.
Yes, you have to use s3n instead of s3. s3 is some weird abuse of S3 the benefits of which are unclear to me.
You can pass the credentials to the sc.hadoopFile or sc.newAPIHadoopFile calls:
rdd = sc.hadoopFile('s3n://my_bucket/my_file',
                    'org.apache.hadoop.mapred.TextInputFormat',  # input format for plain text
                    'org.apache.hadoop.io.LongWritable',         # key class (byte offset)
                    'org.apache.hadoop.io.Text',                 # value class (line contents)
                    conf={'fs.s3n.awsAccessKeyId': '...',
                          'fs.s3n.awsSecretAccessKey': '...'})
The problem was actually a bug in Amazon's boto Python module. It was related to the fact that the MacPorts version is quite old: installing boto through pip solved the problem, and ~/.aws/credentials was then correctly read.
Now that I have more experience, I would say that in general (as of the end of 2015) Amazon Web Services tools and Spark/PySpark have patchy documentation and can have some serious bugs that are very easy to run into. For the first problem, I would recommend first updating the AWS command line interface, boto, and Spark whenever something strange happens: this has "magically" solved a few issues for me already.
Here is a solution on how to read the credentials from ~/.aws/credentials. It makes use of the fact that the credentials file is an INI file which can be parsed with Python's configparser.
import os
import configparser
config = configparser.ConfigParser()
config.read(os.path.expanduser("~/.aws/credentials"))
aws_profile = 'default' # your AWS profile to use
access_id = config.get(aws_profile, "aws_access_key_id")
access_key = config.get(aws_profile, "aws_secret_access_key")
See also my gist at https://gist.github.com/asmaier/5768c7cda3620901440a62248614bbd0 .
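To tie this to the earlier answers, here is a sketch of handing those parsed values to Spark so that plain sc.textFile("s3n://...") calls can authenticate. It reuses the access_id and access_key variables from the snippet above and goes through sc._jsc.hadoopConfiguration(), which is not a public PySpark API, so treat it as a workaround rather than the blessed way:

from pyspark import SparkContext

sc = SparkContext()

# Push the parsed credentials into Hadoop's configuration (the same properties
# the error message asks for), so s3n:// URLs no longer need embedded credentials
hadoop_conf = sc._jsc.hadoopConfiguration()
hadoop_conf.set("fs.s3n.awsAccessKeyId", access_id)
hadoop_conf.set("fs.s3n.awsSecretAccessKey", access_key)

rdd = sc.textFile("s3n://my_bucket/my_file")   # hypothetical bucket/key
print(rdd.count())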
Setting up environment variables could help.
Here, in the Spark FAQ under the question "How can I access data in S3?", they suggest setting the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.
I cannot say much about the Java objects you have to give to the hadoopFile function, only that this function already seems deprecated in favor of some "newAPIHadoopFile". The documentation on this is quite sketchy, and I feel like you need to know Scala/Java to really get to the bottom of what everything means.
In the meantime, I figured out how to actually get some S3 data into pyspark, and I thought I would share my findings.
The Spark API documentation says that it uses a dict that gets converted into a Java configuration (XML). I found the configuration for Java; this should probably reflect the values you should put into the dict: How to access S3/S3n from a local Hadoop installation
bucket = "mycompany-mydata-bucket"
prefix = "2015/04/04/mybiglogfile.log.gz"
filename = "s3n://{}/{}".format(bucket, prefix)
config_dict = {"fs.s3n.awsAccessKeyId":"FOOBAR",
"fs.s3n.awsSecretAccessKey":"BARFOO"}
rdd = sc.hadoopFile(filename,
'org.apache.hadoop.mapred.TextInputFormat',
'org.apache.hadoop.io.Text',
'org.apache.hadoop.io.LongWritable',
conf=config_dict)
This code snippet loads the file from the bucket and prefix (file path in the bucket) specified on the first two lines.