I want to use a CredentialPicker to prompt for a username and password. When I configure an instance of this class, I can set CredentialPickerOptions.PreviousCredential to a value previously obtained from CredentialPickerResults.Credential. I believe this causes the dialog to prepopulate the credentials.
However, persisting this value appears to be non-trivial; it's an IBuffer, whose members don't appear to contain the relevant credentials. Programming Windows 8 Apps with HTML, CSS, and JavaScript, page 657, implies that this should be possible:
An IBuffer containing the credential as an opaque byte array. This is what you can
save in your own persistent state if needs be and passed back to the picker at a later time; we’ll
see how shortly.
Unfortunately, the "we'll see how shortly" appears only to refer to the fact that the value can be passed back from memory into PreviousCredential; I didn't find any mention of how to persist it.
Also, I want to persist the credentials using the recommended approach, which I believe is to use PasswordVault; however, that appears to only let me save the credentials as username and password strings rather than as an IBuffer.
Thanks for taking the time to ask, and I certainly agree that I could've been more clear in that part of the book. Admittedly, I spent less time on Chapter 14 than I would have liked, but I'll try to remedy that in the next edition. Feedback like yours is extremely valuable in knowing where I need to make improvements, so I appreciate it.
Anyway, writing a buffer to a file is something that was mentioned back in Chapter 8 (and could've been mentioned again here...page 325, though it doesn't mention IBuffer explicitly). It's a straightforward job using the Windows.Storage.FileIO class as shown below (promise!).
First, a clarification. You have two ways to save the entered credentials. If you want to save the plain-text credentials, then absolutely use the Credential Locker. The benefit here is that those credentials can roam automatically with the user if password roaming is enabled in PC Settings (it is by default). Otherwise, you can save the opaque CredentialPickerResults.credential property directly to a file. It's already encrypted and scrambled, so you don't need to use the Credential Locker in that case.
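As a minimal sketch of the Credential Locker route (assuming results is the CredentialPickerResults object from the pickAsync completed handler, and using an illustrative resource name):

var vault = new Windows.Security.Credentials.PasswordVault();

//Save the plain-text credentials the user entered in the picker
if (results.credential != null) {
    vault.add(new Windows.Security.Credentials.PasswordCredential(
        "myAppCredentials", results.credentialUserName, results.credentialPassword));
}

//Later, read them back (findAllByResource throws if nothing is stored yet;
//call retrievePassword before using the password property)
var saved = vault.findAllByResource("myAppCredentials")[0];
saved.retrievePassword();
console.log(saved.userName, saved.password);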
Now for saving and loading the credential property, which is an IBuffer: for this you use FileIO.writeBufferAsync to save and FileIO.readBufferAsync to reload.
I modified the Credential Picker sample, scenario 3 to demonstrate this. To save the credential property, I added this code at the end of the completed handler for pickAsync:
//results.credential will be null if the user cancels
if (results.credential != null) {
    //Having retrieved a credential, write the opaque buffer to a file
    var option = Windows.Storage.CreationCollisionOption.replaceExisting;

    Windows.Storage.ApplicationData.current.localFolder.createFileAsync("credbuffer.dat", option).then(function (file) {
        return Windows.Storage.FileIO.writeBufferAsync(file, results.credential);
    }).done(function () {
        //No results for this operation
        console.log("credbuffer.dat written.");
    }, function (e) {
        console.log("Could not create credbuffer.dat file.");
    });
}
Then I created a new function to load that credential, if possible. This is called on the Launch button click instead of launchCredPicker:
//In the page ready method:
document.getElementById("button1").addEventListener("click", readPrevCredentialAndLaunch, false);

//Added
function readPrevCredentialAndLaunch() {
    Windows.Storage.ApplicationData.current.localFolder.getFileAsync("credbuffer.dat").then(function (file) {
        return Windows.Storage.FileIO.readBufferAsync(file);
    }).done(function (buffer) {
        console.log("Read from credbuffer.dat");
        launchCredPicker(buffer);
    }, function (e) {
        console.log("Could not reopen credbuffer.dat; launching default");
        launchCredPicker(null);
    });
}
//Modified to take a buffer
function launchCredPicker(prevCredBuffer) {
    try {
        var options = new Windows.Security.Credentials.UI.CredentialPickerOptions();
        //Other options omitted

        if (prevCredBuffer != null) {
            options.previousCredential = prevCredBuffer;
        }

        //...
That's it. I put the modified JS sample on http://www.kraigbrockschmidt.com/src/CredentialPickerJS_modified.zip.
.Kraig
Author, Programming Windows 8 Apps in HTML, CSS, and JavaScript (free ebook)
I'm not familiar enough with Node.js or Lambda to see an obvious solution to a dilemma I have. I'm writing some utilities on Lambda to manipulate images in an S3 bucket and make them accessible via API Gateway to REST calls.
BACKGROUND DETAILS:
One of the utilities I have retrieves the headObject information, such as the mtime, size, and metadata. The images themselves will likely be coming in by various means, and I won't always have control over adding metadata to them when they arrive or are created. But I don't really need it until it's necessary to view details about an image from a web interface, and when I do that I use a thumbnail instead. So I created a Lambda script triggered by the create event (with a fallback variation available via API Gateway) that creates a thumbnail, either when the image is first uploaded to S3 or whenever I make the CreateThumbnail gateway call, and at that point adds metadata to the thumbnail with things like the original image's mimetype, pixel width, and height.
What I would like to be able to do is create a 'GetObjectInfo' call that first pulls the headObject data, then checks whether the specified bucket is the one holding the associated thumbnail files (i.e. whether the object is or is not a thumbnail). If it is not a thumbnail, I want to then retrieve, or at least attempt to retrieve, the headObject for the associated thumbnail file and attach the thumbnail's metadata (if the thumbnail exists) to the data from the original head request before returning the information.
The problem is that when I set up an async callback scheme, the first headObject request completes, but the second never seems to get out of the starting gate.
The method in my class is:
getHeadObject(bucket, object, callback) {
    console.log(bucket, "CLASS-head#1")
    this.s3.headObject({"Bucket": bucket, "Key": object}, function(err, data) {
        console.log(bucket, "CLASS-head#2")
        callback(err, data)
    })
}

getObjectInfo(bucket, object, callback) {
    let scope = this
    console.log(bucket, "CLASS-object#1")
    this.getHeadObject(bucket, object, function(err, data) {
        console.log(bucket, "CLASS-object#2")
        if (err)
            callback(err, data)
        else
            callback(null, data)
    })
}
The Lambda code that calls it (with the second call nested inside the first) is:
var cInst = new myClass()

cInst.getObjectInfo(srcBucket, filePath, function(err, data) {
    if (data.status == 1) { // if parent request success
        // if parent is not thumbnail
        if (srcBucket != THUMB_BUCKET) { // see if a thumbnail exists
            let thumbPath = myClass.getThumbPath(srcBucket, userId, directory, targetObject)
            console.log('---- thumbPath', thumbPath)

            cInst.getObjectInfo(THUMB_BUCKET, thumbPath, function(err, thumbData) {
                console.log("thumbData #1", thumbData)
                if (thumbData.status == 1) { // thumbnail exists
                    console.log("thumbData")
                }
            })
        }
        context.succeed(myClass.createResponse(1, data, api))
    } else {
        context.fail(myClass.createResponse(data.status, data, api))
    }
})
On the first call (the parent) I see:
{bucket} "CLASS-object#1"
{bucket} "CLASS-head#1"
{bucket} "CLASS-head#2"
{bucket} "CLASS-object#2"
on the second I only see:
image-thumbnails "CLASS-object#1"
image-thumbnails "CLASS-head#1"
(getThumbPath is just a static utility function that builds the thumbnail path based on the parameters related to the original file. It is already tested as working and produces something like {original-bucket-name}/{userid}/{subdirectory}/{file-basename_150x150.jpg} for any given image. I confirmed that in this instance the thumbnail exists and matches the path returned by getThumbPath, and the ACL appears to have permission to read the bucket and the object.)
UPDATE: More weirdness
I tried setting the permissions to publicly readable on the thumbnail and it worked. So I started messing with the acl. For the time being since I am still testing, I just gave the role for the scripts full S3 permissions.
But I noticed now that it's working and not working intermittently. One time it completes, the next time it doesn't. WTF is going on here?
I would bet that this is the most common problem that people see when using Node.js with Lambda.
When a Node.js Lambda handler returns, the runtime freezes the execution environment, so any promises or async callbacks that are still in flight are effectively stopped before they complete.
To make sure the Lambda does not terminate prematurely, wait until those promises are complete by using await.
In your case, the following will work: wrap any async calls in a promise and then await them.
// Note: the enclosing Lambda handler must be declared async for these awaits to be valid,
// and the callback that contains the inner await must itself be async.
await new Promise((resolve, reject) => {
    cInst.getObjectInfo(srcBucket, filePath, async function(err, data) {
        if (data.status == 1) {
            if (srcBucket != THUMB_BUCKET) {
                ...
                ...
                await new Promise((resolve2, reject2) => {
                    cInst.getObjectInfo(THUMB_BUCKET, thumbPath, function(err, thumbData) {
                        ...
                        ...
                        resolve2();
                    });
                });
            }
            context.succeed(myClass.createResponse(1, data, api));
            resolve();
        } else {
            context.fail(myClass.createResponse(data.status, data, api));
            reject();
        }
    });
});
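Equivalently, as a rough sketch, you can promisify getObjectInfo once and use plain await, which avoids the nesting (this assumes the handler itself is declared async and that getObjectInfo keeps its (err, data) callback signature):

const getObjectInfoAsync = (bucket, key) =>
    new Promise((resolve, reject) =>
        cInst.getObjectInfo(bucket, key, (err, data) => err ? reject(err) : resolve(data)));

const data = await getObjectInfoAsync(srcBucket, filePath);
if (data.status == 1 && srcBucket != THUMB_BUCKET) {
    const thumbPath = myClass.getThumbPath(srcBucket, userId, directory, targetObject);
    const thumbData = await getObjectInfoAsync(THUMB_BUCKET, thumbPath);
    // ...attach thumbData's metadata to data here before responding...
}
context.succeed(myClass.createResponse(1, data, api));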
Meteor's loginWithPassword() function doesn't give me the systemData object, which I add to the user doc (not to the profile object) during registration. The thing is, if I look in the console after logging in, I can see the systemData object (so it's probably not a publish issue), but not in the callback of loginWithPassword(), where I need it (to dynamically redirect the user to the proper page). Is there a way to get this object without any ugly hacks like timers?
Meteor.loginWithPassword(email, password, function(errorObject) {
    if (errorObject) {
        ...
    } else {
        // returns true
        if (Meteor.userId()) {
            // returns false
            if (Meteor.user().systemData) {
                ...
            }
            // user doc without systemData object
            console.log(JSON.stringify(Meteor.user()));
        }
    }
});
I'm adding the systemData object when creating the user:
Accounts.onCreateUser(function(options, user) {
    if (options.profile) {
        user.profile = options.profile;
    }
    ...
    user.systemData = systemDataRegularUser;
    return user;
});
Are you sure you publish that data to the client?
I get the user info in the loginWithPassword callback function:
Meteor.loginWithPassword username, password, (error, result1) ->
  options =
    username: username
    password: password
    email: result['data']['email']
    profile:
      name: result['data']['display-name']
      roles: result.roles
  console.log Meteor.user(), result1
I create the user with the following code (options contains systemData):
Accounts.createUser options
The first problem is that you want a custom field on a user document published to the client. This is a common question - see the answer here.
The next problem is that even after you add something like:
Meteor.publish("userData", function () {
    return Meteor.users.find(this.userId, {fields: {systemData: 1}});
});
I think you still have a race condition. When you call loginWithPassword, the server will publish your user document, but it will also publish another version of the same document with the systemData field. You are hoping that both events have completed by the time Meteor.user() is called. In practice this may just work, but I'm not sure there is any guarantee that it always will. As you suggested, if you added a slight delay with a timer that would probably work but it's an ugly hack.
Alternatively, can you just add systemData to the user's profile so it will always be published?
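If you go that route, a minimal sketch based on the onCreateUser code from the question would be:

Accounts.onCreateUser(function(options, user) {
    user.profile = options.profile || {};
    // profile is published to the owning user by default, so systemData
    // will be on the client as soon as the user document arrives
    user.profile.systemData = systemDataRegularUser;
    return user;
});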
I didn't find an exact way to solve this, but I found an easy workaround.
To take some action right after a user logs in (e.g. dynamically redirect the user to the proper page), you can hook into your home page with Iron Router (if you're using it):
this.route('UsersListingHome', {
    path: '/',
    template: 'UsersListing',
    data: function() { ... },
    before: function() {
        if (isCurrentUserAdmin() && Session.get('adminJustLogged') !== 'loggedIn') {
            Router.go('/page-to-redirect');
            Session.set('adminJustLogged', 'loggedIn');
        }
    }
});
After clicking logout, of course: if (isCurrentUserAdmin()) { Session.set('adminJustLogged', null); }
I also thought about calling Meteor.call('someMethod') to fetch the userData object in the method callback, but for now I'm satisfied.
PS: I know it's not recommended to have lots of Session variables or other reactive data sources if you care about keeping the app fast, but I believe this one is not a tragedy :)
PS2: Anyway, thanks for your answers.
What is the correct method for setting a client to auto answer with the vLine API for WebRTC calls?
Looking at your comment, it looks like you have figured this out. But for completeness and for future reference I will go ahead and answer.
To auto answer a call, all you have to do is call MediaSession.start() when an incoming call comes in, instead of throwing a prompt to the user.
Here is an example snippet:
client.on('add:mediaSession', onAddMediaSession, self);

// Handle new media sessions
onAddMediaSession(event) {
    var mediaSession = event.target;
    mediaSession.on('enterState:incoming', onIncoming, self);
},

// Handle new incoming calls and autoAccept
onIncoming(event) {
    var mediaSession = event.target;
    // Auto Accept call instead of a prompt
    mediaSession.start();
}
Note that you can do this in your code even if you are using the UI Widgets.
When JavaScript is run in the browser, there is no need to try to hide function code, because it is downloaded and viewable in the page source.
When run on the server, the situation changes. There are use cases, such as an API, where you want to provide users with functions to call without allowing them to view the code that is run.
In our specific case we want to execute user-submitted JavaScript inside Node. We are able to sandbox the Node.js API; however, we would like to add our own API to that sandbox without users being able to toString the functions to view the code that is run.
Does anyone have a pattern, or know of a way, to prevent users from outputting a function's code?
Update:
Here is a full solution (I believe) based on the accepted answer below. Please note that although this is demonstrated using client-side code, you would not use it client side, because someone could see the contents of your hidden function simply by reading the downloaded code (although minification may slow down inspection somewhat).
This is meant for server-side use, where you want to allow users to run API code within a sandbox environment without letting them view what the APIs do. The sandbox in this code is only to demonstrate the point; it is not an actual sandbox implementation.
// function which hides another function by returning an anonymous
// function which calls the hidden function (i.e. places the hidden
// function in a closure to enable access when the wrapped function is passed to the sandbox)
function wrapFunc(funcToHide) {
    var shownFunc = function() {
        funcToHide();
    };
    return shownFunc;
}

// function whose contents you want to hide
function secretFunc() {
    alert('hello');
}

// api object (will be passed to the sandbox to enable access to
// the hidden function)
var apiFunc = wrapFunc(secretFunc);
var api = {};
api.apiFunc = apiFunc;

// sandbox (not an actual sandbox implementation - just for demo)
(function(api) {
    console.log(api);
    alert(api.apiFunc.toString());
    api.apiFunc();
})(api);
If you wrap a callback in a function, you can use another function in that scope which is actually hidden from the callback scope, thus:
function hideCall(funcToHide) {
    var hiddenFunc = funcToHide;
    var shownFunc = function() {
        hiddenFunc();
    };
    return shownFunc;
}
Then run thusly
var shtumCallBack = hideCall(secretSquirrelFunc);
userCode.tryUnwindingThis(shtumCallBack);
The userCode scope will not be able to access secretSquirrelFunc except to call it, because the scope it would need is that of the hideCall function which is not available.
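For the server-side case described in the question, a minimal sketch of the same pattern using Node's built-in vm module might look like this (note that vm is not a hardened security sandbox, and the names here are only illustrative):

var vm = require('vm');

// function whose contents you want to hide
function secretFunc() {
    console.log('hello from the hidden implementation');
}

// wrapper that keeps secretFunc reachable only through a closure
function wrapFunc(funcToHide) {
    return function() {
        return funcToHide.apply(null, arguments);
    };
}

// expose only the wrapper to the sandboxed context
var context = vm.createContext({
    console: console,
    api: { apiFunc: wrapFunc(secretFunc) }
});

// "user" code: calling works, but toString only reveals the wrapper body
vm.runInContext('api.apiFunc(); console.log(api.apiFunc.toString());', context);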
With RavenDB, creating an IDocumentSession upon app start-up (and never closing it until the app is closed) allows me to use optimistic concurrency by doing this:
public class GenericData : DataAccessLayerBase, IGenericData
{
    public void Save<T>(T objectToSave)
    {
        Guid eTag = (Guid)Session.Advanced.GetEtagFor(objectToSave);
        Session.Store(objectToSave, eTag);
        Session.SaveChanges();
    }
}
If another user has changed that object, then the save will correctly fail.
But what I can't do, when using one session for the lifetime of the app, is see changes made to documents by other instances of the app (say, by Joe, five cubicles away). When I do this, I don't see Joe's changes:
public class CustomVariableGroupData : DataAccessLayerBase, ICustomVariableGroupData
{
    public IEnumerable<CustomVariableGroup> GetAll()
    {
        return Session.Query<CustomVariableGroup>();
    }
}
Note: I've also tried this, but it didn't display Joe's changes either:
return Session.Query<CustomVariableGroup>().Customize(x => x.WaitForNonStaleResults());
Now, if I go the other way, and create an IDocumentSession within every method that accesses the database, then I have the opposite problem. Because I have a new session, I can see Joe's changes. Buuuuuuut... then I lose optimistic concurrency. When I create a new session before saving, this line produces an empty GUID, and therefore fails:
Guid eTag = (Guid)Session.Advanced.GetEtagFor(objectToSave);
What am I missing? If a Session shouldn't be created within each method, nor at the app level, then what is the correct scope? How can I get the benefits of optimistic concurrency and the ability to see others' changes when doing a Session.Query()?
You won't see the changes, because you use the same session. See my other replies for more details.
Disclaimer: I know this can't be the long-term approach, and therefore won't be an accepted answer here. However, I simply need to get something working now, and I can refactor later. I also know some folks will be disgusted with this approach, lol, but so be it. It seems to be working. I get new data with every query (new session), and I get optimistic concurrency working as well.
The bottom line is that I went back to one session per data access method. And whenever a data access method does some type of get/load/query, I store the eTags in a static dictionary:
public IEnumerable<CustomVariableGroup> GetAll()
{
    using (IDocumentSession session = Database.OpenSession())
    {
        IEnumerable<CustomVariableGroup> groups = session.Query<CustomVariableGroup>();
        CacheEtags(groups, session);
        return groups;
    }
}
Then, when I'm saving data, I grab the eTag from the cache. This causes a concurrency exception if another instance has modified the data, which is what I want.
public void Save(EntityBase objectToSave)
{
    if (objectToSave == null) { throw new ArgumentNullException("objectToSave"); }

    Guid eTag = Guid.Empty;

    if (objectToSave.Id != null)
    {
        eTag = RetrieveEtagFromCache(objectToSave);
    }

    using (IDocumentSession session = Database.OpenSession())
    {
        session.Advanced.UseOptimisticConcurrency = true;
        session.Store(objectToSave, eTag);
        session.SaveChanges();
        CacheEtag(objectToSave, session); // We have a new eTag after saving.
    }
}
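The CacheEtag, CacheEtags, and RetrieveEtagFromCache helpers aren't shown above; they are just thin wrappers around the static dictionary, roughly along these lines (a sketch that assumes EntityBase.Id is the string document id):

// Rough sketch of the static eTag cache, keyed by document id.
private static readonly Dictionary<string, Guid> etagCache = new Dictionary<string, Guid>();

private static void CacheEtag(EntityBase entity, IDocumentSession session)
{
    etagCache[entity.Id] = (Guid)session.Advanced.GetEtagFor(entity);
}

private static void CacheEtags(IEnumerable<CustomVariableGroup> entities, IDocumentSession session)
{
    foreach (EntityBase entity in entities) { CacheEtag(entity, session); }
}

private static Guid RetrieveEtagFromCache(EntityBase entity)
{
    Guid eTag;
    return etagCache.TryGetValue(entity.Id, out eTag) ? eTag : Guid.Empty;
}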
I absolutely want to do this the right way in the long run, but I don't know what that way is yet.
Edit: I'm going to make this the accepted answer until I find a better way.
Bob, why don't you just open up a new Session every time you want to refresh your data?
Opening a new session for every request has many trade-offs, and your solution for optimistic concurrency (managing eTags in your own singleton dictionary) shows that it was never intended to be used that way.
You said you have a WPF application. Alright, open a new session on startup. Load and query whatever you want, but don't close the session until you want to refresh your data (e.g. a list of orders, customers, I don't know...). Then, when you want to refresh it (after a user clicks a button, a timer event fires, or whatever), dispose the session and open a new one. Does that work for you?
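As a minimal sketch of that idea (the class and entity names here are illustrative, not from your code):

public class OrdersViewModel : IDisposable
{
    private readonly IDocumentStore store;
    private IDocumentSession session;

    public OrdersViewModel(IDocumentStore store)
    {
        this.store = store;
        this.session = store.OpenSession(); // opened at startup
    }

    // Query freely against the current session; it tracks eTags for the
    // documents it loads, so optimistic concurrency keeps working on save.
    public IEnumerable<Order> GetOrders()
    {
        return session.Query<Order>().ToList();
    }

    // Called from a Refresh button, a timer, etc.: throw the old session
    // away and open a fresh one so other users' changes become visible.
    public void Refresh()
    {
        session.Dispose();
        session = store.OpenSession();
    }

    public void Dispose()
    {
        session.Dispose();
    }
}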