Does anyone know if it's possible to set the OneDrive quota to 0? A client has a concern about data leakage, so I figured the fastest thing to do would be to change the quota. Unfortunately you can't do it through the UI (25 GB minimum), and it looks like the API only gives us a GET, no SET.
Any ideas?
function getQuota() {
    WL.api({ path: "/me/skydrive/quota", method: "GET" }).then(
        function (response) {
            log(JSON.stringify(response).replace(/,/g, ",\n"));
        },
        function (response) {
            log("Could not access quota, status = " +
                JSON.stringify(response.error).replace(/,/g, ",\n"));
        }
    );
}
No, there is no API available to modify a user's quota.
Quota is a representation of how much space is available to a user on OneDrive, based on a number of factors. A similar scenario would be attempting to mark half of a 700 MB CD-R as unusable; there just isn't tooling built to handle that flow.
If you are worried about persisting state in OneDrive, I would recommend having the client move the sensitive data into a subfolder and grant read-only access to other parties. You could check the 'modifiedDate' value on the folder to discover any modifications, but that is a reactive rather than proactive approach.
Unfortunately OneDrive does not have a source control model built into its design, which is what it sounds like your customer needs.
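For the reactive check described above, here is a minimal polling sketch, reusing the WL.api style from the question. "folder.abc123" is a placeholder folder ID, and the modified-timestamp property name ('updated_time' in the Live Connect REST API) may differ in other API versions:

// Sketch only: poll the folder's metadata and compare the modified timestamp.
var lastSeen = null;
function checkFolderForChanges(folderId) {
    WL.api({ path: folderId, method: "GET" }).then(
        function (folder) {
            if (lastSeen && folder.updated_time !== lastSeen) {
                log("Folder was modified at " + folder.updated_time);
            }
            lastSeen = folder.updated_time;
        },
        function (response) {
            log("Could not read folder metadata: " +
                JSON.stringify(response.error));
        }
    );
}
// Reactive, as noted above: check every five minutes.
setInterval(function () { checkFolderForChanges("folder.abc123"); }, 5 * 60 * 1000);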
I decided to take my application to the next level by creating a RESTful API.
I think I understand the general principles, I have read some tutorials.
My model is pretty simple. I have Projects and Tasks.
So to get the lists of Tasks for a Project you call:
GET /project/:id/tasks
to get a single Task:
GET /task/:id
To create a Task in a Project
POST /task
payload: { projectId: :id }
To edit a Task
PATCH /task/:taskId
payload: { data to be changed }
etc...
So far, so good.
But now I want to implement an operation that moves a Task from one Project to another.
My first guess was to do:
PATCH /task/:taskId
payload: { projectId: :projectId }
but I do not feel comfortable with revealing the internal structure of my backend to the frontend.
Of course, it is just a convention and has nothing to do with security, but I would feel better with something like:
PATCH /task/:taskId
payload: { newProject: :projectId }
where there is no direct relation between the 'newProject' and the real column in the database.
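That mapping is just a thin translation layer on the server. As a hypothetical sketch (Express and a Sequelize-style Task model are assumptions here, not part of the question):

// Hypothetical handler: the wire field 'newProject' is mapped to the
// internal 'projectId' column, so the payload never exposes the schema.
app.patch('/task/:taskId', async (req, res) => {
    const updates = {};
    if ('newProject' in req.body) {
        updates.projectId = req.body.newProject; // wire name -> column name
    }
    await Task.update(updates, { where: { id: req.params.taskId } });
    res.sendStatus(204);
});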
But then, the next operation comes.
I want to copy ALL tasks from Project A to Project B with one API call.
PUT /task
payload: { fromProject: :projectA, toProject: :projectB }
Is it a correct RESTful approach? If not - what is the correct one?
What is missing here is "a second verb".
You can see that we are creating new task(s), hence 'PUT', but we are also 'copying', which is implied by fromProject and toProject.
Is it a correct RESTful approach? If not - what is the correct one?
To begin, think about how you would do it in a web browser: the world wide web is the reference implementation for the REST architectural style.
One of the first things that you will notice: on the web, we are almost always using POST to make changes to the server. You fill in a form in a browser and submit it; the browser takes information from the input controls of the form to create the HTTP request body, and the server figures out how to do the work that is described.
What we have in HTTP is a standardized semantics for messages that manipulate individual documents ("resources"); doing useful work is a side effect of manipulating documents (see Webber 2011).
The trick of POST is that it is the method whose standardized meaning includes the case where "this method isn't worth standardizing" (see Fielding 2009).
POST /2cc3e500-77d5-4d6d-b3ac-e384fca9fb8d
Content-Type: text/plain
Bob,
Please copy all of the tasks from project A to project B
The request line and headers here are metadata in the transfer of documents over a network domain. That is to say, that's the information we are sharing with the general purpose HTTP application.
The actual underlying business semantics of the changes we are making to documents is not something that the HTTP application cares about -- that's the whole point, after all.
That said - if you are really trying to manipulate document hierarchies in a general-purpose and standardized way, then you should maybe see whether your problem is a close match to the WebDAV specifications (RFC 2291, RFC 4918, RFC 3253, etc.).
If the constraints described by those documents are acceptable to you, then you may find that a lot of the work has already been done.
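To make the copy operation concrete, a hypothetical POST handler might look like the following (again, Express and a Sequelize-style Task model are assumptions, not anything the question prescribes):

// Sketch: copy all tasks from the project named in the body into the
// project addressed by the URL. All names here are illustrative.
app.post('/projects/:id/copy-tasks', async (req, res) => {
    const tasks = await Task.findAll({
        where: { projectId: req.body.fromProject }
    });
    // Clone each task, dropping the id so the database assigns new ones.
    const copies = tasks.map(t => ({
        ...t.get({ plain: true }),
        id: undefined,
        projectId: req.params.id
    }));
    await Task.bulkCreate(copies);
    res.sendStatus(201);
});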
Background: why use cookies with Lambdas?
OWASP is very clear that cookies are the best option for session management:
.. cookies .. are one of the most extensively used session ID exchange mechanisms, offering advanced capabilities not available in other methods.
However, AWS's API Gateway literature often talks about using JWTs for authentication rather than cookies. While some tech blogs out there seem to think it's OK to use JWTs in this way, there are definitely recognised issues with JWTs. Two issues of particular note are:
(a) you can't easily invalidate a JWT. At best you can keep a server-side database of blocked JWTs and make sure any service validating JWTs also checks against this block list (see the sketch after point (b)). That sounds a lot like implementing regular old sessions, largely defeating the point of using JWTs.
(b) if you want to use JWTs for authorization as well as authentication, you'll run into issues when you need to update the authorization and it's not a change driven by the end user themselves. Scenarios in this category include: a system administrator or account manager changes the user's access level; a trial/contract is ended by a cron job; a webhook is triggered by a 3rd-party SaaS integration (e.g. Stripe). You might say, "in that case use a separate mechanism for authorization", but then again you're back to good old sessions.
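To illustrate point (a), a minimal sketch of token validation with a revocation check, assuming the jsonwebtoken package and a hypothetical async block-list store:

const jwt = require('jsonwebtoken');

// Verifies the signature and expiry, then rejects revoked tokens.
// 'blocklist' is an assumed async store keyed by the standard jti claim.
async function verifyToken(token, secret, blocklist) {
    const payload = jwt.verify(token, secret); // throws if invalid or expired
    if (await blocklist.has(payload.jti)) {
        throw new Error('Token has been revoked');
    }
    return payload;
}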
To be clear, I understand the value of JWTs in letting one server communicate its trust in a user's identity to another server, but that's a very different purpose to session management.
Session management in Node
All roads seem to lead to express-session as the most battle-tested implementation of sessions in Node. It offers a wide range of storage options to choose from*.
In the context of Lambdas, you could in principle try to use express-session as though it were just a function factory for functions with the signature (req, res, next) => void, but that's rather hacky and in no way recommended by express-session. It's also not entirely clear how best to match that call signature to the objects you get in an AWS Lambda, nor which storage mechanisms are optimised for Lambdas (which are ephemeral and need to start quickly).
I would really like a lightweight node module that lets you do something like the following:
import { Sessions } from 'sessions';

// Configure session management. Should be super lightweight for use in a Lambda.
const sessions = new Sessions({
    /* ...basic cookie & expiration config */
    secret: "something", // extra security, as recommended by express
    store: { // object with the following interface:
        createSession(sessionId, metadata) { /* ... */ },
        getFromSessionId(sessionId) { /* ... */ },
        updateSession(sessionId, metadata) { /* ... */ },
        customIndexedProperties: ['userId'], // indexed in addition to sessionId
        getSessionIdsFromIndexedProperty(propertyName, propertyValue) { /* ... */ },
    }
});

// Create a session. Note the API is not opinionated about response-header mechanics.
response.headers['set-cookie'] = await sessions.createCookieForSession({
    userId: 'user1',
    /* ...other user info */
});

// Get the user's session. Again, not opinionated about where the cookie comes from.
const userSessionInfo = await sessions.getSessionFromCookie(request.headers['cookie']);

// Update a user's session, but not initiated by the user themselves.
const sessionIds = await sessions.getSessionIdsFromIndexedProperty('userId', 'user1');
await sessions.updateSession(sessionIds[0], { something: 'has changed' });
Questions
1) Is my above thinking reasonable?
2) Are there any node packages I've not encountered that might be helpful?
3) If not, why not? I have come across a few people with closely related problems, but this must be a fairly common issue when working with serverless.
4) If I were to implement a module to my own liking, how do I get any confidence that I've done a good job security-wise? I could use the pieces of express-session that are relevant, but that's not a great long-term solution for good security.
5) Related to 4, if I were to hook into express-session but build my own store that does what I need, how would I get confidence in the security? Also, I haven't managed to find any docs on what the official API for an express-session store is (see the sketch after this list), which I find amazing given that express-session seems to be the go-to for sessions in Node.
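For what it's worth, the contract implemented by express-session's bundled MemoryStore is callback-based, so a skeleton custom store might look like this ('backend' is a hypothetical async wrapper around whatever database you choose):

const session = require('express-session');

// Minimal custom store sketch: get/set/destroy are the core methods;
// 'touch' is optional but recommended.
class MyStore extends session.Store {
    constructor(backend) {
        super();
        this.backend = backend;
    }
    get(sid, callback) {
        this.backend.fetch(sid)
            .then(data => callback(null, data || null))
            .catch(callback);
    }
    set(sid, sessionData, callback) {
        this.backend.save(sid, sessionData)
            .then(() => callback(null))
            .catch(callback);
    }
    destroy(sid, callback) {
        this.backend.remove(sid)
            .then(() => callback(null))
            .catch(callback);
    }
}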
Any help would be massively appreciated, thanks!
P.S. I appreciate that a lot of what I'm discussing relates to open-source projects that are often poorly funded. However, I was very surprised to have reached the above conclusions about the state of the ecosystem and wondered if I was missing something.
*Annoyingly, the suggested DynamoDB store package isn't great. Two of the features we'd want from it are not supported; indeed, PRs seem to have been opened back in 2019 but never looked at by the maintainer. Technically we don't absolutely have to use DynamoDB as our store, but it does offer a lot of features we like.
Do you plan to allow the creation of multiple objects in a single call? For example, currently if I want to create 50 devices (by import), I need to call the API 50 times.
I think this loads the server more than necessary, compared to sending all objects in the same call.
For a project we don't want to communicate the measurements in real time (every second) but defer their storage in Cumulocity. So potentially we need to create ~4000 measurements at a time, every hour. Is this approach realistic?
Sure, there's no problem with this approach. It also lets you optimise your mobile bandwidth if you send the data over a mobile data channel. POST a measurement collection instead of a single measurement, i.e., use
Content-Type: application/vnd.com.nsn.cumulocity.measurementCollection+json
and in the body, use
{ "measurements": [ { ... first measurement ... }, { ... second measurement ... }, ... ] }
If you plan to create a large number of measurements at the same time and on a regular basis on our public production system, we would appreciate advance notice for capacity provisioning.
There's currently no bulk API for creating multiple managed objects in the same call. It's not been a bottleneck for our customers in practical roll-out scenarios.
However, there's an API for bulk registration of devices. Maybe that helps? It's used by the upload button on the device registration page, and is described here: https://cumulocity.com/guides/reference/device-credentials/ ("Bulk device credentials")
Cheers,
André
I want to use the BreezeJS API for storing data in local storage (IndexedDB or WebSQL) and also want to sync the local data with SQL Server.
But I have failed to achieve this, and I am also unable to find a sample app of this kind built with BreezeJS, Knockout, and MVC Web API.
My requirements are:
1) If the internet connection is available, the data will come from SQL Server via the MVC Web API.
2) If the internet connection is down, the application will retrieve data from cached local storage (IndexedDB or WebSQL).
3) As soon as the connection is back, the local data will sync to SQL Server.
Please let me know whether I can achieve these requirements using the BreezeJS API.
If yes, please provide some links and a sample.
If no, what else can we use to achieve this type of requirement?
Please help me to meet this requirement. Thanks.
You can do this, but I would suggest simply using localStorage. Basically, every time you read from the server or save to the server, you export the entities and save them to local storage. Then, when you need to read in the data and the server is unreachable, you read the data from localStorage, use importEntities to get it into the manager, and then query locally.
function getData() {
    var query = breeze.EntityQuery.from("{YourAPI}");

    return manager.executeQuery(query)
        .then(saveLocallyAndReturnPromise)
        .fail(tryLocalRestoreAndReturnPromise);

    // If the query was successful remotely, save the data in case the
    // connection is lost later.
    function saveLocallyAndReturnPromise(data) {
        // Should add error handling here. This code assumes the
        // local processing will be successful.
        var cacheData = manager.exportEntities();
        window.localStorage.setItem('savedCache', cacheData);
        // Return the queried data as a promise so that this detour is
        // transparent to the viewmodel.
        return Q(data);
    }

    function tryLocalRestoreAndReturnPromise(error) {
        // Assume any error just means the server is inaccessible.
        // Simplified for the example; more robust error handling is
        // warranted.
        var cacheData = window.localStorage.getItem('savedCache');
        // NOTE: should handle an empty saved cache here by throwing an error.
        manager.importEntities(cacheData); // restore the saved cache
        var localQuery = query.using(breeze.FetchStrategy.FromLocalCache);
        return manager.executeQuery(localQuery); // this is a promise
    }
}
This is a code skeleton for simplicity. You should catch and handle errors, add an isConnected function to determine connectivity, etc.
If you are doing editing locally, there are a few more hoops to jump through. Every time you make a change to the cache, you will need to export either the whole cache or the changes (probably depending on the size of the cache). When there is a connection, you will need to test for local changes first and, if found, save them to the server before requerying the server. In addition, any schema changes made while offline complicate matters tremendously, so be aware of that.
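As a rough sketch of that save-changes-first flow (isConnected is the connectivity check suggested above and is assumed to exist; getChanges, exportEntities, and saveChanges are standard EntityManager methods):

function syncIfConnected() {
    if (!isConnected()) {
        // Offline: persist only the pending changes, which is cheaper
        // than exporting the whole cache.
        var pending = manager.exportEntities(manager.getChanges());
        window.localStorage.setItem('pendingChanges', pending);
        return Q(false);
    }
    // Online: push local changes before requerying the server.
    return manager.saveChanges().then(function () {
        return getData(); // requery now that the server is up to date
    });
}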
Hope this helps. A robust implementation is a bit more complex, but this should give you a starting point.
I am a PHP dev looking to port my API over to the Parse platform.
Am I right in thinking that you only need cloud code for complex operations? For example, consider the following methods:
// Simple function to fetch a user by id (SQL shown as pseudocode)
function getUser($userid) {
    return (SELECT * FROM users WHERE userid=$userid LIMIT 1);
}

// Another simple function; fetches all of a user's allergies (by their user id)
function getAllergies($userid) {
    return (SELECT * FROM allergies WHERE userid=$userid);
}

// Creates a script (story?) about the user using their user id.
// Uses their name and allergies to create the story.
function getScript($userid) {
    $user = getUser($userid);
    $allergies = getAllergies($userid);
    return "My name is {$user->getName()}. I am allergic to {$allergies}";
}
Would I need to implement getUser()/getAllergies() endpoints in Cloud Code? Or can I simply use Parse.Query("User")... thus leaving me with only the getScript() endpoint to implement in cloud code?
Cloud Code is for computation-heavy operations that should not be performed on the client, e.g. handling a large dataset.
It is also for performing beforeSave/afterSave and similar hooks.
In your example, provided you have set up a reasonable data model, none of the operations require Cloud Code.
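For instance, the two lookups could be plain client-side queries along these lines (assuming an 'Allergy' class with a 'user' pointer column; both names are illustrative):

// Fetch the user by objectId, then their allergies via the pointer column.
var userQuery = new Parse.Query(Parse.User);
userQuery.get(userId).then(function (user) {
    var allergyQuery = new Parse.Query("Allergy");
    allergyQuery.equalTo("user", user);
    return allergyQuery.find();
}).then(function (allergies) {
    console.log(allergies.length + " allergies found");
});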
Your approach sounds reasonable. I tend to put simple queries that will most likely not change on the client side, but it all depends on your scenario. When developing mobile apps I tend to put a lot of code in Cloud Code; I've found that it speeds up my development cycle. For example, if someone finds a bug and it's in Cloud Code: make the fix, run parse deploy, done! The change is available to all mobile environments instantly!!! If that same code is in my mobile app, it really sucks, because now I have to fix the bug, rebuild, push it to the App Store/Google Play, wait x number of days for it to be approved, and have the users download it... you see where I'm going here.
Take, for example, your
SELECT * FROM allergies WHERE userid=$userid
query. Even though this is a simple query, what if you want to sort it? Maybe add some additional filtering?
These are the kinds of things I think of when deciding where to put the code. Hope this helps!
As a side note, I have also found cloud code very handy when needing to add extra security to my apps.
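By way of illustration, getScript could live in Cloud Code like this (a sketch using the classic Parse.Cloud.define API; the 'Allergy' class and the 'name' fields are assumptions, not part of the question):

// Sketch: build the story server-side so it can be changed with a
// simple 'parse deploy', per the point about development speed above.
Parse.Cloud.define("getScript", function (request, response) {
    var userQuery = new Parse.Query(Parse.User);
    userQuery.get(request.params.userId).then(function (user) {
        var allergyQuery = new Parse.Query("Allergy");
        allergyQuery.equalTo("user", user);
        return allergyQuery.find().then(function (allergies) {
            var names = allergies.map(function (a) { return a.get("name"); });
            response.success("My name is " + user.get("name") +
                ". I am allergic to " + names.join(", "));
        });
    }).then(null, function (error) {
        response.error(error);
    });
});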