I am using NSURLConnection to make HTTP requests in my iPhone application, and everything works just fine.
The problem is that after logging in, I need to keep the same session to get data from the server.
I read a few posts saying all I needed was to use the same instance of NSURLConnection and it would use the same session... if that is true, it doesn't make sense to me, because NSURLConnection is not mutable and there is no method to change the request, even though I have to access different pages.
Is there any simple way to keep a session using NSURLConnection?
If you are managing sessions with cookies, there is no need to do anything special to achieve session management. The URL loading system automatically sends any stored cookies appropriate for an NSURLRequest, unless the request specifies not to send cookies. So your sessions should be managed automatically for you.
However, as Apple's documentation says, if someone has set the cookie-acceptance policy to reject all cookies or to accept cookies only selectively, you might be in a fix (you can change the cookie-acceptance policy yourself, too). In such a case, you might resort to URL-based session management, in which you append a session identifier to the URL as a parameter (you can get this identifier as part of the successful log-in response), which can then be extracted on the server side. This, however, is considered really bad practice.
Another way, which I have come across more often, is to get a session identifier as part of the response to a successful log-in and include that identifier in all your subsequent requests as a parameter. This would, however, require a major change in the way the server handles sessions.
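For what it's worth, the wire-level pattern in that last approach is not specific to any client library. Here is a minimal sketch in JavaScript, where the endpoint paths and the sessionId field name are pure assumptions for illustration; on iOS you would issue the same two requests with NSURLConnection, and the only thing that matters is that the identifier travels with every request.

    // Hypothetical log-in call: the server returns a session identifier
    // in the body of a successful log-in response.
    async function logIn(username, password) {
      const res = await fetch('https://example.com/api/login', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ username, password })
      });
      const body = await res.json();
      return body.sessionId; // assumed field name in the log-in response
    }

    // Every subsequent request carries the identifier as a parameter,
    // and the server uses it to look up the session.
    async function fetchAccount(sessionId) {
      const res = await fetch(
        'https://example.com/api/account?sessionId=' + encodeURIComponent(sessionId)
      );
      return res.json();
    }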
Related
I need to get some data from a REST API in my GraphQL API. For that I'm extending RESTDataSource from apollo-datasource-rest.
From what I understood, RESTDataSource automatically caches requests, but I'd like to verify that it is indeed cached. Is there a way to know if my request is getting its data from the cache or if it's hitting the REST API?
I noticed that the first request takes some time, but the following ones are way faster, and also the didReceiveResponse method is not called every time I make a query. Is it because the data is loaded from the cache?
I'm using apollo-server-express.
Thanks for your help!
You can time the requests like the following (note that this.get returns a promise, so await it before stopping the timer):

    console.time('restdatasource get req')
    await this.get(url)
    console.timeEnd('restdatasource get req')
Now, if the time is under 100-150 milliseconds, that should be a request coming from the cache.
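Another way to observe what the data source is actually doing is to add logging to the RESTDataSource hooks and watch when each one fires across repeated queries. This is a sketch only: exactly when each hook fires (memoized results within a request, the HTTP cache, or a fresh network call) can vary with the library version, so treat it as a way to watch the behaviour rather than a specification of it. The MoviesAPI name, baseURL, and getMovie method are assumptions.

    const { RESTDataSource } = require('apollo-datasource-rest');

    class MoviesAPI extends RESTDataSource {
      constructor() {
        super();
        this.baseURL = 'https://example.com/api/'; // assumed REST endpoint
      }

      willSendRequest(request) {
        // Fired while a request is being prepared.
        console.log('preparing request for', request.path);
      }

      async didReceiveResponse(response, request) {
        // Fired when an HTTP response is processed; if your resolver returns
        // data without this line appearing, no fresh response was processed.
        console.log('got response', response.status, 'for', request.url);
        return super.didReceiveResponse(response, request);
      }

      async getMovie(id) {
        return this.get(`movie/${id}`); // hypothetical resource path
      }
    }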
You can monitor the console, under the Network tab: you will be able to see which endpoints the application is calling. If it uses cached data, no new request to your endpoint will be logged.
If you are trying to verify this locally, one good option is to set up a local proxy so that you can see all the network calls being made (no network call means the response was served from the cache). Then you can simply configure your app, using this apollo documentation, to forward all outgoing calls through a proxy like mitmproxy.
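As a rough sketch of what that can look like (assuming node-fetch 2.x, the https-proxy-agent package, and mitmproxy listening on its default port 8080; wiring the agent into RESTDataSource itself is covered by the Apollo docs referred to above):

    const fetch = require('node-fetch');
    const { HttpsProxyAgent } = require('https-proxy-agent');

    // Route outgoing calls through the local mitmproxy instance so every
    // real network request shows up in the proxy's flow list.
    // (For HTTPS targets you also need to trust mitmproxy's CA certificate.)
    const agent = new HttpsProxyAgent('http://127.0.0.1:8080');

    async function probe(url) {
      const res = await fetch(url, { agent });
      console.log(res.status, 'for', url);
    }

Any call that is answered from the cache simply never appears in mitmproxy's list of flows.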
For a RESTful API that I'm creating, I need some functionality that gets a resource, but if it doesn't exist, creates it and then returns it. I don't think this should be the default behaviour of a GET request. I could enable this functionality through a certain parameter I give to the GET request, but that seems a little bit dirty.
The main point is that I want to do only one request for this, as these requests are going to be made from mobile devices that potentially have a slow internet connection, so I want to limit the number of requests as much as possible.
I'm not sure if this fits in the RESTful world, but if it doesn't, that will disappoint me, because it will mean I have to make a little hack on the REST idea.
Does anyone know of a RESTful way of doing this, or otherwise, a beautiful way that doesn't conflict with the REST idea?
Does the client need to provide any information as part of the creation? If so, then you really need to separate out GET and POST, as otherwise you would have to send that information with each GET, and that will be very ugly.
If instead you are sending a GET without any additional information, then there's no reason why the backend can't create the resource if it doesn't already exist before returning it. Depending on the amount of time it takes to create the resource, you might want to think about going asynchronous and using 202 as per other answers, but that then means your client has to handle (yet) another response code, so it might be better to just wait for the resource to be finalised and returned.
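If you do go that route, the server-side shape is simple. Here is a minimal sketch with Express; the /things resource and the in-memory store are hypothetical stand-ins for whatever your backend really uses:

    const express = require('express');
    const app = express();

    // Hypothetical storage helpers; swap in your real persistence layer.
    const store = new Map();
    const findThing = (id) => store.get(id);
    const createThing = (id) => {
      const thing = { id, createdAt: new Date().toISOString() };
      store.set(id, thing);
      return thing;
    };

    app.get('/things/:id', (req, res) => {
      // Create-if-missing behind a plain GET: a single round trip for the
      // client, but note that it makes this GET non-safe in the strict
      // HTTP sense, as discussed elsewhere on this page.
      let thing = findThing(req.params.id);
      if (!thing) {
        thing = createThing(req.params.id);
      }
      res.json(thing);
    });

    app.listen(3000);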
Very simple:
Make a HEAD request and examine the response code: either 404 or 200. If you need the body, use GET instead.
If it's not available, perform a PUT or POST; the server should respond with 201 Created and a Location header containing the URL of the newly created resource.
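A sketch of that flow from the client side, using a hypothetical /things/42 style URL (fetch is shown, but any HTTP client works the same way):

    async function getOrCreateThing(url) {
      // Check whether the resource exists without pulling down its body.
      const head = await fetch(url, { method: 'HEAD' });

      if (head.status === 404) {
        // Not there yet: create it, then follow the Location header.
        const created = await fetch(url, {
          method: 'PUT',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({})
        });
        const location = created.headers.get('Location') || url;
        const res = await fetch(location);
        return res.json();
      }

      // It already exists: a plain GET returns the body.
      const res = await fetch(url);
      return res.json();
    }

Note that this costs two or three round trips in the worst case, so it trades the single-request goal of the question for cleaner verb semantics.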
I want to be able to notify the user if he entered the wrong username/password, or if, for example, the database is down. I am not sure whether I should do it in didLoadResponse (and just check that the response is not isOK), or in didFailLoadWithError.
Thanks
How you handle it depends on how you perform a login.
If you do basic authentication, by passing the username and password in the header of the request, then you'll get an error back from the service you're calling, and your delegate method "objectLoader:didFailWithError:" will get called. This method will also most likely get called if there's a catastrophic problem on the backend, like the database being down.
If you have a separate web service that performs the login operation, then it probably sends back a response indicating whether the username/password was valid or not. In this case, your "objectLoader:didLoadObject:" method will probably get called, and you'll have to interpret the result appropriately.
Keep in mind that this behavior is totally controlled by what the back-end services do. If you can't talk directly with the people working on the services, then this may just be trial and error until you discover how those services work.
I get confused about when and why you should use specific verbs in REST.
I know basic things like:
GET -> for retrieval
POST -> for adding a new entity
PUT -> for updating
DELETE -> for deleting
These verbs are supposed to be used for the operations I wrote above, but I don't understand why.
What will happen if, inside a GET handler, I add a new entity, or inside POST I update an entity, or maybe inside DELETE I add an entity? I know this may be a noob question, but I need to understand it. It sounds very confusing to me.
#archil has an excellent explanation of the pitfalls of misusing the verbs, but I would point out that the rules are not quite as rigid as what you've described (at least as far as the protocol is concerned).
GET MUST be safe. That means that a GET request must not change the server state in any substantial way. (The server could do some extra work like logging the request, but will not update any data.)
PUT and DELETE MUST be idempotent. That means that multiple calls to the same URI will have the same effect as one call. So for example, if you want to change a person's name from "Jon" to "Jack" and you do it with a PUT request, that's OK because you could do it one time or 100 times and the person's name would still have been updated to "Jack".
POST makes no guarantees about safety or idempotency. That means you can technically do whatever you want with a POST request, but you lose any advantage that clients could take of those assumptions. For example, you could use POST to do a search, which is semantically more of a GET request. There won't be any problems, but browsers (or proxies or other agents) will never cache the results of that search, because they can't assume that nothing changed as a result of the request. Further, web crawlers will never perform a POST request, because they cannot assume the operation is safe.
The entire HTML version of the world wide web gets along pretty well without PUT or DELETE and it's perfectly fine to do deletes or updates with POST, but if you can support PUT and DELETE for updates and deletes (and other idempotent operations) it's just a little better because agents can assume that the operation is idempotent.
See the official HTTP specification for the real nitty-gritty on safety and idempotency.
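To make the mapping concrete, here is a minimal Express sketch; the /people resource and the in-memory store are made up purely for illustration, and the comments note which guarantee each handler is expected to honour:

    const express = require('express');
    const app = express();
    app.use(express.json());

    const people = new Map(); // hypothetical in-memory store

    // GET is safe: reading must not change server state.
    app.get('/people/:id', (req, res) => {
      const person = people.get(req.params.id);
      person ? res.json(person) : res.sendStatus(404);
    });

    // PUT is idempotent: sending the same representation twice leaves
    // the resource in the same state as sending it once.
    app.put('/people/:id', (req, res) => {
      people.set(req.params.id, req.body);
      res.json(req.body);
    });

    // POST makes no safety or idempotency guarantees: here it creates a
    // new resource, so repeating the call creates another one.
    app.post('/people', (req, res) => {
      const id = String(people.size + 1);
      people.set(id, req.body);
      res.status(201).location(`/people/${id}`).json(req.body);
    });

    // DELETE is idempotent: deleting twice ends in the same state
    // (the resource is gone), even if the second call returns 404.
    app.delete('/people/:id', (req, res) => {
      people.delete(req.params.id);
      res.sendStatus(204);
    });

    app.listen(3000);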
Protocol is protocol. It is meant to define every rule related to it. HTTP is a protocol too. All of the above rules (including the HTTP verb rules) are defined by the HTTP protocol, and their usage is defined by the HTTP protocol. If you do not follow these rules, only you will understand what happens inside your service; it will not follow the rules of the protocol and will be confusing for other users.
There was an example, one time, about a famous photo site (it does not matter which) that deleted pictures with GET requests. One user of that site installed the Google Desktop search program, which archives pages locally. As that program knew that GET operations are only used to get data and should not affect anything, it made GET requests to every available URL (including those GET-delete URLs). As the user was logged in and the cookie was in the browser, there were no authorization problems. The result: all of the user's photos were deleted on the server, because of incorrect usage of the HTTP protocol and the GET verb. That's why you should always follow the rules of the protocol you are using. Although technically possible, it is not right to override the defined rules.
Using GET to delete a resource would be like having a function that is named and documented as adding something to an array, but that deletes something from the array under the hood. REST has only a few well-defined methods (the HTTP verbs). Users of your service will expect your service to stick to these definitions; otherwise it's not a RESTful web service.
If you do so, you cannot claim that your interface is RESTful. The REST principle mandates that the specified verbs perform the actions that you have mentioned. If they don't, then it can't be called a RESTful interface.
I want to store values in variables to access from another page (a.k.a. state management).
Now, I cannot use sessions, since I have multiple Zope instances; if one fails, the user needs to be redirected to another Zope instance, and a session is valid only for one Zope instance.
Now my remaining options are
submit a Hidden input tag using POST method
Passing through URL with GET method
Using cookies
Using Database (which I think is 'making simple things complex'.)
I am not even considering the first 2 methods and I think using cookies is not secure.
So, is there a commercial or open-source module that can do cookie management securely (encryption, etc.)?
If not I will have to use a database.
Please inform me if I am missing something.
Version - Zope 2.11.1
The SESSION support built into Zope 2 actually keeps the session in a temporary partition of the ZODB, so I think it is in fact valid for multiple Zope clients connecting to the same ZEO server. The cost of this is that all session changes invoke the transaction machinery and result in a commit, so just make sure you're not using the SESSION in something very low-level like PAS auth, or you'll have commits hitting your ZODB for every image, CSS file, and JS file.