I want to know how the Odoo POS works offline. I checked other resources but couldn't find anything, neither on the official site nor anywhere else.
Inside every browser there is client-side storage, and the POS takes advantage of it.
The POS is a JavaScript single-page application, so the data you enter lives first in the browser as JavaScript objects rather than on the server.
For example, when an order is created the POS builds an order object holding the customer details and the cart lines, and keeps it in the browser's local storage.
A JavaScript call then sends the order to the server so it can be saved and processed. If the connection is lost or the network is unavailable, the order simply stays in the browser's storage; the client retries at regular intervals, and as soon as connectivity returns, all the queued data is sent to the server and processed.
In short, it is the magic of JavaScript.
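As a rough illustration of that pattern (this is not Odoo's actual code; the storage key and endpoint below are invented), an offline order queue in the browser could look something like this:

// Illustrative sketch only -- a simplified offline order queue, not Odoo's implementation.
type Order = { id: string; customer: string; lines: { product: string; qty: number }[] };

const QUEUE_KEY = 'pos_pending_orders'; // hypothetical localStorage key

// Always record the order locally first.
function queueOrder(order: Order): void {
  const pending: Order[] = JSON.parse(localStorage.getItem(QUEUE_KEY) ?? '[]');
  pending.push(order);
  localStorage.setItem(QUEUE_KEY, JSON.stringify(pending));
}

// Try to push everything that is still queued; keep whatever fails for the next attempt.
async function pushPendingOrders(): Promise<void> {
  const pending: Order[] = JSON.parse(localStorage.getItem(QUEUE_KEY) ?? '[]');
  const remaining: Order[] = [];
  for (const order of pending) {
    try {
      const res = await fetch('/pos/create_order', { // hypothetical endpoint
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(order),
      });
      if (!res.ok) remaining.push(order);
    } catch {
      remaining.push(order); // still offline: keep the order queued
    }
  }
  localStorage.setItem(QUEUE_KEY, JSON.stringify(remaining));
}

// Retry at a regular interval, as described above.
setInterval(pushPendingOrders, 30_000);

The real POS is considerably more involved, but the principle is the same: persist locally first, then push whenever the server is reachable.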
Let's say I have a notes app. I want to let the user make changes while offline, save the changes optimistically in a MobX store, and add a request to save the changes (on the server) to a queue.
Then when the internet connection is re-established I want to run the requests in the queue one by one so the data in the app syncs with data on the server.
Any suggestions would help.
I tried using react-native-job-queue but it doesn't seem to work.
I also considered react-native-queue but the library seems to be abandoned.
You could create a separate store (or an array in AsyncStorage) for pending operations, and add the operations to an array there when the network is disconnected. Tell your existing stores to look there for data, so you can render it optimistically. Then, when you detect a connection, run the updates in array order, and clear the array when done.
You could also use your existing stores, and add something like pending: true to values that haven't posted to your backend. However, you'll have less control over the order of operations, which sounds like it is important.
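A minimal sketch of the first idea, assuming the @react-native-async-storage/async-storage and @react-native-community/netinfo packages (the storage key and operation shape are made up for illustration, and retry/conflict handling is omitted):

import AsyncStorage from '@react-native-async-storage/async-storage';
import NetInfo from '@react-native-community/netinfo';

const PENDING_KEY = 'pendingOperations'; // hypothetical storage key

type PendingOp = { url: string; method: string; body?: unknown };

// Queue an operation while offline (call this instead of hitting the network directly).
export async function queueOperation(op: PendingOp): Promise<void> {
  const raw = await AsyncStorage.getItem(PENDING_KEY);
  const ops: PendingOp[] = raw ? JSON.parse(raw) : [];
  ops.push(op);
  await AsyncStorage.setItem(PENDING_KEY, JSON.stringify(ops));
}

// Replay the queue in order once a connection is detected.
export async function flushQueue(): Promise<void> {
  const raw = await AsyncStorage.getItem(PENDING_KEY);
  const ops: PendingOp[] = raw ? JSON.parse(raw) : [];
  for (const op of ops) {
    await fetch(op.url, {
      method: op.method,
      headers: { 'Content-Type': 'application/json' },
      body: op.body ? JSON.stringify(op.body) : undefined,
    });
  }
  await AsyncStorage.setItem(PENDING_KEY, JSON.stringify([]));
}

// Flush whenever connectivity comes back.
NetInfo.addEventListener((state) => {
  if (state.isConnected) {
    flushQueue().catch(console.warn);
  }
});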
As it turns out, I was wrong. The react-native-job-queue library does work; my mistake was trying to pass a function reference (the API call) to the Worker instead of passing an object containing the request URL and method, and then implementing the Worker to make the API call based on those parameters.
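For anyone hitting the same wall, here is a rough sketch of that setup based on react-native-job-queue's documented Worker/addJob usage (the endpoint URL and payload shape are just placeholders):

import queue, { Worker } from 'react-native-job-queue';

// The job payload is a plain serialisable object (URL, method, body), not a function reference.
type ApiCallPayload = { url: string; method: string; body?: unknown };

queue.addWorker(
  new Worker<ApiCallPayload>('apiCall', async (payload) => {
    const response = await fetch(payload.url, {
      method: payload.method,
      headers: { 'Content-Type': 'application/json' },
      body: payload.body ? JSON.stringify(payload.body) : undefined,
    });
    if (!response.ok) {
      // Throwing marks the job as failed so the queue can retry it later.
      throw new Error(`Request failed with status ${response.status}`);
    }
  })
);

// Queue a save; it runs when the queue is processed (e.g. once connectivity is back).
queue.addJob('apiCall', { url: 'https://example.com/api/notes/123', method: 'PUT', body: { text: 'updated note' } });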
I have a subroutine in my Controller
<HttpPost>
Sub Index(Id As Integer, varLotsOfData As String)
    'Point B.
    'By the time execution gets here, all the data has been accepted by the server.
End Sub
What I would like to do is capture the Id of the inbound POST and mark, for example, a database record to say "Id xx is receiving data".
Receiving the POST can take a long time as there is a lot of data.
When execution gets to point B I can mark the record "All data received".
Where can I place this type of "pre-POST completed" code?
I should add - we are receiving the POST data from clients that we do not control - that is, it is most likely a client's server sending the data, not a web browser client that we have served up from our web server.
UPDATE: This is looking more complex than I had imagined.
I'm thinking that a possible solution would be to inspect the worker processes in IIS programmatically. You can do this via IIS Manager, for example - How to use IIS Manager to get Worker Processes (w3wp.exe) details information?
From your description, you want to show on the client page that the method is executing (for example with a loading GIF), and then show a message to the user once execution has completed.
The answer is simple: use SignalR.
Here are some references:
Getting started with signalR 1.x and Mvc4
Creating your first SignalR hub MVC project
Hope this helps.
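As a rough illustration of the client side, here is what receiving such notifications could look like with the current @microsoft/signalr TypeScript client (the articles above cover the older SignalR 1.x jQuery client, and the hub URL and method names below are hypothetical):

import * as signalR from '@microsoft/signalr';

// Hypothetical hub and event names; the server hub would push these to the browser.
const connection = new signalR.HubConnectionBuilder()
  .withUrl('/uploadStatusHub')
  .withAutomaticReconnect()
  .build();

// Sent by the server as soon as it starts receiving the POST ("Id xx is receiving data").
connection.on('uploadStarted', (id: string) => {
  console.log(`Upload ${id} started - show the loading indicator`);
});

// Sent from "Point B" once all the data has been received.
connection.on('uploadCompleted', (id: string) => {
  console.log(`Upload ${id} completed - show the completion message`);
});

connection.start().catch((err) => console.error('SignalR connection failed:', err));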
If I understand your goal correctly, it sounds like HttpRequest.GetBufferlessInputStream might be worth a look. It allows you to begin acting on incoming post data immediately and in "pieces" rather than waiting until the entire post has been received.
An excerpt from Microsoft's documentation:
...provides an alternative to using the InputStream property, which waits until the whole request has been received. In contrast, the GetBufferlessInputStream method returns the Stream object immediately. You can use the method to begin processing the entity body before the complete contents of the body have been received and asynchronously read the request entity in chunks. This method can be useful if the request is uploading a large file and you want to begin accessing the file contents before the upload is finished.
So you could grab the beginning of the post, and provided your client-facing page sends the ID towards the beginning of its transmission, you may be able to pull that out. Of course, this would be reading raw byte data which would need to be decoded so you could grab the inbound post's ID. There's also a buffered one that will allow the stream to be read in pieces but will also build a complete request object for processing once it has been completely received.
Create a custom action filter.
Action filters are used for executing filtering logic either before or after an action method is called. They are custom attributes that provide a declarative means to add pre-action and post-action behavior to the controller's action methods.
Specifically you'll want to look at the
OnActionExecuted – This method is called after a controller action is executed.
Here are a couple of links:
http://www.infragistics.com/community/blogs/dhananjay_kumar/archive/2016/03/04/how-to-create-a-custom-action-filter-in-asp-net-mvc.aspx
http://www.asp.net/mvc/overview/older-versions-1/controllers-and-routing/understanding-action-filters-vb
Here is a lab, but I think it's C#
http://www.asp.net/mvc/overview/older-versions/hands-on-labs/aspnet-mvc-4-custom-action-filters
Currently the JSONStore API provides a load() method whose documentation says:
"This function always stores whatever it gets back from the adapter. If the data exists, it is duplicated in the collection."
This means that if you want to avoid duplicates by calling load() on an already populated collection, you need to empty or drop the collection beforehand. But if you want to be able to keep the elements you already have in the collection in case connectivity is lost and your application goes into offline mode, you also need to keep track of these existing elements.
Since the API doesn't provide an "overwrite" option that would replace the existing elements when the call to the adapter succeeds, I'm wondering what kind of logic should be put in place to manage both offline availability of data and the ability to refresh at any time. It is not obvious how to handle all the failure cases by nesting the JS code, because of the promises...
Thanks for your advice!
One approach to achieve this:
Use enhance to create your own load method (i.e. loadAndOverwrite). You should have access to all the variables kept inside a JSONStore instance (collection name, adapter name, adapter load procedure name, etc. -- you will probably use those variables in the invokeProcedure step below).
Call push to make sure there are no local changes.
Call invokeProcedure to get data, all the variables you need should be provided in the context of enhance.
Find if the document already exists and then remove it. Use {push: false} so JSONStore won't track that change.
Use add to add the new/updated document. Use {push: false} so JSONStore won't track that change.
Alternatively, if the document exists you can use replace to update it.
Alternatively, you can use removeCollection and call load again to refresh the data.
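Putting those steps together, a rough sketch could look like the following (the collection, adapter, and procedure names are invented, and the exact WL.JSONStore / WL.Client signatures and their promise behaviour should be double-checked against your Worklight version):

// Sketch only -- assumes promise-style JSONStore calls and an invented 'PeopleAdapter'.
declare const WL: any;

function fetchFromAdapter(): Promise<any[]> {
  // Wrap invokeProcedure in a promise so it can be chained below.
  return new Promise((resolve, reject) => {
    WL.Client.invokeProcedure(
      { adapter: 'PeopleAdapter', procedure: 'getPeople', parameters: [] }, // hypothetical adapter
      {
        onSuccess: (res: any) => resolve(res.invocationResult.peopleList),  // hypothetical result field
        onFailure: (err: any) => reject(err),
      }
    );
  });
}

function loadAndOverwrite(): Promise<void> {
  const collection = WL.JSONStore.get('people'); // hypothetical collection name

  return Promise.resolve(collection.push())      // 1. push local changes first
    .then(() => fetchFromAdapter())              // 2. get fresh data from the adapter
    .then((freshDocs: any[]) =>
      Promise.resolve(collection.findAll()).then((existing: any[]) => {
        // 3. remove the existing copies without tracking the change...
        const removals = existing.map((doc: any) => collection.remove(doc._id, { push: false }));
        return Promise.all(removals).then(() =>
          // 4. ...then add the fresh documents, again without tracking the change.
          collection.add(freshDocs, { push: false })
        );
      })
    )
    .then(() => undefined);
}

// Register it on the collection, as in step 1; afterwards it can be called as
// WL.JSONStore.get('people').loadAndOverwrite().
WL.JSONStore.get('people').enhance('loadAndOverwrite', loadAndOverwrite);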
There's an example that shows how to use all those API calls here.
Regarding promises, read this from InfoCenter and this from HTML5Rocks. Google can provide more information.
I'm currently implementing an update feature in an app that I'm building. It uses NSURLConnection and the NSURLConnectionDelegate to download the files and save them to the user's device.
At the moment, one update item downloads multiple files, but I want to display the download of these multiple files using one UIProgressView. So my problem is: how do I get the expected content length of all the files I'm about to download? I know I can get the expectedContentLength of the NSURLResponse object that gets passed into didReceiveResponse, but that's just for the file currently being downloaded.
Any help is much appreciated. Thanks.
How about having some kind of information file on your server which gives you the total number of bytes? You could load that first and then load your files, subtracting the loaded amount for each file from the total.
Another method would be to connect to all files first and cancel the connections after you have received the responses. Add up the expected bytes of all files and use that as the basis for showing the total progress while loading the files.
Downside of #1: you have to manually keep track of the bytes.
Downside of #2: you'll have double the number of requests, even though they get cancelled after the response.
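To make the arithmetic behind approach #2 concrete, here is the idea sketched in TypeScript using HEAD requests (in the app itself this would be NSURLConnection and expectedContentLength rather than fetch; the URLs are placeholders):

// Ask for the sizes first, then report aggregate progress against the grand total.
async function totalExpectedBytes(urls: string[]): Promise<number> {
  const sizes = await Promise.all(
    urls.map(async (url) => {
      const res = await fetch(url, { method: 'HEAD' });
      return Number(res.headers.get('content-length') ?? 0);
    })
  );
  return sizes.reduce((sum, n) => sum + n, 0);
}

// While downloading, the single progress value is simply:
//   progress = bytesReceivedAcrossAllFiles / totalExpectedBytes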
Use the ASIHTTPRequest open-source framework, which is widely used for this purpose.
Here you just need to set the progress view delegate, and it will keep your progress view updated.
Try this:
http://allseeing-i.com/ASIHTTPRequest/
I am trying to establish the best practice for handling the creation of child objects when the parent object is incomplete or doesn't yet exist in a web application. I want to handle this in a stateless way, so in-memory objects are out.
For example, say we have a bug tracking application.
A Bug has a title and a description (both required) and any number of attachments. So the "Bug" is the parent object with a list of "Attachment" children.
So you present a page with a title input, a description input, and a file input to add an attachment. People then add the attachments but we haven't created the Parent Bug as yet.
How do you handle persisting the added attachments?
Obviously we have to keep track of the attachments added, but at this point we haven't persisted the parent "Bug" object to attach the "Attachment" to.
Create the incomplete bug and define a process for handling incompleteness. (Wait X minutes/hours/days, then delete it/email someone/accept it as it is.) Even without a title or description, knowing that a problem occurred, and the information in the attachment, is potentially useful. The attachment may include a full description, but the user just put it somewhere other than you'd intended. Or it may only contain scattered data points which are meaningless on their own - but could corroborate another user's report.
I would normally just create the Bug object and its child, the Attachment object, within the same HTTP response after the user has submitted the form.
If I'm reading you right, the user input consists of a single form with the aforementioned fields for bug title, description, and the attached file. After the user fills out these fields (including the selection of a file to upload), then clicks on Submit, your application receives all three of these pieces of data simultaneously, as POST variables in the HTTP request. (The attachment is just another bit of POST data, as described in RFC 1867.)
From your application's end, depending on what kind of framework you are using, you will probably be given a filename pointing to the location of the uploaded file in some suitable temporary directory. E.g., with Perl's CGI module, you can do:
use CGI qw(:standard);
my $query = CGI->new;
print "Bug title: " . $query->param("title") . "\n";
print "Description: " . $query->param("description") . "\n";
# param() on a file field returns the client-supplied filename (also usable as a filehandle);
# tmpFileName() gives the server-side temporary path of the uploaded data.
print "Uploaded attachment: " . $query->param("attachment") . "\n";
print "Temporary file on server: " . $query->tmpFileName( $query->param("attachment") ) . "\n";
to obtain the name of the uploaded file (the file data sent through the form by your user is automatically saved in a temporary file for your convenience), along with its metadata. Since you have access to both the textual field data and the file attachment simultaneously, you can create your Bug and Attachment objects in whatever order you please, without needing to persist any incomplete data across HTTP responses.
Or am I not understanding you here?
In this case I would consider storing the attachments in some type of temporary storage, be it Session State, a temp directory on the file system, or perhaps a database. Once the bug has been saved, I would then copy the attachments to their actual place.
Careful with session though; if you let them upload large attachments you could run into memory issues depending on your environment.
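A rough sketch of that temp-storage idea in TypeScript on Node (the directory layout, draft-id scheme, and function names are all made up for illustration):

import { randomUUID } from 'crypto';
import { promises as fs } from 'fs';
import * as path from 'path';

const TEMP_ROOT = path.join(process.cwd(), 'tmp-attachments');  // hypothetical temp area
const FINAL_ROOT = path.join(process.cwd(), 'bug-attachments'); // hypothetical final area

// Hand the browser a draft id when the "new bug" form opens, so attachments can be
// tagged before the Bug record exists.
export function createDraftId(): string {
  return randomUUID();
}

// Store an uploaded file under the draft id until the bug is actually saved.
export async function storeTempAttachment(draftId: string, fileName: string, data: Buffer): Promise<void> {
  const dir = path.join(TEMP_ROOT, draftId);
  await fs.mkdir(dir, { recursive: true });
  await fs.writeFile(path.join(dir, fileName), data);
}

// Once the Bug record is persisted, move the draft's attachments to their real home.
export async function promoteAttachments(draftId: string, bugId: number): Promise<void> {
  const from = path.join(TEMP_ROOT, draftId);
  const to = path.join(FINAL_ROOT, String(bugId));
  await fs.mkdir(to, { recursive: true });
  for (const name of await fs.readdir(from)) {
    await fs.rename(path.join(from, name), path.join(to, name));
  }
  await fs.rm(from, { recursive: true, force: true });
}

A scheduled cleanup of old draft directories covers the case where the user abandons the form without ever saving the bug.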
One approach I saw before was: as soon as the user opens the new bug form, a new bug is generated in your database. Depending on your app this may or may not be a good thing. If you're collecting data from a user, for example, this is useful as you get some intelligence even if they fail to enter the data and leave your site. You still know they started the process, along with whatever else you collected, like the user agent, etc.