Shopware 6: How long do articles that have been placed on the wish list by customers remain stored? - shopware6

How long do articles that have been placed on the wish list by customers remain stored?
I need a short answer for my customer: how long is the wish list stored, and how long does it stay in the cache?

Wishlist items are stored indefinitely in a dedicated database table for all logged-in customers; there is no cleanup routine for this table.
For guest customers, it's a bit more complex. If they've disabled cookies, they can't use the wishlist at all without logging in.
Otherwise, the wishlist uses the browser's local storage, falling back to session storage, and failing that, a cookie.
Local storage has no inherent expiration but can be cleared by the user or the browser. Session storage is cleared when the session ends, usually when the last tab of the page is closed.
So for a short answer: Usually, the wishlist does not expire.

Related

Cookie Authentication for Client - is session store needed?

I am creating an application and I am looking for a solution for user authentication (basically, checking whether the user is logged in). From what I read online, many people recommend using a session store/table in your DB (storing roles, views, etc.) versus just storing the cookie id in that user's column in the DB. My question is: what is the difference between storing this data in a "session" store, which is basically just another table, and storing it in your database alongside the other user data (username, passwordHash, etc.)? I understand this is useful for data that may change when the user logs in and out again, but are there any advantages to having a session store if my application's state stays consistent across logins. Thanks.
You need a way to store user data between HTTP requests, and sessions help you do that. When a user visits your site, the server creates a new session for the user and assigns them a cookie. The next time the user comes to the site, the cookie is checked, and the session id stored in the cookie is retrieved and looked up in the session store. The session store is the place where you keep all the data for your sessions, so using one automates this bookkeeping and eases your work: whenever someone hits your server, it can resolve the user's session id against your database. I would also recommend looking into JWT, which is another interesting way to do authentication.

Architecture for fast globally distributed user quota management

We have built a free, globally distributed mobility analytics REST API. That means we have servers all over the world which run different versions (USA, Europe, etc.) of the same application. The services are behind a load balancer, so I can't guarantee that the same user always gets the same application/server if he/she makes requests today or tomorrow. The API is public, but users have to provide an API key in order for us to match them to their paid request quota.
Since we do heavy number crunching with every request, we want to minimize request times as far as possible, in particular for authentication/authorization and quota monitoring. Since we currently only use one user database (which has to be located in a single data center), there are cases where users in the US make a request to an application/server in the US, which then authenticates the user in Europe. So we are looking for a solution where the user database interaction:
happens on the same application server
gets synchronized between all application servers
should be easily integrable into a Java application
should be fast (changes happen in every request)
Things we have done so far:
a single database on each server > not synchronized, nightmare
a single database for all servers > ok, when used with slave as fallback but American users have to authenticate over the Atlantic
started installing BDR but failed along the way (no time, too complex, hard to make the transition)
looked at redis.io
Since this is my first globally distributed REST API, I wonder how other companies do this (Yelp, Google, etc.).
Any feedback is very kindly appreciated,
Cheers,
Daniel
There is no right answer, there are several ways to perform this. I'll describe the one I'm most familiar with (and which probably is one of the simplest ways to do it, although probably not the most robust one).
1. Separate authentication and authorization
First of all, separate authentication and authorization. A user authenticating across the Atlantic is fine; every user request that requires an authorization check across the Atlantic is not. Main user credentials (e.g. a password hash) are a centralized resource, and there is no way around that. But you do not need the main credentials every time a request has to be authorized.
This is how a company I worked at did it (although this was postgres/python/django, nothing to do with java):
Each server has a cache of database queries in memcached but it does not cache user authentication (memcached is very similar to redis, which you mention).
The authentication is performed in the main data centre, always.
A successful authentication produces a user session that expires after a day.
The session is cached.
This may produce a situation in which a user has an action authorized by more than one session at a given time. The algorithm is: if at least one user session has not expired, the user is authorized. Since the cache is local, there is no need for the costly operation of fetching data across the Atlantic on every request.
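The authorization rule above ("at least one unexpired session") reduces to a one-line check against the local cache. A minimal sketch, assuming sessions are cached as dicts with a creation timestamp; the names and the one-day TTL are taken from the answer, everything else is illustrative:

```python
import time

SESSION_TTL = 24 * 3600  # sessions expire after a day, per the scheme above

def is_authorized(cached_sessions: list[dict], now: float) -> bool:
    """A user is authorized if at least one locally cached session is unexpired."""
    return any(now - s["created_at"] < SESSION_TTL for s in cached_sessions)

now = time.time()
fresh = {"created_at": now - 3600}           # one hour old: still live
stale = {"created_at": now - 2 * 24 * 3600}  # two days old: expired
assert is_authorized([stale, fresh], now)
assert not is_authorized([stale], now)
```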
2. Session revival
Sessions expiring after exactly a day may be annoying for the user, since he can never be sure that his session will not expire in the next couple of seconds. Each request the user makes that is authorized by a session shall extend the session lifetime to a full day again.
This is easy to implement: keep in the session the timestamp of the last request made, and count the session lifetime from that timestamp. If more than one session may authorize a request, the algorithm shall select the youngest session (the one with the longest lifetime remaining) to update.
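As a sketch of that revival rule (field names are hypothetical, the logic follows the answer): find the live sessions, pick the one with the most recent `last_request`, and stamp it with the current time so its lifetime resets to a full day.

```python
SESSION_TTL = 24 * 3600  # one day, as above

def revive(sessions: list[dict], now: float) -> dict:
    """Pick the youngest live session (longest remaining lifetime) and
    reset its lifetime by stamping it with the current request time."""
    live = [s for s in sessions if now - s["last_request"] < SESSION_TTL]
    youngest = max(live, key=lambda s: s["last_request"])
    youngest["last_request"] = now
    return youngest

now = 1_000_000.0
a = {"id": "a", "last_request": now - 50_000}
b = {"id": "b", "last_request": now - 10}
assert revive([a, b], now)["id"] == "b"
assert b["last_request"] == now  # its lifetime starts over
```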
3. Request logging
This is as far as the live system I worked with went. The remainder of the answer is about how I would extend such a system to make space for logging the requests and verifying request quota.
Assumptions
Let's start by making a couple of design assumptions and arguing why they are good assumptions. You are now running a distributed system, since not all data on the system is in a single place. A distributed system shall give priority to response speed and be as horizontal (not hierarchical) as possible; to achieve this, consistency is sacrificed.
The session mechanism above already sacrifices some consistency. For example, if a user logs in and talks continuously to server A for a day and a half, and then a load balancer points the user to server B, the user may be surprised that he needs to log in again. That is an unlikely situation: in most cases a load balancer would divide the user's requests equally over server A and server B over the course of that day and a half, and both servers will have live sessions at that point.
In your question you wonder how Google deals with request counting, and I'll argue here that it deals with it in an inconsistent way. By that I mean you cannot enforce a strict upper limit of requests if you're sacrificing consistency. Companies like Google or Yelp simply say:
"You have 12000 requests per month but if you do more than that you will pay $0.05 for every 10 requests above that".
That allows for an easier design: you can count requests at any time after they happened, the counting does not need to happen in real time.
One last assumption: a distributed system will have problems with duplicated internal data. This happens because parts of the system run in real time and parts do batch processing without stopping or timestamping the real-time system, so you cannot be 100% sure about the state of the real-time system at any given point. It is mandatory that every request coming from a customer have a unique identifier of some sort; it can be as simple as customer number + sequence number, but it needs to exist in every request. You may also add such an identifier the moment you receive the request.
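The "customer number + sequence number" scheme mentioned above can be sketched as follows. This is illustrative only: the per-customer counters here live in process memory, whereas a real system would persist them (or stamp the id at ingress).

```python
from itertools import count

# Hypothetical per-customer sequence counters; a real system would persist these.
_counters: dict[str, count] = {}

def request_id(customer_number: str) -> str:
    """Build a unique request id as customer number + per-customer sequence number."""
    seq = _counters.setdefault(customer_number, count(1))
    return f"{customer_number}-{next(seq)}"

assert request_id("cust42") == "cust42-1"
assert request_id("cust42") == "cust42-2"
assert request_id("cust7") == "cust7-1"
```

The only property that matters downstream is uniqueness: the id lets the central database recognize a request it has already counted.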
Design
Now we extend our user session that is cached on every server (often cached in a different state on different servers that are unaware of each other). For every customer request we store the request's unique identifier as part of the session cache. That's right, the request counter in the central database is not updated in real time.
At a point in time (end of day processing, for example) each server performs a batch processing of request identifiers:
Live sessions are duplicated, and the copies are expired.
All request identifiers in expired sessions are concatenated together, and the central database is written to.
All expired sessions are purged.
This produces a race condition, where a live session may receive requests while the server is talking to the central database. For that reason we do not purge request identifiers from live sessions. This, in turn, causes a problem where a request identifier may be logged to the central database twice. And it is for that reason that we need the identifiers to be unique: the central database shall ignore request updates whose identifiers are already logged.
Advantages
99.9% Uptime, the batch processing does not interrupt the real time system.
Reduced writes to the database, and reduced communication with the database in general.
Easy horizontal growth.
Disadvantages
If a server goes down, recovering the requests performed may be tricky.
There is no way to stop a customer from making more requests than he is allowed to (that's a peculiarity of distributed systems).
Need to store unique identifiers for all requests, a counter is not enough to measure the number of requests.
Invoicing the user does not change, you just query the central database and see how many requests a customer performed.

MVC - Store secure information

I just came across this question during my MVC studies.
Is it possible that B is the correct answer?
You are designing a distributed application. The application must store secure information that is
specific to an individual user. The data must be automatically purged when the user logs off. You
need to save transient information in a secure data store. Which data store should you use?
A. Session state
B. Database storage
C. Profile properties
D. Application state
Thanks,
If "the data must be automatically purged when the user logs off", then there is literally no need for B or C. D (application state) is shared across all users, so your best bet is A.
From MSDN
...application state is a useful place to store small amounts of often-used data that does not change from one user to another. For information on saving data on a per-user basis see ASP.NET Session State Overview and ASP.NET Profile Properties Overview. [Ref]
This indicates A and C are possibilities, however -
[Profile properties] is similar to session state, except that the profile data is not lost when a user's session expires. [Ref]
which does not satisfy, "data must be automatically purged when the user logs off.", leaving A as the appropriate answer.
My thoughts on this question: session state in ASP.NET can be configured to store its data in a database; by default it stores it in-proc, which is not suitable for a distributed application.
So the session option alone does not fit. But the database option can be used with sessions: this satisfies the condition of purging the data when the user logs off on one side, and storing it in a secure store (the database) on the other.
Update: If I could choose multiple options (each as part of the solution), I would choose session + state server or database. But since I can choose only one answer, I would prefer session state.
It is possible for B to be a valid answer, but A is a better option.

Preserving authentication cookies, but disallowing concurrent access at different sites

I have a web application where I want users to only be able to use it from one location (meaning a user can't actively be using the application at two locations). Currently I have this working in a very common way: only one cookie session is valid at a time, and any existing ones are removed when a user logs in. Unfortunately I've been told that my method of only allowing one cookie is unacceptable, because my users move around a lot to different sites and are tired of having to log in every time. An easy solution would be to allow more than one cookie, but I can't do this because I need to make sure a user account is not being used at two locations at the same time.
I'm wondering what is the best way to implement a system like this where a user can't be active at more than 1 location, but shouldn't necessarily have to login at every location they visit.
One possible idea I had was to allow multiple cookies to be recorded, but once a cookie becomes active (meaning I notice that session navigating the application), all of the other cookies are locked out for a certain time limit, say 15 minutes. If no cookie session has been active for 15 minutes, then allow any cookie to log in and gain dominance over the others until it exceeds the time limit.
Edit: It's ok for them to remain logged in after they leave a location
One way to do this is to log their last IP address and the time of that access. On each access, you can check against their last access.
If the last access is from the same ip, let them through.
If the last access is from a different ip, check how long ago that was. You can then define a cut-off point for how long they need to be idle before they can access it from another location. 15 minutes seems reasonable.
All of this can be done on the backend and this would possibly provide a higher level of security.
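The two rules above (same IP passes, different IP only after an idle cutoff) fit in one function. A minimal sketch with a hypothetical 15-minute cutoff, as suggested:

```python
IDLE_CUTOFF = 15 * 60  # seconds a session must sit idle before a new IP may take over

def allow_access(last_ip: str, last_seen: float, ip: str, now: float) -> bool:
    """Same IP as the last access: always allowed.
    Different IP: only allowed once the previous location has been idle
    for at least IDLE_CUTOFF seconds."""
    if ip == last_ip:
        return True
    return now - last_seen >= IDLE_CUTOFF

now = 10_000.0
assert allow_access("1.2.3.4", now - 60, "1.2.3.4", now)        # same location
assert not allow_access("1.2.3.4", now - 60, "5.6.7.8", now)    # new site, too soon
assert allow_access("1.2.3.4", now - 20 * 60, "5.6.7.8", now)   # idle long enough
```

On a successful check the server would update the stored `(last_ip, last_seen)` pair, so the active location keeps refreshing its claim.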
The browser allows users to store their credentials. Let them use this feature to log back in without hassle.
No need for a timeout. Allow multiple cookies, but only one active one.
Instruct your users to close the application when they leave their workstations. Make this something that's easy to do. Put a close button on each page or perhaps catch onBeforeUnload and notify the server that the page is no longer being displayed. Do keep the session when the user closes the application, but mark it as currently inactive.
When you get a request with a cookie that belongs to an inactive session, activate that session without complaints if the user has no other session active.
If the user still has another session active, something fishy is going on. So remove all sessions and send the user to the login screen.
(That'll teach them :) )

Shopify landing_site Order Attribute

I have a client that is asking for pretty detailed information about how the landing_site attribute of an Order resource works. The documentation here says that this is set to the first page that someone visits when they come to the shop.
How persistent is this? For example, if someone visits a shop (entering via the home page, let's say), then I assume that the landing_site will be "/". Let's say that visitor then comes back a day or two later (this time via a link with a ref parameter) and visits a product page. Does the landing_site attribute update to "/products/sample-product?ref=mytoken"?
If not, how long does this value persist? Is there a way to reset it? If someone at Shopify could explain this, I think it would be something that a lot of app developers would reference.
Thanks.
Session
If the customer never creates a cart, then cookies are used to keep a reference to this data. curl -I snowdevil.myshopify.com can be used to see how persistent these cookies are:
Set-Cookie: _session_id=...; path=/; HttpOnly
This header value shows that a session cookie is being set, which means it will expire when the user closes their browser. It also indicates that only a session id is stored in the cookie; the session data itself is stored on the server.
The data will not be stored indefinitely on the server, because there isn't a way to know when the user closes his browser. Currently, the session data itself will expire after 1 day.
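You can verify the "session cookie" reading of that header programmatically: a cookie with no Expires or Max-Age attribute is dropped when the browser closes. A small sketch using Python's standard `http.cookies` module on a header shaped like the one above (the id value is made up):

```python
from http.cookies import SimpleCookie

# A Set-Cookie value shaped like the one `curl -I` returns for the shop.
header = "_session_id=abc123; path=/; HttpOnly"
cookie = SimpleCookie()
cookie.load(header)

morsel = cookie["_session_id"]
assert morsel.value == "abc123"
# No Expires and no Max-Age attribute: a session cookie, discarded
# when the browser closes.
assert morsel["expires"] == "" and morsel["max-age"] == ""
```

The persistent "cart" cookie described below would, by contrast, carry an Expires/Max-Age attribute roughly two weeks out.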
Cart
This same value will also get persisted along with any cart data when a product is added to the user's cart. So if you look at the cookies for your shop just after adding a product to your cart, you will see there is a separate "cart" cookie which currently expires after 2 weeks. That data will persist that long even after the user has closed their browser, but not if the user deletes their cookies.
Disclaimer
A key word to take note of here is "currently", since to my knowledge, Shopify has not made a commitment to keep this data around for a certain amount of time.