I have a split (front-end .accdb + back-end .accdb) Access 365 database which is shared among several users.
I would like to compact/repair it periodically via VBA, but I wonder what would happen when it is being used by more than one user at the same time.
I think it is impossible to do such a thing when more than one user is connected; am I wrong?
How can I check whether more than one user is connected? A semaphore system could be weak.
You need to make a small change to your front-end code so that, for each user, it periodically checks a back-end value signifying whether the back-end is available or usable. By "periodically checks" I mean ideally on every create, update or delete operation (reads are unimportant).
For example, I used to keep a tbl_Kvs (key-value store) in both the front-end and back-end for storing client and server global variables, and one of the keys in the back-end tbl_Kvs was dbReadOnly = 0 (zero representing false, the 'normal' mode of operation, allowing full CRUD access to users).
An admin can go in & set dbReadOnly = 1 (or any other non-zero value), which effectively puts the back-end into a read-only state from that point in time (provided your front-end code takes proper account of the dbReadOnly variable).
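To make that concrete, here is a minimal sketch of the per-operation check. It's in Python purely for brevity (the real check would live in your front-end VBA), and the KvsKey/KvsValue column names are invented for illustration:

```python
# Minimal sketch of the per-operation dbReadOnly check.
# tbl_Kvs comes from the answer above; KvsKey/KvsValue are invented names.
import sqlite3  # stand-in here for an ODBC connection to the Access back-end

def backend_is_writable(conn):
    row = conn.execute(
        "SELECT KvsValue FROM tbl_Kvs WHERE KvsKey = 'dbReadOnly'"
    ).fetchone()
    return row is not None and int(row[0]) == 0  # 0 = normal read/write mode

def save_record(conn, sql, params):
    # Gate every create/update/delete on the back-end flag; reads skip this.
    if not backend_is_writable(conn):
        raise PermissionError("back-end is read-only for maintenance")
    conn.execute(sql, params)
    conn.commit()
```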
Now that your back-end file is effectively isolated from new writes, you can take a copy of it and compact/repair that copy at your leisure, whilst the existing back-end is still serving users in a read-only fashion.
When all users have logged off, you can then switch the newly compacted copy of the back-end with the live one, and reset dbReadOnly = 0 to re-allow normal write operations.
If you don't want to wait for all users to voluntarily close their connections, you could re-code the front-end to close the application when dbReadOnly = 1 and then deny any new attempted connections (but I never really needed to use forced user booting for my scenarios).
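For illustration, the whole copy/compact/swap sequence might look roughly like this from an admin machine, driving Access over COM. This is a sketch, not a definitive implementation: it assumes pywin32 and a local Access install, and all paths are placeholders.

```python
# Admin-side sketch: snapshot the back-end, compact the snapshot, swap it in.
import shutil
import win32com.client

BACKEND   = r"\\server\share\backend.accdb"          # placeholder paths
SNAPSHOT  = r"C:\maintenance\backend_snapshot.accdb"
COMPACTED = r"C:\maintenance\backend_compacted.accdb"

shutil.copy2(BACKEND, SNAPSHOT)  # safe: users are read-only at this point

access = win32com.client.Dispatch("Access.Application")
access.CompactRepair(SNAPSHOT, COMPACTED)  # compact/repair the snapshot
access.Quit()

# ...once every user has logged off:
shutil.copy2(COMPACTED, BACKEND)  # swap the compacted copy in
# then reset dbReadOnly = 0 in the back-end tbl_Kvs to re-enable writes
```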
A completely different option I used to use sometimes, was to have an event logging table in my back-end, and code in the front-end which logged all user actions (including opening & closing connections - logon & logoff). I then made an admin query which read the back-end events table, and produced a report of currently logged in users. I could then physically go round those logged in users and ask them to close the front-end for a few minutes whilst I did the compact/repair admin. This was a less strict approach I guess, better for when all the users were in the same building.
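If you take that event-logging route, the "who is still connected" report can be derived from the last event per user. A hedged sketch (pyodbc assumed; the tbl_Events schema with UserName/EventType/EventTime columns is invented):

```python
# Sketch: a user counts as logged in if their most recent event is a logon.
import pyodbc

conn = pyodbc.connect(
    r"Driver={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"Dbq=\\server\share\backend.accdb;"  # placeholder path
)
rows = conn.execute("""
    SELECT e.UserName
    FROM tbl_Events AS e
    INNER JOIN (
        SELECT UserName, MAX(EventTime) AS LastTime
        FROM tbl_Events GROUP BY UserName
    ) AS m ON e.UserName = m.UserName AND e.EventTime = m.LastTime
    WHERE e.EventType = 'logon'
""").fetchall()
for (user,) in rows:
    print(user, "appears to be logged in")
```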
I am currently developing both the back end and front end for a web application. The backend (Python/PostgreSQL/Redis) is basically a RESTish server which provides APIs for user management and other things.
The thing I would like to optimize is the following:
Whenever a user logs in, a session token is generated for the user and stored in the client's browser. The session token is also the key in a Redis database which holds information like the user id, expiration, IP, ...
For every API request, a database query is made to check whether the user has the privileges, or whether the account has been locked, or whatever.
The problem is that I can't store this information in the Redis database because of synchronization issues (the admin can modify the user account at any time).
As I said, for every API request the database is queried again. I would like to eliminate this by somehow lazily querying the database.
I've read that there is a PostgreSQL mechanism which can emit events to connected sessions (LISTEN/NOTIFY). One could combine this with a TRIGGER so the backend knows when a session token should be invalidated and updated.
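For reference, here is roughly what I have in mind (psycopg2 assumed; the channel name, trigger, and table are placeholders, not my actual schema):

```python
# Sketch of the LISTEN/NOTIFY idea with psycopg2.
# Postgres side (run once; names are placeholders):
#   CREATE FUNCTION notify_user_changed() RETURNS trigger AS $$
#   BEGIN PERFORM pg_notify('user_changed', NEW.id::text); RETURN NEW; END;
#   $$ LANGUAGE plpgsql;
#   CREATE TRIGGER user_changed AFTER UPDATE ON users
#   FOR EACH ROW EXECUTE FUNCTION notify_user_changed();
import select
import psycopg2
import psycopg2.extensions

conn = psycopg2.connect("dbname=app user=backend")  # placeholder DSN
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
cur = conn.cursor()
cur.execute("LISTEN user_changed;")

while True:
    # Wait for the socket to become readable, then drain notifications.
    if select.select([conn], [], [], 60) == ([], [], []):
        continue  # timed out; loop and wait again
    conn.poll()
    while conn.notifies:
        note = conn.notifies.pop(0)
        # Here you would delete/refresh the Redis entries for this user's
        # session tokens, so the cache never serves stale privileges.
        print("invalidate cached sessions for user", note.payload)
```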
I am not sure if this is the right approach, considering how this will scale.
I'm new to Azure SQL Database, as this is my first project migrating from an on-premises setup to everything on Azure. The first thing that caught my attention is that there is a limit on concurrent logins to Azure SQL Database, and if you exceed that number, subsequent requests start getting dropped. For the current service tier (S0), the cap is 60 concurrent logins, which I have already reached multiple times, judging by the SQL failures in my application log.
So my question is:
Is it normal to exceed that number of concurrent logins? I'd like to get an idea of whether my application has an issue, or my current service tier is simply too low.
I've looked into our database code to make sure we are not leaving database connections open. We use Enterprise Library, and every use of DbCommand and IDataReader is wrapped in a using block, so they get disposed once they go out of scope.
Note: my web application consists of a front-end app with multiple web services supporting the underlying features, and each service connects to the same database for a specific collection of data. This makes me think that hitting 60 concurrent logins might be normal: a single page or action can involve multiple calls behind the scenes, and thus multiple database connections from a few APIs, so with more than one user on the application, 60 is easy to reach.
Again, with the on-premises setup in the past, I never noticed this kind of limitation.
Thanks.
To answer your question: the limit is 60 concurrent logins on an S0.
http://gavinstevens.com/2016/11/30/sql-server-vs-azure-sql/
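If you want to see how close you are getting to that cap, you could poll the session DMV yourself. A rough sketch (pyodbc assumed; the connection string is a placeholder):

```python
# Count current user sessions against the S0 cap.
# sys.dm_exec_sessions is a standard DMV available on Azure SQL Database.
import pyodbc

CONN_STRING = ("Driver={ODBC Driver 17 for SQL Server};"
               "Server=...;Database=...;Uid=...;Pwd=...")  # placeholder

conn = pyodbc.connect(CONN_STRING)
count = conn.execute(
    "SELECT COUNT(*) FROM sys.dm_exec_sessions WHERE is_user_process = 1"
).fetchone()[0]
print(f"current user sessions: {count} (S0 allows 60 concurrent logins)")
```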
This is a more general question, so bear with my abstraction of the following problem.
I'm currently developing an application that interfaces with a remote server over a public API. The API in question provides mechanisms for fetching data based on a timestamp (e.g. "get me everything that changed since xxx"). Since the amount of data is quite large, I keep a local copy in a database and check for changes on the remote side every hour.
While this makes the application robust against network problems (remote server in maintenance, network outage, etc.) and enables employees to continue working with the application, there is one big gaping problem:
The API in question also offers write access, e.g. my application can instruct the remote server to create a new object. Currently I send the request via the API and, upon success, create the object in my local database too. It will eventually propagate via the hourly data fetch, where my application (ideally) sees that no changes need to be made to the local database.
Now, when the API is unreachable, I create the object in my database and cache the request until the API is reachable again. This has multiple problems:
If the request fails (due to errors that cannot be validated beforehand), I end up with an object in the database which shouldn't even exist. I could delete it, but it seems hard to explain to the user(s) ("something went wrong with the API, we deleted that object again").
The problem especially cascades when dependent actions queue up, e.g. creating the object plus two more requests modifying it. When the initial create fails, so do the modifying requests (since the object does not exist on the remote side).
Worst case is deletion: when an object is deleted locally but cannot be deleted on the remote side, I have no (easy) way of restoring it.
One might suggest never creating objects locally and letting them propagate only through the hourly data sync. Unfortunately this is not an option: if the API is not accessible, it can stay that way for hours, and it is mandatory that employees can continue working with the application (which they cannot when said objects don't exist locally).
So bottom line:
How should such a scenario be handled, where the API might not be reachable but certain requests must be cached locally and repeated when the API is reachable again? And especially, how should one handle cases where those requests unpredictably fail?
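To make the setup concrete, the cache-and-replay logic I described is roughly equivalent to this simplified sketch (the sqlite outbox table and API names are placeholders, not my actual code):

```python
# Outbox-style sketch: queued writes are replayed in order when the API
# is reachable again.
import json
import sqlite3
import requests

API_BASE = "https://remote.example.com/api"  # placeholder host

db = sqlite3.connect("outbox.db")
db.execute("""CREATE TABLE IF NOT EXISTS outbox (
    id INTEGER PRIMARY KEY, method TEXT, path TEXT, body TEXT)""")

def enqueue(method, path, payload):
    # Called when the API is down: remember the write for later.
    db.execute("INSERT INTO outbox (method, path, body) VALUES (?, ?, ?)",
               (method, path, json.dumps(payload)))
    db.commit()

def replay():
    # Called when the API looks reachable again: replay writes in order.
    pending = db.execute(
        "SELECT id, method, path, body FROM outbox ORDER BY id").fetchall()
    for rid, method, path, body in pending:
        try:
            resp = requests.request(method, API_BASE + path,
                                    json=json.loads(body))
        except requests.ConnectionError:
            break  # still unreachable; keep the queue and retry later
        if resp.status_code >= 500:
            break  # server-side failure; retry the whole tail later
        # 2xx means applied; 4xx means permanently rejected -- this is
        # exactly the hard case above and needs local compensation/undo.
        db.execute("DELETE FROM outbox WHERE id = ?", (rid,))
        db.commit()
```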
This is a very general question. I am a bit confused by the term "state". I would like to know what people mean by the "state of an application". Why is a web server called "stateless" and a database "stateful"?
How is the state of an application (in a VM) transferred when the VM's memory is moved from one machine to another during live migration?
Is transferring the memory, caches and register values of a system enough to transfer the state of the running application?
You've definitely asked a mouthful -- it's unfortunate that the word state is used in so many different contexts, but each one is a valid use of the word.
State of an application
An application's state is roughly the entire contents of its memory. This can be a difficult concept to wrap your head around until you've seen something like Erlang's server loops, which explicitly pass all the state of the application in a variable from one invocation of the function to the next. In more "normal" programming languages, the "state" of the program is all its global variables, static variables, objects allocated on the heap, objects allocated on the stack, registers, open file descriptors and file offsets, open network sockets and associated kernel buffers, and so forth.
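Erlang's idiom is easy to mimic in any language; here is a toy Python version where the entire "application state" is one explicitly passed value:

```python
# Toy Erlang-style server loop: all state lives in one value that is
# threaded explicitly from step to step (no globals, no hidden memory).
def counter_loop(state, inbox):
    for msg in inbox:
        if msg == "incr":
            state = state + 1   # the "new" state replaces the old one
        elif msg == "get":
            print("count =", state)
    return state

final = counter_loop(0, ["incr", "incr", "get"])  # prints: count = 2
```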
You can actually save that state and resume execution of the process elsewhere. The BLCR checkpoint tools for Linux do exactly this. (Though it is an extremely uncommon task to perform.)
State of a protocol
The state of a protocol is a different sort of meaning -- the statelessness of HTTP requests means that every web browser communication with webservers essentially starts over, from scratch -- every cookie is re-transmitted in both directions to try to "fake" some amount of a "session" for the user's sake. The servers don't hold any resources open for any given client across requests -- each one starts from scratch.
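You can watch that re-transmission happen. In Python's requests library, for instance, a Session object merely automates re-sending the cookie on every request (example.com is a placeholder host):

```python
# HTTP remembers nothing between requests; the cookie rides along each time.
import requests

s = requests.Session()
s.get("https://example.com/login")     # suppose the server sets a cookie here
r = s.get("https://example.com/data")  # the cookie is re-sent automatically
print(r.request.headers.get("Cookie")) # the "session" travels in each request
```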
Networked filesystems might also be stateless (earlier versions of NFS) or stateful (newer versions of NFS). The earlier versions assumed every individual packet of reading, writing, or metadata control would be committed as it arrived, and every time a specific byte was needed from a file, it would be re-requested. This allowed the servers to be very simple -- they would do what the client packets told them to do and no effort was required to bring servers and clients back to consistency if a server rebooted or routers disappeared. However, this was bad for performance -- every client requested static data hundreds or thousands of times each day. So newer versions of NFS allowed some amount of data caching on the clients, and persistent file handles between servers and clients, and the servers had to keep track of the state of the clients that were connected -- and vice versa: the clients also had to know what promises they had made to the servers.
A stateful firewall will keep track of active TCP sessions. It knows which sessions the system administrators want to allow through, so it looks for those initial packets specifically. Once the session is set up, it then tracks the established connections as entities in their own right. (This was a real advancement over earlier stateless firewalls, which considered packets in isolation -- the rulesets on those firewalls had to be much more permissive to achieve the same level of functionality, and allowed through far too many malicious packets that pretended a session was already active.)
An application's state is simply where the application currently is in its execution, together with the memory that is stored for the application. The web is "stateless," meaning every time you reload a page, no information remains from the previous version of the page. All information must be re-sent from the server in order to display the page.
Technically, browsers get around the statelessness of the web by utilizing techniques like caching and cookies.
Application state is a data repository available to all classes. Application state is stored in memory on the server and is faster than storing and retrieving information in a database. Unlike session state, which is specific to a single user session, application state applies to all users and sessions. Therefore, application state is a useful place to store small amounts of often-used data that does not change from one user to another.
Resource: http://msdn.microsoft.com/en-us/library/ms178594.aspx
Is transferring the memory, caches and register values of a system enough to transfer the state of the running application?
Does the application have a file open, positioned at byte 225? If so, that file is part of the application's state because the next byte written should go to position 226.
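A trivial Python illustration of that kind of hidden state:

```python
# The file offset lives in the OS, not in any program variable,
# yet it is part of the application's state.
with open("demo.txt", "wb") as f:
    f.write(b"abc")
    print(f.tell())  # 3 -- the next write would land at byte 3, not byte 0
```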
Has the application authenticated itself to a secure server with a time-based key? Then that connection is part of the application's state, because if the application were to be suspended for 24 hours after saving memory, cache, and register values, when it resumes it will no longer have a valid connection to the secure server because it will have timed out.
Things which make an application stateful are easy to overlook.
I want to create my site so that the forum pages use the forum MySQL user, which has privileges on mydb.forum_table and mydb.forum_table2,
and the profile page uses the profile user, which has access to mydb.users and mydb.profilefields,
and so on with the photo gallery, blog, chat and...
Is this the right way to do it? I'm thinking of the principle of least privilege, but I wonder why I haven't seen other big, well-known CMSes do it!
One of the critical resources for a database is connections. Generally, databases are configured with a maximum number of connections, and each time a process needs to make a query, it needs a connection to do so. Database connections are expensive objects to create -- they take time and memory, and most importantly, connections are established for a specific user. The generally accepted 'best practice' for web applications is for the application, when it needs a database connection, to check a pool for an available connection. If there's a free connection in the pool, the web app will pull that connection, use it as necessary, and then return it to the pool for reuse. If there are no free connections, the app will create a new one, use it, and then place it in the pool for reuse.
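To picture the pattern, here is a toy pool, deliberately simplified (sqlite3 stands in for a real MySQL driver; no locking or connection validation):

```python
# Minimal connection pool sketch: reuse free connections, create on demand,
# return them to the pool afterwards.
import queue
import sqlite3  # stand-in for a real MySQL driver

class ConnectionPool:
    def __init__(self, size):
        self._free = queue.Queue()
        for _ in range(size):
            self._free.put(sqlite3.connect("app.db"))

    def acquire(self):
        try:
            return self._free.get_nowait()    # reuse a free connection
        except queue.Empty:
            return sqlite3.connect("app.db")  # none free: create a new one

    def release(self, conn):
        self._free.put(conn)                  # return it for reuse

pool = ConnectionPool(5)
conn = pool.acquire()
conn.execute("SELECT 1")
pool.release(conn)
```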
If you're dealing with an application that uses multiple database users (for privilege management) and you need to use connection pooling, your application will need to establish many pools (one for each user), which will usually result in your application acquiring at least one connection for each database user it is using. This is inefficient, error prone, and needlessly complex.
If you're truly intent on limiting your application's access to data, then you should probably investigate how much support your database has for views. If views are well supported, then you can create a view (or views) customized to the needs of any given portion of your application.
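For example, a view can hide the columns a module shouldn't see (sqlite3 as a stand-in again; the table and column names are invented):

```python
# Sketch: expose only the public columns of a forum table through a view.
import sqlite3

db = sqlite3.connect("app.db")
db.execute("""CREATE TABLE IF NOT EXISTS forum_table (
    post_id INTEGER, author TEXT, body TEXT, mod_notes TEXT)""")
db.execute("""
    CREATE VIEW IF NOT EXISTS v_forum_public AS
    SELECT post_id, author, body   -- mod_notes stays hidden from the module
    FROM forum_table
""")
```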
My recommendation would be to stick to a single database user, and then use the time you just freed up to do more debugging of your application. You'll get better results, and will aggravate fewer DBAs.
If I understand correctly, the question is about implementing module access control based on the permissions on the tables that are used by the module.
I think it would be complicated to maintain (the link between modules and tables), and slow, since the permissions on each table accessed by the module would have to be checked.