I have started working on an existing front-end MS Access application built with VBA. It is linked to a back-end Access database, so there are effectively two Access databases: one front end and one back end.
Because of the nature of the work involving vehicles, each user has their own copy of an identical database, with their individual data stored in the back end. Constant access to a single back-end DB over the internet is not really possible, as the users are only able to connect to the network once a day. Right now we just copy each back-end database via FTP and store it as usr_backenddb_date.accdb.
The back-end databases contain around 16 tables, most of which use an AutoNumber field as the primary key. The tables are related to one another by referencing those primary keys as foreign keys. Ideally, I would like to create a function in VBA that lets me select a database and merge all of its data into an identical central database. For the initial part, I am thinking of doing something like this: How to merge two identical database data to one?, possibly cascading the change in the AutoNumber field so the references remain intact.
I wanted to know if this approach is doable, or if anyone has other ideas and suggestions I can look into.
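The remap-and-cascade idea is language-agnostic, so here is a minimal sketch of it using SQLite in Python (hypothetical `parents`/`children` tables standing in for two of your 16): insert each parent row into the central DB, record the freshly assigned autonumber in an old-to-new map, then rewrite the child rows' foreign keys through that map. The same loop translates to VBA with DAO recordsets, where the new AutoNumber is typically available right after `.AddNew` on the central table.

```python
import sqlite3

# Hypothetical two-table schema; your real databases would have ~16 tables,
# processed parents-first in the same way.
SCHEMA = """
CREATE TABLE parents (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT);
CREATE TABLE children (id INTEGER PRIMARY KEY AUTOINCREMENT,
                       parent_id INTEGER REFERENCES parents(id),
                       detail TEXT);
"""

def merge_into_central(central, incoming):
    """Append all rows from `incoming` into `central`, letting the central
    DB assign fresh autonumber ids and remapping child FKs to match."""
    id_map = {}  # old parent id (incoming) -> new parent id (central)
    for old_id, name in incoming.execute("SELECT id, name FROM parents ORDER BY id"):
        cur = central.execute("INSERT INTO parents (name) VALUES (?)", (name,))
        id_map[old_id] = cur.lastrowid       # freshly assigned autonumber
    for parent_id, detail in incoming.execute(
            "SELECT parent_id, detail FROM children ORDER BY id"):
        central.execute("INSERT INTO children (parent_id, detail) VALUES (?, ?)",
                        (id_map[parent_id], detail))  # point FK at the new id
    central.commit()
```

The key design point is that the id map must be fully built for a parent table before any table referencing it is copied, so with many tables you would process them in dependency order.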
I am not sure what the correct forum is for a question like this, so if it would go better on a different one could you move it there please?
I have split my database into a front-end and a back-end database. The front end uses linked tables, which are linked to the real tables in the back end. If a user changes something in a table in the front-end database, the changes are carried over to the back-end database.
Why is this and how can I prevent this from happening? Is there a way to change the settings to make the database read only? Whether it's through VBA or not, I would accept either answer.
That's a feature, not a bug. You're using a linked table, it's linked.
If you want a separate table, make a separate table, and make some ETL (extract/transform/load) process to keep the two tables in sync as needed, accordingly with whatever business rules you need to implement.
If your Access DB connects to SQL Server via SQL authentication, you could authorize the SQL user on the SQL Server side to SELECT only, denying the UPDATE, DELETE and INSERT permissions. Then expect errors on the Access side whenever a linked table is modified.
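The effect is easy to see in miniature: if the back end refuses writes, every write attempt through the link errors out, which is exactly what the Access side would observe. A sketch of the idea using a read-only SQLite handle in Python (on SQL Server the actual mechanism would be `DENY INSERT, UPDATE, DELETE ON <table> TO <user>`):

```python
import sqlite3, tempfile, os

# Create a throwaway "back-end" file, then reopen it read-only.
path = os.path.join(tempfile.mkdtemp(), "backend.db")
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
rw.commit()
rw.close()

ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)  # read-only handle
try:
    ro.execute("INSERT INTO t (name) VALUES ('x')")     # write is refused
except sqlite3.OperationalError as e:
    print(e)  # attempt to write a readonly database
```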
I want to move a DB2 10.1 database schema, including tables, views, keys, sequences and their data, to a new database on another network.
The new database is in a Production environment and because of security protocols, I cannot copy it directly across the networks. The current database is in a non-Production test environment.
I'm looking to generate a script or scripts which will recreate the database schema, including the tables, views, keys, sequences and the data. These scripts can then be transferred to the other network and run from there.
How can I do this? I have looked at db2move and db2look, but it looks like I will have problems when inserting the data because of referential constraints and the sequencing of primary keys (I want to keep the current key IDs, since they are used as reference numbers by the business team, but they are auto-generated by the database).
Thanks
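On the two problems you anticipate: the usual approach is to load tables in dependency order (referenced tables before referencing ones) and to insert the key values explicitly so the generated values are overridden, which is roughly what DB2's `LOAD ... MODIFIED BY IDENTITYOVERRIDE` modifier is for. A sketch of both pieces in Python/SQLite, with a hypothetical dependency map:

```python
import sqlite3
from graphlib import TopologicalSorter  # Python 3.9+

def insert_order(depends_on):
    """depends_on maps each table to the tables it references via FKs.
    Returns a load order with referenced tables first, so no referential
    constraint can fire against a row that has not been inserted yet."""
    return list(TopologicalSorter(depends_on).static_order())

def copy_preserving_ids(src, dst, table, cols):
    """Copy rows with their original key values intact: inserting the id
    explicitly overrides the auto-generated value, so the business team's
    reference numbers survive the move."""
    col_list = ", ".join(cols)
    marks = ", ".join("?" for _ in cols)
    for row in src.execute(f"SELECT {col_list} FROM {table}"):
        dst.execute(f"INSERT INTO {table} ({col_list}) VALUES ({marks})", row)
    dst.commit()
```

After loading with explicit ids you would also restart each sequence/identity above the highest loaded value, so new rows do not collide with imported ones.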
I have MANY SUB-OFFICES each one with its own independent and isolated SQL Server database. I have been tasked to design a new CENTRAL OFFICE Database so that we can upload "constantly" data from the sub offices. The central office database does not exist so I'm free to design it from scratch.
I mainly need to COPY CERTAIN TABLES from EACH sub office instance to a NEW database on the central office, and this will happen regularly.
The transfer will occur via web services (therefore XML) since each DB is stored on a different location so that creates the following constraints:
a) Data arrives in chunks (I can still control what data comes first)
b) MainOffice DB will NOT have direct access to any sub office database
c) Each sub office might decide to run the update at different times.
I'm planning to add an OfficeCode column to certain tables in the central database to store the sub office code; that way I will be able to tell which records belong to which sub office.
MY QUESTION:
What is the recommended way to handle the fact that PK values in a sub office DB will not be the same as the PK values in the central office DB, so that after being imported the FKs may end up pointing to the wrong records?
As I mentioned, data will arrive in chunks via web services and not the entire set of tables at once, but I can control what to do with the data arrived since I write the web service itself.
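One common answer takes your OfficeCode idea one step further: keep the sub office's local PK values unchanged and widen every key (and every FK) with the OfficeCode, making the pair the primary key. Imported FKs are then correct by construction and nothing has to be renumbered. A sketch of such a central schema in SQLite, with hypothetical orders/order_lines tables:

```python
import sqlite3

# Hypothetical central-office schema: every key is (office_code, local_id),
# so rows from different sub offices can never collide, and FKs imported
# from a sub office keep pointing at the right parent without renumbering.
CENTRAL_SCHEMA = """
CREATE TABLE orders (
    office_code    TEXT    NOT NULL,
    local_id       INTEGER NOT NULL,   -- the sub office's own PK, unchanged
    customer       TEXT,
    PRIMARY KEY (office_code, local_id)
);
CREATE TABLE order_lines (
    office_code    TEXT    NOT NULL,
    local_id       INTEGER NOT NULL,
    order_local_id INTEGER NOT NULL,   -- the FK exactly as it arrived
    qty            INTEGER,
    PRIMARY KEY (office_code, local_id),
    FOREIGN KEY (office_code, order_local_id)
        REFERENCES orders (office_code, local_id)
);
"""

def open_central():
    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")   # enforce the composite FK
    conn.executescript(CENTRAL_SCHEMA)
    return conn
```

Since your web service controls the arrival order, loading each chunk's parent rows before its child rows keeps the composite FK satisfied throughout.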
You should seriously consider using UNIQUEIDENTIFIER values for the keys. Otherwise, study up on the ways to deal with replication issues when using int values for keys.
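The UNIQUEIDENTIFIER route can be sketched with Python's uuid module: the key is generated client-side at each office, so centrally merged rows can never collide and no remapping is needed. The trade-off is wider, non-sequential keys. A minimal illustration with a hypothetical `orders` table:

```python
import sqlite3, uuid

def add_order(conn, office, customer):
    """Generate the key at the edge as a UUID; any office can create rows
    offline and they will merge centrally without renumbering."""
    row_id = str(uuid.uuid4())
    conn.execute("INSERT INTO orders (id, office, customer) VALUES (?, ?, ?)",
                 (row_id, office, customer))
    return row_id
```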
Rather than re-invent the wheel, why not use the off-the-shelf solution?
SQL Server replication already covers everything you described and has explicit support for Replicating Identity Columns:
To use identity columns in a replication topology that has updates at more than one node, each node in the replication topology must use a different range of identity values, so that duplicates do not occur. For example, the Publisher could be assigned the range 1-100, Subscriber A the range 101-200, and Subscriber B the range 201-300. If a row is inserted at the Publisher and the identity value is, for example, 65, that value is replicated to each Subscriber. When replication inserts data at each Subscriber, it does not increment the identity column value in the Subscriber table; instead, the literal value 65 is inserted. Only user inserts, but not replication agent inserts, cause the identity column value to be incremented.

Replication handles identity columns across all publication and subscription types, allowing you to manage the columns manually or have replication manage them automatically.
When you signed up to rewrite replication from scratch over web services, you signed up to redo 15+ years of know-how and experience in replicating data that SQL Server replication has already accumulated. The problem you see now is just one of the many that lie ahead. There are legitimate cases for using another technology instead of replication, but are you sure your case is one of them?
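If you do go your own way, the identity-range scheme from the quoted documentation is at least easy to emulate by hand: seed each node's autonumber counter at the start of its assigned range. A toy version in Python/SQLite (SQL Server would use `DBCC CHECKIDENT (..., RESEED, n)`, or automatic identity range management in merge replication):

```python
import sqlite3

def node_db(range_start):
    """Create a node whose AUTOINCREMENT counter starts at range_start,
    so ids generated here cannot collide with another node's range."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT)")
    # AUTOINCREMENT never reuses an id, so a deleted placeholder row
    # still advances the counter past range_start - 1.
    conn.execute("INSERT INTO t (id) VALUES (?)", (range_start - 1,))
    conn.execute("DELETE FROM t WHERE id = ?", (range_start - 1,))
    return conn
```

Each node then generates ids only inside its own range, and rows replicated in from other nodes carry their literal id with them, exactly as in the quoted example.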
Every now and then when I'm browsing data in a database, I get tired of writing ad-hoc queries to join in the various tables I want to see, and I go looking for an app that will:
Allow me to follow foreign key relationships
Automatically display tables in a tree-like format based on relationships
Compose views by automatically joining on foreign keys
I know this can be done because I wrote (and lost) such an app many years ago, but I can't seem to find anything out there. The closest I've seen is generated "scaffolding" such as RoR and MS Dynamic Data.
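For what it's worth, the auto-join part of such a tool is small enough to sketch: most engines expose FK metadata that can be turned straight into JOIN clauses. Here it is in Python against SQLite's `PRAGMA foreign_key_list` (on SQL Server you would read `sys.foreign_key_columns` instead):

```python
import sqlite3

def fk_join_sql(conn, table):
    """Build a SELECT that joins `table` to every table it references,
    derived entirely from the engine's foreign-key metadata."""
    joins, seen = [], set()
    for fk_id, seq, ref_table, from_col, to_col, *_ in conn.execute(
            f"PRAGMA foreign_key_list({table})"):
        if ref_table not in seen:
            seen.add(ref_table)
            joins.append(
                f"JOIN {ref_table} ON {table}.{from_col} = {ref_table}.{to_col}")
    return f"SELECT * FROM {table} " + " ".join(joins)
```

Applying the same idea recursively to the referenced tables gives the tree-like navigation described above.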
You could try the "Data Browser" included in the tool Jailer (Screenshots). It allows you to navigate through the database based on relationships.
I'm planning a web project containing 4 websites built with MVC3. As the database server I'm going to use MS SQL Server.
Each of these websites will have around 40 tables, but some of the tables are shared between the websites:
Contact, Cities, Postalcodes, Countries...
How should I handle this? Should I put all the tables of each website into one common database (so that the databases of websites 1, 2, 3 and 4 live together in a single database)? Or should I create one separate database containing only the shared data?
In the latter case, I think I'll run into data consistency problems, because as far as I know there is no way to reference from one database to another (for example, linking the city table in database 1 to the building table in database 2).
Any ideas?
Thanks a lot!
What I like about splitting it out into separate databases is that if each web site has its own database, and one of those web sites gets extremely popular, it is very easy to just move their database to a different, more powerful database server and not much has to change except (a) you need to reference the central "control" data remotely (or replicate/mirror/etc), and (b) you point that web site at a different database server. Another benefit is that if two web sites have the same types of tables (e.g. Patients), you don't have to have tables like Patients_WebSite1, Patients_WebSite2, with different stored procedures that are identical except for table names (or ugly dynamic SQL procedures that paste the table name in). Separated out you can have the exact same schema and the exact same codebase without having to combine everyone's data into a single table.
If you mix the data within a single database, data consistency is easier and the whole setup is slightly simpler, but splitting it out later when you grow is a lot tougher. If you split it into different databases, then no, you won't be able to enforce referential integrity using standard DRI (foreign keys). You can accomplish it in other ways if it is important (triggers, validation before insert/update, etc.).
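The "validation before insert" option is simply an application-level FK check against the other database. A sketch in Python/SQLite, using the city/building example from the question (table and column names are illustrative):

```python
import sqlite3

def insert_with_remote_check(local, remote, city_id, building_name):
    """Application-level referential check across two databases: verify
    the referenced city exists in the shared DB before inserting the
    building locally, since standard FKs cannot span databases."""
    exists = remote.execute("SELECT 1 FROM cities WHERE id = ?", (city_id,)).fetchone()
    if exists is None:
        raise ValueError(f"city {city_id} does not exist in the shared database")
    local.execute("INSERT INTO buildings (city_id, name) VALUES (?, ?)",
                  (city_id, building_name))
    local.commit()
```

Note the check and the insert are not atomic across the two databases, so a deletion in the shared database can still race with it; that is the consistency cost of splitting.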