I've inherited an Access 2007 database with a remit to make it available to our ten sales reps, who work on laptops. The laptops can connect to a central server, but the reps also need to use the database offline while they are on their travels. I'm not sure how to manage this.
The db is fairly straightforward: 3 tables for customers, prices and products.
Each rep needs their own individual customer table, but the prices and products tables are read-only and are updated centrally by one person, maybe half a dozen times a year.
I thought that the simplest way would be to create a copy of the database with all three tables on a central server for each of them, then tell them to download the new copy to their laptops whenever there are any updates.
Is there any way I could automate the process, such as giving them a button on a form to press, with some VBA code behind the scenes to do the copy?
Or is there a better way of managing this?
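The copy itself is a one-liner in most languages; the only logic worth adding is a timestamp check so reps don't re-download a file that hasn't changed. Here is a minimal sketch of that logic (shown in Python for brevity; in Access VBA the same check can be built from the FileSystemObject's DateLastModified and the FileCopy statement). The paths are made up for illustration:

```python
import shutil
from pathlib import Path

def refresh_local_copy(server_db: str, local_db: str) -> bool:
    """Copy the central database down to the laptop, but only when the
    server copy is newer than the one the rep already has."""
    src, dst = Path(server_db), Path(local_db)
    if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
        shutil.copy2(src, dst)  # copy2 preserves the file timestamp
        return True             # a fresh copy was pulled down
    return False                # local copy is already up to date
```

Wired to a button, the VBA equivalent would run the same check against something like \\server\share\central.accdb and report "already up to date" when nothing was copied.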
Thanks
I'm using this forced "down time" to finally take my business from Excel to Access. I am fairly accomplished at Excel VBA etc, and pretty much run the business on a handful of highly developed Excel sheets I’ve created over the years. They work well, but they are not very scalable, and I want to get over to a proper relational DB.
I've taken a Udemy course on Access, which was fine, but I've already hit some issues which may be fundamental misunderstandings, or just inexperience.
My first issue is that my company has projects (commercial contracts) which often, but not always, involve two ‘customers’ - an End User and an Agent. Agents and End Users can be interchangeable though, i.e. an Agent on one project might be the End User on another, so my “Customer Table” is simply a list of ALL my end users and agents with a CustomerID.
In my “Project Table” I have a CustomerID field and an AgentID field, both of which I wanted to use to pull the customer and the agent from the single “Customer Table”. I can’t find a way to set up the relationships to enable me to do that – I can get either one, but not both, in a Project Table query.
For a while I thought it was a many-to-many relationship I needed, but I still can’t see how to reference two entries from a single table in one record.
Thanks for any help!
You're almost there. What you need to do is create a one-to-many join between tblCustomer and tblProject (based on tblCustomer!CustomerID = tblProject!CustomerID) and then another one-to-many join between tblProject and a second instance of tblCustomer (based on tblCustomer!CustomerID = tblProject!AgentID). In the relationship window, add tblCustomer a second time; Access aliases the copy (e.g. tblCustomer_1) so the two joins stay distinct.
Regards,
I work for a 3D printer company and am in the process of designing a mobile app with a SQL Server database backend for the purpose of tracking spools of filament, hot-ends/nozzles (called hozzles), and eventually individual print jobs.
Here's my diagram for how I think the database should look.
Spools and hozzles each have their own unique places they can be moved into, except for printers, which can hold both. All spools will be kept in the database, but when one is "finished" I want to remove its entire history. All hozzles, as well as their histories, will be kept in perpetuity.
Are my tables for the spool and hozzle histories appropriate for what I am trying to accomplish?
Would it be better for me to handle attributes like 'spool_size_ID' or 'hozzle_move_ID' with an enum in the API instead of tables in the database?
Any other notes or questions about my approach would be helpful.
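On the enum-vs-table question: a lookup table keeps the legal values visible to reporting queries and enforceable with a foreign key, whereas an API enum is invisible to the database. A lookup table with ON DELETE CASCADE on the history also gives you the "remove a finished spool's entire history" behaviour for free. A hedged sketch of that option (table and column names are my guesses at your schema, not taken from your diagram; shown in sqlite3 but the same DDL works in SQL Server):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite needs this per connection
con.executescript("""
CREATE TABLE spool_size (
    spool_size_ID INTEGER PRIMARY KEY,
    label         TEXT NOT NULL          -- e.g. '1 kg'
);
CREATE TABLE spool (
    spool_ID      INTEGER PRIMARY KEY,
    spool_size_ID INTEGER NOT NULL REFERENCES spool_size
);
CREATE TABLE spool_history (
    history_ID INTEGER PRIMARY KEY,
    spool_ID   INTEGER NOT NULL REFERENCES spool ON DELETE CASCADE,
    moved_at   TEXT NOT NULL
);
INSERT INTO spool_size    VALUES (1, '1 kg');
INSERT INTO spool         VALUES (100, 1);
INSERT INTO spool_history VALUES (1000, 100, '2024-01-01');
""")

# The foreign key rejects a size the lookup table doesn't know about:
try:
    con.execute("INSERT INTO spool VALUES (101, 99)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

# Deleting a "finished" spool cascades away its whole history:
con.execute("DELETE FROM spool WHERE spool_ID = 100")
remaining = con.execute("SELECT COUNT(*) FROM spool_history").fetchone()[0]
```

With an API-side enum you would get neither the rejection nor the cascade without writing extra application code.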
I am doing a BI project about an e-commerce website. I have a complete database back-end (SQL Server 2014) for nopCommerce (version 3.70). I need to populate the empty database. Rather than manually placing orders in the nopCommerce front-end, do I have other methods to populate the database? I know there are relationships among the tables, so we cannot simply import CSV files to populate a single table.
Is there any source code I can use to automatically import data into the nopCommerce back-end? Thanks a lot.
Since you are saying
I have a complete database back-end(SQL Server 2014) for nopCommerce(version 3.70)
Is it from PROD, or just an empty DB?
Best is to get a whole backup from PROD with products, categories, customers and orders. Manually populating is a daunting task and defeats the purpose of a BI project, because manually inserted data won't be as good as real customer data.
That's my 2 cents.
Source: Ex-NopCommerce developer for 3 years.
I am using Report Builder 3.0 (very similar to SQL Server Reporting Services) to create reports for users of an application with a SQL Server 2012 database.
To set the scene, we have a database with over 1200 tables. We actually only need about 100 of these for reporting purposes. But it is very common that we need to combine fields from multiple tables together to get a common resource of data that my colleagues and I need for our reports.
E.g. if I want a view of a customer, I would want to bring in information about the customer from the customer_table, his phone details from the Phone table, his account(s) from the accounts table, and so on. Then I might need another view of the accounts - account type, various balance amounts, opening date, status, etc.
What I would love to do is create a "customer view" where we combine all these fields into a single combined virtual table. Then we have an "Accounts view". It would be easier to use, easier to manage etc. Then we use this for all our reports going forwards. And when we need to, we can combine the customer and accounts view to use on a report plus actual tables into one combo-dataset to use on a report.
I am unsure about the right way to do this.
I see I can create a data source, but this doesn't seem right, as that appears to be what one would do when working with two or more databases. We are using just one database.
Then there are report models. It seems these are being deprecated and phased out so this doesn't seem a good option.
Finally, I see we can create shared datasets. However, as far as I can tell, this option won't allow me to combine one dataset with another. So, using the example above, I won't be able to combine the customer view and the account view to use in a report displaying details about a customer and his/her accounts.
Would appreciate guidance on the best way to achieve what I am trying to do...
Thanks
I can only speak from personal experience, but the shared data source approach has been good for our purposes. We have a single database with 50+ tables in it. This is linked as a shared data source in the project, so it is available to all 50+ reports.
We then use stored procedures to make the information in the database available to the reports: each report has its own stored procedure that joins as many tables as required to provide the data for that report. Stored procedures also let you return only the rows you are interested in, rather than entire tables.
I'm not certain this is the kind of answer you were after, but it describes how we solved a similar (smaller) issue.
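For what it's worth, the "customer view" idea from the question combines well with this: views are defined once in the database, and any report's stored procedure or dataset query can then join them like ordinary tables. A small sketch of the pattern with invented table and column names (shown with sqlite3 so it runs anywhere; the CREATE VIEW syntax is the same in T-SQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customer (cust_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE phone    (cust_id INTEGER, number TEXT);
CREATE TABLE account  (acct_id INTEGER PRIMARY KEY, cust_id INTEGER,
                       balance REAL);

INSERT INTO customer VALUES (1, 'Jones');
INSERT INTO phone    VALUES (1, '555-0100');
INSERT INTO account  VALUES (7, 1, 250.0);

-- One reusable 'customer view' combining the scattered tables...
CREATE VIEW customer_view AS
    SELECT c.cust_id, c.name, p.number
    FROM customer c LEFT JOIN phone p ON p.cust_id = c.cust_id;

-- ...and an 'accounts view', joinable against it like any table.
CREATE VIEW account_view AS
    SELECT a.acct_id, a.cust_id, a.balance FROM account a;
""")

# A report query can now join the two views directly:
row = con.execute("""
    SELECT cv.name, cv.number, av.balance
    FROM customer_view cv
    JOIN account_view av ON av.cust_id = cv.cust_id
""").fetchone()
```

Views plus per-report stored procedures are not mutually exclusive; the procedures can select from the views and still filter down to just the rows a report needs.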
Excuse me if the question is simple. We have multiple medical clinics, each running their own SQL-database EHR.
Is there any way I can interface each local SQL database with a cloud system?
I essentially want to use the data of the patient currently being consulted to generate a pathology request that links to a cloud database (perhaps Google App Engine).
As a medical student / software developer this project of yours interests me greatly!
If you don't mind me asking, where are you based? I'm from the UK and unfortunately there's just no way a system like this would get off the ground as most data is locked in proprietary databases.
What you're talking about is fairly complex anyway; whatever country you're in, I assume there would have to be a lot of checks/security around any cloud system that dealt with patient data. Theoretically, though, what you would ideally want to do is create an online database (cloud, hosted, intranet, etc.) and scrap the local databases entirely.
You then have one 'pool' of data each clinic can pull information from (i.e. ALL records for patient #3563). They could then edit that data and/or insert new records and SAVE them, exporting them back to the main database.
If there is a need to keep certain information private to one clinic only, this could still be achieved on one database in a number of ways; or you could retain parts of the local database and have them merge with the cloud data as they're requested by the clinic.
This might be a bit outdated, but you should check out https://www.firebase.com/. It would let you do what you want fairly easily. We just did this for a client in the exact same business you are in.
Basically, Firebase lets you work with a central database in the cloud that is automatically synchronised with all of its front-ends. It even handles losing the connection to the server automagically. It's the best solution I've found so far to keep several systems running against a single cloud database.
We used to have our own backend that would try its best to sync changes, but you need to be really careful with inter-system unique IDs for your tables (i.e. creating a new user at one of the branches must not yield the same ID as one that already exists in another branch or the central database). It becomes cumbersome very quickly.
CakePHP can generate this kind of unique ID pretty easily and automatically, but you still have to do the work of syncing all the local databases with the central repository.
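The unique-ID problem described above has a standard fix that isn't tied to CakePHP: have each branch generate UUIDs as primary keys instead of sequential integers, so rows created offline can never collide when they are merged into the central database. A minimal sketch in Python:

```python
import uuid

def new_row_id() -> str:
    """Generate a primary key that is safe to create at any branch,
    even while offline. uuid4 is 122 random bits, so two branches
    colliding is vanishingly unlikely, unlike auto-increment integers
    which collide as soon as two branches each insert a row."""
    return str(uuid.uuid4())

# Two 'branches' creating records independently get distinct keys:
branch_a = new_row_id()
branch_b = new_row_id()
```

The trade-off is slightly larger keys (36-character strings, or 16 bytes stored as binary) in exchange for merge logic that never has to renumber rows.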