I work for a 3D printer company and am designing a mobile app, backed by a SQL Server database, to track spools of filament, hot-ends/nozzles (called hozzles), and eventually individual print jobs.
Here's my diagram for how I think the database should look.
Spools and hozzles each have their own set of places they can be moved into, except for printers, which can hold both. All spools will be kept in the database, but when one is "finished" I want to remove its entire history. All hozzles, as well as their histories, will be kept in perpetuity.
Are my tables for the spool and hozzle histories appropriate for what I am trying to accomplish?
Would it be better for me to handle attributes like 'spool_size_ID' or 'hozzle_move_ID' with an enum in the API instead of tables in the database?
Any other notes or questions about my approach would be helpful.
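For reference, here is a minimal sketch of the two ideas above (the lookup table versus an API enum, and wiping a finished spool's history). Every table and column name below is an assumption, since the diagram itself isn't reproduced in this post:

```sql
-- Hypothetical sketch only; names are assumptions.

-- Lookup-table version of spool_size. The API-enum alternative would drop this
-- table and keep spool_size_ID as a plain tinyint whose meaning lives in code.
CREATE TABLE dbo.spool_size
(
    spool_size_ID tinyint      NOT NULL PRIMARY KEY,
    description   nvarchar(50) NOT NULL
);

CREATE TABLE dbo.spool
(
    spool_ID      int     NOT NULL PRIMARY KEY,
    spool_size_ID tinyint NOT NULL REFERENCES dbo.spool_size (spool_size_ID)
);

CREATE TABLE dbo.spool_history
(
    spool_history_ID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    spool_ID         int       NOT NULL REFERENCES dbo.spool (spool_ID),
    location_ID      int       NOT NULL,   -- printer, shelf, storage, ...
    moved_at         datetime2 NOT NULL
);

-- When a spool is marked finished, the spool row stays but its history is removed.
DECLARE @finished_spool_ID int = 42;
DELETE FROM dbo.spool_history WHERE spool_ID = @finished_spool_ID;
```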
I've created a database to track computers at my company. The goal is for the data to be automatically updated nightly and any changes tracked in a history table. I created a temporal table and everything seems to work fine. However, I'd like to exempt the column that contains each computer account's lastLogon value from AD. History of that data is irrelevant, it would result in many unnecessary rows in the history table, and I'm concerned the table would grow too quickly. Is there any way to do something like "update the history table on changes to any column EXCEPT m_lastLogon"?
The only way you will be able to do this is to store the m_lastLogon information in a separate, non-temporal table. However, you lose some potentially valuable logging information that way, especially for usage patterns and tracking possible accidental damage. You may choose to have a simple login log table correlated to the hardware, so that only the login information is tracked, avoiding the unnecessary repeated recording of everything else.
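As a rough sketch of that split (table and column names here are assumptions, not taken from the question):

```sql
-- The temporal table tracks everything except the volatile logon timestamp.
CREATE TABLE dbo.Computer
(
    ComputerId      int           NOT NULL PRIMARY KEY,
    ComputerName    nvarchar(100) NOT NULL,
    OperatingSystem nvarchar(100) NULL,
    ValidFrom datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo   datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ComputerHistory));

-- m_lastLogon lives in a plain (non-temporal) table, so the nightly AD update
-- of that value never writes a row to the history table.
CREATE TABLE dbo.ComputerLogon
(
    ComputerId  int       NOT NULL PRIMARY KEY
        REFERENCES dbo.Computer (ComputerId),
    m_lastLogon datetime2 NULL
);
```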
According to a comment made by Borko Novakovic (a Program Manager on the SQL Server team), you cannot exclude columns.
His comment was
Currently we do not support filtering out changes that occur on columns one is not interested to track in DW schema (I guess that was the question). We are aware that some people need this capability, but modifying ETL logic to exclude these updates is also viable option...
Here's the link to the webpage
https://channel9.msdn.com/Shows/Data-Exposed/Temporal-in-SQL-Server-2016
For a web application (holding some genuinely private data) we want to use privacy-enhancing technology to limit the risk when someone gains access to our database.
The application is built in different layers, and we use (as said in the topic title) Fluent NHibernate to connect to our database, and we've created our own wrapper class to create queries.
Security is a big issue for the kind of application we're building. I'll try to explain the setting with a simple example:
Our customers have clients in their application (each installation of the application uses its own database), for whom some sensitive data is stored; there is a client table and a person table, which are linked.
The base table that links to the other tables (there will be hundreds of them soon), most of them probably containing sensitive data, is the client table.
At the moment the client has a client_id and a table_id in the database; our customer only knows the client_id, and the system links the data by the table_id, which is unknown to the user.
What we want to ensure:
A hacker who gained access to our database should not be able to see the link between the customer and the other tables just by opening the database. So there should actually be some kind of "hidden link" between the customer and the other tables. The personal data and all the other sensitive tables should not be obviously linked together.
Because of the data sensitivity we're looking for a more robust solution than "statically hash the table_id and use this in the other tables", so that when one of the persons is linked to the corresponding client, all the other clients' data is not compromised as well.
Ultimately, the customer table cannot be linked to the other tables at all just by working inside the database; the application code is needed to link the tables.
To accomplish this we've been looking into different methods, but because of the multiple tables linked to this client, and further development (thus probably even more tables), we're looking for a centralised solution. That's why we concluded this should be handled in the database connector. Searching the internet and here on Stack Overflow did not point us in the right direction; perhaps we couldn't find it because of the wrong search terms (PET, privacy-enhancing technology, combined with NHibernate did not give us any directions).
How can we accomplish our goals in this specific situation, or where should we search to help us fix this?
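Just to illustrate the "hidden link" idea on the schema side: this is only a sketch of the approach being asked about, not a vetted design, and every name in it is an assumption. The person rows carry an opaque token computed in the application layer (for example a keyed hash of the client's table_id with a secret that only the application holds) instead of a plain foreign key.

```sql
-- Hypothetical sketch: there is deliberately no foreign key from person to client.
-- The application computes client_token (e.g. a keyed hash of table_id with an
-- application-held secret) and uses it to join; inside the database alone the
-- link cannot be resolved.
CREATE TABLE client
(
    client_id int              NOT NULL PRIMARY KEY,  -- known to the customer
    table_id  uniqueidentifier NOT NULL UNIQUE        -- internal, never exposed
);

CREATE TABLE person
(
    person_id    int           NOT NULL PRIMARY KEY,
    client_token varbinary(32) NOT NULL,  -- opaque link, meaningless without the app's key
    name         nvarchar(200) NOT NULL
);
```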
We have a similar requirement for our application, and what we ended up with is using database schemas.
We have one database and each customer has a separate schema, where all the data for that customer is stored. It is possible to link from the schema to the rest of the database, but not to different schemas.
Security can be set for each schema separately, so you can make a hacker's life harder.
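A minimal T-SQL sketch of the schema-per-customer setup described above (all names here are made up):

```sql
-- One schema per customer; customer-specific tables live inside it.
CREATE SCHEMA customer42;
GO

CREATE TABLE customer42.person
(
    person_id int           NOT NULL PRIMARY KEY,
    name      nvarchar(200) NOT NULL
);
GO

-- A database principal that can only touch this customer's schema.
CREATE USER customer42_app WITHOUT LOGIN;
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::customer42 TO customer42_app;
```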
That being said, I can also imagine a solution where you let NHibernate encrypt every piece of data it sends to the database and decrypt everything it gets back. The data will be stored safely, but it will be very difficult to query over the data.
So there is probably not a single answer to this question, and you have to decide what is better: not being able to query, or just making it more difficult for a hacker to get to the data.
Excuse me if the question is simple. We have multiple medical clinics, each running their own SQL database EHR.
Is there any way I can interface each local SQL database with a cloud system?
I essentially want to use the data of the patient currently being consulted to generate a pathology request that links to a cloud (Google App Engine?) database.
As a medical student / software developer this project of yours interests me greatly!
If you don't mind me asking, where are you based? I'm from the UK and unfortunately there's just no way a system like this would get off the ground as most data is locked in proprietary databases.
What you're talking about is fairly complex anyway; whatever country you're in, I assume there would have to be a lot of checks and security around any cloud system that dealt with patient data. Theoretically though, what you would ideally want to do is create an online database (cloud, hosted, intranet, etc.) and scrap the local databases entirely.
You then have one 'pool' of data each clinic can pull information from (e.g. ALL records for patient #3563). They could then edit that data and/or insert new records and SAVE them, exporting them back to the main database.
If there is a need to keep certain information private to one clinic only, this could still be achieved in one database in a number of ways, or you could retain parts of the local database and have them merge with the cloud data as they're requested by the clinic.
This might be a bit outdated, but you guys should check out https://www.firebase.com/. It would let you do what you want fairly easily. We just did this for a client in the exact same business you are in.
Basically, Firebase lets you work with a central database in the cloud that is automatically synchronised with all of its front ends. It even handles losing the connection to the server automagically. It's the best solution I've found so far for keeping several systems running against a single cloud database.
We used to have our own backend that would try its best to sync changes, but you need to be really careful with inter-system unique IDs for your tables (i.e. creating a new user at one of the branches must not yield an id that already exists at any other branch or in the central database). It becomes cumbersome very quickly.
CakePHP can generate this kind of unique ID pretty easily and automatically, but you still have to work on syncing all the local databases with the central repository.
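On the SQL Server side, the usual way to get IDs that can be generated at any branch without colliding (the same idea CakePHP's UUIDs address) is a GUID key; a small sketch with made-up names:

```sql
-- A GUID generated at any branch is, for all practical purposes, guaranteed not
-- to clash with one generated at another branch or at the central database.
CREATE TABLE dbo.AppUser
(
    UserId   uniqueidentifier NOT NULL
        CONSTRAINT DF_AppUser_UserId DEFAULT NEWID()
        CONSTRAINT PK_AppUser PRIMARY KEY,
    UserName nvarchar(100)    NOT NULL
);
```

The trade-off is index fragmentation from random GUIDs, but for cross-branch merging the collision-free keys are usually worth it.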
I'm working on a Compact Framework app running on Windows Mobile. It's to be used by delivery drivers to tell them their next job and track spending etc. I have a SQL CE database on the mobile devices and SQL Server on the server. After struggling with major performance and configuration problems with the Sync Framework I ended up writing my own sync code using WCF. This works well and is a lot faster than the Sync Framework but I've been asked to speed it up further. Now we get into the details of the problem. Hopefully I can explain this clearly.
The synchronisation works one table at a time and is only one-way: updates are sent from the server to the PDA only. Data travelling back to the server is handled in a completely different way. First of all I delete any records on the PDA that have been removed from the server. Because of database constraints I have to delete from 'child' tables before deleting from 'parent' tables, so I work up the hierarchy from the bottom, e.g. I delete records from the invoice table before deleting from the products table.
Next I add new records to the PDA that have been added on the server. In this case I have to update the parent tables first and work down the hierarchy, updating the child tables later.
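To illustrate the ordering described above (the three table names come from the question; the staging tables and columns are assumptions about how the WCF sync might hand over the server's changes):

```sql
-- Deletes: child tables first, then parents, so no foreign key is ever violated.
DELETE FROM invoiceProduct WHERE invoiceProductId IN (SELECT id FROM stg_DeletedInvoiceProduct);
DELETE FROM invoice        WHERE invoiceId        IN (SELECT id FROM stg_DeletedInvoice);
DELETE FROM products       WHERE productId        IN (SELECT id FROM stg_DeletedProduct);

-- Inserts: parent tables first, then children, for the same reason.
INSERT INTO products       SELECT * FROM stg_NewProduct;
INSERT INTO invoice        SELECT * FROM stg_NewInvoice;
INSERT INTO invoiceProduct SELECT * FROM stg_NewInvoiceProduct;
```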
The problem is that my boss doesn't like the fact that my app keeps a large table like the products table synchronised with the server when the delivery driver only needs the invoiceProduct table. The invoiceProduct table links the invoice and products tables together and contains some information about the product. What I mean is that their database design is not normalised: the product name has been duplicated and stored in the invoiceProduct table as well as in the products table. Of course we all know this is poor design, but it seems they have done it to improve performance in this type of situation.
The obvious solution is to just remove the products table completely from the PDA database. However, I can't do this because it is sometimes needed: drivers have the ability to add a new product to an invoice on the fly. My boss suggests that they could just synchronise the large products table occasionally, or when they try to add a product and find that it's not there.
This won't work with the current design because if an invoice is downloaded containing a new product that is not on the PDA, it will throw a database foreign key error.
Sorry about posting such a large message. Hopefully it makes sense. I don't want to remove my database constraints and mess up my nice data structure :(
You seem to be running into an architecture problem. I work on a product that has a somewhat similar situation: I had a client-server application where the client loaded a lot of data it didn't need.
We used ADO.NET (DataSet) to reflect on the client side what the database holds. The DataSet class is like an in-memory SQL Server CE.
Our company started getting bigger clients, and our architecture wasn't fast enough to handle all the data.
In the past, we did the following. None of these are quick fixes:
- Remove most of the constraints on the client side; the most frequently used data still has constraints in the DataSet.
- Create logic to load a subset of the data instead of loading everything to the client. For example, we only load 7 days of work data instead of every piece of work data (which is what we did in the past). See the sketch after this list.
- Denormalise certain data by adding new columns, so that we don't have to load extra data we don't need.
- Load certain data only when it is needed, based on the client modules.
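The "subset of data" item above is essentially just a date filter on the load query; a trivial sketch with assumed table and column names:

```sql
-- Load only the last 7 days of work data to the client instead of everything.
SELECT *
FROM   dbo.WorkData
WHERE  WorkDate >= DATEADD(DAY, -7, GETDATE());
```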
As long as you keep your database constraints on the SQL Server, you should have no data integrity issues. However, on the PDA side, you will need to do more testing to ensure your application runs properly.
This isn't an easy problem to solve when you already have an existing architecture. Hopefully these suggestions help you.
Add a created_on field to your products and keep track of when each PDA last synced. When an invoice is downloaded, check whether any of its products are newer than the last sync and, if so, re-sync the PDA. It doesn't seem like it would screw up the DB too much?
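A sketch of that check on the server side, before an invoice is pushed to a PDA (all names here are assumptions except products/invoiceProduct, which come from the question):

```sql
DECLARE @LastSync  datetime = '2010-01-01T00:00:00';  -- the PDA's recorded last sync time
DECLARE @InvoiceId int      = 12345;                  -- the invoice about to be downloaded

-- Any products on this invoice created after the PDA last synced must be
-- pushed down before the invoice itself, otherwise the FK insert will fail.
SELECT p.productId, p.productName
FROM   dbo.invoiceProduct ip
JOIN   dbo.products       p ON p.productId = ip.productId
WHERE  ip.invoiceId = @InvoiceId
  AND  p.created_on > @LastSync;
```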
SQL Server 2008 database design problem.
I'm defining the architecture for a service where site users would manage a large volume of data on multiple websites that they own (100MB average, 1GB maximum per site). I am considering whether to split the databases up so that the core site management tables (users, payments, contact details, login details, products, etc.) are held in one database, and the data relating to the customers' own websites is held in a separate database.
I see a possible gain in that I can distribute the hardware architecture to give more muscle to the heavy lifting done in the websites database, leaving the site management database in a more appropriate area. But I'm also conscious of losing the ability to relate the sites directly to the customers through a foreign key (as far as I know this can't be done across databases?).
So the question is twofold: in general terms, should data in this sort of scenario be split out into multiple databases, or should it all be held in a single database?
If it is split into multiple databases, is there a recommended way to protect the integrity and security of the system at the database layer, to ensure that there is a strong relationship between the two?
Thanks for your help.
This question, and thus my answer, may be close to the grey line of subjective, but at the least I think it would be common practice to separate out the 'admin' tables into their own DB for what it sounds like you're doing. If you can tie a client to a specific server and DB instance, then having separate DB instances opens up some easy paths for adding servers to add clients. A single DB would require you to monkey with various clustering approaches if you got too big.
[edit] Building in the idea early that each client gets its own DB also sets the tone for how you develop, while it is still easy to make structural and organizational changes. Discovering two years from now that you need to do it will be a lot more painful. I've worked with split DBs plenty of times in the past, and it really isn't hard to deal with as long as you can establish some idea of what the context is. Here it sounds like you already have the idea that the client is the context.
Just my two cents; like I said, you could be close to subjective on this one.
Single Database Pros
- One database to maintain. One database to rule them all, and in the darkness bind them...
- One connection string
- Can use clustering
Separate Database per Customer Pros
- Support for customization on a per-customer basis
- Security: no chance of customers seeing each other's data
Conclusion
The separate database approach would be valid if you plan to support per-customer customization. I don't see the value otherwise.
You can use a link to connect the databases.
Your architecture is smart.
If you can't use a link, you can always replicate critical data from the users database to the website database in read-only mode.
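To make the replication idea concrete: once a read-only copy of the customer list exists inside the websites database, an ordinary foreign key works again, because it no longer has to cross databases. A rough sketch, with every name assumed:

```sql
-- In the websites database: a copy of the customer keys, kept up to date by
-- replication or a scheduled job from the site management database.
CREATE TABLE dbo.CustomerRef
(
    CustomerId int NOT NULL PRIMARY KEY
);

-- Site rows can now be constrained with a normal, same-database foreign key.
CREATE TABLE dbo.Site
(
    SiteId     int           NOT NULL PRIMARY KEY,
    CustomerId int           NOT NULL REFERENCES dbo.CustomerRef (CustomerId),
    SiteName   nvarchar(200) NOT NULL
);
```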
Concerning security: the best way is to have a service layer between the ASP (or other web language) layer and the database, so your databases will be pretty much isolated.
If you expect to have to split the databases across different hardware in the future because of heavy load, I'd say split it now. You can use replication to push copies of some of the tables from the main database to the site management databases. For now, you can run both databases on the same instance of SQL Server and later on, when you need to, you can move some of the databases to a separate machine as your volume grows.
Imagine we had infinitely fast computers: would you split your databases? Of course not. The only reason we split them is to make it easy for us to scale out at some point. You don't really have any choice here; 100MB-1000MB per client is huge.