I have been hitting a wall for quite a while now. I am making an application linked to a piece of software that we use, which will allow the user to either read data from the software through my application or update the software's data from my application.
So here is the whole idea:
So my app will be linked to the software's database (Software Patient) with the help of a foreign key (patientId on "App Patient").
And I need to be able to search by email, password, firstName, lastName, and secretStuff directly from my app, and to update data on both databases as well.
The biggest issue here is that I can't make a third table that merges all the data into one, because the data in the software's database (Software Patient) will be updated quite a lot directly from the software by other people.
The current stack is composed of:
My application: Node.js with Sequelize, GraphQL & PostgreSQL
Software that we use: SQL Server Express
Thank you in advance!
The app you are developing must get data from your commercial Software Patient (we'll call it SP) system. That presents several questions. You really really need clear answers to these questions to finish designing the data flow in your app. Some of the questions:
How will your app get data from SP? Will you issue SQL queries to SP's database? Does SP publish an Application Programming Interface (API) for this purpose? Or a data export function you'll use in your app's workflow?
Must your app's view of SP data be up-to-the-minute? Will an hourly update be enough? Daily?
Will your app change SP data, insert new data, or delete data in the SP system? If so see the first question.
Must you reverse-engineer SP, that is, guess how its data is structured, to make your app work? Or can you get specs / documentation from SP's developers?
If you update a reverse-engineered database, dude, be careful!
If your app will use SQL to get data from SP, it will send that SQL to SP's SQL Server Express database. Node.js has tooling for that, but both the tooling and the SQL dialect differ from what you'd use with PostgreSQL. Maybe it would be wise to use SQL Server throughout: doing so reduces the cognitive load on the people who will maintain and enhance your app in the future. Neither they nor you will have to keep straight the differences between the two DBMSs.
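As a small illustration of that dialect gap (the Patient table and column names here are made up, just to show the syntax), the same "latest ten updated patients" query looks different on each side:

    -- SQL Server (T-SQL): TOP, GETDATE(), square-bracket identifiers
    SELECT TOP (10) [patientId], [lastName]
    FROM [dbo].[Patient]
    WHERE [updatedAt] > DATEADD(DAY, -1, GETDATE())
    ORDER BY [lastName];

    -- PostgreSQL: LIMIT, NOW(), double-quoted identifiers
    SELECT "patientId", "lastName"
    FROM "Patient"
    WHERE "updatedAt" > NOW() - INTERVAL '1 day'
    ORDER BY "lastName"
    LIMIT 10;

Nothing exotic on its own, but multiplied across every query in the app it adds up to the maintenance cost mentioned above.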
If you'll use an API, great! That's a clean interface between two systems. (It will probably have some irritating and confusing bugs, so allow some time for that. I've had to send pull requests to several API maintainers.)
If you figure out the answers to these sorts of questions, you'll make a good decision about your question of the third table. It's impossible to address your specific third-table question without answers to at least some of them.
And. Please. Don't forget infosec. You have a duty to keep personal data of the patients you serve away from cybercreeps.
I want to create a relational table on the Google console screen using Firebase. How can I make the relationship between the user table and the user's posts table on the Google console screen?
The two databases offered by Firebase, Realtime Database and Cloud Firestore, are both NoSQL databases. By definition, this means that they are non-relational. They do not have tables, and there are no joins. You can't set up relationships between collections or nodes.
With Cloud Firestore, you can use a DocumentReference type field to have one document point to another document, but that doesn't help you with constructing a query that joins those two documents together.
If you want to use either of the databases that Firebase offers, please take some time to get acquainted with NoSQL data modeling techniques. There are plenty of tutorials out there, as well as the product documentation for each product.
NoSQL databases don't have table-based data relationships. They also save documents in a JSON format.
Searching for a NoSQL e-commerce data model may help you.
I'm designing a system that checks a given website for any security vulnerabilities. The system includes a client (firefox plugin) and a server. The server does all the scanning while the client just relays that info to the user. If a website is dangerous, it is blacklisted; otherwise whitelisted.
The system must hypothetically be able to handle several thousand requests and updates to the database simultaneously.
Although the database is expected to have a very simple structure, I am still considering using NoSQL because my understanding is that it can handle a greater amount of queries. Is this true? Which db technology is better suited for my system?
I suggest a NoSQL database.
In fact, I've been working with both kinds of databases over the last few weeks, and searching the internet I found the differences between a NoSQL and a SQL database.
Practically, you should use a NoSQL db if you have a lot of data to query. Keep in mind that data recovery is not guaranteed in case of a db disaster.
Instead, use a SQL database if your data MUST be permanent, and you can't lose it. But query times will be longer, so it's not suggested if you have tons of data.
I understood, from what you wrote, that you need a lot of queries and you "can lose" the data (if you lose a website from the list, you'll just need to re-check it, right?).
So I suggest you go for a NoSQL db (I have worked with MongoDB; it is the most famous worldwide).
If you are considering NoSQL databases, you have to analyze your data to choose the right one.
For your use case I think you should look at document databases (like MongoDB) or, if you want really high performance, a key-value Database like Redis or Riak.
With Key-Value databases you can only use the key to find the data you want.
With document databases you still have some kind of queries to find the data.
For further information look at: http://nosql-database.org/
This is a simple question yet I was unable to find any information at all about this.
Is it possible to have sub-schemas in SQL Server 2005/2008?
Example:
Having an HR (Human Resources) schema with a sub-schema called Training (with tables related to it). It would end up like HR.Training.*, where * would be the tables.
No. You could fake this with roles by putting different users into different roles and allowing those roles to use objects.
Maybe you could fake it in the naming of the schema, like HR_Training.* and HR_Reviews.* and so forth. Cheesy, I know.
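A rough sketch of that naming trick (the table names are invented, just to show the shape):

    CREATE SCHEMA HR_Training;
    GO
    CREATE SCHEMA HR_Reviews;
    GO
    -- The "sub-schema" exists only in the name; to SQL Server these are two flat schemas.
    CREATE TABLE HR_Training.Courses (CourseId INT PRIMARY KEY, Title NVARCHAR(200));
    CREATE TABLE HR_Reviews.Reviews  (ReviewId INT PRIMARY KEY, EmployeeId INT NOT NULL);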
Are you coming from an Oracle background by any chance? Oracle has the concept of schemas, I believe. In SQL Server the closest equivalent is a database.
You can cross-query from one database to another on the same SQL Server very easily, and that would give you virtually the same kind of calling syntax, e.g. server.database.owner.object.
In your case it might look like HRSvr.HR.dbo.xxx and HRSvr.Training.dbo.xxxx.
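Something along these lines, assuming hypothetical Employees and Enrollments tables (and a linked server named HRSvr for the cross-server case):

    -- Same server, different databases: three-part names are enough.
    SELECT e.EmployeeId, t.CourseId
    FROM HR.dbo.Employees AS e
    JOIN Training.dbo.Enrollments AS t ON t.EmployeeId = e.EmployeeId;

    -- Different server: register a linked server once, then use four-part names.
    -- EXEC sp_addlinkedserver @server = N'HRSvr';
    SELECT EmployeeId, LastName
    FROM HRSvr.HR.dbo.Employees;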
Yes, you can make schemas, but it doesn't seem like you can make sub-schemas. I come from an IBM DB2 background, but our IT folks here don't seem to know that you can have other schemas besides the default 'dbo'.
I'm no beginner to using SQL databases, and in particular SQL Server. However, I've been primarily a SQL 2000 guy and I've always been confused by schemas in 2005+. Yes, I know the basic definition of a schema, but what are they really used for in a typical SQL Server deployment?
I've always just used the default schema. Why would I want to create specialized schemas? Why would I assign any of the built-in schemas?
EDIT: To clarify, I guess I'm looking for the benefits of schemas. If you're only going to use it as a security scheme, it seems like database roles already filled that.. er.. um.. role. And using it as a namespace specifier seems to have been something you could have done with ownership (dbo versus user, etc.).
I guess what I'm getting at is, what do schemas do that you couldn't do with owners and roles? What are their specific benefits?
Schemas logically group tables, procedures, views together. All employee-related objects in the employee schema, etc.
You can also give permissions to just one schema, so that users can only see the schema they have access to and nothing else.
Just like namespaces in C# code.
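A minimal sketch of both points (the schema, table, and role names are made up):

    -- Group employee-related objects in one schema...
    CREATE SCHEMA employee;
    GO
    CREATE TABLE employee.Salary (EmployeeId INT NOT NULL, Amount DECIMAL(10, 2));
    GO
    -- ...and grant access at the schema level instead of object by object.
    CREATE ROLE payroll_readers;
    GRANT SELECT ON SCHEMA::employee TO payroll_readers;

Members of payroll_readers can read everything in the employee schema; anything they have no other grants on stays out of reach.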
They can also provide a kind of naming collision protection for plugin data. For example, the new Change Data Capture feature in SQL Server 2008 puts the tables it uses in a separate cdc schema. This way, they don't have to worry about a naming conflict between a CDC table and a real table used in the database, and for that matter can deliberately shadow the names of the real tables.
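Roughly, enabling CDC on a (hypothetical) dbo.Orders table looks like this; the change table SQL Server generates lands in the cdc schema rather than next to your own tables:

    EXEC sys.sp_cdc_enable_db;
    EXEC sys.sp_cdc_enable_table
        @source_schema = N'dbo',
        @source_name   = N'Orders',
        @role_name     = NULL;
    -- The generated change table ends up as cdc.dbo_Orders_CT,
    -- so it can never collide with a user table in dbo.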
I know it's an old thread, but I just looked into schemas myself and think the following could be another good candidate for schema usage:
In a data warehouse, with data coming from different sources, you can use a different schema for each source, and then e.g. control access based on the schemas. It also avoids possible naming collisions between the various sources, as another poster replied above.
If you keep your schemas discrete, then you can scale an application by deploying a given schema to a new DB server. (This assumes you have an application or system which is big enough to have distinct functionality.)
An example: consider a system that performs logging. All logging tables and SPs are in the [logging] schema. Logging is a good example because it is rare (if ever) that other functionality in the system would overlap with (that is, join to) objects in the logging schema.
A hint for using this technique -- have a different connection string for each schema in your application / system. Then you deploy the schema elements to a new server and change your connection string when you need to scale.
At an Oracle shop I worked at for many years, schemas were used to encapsulate procedures (and packages) that applied to different front-end applications. A different 'API' schema for each application often made sense, as the use cases, users, and system requirements were quite different. For example, one 'API' schema was for a development/configuration application only to be used by developers. Another 'API' schema was for accessing the client data via views and procedures (searches). Another 'API' schema encapsulated code that was used for synchronizing development/configuration and client data with an application that had its own database. Some of these 'API' schemas, under the covers, would still share common procedures and functions with each other (via other 'COMMON' schemas) where it made sense.
I will say that not having a schema is probably not the end of the world, though it can be very helpful. Really, it is the lack of packages in SQL Server that really creates problems in my mind... but that is a different topic.
I tend to agree with Brent on this one... see the discussion here: http://www.brentozar.com/archive/2010/05/why-use-schemas/
In short... schemas aren't terribly useful except for very specific use cases. Makes things messy. Do not use them if you can help it. And try to obey the K(eep) I(t) S(imple) S(tupid) rule.
I don't see the benefit in aliasing out users tied to Schemas. Here is why....
Most people connect their user accounts to databases via roles initially. As soon as you assign a user to either sysadmin or the database role db_owner, in any form, that account is either aliased to the "dbo" user account or has full permissions on the database. Once that occurs, no matter how you assign yourself to a schema beyond your default schema (which has the same name as your user account), those dbo rights are assigned to the objects you create under your user and schema. It's kind of pointless... it's just a namespace, and it confuses true ownership of those objects. It's poor design if you ask me, whoever designed it.
What they should have done is create "groups", throw out schemas and roles, and just allow you to tier groups of groups in any combination you like, then at each tier tell the system whether permissions are inherited, denied, or overridden with custom ones. This would have been so much more intuitive and would have let DBAs better control who the real owners of those objects are. Right now it's implied in most cases that the default SQL Server user dbo has those rights... not the user.
I think schemas are like a lot of new features (whether to SQL Server or any other software tool). You need to carefully evaluate whether the benefit of adding it to your development kit offsets the loss of simplicity in design and implementation.
It looks to me like schemas are roughly equivalent to optional namespaces. If you're in a situation where object names are colliding and the granularity of permissions is not fine enough, here's a tool. (I'd be inclined to say there might be design issues that should be dealt with at a more fundamental level first.)
The problem can be that, if it's there, some developers will start casually using it for short-term benefit; and once it's in there it can become kudzu.
In SQL Server 2000, objects created were linked to that particular user. If a user, say Sam, creates an object, say Employees, that table would appear as Sam.Employees. What happens if Sam leaves the company or moves to some other business area? As soon as you delete the user Sam, what would happen to the Sam.Employees table? You would probably have to change the ownership first, from Sam.Employees to dbo.Employees. Schemas provide a solution to overcome this problem. Sam can create all his objects within a schema such as Emp_Schema. Now, if he creates an object Employees within Emp_Schema, the object would be referred to as Emp_Schema.Employees. Even if the user account Sam needs to be deleted, the schema would not be affected.
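A hedged sketch of that scenario in SQL Server 2005+ syntax (the user and table names are just placeholders):

    -- Assume a database user Sam (created here without a login, only for the sketch).
    CREATE USER Sam WITHOUT LOGIN;
    GO
    -- Sam works inside a schema rather than owning objects directly.
    CREATE SCHEMA Emp_Schema AUTHORIZATION Sam;
    GO
    CREATE TABLE Emp_Schema.Employees (EmployeeId INT PRIMARY KEY, Name NVARCHAR(100));
    GO
    -- When Sam leaves, hand the schema to dbo and drop the user;
    -- the table stays reachable as Emp_Schema.Employees, no renaming needed.
    ALTER AUTHORIZATION ON SCHEMA::Emp_Schema TO dbo;
    DROP USER Sam;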
Development: each of our devs gets their own schema as a sandbox to play in.
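For instance (DevAlice is a made-up name; the same pattern repeats per developer):

    CREATE USER DevAlice WITHOUT LOGIN;   -- or FOR LOGIN [DOMAIN\alice]
    GO
    CREATE SCHEMA DevAlice AUTHORIZATION DevAlice;
    GO
    ALTER USER DevAlice WITH DEFAULT_SCHEMA = DevAlice;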
Here is a good implementation example of using schemas with SQL Server. We had several MS Access applications. We wanted to convert those to an ASP.NET app portal. Every MS Access application is written as an app for that portal. Every MS Access application has its own database tables. Some of those are related; we put those in the common dbo schema of SQL Server. The rest get their own schemas. That way, if we want to know which tables belong to an app on the ASP.NET app portal, they can easily be navigated, visualised and maintained.