Can someone please explain to me what a client in an SAP NetWeaver system using the ABAP stack is, and how it creates logical separations within the same installation?
One SAP system can be used by several independent companies (or subsidiaries of a company). The client is used to separate the data of these companies. Most database tables in an SAP system therefore have the client as a key field. This applies, for example, to transactional data, master data and client-dependent customizing data. Nevertheless, there is also customizing that is valid across all clients (so-called cross-client customizing).
In a nutshell: Client is a key field in most database tables to separate data of multiple companies using the same SAP system. Each company has its own client number.
In addition to Christian Trebing's answer, there are some more important points to note.
A user logs on to a client. Every client has its own set of users and its own authentication.
For database operations on tables that have a "client" field, the code generally does not need to specify the client; the system takes care of that automatically.
Most standard processes (business transactions, external communication, etc.) are already set up per client and execute that way.
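To make the idea concrete, here is a minimal sketch in plain SQL (the table and columns are invented for illustration; in a real SAP system the client column is called MANDT, and Open SQL adds the client filter for you):

-- Hypothetical client-dependent table: the client column (MANDT in SAP
-- tables) is the leading part of the primary key.
CREATE TABLE SalesOrders (
    Mandt      CHAR(3)       NOT NULL,  -- client number, e.g. '100' or '200'
    OrderId    CHAR(10)      NOT NULL,
    CustomerId CHAR(10)      NOT NULL,
    Amount     DECIMAL(15,2) NOT NULL,
    PRIMARY KEY (Mandt, OrderId)
);

-- What the system effectively does when a user logged on to client 100
-- selects "all" orders:
SELECT OrderId, CustomerId, Amount
FROM SalesOrders
WHERE Mandt = '100';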
Good morning,
I am using the ASP.NET framework with an Azure client database.
I am now creating another server on Azure to host databases. On this server, for each customer registering on the website (for which one entry is created in my first database), I need to create a database with 8 tables, identical for each customer.
What would be the best way to map the ASP.NET ID to its new database? Which framework would you recommend?
Thanks
Rather than running a VM where you're going to have to manage a SQL Server installation and write a bunch of code to handle a database-per-tenant scenario, I highly recommend taking a look at Azure SQL's multi-tenant sharding support. All of that code is already written for you. And you aren't necessarily paying for one DB per client: check out elastic pooling.
You can read the docs here.
Also note, this option will scale very well.
I have done this three different ways: a database per client where I wrote my own code to manage sharding, a single database with a separate schema per client (a huge pain in the rear), and using Azure SQL sharding support. It's not just the issue of correctly separating client data. You also need to think about querying for reporting across all client databases, and managing schema changes. Under the first two options, if you change a schema, you get to modify N client databases. Azure SQL's sharding tools will manage this for you.
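If you go the elastic pool route, the tenant databases share a pool of resources instead of each being billed on its own. A minimal sketch in Azure SQL T-SQL (TenantPool, Tenant041 and Tenant042 are made-up names, and the pool itself must already exist, created via the portal, PowerShell or the REST API):

-- Create a new tenant database directly inside an existing elastic pool.
CREATE DATABASE Tenant042
( SERVICE_OBJECTIVE = ELASTIC_POOL ( name = TenantPool ) );

-- Or move an existing standalone tenant database into the pool.
ALTER DATABASE Tenant041
MODIFY ( SERVICE_OBJECTIVE = ELASTIC_POOL ( name = TenantPool ) );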
I have a scenario: my application is a SaaS-based app catering to multiple clients. Data integrity for each client is essential.
Is it better to keep my tables
client-specific
OR
relational tables shared by all clients?
For example: I have a mapping table with the fields MapField1 and MapField2, and I need this kind of data for each client.
Should I have one table per client, named like MappingData_<ClientId>,
or a single table with a mapping to the ClientId:
MappingData with fields MapField1, MapField2, ClientId?
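To be concrete, the single-table option would look something like this (a rough sketch; the key includes ClientId so each client's rows stay separate):

-- One shared mapping table; ClientId is part of the key, so every row
-- belongs to exactly one client.
CREATE TABLE MappingData (
    ClientId  INT          NOT NULL,
    MapField1 NVARCHAR(50) NOT NULL,
    MapField2 NVARCHAR(50) NOT NULL,
    CONSTRAINT PK_MappingData PRIMARY KEY (ClientId, MapField1)
);

-- Every query then filters on the client:
SELECT MapField1, MapField2
FROM MappingData
WHERE ClientId = 42;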
I would have a separate database for each customer. (Multiple databases in a single SQL Server instance.)
This would allow you to design it once, with a single schema.
No dynamically named tables compromising test & development
Upgrades and maintenance can be designed and tested in one DB, then rolled out to all
A single customer's data can be backed-up, restored or dropped exceedingly simply
Bugs discovered/exploited in one DB won't compromise the integrity of other DBs
Data access (read and write) can be managed using SQL Logins (No re-inventing the wheel)
If there is a need for globally shared data, that would go in another database, with its own set of permissions for the different SQL Logins.
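As a rough sketch of that setup (all names and the password are invented; on SQL Server 2008 and earlier use sp_addrolemember instead of ALTER ROLE):

-- One database and one login per customer.
CREATE DATABASE CustomerA;
CREATE LOGIN CustomerA_Login WITH PASSWORD = 'Ch4nge-Me!Example';
GO
USE CustomerA;
CREATE USER CustomerA_User FOR LOGIN CustomerA_Login;
ALTER ROLE db_datareader ADD MEMBER CustomerA_User;
ALTER ROLE db_datawriter ADD MEMBER CustomerA_User;
GO
-- Globally shared data lives in its own database (assumed to exist here),
-- and each customer's login only gets read access to it.
USE SharedData;
CREATE USER CustomerA_Shared FOR LOGIN CustomerA_Login;
ALTER ROLE db_datareader ADD MEMBER CustomerA_Shared;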
The use of a single database with all customers in it is my next best choice. You still have a single schema, but you don't get to partition the customers' data, you have to manage access rights and permissions yourself, and you take on a whole host of additional design and testing work.
I would never go near dynamically creating new tables for additional customers. A new table name means all your queries need to be updated with the new table name, plus a whole host of other maintenance headaches.
I'm pretty much of the opinion that if you want to create tables dynamically during the Business As Usual use of an application/service, you've designed it badly.
SO has a tag for the thing you're describing: "multi-tenant".
Visualize the architecture for supporting a multi-tenant database application as a spectrum. At one extreme of the spectrum is "shared nothing", which means each tenant has its own database. At the other extreme of the spectrum is "shared everything", which means tenants share tables, and each row in each table belongs to one tenant. (Each row contains a tenant identifier.)
Terminology seems to overlap, so read carefully. What one writer means by shared schema might be identical to what another writer means by shared everything.
This SO answer, also written by me, describes the differences and the tradeoffs in terms of cost, data isolation and protection, maintenance, and disaster recovery. It also links to a fairly good introductory article.
Excuse me if the question is simple. We have multiple medical clinics, each running their own SQL-database EHR.
Is there any way I can interface each local SQL database with a cloud system?
I essentially want to use the data of the patient currently being consulted to generate a pathology request that links to a cloud (Google App Engine?) database.
As a medical student / software developer this project of yours interests me greatly!
If you don't mind me asking, where are you based? I'm from the UK and unfortunately there's just no way a system like this would get off the ground as most data is locked in proprietary databases.
What you're talking about is fairly complex anyway; whatever country you're in, I assume there would have to be a lot of checks and security around any cloud system that dealt with patient data. Theoretically though, what you would ideally want to do is create an online database (cloud, hosted, intranet, etc.) and scrap the local databases entirely.
You then have one 'pool' of data each clinic can pull information from (i.e. ALL records for patient #3563). They could then edit that data and/or insert new records and SAVE them, exporting them back to the main database.
If there is a need to keep certain information private to one clinic only, this could still be achieved in one database in a number of ways, or you could retain parts of the local database and have them merge with the cloud data as they're requested by the clinic.
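One of those "number of ways" is simply tagging each record with the clinic that owns it. A minimal SQL sketch (all table and column names invented):

-- Central pooled table: every record carries the clinic it came from.
CREATE TABLE PatientRecords (
    RecordId  INT IDENTITY PRIMARY KEY,
    PatientId INT NOT NULL,
    ClinicId  INT NOT NULL,
    IsPrivate BIT NOT NULL DEFAULT 0,  -- visible to the owning clinic only
    Notes     NVARCHAR(MAX)
);

-- What clinic 7 sees when pulling ALL records for patient #3563:
SELECT RecordId, ClinicId, Notes
FROM PatientRecords
WHERE PatientId = 3563
  AND (IsPrivate = 0 OR ClinicId = 7);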
This might be a bit outdated, but you guys should check out https://www.firebase.com/. It would let you do what you want fairly easily. We just did this for a client in the exact same business you are in.
Basically, Firebase lets you work with a central database in the cloud that is automatically synchronised with all of its front-ends. It even handles losing the connection to the server automagically. It's the best solution I've found so far for keeping several systems running against a single cloud database.
We used to have our own backend that would try its best to sync changes, but you need to be really careful with inter-system unique IDs for your tables (i.e. creating a new user at one branch must not yield an ID that already exists in any other branch or in the central database). It becomes cumbersome very quickly.
CakePHP can generate this kind of unique ID pretty easily and automatically, but you still have to do the work of syncing all the local databases with the central repository.
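The usual way around the colliding-ID problem is to use UUIDs/GUIDs as primary keys instead of per-branch auto-increment counters. A minimal SQL sketch (names invented):

-- GUID primary keys can be generated independently at every branch
-- without colliding, unlike per-branch IDENTITY counters.
CREATE TABLE Users (
    UserId   UNIQUEIDENTIFIER NOT NULL
             CONSTRAINT DF_Users_UserId DEFAULT NEWID()
             CONSTRAINT PK_Users PRIMARY KEY,
    BranchId INT           NOT NULL,
    UserName NVARCHAR(100) NOT NULL
);

-- A row created offline at a branch keeps the same key when it is
-- later pushed to the central database.
INSERT INTO Users (BranchId, UserName) VALUES (3, N'jdoe');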
We want to distribute/synchronize data from our data warehouse (MS SQL Server) to external customers (also MS SQL Server). The connection has to be secure, because we are dealing with trusted data. Transmission of data from our system to the external client systems must be via HTTP/HTTPS.
In addition, it is possible that the clients still run their systems with an older database schema, so already-existing tables and columns should be transmitted and non-existing ones should be ignored.
It is most likely that we will have large database updates, and the updates have to arrive in almost real time.
And it is definitely necessary that the data is stored in a client side datawarehouse / SQL database.
The whole process should also include good monitoring possibilities in case something goes wrong.
We started to develop our own .NET solution, but I thought this must be a fairly common problem: exchanging data between different systems.
Does anybody know about an existing solution which we can adapt to our scenario?
Any help is appreciated!
The problem is so common that it has a dedicated component in SQL Server: Service Broker. Roll your own .NET thing and you have to take care of many problems yourself: how are you going to handle downtime? Retries? Duplicates? Out-of-order delivery? Authentication of non-domain-joined computers? Routing for machines that change names? Service upgrades? Transactional consistency and rollbacks? Are you going to use DTC? You can look at the demo I gave at SQL Connections to see how you can easily scale SSB to a throughput of well over 1000 msgs/sec (1 KB payload) on commodity hardware.
The only requirement is that all participants must be at least SQL Server 2005 (there is no SSB in 2000).
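For a flavour of the plumbing involved, here is a heavily trimmed Service Broker sketch (the message type, contract, queue and service names are all made up, and the initiator-side objects and routing are set up analogously):

-- Objects the target database needs in order to receive data batches.
CREATE MESSAGE TYPE [//DW/RowBatch] VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT [//DW/SyncContract] ([//DW/RowBatch] SENT BY INITIATOR);
CREATE QUEUE SyncTargetQueue;
CREATE SERVICE [//DW/SyncTarget] ON QUEUE SyncTargetQueue ([//DW/SyncContract]);
GO
-- Sending a batch from the initiator side looks roughly like this:
DECLARE @h UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE [//DW/SyncSource]
    TO SERVICE '//DW/SyncTarget'
    ON CONTRACT [//DW/SyncContract]
    WITH ENCRYPTION = ON;
SEND ON CONVERSATION @h
    MESSAGE TYPE [//DW/RowBatch] (N'<rows><row id="1"/></rows>');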
Just use regular SQL connections over a secure VPN or an SSH tunnel. That should be very easy for your networking guys to set up.
For example, you can create a linked server. Then a SQL scheduled job could move the data:
-- TRUNCATE TABLE cannot target a four-part linked-server name, so clear
-- the remote table with DELETE (or run TRUNCATE remotely via EXEC ... AT).
DELETE FROM targetserver.dbname.dbo.tablename;

INSERT INTO targetserver.dbname.dbo.tablename (a, b, c)
SELECT a, b, c
FROM dbname.dbo.sourcetable;
Since the linked server talks to your server over a VPN or SSH tunnel, all data is sent encrypted over the internet.
We have a system with two clients (a number which will increase). These two clients connect to the same server/database; however, neither should be able to see the other's sensitive information. There is, however, some shared non-sensitive information.
There is also an administrative department that does work on behalf of both clients. They are allowed to see all sensitive data.
We currently handle this by holding a ClientID against the tables in question and checking against the ClientID, via a mixture of views and queries, to control access for each client.
I want to move to a consistent handling of this in our system (e.g. all views, or all queries), but I wondered whether there is an easier or better pattern than views for handling this situation.
We're using SQL Server 2005; however, an upgrade to 2008 is possible.
Cheers
The most logical way is to have (indexed) views filtered by what each client can see.
Add read/write permissions for each client on their views; admins access the tables directly.
But it looks to me as though each client is a logically separate entity from the others.
If that's the case, you might consider having one DB per client and one DB for the shared stuff.
Admins can access everything; each client can only access its own DB and read from the common DB.
A third option is to look into schemas and separate your clients there.
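A minimal sketch of the view-plus-permissions option (table, view and role names are invented; SQL Server 2005 has no built-in row-level security, so the view does the filtering and WITH CHECK OPTION keeps writes inside the client's own rows):

-- Base table holds both clients' rows, keyed by ClientId.
CREATE TABLE dbo.Orders (
    OrderId  INT IDENTITY PRIMARY KEY,
    ClientId INT NOT NULL,
    Details  NVARCHAR(200) NOT NULL
);
GO
-- One filtered view per client; only that client's rows are visible.
CREATE VIEW dbo.Orders_Client1
WITH SCHEMABINDING
AS
SELECT OrderId, ClientId, Details
FROM dbo.Orders
WHERE ClientId = 1
WITH CHECK OPTION;
GO
-- Client 1's role can use only the view; the admin role reads the table directly.
CREATE ROLE Client1Role;
CREATE ROLE AdminRole;
DENY SELECT ON dbo.Orders TO Client1Role;
GRANT SELECT, INSERT, UPDATE, DELETE ON dbo.Orders_Client1 TO Client1Role;
GRANT SELECT ON dbo.Orders TO AdminRole;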