Can my employer see what tables I look at? - sql

We have several database servers at my work, most of which I have access to. On one server there is a database that contains a table I want to look at but don't "need" to look at. It's more curiosity: would anyone be able to know, or be notified, if I looked at this table?
Thanks

I am assuming you are referring to SQL Server. Refer here for a better understanding of how easy it is to track such activities.
Auditing is one of the basic capabilities of any database server. If there is a business need for the DBAs to monitor access to any specific database or table, chances are they are already logging it. You'd be better off asking your DBAs about it if you are really interested in learning about that table.
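For illustration, here is a minimal sketch of how a DBA might capture SELECTs against a single table with SQL Server Audit (all object names here are hypothetical):

-- Server-level audit target (writes events to files on disk)
CREATE SERVER AUDIT TableAccessAudit
    TO FILE (FILEPATH = 'D:\Audits\');
ALTER SERVER AUDIT TableAccessAudit WITH (STATE = ON);

-- Database-level specification: log every SELECT against one table
USE SomeDatabase;
CREATE DATABASE AUDIT SPECIFICATION CuriousTableSpec
    FOR SERVER AUDIT TableAccessAudit
    ADD (SELECT ON dbo.InterestingTable BY public)
    WITH (STATE = ON);

With something like that in place, every SELECT on the table is written to the audit log along with the login that issued it.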

Related

SQL database management question for Webscraper project

I have very little database management experience; I took a single class in undergrad. I wanted to get others' input on the best way to set up the database.
I have developed a Docker application (web scraper, PostGIS database). The web scraper scrapes multiple websites every day and then uploads to the database, checking for duplicates before inserting.
However, I don't want the Research Assistants to be able to change things in the original tables, since a lot of the webscraper depends on the structure of those tables. I gave them SELECT access, but I also want them to be able to share their own data in the database, as this is a collaborative project.
My original thought was to create a new, empty database where they have full permissions, and give them only SELECT access to the webscraper database (sketched below). I don't know if this is the best way to do this.
What are your thoughts?
Also to note: this is a contract job for a university project under a grant, so I won't be maintaining the database after the contract ends. The project also isn't big enough to hire a person with Docker and database experience just to maintain it, so I am trying to bulletproof this as much as possible.
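Something like the following is roughly what I had in mind, in case it clarifies the question (PostgreSQL syntax; role and database names are made up):

-- Research assistants get read-only access to the scraper's database
GRANT CONNECT ON DATABASE scraper_db TO research_assistants;
GRANT USAGE ON SCHEMA public TO research_assistants;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO research_assistants;
-- Cover tables the scraper creates later as well
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT ON TABLES TO research_assistants;

-- Separate database they own, where they can create and share their own tables
CREATE DATABASE shared_workspace OWNER research_assistants;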

Database optimisation

I'm starting a web application that will be used by a lot of companies (over 20K), and, most importantly, a lot of information will be recorded daily. I would like your advice on the following idea: create a database for each company and run SQL queries like this:
select * from enterprisedb1.tablename;
select * from enterprisedb2.tablename2 where enterprisedb2.tablename2.col='foo'
Please, I need your advice; I can't find anything on Google.
If you are selling this to multiple clients then it might come down to separation of their data.
On the one hand, everything for the app is in one database per client, and provided you get the connection string right, you probably never need to specify the company again for the rest of the app. No more "where customer=123" on every single query.
It also means a client could be deleted, backed up, moved, or audited in a completely independent manner.
It also means there is no risk of a developer or a query accidentally doing cross-client things, so you can even open up generic query access that still can't accidentally cross a client-to-client border. Security setup will be simpler too.
But if you have a million clients you do end up with a lot of databases. How well this works will depend on all sorts of things, including your database of choice.
You also end up having multiple copies of reference data unless you create an additional "common" database or something like that.
It's going to be very much a "depends" answer, but those are a few things to consider.
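To make the "no client filter" point concrete, a hypothetical comparison (table and column names are made up):

-- One database per client: connect to that client's database, no filter needed
select * from orders;

-- One shared database: every query must carry the client filter
select * from orders where company_id = 123;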
I suggest using common tables shared by all companies; it will be easier to manage and easier to understand.
Create one table for company data and reference its key as an integer in the other metadata tables. For good performance, the indexes and queries must be well formed.
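A rough sketch of that layout (all names are illustrative only):

create table company (
    company_id int primary key,
    name       varchar(200) not null
);

create table activity (
    activity_id bigint primary key,
    company_id  int not null references company(company_id),
    recorded_at timestamp not null,
    details     varchar(4000)
);

-- Index the company reference so per-company queries stay fast
create index idx_activity_company on activity (company_id, recorded_at);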

Split Database Security

I'm working on a .NET MVC SQL application that will contain sensitive data, for example HIV test results or income. I want to error-proof this privacy as much as possible so that no one except the user can access it (think Joe the Plumber having his information hacked by a state employee).
I read here that splitting the database in two doesn't seem reasonable:
Is splitting databases a legitimate security measure?
although I've heard of this being done. If we could just use two tables, even better.
But when I say error-proofing, I mean making it impossible for ANYONE in our company to access both databases/tables. I'm thinking about putting access to the application code (which would access both databases) and to both databases in the hands of a deep-pockets third party (like PWC or EY) for when the government comes calling or some other real need to see both data sources comes along.
Anyone have any thoughts on the cleanest way to do this? We'd want to design the tables such that most queries would not require access to both data sources so the relative cost in throughput wouldn't be that much.
You can encrypt a column of data in SQL Server. For the columns that hold sensitive data, e.g. HIV test results or income, you can encrypt the values before storing them in the DB.
Check the details here:
http://msdn.microsoft.com/en-us/library/ms179331.aspx
http://msdn.microsoft.com/en-us/library/bb964742.aspx
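A small sketch along the lines of the first article, using a symmetric key (key, certificate, table, and column names are all made up):

-- Assumes a database master key and certificate already exist
CREATE SYMMETRIC KEY PatientDataKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE PatientDataCert;

OPEN SYMMETRIC KEY PatientDataKey DECRYPTION BY CERTIFICATE PatientDataCert;

-- Write the sensitive value encrypted
UPDATE PatientResults
SET test_result_enc = ENCRYPTBYKEY(KEY_GUID('PatientDataKey'), test_result_plain);

-- Read it back, decrypted
SELECT CONVERT(VARCHAR(100), DECRYPTBYKEY(test_result_enc)) AS test_result
FROM PatientResults;

CLOSE SYMMETRIC KEY PatientDataKey;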
Let me know if it helps.

Querying multiple database servers?

I am working on a database for a monitoring application, and I got all the business logic sorted out. It's all well and good, but one of the requirements is that the monitoring data is to be completely stand-alone.
I'm using a local database on my web server for event handling and caching notifications. Since there is one event row per system in my monitoring database, it's easy to just get the id and query the monitoring data if needed, and since this is something only my web server uses, integrity can be enforced externally. Querying is not an issue either, as all the relationships are one-to-one, so it's very straightforward.
My problem comes with user administration. My original plan had it in yet another database (to meet the requirement of leaving the monitoring database alone), but I don't think I was thinking straight when I came up with that. I can get all the ids of the systems a user has access to easily enough, but how can I then efficiently pass that list to a query on the other database? Is there a solution for this? Chaining ORs together seems like an ugly and bug-prone solution.
I assume this kind of problem isn't that uncommon? What do most developers do when they have to integrate different database servers? In any case, I am leaning towards just talking my employer into putting user administration data in the same database, but I want to know if this kind of thing can be done.
There are a few ways to accomplish what you are after:
Use concepts like linked servers (SQL Server - http://msdn.microsoft.com/en-us/library/ms188279.aspx)
Individual connection strings within your front end driving the database layer
Use things like replication to duplicate the data
Also, the concept of multiple databases on a single database server instance seems like it would not violate your business requirements, and I would investigate that as a starting point, given the details you have provided.
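As a rough illustration of the linked server route, joining local user-access rows to the remote monitoring data instead of building a chain of ORs (server, database, and table names are hypothetical):

-- One-time setup on the server that holds the user administration data
EXEC sp_addlinkedserver
    @server = N'MONITORSRV',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'monitor-host';

-- A single query can then join across the two servers
SELECT m.*
FROM MONITORSRV.MonitorDb.dbo.SystemEvents AS m
JOIN dbo.UserSystemAccess AS a
    ON a.system_id = m.system_id
WHERE a.user_id = 42;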

Configuring SQL Server 2005 with both server replication and client replication

I need to set up this scenario:
A SQL Server 2005 database will create a transactional replication subscription from another database to populate a set of lookup tables. These lookup tables will then be published as a merge replication publication to the client's SQL Server Mobile.
I remember seeing a similar scenario described in SQL Server Books Online somewhere, but I can no longer find the link. I hope someone can help me find it, or otherwise point me to other similar links.
Okay, I managed to get the answers I needed at the MSDN SQL Server Replication forum.
The article I was looking for is called: Republishing Data.
Apparently, it is located under "Advanced Replication Features and Internals" in the "Configuring and Maintaining Replication" section. It's a little non-obvious, so I spent most of my time looking in the "Replicating Data Between a Server and Clients" section instead. Good to know, as there seem to be a number of other special scenarios worth looking at in there.
I don't see the point of using transactional replication to populate lookup tables. Such tables are not meant to be updated from the client side, so why do you want to combine transactional + merge replication when the data isn't modified by the subscribers?
Maybe the original scenario is not clear, so let me clarify.
The database where the original lookup tables live is either remote with a bad network connection, or operating under heavy load. This approach was suggested so that the lookup tables are replicated to another database, from which all merge replication with the clients will be performed.
Of course, it may not be the most appropriate approach to our problem, so if anyone has a better idea, we'd like to hear it.
Still, the main reason for this question is to find an article I previously found (but stupidly did not bookmark) which described our scenario quite well. Any possible leads to this article (title, author, similar topics, etc.) are definitely appreciated.