I am working with sensitive/private files stored in SQL Server as VarBinary(MAX). Is there a way to tell the database or NHibernate to nullify the column after a period of time has passed since the row's creation?
Make sure to put a timestamp column on the table, then set up a SQL Agent scheduled job with a query that periodically deletes those rows based on that time.
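A minimal sketch of the job's query, assuming a hypothetical dbo.PrivateFiles table with CreatedAt and FileData columns (all names are illustrative):

-- Run from a SQL Agent job on whatever schedule you need.
-- DELETE removes the expired rows entirely; use the commented UPDATE
-- variant instead if you only want to blank the file contents.
DELETE FROM dbo.PrivateFiles
WHERE CreatedAt < DATEADD(DAY, -30, GETUTCDATE());

-- UPDATE dbo.PrivateFiles
-- SET FileData = NULL
-- WHERE CreatedAt < DATEADD(DAY, -30, GETUTCDATE());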
Absolutely not
Imagine the fun when SQL Server decides to change data apparently on its own because the code monkey fluffed setting up whatever mechanism would be used...
You could (off the top of my head):
Encrypt the columns (see the sketch after this list)
Schedule a clean-up (a SQL Agent job, perhaps driven by a config table)
Don't use SQL Server for the files themselves, just store paths/links
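For the encryption option, a minimal sketch using T-SQL's ENCRYPTBYPASSPHRASE / DECRYPTBYPASSPHRASE (table, column, and variable names are hypothetical). Note that these functions cap ciphertext at roughly 8,000 bytes, so genuinely large VarBinary(MAX) files would have to be encrypted in the application instead:

-- Placeholder payload; only suitable for payloads under ~8 KB
DECLARE @fileBytes varbinary(max) = 0x0102;

-- Encrypt on the way in
INSERT INTO dbo.PrivateFiles (FileData)
VALUES (ENCRYPTBYPASSPHRASE('some secret passphrase', @fileBytes));

-- Decrypt on the way out; returns NULL if the passphrase is wrong
SELECT CAST(DECRYPTBYPASSPHRASE('some secret passphrase', FileData) AS varbinary(max))
FROM dbo.PrivateFiles;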
I am using AWS RDS SQL Server as my DB. I have a few tables without a timestamp column and need to query those tables for data inserted within the last hour or so. I can't add extra columns, create triggers, or change the source in any way. The Change Tracking (CT) feature of SQL Server seems the way to go (a rough sketch of enabling it is below), but I wanted to know whether there is any other way.
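For reference, enabling Change Tracking looks roughly like this (database, table, and key names are placeholders; CT requires the table to have a primary key):

-- Enable CT at the database level, then per table
ALTER DATABASE MyDb
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.MyTable
ENABLE CHANGE_TRACKING;

-- Later, pull rows changed since a previously saved sync version
DECLARE @last_sync bigint = 0; -- persist this value between runs
SELECT t.*
FROM CHANGETABLE(CHANGES dbo.MyTable, @last_sync) AS ct
JOIN dbo.MyTable AS t ON t.Id = ct.Id; -- assumes an Id primary key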
I have 11 databases, each containing a User Details table with all employee details. Each table has a "Status" column (1 for Active, 0 for Inactive). I regularly have to update the "Status" value to 0 or 1 for particular employees, and to do that I have to go into each database in turn, open the User table, and make the update. Repeating the same task across all the databases consumes a lot of time.
If there is a short query or procedure that I could run once to do all the updates in one go, it would be a great help.
I see a couple of possible options.
You could build an SSIS package to connect to each database and do the necessary updates, provided the criteria for which employees to update (and what to update them to) can be found within the database or in some external source such as a text file.
Alternatively, you could use SQLCMD mode in SQL Server Management Studio and then, within your SQL script, use the :CONNECT command to switch to each server and database, something like this...
:CONNECT Server1
USE Database1
GO
-- put your update SQL script here
GO
:CONNECT Server2
USE Database2
GO
-- put your update SQL script here
GO
...
These links provide some further information on using SQLCMD mode...
Connecting to multiple servers in a Query Window using SQLCMD
SQL Server SQLCMD Basics
Noel
As you mentioned, you have 11 databases.
Problem: First, you are using a very poor approach to database design.
What really happens: When you use multiple databases and need to check every one of them, the server has to connect to a different database again and again, which, because of the connection handling, takes far more time than switching between tables.
Solution: In your case, the only real option is to loop over the databases and run the query against each one; a sketch of doing this with dynamic SQL follows below.
Suggestion: You should keep all the data in the same database; you can add an extra column to the tables to tie your data to the different entities.
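A minimal sketch of that loop using dynamic SQL (the database name pattern, table, and column names are assumptions; adjust them to your schema):

-- Build one UPDATE per database and run them in a single batch
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'UPDATE ' + QUOTENAME(name)
             + N'.dbo.UserDetails SET Status = 0 WHERE EmployeeId IN (101, 102);'
             + NCHAR(10)
FROM sys.databases
WHERE name LIKE N'Company%'; -- however your 11 databases are named

EXEC sys.sp_executesql @sql;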
I am looking for a way to set up a scheduled update from a linked server I created to a local db. I am not familiar with triggers, but from what I've read you have to set them up on the originating server, and I only have read access to the MySQL database. Basically, all I am trying to do is make a local copy of two tables from the MySQL db. I can do so manually with SELECT INTO statements, but I would like some automation if possible. Any thoughts on how to achieve this? Also, I am using SQL Server 2008 R2. Thanks!
You have several options:
Copy all data from the source table (do not use this if the source table is big)
If the source table has a column that can be used to determine which records should be copied, use that (in MySQL this is most often an automatically updated timestamp column)
Set up a trigger to track modifications
To copy, you can set up a Linked Server or you can use SSIS
To use a linked server you can use OPENQUERY() (see the sketch after this list)
You can schedule your task with SQL Server Agent
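A minimal sketch of the OPENQUERY() approach, refreshing the local copy in full (fine for small tables; the linked server name MYSQL_LINK and all table/column names are assumptions):

-- Refresh the whole local copy from the remote table
TRUNCATE TABLE dbo.LocalCopy;

INSERT INTO dbo.LocalCopy (id, name)
SELECT id, name
FROM OPENQUERY(MYSQL_LINK, 'SELECT id, name FROM sourcedb.source_table');

Wrap this in a SQL Server Agent job step to run it on a schedule.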
I have a PostgreSQL database that stores real-time data from sensors in a specific table (every 30sec).
What I want to do is periodically pull the data from the remote PostgreSQL database (for instance, every 30 sec) and store it in SQL Server 2005 so I can manipulate it locally. I don't mind the two databases having duplicate tables. Actually, that is exactly what I want to achieve!
So far, I have the PostgreSQL database set up as a Linked Server in SQL Server, and I can query and retrieve the sensor data. However, I would prefer to store it in my SQL Server for performance reasons.
Solution so far:
Run SELECT ... FROM OPENQUERY statements against the linked PostgreSQL server and insert the results into my table in SQL Server. Repeat this periodically, storing fresh data only (e.g. rows with a newer timestamp); a rough sketch follows below.
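Roughly like this, assuming the linked server is called PGSERVER and the sensor table has a ts timestamp column (both assumptions):

-- Keep a local high-water mark and insert only newer readings
DECLARE @last datetime = (SELECT ISNULL(MAX(ts), '19000101') FROM dbo.SensorData);

INSERT INTO dbo.SensorData (sensor_id, ts, value)
SELECT sensor_id, ts, value
FROM OPENQUERY(PGSERVER,
     'SELECT sensor_id, ts, value FROM public.sensor_data') AS src
WHERE src.ts > @last; -- note: this filter runs locally after the pull;
                      -- filtering remotely needs a dynamically built OPENQUERY string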
I assume that my proposed solution is not ideal. I want to know the best practices for achieving this synchronization between the two databases.
Thank you in advance!
If you don't want to write your own code to do that, you can use SymmetricDS to sync the table from PostgreSQL to MSSQL.
I have written SQL statements (stored in a text document) that load data into a SQL Server database. These statements need to be repeated daily. Some of the statements use the NewId() function to populate a keyed field in the database, and this works fine.
While I'm in the process of writing an application to replicate these statements, I want to use Access queries and macros instead of copying and pasting queries into SQL Server, thus saving me time on a daily basis. All is working fine, but I can't find any function that will replace the SQL Server NewId() function. Does one exist, or is there a workaround?
I'm using SQL Server 2005 and Access 2007.
On top of matt's answer, you could simply use a pass-through query and keep your existing, working queries from MS Access.
A solution would be to insert the stguidgen() function into your code; you can find it here: http://trigeminal.fmsinc.com/code/guids.bas (archived copy: https://web.archive.org/web/20190129105748/http://trigeminal.fmsinc.com/code/guids.bas)
The only workaround I can think of would be to define the column in your Access database as type "Replication ID" and make it an AutoNumber field. That will automatically generate a unique GUID for each row, and you won't need to use newid() at all. In SQL Server, you would just make the default value for the column newid().
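The SQL Server side of that, as a minimal sketch (table, column, and constraint names are hypothetical):

-- Let the engine generate the GUID so inserts don't have to supply it
ALTER TABLE dbo.MyTable
ADD CONSTRAINT DF_MyTable_Id DEFAULT NEWID() FOR Id;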
Again, there seems to be confusion here.
If I'm understanding correctly:
You have an Access front end.
You have a SQL Server 2005 back end.
What you need is the ability to generate the GUID in the SQL Server table. So, answers that suggest adding an AutoNumber field of type ReplicationID in Access aren't going to help, as the table isn't a Jet table but a SQL Server table.
The SQL can certainly be executed as a pass-through query, which will hand everything off to SQL Server for processing, but I wonder why there isn't a default value for this field in SQL Server. Can SQL Server 2005 tables not have NewId() as the default value? Or is there some other method for having a field populate with a new GUID? I seem to recall something about using GUIDs and marking them "not for replication" (I don't have access to a SQL Server right at the moment to look this up).
Seems to me it's better to let the database engine do this kind of thing rather than executing a function in your SQL to do it, but perhaps someone can enlighten me on why I'm wrong about that.