SSIS - insert data/ID into another table while importing - sql-server-2005

I'm working with SQL Server / SSIS 2005 and I am stumped on something that I believe should be a simple issue. I am trying to figure out how to insert data into another table from an incremental ID created within an SSIS project.
To help clarify, here is my example:
I have a table called users with the following values (shortened for this purpose)
User ID  Username
=======  ========
1        jsmith
2        jjones
I have another table called userpreferences with the following values
ID       Keyname        Keyvalue
=======  =============  ========
1        Send a Report  YES
2        Send a Report  YES
Now that I have described the tables: I am going to be using SSIS to insert data into the users table. User ID is an identity field in the users table, and ID is also an identity field in the userpreferences table; they correspond with each other.
What I would like to do is insert data into the userpreferences table based upon the User ID being generated in the users table. As an example, I insert a record through the import as User ID #3. I then want to insert that ID into the userpreferences table along with the keyname and keyvalue. Just to clarify, the keyname and keyvalue are not part of the text file; I want to insert those within the project.
Currently, I can achieve my goal through some "post processing" with T-SQL, but I am trying to do this more efficiently in SSIS. This would also help me a lot, as I do this quite frequently.
I tried researching this and thought this might help me: SSIS - Multiple table insert. However, that solution's screenshots are missing. Can someone assist me with this task? I would greatly appreciate it.

You could turn IDENTITY_INSERT on for your users table and generate the IDs manually in a VB Script Component transformation, so they'd be available in the data flow to load into the userpreferences table. Post processing is probably your best option here, though - just add an Execute SQL Task to run after the data flow completes, and it will all still be part of your package.
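The "post processing" step above can be sketched as the body of that Execute SQL Task. This is only a sketch under assumptions: it assumes Username uniquely identifies the newly imported rows and that the hard-coded keyname/keyvalue pair from the example is what should be inserted; since ID in userpreferences is an identity column, IDENTITY_INSERT is toggled so the generated User ID values can be reused.

```sql
-- Sketch of the Execute SQL Task body (assumed table/column names from the example).
SET IDENTITY_INSERT dbo.userpreferences ON;

INSERT INTO dbo.userpreferences (ID, Keyname, Keyvalue)
SELECT u.[User ID], 'Send a Report', 'YES'
FROM dbo.users AS u
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.userpreferences AS p
                  WHERE p.ID = u.[User ID]);  -- only users not yet present

SET IDENTITY_INSERT dbo.userpreferences OFF;
```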

Related

How to make only one user can insert data at a time in SQL Server?

I have a SQL Server database that multiple users can insert into.
But for various reasons, I want only one user to be able to insert at a time.
Example:
User 1 wants to insert 100 records. While user 1 is inserting (before the 100 records are committed to the table), other users cannot insert into the table.
I have thought about using a flag, but I want to find another way.
Is there any SQL statement that can do that?
Thanks for reading!
It seems that you need to use
INSERT INTO <table> WITH (ROWLOCK) ...
Read the following post to have a better understanding.
Using ROWLOCK in an INSERT statement (SQL Server)
Updated
SQL handles one statement at a time, and your case requires multiple records to be handled in a serialized fashion.
I think the best approach is to put the rows into a temporary table and have a background service running in real time (for example a Quartz job or Hangfire) that inserts them into the target table and then deletes them from the temporary table, tracking progress with a column named IsInserted.
For that purpose you can use the table lock or row lock concept.
ALTER TABLE Table_name SET (LOCK_ESCALATION = TABLE)  -- or AUTO or DISABLE
For more details you can also visit this link
locking in SQL Server
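As a concrete illustration of the table-lock idea mentioned above, an insert can take an exclusive table lock for the duration of its transaction, which makes other sessions' inserts wait until it commits. A minimal sketch; the table and column names are hypothetical:

```sql
BEGIN TRANSACTION;

-- TABLOCKX takes an exclusive lock on the whole table, so other
-- sessions' inserts block until this transaction commits.
INSERT INTO dbo.Orders WITH (TABLOCKX) (CustomerId, Amount)
VALUES (1, 100.00);

-- ...insert the remaining records here...

COMMIT TRANSACTION;
```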

Trying to figure out how a table in SQL Server is getting populated

There is no documentation and no way to get in touch with the architects of the table. Are there any queries or options in SSMS that allow me to know what the table is connected to?
A brief explanation of my issue is that there are two tables that seem to be connected.
Table A contains 10M distinct users and Table B 5M. All users in Table B are in Table A, but obviously not all of Table A's users are in Table B. After running multiple tests on the website, I can't find the trigger that copies the user information from Table A to Table B. That's why I would like to know how Table B gets populated.
Like Diado already said, try running a profiler for a while. It will log queries against the database (e.g. INSERT and UPDATE statements), the source of the connection (e.g. IP address), and sometimes the process name. That could provide clues about what's interacting with the table in question.
https://learn.microsoft.com/en-us/sql/tools/sql-server-profiler/sql-server-profiler?view=sql-server-2017

Create a trigger for making a single audit table in sql server

How do I create a trigger in Microsoft SQL Server to keep track of all deleted data from any table in the database in a single audit table? I do not want to write a trigger for each and every table in the database. There will be only one single audit table that keeps track of the deleted data of every table.
For example:
If data is deleted from a person table, get all of that deleted data and store it in XML format in an audit table
Please check the solution I describe at SQL Server Log Tool for Capturing Data Changes.
The solution is built on dynamically creating triggers on selected tables to capture data changes (after insert, update, delete) and store them in a general table.
A job then executes periodically and parses the data captured in this general table. Once the data is parsed, it is easier for humans to see which table field changed along with its old and new values.
I hope this proposed solution helps you with your own.
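One of the generated per-table triggers could look like the following sketch, which serializes the deleted rows to XML into a single audit table. The audit_log table and all names here are illustrative assumptions, not part of the original answer:

```sql
-- Assumed single audit table (illustrative names):
-- CREATE TABLE dbo.audit_log (TableName SYSNAME, DeletedData XML, DeletedAt DATETIME);

CREATE TRIGGER trg_person_delete ON dbo.person
AFTER DELETE
AS
BEGIN
    INSERT INTO dbo.audit_log (TableName, DeletedData, DeletedAt)
    SELECT 'person',
           (SELECT * FROM deleted FOR XML AUTO, ELEMENTS),  -- deleted row(s) as XML
           GETDATE();
END;
```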

How do I insert data from parameters into a table with a trigger after Update

I am new to triggers. I am trying to write one that lets me insert the reason for an update into a log table after another table is updated. The problem is that I am trying to make the user type an input as they would in a stored procedure, but that apparently doesn't work with triggers.
Let's say we have the table, Users:
User ID | User
--------+-------
1       | John
The result I want is the following:
Log_ID | Reason_Change
-------+------------------------------------------
1      | John wanted to change his name to John25
Is that possible?
I am trying:
#Reason_Change VARCHAR(500)
In SQL Server, you can do this with an INSTEAD OF trigger on a view. The idea is the following:
Create a view that contains the columns from users along with the change reason.
Define an INSTEAD OF trigger on the view. This will allow an insert to include the change reason.
Only use the view for inserts.
Then, you can include the reason in the insert. The trigger would put all the other columns in users and update the logs table appropriately.
Such triggers are explained in the documentation.
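Adapted to the update scenario in the question, the idea could be sketched like this. The view exposes an extra Reason_Change column, and the INSTEAD OF trigger applies the change to Users and writes the reason to the log table. All object and column names here are assumptions for illustration:

```sql
CREATE VIEW dbo.Users_WithReason
AS
SELECT [User ID], [User],
       CAST(NULL AS VARCHAR(500)) AS Reason_Change  -- placeholder column
FROM dbo.Users;
GO

CREATE TRIGGER trg_Users_WithReason ON dbo.Users_WithReason
INSTEAD OF UPDATE
AS
BEGIN
    -- Apply the real change to the base table...
    UPDATE u
    SET [User] = i.[User]
    FROM dbo.Users AS u
    JOIN inserted AS i ON u.[User ID] = i.[User ID];

    -- ...and record the supplied reason in the log table.
    INSERT INTO dbo.Logs (Reason_Change)
    SELECT i.Reason_Change FROM inserted AS i;
END;
GO

-- Usage:
-- UPDATE dbo.Users_WithReason
-- SET [User] = 'John25',
--     Reason_Change = 'John wanted to change his name to John25'
-- WHERE [User ID] = 1;
```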

How to track changes for certain database tables?

I have a program that takes a user and updates information about him/her in five tables. The process is fairly sophisticated, as it takes many steps (pages) to complete. I have logs, sysout, and syserr statements that help me find SQL queries in the IDE console, but it doesn't have all of them. I've already spent many days trying to catch the missing queries by debugging, but no luck so far. The reason I am doing this is that I want to automate user information updates so I don't have to go through every page entering user details manually.
I wonder if there is some technique that will show me database table changes, as I already know the table names - by changes I mean whether it was an update or insert statement and what exactly changed (column name and value inserted/updated). Any advice is greatly appreciated. I have IBM RAD and a DB2 database. Thanks.
In DB2 you can track basic auditing information.
DB2 can track what data was modified, who modified the data, and the SQL operation that modified the data.
To track when data was modified, define your table as a system-period temporal table. The row-begin and row-end columns in the associated history table contain information about when data modifications occurred.
To track who and what SQL modified the data, you can use non-deterministic generated expression columns. These columns can contain values that are helpful for auditing purposes, such as the value of the CURRENT SQLID special register at the time that the data was modified. Possible values for non-deterministic generated expression columns are defined in the syntax for the CREATE TABLE and ALTER TABLE statements.
For example
CREATE TABLE TempTable (
    balance INT,
    userId VARCHAR(100) GENERATED ALWAYS AS ( SESSION_USER ),
    opCode CHAR(1) GENERATED ALWAYS AS ( DATA CHANGE OPERATION )
    ... SYSTEM PERIOD (SYS_START, SYS_END));
The userId column stores who modified the data. This column is defined as a non-deterministic generated expression column that contains the value of SESSION_USER special register.
The opCode column stores the SQL operation that modified the data. This column is defined as a non-deterministic generated expression column and stores a value that indicates the type of SQL operation.
Suppose that you then use the following statements to create a history table for TempTable and to associate that history table with TempTable:
CREATE TABLE TempTable_HISTORY (balance INT, user_id VARCHAR(128) , op_code CHAR(1) ... );
ALTER TABLE TempTable ADD VERSIONING
USE HISTORY TABLE TempTable_HISTORY ON DELETE ADD EXTRA ROW;
Capturing SQL statements for a limited number of tables and a limited time - as far as I understand your problem - could be solved with the DB2 Audit facility.
create audit policy tabsql categories execute status both error type normal
audit <tabname> using policy tabsql
You have to have SECADM rights in the database, and the second command will start the audit process. You can stop it with
audit <tabname> remove policy
Check out the
db2audit
command to configure paths and extract the data from the audit file to a delimited file which then could be loaded again into the database.
The necessary tables can be created with the provided sqllib/misc/db2audit.ddl script. You will need to query the EXECUTE table for your SQL details.
Please note that audit can capture huge amounts of data, so make sure to switch it off again after you have captured the necessary information.