I am currently developing an ASP.NET application with SQL Server as the backend.
I am having a serious problem with the database. I provide a database backup to the user.
Suppose SQL Server crashes and my client wants to restore the database from that backup.
My application maps tables to each other through an auto-incremented ID column. Suppose the backup contains rows numbered 1 to 50 in the auto-incremented column of each table, and the relationships between tables are based on those values. When the user restores the old data from the backup, the database will try to map that data again on the basis of the auto-incremented column.
But since the database crashed, the auto-increment column will not start again from 1; it will continue the respective count from 51.
Then all the mappings will go wrong and not a single table will give me the proper mapping information.
I have around 25 tables in my application.
Now, what is a possible solution to get the database back into a proper state?
What should I do about mapping my tables?
Please suggest the best possible way to resolve this.
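For illustration, this is the behaviour in question, along with the command that can inspect or reset the identity seed after a restore (a minimal sketch; the table name is hypothetical):

    -- Hypothetical table with an auto-incremented ID column
    CREATE TABLE dbo.Customers (
        Id   INT IDENTITY(1,1) PRIMARY KEY,
        Name NVARCHAR(100) NOT NULL
    );

    -- After a restore, check what the current identity value actually is
    DBCC CHECKIDENT ('dbo.Customers', NORESEED);

    -- If it continues from the old value (e.g. 51), reset it so new rows
    -- follow on from the highest restored ID (50 in the example above)
    DBCC CHECKIDENT ('dbo.Customers', RESEED, 50);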
I'm working on a fast way to make a clone of a database to test an application. My database has some specific tables that are quite big (50+ GB), but the large majority of the tables only have a few MB. On my current server, the dump + restore takes several hours. These big tables have date fields.
With that context in mind, my question is: is it possible to apply some kind of restriction on table rows to select the data that is being dumped? E.g. on table X, only dump the rows whose date is Y.
If this is possible, how can I do it? If it's not possible, what would be a good alternative?
You can use COPY (SELECT whatever FROM yourtable WHERE ...) TO '/some/file' to limit what you export.
See the documentation for the COPY command.
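A minimal sketch, assuming a hypothetical big_table with a created_at date column:

    -- Dump only the recent rows of the big table
    COPY (SELECT * FROM big_table WHERE created_at >= DATE '2023-01-01')
        TO '/tmp/big_table_recent.csv' WITH (FORMAT csv, HEADER);

    -- Load it into the clone (same column layout assumed)
    COPY big_table FROM '/tmp/big_table_recent.csv' WITH (FORMAT csv, HEADER);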
You could use row-level security and create a policy that lets the dumping database user see only those rows that you want to dump (make sure that user is neither a superuser nor the owner of the tables, because such users are exempt from row-level security).
Then dump the database as that user, using the --enable-row-security option of pg_dump.
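A sketch of that setup, with hypothetical role, table, and column names:

    -- Dedicated role for dumping; must not be a superuser or the table owner
    CREATE ROLE dumper LOGIN PASSWORD 'secret';
    GRANT SELECT ON big_table TO dumper;

    -- Policy: the dumper only sees recent rows
    ALTER TABLE big_table ENABLE ROW LEVEL SECURITY;
    CREATE POLICY dump_recent ON big_table
        FOR SELECT TO dumper
        USING (created_at >= DATE '2023-01-01');

    -- Then, from the shell:
    --   pg_dump --enable-row-security --username=dumper mydb > partial.sql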
I am trying to set up Pentaho so that:
My source data is in a MySQL database and the target is an Amazon Redshift database.
I want incremental loads into the Redshift table, based on the last-updated timestamp from the MySQL table.
The primary key is the student ID.
Can I implement this using the Insert/Update step in Pentaho?
The Insert/Update step in Pentaho Data Integration inserts a row if it doesn't exist in the destination table, or updates it if it is already there. It has nothing to do with incremental loads by itself, but if your loads should insert or update records based on some Change Data Capture (CDC) mechanism, then it is the right step at the end of the process.
For example, you could go one of two ways:
If you have CDC, limit the data in the MySQL Table Input step, since you already know when a record was last modified (at the last load); see the sketch after this list.
If you don't have CDC and you are comparing entire tables, join the two sets to produce the rows that have changed, then perform the load (the slower solution).
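A sketch of the first option's Table Input query, assuming a hypothetical students table with a last_updated column; ${LAST_LOAD_TS} is a PDI variable filled from a control table before this transformation runs (with "Replace variables in script" enabled on the step):

    -- Pull only the rows changed since the last successful load
    SELECT student_id, name, grade, last_updated
    FROM students
    WHERE last_updated > '${LAST_LOAD_TS}'
    ORDER BY last_updated;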
The scenario is this: we have an application that is deployed to a number of locations. Each deployment uses a local instance of SQL Server (2016) with exactly the same DB schema.
The reason for local-instance DBs is that the servers on which the application is deployed will not have internet access - most of the time.
We are now considering keeping the same solution but adding an SSIS package that can be executed at a later time - when the server is connected to the internet.
For now let's assume that once the package is executed - no further DB changes will be made to the local instance.
All tables (except for many-to-many intermediary) have an INT IDENTITY primary key.
What I need is for the table PKs to be auto-generated on the Azure DB - which I'm currently doing by setting the column mapping for the PK to <Ignore> - however, I also need all FKs pointing to that PK to get the newly generated ID instead of pointing to the original ID.
Since data would be coming from multiple deployments, I want to keep all the data as new entries - without updating or deleting existing records.
Could someone kindly explain or link me to some resource that handles this situation?
[ For future reference I'm considering using UNIQUEIDENTIFIER instead of INT, but this is what we have atm... ]
Edit: Added example
So, for instance, one of the tables is Events. Each DB deployment will have at least one Event, with Ids starting from 1. When consolidating the data into the Azure DB, I'd like their original Id to be ignored and replaced by an auto-generated Id from the Azure DB - that part is OK. But then I need all FKs pointing to EventId to point to the new Id, so instead of e.g. 1 they'd get the new Id according to the Azure DB (e.g. 3).
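One common pattern for this is to stage the incoming rows, capture the old-to-new ID pairs while inserting the parents, then fix up the children. A sketch, assuming hypothetical staging tables loaded by the SSIS package:

    -- Mapping of original IDs to the IDs generated on the Azure DB
    DECLARE @EventMap TABLE (OldId INT, NewId INT);

    -- MERGE is used because its OUTPUT clause may reference source columns,
    -- unlike a plain INSERT...SELECT
    MERGE INTO dbo.Events AS tgt
    USING staging.Events AS src
        ON 1 = 0                            -- never matches, so every row inserts
    WHEN NOT MATCHED THEN
        INSERT (Name, EventDate)
        VALUES (src.Name, src.EventDate)
    OUTPUT src.Id, inserted.Id INTO @EventMap (OldId, NewId);

    -- Child rows get the newly generated EventId via the mapping
    INSERT INTO dbo.Attendees (EventId, PersonName)
    SELECT m.NewId, s.PersonName
    FROM staging.Attendees AS s
    JOIN @EventMap AS m ON m.OldId = s.EventId;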
I am new to SSIS. I have been given a task based on the scenario explained below.
Scenario:
I have two databases, A and B, on different machines, with around 25 tables of about 20 columns each, plus relationships and dependencies. My task is to create a database C with a selected number of tables, and in each table I don't need all the columns, only a selected few. The condition to be met is that the relationships should stay intact and be created automatically in the new database.
What I have done:
I created a package using the Transfer SQL Server Objects task to transfer the tables and relationships.
Then I manually removed the columns that are not required,
and then I transferred the data using a data source and destination.
My question is: can I achieve all of this in one package? Also, after the data has been transferred, how can I schedule the package to transfer only the recently inserted rows to the new database?
Please help me.
Thanks in advance.
You can schedule the package by using a SQL Server Agent job - one of the options for a job step is Run SSIS Package.
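If you prefer to script the job rather than click through the SSMS GUI, a minimal sketch (job name, schedule, and package path are all hypothetical; this assumes a file-system package run via dtexec-style arguments):

    EXEC msdb.dbo.sp_add_job @job_name = N'Nightly transfer';

    EXEC msdb.dbo.sp_add_jobstep
        @job_name  = N'Nightly transfer',
        @step_name = N'Run package',
        @subsystem = N'SSIS',
        @command   = N'/FILE "C:\Packages\Transfer.dtsx"';

    EXEC msdb.dbo.sp_add_jobschedule
        @job_name = N'Nightly transfer',
        @name = N'Daily 2 AM',
        @freq_type = 4,                 -- daily
        @freq_interval = 1,
        @active_start_time = 020000;    -- HHMMSS

    EXEC msdb.dbo.sp_add_jobserver @job_name = N'Nightly transfer';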
With regard to transferring new rows, I would do one of the following:
Track your current "position" in another table - this assumes you have either an ascending key or a timestamp column. Load the current position into an SSIS variable and use that variable in the WHERE clause of your data source queries.
Transfer all the data into "dump" copies of each table (no relationships/keys etc. required, just the same schema) and use a T-SQL MERGE statement to load the new rows in, then truncate the "dump" tables (see the sketch after this list).
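A sketch of both options against a hypothetical Customers table:

    -- Option 1: watermark query for the data source; the literal would come
    -- from an SSIS variable holding the last position transferred
    SELECT CustomerId, Name, Email
    FROM dbo.Customers
    WHERE CustomerId > 1000;    -- @[User::LastKey] in the real package

    -- Option 2: MERGE from the "dump" copy into the target, then clear it
    MERGE dbo.Customers AS tgt
    USING dump.Customers AS src
        ON tgt.CustomerId = src.CustomerId
    WHEN MATCHED THEN
        UPDATE SET tgt.Name = src.Name, tgt.Email = src.Email
    WHEN NOT MATCHED THEN
        INSERT (CustomerId, Name, Email)
        VALUES (src.CustomerId, src.Name, src.Email);

    TRUNCATE TABLE dump.Customers;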
Hope this makes sense - it's a bit difficult to get across in writing.
I have a table that is a replica of a table on a different server.
Unfortunately I don't have access to the transaction information; all I have is the table that shows the "as is" information, and an SSIS package that replicates the table on my server every day (the table gets truncated and new information is pulled every night).
Everything has been working fine, but I want to start tracking what has changed, i.e. I want to know when a new row has been inserted or the value of a column has changed.
Is this something that could be done easily?
I would appreciate any help.
The version is SQL Server 2012 SP1, Enterprise edition.
If you want to do this for a particular table, you can use an SCD (Slowly Changing Dimension) transform in the SSIS data flow, which will keep the history records in a different table,
or
you can enable CDC (Change Data Capture) on that table. CDC will help you monitor every DML operation on the table; the modified rows are recorded in system tables.
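A sketch of enabling CDC (database and table names are hypothetical; this needs SQL Server Agent running, and CDC is available in your Enterprise edition):

    USE MyReplicaDb;
    EXEC sys.sp_cdc_enable_db;

    EXEC sys.sp_cdc_enable_table
        @source_schema = N'dbo',
        @source_name   = N'MyReplicatedTable',
        @role_name     = NULL;     -- no gating role

    -- Changes then show up in cdc.dbo_MyReplicatedTable_CT,
    -- where __$operation tells you insert/update/delete

One caveat: TRUNCATE TABLE is not allowed on a CDC-enabled table, so the nightly truncate would have to become a DELETE.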