I need to come up with a robust solution to detect when a new table is created in MS Access and then copy its contents to a master table.
An application writes new data as new tables into MS Access; this part can't be changed. These tables then have to be copied into a master table to be picked up by an interface.
Is there a trigger in MS Access when a new table is created?
I was also thinking about using a timer and then enumerating all the tables.
Any ideas or suggestions?
Access does not expose an event for table creation. So you will have to check whether a new table has been created.
If you're not deleting tables, you could examine whether CurrentDb.TableDefs.Count has increased since the last time you checked.
You need to trap the COMPLETION of putting all the data into the table,
not the initial creation, and not some point during data loading when the count may be greater than before.
The copy operation cannot start until all the data is in the table.
Thus the creating program needs to send a signal when it's done.
I am using a Google Sheet as the source of a table in BigQuery. Since I am unable to rename field names in the schema of an existing table, I deleted the table and attempted to re-create it after amending the column names in the source Google Sheet. I need to keep the table name the same, as I already have analysis files connecting to the table. However, when I create the new table and ask BigQuery to auto-detect the schema, it uses the schema of the previous table. Even if I enter the new schema as text when creating the table, it ignores what I enter and uses the schema from the old table.
Any ideas how I can get BigQuery to detect the new schema from the Google Sheet whilst using the same table name as the deleted table?
Thanks in advance!
After trying this multiple times without success - with several tables - it randomly worked and let me create a table with the new schema (manually). I'm not sure why this didn't work before, as I'm pretty sure I didn't do anything differently. If anyone has any insight into what might have caused the initial errors, I'd love to hear it for future reference, but my current problem is solved.
I came here today to see if someone could give me a suggestion to improve the way I update my database.
Here is the problem: I have one file in which I store new scripts every time I need to change something. For instance, let's say I need to add a new column to a table. I would add the following lines to my file called script1.sql:
alter table CLIENTS
add AGE integer
After doing that, I send it to a client together with an updated application and ask him to run script1.sql on his database. That works just fine for me.
The problem shows up when this file starts to get bigger and the client needs to receive new updates.
The client would run the script1.sql file again, but now with more updates in it, and he will get errors indicating that a column named AGE already exists in the database.
The biggest problem is when I change the version of my application. If I update my application from Application1 to Application2, I also change the script from script1.sql to script2.sql.
Now my client will need to run both to get to the correct version without conflicts, and he will get lots of errors, since almost everything from script1.sql has already been applied to his database.
What I want is to eliminate the chance of conflicts. This process has been working for me, but it always causes some sort of trouble. If anyone has any idea how I could make it work better, please help me out.
Most SQL dialects provide something called IF EXISTS (and also IF NOT EXISTS), so e.g. you can write a statement such as:
CREATE TABLE IF NOT EXISTS users ...
This will only create the users table if it hasn't already been created.
There is usually a variant of this that can be added to all your statements (including updates such as renaming columns, etc.).
Then if the table has already been added (or a column updated, etc.), it won't try to run that SQL command again - which means you can run the same file over and over, as many times as you like.
(Note: this property is called idempotency.)
You will need to google the details of how to use EXISTS in SQL Server.
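For example, in T-SQL the common pattern is to guard each change with an existence check against the system catalog. A minimal sketch, assuming SQL Server and the CLIENTS/AGE example from the question:

    -- Only add the AGE column if it is not already there, so the
    -- same script can be run repeatedly without errors.
    IF NOT EXISTS (
        SELECT 1
        FROM sys.columns
        WHERE object_id = OBJECT_ID(N'dbo.CLIENTS')
          AND name = N'AGE'
    )
    BEGIN
        ALTER TABLE dbo.CLIENTS ADD AGE integer;
    END

With every statement in script1.sql and script2.sql wrapped like this, the client can safely run both files (or the same file twice) in order.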
I am new to SSIS. I got a task according to the scenario explained below.
Scenario:
I have two databases, A and B, on different machines, with around 25 tables and 20 columns, with relationships and dependencies. My task is to create a database C with a selected number of tables, and in each table I don't require all the columns, only selected ones. The condition to be met is that the relationships should stay intact and be created automatically in the new database.
What I have done:
I have created a package using the Transfer SQL Server Objects task to transfer the tables and relationships;
then I manually removed the columns that are not required;
and then I transferred the data using a data source and destination.
My question is: can I achieve all of this in one package? Also, after I have transferred the data, how can I schedule the package to transfer only the recently inserted rows to the new database?
Please help me, thanks in advance.
You can schedule the package by using a SQL Server Agent job - one of the options for a job step is to run an SSIS package.
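For reference, the job can also be created in T-SQL; a rough sketch, where the job name, schedule, and package path are made up for illustration:

    EXEC msdb.dbo.sp_add_job @job_name = N'Transfer to C';

    EXEC msdb.dbo.sp_add_jobstep
        @job_name  = N'Transfer to C',
        @step_name = N'Run package',
        @subsystem = N'SSIS',
        @command   = N'/FILE "C:\Packages\TransferToC.dtsx"';

    EXEC msdb.dbo.sp_add_jobschedule
        @job_name = N'Transfer to C',
        @name = N'Nightly',
        @freq_type = 4,                -- daily
        @freq_interval = 1,
        @active_start_time = 010000;   -- 01:00:00

    EXEC msdb.dbo.sp_add_jobserver @job_name = N'Transfer to C';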
With regard to transferring new rows, I would either:
Track your current "position" in another table (this assumes you have either an ascending key or a timestamp column): load the current position into an SSIS variable and use this variable in the WHERE clause of your data source queries (see the first sketch below).
Transfer all the data across into "dump" copies of each table (no relationships/keys etc. required, just the same schema) and use a T-SQL MERGE statement to load the new rows in, then truncate the "dump" tables (see the second sketch below).
Hope this makes sense - it's a bit difficult to get across in writing.
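To make the first option concrete, here is a minimal sketch; the Orders table, its ascending OrderID key, and the bookkeeping table are all hypothetical:

    -- One row per source table, recording the last key transferred.
    CREATE TABLE dbo.TransferPosition (
        TableName sysname NOT NULL PRIMARY KEY,
        LastKey   bigint  NOT NULL
    );

    -- 1) Load the stored position into an SSIS variable.
    SELECT LastKey FROM dbo.TransferPosition WHERE TableName = N'Orders';

    -- 2) Use the variable in the data source query (? is the SSIS parameter).
    SELECT OrderID, CustomerID, Amount
    FROM dbo.Orders
    WHERE OrderID > ?;

    -- 3) After a successful load, move the watermark forward.
    UPDATE dbo.TransferPosition
    SET LastKey = (SELECT MAX(OrderID) FROM dbo.Orders)
    WHERE TableName = N'Orders';

And a sketch of the second option, again with hypothetical names; dbo.Orders_Dump has the same columns as dbo.Orders but no keys or constraints:

    -- Insert only the rows that don't exist in the target yet.
    MERGE dbo.Orders AS target
    USING dbo.Orders_Dump AS source
        ON target.OrderID = source.OrderID
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (OrderID, CustomerID, Amount)
        VALUES (source.OrderID, source.CustomerID, source.Amount);

    TRUNCATE TABLE dbo.Orders_Dump;  -- empty the dump table for the next run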
I have a table that is a replicate of a table from a different server.
Unfortunately I don't have access to the transaction information; all I have is the table that shows the "as is" information, plus an SSIS package that replicates the table on my server every day (the table gets truncated, and the new information is pulled in every night).
Everything has been fine so far, but I want to start tracking what has changed, i.e. I want to know if a new row has been inserted or the value of a column has changed.
Is this something that could be done easily?
I would appreciate any help.
The SQL version is SQL Server 2012 SP1 | Enterprise
If you want to do this for a particular table, you can use an SCD (slowly changing dimension) transform in the SSIS data flow, which will keep the history records in a different table,
or
you can enable CDC (change data capture) on that table. CDC will help you monitor every DML operation on that table; each modified row is recorded in the corresponding change (system) table.
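A rough sketch of enabling CDC in T-SQL, assuming a hypothetical table dbo.MyTable and sufficient (sysadmin/db_owner) rights; CDC is available in the Enterprise edition mentioned above:

    -- Run these in the database that holds the table.
    EXEC sys.sp_cdc_enable_db;         -- once per database

    EXEC sys.sp_cdc_enable_table
        @source_schema = N'dbo',
        @source_name   = N'MyTable',
        @role_name     = NULL;         -- NULL = no gating role

    -- Changes then land in cdc.dbo_MyTable_CT; the __$operation column
    -- marks each row as delete (1), insert (2), or update (3 = before
    -- image, 4 = after image).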
Some years ago I built an Excel file for my colleagues that displays lots of data from an external ODBC data source. The data is partitioned into lots of data tables in different sheets. The file also contains a button that allows the user to update the data.
Since accessing the data from the external source was very slow, I implemented some caching logic that stored the parts of the results that were unlikely to change in external tables on our SQL Server, and did some magic to keep the data synchronized. The Excel file itself only accesses the SQL Server. Every data table uses an SPROC to get its part of the data.
Fast forward 5 years. The Excel file has grown in size and contains so many sheets and so much data that our Excel (still version 2003) has problems with it. So my colleagues split the file into two halves.
The problem now is that both Excel files contain the logic to update the data, and it can happen that a user clicks the update button in file no. 1 while another user is already updating file no. 2.
That's the point where the update logic goes berserk and produces garbage.
The update run is only required once for both excel files because it updates all the data that's displayed in both files. It's quite expensive and lasts from 5 to 15 minutes.
I could split the update run into two halves as well, but that wouldn't make it any faster and updating the two files would take twice as long.
What I have in mind is some kind of mutex: user A clicks the update button and the update run starts. User B wants to update too, but the (VBA/SPROC) logic detects that there's already an update running and waits until it finishes.
You could perform the updates in a transaction with the SERIALIZABLE isolation level; your update code would need to detect and handle SQL Server error 1205 (deadlock victim) and report to the user that another update is in progress.
Alternatively, add a rowversion (timestamp) column to each row and only update a row if it hasn't been changed since you loaded it.
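A minimal sketch of that second idea, with hypothetical table and column names:

    -- A rowversion column is updated automatically on every modification.
    ALTER TABLE dbo.CachedData ADD RowVer rowversion;

    -- @Id / @NewValue / @RowVerReadEarlier stand for the values your
    -- client (the Excel VBA) read earlier; shown here as T-SQL variables.
    DECLARE @Id int = 42,
            @NewValue nvarchar(100) = N'updated',
            @RowVerReadEarlier binary(8) = 0x00000000000007D1;

    UPDATE dbo.CachedData
    SET SomeColumn = @NewValue
    WHERE Id = @Id
      AND RowVer = @RowVerReadEarlier;  -- 0 rows affected = someone else changed it first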
But when A has finished, B will run the update 'for nothing'.
Instead: when A clicks update, call a stored proc which fires the update asynchronously.
When the update starts, it looks at the last time it ran and exits if that was less than X minutes ago.
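A minimal sketch of that guard in T-SQL, assuming a hypothetical one-row bookkeeping table and a 15-minute window:

    CREATE TABLE dbo.UpdateLog (LastRun datetime NOT NULL);
    INSERT INTO dbo.UpdateLog (LastRun) VALUES ('19000101');
    GO
    CREATE PROCEDURE dbo.RunCacheUpdate
    AS
    BEGIN
        -- Atomically claim the run: if another run started within the
        -- last 15 minutes, no row qualifies and we simply exit.
        UPDATE dbo.UpdateLog
        SET LastRun = GETDATE()
        WHERE LastRun <= DATEADD(minute, -15, GETDATE());

        IF @@ROWCOUNT = 0
            RETURN;

        -- ... the expensive refresh logic from the existing SPROCs ...
    END

Because the claim is a single UPDATE, two concurrent callers cannot both pass the check.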