I'm trying to create an Update Query in Access 2010 to update a duplicate table on our shared drive from the user's local copy.
The Access program itself uses the usual split front end / back end.
Due to frequent drops over VPN, we came up with a method for:
downloading the latest version of the front end and a copy of the back end to the user's local drive
then running off of the local front end / back end (with the two linked)
and then using VBA to update individual records in both the local and network locations, unless the network drive is unavailable, in which case it dumps the updated data into an array to be retried at program close.
I have two identical tables (one local and one on the network), and I need to create an Update Query that pushes any changes made in the local table to the one on the network, so that the changes are stored in the network database for the next user on their machine.
UPDATE HiringMgrData As NetworkHiringMgrData IN '\\ServerName\FilePath\HREmails_be.accdb'
SET NetworkHiringMgrData.UserName = HiringMgrData.UserName,
NetworkHiringMgrData.UserPhone = HiringMgrData.UserPhone,
NetworkHiringMgrData.UserEmail = HiringMgrData.UserEmail
WHERE NetworkHiringMgrData.ID IN (SELECT ID FROM HiringMgrData)
This gives me an error when it gets to the SET statements, and clicking through simply blanks the fields in the network table.
I'm trying to "trick" Access into treating the table in the network database as NetworkHiringMgrData, while keeping the name of table in the the local database HiringMgrData, in hopes that Access will be able to distinguish between the two.
In reality, both the local and network databases have a table named HiringMgrData with field names of ID, UserName, UserPhone, and UserEmail.
I was able to get the Append Query to work using:
INSERT INTO HiringMgrData IN '\\ServerName\FilePath\HREmails_be.accdb'
SELECT HiringMgrData.*
FROM HiringMgrData;
which simply adds any new records from the HiringMgrData table in the local database to the HiringMgrData table in the network database, but I cannot update the existing records.
Try the below. I was attempting to do something similar on my MS Access database, and for some reason this worked for me instead of using the IN 'network path' syntax:
UPDATE [\\ServerName\FilePath\HREmails_be.accdb].HiringMgrData As NetworkHiringMgrData
INNER JOIN HiringMgrData AS LocalHiringMgrData ON NetworkHiringMgrData.ID = LocalHiringMgrData.ID
SET NetworkHiringMgrData.UserName = LocalHiringMgrData.UserName,
NetworkHiringMgrData.UserPhone = LocalHiringMgrData.UserPhone,
NetworkHiringMgrData.UserEmail = LocalHiringMgrData.UserEmail;
I think your WHERE clause is wrong and should be
WHERE NetworkHiringMgrData.ID = HiringMgrData.ID;
Note that this may generate lots of network traffic as all records are updated. Maybe your application can manage an isChanged flag and update only those records.
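As a rough sketch of that idea (the isChanged Yes/No column is an assumption, not part of the original tables), the sync could be limited to flagged rows:
UPDATE [\\ServerName\FilePath\HREmails_be.accdb].HiringMgrData AS NetworkHiringMgrData
INNER JOIN HiringMgrData AS LocalHiringMgrData ON NetworkHiringMgrData.ID = LocalHiringMgrData.ID
SET NetworkHiringMgrData.UserName = LocalHiringMgrData.UserName,
NetworkHiringMgrData.UserPhone = LocalHiringMgrData.UserPhone,
NetworkHiringMgrData.UserEmail = LocalHiringMgrData.UserEmail
WHERE LocalHiringMgrData.isChanged = True;
After a successful sync you would clear the flag in the local table:
UPDATE HiringMgrData SET isChanged = False WHERE isChanged = True;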
I wanted to basically copy the entire content of one table to another.
Context:
The source table is a SharePoint list, and it triggers an email for every record queried. There is no way to turn that off on my end, as the list is being utilised by another team.
When I run my queries on a local table, it's fine.
I need to just copy the data directly. So far, the only code I have found is DoCmd.TransferDatabase, but I can't seem to configure it correctly.
Simplest method is probably to run a make-table query to (re)create the local table:
Dim Sql As String
Sql = "SELECT * INTO LocalTable FROM SharePointTable;"
CurrentDb.Execute Sql
That will pop a warning, though. If that is too much, create the local table, then run two queries - the first to delete all records from the local table, the second to append all records from the SharePoint table to the local table.
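The two queries in that alternative are plain SQL along these lines (LocalTable and SharePointTable are placeholder names, matching the sketch above):
DELETE FROM LocalTable;
INSERT INTO LocalTable
SELECT *
FROM SharePointTable;
Run them in that order so the local table ends up as an exact copy of the list.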
I have a remote database that I want to copy on my local SQL Server.
IMPORTANT: I only want sample data (1k rows, for instance) from each table, and there are about 130 different tables.
I have tried to use the export data procedure in SSMS. Put simply, I go to TASKS > EXPORT DATA > CHOOSE SOURCE (the remote db) > CHOOSE DESTINATION (my local db) > CHOOSE THE TABLES TO COPY > COPY.
What I have tried:
I've tried writing the SQL query into this tool, like:
SELECT TOP 1000 *
FROM TABLE1 GO ...
SELECT TOP 1000 *
FROM TABLE130
But on the mapping step, it puts every result within a single table instead of creating the 130 different output tables.
FYI, the above procedure takes about 2 minutes for one table. Doing it one by one for each table will take 130 * 2 min = 4.5 hours... plus it is so boring.
Do you have any ideas for resolving this situation?
Thank you
regards
If you only want a subset you are going to have problems with foreign keys, if there are any in your database.
Possible approaches to extract all data or a subset
Use SSIS to extract the data from the source db and load into your local db
Write a custom application that does the job. (You can use SQL Bulk Copy)
If you purely want to do it in SSMS, you can create a linked server on your local server pointing to the remote server.
This way you can do something like this if the tables are not yet created on your local server:
SELECT TOP 1000 *
INTO [dbo].[Table1]
FROM [yourLinkedServer].[RemoteDatabase].[dbo].[Table1]
Changing the INTO table and FROM table for each table you want to copy (note the four-part name on the linked server: linked server, remote database, schema, table).
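If writing that out 130 times is the sticking point, you could also generate the statements from the remote catalog. A rough sketch, assuming the linked server is called yourLinkedServer, the remote database is RemoteDatabase, and the tables all live in dbo:
SELECT 'SELECT TOP 1000 * INTO [dbo].[' + t.name + '] '
     + 'FROM [yourLinkedServer].[RemoteDatabase].[dbo].[' + t.name + '];'
FROM [yourLinkedServer].[RemoteDatabase].sys.tables AS t;
Copy the result set into a new query window and run it to create and fill all the local copies in one pass.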
So I have a db with a bunch of member info (7 million records) that I can pull info from. Then in a separate system, I have a list of 800K emails. What I want to do is match all those members in the db to this list of 800K emails. I don't have the ability to create a table in the db with these emails - I can only read from the db.
So my question is, what is the best way to do this? Can I write a SQL statement that reads these 800K into memory from a CSV file and then does a lookup against this list? What is the approach? I just want to pull all member info for members whose info is in that external list...
Thanks
If you have a separate Oracle database available where you do have DDL access, you could try creating a DB Link between the two databases using your read-only account.
CREATE DATABASE LINK <DB Link Name>
CONNECT TO <Your read-only account>
IDENTIFIED BY <password>
USING '<ServiceName or TNS entry of remote database>';
This gives you the ability to build queries in your own database but refer to the 7 million records table in the read-only database:
SELECT *
FROM emails e -- your local 800K emails table
JOIN customers@readonlydatabase c -- read-only table in restricted database
ON e.uniqueid = c.uniqueid;
Per the documentation, the prerequisites for this setup are:
Prerequisites
To create a private database link, you must have the CREATE DATABASE LINK system privilege. To create a public database link, you must have the CREATE PUBLIC DATABASE LINK system privilege. Also, you must have the CREATE SESSION system privilege on the remote Oracle database. Oracle Net must be installed on both the local and remote Oracle databases.
Also note that db link access is only as good as the account it uses to connect, so if your account is read-only, your db link will retain the same restrictions.
Build a huge SQL statement like this:
select *
from member_data
join
(
--Add all the text information here.
--
--Up to 32767 values can be stored in this collection.
select column_value email_address from table(sys.odcivarchar2list(
'asdf1@asdf.com',
'qwer1@qwer.com'
--...
))
--Another 32767 values
union all
select * from table(sys.odcivarchar2list(
'asdf2@asdf.com',
'qwer2@qwer.com'
--...
))
--...
) other_system
on member_data.email = other_system.email_address;
It's ugly but it's not that hard to build and it doesn't require any additional privileges. With a few text processing tricks, maybe a regular expression in a text editor or using Excel to add single quotes and commas, that statement can probably be built in a few minutes. SQL statements this large are usually a bad idea but it will be fine for a one-time process.
Server1: Prod, hosting DB1
Server2: Dev hosting DB2
Is there a way to query databases living on 2 different servers with the same select query? I need to bring all the new rows from Prod to Dev, using a query like the one below. I will be using SQL Server DTS (the import/export data utility) to do this.
Insert into Dev.db1.table1
Select *
from Prod.db1.table1
where table1.PK not in (Select table1.PK from Dev.db1.table1)
Creating a linked server is the only approach I am aware of for doing this in a single query. If you are simply trying to add all new rows from prod to dev, why not just create a backup of that one particular table, pull it into the dev environment, and then write the query against the same server and database?
Granted, this is a one-time fix and a pain for recurring instances, but if it is a one-time thing then I would recommend doing that. Otherwise, make a linked server between the two.
To back up a single table in SQL Server, use the SQL Server Import and Export Wizard. Select the prod database as your data source, then select only the prod table as your source table, and make a new table in the dev environment for your destination table.
This should get you what you are looking for.
You say you're using DTS; the modern equivalent would be SSIS.
Typically you'd use a data flow task in an SSIS package to pull all the information from the live system into a staging table on the target, then load it from there. This is a pretty standard operation when data warehousing.
There are plenty of different approaches to save you copying all the data across (e.g. use a timestamp, use rowversion, use Change Data Capture, or make use of the fact your primary key only ever gets bigger). Or you could just do what you want with a lookup flow directly in SSIS...
The best approach will depend on many things: how much data you've got, what data transfer speed you have between the servers, your key types, etc.
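As a rough illustration of the increasing-key idea only (a sketch, not the questioner's schema: it assumes an ever-growing numeric PK, columns Col1/Col2, and a linked server named PRODSERVER defined on the Dev box):
INSERT INTO db1.dbo.table1 (PK, Col1, Col2)
SELECT s.PK, s.Col1, s.Col2
FROM PRODSERVER.db1.dbo.table1 AS s
WHERE s.PK > (SELECT ISNULL(MAX(PK), 0) FROM db1.dbo.table1); -- only rows newer than anything already in Dev
If PK is an identity column on Dev you would also need SET IDENTITY_INSERT ON around the insert. The timestamp/rowversion variants follow the same shape, just comparing a different column.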
When your servers are all in one Active Directory domain, and you use Windows Authentication, then all you need is an account which has proper rights on all the databases!
You can then simply reference all tables like server.database.schema.table
For example:
insert into server1.db1.dbo.tblData1 (...)
select ... from server2.db2.dbo.tblData2;
I have a task in a project that requires the results of a process, which could be anywhere from 1,000 up to 10,000,000 records (approximate upper limit), to be inserted into a table with the same structure in another database across a linked server. The requirement is to be able to transfer in chunks to avoid any timeouts.
In doing some testing I set up a linked server and, using the following code as a test, transferred approximately 18,000 records:
DECLARE @BatchSize INT = 1000
WHILE 1 = 1
BEGIN
INSERT INTO [LINKEDSERVERNAME].[DBNAME2].[dbo].[TABLENAME2] WITH (TABLOCK)
(
id
,title
,Initials
,[Last Name]
,[Address1]
)
SELECT TOP(@BatchSize)
s.id
,s.title
,s.Initials
,s.[Last Name]
,s.[Address1]
FROM [DBNAME1].[dbo].[TABLENAME1] s
WHERE NOT EXISTS (
SELECT 1
FROM [LINKEDSERVERNAME].[DBNAME2].[dbo].[TABLENAME2]
WHERE id = s.id
)
IF @@ROWCOUNT < @BatchSize BREAK
END
This works fine; however, it took 5 minutes to transfer the data.
I would like to implement this using SSIS and am looking for any advice on how to do this and speed up the process.
Open Visual Studio / Business Intelligence Development Studio (BIDS) / SQL Server Data Tools - BI edition (SSDT).
Under the Templates tab, select Business Intelligence, Integration Services Project. Give it a valid name and click OK.
In Package.dtsx which will open by default, in the Connection Managers section, right click - "New OLE DB Connection". In the Configure OLE DB Connection Manager section, Click "New..." and then select your server and database for your source data. Click OK, OK.
Repeat the above process but use this for your destination server (linked server).
Rename the above connection managers from server\instance.databasename to something better. If databasename does not change across the environments then just use the database name. Otherwise, go with the common name of it. i.e. if it's SLSDEVDB -> SLSTESTDB -> SLSPRODDB as you migrate through your environments, make it SLSDB. Otherwise, you end up with people talking about the connection manager whose name is "sales dev database" but it's actually pointing at production.
Add a Data Flow to your package. Call it something useful besides Data Flow Task. DFT Load Table2 would be my preference but your mileage may vary.
Double click the data flow task. Here you will add an OLE DB Source, a Lookup Transformation and an OLE DB Destination. Probably; as always, it will depend.
OLE DB Source - use the first connection manager we defined and a query
SELECT
s.id
,s.title
,s.Initials
,s.[Last Name]
,s.[Address1]
FROM [dbo].[TABLENAME1] s
Only pull in the columns you need. Your query currently filters out any duplicates that already exist in the destination. Doing that can be challenging. Instead, we'll bring the entirety of TABLENAME1 into the pipeline and filter out what we don't need. For very large volumes in your source table, this may be an untenable approach and we'd need to do something different.
From the Source we need to use a Lookup Transformation. This will allow us to detect the duplicates. Use the second connection manager we defined, the one that points to the destination. Change the NoMatch behaviour from "Fail Component" to "Redirect Unmatched rows" (name approximate).
Use your query to pull back the key value(s)
SELECT T2.id
FROM [dbo].[TABLENAME2] AS T2;
Map T2.id to the id column.
When the package starts, it will issue the above query against the target table and cache all the values of T2.id into memory. Since this is only a single column, that shouldn't be too expensive but again, for very large tables, this approach may not work.
There are 3 outputs now available from the Lookup: Match, NoMatch and Error. Match will be anything that exists in both the source and the destination. You don't care about those, as you are only interested in what exists in the source and not the destination. You would only care if you had to determine whether the values had changed between the source and the destination. NoMatch are the rows that exist in the Source but don't exist in the Destination. That's the stream you want. For completeness, Error would capture things that went very wrong, but I've not experienced it "in the wild" with a Lookup.
Connect the NoMatch stream to the OLE DB Destination. Select your Table Name there and ensure the words Fast Load are in the destination. Click on the Columns tab and make sure everything is routed up.
Whether you need to fiddle with the knobs on the OLE DB Destination is highly variable. I would test it, especially with your larger sets of data and see whether the timeout conditions are a factor.
Design considerations for larger sets
It depends.
Really, it does. But, I would look at identifying where the pain point lies.
If my source table is very large and pulling all that data into the pipeline just to filter it back out, then I'd look at something like a Data Flow to first bring all the rows in my Lookup over to the Source database (use the T2 query) and write it into a staging table and make the one column your clustered key. Then modify your source query to reference your staging table.
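A sketch of that staging idea, with assumed names (the staging table stg_Table2Ids and the INT key type are not from the original):
CREATE TABLE dbo.stg_Table2Ids
(
    id INT NOT NULL PRIMARY KEY CLUSTERED -- the single column becomes the clustered key
);
A first Data Flow fills it using the T2 query against the destination, and the source query then becomes:
SELECT s.id, s.title, s.Initials, s.[Last Name], s.[Address1]
FROM dbo.TABLENAME1 AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.stg_Table2Ids AS st WHERE st.id = s.id);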
Depending on how active the destination table is (whether any other process could load it), I might keep that lookup in the data flow to ensure I don't load duplicates. If this process is the only one that loads it, then drop the Lookup.
If the lookup is at fault - it can't pull in all the IDs - then either go with the first alternative listed above or look at changing your caching mode from Full to Partial. Do realize that this will issue a query to the target system for potentially every row that comes out of the source database.
If the destination is giving issues, I'd determine what the issue is. If it's network latency for the loading of data, drop the Maximum insert commit size on the OLE DB Destination from 2147483647 to something reasonable, like your batch size from above (although 1k might be a bit low). If you're still encountering blocking, then perhaps staging the data to a different table on the remote server and then doing the insert locally on that server might be an approach.
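That last option, sketched with assumed names (stg_TABLENAME2 is hypothetical): fast-load into a plain staging table on the remote server, then have that server do the final insert itself:
INSERT INTO dbo.TABLENAME2 (id, title, Initials, [Last Name], [Address1])
SELECT st.id, st.title, st.Initials, st.[Last Name], st.[Address1]
FROM dbo.stg_TABLENAME2 AS st
WHERE NOT EXISTS (SELECT 1 FROM dbo.TABLENAME2 AS t WHERE t.id = st.id); -- guard against duplicates, as the Lookup did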