Do I need to relink or update an ODBC connection if I update a SQL view that is used in a query in Access?
Steps to reproduce
Update view in SQL > Save > open query that uses view in Access > no change
If the change (to the view) involves the structure (columns: names, number, data types, primary key, any unique indices), then you need to re-link in MS Access (so that it reads those changes and saves them). If the change only affects the tables joined (FROM clause), the selection criteria (WHERE, HAVING), the aggregation (GROUP BY), etc., there is no need to re-link.
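For illustration, a hedged sketch (view, table, and column names are invented):

-- Structural change: adding a column to the view's output.
-- Access caches the field list, so this requires a re-link.
ALTER VIEW dbo.vwActiveOrders AS
SELECT OrderID, CustomerID, OrderDate   -- OrderDate is newly added
FROM dbo.Orders
WHERE Status = 'Active';

-- Non-structural change: same output columns, different WHERE clause.
-- Access picks this up on the next open; no re-link needed.
ALTER VIEW dbo.vwActiveOrders AS
SELECT OrderID, CustomerID, OrderDate
FROM dbo.Orders
WHERE Status IN ('Active', 'Pending');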
Related
(I'm a beginner at SQL, so I apologize for any novice mistakes.)
So essentially, I'm currently making an Access form that allows the user to update their stock inside the warehouse. I'm using an ODBC-linked database, which lets me store various data on the server (the configuration for the database is shown below).
However, when I created a combo box linked to a column (IDDH), an error pops up stating that the PK constraint is violated whenever I switch to another column. At this point I don't know what I did wrong, since I already connected the two tables with a one-to-many relationship in SQL, connected them in Microsoft Access as well (just in case), and set up the foreign key (ProductID) on the dbo.DonHang table.
Here is my configuration
SQL:
Relationship in Access
The error in the Access form whenever I switch to a different column in the combo box
If you require more information, please do not hesitate to ask.
You base one form directly on the main table. Not a SQL join.
You then create a sub form with the "many" entries for the one record. And again, that form is NOT based on some SQL join, but ONLY on the linked child table.
You don't try to use SQL joins to solve this kind of data editing in Access (using SQL Server does not change this). So you need a main form based ONLY and directly on the ONE linked table of master (parent) records. You then have a sub form, and again it is based directly on the linked child table (again, not some SQL query).
So, for editing main records and child records? You use a form + sub form in Access to achieve this goal. And this setup will also work well with SQL Server linked tables.
But all in all? You don't try to edit SQL-joined data. You build up the main form, and drop in a sub form, for the classic and common editing of master/child data in Access.
If you really want to edit both as one row? Well, you can often edit, but you will generally not be allowed to add rows. But if the SQL view you create allows editing of rows (you can test/try this in SSMS), then save that query in SSMS as a view, link the view in Access, and you can edit the one row; there are limitations in terms of adding new rows, etc. It really depends on your goal.
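As a hedged sketch of that test (table and column names invented), you could run something like this in SSMS before linking the view:

-- A join view over hypothetical parent/child tables.
CREATE VIEW dbo.vwOrderDetails AS
SELECT o.OrderID, o.OrderDate, d.ProductID, d.Quantity
FROM dbo.Orders o
INNER JOIN dbo.OrderDetails d ON d.OrderID = o.OrderID;
GO

-- If this succeeds, the view is updatable for that column.
-- Note: an UPDATE through a join view may only touch one base table at a time.
UPDATE dbo.vwOrderDetails
SET Quantity = 5
WHERE OrderID = 1 AND ProductID = 10;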
But, at the end of the day, editing of master + child records is NOT achieved by a SQL join query, but by editing each table separately, or with a form + sub form in Access.
I am having some trouble with updates to a SQL Server view through MS Access. The set of tables used for this is built off of a base table, which has this format:
Id int (not-nullable; auto-assigned)
A1 varchar(50) (nullable)
A2 varchar(50) (nullable)
B1 varchar(50) (nullable)
B2 varchar(50) (nullable)
C1 varchar(50) (nullable)
C2 varchar(50) (nullable)
One row on this table is updated by multiple groups of users in our company. For instance, user group "A" updates columns "A1" and "A2", user group "B" updates columns "B1" and "B2", and so forth. However, we also want to prevent user group "A" from updating the columns of user group "B". To accomplish this, I set up a view containing the columns appropriate for each user group. For instance, the view for user group "A" would only contain the columns "Id", "A1", and "A2". Then I set the "Bind To Schema" option on the views in SSMS to "Yes", and I set up a unique, clustered index on the "Id" column on each of the views. In MS Access, I connect to these views as linked tables using an ODBC connection. When I open the tables in MS Access in design view and check the indexes, it does properly identify the "Id" column as the primary key.
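For reference, a minimal sketch of that setup (names invented for illustration):

-- The view for user group "A", schema-bound so it can be indexed.
CREATE VIEW dbo.vwGroupA WITH SCHEMABINDING AS
SELECT Id, A1, A2
FROM dbo.BaseTable;
GO

-- The unique clustered index is what turns this into an indexed view.
CREATE UNIQUE CLUSTERED INDEX IX_vwGroupA_Id ON dbo.vwGroupA (Id);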
Here is where the trouble comes in: When I try to update a record through MS Access in one of the views, sometimes the update runs instantly, but sometimes the update times out. Here is the error that I get.
"SM_Notes_Bridge" is the actual name of one of my views. Almost all previous answers that I can find online say to increase the amount of time before the update times out in MS Access, which seems like it is not a solution for my problem as the update either runs instantly or times out. There is no middle ground.
Another note is that I am currently the only one using this base table and these views. Also, important systems are developed around that base table structure, so changing its structure will take a lot of convincing.
By creating a unique index on a schema-bound view, you're creating an indexed view, also called a materialized view.
A relevant property of indexed views:
When executing DML on a table referenced by a large number of indexed views, or fewer but very complex indexed views, those referenced indexed views will have to be updated as well. As a result, DML query performance can degrade significantly, or in some cases, a query plan cannot even be produced (MSDN).
Thus, creating multiple indexed views on a table that is updated often is a big no-no! Review this MSDN page for further explanation of when and when not to use an indexed view. Every insert and update will have to propagate to all the indexed views, and will cause locks on those views as well.
Drop the indexes on ALL views on that table. From what you've told me, there's no reason at all to use indexed views here, and they will hurt performance in a major way when executing updates. Even if that doesn't fix this issue, it will improve performance.
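A hedged sketch of that fix; the index name here is invented, so look up the actual one under the view's Indexes node in SSMS:

-- Dropping the unique clustered index turns the indexed view back into an
-- ordinary view. The view definition itself is untouched.
DROP INDEX IX_SM_Notes_Bridge_Id ON dbo.SM_Notes_Bridge;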
I have a nightly SSIS process that exports a TON of data from an AS400 database system. Due to bugs in the AS400 DB software, occasional duplicate keys are inserted into data tables. Every time a new duplicate is added to an AS400 table, it kills my nightly export process. This issue has moved from being a nuisance to a problem.
What I need is an option to insert only unique data. If there are duplicates, select the first encountered row of the duplicate rows. Is there SQL syntax available that could help me do this? I know of the DISTINCT ROW clause, but that doesn't work in my case because, for most of the offending records, the entirety of the data is non-unique except for the fields which comprise the PK.
In my case, it is more important for my primary keys to remain unique in my SQL Server DB cache, rather than having a full snapshot of data. Is there something I can do to force this constraint on the export in SSIS/SQL Server with out crashing the process?
EDIT
Let me further clarify my request. What I need is to ensure that the data in my exported SQL Server tables maintains the same keys that are maintained in the AS400 data tables. In other words, creating a unique row count identifier wouldn't work, nor would inserting all of the data without a primary key.
If a bug in the AS400 software allows mistaken, duplicate PKs, I want to either ignore those rows or, preferably, just select one of the rows with the duplicate key, but not both of them.
This filtering should probably happen in the SELECT statement in my SSIS project, which connects to the mainframe through an ODBC connection.
I suspect that there may not be a "simple" solution to my problem. I'm hoping, however, that I'm wrong.
Since you are using SSIS, you must be using an OLE DB Source to fetch the data from the AS400, and you will be using an OLE DB Destination to insert data into SQL Server.
Let's assume that you don't have any transformations.
Add a Sort transformation after the OLE DB Source. In the Sort transformation, there is a check box option at the bottom to remove duplicate rows based on a given set of column values. Check all the fields, but don't select the primary key that comes from the AS400. This will eliminate the duplicate rows but will still insert the data that you need.
I hope that is what you are looking for.
In SQL Server 2005 and above:
SELECT *
FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY almost_unique_field ORDER BY id) AS rn
    FROM import_table
) q
WHERE rn = 1
There are several options.
If you use the IGNORE_DUP_KEY option (http://www.sqlservernation.com/home/creating-indexes-with-ignore_dup_key.html) on your primary key, SQL Server will issue a warning and only the duplicate records will fail.
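A minimal sketch of that option (table and column names invented):

-- Duplicate-key inserts are discarded with a warning instead of
-- failing the whole batch.
CREATE TABLE dbo.ImportTarget
(
    PKID   INT NOT NULL,
    Value1 VARCHAR(50) NULL,
    CONSTRAINT PK_ImportTarget PRIMARY KEY CLUSTERED (PKID)
        WITH (IGNORE_DUP_KEY = ON)
);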
You can also group/roll up your data, but this can get very expensive. What I mean by that is:

SELECT Id, MAX(value1), MAX(value2), MAX(value3)
FROM staging_table
GROUP BY Id
Another option is to add an identity column to your staging table (and cluster on it for an efficient join later), then create a mapping in a temp table. The mapping table would be:
CREATE TABLE #mapping
(
    PKID  INT PRIMARY KEY CLUSTERED,
    RowID INT
)

-- One row per PK: keep the first occurrence by identity order.
INSERT INTO #mapping (PKID, RowID)
SELECT PKID, MIN(RowID)
FROM staging_table
GROUP BY PKID

-- Move only the chosen rows into the presentation table.
INSERT INTO presentation_table
SELECT S.*
FROM staging_table S
INNER JOIN #mapping M
    ON S.RowID = M.RowID
If I understand you correctly, you have duplicated PKs that have different data in the other fields.
First, put the data from the other database into a staging table. I find it easier to research issues with imports (especially large ones) if I do this. Actually, I use two staging tables (and for this case I strongly recommend it): one with the raw data and one with only the data I intend to import into my system.
Now you can use an Execute SQL task to grab one of the records for each key (see @Quassnoi's answer for an idea of how to do that; you may need to adjust his query for your situation). Personally, I put an identity column into my staging table so I can identify the first or last occurrence of duplicated data. Then put the record you chose for each key into your second staging table. If you are using an exception table, copy the records you are not moving to it, and don't forget a reason code for the exception ("Duplicated key", for instance).
Now that you have only one record per key in a staging table, your next task is to decide what to do about the other data that is not unique. If there are two different business addresses for the same customer, which do you choose? This is a matter of business-rules definition, not strictly speaking SSIS or SQL code. You must define the business rules for how you choose the data when it needs to be merged between two records (what you are doing is the equivalent of a de-duping process). If you are lucky, there is a date field or other way to determine which is the newest or oldest data, and that is the data they want you to use. In that case, once you have selected just one record, you are done the initial transform.
More than likely, though, you will need different rules for each field to choose the correct one. In that case you write SSIS transforms in a data flow or Execute SQL tasks to pick the correct data and update the staging table.
Once you have the exact records you want to import, do the data flow to move them to the correct production tables.
I have multiple database files which exist in multiple locations with exactly the same structure. I understand the ATTACH function can be used to connect multiple files to one database connection; however, this treats them as separate databases. I want to do something like:
SELECT uid, name FROM ALL_DATABASES.Users;
Also,
SELECT uid, name FROM DB1.Users UNION SELECT uid, name FROM DB2.Users ;
is NOT a valid answer, because I have an arbitrary number of database files that I need to merge. Lastly, the database files must stay separate. Does anyone know how to accomplish this?
EDIT: An answer gave me an idea: would it be possible to create a view which is a combination of all the different tables? Is it possible to query for all database files and the databases they 'mount', and then use that inside the view query to create the 'master table'?
Because SQLite imposes a limit on the number of databases that can be attached at one time, there is no way to do what you want in a single query.
If the number can be guaranteed to be within SQLite's limit (which violates the definition of "arbitrary"), there's nothing that prevents you from generating a query with the right set of UNIONs at the time you need to execute it.
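For example, the generated query for three attached databases (aliases invented) would just be the same UNION repeated:

SELECT uid, name FROM db1.Users
UNION
SELECT uid, name FROM db2.Users
UNION
SELECT uid, name FROM db3.Users;
-- use UNION ALL instead if rows duplicated across files should be kept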
To support truly arbitrary numbers of tables, your only real option is to create a table in an unrelated database and repeatedly INSERT rows from each candidate:
ATTACH DATABASE '/path/to/candidate/database' AS candidate;
INSERT INTO some_table (uid, name) SELECT uid, name FROM candidate.User;
DETACH DATABASE candidate;
Some cleverness in the schema would take care of this.
You will generally have 2 types of tables: reference tables, and dynamic tables.
Reference tables have the same content across all databases, for example country codes, department codes, etc.
Dynamic data is data that will be unique to each DB, for example time series, sales statistics, etc.
The reference data should be maintained in a master DB, and replicated to the dynamic databases after changes.
The dynamic tables should all have a column for DB_ID, which would be part of a compound primary key; for example, your time series might use db_id, measurement_id, time_stamp. You could also use a hash on DB_ID to generate primary keys, using the same PK generator for all tables in the DB. When merging these from different DBs, the data will be unique.
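A minimal sketch of a dynamic table keyed this way (names invented):

-- db_id in the primary key keeps rows unique after merging across databases.
CREATE TABLE time_series (
    db_id          INTEGER NOT NULL,
    measurement_id INTEGER NOT NULL,
    time_stamp     TEXT    NOT NULL,
    value          REAL,
    PRIMARY KEY (db_id, measurement_id, time_stamp)
);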
So you will have 3 types of databases:
Reference master -> replicated to all others
Individual dynamic -> replicated to full dynamic
Full dynamic -> replicated from reference master and all individual dynamics.
Then it is up to you how you do this replication: pseudo-realtime, or brute force (truncate and rebuild the full dynamic every day or as needed).
Hypothetically, I have two tables, Employee and Locations. Additionally, I have a view, viewEmpLocation, which is made by joining Employee and Locations.
If I update the view, will the data in the original table get updated?
Yes.
The data "in" a view has no existence independent from the tables that make up the view. The view is, in essence, a stored SELECT statement that masquerades as a table. The data is stored in the original tables and only "assembled" into the view when you want to look at it. If the view is updateable (not all views are) the updates are applied to the table data.
See Using Views in Microsoft SQL Server:
When modifying data through a view (that is, using INSERT or UPDATE statements), certain limitations exist depending upon the type of view. Views that access multiple tables can only modify one of the tables in the view. Views that use functions, specify DISTINCT, or utilize the GROUP BY clause may not be updated. Additionally, inserting data is prohibited for the following types of views:
* views having columns with derived (i.e., computed) data in the SELECT-list
* views that do not contain all columns defined as NOT NULL from the tables from which they were defined
It is also possible to insert or update data through a view such that the data is no longer accessible via that view, unless the WITH CHECK OPTION has been specified.
You could use a trigger on the view to do an insert/update/delete to the actual tables.
http://www.devarticles.com/c/a/SQL-Server/Using-Triggers-In-MS-SQL-Server/1/
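A hedged sketch of such a trigger, reusing the invented column names from above:

-- An INSTEAD OF trigger intercepts updates against the view and routes
-- them to the underlying Employee table.
CREATE TRIGGER trg_viewEmpLocation_Update
ON viewEmpLocation
INSTEAD OF UPDATE
AS
BEGIN
    UPDATE e
    SET    e.Name = i.Name
    FROM   Employee AS e
    INNER JOIN inserted AS i ON i.EmployeeID = e.EmployeeID;
END;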