I have a table that stores records which need to be inserted into another database. Once these values are inserted, I then need to mark the records as processed to prevent them from being re-processed.
DECLARE @InsertedValues TABLE (
[ITEMNMBR] nchar(31),
[ITEMDESC] nchar(101),
[ITMSHNAM] nchar(15),
[ITMGEDSC] nchar(11),
[UOMSCHDL] nchar(11),
[ALTITEM1] nchar(31),
[ALTITEM2] nchar(31),
[USCATVLS_1] nchar(11),
[USCATVLS_2] nchar(11),
[USCATVLS_3] nchar(11),
[USCATVLS_6] nchar(11),
[ABCCODE] int,
[ROW_ID] int
)
-- INSERT NEW INVENTORY ITEMS INTO DB
INSERT INTO TABLE1..IV00101 (ITEMNMBR,ITEMDESC,ITMSHNAM,ITMGEDSC,UOMSCHDL,ALTITEM1,ALTITEM2,USCATVLS_1,USCATVLS_2,USCATVLS_3,USCATVLS_6,ABCCODE)
OUTPUT
INSERTED.[ITEMNMBR],
INSERTED.[ITEMDESC],
INSERTED.[ITMSHNAM],
INSERTED.[ITMGEDSC],
INSERTED.[UOMSCHDL],
INSERTED.[ALTITEM1],
INSERTED.[ALTITEM2],
INSERTED.[USCATVLS_1],
INSERTED.[USCATVLS_2],
INSERTED.[USCATVLS_3],
INSERTED.[USCATVLS_6],
INSERTED.[ABCCODE],
U.[ROW_ID] INTO @InsertedValues
SELECT U.[ITEMNMBR],U.[ITEMDESC],U.[ITMSHNAM],U.[ITMGEDSC],U.[UOMSCHDL],U.[ALTITEM1],U.[ALTITEM2],U.[USCATVLS_1],U.[USCATVLS_2],U.[USCATVLS_3],U.[USCATVLS_6],U.[ABCCODE]
FROM
DYNAMICS..TABLE2 AS U
WHERE
U.[ProcessedFlag] = 0 AND
U.[Action] = 'I' AND
U.[DestinationCompany] = 'COMPANY1' AND
U.[DestinationTable] = 'IV00101'
As it stands, this query doesn't work: it complains about the U.[ROW_ID] column in the OUTPUT clause, which makes sense, since an INSERT's OUTPUT clause can't reference columns from the source table. So my problem is: how do I get the rows that were inserted so that I can then run the following query?
UPDATE R
SET [ProcessedFlag] = 1, [ProcessedDateTime] = GETDATE()
FROM @InsertedValues AS U
INNER JOIN DYNAMICS..TABLE2 AS R ON U.[ROW_ID] = R.[ROW_ID]
I'd consider using eConnect, since messing with GP tables is not a good idea (though inserting into IV00101 should be OK since it's inventory master ... but still!)
What version of GP are you using? GP10 and GP2010 support web services, which have methods that allow you to insert an inventory item; otherwise you can use eConnect and provide XML files to the eConnect entry point, which it will process. It also provides validation and error handling. You can use message queuing too if need be.
Are you trying to do an import from your own holding table into the GP tables or something like that?
I do plenty of GP and integration where I work :)
It's not possible to get the number of updated rows with standard SQL, but probably every database provides a way to do it. But it won't be easy to help you if you don't say which RDBMS you are using and where you are calling the SQL from: a script executed in some db client application, or an application you're developing in T-SQL, PL/SQL, PL/pgSQL, Java, PHP, C/C++, C#, VB or whatever language, probably using a db library you should also name.
UPDATE DYNAMICS..TABLE2
SET [ProcessedFlag] = 1, [ProcessedDateTime] = GETDATE()
WHERE
DYNAMICS..TABLE2.[ProcessedFlag] = 0 AND
DYNAMICS..TABLE2.[Action] = 'I' AND
DYNAMICS..TABLE2.[DestinationCompany] = 'COMPANY1' AND
DYNAMICS..TABLE2.[DestinationTable] = 'IV00101'
Just update the same set of records you selected in the first place.
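If new rows matching that WHERE clause can arrive between the INSERT and the UPDATE, the two statements can drift apart. One alternative worth knowing, assuming SQL Server 2008 or later (a sketch, not tested against GP; the @Inserted capture table is mine, the rest reuses the question's names): unlike INSERT, the OUTPUT clause of MERGE is allowed to reference source columns, so ROW_ID can be captured directly.
-- MERGE with a never-matching ON condition behaves like a plain INSERT,
-- but its OUTPUT clause may reference the source alias U.
DECLARE @Inserted TABLE ([ROW_ID] int);

MERGE INTO TABLE1..IV00101 AS T
USING (
    SELECT [ITEMNMBR],[ITEMDESC],[ITMSHNAM],[ITMGEDSC],[UOMSCHDL],[ALTITEM1],
           [ALTITEM2],[USCATVLS_1],[USCATVLS_2],[USCATVLS_3],[USCATVLS_6],
           [ABCCODE],[ROW_ID]
    FROM DYNAMICS..TABLE2
    WHERE [ProcessedFlag] = 0
      AND [Action] = 'I'
      AND [DestinationCompany] = 'COMPANY1'
      AND [DestinationTable] = 'IV00101'
) AS U
ON 1 = 0  -- never matches, so every source row takes the insert branch
WHEN NOT MATCHED THEN
    INSERT (ITEMNMBR,ITEMDESC,ITMSHNAM,ITMGEDSC,UOMSCHDL,ALTITEM1,ALTITEM2,
            USCATVLS_1,USCATVLS_2,USCATVLS_3,USCATVLS_6,ABCCODE)
    VALUES (U.ITEMNMBR,U.ITEMDESC,U.ITMSHNAM,U.ITMGEDSC,U.UOMSCHDL,U.ALTITEM1,
            U.ALTITEM2,U.USCATVLS_1,U.USCATVLS_2,U.USCATVLS_3,U.USCATVLS_6,
            U.ABCCODE)
OUTPUT U.[ROW_ID] INTO @Inserted;

UPDATE R
SET [ProcessedFlag] = 1, [ProcessedDateTime] = GETDATE()
FROM @Inserted AS I
INNER JOIN DYNAMICS..TABLE2 AS R ON I.[ROW_ID] = R.[ROW_ID];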
Just a suggestion: you should use identity columns when you have this kind of scenario, since @@IDENTITY/SCOPE_IDENTITY() then becomes pretty easy to use.
Otherwise, I'd suggest a trigger, provided this table doesn't receive multiple simultaneous insertions, as triggers have a few disadvantages of their own.
So perhaps the title is a little confusing. If you can suggest better wording please let me know and I'll update it.
Here's the issue: I've got a table with many thousands of rows, and I need to update a few thousand of them to store the latest email data.
For example:
OldEmail@1.com => NewEmail@1.com
OldEmail@2.com => NewEmail@2.com
I've got a list of the old emails ('OldEmail@1.com','OldEmail@2.com') and a list of the new ones ('NewEmail@1.com','NewEmail@2.com'). The hope was to do it simply with something like:
UPDATE Table
SET Email = ('NewEmail@1.com','NewEmail@2.com')
WHERE Email = ('OldEmail@1.com','OldEmail@2.com')
I hope that makes sense. Any questions just ask. Thanks!
You could use a case expression:
update mytable
set email = case email
when 'OldEmail@1.com' then 'NewEmail@1.com'
when 'OldEmail@2.com' then 'NewEmail@2.com'
end
where email in ('OldEmail@1.com','OldEmail@2.com')
Or better yet, if you have a large list of values, you might create a table to store them (like myref(old_email, new_email)) and join it in your update query, like so:
update t
set t.email = r.new_email
from mytable t
inner join myref r on r.old_email = t.email
The actual syntax for an UPDATE with a JOIN does vary across databases; the above is SQL Server syntax.
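For completeness, a minimal sketch of that reference-table approach (SQL Server syntax; the myref table, its column sizes, and the sample rows are illustrative):
-- Mapping table: one row per address to rewrite.
CREATE TABLE myref (
    old_email varchar(255) NOT NULL PRIMARY KEY,
    new_email varchar(255) NOT NULL
);

INSERT INTO myref (old_email, new_email)
VALUES ('OldEmail@1.com', 'NewEmail@1.com'),
       ('OldEmail@2.com', 'NewEmail@2.com');

-- One set-based update rewrites every matching row.
UPDATE t
SET t.email = r.new_email
FROM mytable t
INNER JOIN myref r ON r.old_email = t.email;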
Adjusting for the exact syntax of your particular DBMS:
WITH cte AS (SELECT 'NewEmail@1.com' newvalue, 'OldEmail@1.com' oldvalue
UNION ALL
SELECT 'NewEmail@2.com', 'OldEmail@2.com')
UPDATE table
SET table.email = cte.newvalue
FROM cte
WHERE table.email = cte.oldvalue
or, if CTE is not available,
UPDATE table
SET table.email = cte.newvalue
FROM (SELECT 'NewEmail@1.com' newvalue, 'OldEmail@1.com' oldvalue
UNION ALL
SELECT 'NewEmail@2.com', 'OldEmail@2.com') cte
WHERE table.email = cte.oldvalue
Consider a prepared statement for updating rows in large batches.
Basically it works as follows:
the database compiles the query pattern you provide the first time and keeps the compiled result for the current connection (this depends on the implementation).
you then update the rows by sending the short label of the prepared statement with different parameters in SQL syntax, instead of sending the entire UPDATE statement several times for several updates.
the database parses the short label, which is linked to the pre-compiled result, then performs the updates.
the next time you perform row updates, the database may still use the pre-compiled result and complete the operations quickly (so the first step above can be skipped).
Here is a PostgreSQL example of a prepared statement; many other SQL databases (e.g. MariaDB, MySQL, Oracle) also support them.
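A minimal sketch of the idea (PostgreSQL syntax, reusing the illustrative mytable/email names from above):
-- Compile the UPDATE once for this connection; $1/$2 are the parameters.
PREPARE set_email (text, text) AS
    UPDATE mytable SET email = $1 WHERE email = $2;

-- Execute it once per address pair; only the parameters travel each time.
EXECUTE set_email ('NewEmail@1.com', 'OldEmail@1.com');
EXECUTE set_email ('NewEmail@2.com', 'OldEmail@2.com');

-- Release the prepared statement when finished.
DEALLOCATE set_email;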
With GDPR looming in the UK, and a team of 15 users who have already created spurious SELECT statements (in excess of 2,000) across 15 different databases, I need a method to capture an already-created SELECT statement and assign surrogate keys/data WITHOUT rewriting every procedure we already have.
The original team members' scripts will still need to run as normal, and there will also be a requirement to pseudonymise the values.
My current thinking is to create a stored procedure along the lines of:
CREATE PROC Pseudo (@query NVARCHAR(MAX))
INSERT INTO #TEMP FROM @query
Do something with the data via a mapping table of real and surrogate/pseudo data.
UPDATE #TEMP
SET FNAME = (SELECT Pseudo_FNAME FROM PseudoTable PT WHERE #TEMP.FNAME = PT.FNAME)
SELECT * FROM #TEMP
So that team members can run their normal SELECT statements and get pseudo data simply by using:
EXEC Pseudo ('SELECT FNAME FROM CUSTOMERS')
The problem I'm having is you can't use:
INSERT INTO #TEMP FROM @query
So I tried via CTE:
WITH TEMP AS (@query)
...but I can't use that either.
Surely there's a way of capturing the recordset from an existing SELECT so that I can pull it into a table and amend it, or of capturing the SELECT statement itself, without having to amend the original script. Please bear in mind that each SELECT statement will be unique, so I can't hard-code columns or values.
Does anyone have any ideas or a working example of how best to tackle this?
There are other lengthy methods I could externally do to carry this out but I'm trying to resolve this within SQL if possible.
So after a bit of deliberation I resolved it.
I passed the original SELECT SQL to a stored procedure that embedded it in dynamic SQL, which, when executed, INSERTed the data. I then UPDATEd from that dataset.
The end result was: EXEC Pseudo('Original SQL;')
I will have to set some basic rules around certain columns for now as a short-term fix, but at least users can create non-pseudo and pseudo data as required without masses of reworking :)
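A minimal sketch of that dynamic-SQL approach (assuming, for illustration only, that each incoming query returns a single FNAME column and that the PseudoTable mapping from the question exists; the poster's real procedure would need to handle arbitrary column sets):
CREATE PROC Pseudo (@query NVARCHAR(MAX))
AS
BEGIN
    -- Assumed result shape: one FNAME column.
    CREATE TABLE #TEMP (FNAME NVARCHAR(100));

    -- INSERT ... EXEC captures the result set of the dynamic query.
    INSERT INTO #TEMP (FNAME)
    EXEC sp_executesql @query;

    -- Swap real values for their pseudo equivalents via the mapping table.
    UPDATE T
    SET T.FNAME = PT.Pseudo_FNAME
    FROM #TEMP T
    INNER JOIN PseudoTable PT ON T.FNAME = PT.FNAME;

    SELECT * FROM #TEMP;
END

Called as: EXEC Pseudo N'SELECT FNAME FROM CUSTOMERS';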
I know a similar question to this has been asked a fair few times on here, but none seem to give me the answer I need - specifically because I am talking about the relationship between SQL and FileMaker.
See, I'm not actually using an OUTPUT clause at all in my trigger! But FileMaker seems either to think I am, or to insert an OUTPUT clause itself that I can't see.
Here is an example trigger:
CREATE TRIGGER [dbo].[TRG_person__name]
ON [dbo].[person]
AFTER INSERT,UPDATE
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
IF UPDATE ( person_name_first )
BEGIN
UPDATE person
SET person_name_first = dbo.mslTitleCase ( i.person_name_first )
FROM person AS x
JOIN inserted AS i ON x._pk_person = i._pk_person
WHERE x._pk_person = i._pk_person
END
--END IF
END
The mslTitleCase function has been declared and works if I create a record using a SQL query.
But I get an error when creating a new record in FileMaker.
Does anyone have any tips to find a way to stop this error in FileMaker?
Any guidance greatly appreciated.
Thank you
Lewis
This has been reported as a bug: http://help.filemaker.com/app/answers/detail/a_id/7870/~/external-sql-data-sources-(ess)%3A-unable-to-write-to-a-table-with-triggers-on
It does not look like it was resolved.
You may have other problems, but the correct syntax for the update is:
UPDATE p
SET person_name_first = dbo.mslTitleCase ( i.person_name_first )
FROM person p JOIN
inserted i
ON p._pk_person = i._pk_person
WHERE p._pk_person = i._pk_person;
The important change is that the update references the table alias not the table name. I think p is a much better alias than x for a table named person.
It sounds like your application is using the OUTPUT clause to directly return the modified data. If the table has any trigger, this is not allowed. They would have to output the modified data into a table, and then select the data from the table.
The issue is covered in this article:
https://blogs.msdn.microsoft.com/sqlprogrammability/2008/07/11/update-with-output-clause-triggers-and-sqlmoreresults/
If this is the case, you will not be able to add a trigger to the table. The corrections to the data will need to happen from an external source.
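To illustrate the restriction that article describes (a sketch against the person table above; the key value and column types are assumed): SQL Server raises error 334 for a DML statement whose OUTPUT clause returns rows directly to the client when the target table has any enabled trigger, while OUTPUT ... INTO a table is still allowed.
-- Fails with error 334 once person has a trigger: OUTPUT goes straight
-- to the client.
UPDATE person
SET person_name_first = 'Lewis'
OUTPUT INSERTED.person_name_first
WHERE _pk_person = 1;

-- Allowed: capture into a table variable, then select from it.
DECLARE @changed TABLE (person_name_first nvarchar(255));
UPDATE person
SET person_name_first = 'Lewis'
OUTPUT INSERTED.person_name_first INTO @changed
WHERE _pk_person = 1;
SELECT person_name_first FROM @changed;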
I'm using SQL Server. I'm also relatively new to writing SQL... in a strong way. It's mostly self-taught, so I'm probably missing key ideas in terms of proper format.
I've a table called 'SiteResources' and a table called 'ResourceKeys'. SiteResources has an integer that corresponds to the placement of a string ('siteid') and a 'resourceid' which is an integer id that corresponds to 'resourceid' in ResourceKeys. ResourceKeys also contains a string for each key it contains ('resourcemessage'). Basically, these two tables are responsible for representing how strings are stored and displayed on a web page.
The best way to consistently update these two tables, is what? Let's say I have 5000 rows in SiteResources and 1000 rows in ResourceKeys. I could have an excel sheet, or a small program, which generates 5000 singular update statements, like:
update SiteResources set resourceid = 0
WHERE siteid IS NULL AND resourceid IN (select resourceid
from ResourceKeys where resourcemessage LIKE 'FooBar')
I could have thousands of those singular update statements, with FooBar representing each string in the database I might want to change at once, but isn't there a cleaner way to write such a massive number of update statements? From what I understand, I should be wrapping all of my statements in BEGIN/END/GO in case of failure, which leads me to believe there is a more systematic way of writing them. Is my hunch correct, or is the way I'm going about this ideal? I could change the structure of my tables, I suppose, or of how I store the data, which might simplify things, but let's say I can't do that in this instance.
As far as I understand, you just need to update every row in SiteResources whose 'placement of a string' column (siteid) is empty. If so, here is the code:
UPDATE a
SET resourceid = 0
FROM SiteResources a
WHERE EXISTS (select * from ResourceKeys b where a.resourceid = b.resourceid)
AND a.siteid IS NULL
For specific rows like the 'FooBar' ones, you can add a filter like this:
UPDATE a
SET resourceid = 0
FROM SiteResources a
WHERE EXISTS (select * from ResourceKeys b where a.resourceid = b.resourceid and b.resourcemessage IN ('FooBar', 'FooBar2', 'FooBar3', ...))
AND a.siteid IS NULL
Let me see if I understood the question correctly: you'd like to set resourceid to 0 where the resourcemessage corresponds to a list of strings? If so, you can build your query like this:
UPDATE r
SET resourceid = 0
FROM SiteResources r
JOIN ResourceKeys k ON r.resourceid = k.resourceid
WHERE k.resourcemessage IN ('FooBar', ...)
AND r.siteid IS NULL;
This uses the extended UPDATE syntax in Transact-SQL that allows a JOIN in the UPDATE statement. But maybe it's not exactly what you want? Why do you use the LIKE operator in your query without a wildcard (%)?
With table-valued parameters, you can pass a table from your client app to the SQL batch that your app submits for execution. You can use this to pass a list of all the strings you need to update to a single UPDATE that updates all rows at once.
That way you don't have to worry about any of your concerns: the number of updates, transactional atomicity, error handling. As a bonus, performance will be improved.
I recommend that you do a bit of research on what TVPs are and how they are used.
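A minimal sketch of the TVP approach, assuming SQL Server 2008 or later (the type name, procedure name, and column size below are illustrative, not from the question):
-- Table type describing the list the client passes in.
CREATE TYPE dbo.ResourceMessageList AS TABLE (
    resourcemessage nvarchar(256) NOT NULL PRIMARY KEY
);
GO

-- One set-based UPDATE handles every string in the list at once.
CREATE PROC dbo.ClearSiteResources (@messages dbo.ResourceMessageList READONLY)
AS
BEGIN
    UPDATE r
    SET r.resourceid = 0
    FROM SiteResources r
    INNER JOIN ResourceKeys k ON k.resourceid = r.resourceid
    INNER JOIN @messages m ON m.resourcemessage = k.resourcemessage
    WHERE r.siteid IS NULL;
END
GO

From ADO.NET, the client fills a DataTable with the strings and passes it as a SqlParameter with SqlDbType.Structured.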
I am trying to update my current table by drawing data from another table.
My table (dbo_finance) has a column called test.
The other table is AssetsC, and I am going to pull the data from its IssueName1 column; however, I only want to pull IssueName1 when the field [MinSecClass] is 9.
This is what I wrote
UPDATE dbo_finance
SET [dbo_finance].cusip9 = AssetsC.cusip
FROM dbo_finance INNER JOIN AssetsC ON dbo_finance.test = AssetsC.[IssueName1]
WHERE (AssetsC.MinSecClass = 9)
Thanks, first time really using SQL
Well I would use aliases, it's a good habit to get into:
UPDATE f
SET f.cusip9 = a.cusip
FROM dbo_finance f
INNER JOIN AssetsC a ON f.test = a.[IssueName1]
WHERE (a.MinSecClass = 9)
Now that will work fine if the AssetsC table only returns one value of cusip for each record. If this is a one-to-many relationship, you may need something more complex to truly get the answer you want.
I see several serious design flaws in your table structure. First, join fields that depend on something as inherently unstable as an issue name are a very poor choice; you want PK and FK fields to be unchanging. Use surrogate keys and a unique index instead.
The fact that you have a field called cusip9 indicates to me that you are denormalizing the data. Do you really need to do this? Do you understand that this update will have to run in a trigger every time the cusip associated with MinSecClass changes? Why are you denormalizing? Do you currently have performance problems? A denormalized table like this can be much more difficult to query when you need data from several of these numbered fields. Since you already have the data in the assets table, what are you gaining, except a maintenance nightmare, by duplicating it?
UPDATE dbo_finance
SET cusip9 = (
SELECT A1.cusip
FROM AssetsC AS A1
WHERE dbo_finance.test = A1.IssueName1
AND A1.MinSecClass = 9
)
WHERE EXISTS (
SELECT *
FROM AssetsC AS A1
WHERE dbo_finance.test = A1.IssueName1
AND A1.MinSecClass = 9
);
Because you're using SQL 2008, you can take advantage of the new(ish) MERGE statement.
MERGE INTO dbo_finance
USING (SELECT IssueName1, cusip FROM AssetsC WHERE MinSecClass = 9) AS source
ON dbo_finance.test = source.IssueName1
WHEN MATCHED THEN UPDATE SET dbo_finance.cusip9 = source.cusip;