I am doing performance tuning on a company product that relates to putting a lot of financial data into CRM. There seems to be a bottleneck at the point of invoice creation, where the following query is run:
(@orgid uniqueidentifier)
declare @currentval int
update OrganizationBase
set @currentval = CurrentInvoiceNumber, CurrentInvoiceNumber = CurrentInvoiceNumber + 1
where OrganizationId = @orgid
select @currentval
Despite running all of the code in a multi-threaded way, everything inevitably queues behind this task, which for some reason takes a second or so to run.
I can't find any way to disable this auto-numbering, as I would prefer to generate the invoice number myself for performance purposes (contiguous numbers are not a necessity).
So my questions are:
Q: Can auto-numbering for invoices be turned off?
Q: Which out-of-the-box plugin or workflow actually issues this query? (It doesn't seem to be a stored proc.)
Q: Is there another workaround that I am not considering?
You can't disable the auto-numbering of the invoice, but CRM allows you to set a custom value as the invoice number (the invoicenumber field) when you create a new record, so you can check the performance when you set the value manually.
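Since the question notes that contiguous numbers are not a necessity, here is a minimal T-SQL sketch of generating a unique, non-contiguous number yourself (the prefix and format are assumptions), which could then be supplied in the invoicenumber field at create time:
-- Sketch only: derive a unique invoice number without the serialized
-- counter UPDATE on OrganizationBase. Prefix/format are assumptions.
declare @invoicenumber nvarchar(50)
set @invoicenumber = 'INV-' + convert(nvarchar(36), NEWID())
select @invoicenumber  -- pass this as the invoicenumber attribute on create
This takes the hot counter row out of the critical path entirely, at the cost of non-sequential numbers.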
I'm currently learning ABAP and trying to make an enhancement, but I have gotten stuck on how to go about building on top of existing code. I have a program that runs periodically via a background job and disables user accounts after X days of inactivity (in this case 90 days of inactive usage, based on USR02~TRDAT).
I want to add an enhancement that notifies users via their email address when their last logon date is 30, 15, 7, 5, 3, and 1 day away from the 90-day inactivity threshold, with a link to the SAP system to help them reactivate the account with ease. The email address comes from matching usr02~bname to usr21~bname, then using usr21~persnumber and usr21~addrnumber to read adr6~smtp_addr (the usr02~bname -> adr6~smtp_addr relationship), as sketched below.
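That lookup chain, written out as a plain SQL sketch (in the actual program this would be Open SQL; the example user name is made up):
-- Resolve a user's e-mail address: usr02~bname -> usr21 -> adr6~smtp_addr
SELECT a.smtp_addr
FROM usr02 u
JOIN usr21 p ON p.bname = u.bname
JOIN adr6 a ON a.persnumber = p.persnumber
           AND a.addrnumber = p.addrnumber
WHERE u.bname = 'JDOE';  -- example user name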
I'm beginning to think that an enhancement might not be a good idea, and that I should rather create a new program and schedule the background job daily. Any guidance or information would be greatly appreciated...
Extract
CLASS cl_inactive_users_reader DEFINITION.
  PUBLIC SECTION.
    TYPES:
      BEGIN OF ts_inactive_user,
        user_name          TYPE syst_uname,
        days_of_inactivity TYPE int1,
      END OF ts_inactive_user.
    TYPES tt_inactive_users TYPE STANDARD TABLE OF ts_inactive_user WITH EMPTY KEY.
    CLASS-METHODS read_inactive_users
      IMPORTING
        min_days_of_inactivity TYPE int1
      RETURNING
        VALUE(result)          TYPE tt_inactive_users.
ENDCLASS.
Then refactor
REPORT block_inactive_users.
DATA(inactive_users) = cl_inactive_users_reader=>read_inactive_users( 90 ).
LOOP AT inactive_users INTO DATA(inactive_user).
  " block user
ENDLOOP.
And add
REPORT warn_inactive_users.
DATA(inactive_users) = cl_inactive_users_reader=>read_inactive_users( 60 ).
LOOP AT inactive_users INTO DATA(inactive_user).
  CASE inactive_user-days_of_inactivity.
    " choose urgency
  ENDCASE.
  " send e-mail
ENDLOOP.
and run both reports daily.
Don't create a big ball of mud by squeezing new features into existing code.
From SAP wiki:
The enhancement concept allows you to add your own functionality to SAP's standard business applications without having to modify the original applications. To modify the standard SAP behavior as per customer requirements, we can use the enhancement framework.
As per your description, it doesn't sound like a use case for an enhancement. It isn't an intervention in an existing process. The original process and your new requirement are two different processes with one mutual logical part: the selection of users by days of inactivity. The two shouldn't rely on each other.
Structurally I think it is best to have a separate program for computing which e-mails need to be sent and when, and a separate program for actually sending them.
I would copy your original program to a new one and modify it a little bit, so that instead of disabling a user it records into some table, for each user: 1) the e-mail address, 2) the date when to send, 3) how many days are left (30, 15, 7, etc.), 4) a status showing whether the e-mail was sent or not. Initially you can even have multiple such jobs, one for each period (30, 15, 7, etc.), passing the period as a parameter (which you use inside instead of 90).
This program you run daily as a job, and it populates that table with e-mail "tasks" of what needs to be sent today. It only adds new lines, so lines from yesterday stay in there.
The second program should just read that table, send the actual e-mails, and update the statuses. You run that program daily as well.
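As a sketch of that task table (written as generic SQL purely for illustration; in SAP this would be a transparent table in the dictionary, and all names and lengths here are assumptions):
-- One row per pending notification; the daily mailer reads today's
-- unsent rows, sends the e-mails, and updates the status.
CREATE TABLE email_tasks (
    user_name varchar(12)  NOT NULL,                 -- usr02~bname
    smtp_addr varchar(241) NOT NULL,                 -- resolved via usr21 -> adr6
    send_date date         NOT NULL,                 -- when to send
    days_left int          NOT NULL,                 -- 30, 15, 7, 5, 3 or 1
    status    varchar(10)  NOT NULL DEFAULT 'NEW',   -- 'NEW' or 'SENT'
    PRIMARY KEY (user_name, send_date)
);
The primary key also guards against queuing the same user twice for the same day.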
This way you have:
Overview: just check the table to see what's going on.
Control: if the e-mailer dies or hangs, you can restart it and it will continue where it left off; with statuses you avoid sending duplicate mails.
You can make sure you don't send outdated e-mails by having your mailer script ignore all tasks older than, say, 2 days.
I want to clarify your confusion about the use of enhancements:
You would want to use enhancements when 'something' happens or is going to happen in the system and you want to change that standard behavior.
That something, let's call it an event or process, could be for example an order being placed, a certain user logging onto the system, or a material having been or about to be changed.
The change could be notifying another system of an order, or running additional checks on the logged-on user, for example checking his GUI version and warning him/her if it is not up to date.
Ask yourself: which process in the system does the execution of your program or code depend on? Does anything need to happen before the program is executed? No, only time has to elapse.
Even if you had found an enhancement you would want to use: if the process containing that enhancement did not run within the 90 days, your mails would not be sent, because the enhancement would never be called.
Edit: That being said, if by 'enhancement' you mean 'building on your existing program' as opposed to 'creating a new one', that would not be the right terminology in the SAP universe.
I would extend the functionality of your existing program, since you already compute how many days are left, and you would have only one job to maintain.
A coworker in accounting was complaining about how she ran a query twice, it doubled her values, and she got confused. I'm just a junior IT person with very little VBA experience. I am basically trying to add code so that queries in our databases can't be run more than once unless you restart the database. I was thinking of doing a boolean check to see if a query has been run, and if it has, not allowing it to be run again. Or maybe I could just do a simple if statement. Please let me know if you have any input on this issue.
I couldn't find anything on Google, either.
I would think of using a date and a session ID as default values in each table; you could code the addition of both, etc.
These are populated with date = Date() as the default value, and SessionID as the DMax from your SessionID table, added as an extra column in said query.
This SessionID table is incremented by a startup popup form running a macro.
The primary key of each table being operated on would be the date and the SessionID, not allowing dupes. You probably don't need the date, just a SessionID in the PK. A rough sketch follows.
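In Access SQL the idea might look like this (all object names are assumptions); the startup form bumps the one-row SessionLog table, and the composite primary key rejects a second run in the same session:
-- Append query that stamps every row with the current session:
INSERT INTO Results (OrderID, RunDate, SessionID, Amount)
SELECT o.OrderID,
       Date() AS RunDate,
       DMax("SessionID", "SessionLog") AS SessionID,
       o.Amount
FROM Orders AS o;
-- Results has PRIMARY KEY (OrderID, SessionID), so re-running the
-- query in the same session only produces duplicate-key rejections.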
It is not always the best idea to implement ad-hoc ideas from users like this.
You should analyze what happened here and make sure it cannot happen, by application design rather than by arbitrary rules.
Example: If the update query adds fees to a bill, and this must happen only once per bill, then the update query should also set a flag "fees added" in the bill record. And it should not update bills with this flag set.
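A minimal sketch of that guarded update (table and column names are made up):
-- Apply fees exactly once: bills already flagged are simply skipped.
UPDATE Bills
SET Total = Total + FeeAmount,   -- FeeAmount: assumed column holding the fee
    FeesAdded = True
WHERE BillID = [Enter Bill ID]
  AND FeesAdded = False;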
I have a SQL Server table called AD where ads to be viewed are stored as
create table Sponsors.AD
(
ADID varchar(40) primary key,
SponsorID varchar(30),
PurchasedViews int, --How many views the ad must reach before it is disabled
CurrentViewCount int, --Keeps track of how many views the ad has gotten
{...}
Active bit -- for easier checking of whether the AD still has clicks to give
)
This feeds into a webpage where, to access a feature, users first need to view an ad. Users pick one ad from a menu that displays three options; the chosen ad's media is displayed and the feature is unlocked at its conclusion.
After they view the ad, its CurrentViewCount should be updated (increased by 1).
This is handled by a stored procedure that includes an update call for the table, separate from the stored procedure that fetches 3 ads at random for the option menu. I'm looking for suggestions on how to synchronize all concurrent ad views, as it could happen that
two or more users have the same ad in their 3-choice-menu
two or more users view the same ad at the same time
1 and 2 are not a problem on their own, but they could be if the ad is one view away from its set maximum.
One way I've thought of to solve this is to set the Active flag to false when an ad one view away from its target is displayed in the 3-option menu, and reset it to true if the user does not click it. But then I'd need to handle cases where the user exits the option dialogue, disconnects, times out, etc. I feel like there must be a better way.
Another suggestion I've heard is to increase the counter as soon as the ads are summoned to the 3-option menu, but that's even more overhead than the other option and suffers the same issues.
Locking the table is absolutely infeasible unless we wanted to serve only one ad view at a time, so I'm not even considering it.
I'm sure something like this has been discussed before, but I don't know what keywords to search for to find more on this.
I would not count the clicks within the same table... that could avoid your locking issues.
But, to get to your question: maybe you could handle this in a "fuzzy" way. Not a tight active = yes/no, but rather something like an inactivity level together with a timeout.
As long as the flag is true, everything is fine. When the counter exceeds the limit, you switch to "no new visitors" and set a timestamp, so your ad won't be displayed in a new context. You set it to "inactive" after a given timeout.
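A sketch of the first suggestion, counting views outside the AD table (everything beyond the question's schema is an assumption): each view becomes a cheap insert, and a periodic job rolls the log up and deactivates ads that have reached their quota.
-- Append-only view log: no contention on the hot AD row per view.
create table Sponsors.AD_VIEW
(
    ADID varchar(40) not null,
    ViewedAt datetime not null default getdate()
)

-- Periodic rollup: fold logged views into CurrentViewCount and
-- deactivate any ad that has met or passed PurchasedViews.
update a
set CurrentViewCount = a.CurrentViewCount + v.cnt,
    Active = case when a.CurrentViewCount + v.cnt >= a.PurchasedViews
                  then 0 else 1 end
from Sponsors.AD a
join (select ADID, count(*) as cnt
      from Sponsors.AD_VIEW
      group by ADID) v on v.ADID = a.ADID
-- (delete the rolled-up rows from Sponsors.AD_VIEW afterwards)
This accepts a small over-serve window between rollups, which matches the "fuzzy" framing above: ads stop appearing shortly after the quota is hit rather than exactly at it.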
I'm currently working on a SQL import routine to import data from a legacy application to a more modern robust system. The routine simply imports data from a flat-file legacy table (stored as a .csv file) into SQL Server that follows the classic order/order-detail pattern. Here's what both tables look like:
**LEGACY_TABLE**
Cust_No
Item_1_No
Item_1_Qty
Item_1_Prc
Item_2_No
Item_2_Qty
Item_2_Prc
...
Item_7_No
Item_7_Qty
Item_7_Prc
As you can see, the legacy table is basically a 22-column spreadsheet used to represent a customer, along with up to 7 items and their quantity and purchase price, respectively.
The new table(s) look like this:
**INVOICE**
Invoice_No
Cust_No
**INVOICE_LINE_ITEM**
Invoice_No
Item_No
Item_Qty
Item_Prc
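For concreteness, a DDL sketch of those target tables (column types are assumptions; the question does not give them):
-- Target schema sketch; adjust types to match the real data.
CREATE TABLE INVOICE (
    Invoice_No int NOT NULL PRIMARY KEY,
    Cust_No varchar(30) NOT NULL
);
CREATE TABLE INVOICE_LINE_ITEM (
    Invoice_No int NOT NULL REFERENCES INVOICE (Invoice_No),
    Item_No varchar(40) NOT NULL,
    Item_Qty int NOT NULL,
    Item_Prc money NOT NULL,
    PRIMARY KEY (Invoice_No, Item_No)
);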
My quick-and-dirty approach has been to create a replica of the LEGACY_TABLE (let's call it LEGACY_TABLE_SQL) in SQL Server. This table will be populated from the .csv file using a database import that is already built into the application.
From there, I created a stored procedure to actually copy each of the values in the LEGACY_TABLE_SQL table to the INVOICE/INVOICE_LINE_ITEM tables as well as handle the underlying logical constraints (i.e. performing existence tests, checking for already open invoices, etc.). Finally, I've created a database trigger that calls the stored procedure when new data is inserted into the LEGACY_TABLE_SQL table.
The stored procedure looks something like this:
CREATE PROC IMPORT_PROCEDURE
    @CUST_NO  varchar(30),  -- parameter types are assumptions; the question omits them
    @ITEM_NO  varchar(40),
    @ITEM_QTY int,
    @ITEM_PRC money
However, instead of calling the procedure once, I actually call the stored procedure seven times (once for each item) using a database trigger. I only execute the stored procedure when the ITEM_NO is NOT NULL, to account for blank items in the .csv file. Therefore, my trigger looks like this:
CREATE TRIGGER IMPORT_TRIGGER ON LEGACY_TABLE_SQL
AFTER INSERT
AS
-- read the new row from the "inserted" pseudo-table (single-row inserts and types assumed)
DECLARE @CUST_NO varchar(30), @ITEM_NO_1 varchar(40), @ITEM_QTY_1 int, @ITEM_PRC_1 money
SELECT @CUST_NO = CUST_NO, @ITEM_NO_1 = ITEM_NO_1, @ITEM_QTY_1 = ITEM_QTY_1, @ITEM_PRC_1 = ITEM_PRC_1 FROM inserted
IF @ITEM_NO_1 IS NOT NULL
BEGIN
    EXEC IMPORT_PROCEDURE @CUST_NO, @ITEM_NO_1, @ITEM_QTY_1, @ITEM_PRC_1  -- T-SQL EXEC takes no parentheses
END
...so on and so forth.
I'm not sure that this is the most efficient way to accomplish this task. Does anyone have any tips or insight that they wouldn't mind sharing?
I would separate the import process from any triggers. A trigger is useful if you're going to have rows being constantly added to the import table from a constantly running, outside source. It doesn't sound like this is your situation though, since you'll be importing an entire file at once. Triggers tend to hide code and can be difficult to work with in some situations.
How often are you importing these files?
I would have an import process that is mostly stand-alone. It might use stored procedures or tables in the database, but I wouldn't use triggers. A simple approach would be something like below. I've added a column to the Legacy_Invoices (also renamed to something that's more descriptive) so that you can track when items have been imported and from where. You can expand this to track more information if necessary.
Also, I don't see how you're tracking invoice numbers in your code. I've assumed an IDENTITY column in the Legacy_Invoices. This is almost certainly insufficient since I assume that you're creating invoices in your own system as well (outside of the legacy system). Without knowing your invoice numbering scheme though, it's impossible to give a solution there.
BEGIN TRAN
DECLARE
@now DATETIME = GETDATE()
UPDATE Legacy_Invoices
SET
import_datetime = @now
WHERE
import_status = 'Awaiting Import'
INSERT INTO dbo.Invoices (invoice_no, cust_no)
SELECT DISTINCT invoice_no, cust_no
FROM
Legacy_Invoices
WHERE
import_datetime = @now
UPDATE Legacy_Invoices
SET
import_status = 'Invoice Imported'
WHERE
import_datetime = @now
INSERT INTO dbo.Invoice_Lines (invoice_no, item_no, item_qty, item_prc)
SELECT
invoice_no,
item_no_1,
item_qty_1,
item_prc_1
FROM
Legacy_Invoices LI
WHERE
import_datetime = @now AND
import_status = 'Invoice Imported' AND
item_no_1 IS NOT NULL
UPDATE Legacy_Invoices
SET
import_status = 'Item 1 Imported'
WHERE
import_datetime = @now AND
import_status = 'Invoice Imported'
<Repeat for item_no_2 through 7>
COMMIT TRAN
Here's a big caveat though. While cursors are normally not desirable in SQL and you want to use set-based processing versus RBAR (row by agonizing row) processing, data imports are often an exception.
The problem with the above is that if one row fails, that whole import step fails. Also, it's very difficult to run a single entity (invoice plus line items) through business logic when you're importing them in bulk. This is one place where SSIS really shines. It's extremely fast (assuming that you set it up properly), even when importing one entity at a time. You can then put all sorts of error-handling in it to make sure that the import runs smoothly. One import row has an erroneous invoice number? No problem, mark it as an error and move on. A row has item# 2 filled in, but no item#1 or has a price without a quantity? No problem, mark the error and move on.
For a single import I might stick with the code above (adding in appropriate error handling of course), but for a repeating process I would almost certainly use SSIS. You can import millions of rows in seconds or minutes even with individual error handling on each business entity.
If you have any problems with getting SSIS running (there are tutorials all over the web and on MSDN at Microsoft) then post any problems here and you should get quick answers.
I'm not sure why you would add a trigger. Will you be continuing to use the LEGACY_TABLE_SQL?
If not, then how about this one-time procedure? It uses Oracle syntax but can be adapted to most databases.
PROCEDURE MIGRATE IS
  CURSOR all_data IS
    SELECT invoice_no, cust_no, Item_1_no, Item_1_qty, ...
    FROM LEGACY_TABLE_SQL;
BEGIN
  FOR data IN all_data LOOP
    INSERT INTO INVOICE (invoice_no, cust_no)
    VALUES (data.invoice_no, data.cust_no);
    IF data.Item_1_no IS NOT NULL THEN
      INSERT INTO INVOICE_LINE_ITEM (invoice_no, item_no, item_qty, ...)
      VALUES (data.invoice_no, data.Item_1_no, data.Item_1_qty, ...);
    END IF;
    -- further inserts for each item
  END LOOP;
  COMMIT;
END;
This can be further optimized in Oracle with BULK COLLECT.
I would create the INVOICE_LINE_ITEM table with default values of 0 for all items.
I would also consider these possibilities:
Is the invoice number really unique, now and in the future? It may be a good idea to add a pseudo-key based off a sequence.
Is there any importance to NULL item_no entries? Could this indicate a back order, a short shipment, or just bad data entry?
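Sketches for both of those suggestions, in SQL Server syntax since that is the target system (constraint and column names are assumptions):
-- Surrogate key, so uniqueness no longer depends on the legacy invoice number:
ALTER TABLE INVOICE ADD Invoice_Key int IDENTITY(1,1) NOT NULL;
-- Default of 0 for line-item values, as suggested above:
ALTER TABLE INVOICE_LINE_ITEM ADD CONSTRAINT DF_Item_Qty DEFAULT 0 FOR Item_Qty;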
EDIT: As you advise that you will continue to use the legacy table, you need to prioritize what you want: efficiency and performance, maintainability, or a synchronous transaction.
For example:
- if performance is not really critical, then implement this as you outlined
- if this will have to be maintained, then you might want to invest more in the coding
- if you do not require a synchronous transaction, then you could add a column to your LEGACY_TABLE_SQL called processed, with a default value of 0. Then, once a day or hour, schedule a job to get all the orders that have not been processed (sketched below).
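A sketch of that last option (SQL Server syntax; the flag column is an assumption):
-- One-time schema change: every new row starts out unprocessed.
ALTER TABLE LEGACY_TABLE_SQL ADD processed bit NOT NULL DEFAULT 0;

-- The scheduled job works through the backlog, then flags the rows:
SELECT * FROM LEGACY_TABLE_SQL WHERE processed = 0;
UPDATE LEGACY_TABLE_SQL SET processed = 1 WHERE processed = 0;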
I have a table that has an order number, cancelled date, and reason.
The Reason field is a varchar(255) column filled in by many different sales reps, which makes it really hard to group by reason category. I need to generate a report categorizing cancellation reasons. What is the best way to analyze the reasons with T-SQL?
Sample of reasons entered by sales reps:
cust already has this order going out
cust can not hold for item Called to cancel order
cust doesn't want to pay for shipping
wife ordered same item from different vendor, sent email
cst made a duplicate order, sent email
cst can't hold
Cust doesn't want to go through verification process so is cancelling order
doesn't ant to hold for Bo
doesn't want
Cust called to cancel the order He can no longer get the product he wants
cnt hld
will not comply with export req
cant' hold
Custs request
Cust will not hold for BO
per. cust. request.
BTW I have SQL Server 2005.
Part of your problem is that these aren't truly reason codes. It sounds like an issue with your schema to me. If there aren't predefined reason codes to reference and you're allowing free-text entry for each reason, then there's really no way to do this directly, outside of pulling distinct reasons back, which is probably not going to be very useful.
Just an idea: can you add another column to the table, even if it's in a temp or test environment, and then give the business users the ability to assign a code (e.g. 1 for mis-ships, 2 for duplicate orders, 3 for wrong item, etc.) to each order cancellation? Then perform the analysis on that.
I assume that's what they're expecting from you, but I don't see any better way. You could always perform the analysis yourself if you have the authority/knowledge, but this might be painful if you have a ton of cancellations.
Edit: I see now that you've tagged this with regex. It would be possible to set up specified keywords to pull out the entries, but there'd have to be some tolerance built in, and still manual analysis afterwards for items which don't fall into any specified category due to misspellings etc.
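A crude keyword pass along those lines could be a starting point (the categories and patterns are invented from the sample reasons above, and the table and column names are assumptions), with manual review of whatever lands in the catch-all bucket:
SELECT ordernumber,
       CASE
           WHEN reason LIKE '%hold%' OR reason LIKE '%hld%'
               THEN 'Will not hold / backorder'
           WHEN reason LIKE '%duplicate%' OR reason LIKE '%same item%'
               THEN 'Duplicate order'
           WHEN reason LIKE '%shipping%'
               THEN 'Shipping cost'
           WHEN reason LIKE '%request%'
               THEN 'Customer request'
           ELSE 'Uncategorized'   -- review these by hand
       END AS reason_category
FROM CancelledOrders   -- table name assumed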
+1 to @jmatthews, you really need to have reason codes that are selected, and then possibly allow free-form entry for the full reason.
If this isn't an option, you can look into text clustering. Don't expect that to be fast or easy though; it's still an open research topic and is related to both AI and machine learning.
Look at Term Lookup in SSIS; here is an article to read.