I've been working on my project about bank account transactions (withdrawal, deposit, check cashing, and balance inquiry) using "account.txt".
My TA said that I have to use a temporary file. The program will read this temporary file line by line to find what the user is looking for. However, I did not understand this temporary OPEN file at all. Could anyone explain what it is and, if possible, attach an example?
Here are the project instructions:
This project is about writing a program to perform transactions on bank accounts. You will be given a file which contains all the accounts in the bank (the file is named “account.txt”). Your program is to provide an interactive menu for users to perform transactions on these accounts. Your program needs to update the account file after each transaction. The user may perform transactions on accounts that are not available; your program needs to print an error message on the screen and return to the menu. In addition, your program needs to print whether a transaction is successful. For an unsuccessful transaction, your program will print out the reason for the failure.
Your program needs to be able to handle the following transactions:
Deposit money into an account
Withdraw money from an account
Check cashed against an account
Balance inquiry of an account
There is a limit on how many checks can be cashed against a saving account. The limit is 2 checks per month. There is a $0.25 penalty for each check cashed over the limit. If there are enough funds to cash the check but not the penalty, the transaction should go through and the resulting balance will be zero.
Here is the format in the account file for one account (data fields are separated by exactly one space):
Account type, S for saving, C for checking (1 character)
Account number of 5 digits
Last name of account holder (15 characters)
First name of account holder (15 characters)
Balance of the account in the form xxxxx.xxx
An integer field indicating how many checks have been cashed this month (three digits)
An interest rate in the form of xx.xx (e.g. 10.01 = 10.01%)
For names with fewer than 15 characters, the data will be padded to a width of 15 characters.
Here is an example of the account file:
C 12345 Smith John 100.000 10 0.00
S 45834 Doe Jane 3462.340 0 0.30
C 58978 Bond Jones 13.320 5 0.00
*Creating a temporary file
There is a way in FORTRAN to create a temporary file. Use:
OPEN(UNIT = , STATUS = "SCRATCH", ...)
There is no need to provide (FILE = ””). By using a temporary file, you can copy the accounts from the account file to the temporary file. Then, when you copy the data back from the temporary file to the account file, perform the necessary transactions. Your program should not copy accounts between these two files if a transaction is to fail.
Please forgive my English, I'm Japanese.
They are saying that with a statement such as:
OPEN (7, ACCESS = 'DIRECT', STATUS = 'SCRATCH')
you can create a temporary file--one that will only live until you close it, and will never be saved to disk. This file needs no name (it is never going to be referred to by name), just a unit number (7 in my example).
You can use this file to hold the account information temporarily during a transaction. You need this because, when you are inserting rows into the real file, you don't want to overwrite subsequent data. So they are saying (see the sketch after this list):
Copy everything to a temporary file
If the transaction succeeds, copy the data back to the main file but
Omit rows that are to be deleted
Add in the rows that are to be inserted
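For the bank project, a minimal Fortran sketch of that copy-out / copy-back idea might look like the following. Treat it strictly as a sketch under assumptions: it uses a sequential scratch file with REWIND rather than direct access, unit 10 for account.txt, unit 7 for the scratch file, and the fixed-width record layout described in the assignment; the deposit to account 12345 just stands in for whatever transaction you actually apply.
      PROGRAM SCRDEMO
C     Sketch only: copy account.txt to a scratch file, then copy it
C     back, updating one balance along the way.
      CHARACTER ATYPE
      CHARACTER*15 LNAME, FNAME
      INTEGER ACCTNO, NCHECK, IOS
      REAL BALANCE, RATE

      OPEN (UNIT = 10, FILE = 'account.txt', STATUS = 'OLD')
      OPEN (UNIT = 7, STATUS = 'SCRATCH')

C     Pass 1: copy every account record to the scratch file.
   10 READ (10, 100, IOSTAT = IOS) ATYPE, ACCTNO, LNAME, FNAME,
     +                             BALANCE, NCHECK, RATE
      IF (IOS .NE. 0) GO TO 20
      WRITE (7,100) ATYPE, ACCTNO, LNAME, FNAME, BALANCE, NCHECK, RATE
      GO TO 10

C     Pass 2: copy back, applying the transaction as we go.
   20 REWIND 7
      REWIND 10
   30 READ (7, 100, IOSTAT = IOS) ATYPE, ACCTNO, LNAME, FNAME,
     +                            BALANCE, NCHECK, RATE
      IF (IOS .NE. 0) GO TO 40
      IF (ACCTNO .EQ. 12345) BALANCE = BALANCE + 50.0
      WRITE (10,100) ATYPE, ACCTNO, LNAME, FNAME, BALANCE, NCHECK, RATE
      GO TO 30

   40 CLOSE (7)
      CLOSE (10)
  100 FORMAT (A1, 1X, I5, 1X, A15, 1X, A15, 1X, F9.3, 1X, I3, 1X, F5.2)
      END
A real solution would, of course, take the account number and amount from the menu and validate the transaction before copying anything back to account.txt.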
Does that help?
I'm currently learning ABAP and trying to make an enhancement, but I have broken down in confusion about how to build on top of existing code. I have a program that runs periodically via a background job and disables user accounts after X days of inactivity (in this case 90 days, based on USR02~TRDAT).
I want to add an enhancement that notifies users via their e-mail address (matching usr02~bname to usr21~bname, then using usr21~persnumber and usr21~addrnumber to look up adr6, which points to the user's adr6~smtp_addr, giving the usr02~bname -> adr6~smtp_addr relationship) when their last logon date is 30, 15, 7, 5, 3, and 1 day away from the 90-day inactivity threshold, with a link to the SAP system to help them reactivate the account with ease.
I'm beginning to think that an enhancement might not be a good idea, and that I should instead create a new program and schedule it as a daily background job. Any guidance or information would be greatly appreciated...
Extract
CLASS cl_inactive_users_reader DEFINITION.
  PUBLIC SECTION.
    TYPES:
      BEGIN OF ts_inactive_user,
        user_name          TYPE syst_uname,
        days_of_inactivity TYPE int1,
      END OF ts_inactive_user.
    TYPES tt_inactive_users TYPE STANDARD TABLE OF ts_inactive_user WITH EMPTY KEY.

    CLASS-METHODS read_inactive_users
      IMPORTING
        min_days_of_inactivity TYPE int1
      RETURNING
        VALUE(result)          TYPE tt_inactive_users.
ENDCLASS.
Then refactor
REPORT block_inactive_users.

DATA(inactive_users) = cl_inactive_users_reader=>read_inactive_users( 90 ).

LOOP AT inactive_users INTO DATA(inactive_user).
  " block user
ENDLOOP.
And add
REPORT warn_inactive_users.

DATA(inactive_users) = cl_inactive_users_reader=>read_inactive_users( 60 ).

LOOP AT inactive_users INTO DATA(inactive_user).
  CASE inactive_user-days_of_inactivity.
    " choose urgency
  ENDCASE.
  " send e-mail
ENDLOOP.
and run both reports daily.
Don't create a big ball of mud by squeezing new features into existing code.
From SAP wiki:
The enhancement concept allows you to add your own functionality to SAP's standard business applications without having to modify the original applications. To modify the standard SAP behavior as per customer requirements, we can use enhancement framework.
As per your description, it doesn't sound like a use case for an enhancement. It isn't an intervention in an existing process. The original process and your new requirement are two different processes with a common logical part - the selection of users by days of inactivity. The two shouldn't rely on each other.
Structurally I think it is best to have a separate program for computing which e-mails need to be sent and when, and a separate program for actually sending them.
I would copy your original program to a new one and modify it a little, so that instead of disabling a user it records the following into a table for each user: 1) the e-mail address, 2) the date on which to send, 3) how many days are left (30, 15, 7, etc.), and 4) a status indicating whether the e-mail has been sent. Initially you could even have one such job per period (30, 15, 7, etc.) and pass the period as a parameter (used inside instead of 90).
Run this program daily as a job; it populates the table with e-mail "tasks" for what needs to be sent today. It only adds new lines, so lines from yesterday stay in there.
The second program should just read that table, send the actual e-mails, and update the statuses (a rough sketch of this mailer follows the list below). You run that program daily as well.
This way you have:
overview: just check the table to see what's going on
control: if the e-mailer dies or hangs, you can restart it and it will continue where it left off; with statuses you avoid sending duplicate mails
safety: you don't send outdated e-mails, as long as your mailer program ignores all tasks older than, say, 2 days
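To make the idea concrete, here is a rough sketch of that second (mailer) program. Everything specific in it is an assumption: the custom table ZMAIL_TASK and its fields (bname, smtp_addr, send_date, days_left, status) are placeholders for whatever you create, and the BCS calls are written from memory, so check the signatures on your release.
REPORT send_user_warnings.

" Assumed custom table ZMAIL_TASK: bname, smtp_addr, send_date,
" days_left, status ('N' = new, 'S' = sent).
SELECT * FROM zmail_task
  WHERE send_date = @sy-datum
    AND status    = 'N'
  INTO TABLE @DATA(tasks).

LOOP AT tasks INTO DATA(task).
  TRY.
      DATA(send_request) = cl_bcs=>create_persistent( ).

      DATA(body) = VALUE bcsy_text(
        ( line = |Your account will be locked in { task-days_left } day(s).| )
        ( line = |Please log on to the SAP system to keep it active.| ) ).

      send_request->set_document( cl_document_bcs=>create_document(
        i_type    = 'RAW'
        i_text    = body
        i_subject = 'Your SAP account is about to be locked' ) ).

      send_request->add_recipient(
        cl_cam_address_bcs=>create_internet_address( task-smtp_addr ) ).

      send_request->send( ).

      " mark the task as done so it is never sent twice
      UPDATE zmail_task SET status = 'S'
        WHERE bname = @task-bname AND send_date = @task-send_date.
      COMMIT WORK.

    CATCH cx_bcs INTO DATA(bcs_error).
      " leave status = 'N' so the next daily run retries this task
      DATA(msg) = bcs_error->get_text( ).
      MESSAGE msg TYPE 'I'.
  ENDTRY.
ENDLOOP.
The first program is then simply your copy of the blocking report that fills ZMAIL_TASK instead of locking the user.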
I want to clear up your confusion about the use of enhancements:
You would want to use an enhancement when 'something' happens or is going to happen in the system and you want to change that standard behavior.
That something (let's call it an event or process) could be, for example, an order being placed, a certain user logging on to the system, or a material that has been or is about to be changed.
The change could be notifying another system of an order, or running additional checks on the logged-on user, for example checking their GUI version and warning them if it is not up to date.
Ask yourself: what process in the system does the execution of your program or code depend on? Does anything need to happen before the program is executed? No, only time elapsing.
Even if you had found an enhancement you wanted to use: if the process containing that enhancement did not run within 90 days, your mails would not be sent, because the enhancement would never be called.
edit: That being said, if by 'enhancement' you mean 'building on your existing program' rather than 'creating a new one', that is simply not the right terminology for an enhancement in the SAP universe.
I would extend the functionality of your existing program, since you already compute how many days are left and you would have only one job to maintain.
If you are working with access control, you must have faced the issue where the Automatic Record Permission field (with Rules) does not update itself on recalculating the record. You either have to launch full recalculation or wait for a considerable amount of time for the changes to take place.
I am facing this issue where, based on 10 different field values in the record, I have to give read/edit access to 10 different groups, respectively.
For instance:
if rule 1 is true, give edit access to the 1st group of users
if rules 1 and 2 are true, give edit access to the 1st AND 2nd groups of users.
I have selected 'No Minimum' and 'No Maximum' in the Auto RP field.
How to make the Automatic Record Permission field to update itself as quickly as possible? Am I missing something important here?
If you are working with access control, you must have faced the issue where the Automatic Record Permission field (with Rules) does not update itself on recalculating the record. You either have to launch full recalculation or wait for a considerable amount of time for the changes to take place.
Tanveer, in general, this is not a correct statement. You should not face this issue with [a] well-designed architecture (relationships between your applications) and [b] correct calculation order within the application.
Regarding the case you described, I suggest you check and review the following possibilities:
1. Calculation order. Automatic Record Permissions [ARP from here on] are treated by the Archer platform in the same way as calculated fields. This means that you can modify the calculation order in which calculated fields and automatic record permissions are updated when you save the record. So it is possible that your ARP field is calculated before certain calculated fields you use in the ARP rules. For example, let's say you have two rules in the ARP field:
if A>0 then group AAA
if B>0 then group BBB
Now, you will have a problem if the calculation order is the following:
"ARP", "A", "B"
ARP will not be updated after you click "Save" or "Apply" once, but it will be updated after you click "Save" or "Apply" a second time on the same record. With the calculation order "A", "B", "ARP" your ARP will get recalculated right away.
2. Full recalculation queue.
Since ARPs are treated as calculated fields, this means that every time an ARP needs to be updated, recalculation job(s) are created on the application server on the back end. If for some reason the recalculation queue is full, the record permission will not get updated right away. The job engine recalculation queue can be full if you have a data feed running or if a massive amount of recalculations has been triggered by manual data imports. The recalculation job related to the ARP update will still be created and added to the queue, and it will be processed based on the priorities defined for the job queue. You can monitor the job queue and alter the default processing priorities in Archer v5.5 via the Archer Control Panel interface. I suggest you check the state of the job queue the next time you see delays in ARP recalculations.
3. "Avalanche" of recalculations
It is important to design relationships and security inheritance between your applications so recalculation impact is minimal.
For example, let's say we have a Contacts application and a Department application:
- A record in the Contacts application inherits access from the Department record using an Inherited Record Permission.
- The Department record has an automatic record permission, and the Contacts record inherits it.
- Now the best part: Department D1 has 60,000 Contacts records linked to it, and Department D2 has 30,000.
The problem you described is reproducible in this configuration. I go to Department record D1 and update it in a way that forces the ARP in the department record to recalculate. This adds 60,000 jobs to the job engine queue to recalculate the 60k Contacts linked to the D1 record. Without waiting, I then go to D2 and make a change forcing the ARP in that record to recalculate. After I save record D2, a new job to recalculate D2 and the other 30,000 Contacts records is created in the job engine queue. But record D2 will not be recalculated instantly, because the first set of 60k records has not been recalculated yet and the recalculation of the D2 record is still sitting in the queue.
Unfortunately, there is not a good solution available at this point. However, this is what you can do:
- review and minimize inheritance
- review and minimize relationships where one record references 1000+ records.
- modify the architecture: break inheritance and relationships and replace them with Archer-to-Archer data feeds if possible.
- add more "recalculation" power to your application server(s). You can configure your web servers to process recalculation jobs as well if they are not already utilized beyond a certain point. Add more job slots.
Tanveer, I hope this helps. Good luck!
I am facing a lock table overflow issue. Below is the error it displays, and as soon as it appears, the code crashes:
Lock table overflow, increase -L on server (915)
I have checked the error number, and it says we need to modify that -L value before the server starts; it has been set to 500 by default. But I don't imagine I have been given the privilege to change that value, since I am not a database administrator at the company.
What I was trying to do was wipe out roughly 11k member records along with all their linked table records (more than 25 tables are linked to each member record), while backing each table up into a separate file. Roughly, it acquires an EXCLUSIVE-LOCK when entering the member FOR EACH loop, as below:
for each member exclusive-lock:
    /*
       Then find each linked record, in order.
       Extract them.
       Delete them.
    */
    /* Finally, extract the member. */
    delete member.
end.
When it hits a certain number of member records, the program crashes. So I had to run it in batches, like this:
for each member exclusive-lock:
    /* Increment a member count; when count = 1k, RETURN. */
    /*
       Then find each linked record, in order.
       Extract them.
       Delete them.
    */
    /* Finally, extract the member. */
    delete member.
end.
So I've literally ended up running the same code more than 11 times to get the work done. I hope someone has come across this issue before; it would be a great help if you could share a long-term solution rather than my temporary one.
You need a lock for each record that is part of a transaction. Otherwise other users could make conflicting changes before your transaction commits.
In your code you have a transaction that is scoped to the outer FOR EACH. Thus you need 1 lock for the "member" record and another lock for each linked record associated with that member.
(Since you are not showing real code it is also possible that your actual code has a transaction scope that is even broader...)
The lock table must be large enough to hold all of these locks. The lock table is also shared by all users -- so not only must it hold your locks but there has to be room for whatever other people are doing as well.
FWIW -- 500 is very, very low. The default is 8192. There are two startup parameters using the letter "l". Upper case -L is the lock table size, and it is a server startup parameter. Lower case -l is the "local buffer size", a client parameter (it controls how much memory is available for local variables).
"Batching", as you have sort of done, is the typical way to ensure that no one process uses too many locks. But if your -L is really only 500 a batch size of 1,000 makes no sense. 100 is more typical.
A better way to batch:
define variable b as integer no-undo.

define buffer delete_member     for member.
define buffer delete_memberLink for memberLink.  /* for clarity I'll just do a single linked table... */

for each member no-lock:  /* do NOT get a lock */

  batch_loop: do for delete_member, delete_memberLink while true transaction:

    b = 0.

    for each delete_memberLink exclusive-lock where delete_memberLink.id = member.id:
      b = b + 1.
      delete delete_memberLink.
      if b >= 100 then next batch_loop.  /* commit this batch and start a new transaction */
    end.

    /* no linked records left for this member -- now remove the member itself */
    find delete_member exclusive-lock where recid( delete_member ) = recid( member ).
    delete delete_member.

    leave batch_loop.  /* this will only happen if we did NOT execute the NEXT */

  end.

end.
You could also increase your -L database startup parameter to take into account your one-off query/delete.
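If the DBA goes that route, the lock table size is set when the broker starts. A minimal sketch (the database path and the value are placeholders, not recommendations):
# start the database broker with a larger lock table
proserve /db/mydb -L 16384
The same -L can also live in the server's parameter (.pf) file instead of on the command line.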
I have a .RDL report which I designed in BIDS and have deployed to my report server. The report asks for three parameters before viewing the report: Year, Month and Customer ID. The report works great and does exactly what it is supposed to.
While I used to run each report individually because there were 2-3 customers, there are now 30+ customers who receive the report, so I wanted to switch to a more automated fulfillment method. After doing some research, it appears that using Report Manager to create a "Data Driven Subscription" (DDS) with the "Windows File Share" option gives me the capabilities I need.
As part of creating the DDS, I created a table called [Subscription], containing one row for each customer receiving the report, with the following columns:
Year
Month
CustomerID
FileName
FileLocation
Overwrite
Format
...so, using the DDS Wizard in Report Manager, I was able to successfully set up a Data Driven Subscription (linked to various columns in the [Subscription] table) that creates a new report for each customer in the [Subscription] table, saves it [overwriting, if necessary] as a PDF in a location of my choosing (specified in [Subscription].[FileLocation], i.e. the FileLocation column of my table, for each row), and runs every minute (I plan on changing the frequency to once a week, eventually).
This works flawlessly, giving me a new set of 30 reports in the directory of my choosing, with each report having a name I assigned in the FileName column of my table. Exactly what I was looking for.
HERE'S THE PROBLEM: When I update the FileLocation or FileName (or anything, really) in the [Subscription] table, it doesn't pick up the changes right away, and sometimes it doesn't pick them up at all. For example, I updated the [ReportName] column for one customer from Report_711622 to SpecialReport_711622, so that the output file for that customer should be named SpecialReport_711622 while all of the other reports should be called Report_XXXXX [no Special prefix]. But the file name of the report for customer 711622 remains the same!
It's almost like the job only sees what it needs to do once a day, and then does not go back and reference the [Subscription] table until after I leave for the night; when I come back in the morning it has picked up the change.
Since I am about to scale this process out to a large customer base using a different report, I need to be able to make edits to the [Subscription] table and have them picked up by the Data Driven Subscription immediately (and if not immediately, then at a fixed interval that I can adjust, so that I know 100% when the change will be picked up).
Does anyone know what's causing the lag? How do I change things so that updates to the [Subscription] table get picked up regularly? I'm also having issues creating new DDSs on other reports (following the exact process outlined above): I've created the subscriptions to run every minute, and it says they are running, the number of outputs matches the number of customers, and there are 0 errors, but there are no files in the drive I specified (or anywhere else I've looked, for that matter).
Any help would be greatly appreciated!
I think the answer lies in the mechanism SSRS uses. There are a few places "lag" can occur.
The subscription is in fact an SQL Agent job which creates a record in the Event table. This table is a queue that SSRS checks to do scheduled tasks.
There is a small amount of time between the moment the subscription creates the Event record and the moment SQL reads it and starts creating the dataset for your DDS. The creation of the DDS dataset takes some time, too. During this time, the subscription will be in the Pending state. If you change anything in the data during this time, the subscription will still use the old data as report parameters, so obviously you will not notice your change until the next scheduled run.
Which brings me to the following: if a subscription is still running when the next schedule kicks in (chances are it is, because yours runs every minute), the engine will not execute it but will wait for the next subscription schedule, and so on. So that's another possible source of lag, and a cause of missing reports for a given scheduled minute. The subscription processes reports sequentially, one row from your DDS recordset at a time. Again, this takes some time. You can also see that in the subscription window when it says "# of # processed".
I suggest you look at the Event table in the ReportServer database during an execution. The ExecutionHistory views (there are 3) may also be interesting. A scheduled run shows up with RequestType = 1 and generates one record for each report. You can see the exact timing and parameters of each report that is run by the subscription. You may be able to extract the data you need to resolve your other issues.
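For example, something along these lines is what I run to peek at the queue and at recent subscription runs. Treat it as a sketch: on the instances I've seen, the history views are named ExecutionLog, ExecutionLog2 and ExecutionLog3, and the columns below come from the ExecutionLog view, so verify the names against your own ReportServer database first.
-- What is currently waiting in the SSRS queue (pending subscription events)?
SELECT EventType, EventData, TimeEntered
FROM   ReportServer.dbo.[Event]
ORDER  BY TimeEntered;

-- Recent subscription-driven executions, with the parameters each report ran with
-- (RequestType = 1 means the run came from a subscription, not an interactive user).
SELECT TimeStart, TimeEnd, Status, Parameters
FROM   ReportServer.dbo.ExecutionLog
WHERE  RequestType = 1
ORDER  BY TimeStart DESC;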
EDIT: Here is a more elaborate guide to DDS data and events
http://blogs.msdn.com/b/deanka/archive/2009/01/13/diagnosing-and-troubleshooting-subscriptions.aspx
http://blogs.msdn.com/b/deanka/archive/2010/02/16/troubleshooting-subscriptions-part-ii-using-the-report-services-trace-log-file.aspx
Could this "Double-Hop" problem be the source of my issues? I'm so stuck on this one!
The Double-Hop Problem - MSDN Knowledgecast
I have 3 files. The customer file has customers who never ordered or had an invoice. We want to remove those customers from the customer file. I have 2 RPG programs, one for each of the other files (orders, invoices). They create 2 temporary outfiles that contain the records we want to purge.
I want to merge these 2 files. There are duplicates in this sense:
Customer number Suffix
123456 000
123456 001
123456 002
567890 000
A suffix can be present if the customer contacted us a second time, etc.
So both outfiles can have these dupes.
I would like to have a final file that only has the customer number.
But I want to do this automatically, in a CL program.
Can this be done in CL, rather than with ad hoc SQL?
Generally speaking, CL is not a database language. Put the ad hoc SQL in a source member and execute it with the CL command RUNSQLSTM. For more dynamic SQL inside a CL program, use RUNSQL.
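As a rough sketch of the RUNSQL route (the library, file, and field names are placeholders I made up: say the two purge outfiles are ORDPURGE and INVPURGE in MYLIB and the customer number field is CUSTNO; UNION already throws away the duplicates):
PGM
  /* Build a one-column file of distinct customer numbers from  */
  /* both purge outfiles; UNION removes the duplicate rows.     */
  RUNSQL     SQL('CREATE TABLE MYLIB/PURGECUST AS (     +
                    SELECT CUSTNO FROM MYLIB/ORDPURGE   +
                    UNION                               +
                    SELECT CUSTNO FROM MYLIB/INVPURGE   +
                  ) WITH DATA')                         +
             COMMIT(*NONE) NAMING(*SYS)
ENDPGM
If the statement grows beyond a one-liner, put it in a source member and run it with RUNSQLSTM instead, as noted above.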
If I recall correctly, this application is creating some archive files, and this is the final step. When you create the archive files, it would be easy to also create the 'duplicates' file. I'd consider that a better route, because you can more easily create a report, spreadsheet, web page, or some other record of the customers you are about to purge.