How to test Yodlee account refresh using DAG? - yodlee

It seems that data in DAG accounts is static and doesn't change.
How can I test account refresh without using real financial accounts, with DAG only?

You must upload your own transactions XML. It can be painful and tedious.
From the DAG site, find a site for the appropriate container you are interested in testing and download its file. In your favorite XML editor you can add or edit transactions and then upload the file again (via Edit).
A fair warning: the data that results from your upload may not necessarily prepare you for real-world transaction data. Also, if you stray from the undocumented transaction format, there is no warning or error message; you will simply end up with empty transactions or whatever was previously uploaded.
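If hand-editing the XML feels risky, one way to stay inside the undocumented format is to clone an existing transaction element and change only its values. A rough Python sketch; the element names "transactions" and "transaction" and the fields being overwritten are guesses standing in for whatever the downloaded file actually contains, so inspect the real file and adjust:

import copy
import xml.etree.ElementTree as ET

tree = ET.parse("downloaded_container.xml")   # the file downloaded from the DAG site
root = tree.getroot()

# Element names below are placeholders; open the real file and adjust them.
transactions = root.find(".//transactions")
template = root.find(".//transaction")

if transactions is not None and template is not None:
    clone = copy.deepcopy(template)           # keeps every undocumented field intact
    # Overwrite only the values you care about
    new_values = {"amount": "42.00", "description": "TEST COFFEE", "postDate": "2015-01-15"}
    for child in clone:
        if child.tag in new_values:
            child.text = new_values[child.tag]
    transactions.append(clone)
    tree.write("edited_container.xml")        # upload this file back via Edit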

Related

Automating web page population

I have data in a csv file & want to do the following with it:
Log into web site
Populate field of the page with the csv data
Navigate to next page
Input the rest of data
Click submit
Repeat for next line
I can do this using UiPath but it's an expensive option for a relatively simple use case.
Anyone have any suggestions on how to do this using a different method?
Thanks,
EddieT
If you're looking for alternatives, you probably want to investigate APIs or webhooks, but that all depends on the access rights you have for that particular website.
Try messaging the developers of the website you need, as they might already have this service available.
UiPath may appear expensive, but if you calculate the amount of time saved on this one process, you will see the money savings too.
If you can find a couple of other processes you want to automate then I'd highly recommend it.
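If the site's developers do turn out to expose an API, the whole flow often collapses into a short script that posts one row at a time. A minimal sketch in Python, where the endpoint URL, the authentication header and the field names are all hypothetical stand-ins for whatever the real API documents:

import csv
import requests

API_URL = "https://example.com/api/records"   # hypothetical endpoint
API_KEY = "your-api-key"                      # however that site authenticates you

with open("data.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Map CSV columns to the fields the API expects (names here are made up)
        payload = {
            "name": row["Name"],
            "email": row["Email"],
            "notes": row["Notes"],
        }
        resp = requests.post(API_URL, json=payload,
                             headers={"Authorization": "Bearer " + API_KEY})
        resp.raise_for_status()   # stop on the first failed row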

Best way to export data from other company's SAP

I need to extract some data from my client's SAP ECC (SUIM -> Users by Complex Selection Criteria, program RSUSR002).
Normally I give them a table of values, and they have to fill in some fields to extract what I need.
They have to run 63 different extractions from their SAP (with different object values, for example, but within the same transaction, as you can see in the screenshot), and then send me all of the extracted files.
Do you know if there is an automated way to extract that, so they don't have to run 63 extractions?
My biggest problem is that they make mistakes every time; there are a lot of fields to fill in.
Can I create a variant and send it to them? Is it possible to export my variant so they can import it, without them needing to fill in the data 63 times?
Thank you.
When a task takes considerable effort by multiple people each year, it might be worth automating.
First you need to find out where that transaction gets its data from. If you spend some time analyzing and debugging the program behind the transaction, you will surely find which SELECTs on which database tables provide that data. If you are lucky, there might even be a function module for it.
Then you just need to write your own ABAP program which performs the same selections.
Now about the interesting part: How to get that data to you. There are several approaches here. The best one depends on your requirements and your technical infrastructure. Some possibilities are:
Let users run the program in foreground, use the method cl_gui_frontend_services=>gui_download to save the data to a file on the user's PC and ask them to send it to you via email
Run the program in background and save the file on the application server. Then ask your sysadmins how to get that file from their application server to you. The simplest way would be to just map a network fileserver so they all write to the same place, but there might be some organizational hurdles in the way which prevent that. (Our security people would call me crazy if I proposed to allow access to SMB shares from outside of our network, but your mileage may vary)
Have the program send the data to you directly via email. You can send emails from an SAP system using the function module SO_NEW_DOCUMENT_ATT_SEND_API1. This of course requires that the system was configured to be able to send emails (which you can do with transaction code SCOT). Again, security considerations apply. When it's PII or other confidential data, then you should not send it in an unencrypted email.
Use an RFC call to send the data to your own SAP system which aggregates the data
Use a webservice call to send the data to your own non-SAP system which aggregates the data
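For the webservice option, the receiving, non-SAP side can be very small. A sketch in Python/Flask, assuming the ABAP program POSTs each extract as a plain file upload to a hypothetical /upload endpoint (the endpoint name, form field and complete lack of authentication are illustrative only; add auth and filename validation before using anything like this):

from pathlib import Path
from flask import Flask, request, abort

app = Flask(__name__)
INBOX = Path("incoming")              # where the 63 extracts get collected
INBOX.mkdir(exist_ok=True)

@app.route("/upload", methods=["POST"])
def upload():
    # Expect exactly one uploaded file per request
    if "file" not in request.files:
        abort(400, "missing file")
    f = request.files["file"]
    f.save(str(INBOX / f.filename))   # in real use: sanitize the name, authenticate the caller
    return "ok", 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)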
You can create a recording in transaction SM35.
There you enter a tcode (SUIM), start recording, make some input in transaction SUIM and then press 'Execute'. Then you go back to the recording (F3 multiple times) and the system generates a table of commands (the structure is BDCDATA). You can delete the unnecessary parts (e.g. a BACK button click) and save it to use as a 'macro'. You can then replay this recording and it will do exactly what you did.
It's also possible to export/import the recording to a text file, so you can explore its structure, write some VBA script (or any script; see the sketch below) to create such a recording from your parameters, and send it to the users. But keep in mind that blanks are meaningful.
It's a standard tool, so there is no custom coding in the system.
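The same idea works in Python as well as VBA. A sketch, assuming you have exported one recording to recording_template.txt and replaced the values that vary with placeholders like {OBJECT1} (the placeholder names and the parameters.csv columns are invented for illustration; the substitution deliberately leaves all other whitespace untouched, since blanks are meaningful):

import csv
from pathlib import Path

# Exported SM35 recording with {PLACEHOLDERS} where the 63 runs differ
template = Path("recording_template.txt").read_text()
out_dir = Path("recordings")
out_dir.mkdir(exist_ok=True)

with open("parameters.csv", newline="") as f:
    # one row per extraction, columns named after the placeholders in the template
    for i, row in enumerate(csv.DictReader(f), start=1):
        filled = template
        for name, value in row.items():
            filled = filled.replace("{" + name + "}", value)
        (out_dir / "recording_{:02d}.txt".format(i)).write_text(filled)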
You can save the selection as a variant.
Fill in the selection criteria and press Save.
It can be reused.
You can also transport variants if they have a special name.

CloudTrail RunInstances event, who actually provisioned EC2 instance when STS AssumeRole used?

My client is in need of an AWS spring cleaning!
Before we can terminate EC2 instances, we need to find out who provisioned them and ask whether they are still using the instance before we delete it. AWS doesn't seem to provide out-of-the-box features for reporting who the 'owner'/'provisioner' of an EC2 instance is; as I understand it, I need to parse through gobs of archived, zipped log files residing in S3.
Problem is, their automation is making use of STS AssumeRole to provision instances. This means the RunInstances event in the logs doesn't trace back to an actual user (correct me if I'm wrong; please, I hope I am wrong).
An AWS blog post tells the story of a fictional character, Alice, and her steps tracing a TerminateInstances event back to a user, which involves two log events: the TerminateInstances event itself and an AssumeRole event "somewhere around the time" of it containing the actual user details. Is there a pragmatic approach one can take to correlate these two events?
Here's my POC that parses through a CloudTrail log from S3:
import boto3
import gzip
import json

# Download a single CloudTrail log file from S3 (profile, bucket and key are placeholders)
boto3.setup_default_session(profile_name="<your_profile_name>")
s3 = boto3.resource('s3')
s3.Bucket("<your_bucket_name>").download_file("<S3_path>", "test.json.gz")

# CloudTrail logs are gzipped JSON with a top-level "Records" list
with gzip.open('test.json.gz', 'rt') as fin:
    json_data = json.load(fin)

for record in json_data['Records']:
    if record['eventName'] == "RunInstances":
        # userName is absent for assumed-role identities, hence .get()
        user = record['userIdentity'].get('userName')
        principalid = record['userIdentity']['principalId']
        for instance in record['responseElements']['instancesSet']['items']:
            print("instance id: " + instance['instanceId'])
            print("user name: " + str(user))
            print("principalid " + principalid)
However, the details are generic since these roles are shared by many groups. How can I find details of the user before they Assumed Role in a script?
UPDATE: I did some research and it looks like I can correlate the RunInstances event to an AssumeRole event by a shared 'accessKeyId', and that should show me the account name before it assumed the role. Tricky though: not all RunInstances events contain this accessKeyId, for example if 'invokedBy' was an autoscaling event.
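Here is a rough sketch of what that correlation could look like over the same parsed Records from the POC above (in practice you'd scan many log files, and this only works when the issuing AssumeRole event is in the data you've loaded; the responseElements.credentials.accessKeyId and userIdentity fields follow CloudTrail's event structure):

# Map each temporary access key to the AssumeRole event that issued it.
assume_role_by_key = {}
for record in json_data['Records']:
    if record['eventName'] == 'AssumeRole':
        creds = (record.get('responseElements') or {}).get('credentials') or {}
        if 'accessKeyId' in creds:
            assume_role_by_key[creds['accessKeyId']] = record

# For each RunInstances call made with temporary credentials, look up who assumed the role.
for record in json_data['Records']:
    if record['eventName'] == 'RunInstances':
        temp_key = record['userIdentity'].get('accessKeyId')
        origin = assume_role_by_key.get(temp_key)
        if origin is not None:
            print("RunInstances at " + record['eventTime'] +
                  " traces back to " + str(origin['userIdentity'].get('arn')))
        else:
            # e.g. invokedBy was autoscaling, or the AssumeRole event is in another log file
            print("RunInstances at " + record['eventTime'] + " has no matching AssumeRole here")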
Direct answer:
For the solution you are proposing, you are unfortunately out of luck. Take a look at http://docs.aws.amazon.com/IAM/latest/UserGuide/cloudtrail-integration.html#w28aac22b9b4b7b3b1; in the fourth row it says that AssumeRole records only the role identity for all subsequent calls.
I'd contact AWS support to make sure of this, as I might very well be mistaken.
What I would do in your case:
First, wait a couple of days in case someone has a better idea, or in case I am mistaken and AWS support answers with an out-of-the-box solution. Then:
1. Create an AWS Config rule that deletes all instances that have a certain tag. Then tell your developers to tag all instances that they are sure should be deleted, and those will get deleted.
2. Tag all the production instances, and the still-needed development instances, with a tag of their own.
3. Run a script that tags all of the remaining untagged instances with a separate tag (a small boto3 sketch of this step follows below). Double and triple check these instances.
4. Back up and turn off the instances tagged in step 3 (without deleting them).
5. If someone complains about something not being on, that means they missed an instance in step 1 or 2. Tag that instance correctly and turn it on again.
6. After a while (a week or so), delete the instances that are still stopped (keep the backups).
7. After a couple of months, delete the backups that were not restored.
Note that this isn't foolproof, as there is the possibility of human error and downtime, so double and triple check, make a clone of the same environment and test on that (if you have a development environment that already has such a configuration, that would be the best scenario), take it slow so you can monitor everything, and be sure to keep backups of everything.
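As an illustration of step 3 above, a minimal boto3 sketch that finds every instance with no tags at all and marks it for review (the tag key and value are placeholders; double and triple check the resulting list before acting on it):

import boto3

ec2 = boto3.client("ec2")

# Collect every instance that has no tags at all
untagged = []
for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            if not instance.get("Tags"):
                untagged.append(instance["InstanceId"])

if untagged:
    # Mark them for review rather than touching them directly
    # (batch the create_tags call if you have a very large number of instances)
    ec2.create_tags(Resources=untagged,
                    Tags=[{"Key": "cleanup-status", "Value": "needs-review"}])
print("tagged for review:", untagged)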
Good luck, and please tell me what your solution ended up being.
General guidelines for the future:
Note: the following points are very opinionated and are general rules that I abide by, as I find they save me a load of trouble from time to time. Read them, dismiss what you find unfit for you, and take the things that you find reasonable.
Don't use assume-role that often, as it obfuscates user access. If it's a script run on the developer's PC, let it run with their own username. If it's running on a server, keep it with the role it was created with. The amount of management will be less that way, since you cut out the middleman (the assume-role) and no longer need to create roles; just assign the permissions to the correct group/user. See below for when I'd consider using assume-role a necessity.
Automate deletions. The first thing you should create is automation for keeping the AWS account as clean as possible, as this saves both money and debugging pain. Tags, and scripts that act on those tags, are very powerful tools. So if a developer needs an instance for a day to try out something new, they can create a tag that times the instance out, and a script cleans it up when the time comes. These are project-specific, and not everyone needs all of them, so assess what you need for your project and act on it.
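As a hedged sketch of the "instance that times itself out" idea: the developer tags an instance with an expires-at date (the tag name and date format are conventions invented for this example, not anything AWS defines), and a scheduled script stops whatever is past due, stopping rather than terminating to stay on the safe side:

import boto3
from datetime import datetime, timezone

ec2 = boto3.client("ec2")
now = datetime.now(timezone.utc)

expired = []
pages = ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "tag-key", "Values": ["expires-at"]},
             {"Name": "instance-state-name", "Values": ["running"]}])
for page in pages:
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            # expires-at is expected as an ISO date, e.g. "2018-06-01" (our convention)
            expiry = datetime.fromisoformat(tags["expires-at"]).replace(tzinfo=timezone.utc)
            if expiry < now:
                expired.append(instance["InstanceId"])

if expired:
    ec2.stop_instances(InstanceIds=expired)
    print("stopped expired instances:", expired)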
What I'd recommend is giving the permissions to the users themselves in the development environment, as it makes tracking things to their root, and finding the most knowledgeable person to solve a problem, easier. As for the production environment, everything should be automated anyway (creation when needed and deletion when no longer needed), and no one should have any write access to that account, ever.
As for assume-role, I only use it when I want to give read-only access to production logs on another account. Another case would be something that really shouldn't be happening often, if at all, but that you still need to give some users access to. So, as an extra layer of protection against the 'I did it by mistake', I make them switch role to do it, and never have a script that automatically switches roles and performs the action, in an attempt to make it as deliberate as possible (think deleting a database and such). Another case would be accessing sensitive information (a credit-card database, etc.). Many more scenarios can occur, and here it comes down to your judgement.
Again, Good Luck.

Logging the last time user signed in Node.js

I need to log the last time a user signed in using my Node.js server, and I am weighing the following options. The persistence requirement is not very high, meaning there is some tolerance for this record occasionally not being written.
Use a SQL DB, and whenever the user logs in, update their profile record.
Record it in a server text file. Whenever the user logs in, this file will be opened and updated. The opening, recording and closing of the file will all be done asynchronously.
I'm thinking that the second option is the better one, because I'm using SQL for many other operations and I prefer to interrupt my DB as little as possible.
One concern I have with the second option is the performance hit on the server caused by frequently reading and writing a local text file.
I'm curious what other people who have gone down this path think about my reasoning. Any opinions or tips are highly welcome. Thank you.
Normally you should use a SQL database; it is a much better approach than plain text.
The main problem with a text file is that, when a user logs in, you can simply append a line (but what about a couple of users logging in at the same moment? You have no guarantee that all the accesses get logged), whereas when you want to extract the last login for a user, you have to read (and therefore load) the whole file from the start (or the end), which can be a far worse problem than accessing the DB.
Naturally you can work around all the problems of a text file, but then you will have written a lot of code to avoid a simple update query.
I don't think that, with the information you give, you should be worried about the performance of a database access in this case.
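To make the comparison concrete, the database route really is a single statement per login. A sketch using Python and SQLite purely for illustration (the table and column names are made up, and the same one-line UPDATE applies from Node.js with whichever SQL driver you already use):

import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("app.db")
conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, last_login TEXT)")

def record_login(user_id):
    # One UPDATE per login; reading it back later is a single SELECT, not a file scan
    conn.execute("UPDATE users SET last_login = ? WHERE id = ?",
                 (datetime.now(timezone.utc).isoformat(), user_id))
    conn.commit()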

Cache data in SQL CE database

Background
I have an SQL CE database that is constantly updated (every second).
I have a (web) application that allows a user to look at the data in real-time. At some point a user can click "take a snapshot" button, and it will open the snapshot in a different window.
And then on that form, there are "print" and "download" buttons that will either generate a page for printing or stream the data as a CSV file - but the same data snapshot has to be used, i.e. I can't go back to the DB to get the latest data for that.
Details
The SQL CE database is exposed through a WCF web service.
A snapshot consists of up to 500 records, 10 columns each. An expiration time of 2 hours on the snapshot is sufficient.
It is a low-traffic application, so I don't expect more than a few (5) connections at the same time.
Losing a snapshot is not a big deal; the user can simply generate a new one.
The database is accessed by a self-hosted WCF web service using LINQ to SQL.
The web site is ASP.NET MVC hosted on UltiDev Cassini.
The database and web site will most likely be on the same box when deployed. The entire app is intranet-bound.
Problem
I need to cache the snapshot of the data at the moment user pressed "take a snapshot" button, so that I can use same data to generate print page, or generate a file for download.
Solution 1:
Each time there is a need to generate a snapshot, I will create a table in the database. Since there are no temp tables in SQL CE, I will need to clean it up myself.
Solution 2:
Cache the snapshot in-memory on either DB server, or web server.
Question:
Is there anything wrong with proposed solutions? Any different solution suggestions?
One consideration is the typical usage pattern: do most snapshots eventually end up being printed, exported, or both?
If that is the case, we might as well "get it in memory" (temporarily) in the form of a non-blocking (asynchronous) select statement from the device to the server. That way the data will "be there", or well on its way, when the user decides to use it.
If, on the other hand, many snapshots end up not being used, Solution #1 seems quite OK (maybe the table could be named after the account/user, hence guaranteeing "self clean-up" based on the number of snapshots a user can maintain at a given time - though it seems to be just one, even with a tolerance for losing it sometimes).
500 rows by 10 columns isn't really very large at all. For the sake of simplicity in this case, I might just generate the CSV data at the same time I generate the initial snapshot page, and then place the CSV data in a hidden field in the snapshot page. The "Print" and "Download CSV" buttons would then POST the form that contains the CSV data to a Print page that generates the printable version from the posted CSV data, or a page that streams the CSV directly back to the client's browser, respectively. This way, at least, you wouldn't have any clean-up issues to deal with, and you avoid having to cache something on the server (either in the cache proper or in the database) that might well end up never being used at all.
If you cached the CSV data in a hidden field client-side, you could even handle both the printing and the CSV display completely client-side with javascript, although I don't know if that's worth the trouble or not.