How to test a Mule application flow?

I have been assigned the following task regarding a Mule application flow that is currently in production:
Store the client IP that is using the web service
Implement a control that limits each IP to ten requests to the website per day
I know core Java and SQL, but I have no background in Mule. All the people I can ask are in the same situation.
Once I got the app package (the one that is currently in production) up and running, I stopped it and added the following elements to the flow:
In a subflow with some initial tasks, I added a database element to store the IP of the computer that is using the web service (user_request is a table I have just created in the DB; it stores the IP and date of connection):
insert into user_request values
(#[MULE_REMOTE_CLIENT_ADDRESS], #[function:datestamp:dd-MM-yy])
To ask the website, a database element performs a select query that provides a choice with some inputs. Depending on the value of those inputs, the request to the website is made or not:
Database (Select) --> Choice --> Ask the website or not, depending on the select output
So there I have added, to the database element that performs the select, an additional output that is a count of the user_request table for the current IP and current day, so it can provide the choice with the original inputs as usual plus this extra one (I am copying only the subquery I added):
SELECT COUNT(*) as TRIES FROM USER_REQUEST
WHERE IP_ADDRESS=#[MULE_REMOTE_CLIENT_ADDRESS]
AND REQUEST_DATE=#[function:datestamp:dd-MM-yy]
In the choice, I have added this condition to the path that finally asks the website:
#[payload.get(0).TRIES < 10]
At this point the app runs and gives no errors, but I don't know how to test it. Where does the flow start? How can I test it as if I were the user?
Additionally, if you see anything wrong in the syntax I used above, I would appreciate it if you told me.
Thanks in advance!!!

MUnit will require you to learn the basics of this process first, but it is the primary testing tool for Mule. With it, you create a test suite that executes your flows and verifies that, given known inputs, the correct processing occurs in a repeatable manner. In a test you can mock critical calls, such as a write to your DB, so that the processor is exercised but the write is not actually performed and your DB table is not modified. Likewise, for reads from the DB you can either make a real call to get known data, or return mocked test data to exercise all paths in the flow.
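For orientation, here is a rough sketch of what such a test can look like using the MUnit Java API (MUnit tests are more commonly written in XML under src/test/munit). The flow name checkIpFlow and the config file name are made up, and the class and method names below follow the MUnit 1.x Java API, so verify them against your MUnit/Mule version:

import org.junit.Test;
import org.mule.api.MuleEvent;
import org.mule.munit.runner.functional.FunctionalMunitSuite;

public class IpLimitFlowTest extends FunctionalMunitSuite {

    @Override
    protected String getConfigResources() {
        // The Mule configuration file(s) of the application under test (name assumed).
        return "my-app-config.xml";
    }

    @Test
    public void asksWebsiteWhenUnderTenTries() throws Exception {
        // Mock the db:insert so the test never writes to USER_REQUEST.
        whenMessageProcessor("insert").ofNamespace("db")
            .thenReturn(muleMessageWithPayload(1));

        // Mock the db:select so the choice sees a known TRIES value (5 < 10).
        whenMessageProcessor("select").ofNamespace("db")
            .thenReturn(muleMessageWithPayload(
                java.util.Collections.singletonList(
                    java.util.Collections.singletonMap("TRIES", 5))));

        // Run the flow with a dummy payload and inspect the outcome.
        MuleEvent result = runFlow("checkIpFlow", testEvent("dummy payload"));
        // Assert on result.getMessage() here, according to what the flow should return.
    }
}

Running this as a plain JUnit test should start an embedded Mule with your configuration, so the real flow is exercised without ever touching the real database.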

Related

Create an Automated Null Alert for our Data

I want to create an automated alert for our data, for things such as nulls, unequal amounts, etc. It should be sent to our email. The data is stored in BigQuery. How do I create this?
It should automatically send an email if the database has null values, empty values, or unequal amounts.
e.g.
Look!
Table invoice has 0 rows detected. You can check the XX and XY tables.
or, for the advanced version:
Look!
Table invoice has 0 rows detected. You can check the XX and XY tables. This is caused by XYZ.
The easiest way to fulfill this request is to develop a custom solution on top of App Engine or Cloud Functions, so that you can run your own queries and trigger email notifications.
This approach will not alert you in real time, but on a schedule.
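For example, a scheduled job (Cloud Scheduler triggering a Cloud Function, or an App Engine cron) could run a check along these lines with the BigQuery Java client library (google-cloud-bigquery); the project, dataset, table and column names are made up, and the actual email sending is left as a placeholder:

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.TableResult;

public class NullCheckAlert {
    public static void main(String[] args) throws Exception {
        // Hypothetical table/column names: count rows where a required field is NULL.
        String sql = "SELECT COUNT(*) AS bad_rows "
                   + "FROM `my_project.my_dataset.invoice` WHERE amount IS NULL";

        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
        TableResult result = bigquery.query(QueryJobConfiguration.newBuilder(sql).build());

        long badRows = result.iterateAll().iterator().next().get("bad_rows").getLongValue();
        if (badRows > 0) {
            // Replace this with your notification of choice (SMTP, SendGrid, Pub/Sub, ...).
            System.out.println("Look! Table invoice has " + badRows
                + " NULL rows detected. You can check the XX and XY tables.");
        }
    }
}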
The other option, if your use case is related to INSERT statements and real-time analysis, is to take advantage of logging alerts.
That will trigger an alert based on INSERT events, so you need to build your own query and set up the alert accordingly.
You can start with something like this:
resource.type="bigquery_resource"
protoPayload.methodName="jobservice.insert"
If you don't want to create a custom solution, it sounds like the Great Expectations framework could be a good fit for you.
This is a system for defining properties you expect of your data. It can be connected to BigQuery. It can be configured to send you emails if your tests ('expectations') fail.
You would still need to host the system yourself (they have a cloud solution coming in 2023).
If you want to get started fast and don't have easy access to hosting, you could also consider DBT (data build tool). DBT can connect to BigQuery and allows you to run tests (though this is not its main purpose). These tests can include checks for null data. You can write custom tests for more complex checks.
I mention it because they already have a paid-for cloud solution, DBT Cloud, and yes, you can send email notifications if your tests fail. Depending on your circumstances, this could be the fastest way to get what you want.

Best way to export data from another company's SAP

I need to extract some data from my client's SAP ECC (SUIM -> Users by Complex Selection Criteria, program RSUSR002).
Normally I give them a table of values, and they have to fill in some fields to extract what I need.
They have to make 63 different extractions from their SAP (with different values of objects, for example, but inside the same transaction, as you can see in the screenshot), and later send me all the extracted files.
Do you know if there is an automated way to extract that, so they don't have to make 63 extractions?
My biggest problem is that they make mistakes every time. It's a lot of things to fill in.
Can I create a variant and send it to them? Is it possible to export my variant so they can import it, without the need to fill in 63 different sets of data?
Thank you.
When this is a task that takes considerable effort from multiple people each year, it might be worth automating.
First you need to find out where that transaction gets its data from. If you spend some time analyzing and debugging the program behind the transaction, you will surely find which SELECTs on which database table(s) provide that data. If you are lucky, there might even be a function module for it.
Then you just need to write your own ABAP program which performs the same selections.
Now about the interesting part: How to get that data to you. There are several approaches here. The best one depends on your requirements and your technical infrastructure. Some possibilities are:
Let users run the program in foreground, use the method cl_gui_frontend_services=>gui_download to save the data to a file on the user's PC and ask them to send it to you via email
Run the program in background and save the file on the application server. Then ask your sysadmins how to get that file from their application server to you. The simplest way would be to just map a network fileserver so they all write to the same place, but there might be some organizational hurdles in the way which prevent that. (Our security people would call me crazy if I proposed to allow access to SMB shares from outside of our network, but your mileage may vary)
Have the program send the data to you directly via email. You can send emails from an SAP system using the function module SO_NEW_DOCUMENT_ATT_SEND_API1. This of course requires that the system is configured to be able to send emails (which you can do with transaction code SCOT). Again, security considerations apply: if it's PII or other confidential data, you should not send it in an unencrypted email.
Use an RFC call to send the data to your own SAP system which aggregates the data
Use a webservice call to send the data to your own non-SAP system which aggregates the data
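If you go with that last option, the receiving side outside SAP can be very small. A minimal sketch of such an endpoint in Java, using only the JDK's built-in HTTP server (the port, path and output file name are assumptions), that simply appends whatever the ABAP program posts:

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class ExtractReceiver {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/sap-extract", exchange -> {
            // Append whatever the SAP side POSTs to a local file for later aggregation.
            byte[] body = exchange.getRequestBody().readAllBytes();
            Files.write(Paths.get("extracts.csv"), body,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            byte[] ok = "received".getBytes();
            exchange.sendResponseHeaders(200, ok.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(ok);
            }
        });
        server.start();
    }
}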
You can create a recording in transaction SM35.
There you enter a tcode (SUIM), start recording, make some input in transaction SUIM, and then press 'Execute'. Then you go back to the recording (F3 multiple times) and the system will generate a table of commands (the structure is BDCDATA). You can delete the unnecessary parts (e.g. the BACK button click) and save it to use as a 'macro'. Then you can replay this recording and it will do exactly what you did.
It's also possible to export/import the recording to a text file, so you can explore its structure, write a VBA script to create such a recording from your parameters, and send it to the users. But keep in mind that blanks are meaningful.
It's a standard tool, so there's no coding in the system.
You can save the selection as a variant.
Fill in the selection criteria and press Save.
It can be reused.
You can also transport variants if they have a special name.

How to build automation tests for a web service API that are independent of the database

I'm new to automation testing. I currently have a project in which I would like to apply Cucumber to test a REST API. But when I try to assert the output of this API's endpoints based on the current data, I wonder what happens if I change environments or if there are any changes in the test database in the future; my test cases could potentially fail.
What is the best practice for writing tests that are independent of the database?
Or do I need to run my tests against a separate, empty DB and execute some script to initialize the DB before running the tests?
In order for your tests to be trustworthy, they should not depend on whether the test data is in the database or not. You should be in control of that data. So, in order to make the test independent of the current state of the database: insert the expected data as a precondition (setup) of your test, and delete it again at the end of the test. If the database connection is not actually part of your test, you could also stub or mock the result from the database (this will make your tests faster, as you're not using DB connectivity).
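A minimal sketch of that pattern in plain Java (in a Cucumber project the insert would typically go in a Before hook or a Given step; the JDBC URL, table and endpoint are assumptions for illustration):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.sql.Connection;
import java.sql.DriverManager;

public class CarsApiTest {
    public static void main(String[] args) throws Exception {
        // Connect to the same test database the API under test uses (URL/credentials assumed).
        try (Connection db = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/testdb", "test", "test")) {

            // Setup: insert exactly the data the assertion below expects.
            db.createStatement().executeUpdate(
                "INSERT INTO cars (id, name) VALUES (999, 'TestCar')");
            try {
                // Exercise: call the endpoint under test (URL assumed).
                HttpResponse<String> response = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(URI.create("http://localhost:8080/api/cars/999"))
                        .GET().build(),
                    HttpResponse.BodyHandlers.ofString());

                // Verify: assert against the data we just inserted, not pre-existing data.
                if (!response.body().contains("TestCar")) {
                    throw new AssertionError("Expected TestCar in response: " + response.body());
                }
            } finally {
                // Teardown: leave the database as we found it.
                db.createStatement().executeUpdate("DELETE FROM cars WHERE id = 999");
            }
        }
    }
}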
If you are going to assert the response value that comes back (e.g. the number of cars), it is actually impossible to make it database-independent; I guess you can understand why. What I would do in a similar situation is something like this (see the sketch after the steps):
Use the API to get the number of cars in the database (e.g. 544) and assign it to a variable.
Using the API, add another car to the database.
Then check the total number of cars in the database again and assert (it should be 544 + 1, otherwise fail).
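A rough Java sketch of those three steps (the endpoint URLs, the /count resource and the JSON body are assumptions):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CarCountTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String base = "http://localhost:8080/api/cars"; // endpoint is an assumption

        // 1. Read the current count through the API and remember it
        //    (assumes /count returns a plain number).
        long before = Long.parseLong(get(client, base + "/count"));

        // 2. Add another car through the API.
        client.send(HttpRequest.newBuilder(URI.create(base))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\":\"TestCar\"}"))
                .build(),
            HttpResponse.BodyHandlers.ofString());

        // 3. Read the count again and assert that it grew by exactly one.
        long after = Long.parseLong(get(client, base + "/count"));
        if (after != before + 1) {
            throw new AssertionError("Expected " + (before + 1) + " cars, got " + after);
        }
    }

    private static String get(HttpClient client, String url) throws Exception {
        return client.send(HttpRequest.newBuilder(URI.create(url)).GET().build(),
            HttpResponse.BodyHandlers.ofString()).body();
    }
}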
Hope this helps.

How do I design a Gherkin/SpecFlow/Selenium solution to have easily parametrizable logins

I am developing a solution for validating exams built on top of a web application. This implies that:
Multiple users, each with separate logins and tenants, will implement an application to match exam standards
The exam proctor will have to run a validator that checks the implemented application against the definition of what is correct for each step (i.e. in a given step, the unit price times the ordered quantity is the dollar amount to be ordered).
The validator should give exact reports of what occurred so the exam can be rated.
For this, we decided to implement a stack using Selenium for browser automation, and SpecFlow/Gherkin/Cucumber to interact with Selenium.
Right now the main issue I'm having is how to have the person who administers the exam successfully and easily validate, for 20 students, that their exam is correct. My current way of running things is to have an NUnit console runner invoked by a PowerShell script, which then uses SpecFlow to create a detailed execution report.
Should my PowerShell script edit the feature files with tables containing the logins for each of the students, obtained from a .csv or something? Is there any way I can pass the CSV file to NUnit so it can be used in the tests?
Thanks,
JM
I would put the login information into the app.config or another file. Before you start the test run, change the values for that run. In the steps, you then read the values from it.
I agree with all the responses provided earlier. However, if you don't want to do any of those, you can set an environment variable with the login key (or even the credentials) and save the login+password in a file, database, or even a CSV. At runtime, you just need to read this key and insert whatever logic you want. This will work well even on non-Windows machines, build machines, etc.
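Sketched here in Java purely for illustration (a SpecFlow binding would do the equivalent in C# with Environment.GetEnvironmentVariable and a config/CSV reader); the variable name and CSV layout are assumptions:

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class LoginProvider {
    // Returns { login, password } for the student selected via an environment
    // variable; the file name, variable name and CSV layout are all assumptions.
    public static String[] currentLogin() throws Exception {
        String studentId = System.getenv("EXAM_STUDENT_ID");
        List<String> lines = Files.readAllLines(Paths.get("logins.csv"));
        for (String line : lines) {
            String[] parts = line.split(",");
            if (parts[0].equals(studentId)) {
                return new String[] { parts[1], parts[2] };
            }
        }
        throw new IllegalStateException("No login found for student " + studentId);
    }
}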

A process monitor based on periodic SQL selects - does this exist or do I need to build it?

I need a simple tool to visualize the status of a series of processes (ETL processes, but that shouldn't matter). This process monitor needs to be customizable with color coding for different status codes. The plan is to place the monitor on a big screen in the office, making any faults instantly visible to everyone.
Today I can check the status of these processes by running an SQL statement against the underlying tables in our Oracle database. The output of these queries is the above-mentioned status code for each process. I'm imagining using these SQL statements, run periodically (say, every minute or so), as the input to this monitor.
I've considered writing a simple web interface for doing this, but I'm thinking something like this should exist out there already. Anyone have any suggestions?
If you're just displaying on one workstation, another option is SQL Developer Custom Reports. You would still have to fire up SQL Developer and start the report, but custom reports have a setting so they can be refreshed at a specified interval (5-120 seconds). Depending on the 'richness' of the output you want, you can either:
Create a simple Table report (style = Table)
Paste in one of the queries you already use as a starting point.
Create a PL/SQL Block that outputs HTML via DBMS_OUTPUT.PUT_LINE statements (Style = plsql-dbms_output)
Get as creative as you like with formatting, colors, etc. using HTML tags in the output. I have used this to create bar graphs to show the progress of v$Long_Operations. A full description and screenshots are available here: Creating a User Defined HTML Report in SQL Developer.
If you just want to get some output moving, you can forgo SQL Developer, schedule a process that uses your PL/SQL block to write HTML output to a file, and use a browser to display the generated output on your big screen. Alternatively, make the file available via a web server so others in your office can bring it up. Periodically regenerate the file and make sure to add a refresh meta tag to the page so browsers will periodically reload.
Oracle Application Express is probably the best tool for this.
I would say roll your own dashboard. It depends on your skill set, but I'd do a basic web app in Java (Spring or some MVC framework; I'm not a web developer, but I know enough to create a basic functional dashboard). Since you already know the SQL needed, it shouldn't be difficult to put together, and you can modify it as needed in the future. Just keep it simple, I would say (you don't need middleware, single sign-on, or fancy views/charts).
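As a starting point, a minimal sketch using only the JDK's built-in HTTP server and JDBC (the Oracle JDBC URL, credentials, monitoring query, column names and status-to-color mapping are all assumptions; substitute your real monitoring SQL and put the Oracle driver on the classpath):

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class EtlStatusDashboard {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            StringBuilder html = new StringBuilder(
                // Meta refresh so the browser on the big screen reloads every 60 seconds.
                "<html><head><meta http-equiv=\"refresh\" content=\"60\"></head>"
                + "<body><table border=\"1\">");
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@dbhost:1521/ORCL", "monitor", "secret"); // assumed
                 ResultSet rs = con.createStatement().executeQuery(
                     "SELECT process_name, status_code FROM etl_process_status")) { // assumed
                while (rs.next()) {
                    String status = rs.getString("status_code");
                    // Simple color coding: green for OK, red for anything else.
                    String color = "OK".equals(status) ? "#7CFC00" : "#FF4500";
                    html.append("<tr style=\"background:").append(color).append("\"><td>")
                        .append(rs.getString("process_name")).append("</td><td>")
                        .append(status).append("</td></tr>");
                }
            } catch (Exception e) {
                html.append("<tr><td colspan=\"2\">Query failed: ")
                    .append(e.getMessage()).append("</td></tr>");
            }
            html.append("</table></body></html>");
            byte[] page = html.toString().getBytes();
            exchange.getResponseHeaders().add("Content-Type", "text/html");
            exchange.sendResponseHeaders(200, page.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(page);
            }
        });
        server.start();
    }
}

Point the browser on the big screen at http://localhost:8080 and the page re-runs the query on every refresh.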