How to prevent @Rollback in a specific method? - testing

I'm doing tests using Spock in Grails 3. A particular test case is breaking because Grails is talking to my database in two different sessions in this case. My spec is annotated with @Rollback, to roll back all changes made by each test.
Is there a way to disable @Rollback for this one method and then, after the test is complete, manually roll back the changes?
Update
A stripped down example of my test spec is below:
@Log4j
@Integration
@Rollback
class PublicationReportBreakingSpec extends Specification {

    @Unroll
    @Rollback
    void "Invalidate #type object and check not in report"(type, status, hits, expect) {
        //We want to roll back the changes made to the publication after each pass.
        given: "Find a valid #type publication"
        //Finding publication which is Read (valid, should appear in report)
        final Publication pub = getSinglePub(type, status, hits)

        //Set publication to be Unread (invalid, shouldn't appear in report)
        when: "The #type publication is altered to fail"
        pub.setStatus('Unread')
        pub.save(flush: true, failOnError: true)

        then: "Check the properties match the test case"
        pub.status == 'Unread'

        //Generate report of read publications
        when: "The report is run"
        def resp = PublicationController.reportReadPublications()

        //Make sure report doesn't contain the publication
        then: "Check for expected result #expect"
        checkReportExpectedResult(resp, expect)

        where:
        clause              | type      | status | hits || expect
        "Book is unread"    | 'Book'    | 'Read' | 1200 || 0
        "Article is unread" | 'Article' | 'Read' | 200  || 0
    }

    //Checks report for expect value
    public void checkReportExpectedResult(resp, expect) {...}

    //Returns single publication based on passed in parameters
    public Publication getSinglePub(type, status, hits) {...}
}
The stacktrace for the error is:
<testcase name="Testing breaking domain class changes. Book is unread" classname="com.my.project.PublicationReportBreakingSpec" time="118.216">
<failure message="java.lang.IllegalStateException: No transactionManager was specified. Using @Transactional or @Rollback requires a valid configured transaction manager. If you are running in a unit test ensure the test has been properly configured and that you run the test suite not an individual test method." type="java.lang.IllegalStateException">java.lang.IllegalStateException: No transactionManager was specified. Using @Transactional or @Rollback requires a valid configured transaction manager. If you are running in a unit test ensure the test has been properly configured and that you run the test suite not an individual test method.
at grails.transaction.GrailsTransactionTemplate.<init>(GrailsTransactionTemplate.groovy:60)
at com.my.project.PublicationReportBreakingSpec (Removed due to sensitivity)
at com.my.project.PublicationReportBreakingSpec (Removed due to sensitivity)
at groovy.lang.Closure.call(Closure.java:414)
at groovy.lang.Closure.call(Closure.java:430)
at grails.transaction.GrailsTransactionTemplate$2.doInTransaction(GrailsTransactionTemplate.groovy:96)
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:133)
at grails.transaction.GrailsTransactionTemplate.execute(GrailsTransactionTemplate.groovy:93)
at com.my.project.PublicationReportBreakingSpec.Invalidate #type object and check not in report. Check #clause, publication type #type(PublicationReportBreakingSpec.groovy)

According to the Javadoc, annotating the test with @Rollback(false) should do the trick.

It might be a bug if @Rollback(false) is not working...
As a workaround, the annotation can be applied at method level too. So remove the annotation from your test class and put it on your test methods, except the one you don't want to roll back. Then add a cleanup: section in that spec to clean up the data you created.

If you are not interested in rolling back your changes, you don't want to use a transaction... therefore you can skip the transaction in THAT method. You can do that with @NotTransactional.
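A minimal sketch of the method-level layout the answers above describe, assuming a Grails 3 integration spec with the grails.transaction annotations (package names differ between Grails versions, and the spec, feature, and domain names are illustrative):

import grails.test.mixin.integration.Integration
import grails.transaction.NotTransactional
import grails.transaction.Rollback
import spock.lang.Specification

@Integration
class PublicationReportSpec extends Specification {

    @Rollback
    void "changes made here are rolled back automatically"() {
        // create and modify test data as usual; it is rolled back after the test
        expect:
        true
    }

    @NotTransactional
    void "changes made here survive until cleanup"() {
        // no transaction is opened for this feature method
        expect:
        true
    }

    void cleanup() {
        // manually remove any data the non-transactional feature created,
        // e.g. inside a Publication.withTransaction { ... } block
    }
}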

Related

How to skip a testcase if a link is not present and go to next link in Robot framework

Scenario:
There are 5 Links in the Home page:
Link 1
Link 2
Link 3
Link 4
Link 5
Each of the above links is a separate test case, so there are a total of 5 test cases.
Not all of the links are present on all of the sites, according to the requirements.
So I need to write a Robot Framework test case which works dynamically for all the sites: one site may have only 3 links while another has all 5. So it's like SKIPPING a particular test case if that link is not present.
*** Keywords ***
Go to Manage Client Reports
    Click Link    link:Manage Client Reports
Can anyone help?
In the upcoming Robot Framework release 4.0 a new test status, skipped, will be introduced. Here is a brief status summary for the release:
Past due by 27 days 87% complete
Major release concentrating on adding the skip status (#3622), IF/ELSE
(#3074) and enhancing the listener API (#3296 and #3538). Last major
release to support Python 2.
So it could be ready any time now.
This is what you will get with New SKIP status (#3622): there will be Skip If and Skip keywords, among others; a short usage sketch follows the list below.
How to skip tests
There are going to be multiple ways:
A special exception that library keywords can use to mark a single test to be skipped. See also #3685.
BuiltIn keyword Skip (or Skip Test and Skip Task) that utilizes the aforementioned exception.
BuiltIn keyword Skip If to skip based on a condition.
When the skipping exception is used in a suite setup, all tests in the suite are skipped.
Command line option --skip to unconditionally skip tests based on tags. Similar to --exclude, but skipped tests are shown in logs/reports with a skip status and not dropped from execution altogether.
Command line option --skiponfailure to skip tests if they fail. Similar effect to the current --noncritical.
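A minimal sketch of how the skip keywords could be used for the link scenario, assuming SeleniumLibrary is loaded and Robot Framework 4.0 or later (the keyword and locator names are illustrative):

*** Test Cases ***
Go to Manage Client Reports
    ${present}=    Run Keyword And Return Status    Page Should Contain Link    link:Manage Client Reports
    Skip If    not ${present}    Link 'Manage Client Reports' is not present on this site
    Click Link    link:Manage Client Reports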
What about criticality
As already discussed in #2087, the skip status is a very similar feature
to Robot's current criticality concept. There are many people who
would like to have both, but I don't think that's a good idea and
believe it's better to remove criticality when skipping is added.
Separate issue #3624 covers removing criticality and explains this in
more detail.
Colors
Skip status needs a specific color to match current pass (green) and
fail (red). Yellow feels like a good candidate with a traffic light
metaphor, but I'm open for other ideas and we could possibly change
other colors as well. Probably should make colors configurable too --
currently only report background colors support it.
Report background color mentioned above needs some thinking as well.
Currently it's either green or red, but with the added skip status we
could use also yellow or whatever skip color we decide to use.
Different scenarios where different colors could be used are listed
below (assuming green/yellow/red scheme):
All tests pass. This is naturally green.
Any test fails. This is naturally red.
Any test is skipped (no failures). This probably should be green but could also be yellow.
All tests skipped. This could be yellow. Could also be green but that's a bit odd if all tests are yellow.
Depending on your deadlines you might not be able to wait for this release; nevertheless, it is good to know about.
There is an advanced solution where you can generate your test cases at run time. To do so you have to implement a small library that also acts as a listener. This way it can have a start_suite method that will be invoked with the suite(s) as Python object(s), robot.running.model.TestSuite. Then you can use this object along with Robot Framework's API to create new test cases. The idea below is based on this blog post: Dynamically create test cases with Robot Framework.
DynamicTestLibrary.py:
from robot.running.model import TestSuite


class DynamicTestLibrary(object):
    ROBOT_LISTENER_API_VERSION = 3
    ROBOT_LIBRARY_SCOPE = 'GLOBAL'
    ROBOT_LIBRARY_VERSION = 0.1

    def __init__(self):
        self.ROBOT_LIBRARY_LISTENER = self
        self.top_suite = None

    def _start_suite(self, suite, result):
        self.top_suite = suite
        self.top_suite.tests.clear()  # remove placeholder test

    def add_test_case(self, keyword, *args):
        tc = self.top_suite.tests.create(name=keyword)
        tc.keywords.create(name=keyword, args=args)


globals()[__name__] = DynamicTestLibrary
UPDATE for Robot Framework 4.0
Due to the backward incompatible changes made in the 4.0 release (the running and result models have been changed), the add_test_case function should be changed as below if you are using version 4.0 or above.
    def add_test_case(self, name, keyword, *args):
        tc = self.top_suite.tests.create(name=name)
        tc.body.create_keyword(name=keyword, args=args)
You can utilize this library in a suite setup, in which you check which links are present and add test cases for the ones that are available.
test.robot
*** Settings ***
Library        DynamicTestLibrary
Suite Setup    Check Links And Generate Test Cases

*** Variables ***
# @{LINKS}    Manage Clients    # test input 1
@{LINKS}    Manage Clients    Manage Client Hardware    # test input 2
# @{LINKS}    Manage Clients    Manage Client Hardware    Manage Client Reports    # test input 3

*** Test Cases ***
Placeholder
    [Documentation]    Placeholder test that will be removed during execution.
    No Operation

*** Keywords ***
Check Links And Generate Test Cases
    FOR    ${link}    IN    @{LINKS}
        DynamicTestLibrary.Add Test Case    Go to ${link}
    END

Go to Manage Client Reports
    Log Many    Click Link    link:Manage Client Reports

Go to Manage Client Hardware
    Log Many    Click Link    link:Manage Client Hardware

Go to Manage Clients
    Log Many    Click Link    link:Manage Clients
Go to ${link} will give the appropriate keyword name that will be called in a test case with the same name. You can check with each example input list that the number of executed tests equals the length of the list.
Here is the output:
# robot --pythonpath . test.robot
==============================================================================
Test
==============================================================================
Go to Manage Clients | PASS |
------------------------------------------------------------------------------
Go to Manage Client Hardware | PASS |
------------------------------------------------------------------------------
Test | PASS |
2 critical tests, 2 passed, 0 failed
2 tests total, 2 passed, 0 failed
==============================================================================

Persist detailed information about failed Item processing

I've got a Job that runs a TaskletStep, then a chunk-based step, and then another TaskletStep.
In each of these steps, errors (in the form of Exceptions) can occur.
The chunk-based step looks like this:
stepBuilderFactory
    .get("step2")
    .chunk<SomeItem, SomeItem>(1)
    .reader(flatFileItemReader)
    .processor(itemProcessor)
    .writer {}
    .faultTolerant()
    .skipPolicy { _, _ -> true } // skip all Exceptions and continue
    .taskExecutor(taskExecutor)
    .throttleLimit(taskExecutor.corePoolSize)
    .build()
The whole job definition:
jobBuilderFactory.get("job1")
    .validator(validator())
    .preventRestart()
    .start(taskletStep1)
    .next(step2)
    .next(taskletStep2)
    .build()
I expected that Spring Batch would somehow pick up the Exceptions that occur along the way, so I could then create a report including them after the Job has finished processing. Looking at the different contexts, there are also fields that should contain failureExceptions. However, it seems there is no such information (especially for the chunked step).
What would be a good approach if I need information about:
what Exceptions did occur in which Job execution
which Item was the one that triggered it
The JobExecution provides a method to get all failure exceptions that happened during the job. You can use that in a JobExecutionListener#afterJob(JobExecution jobExecution) to generate your report.
As for which items caused the issue, this will depend on where the exception happens (during the read, process, or write operation). For this requirement, you can use one of the ItemReadListener, ItemProcessListener, or ItemWriteListener interfaces to keep a record of those items (for example, by adding them to the job execution context so you can access them in the JobExecutionListener#afterJob method for your report).
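A minimal sketch of that idea, assuming the processor is the failing stage; here the listener collects the failing items in memory rather than going through the execution context, the SomeItem type comes from the question's code, and the class name and report output are illustrative:

import org.springframework.batch.core.ItemProcessListener
import org.springframework.batch.core.JobExecution
import org.springframework.batch.core.JobExecutionListener
import java.util.Collections

// Register this on both the step (.listener(reportListener)) and the job.
class FailedItemReportListener : ItemProcessListener<SomeItem, SomeItem>, JobExecutionListener {

    // synchronized because the step runs on a taskExecutor with several threads
    private val failedItems =
        Collections.synchronizedList(mutableListOf<Pair<SomeItem, Exception>>())

    override fun beforeProcess(item: SomeItem) {}

    override fun afterProcess(item: SomeItem, result: SomeItem?) {}

    override fun onProcessError(item: SomeItem, e: Exception) {
        // remember which item triggered which exception
        failedItems.add(item to e)
    }

    override fun beforeJob(jobExecution: JobExecution) {}

    override fun afterJob(jobExecution: JobExecution) {
        // exceptions Spring Batch recorded for the execution as a whole
        jobExecution.allFailureExceptions.forEach { println("Job-level failure: $it") }
        // items that failed in the processor of the fault-tolerant step
        failedItems.forEach { (item, e) -> println("Item $item failed with $e") }
    }
}

A SkipListener could be used the same way if you only care about the items that were actually skipped by the skip policy.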

Pentaho Data Integration: Error Handling

I'm building out an ETL process with Pentaho Data Integration (CE) and I'm trying to operationalize my Transformations and Jobs so that they can be monitored. Specifically, I want to be able to catch any errors and then send them to an error reporting service like Honeybadger or New Relic. I understand how to do row-level error reporting, but I don't see a way to do job- or transformation-level failure reporting.
Here is an example job.
The down path is where the transformation succeeds but has row errors. There we can just filter the results and log them.
The path to the right is the case where the transformation fails all-together (e.g. DB credentials are wrong). This is where I'm having trouble: I can't figure out how to get the error info to be sent.
How do I capture transformation failures to be logged?
You cannot capture job-level error details inside the job itself.
However there are other options for monitoring.
The first option is using database logging for transformations or jobs (see the "Log" tab in the job/transformation properties dialog). This way you always have up-to-date information about the execution status, so you can, say, write a job that periodically scans the logging database and sends error reports wherever you need.
However, this option is fairly heavy-weight to develop and support and not too flexible for further modifications. So in our company we ended up with monitoring at the job-execution level: when you run a job with kitchen.bat and it fails for any reason, kitchen exits with an error status, so you can easily detect it and take the necessary actions with whatever tools you like - .bat commands, PowerShell, or (in our case) Jenkins CI.
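A minimal sketch of such a wrapper as a Windows batch file, assuming kitchen.bat returns a non-zero exit code on failure; the job path, log path, and the reporting command are placeholders:

kitchen.bat /file:C:\etl\my_job.kjb /level:Basic > C:\etl\logs\my_job.log 2>&1
IF %ERRORLEVEL% NEQ 0 (
    REM the job failed - forward the log to your error reporting service
    curl -X POST -F "log=@C:\etl\logs\my_job.log" https://example.com/etl-failures
)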
You could use the writeToLog("e", "Message") function in the Modified Java Script step.
Documentation:
// Writes a string to the defined Kettle Log.
//
// Usage:
// writeToLog(var);
// 1: String - The Message which should be written to
// the Kettle Debug Log
//
// writeToLog(var,var);
// 1: String - The Type of the Log
// d - Debug
// l - Detailed
// e - Error
// m - Minimal
// r - RowLevel
//
// 2: String - The Message which should be written to
// the Kettle Log
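For example, inside a Modified Java Script Value step you could emit an error-level entry for a failing row like this (the field names are illustrative):

// write an error-level message to the Kettle log for the current row
writeToLog("e", "Failed to process order " + order_id + ": " + error_description);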

I can't rename my transaction on HP Virtual User Generator Script

I've copied a VuGen script (and of course renamed it) which accesses another DB, and when I run it the output shows the old transaction name from the old script.
Here is the old transaction name which is seen in the output: MDM_GetAssociations
Here is the renamed transaction: MDM_GET_ASSOCIATIONS_Otmann
After renaming the transaction, I ran the script and got 2 errors:
1)
Error 14 undeclared identifier `MDM_GET_ASSOCIATIONS_Otmann' Action.c C:\GCDM_Test\Scripts\MDM\MDM_Get_POSTGRE_Otmann MDM_Get_POSTGRE_Otmann
2)
Error 15 type error in argument 1 to `web_custom_request'; found `int' expected `pointer to const char' Action.c C:\GCDM_Test\Scripts\MDM\MDM_Get_POSTGRE_Otmann MDM_Get_POSTGRE_Otmann
And this is my script:
//########## start the test scenario ############
web_set_max_html_param_len("8000");
web_set_sockets_option("SSL_VERSION", "TLS");
web_add_auto_header("Content-Type", "application/xml");
web_add_auto_header("Accept", "application/json");
web_add_auto_header("Authorization", lr_eval_string("{AUTHORIZATION}"));

//GetAssociations, NOTE: our dummy customers have often NO associations!
web_reg_save_param("RESPONSE", "LB=", "RB=", "Search=Body", LAST);

lr_start_transaction((char*)MDM_GENERIC_TRANSACTION);
lr_start_transaction((char*)MDM_GET_ASSOCIATIONS);

web_custom_request(MDM_GET_ASSOCIATIONS,
    "URL={TEST_ENV_HOSTNAME}/api/v3/clients/{BUSINESS_CONTEXT}/customers/{GCID}/associations",
    "Method=GET",
    "Resource=1", // => We are retrieving a resource,
                  // which implies that it is not critical for the success of the script.
                  // Any failures (HTTP 404 - Not found etc.) in downloading the resource
                  // will be considered as warnings rather than errors.
    "EncType=application/xml",
    "Referer=Loadrunner",
    LAST);

lr_end_transaction((char*)MDM_GET_ASSOCIATIONS, LR_AUTO);
lr_end_transaction((char*)MDM_GENERIC_TRANSACTION, LR_AUTO);

return 0;
}
And this is the output where the old transaction name appeared (MDM_GetAssociations). I don't know where it is defined or where it comes from, and as I said before, when I try to change it everywhere that has to do with transactions, I get the errors mentioned above.
Here is the output of the script, where you can see the name of the old transaction (MDM_GetAssociations).
Action.c(13): Notify: Transaction "MDM_GenericServiceCall_ALL" started.
Action.c(14): Notify: Transaction "MDM_GetAssociations" started.
Action.c(15): web_custom_request("MDM_GetAssociations") started
Action.c(15): web_custom_request("MDM_GetAssociations") highest severity level was "warning", 505 body bytes, 1971 header bytes [MsgId: MMSG-26388]
Action.c(25): Notify: Transaction "MDM_GetAssociations" ended with "Pass" status (Duration: 1,8408 Wasted Time: 1,2668).
Action.c(26): Notify: Transaction "MDM_GenericServiceCall_ALL" ended with "Pass" status (Duration: 2,4066 Wasted Time: 1,2668).
Ending action Action.
Ending iteration 1.
You have two variables. You do not have their declarations here. You do not have their contents. And you appear to be casting them from another data type to a pointer to a character.
Does this pass with a literal, "My_Test_Transaction"? If so, then you are likely looking at oddities in how your variable is declared, populated, and referenced.
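A minimal sketch of the two things to check, assuming the identifiers were originally C macros defined in a header such as globals.h (the values below are illustrative):

/* globals.h - the identifiers passed to lr_start_transaction() must be
   declared somewhere; if they are macros, rename them (or their values) here */
#define MDM_GENERIC_TRANSACTION  "MDM_GenericServiceCall_ALL"
#define MDM_GET_ASSOCIATIONS     "MDM_GET_ASSOCIATIONS_Otmann"

/* Action.c - or simply test with a string literal to rule the macros out */
lr_start_transaction("My_Test_Transaction");
lr_end_transaction("My_Test_Transaction", LR_AUTO);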

MQL4 How To Detect Status During Change of Account (Completed Downloading of Historical Trades)

In MT4, there is a stage/state: when we switch from AccountA to AccountB, the connection is established and init() and start() are triggered by MT4, but this happens before the "blinnnggg" (sound) when all the historical/outstanding trades have been loaded from the server.
Switch Account > Establish Connection > Trigger init()/start() events > Start downloading of outstanding/historical trades > Completed downloading (issue "bliinng" sound).
I need to know (in MQL4) when all the trades have finished downloading from the trade server, to know whether the account is truly empty versus still downloading history from the trade server.
Any pointer will be appreciated. I've explored IsTradeAllowed(), IsContextBusy() and IsConnected(). All of these are in a "normal" state and the init() and start() events all fire OK. But I cannot figure out whether the history/outstanding trade list has completed downloading.
UPDATE: The workaround I finally implemented was to use OrdersHistoryTotal(). Apparently this number will be ZERO (0) while the order history is still downloading, and it will NEVER be zero once it is loaded (due to the initial deposit). So I ended up using this as a "flag".
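A minimal sketch of that flag, assuming the account history always ends up with at least one record (the initial deposit), as described above; the function name and timeout are illustrative:

// wait (with a timeout) until the account history has finished downloading
bool HistoryLoaded(int timeout_seconds = 30)
{
    int waited = 0;
    while (OrdersHistoryTotal() == 0 && waited < timeout_seconds)
    {
        Sleep(1000);   // history still empty - assume it is still downloading
        waited++;
    }
    return (OrdersHistoryTotal() > 0);
}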
Observation
As the problem was posted, there seems to be no such "integrated" method in the MT4 Terminal.
IsTradeAllowed() reflects an administrative state of the account/access to the execution of the Trading Services { IsTradeAllowed | !IsTradeAllowed }
IsConnected() reflects a technical state of the visibility / login credentials / connection used upon an attempt to setup/maintain an online connection between a localhost <-> Server { IsConnected() | !IsConnected() }
init() {...} is a one-stop setup facility that is called once an MT4-programme { ExpertAdvisor | Script | TechnicalIndicator } is launched on a localhost machine. This facility is strongly advised to be non-blocking and non-re-entrant. A change from user account_A to another user account_B is typically ( via the MT4-configuration options ) a reason to stop execution of a previously loaded MQL4-code ( be it an EA / a Script / a Technical Indicator ).
start() {...} is an event-handler facility that endlessly waits for the next occurrence of an FX-Market Event ( propagated down the line by the Broker's MT4-Server automation ), announced via an established connection downwards to the MT4-Terminal process running on a localhost machine.
A Workaround Solution
As understood, the problem may be detected and handled indirectly.
While the MT4 platform seems to have no direct method to distinguish between the complete / in-complete refresh of the list of { current | historical } trades, let me propose a method of an indirect detection thereof.
Try to launch a "signal"-trade ( a pending order, placed geometrically well far away, in the PriceDOMAIN, from the current Ask/Bid-levels ).
Once this trade would be end-to-end registered ( Server-side acknowledged ), the local-side would have confirmed the valid state of the db.POOL
Making this a request/response pattern between the localhost / MT4-Server processes, the localhost int init(){...} / int start(){...} functionality may thus reflect the moment when both sides have a synchronised state of the records in db.POOL ( a sketch of such a probe follows below ).
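A minimal sketch of that probe, assuming a pending BUY LIMIT placed far below the market and deleted as soon as the server acknowledges it; the price distance, lot size, and comment are illustrative:

// place a far-away pending order; a valid ticket means the server-side
// order pool is reachable and the request was acknowledged end-to-end
bool ProbeServerSync()
{
    double farPrice = NormalizeDouble(Bid - 1000 * Point, Digits);
    int ticket = OrderSend(Symbol(), OP_BUYLIMIT, 0.01, farPrice, 3, 0, 0,
                           "sync-probe", 0, 0, clrNONE);
    if (ticket < 0)
        return (false);          // not acknowledged (yet)
    OrderDelete(ticket);         // clean up the probe order
    return (true);
}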