How to run a simulation case using the CaseRunner function? - Ocean

I'm currently working on a Petrel plug-in in which I need to run a simulation case inside a for loop. I create my case runner, export it, and then run it, but after the simulation finishes and the console closes, the CaseRunner.IsRunning property still shows true. As a result, the results are not loaded into the Petrel system.
I tried to load the results manually after the run finished (using the CaseRunner and also using a batch file from my code), but I can't see any results in the programming environment.
Does anybody have a solution for this situation?
This is the related part of my code:
Case theCase = arguments.TheCase;
Case Test2 = simroots.CreateCase(theCase, "FinalCase");
CaseRunner cRunners = SimulationSystem.GetCaseRunner(Test2);
cRunners.Export();
cRunners.Run();
bool b = cRunners.IsRunning;
Actually, I did check when the process finishes; after cRunners.Run() the code waits for the process to exit using:
System.Diagnostics.Process[] parray = System.Diagnostics.Process.GetProcesses();
foreach (System.Diagnostics.Process pr in parray)
{
    if (pr.ProcessName == "cmd")
    {
        pr.WaitForExit(); // just wait
    }
}
and when the console closes itself, I check cRunners.IsRunning.
However, I'm not an expert... can you show me an example of using CaseRunnerMonitor? Both the definition of the derived class and how to use it.
All I need is to run a simulation case n times via a for loop and, after each run, access its summary results.
I tried several different scenarios to get the results I want; here are some of them.
First I create my CaseRunnerMonitor class:
public class MyMonitor : CaseRunnerMonitor
{
    //…
    public override void RunCompleted()
    {
        // define arguments
        foreach (Slb.Ocean.Petrel.DomainObject.Simulation.SummaryResult sr in simroot.SummaryResults)
        {
            IEnumerable ….
            List ….
            // some code to change the input arguments according to the current step's simulation summary results
        }
        PetrelLogger.InfoOutputWindow("MyMonitor is completed!");
    }
    //…
}
And then use it:
private void button1_Click(object sender, EventArgs e)
{
    // Some code that defines some arguments…
    for (int j = 0; j < 8; j++)
    {
        // some changes in the arguments
        Case MyTest;
        MyMonitor monit4 = new MyMonitor();
        SimulationRoot simroot = SimulationRoot.Get(PetrelProject.PrimaryProject);
        using (ITransaction trans = DataManager.NewTransaction())
        {
            trans.Lock(simroot);
            MyTest = simroot.CreateCase(OriginalCase, MycaseNameFunc());
            trans.Commit();
        }
        CaseRunner cRun = SimulationSystem.GetCaseRunner(MyTest);
        cRun.Export();
        cRun.Run(monit4);
        //Wait(); // waits for the current process to close
    }
}
But the problem is that the MyTest case results are empty after each run completes; all the results are loaded into Petrel only when the 8th (last) simulation completes. If I don't activate the Wait() function, all 8 runs are launched almost simultaneously…
So I changed my scenario: after each run the callback reads the simulation results, changes something, and starts the next run.
I create my CaseRunnerMonitor class:
public class MyMonitor2 : CaseRunnerMonitor
{
    //…
    public override void RunCompleted()
    {
        // define arguments
        index++;
        if (index <= 8)
        {
            SimulationRoot simroot = SimulationRoot.Get(PetrelProject.PrimaryProject);
            foreach (Slb.Ocean.Petrel.DomainObject.Simulation.SummaryResult sr in simroot.SummaryResults)
            {
                IEnumerable ….
                List ….
                // some code to change the input arguments according to the current step's simulation summary results
            }
            Case MyTest;
            MyMonitor2 monit4 = new MyMonitor2(); // chain the same monitor type for the next run
            using (ITransaction trans = DataManager.NewTransaction())
            {
                trans.Lock(simroot);
                MyTest = simroot.CreateCase(OriginalCase, MycaseNameFunc());
                trans.Commit();
            }
            CaseRunner cRun = SimulationSystem.GetCaseRunner(MyTest);
            cRun.Export();
            cRun.Run(monit4);
        }
        PetrelLogger.InfoOutputWindow("MyMonitor2 is completed!");
    }
    //…
}
And then use it:
private void button1_Click(object sender, EventArgs e)
{
    index = 0;
    // Some code that defines some arguments…
    // some changes in the arguments
    Case MyTest;
    MyMonitor2 monit5 = new MyMonitor2();
    SimulationRoot simroot = SimulationRoot.Get(PetrelProject.PrimaryProject);
    using (ITransaction trans = DataManager.NewTransaction())
    {
        trans.Lock(simroot);
        MyTest = simroot.CreateCase(OriginalCase, MycaseNameFunc());
        trans.Commit();
    }
    CaseRunner cRun = SimulationSystem.GetCaseRunner(MyTest);
    cRun.Export();
    cRun.Run(monit5);
}
In this situation no Wait() function is required. But the problem is that I can only access the MyTest case results one step behind the current run; i.e., I can view the step-5 results via MyTest.Results once run 6 has completed, while the step-6 results are empty even though its run has finished.

I check the CaseRunner.IsRunning property and it shows true
This is because CaseRunner.Run() is non-blocking; that is, it starts another thread to launch the run. Control flow then passes immediately to your cRunners.IsRunning check, which is true because the simulation is still in progress.
cRunners.Run(); //non-blocking
bool b = cRunners.IsRunning;
You should look at CaseRunnerMonitor if you want a call-back when the simulation is complete.
Edit:
can you show me an example of using CaseRunnerMonitor? both definition of the derived class and its implementation.
Create your monitor class:
public class CustomCaseRunnerMonitor : CaseRunnerMonitor
{
    //...
    public override void RunCompleted()
    {
        // This is probably the callback you want
    }
}
Use it:
Case myCase = WellKnownSimulators.ECLIPSE100.CreateSimulationCase(...);
CaseRunner runner = SimulationSystem.GetCaseRunner(myCase);
var myMonitor = new CustomCaseRunnerMonitor(...);
runner.Run(myMonitor);
//Your callbacks defined in your CustomCaseRunnerMonitor will now be called
See also "Running and monitoring a Simulation" in SimulationSystem API documentation.

Ah, OK. I didn't realise you were trying to load results with the CaseMonitor.
I'm afraid the short answer is "No, you can't know when Petrel has loaded results".
The long answer is that Petrel will automatically load results if the option is set in the Case arguments (Define Simulation Case -> Advanced -> Automatically load results).
In the API:
EclipseFormatSimulator.Arguments args = EclipseFormatSimulator.GetEclipseFormatSimulatorArguments(myCase);
EclipseFormatSimulator.Arguments.RuntimeArguments runtimeArgs = args.Runtime;
runtimeArgs.AutoLoadResults = true;
runtimeArgs.AutoLoadResultsInterval = 120; //How frequently in seconds Petrel polls sim dir.
You will have to poll SimulationRoot.SummaryResults (using the same API you are already using) after the case has finished.
You should use the CaseRunnerMonitor we discussed to determine when to start doing this, rather than the System.Diagnostics.Process.GetProcesses() code you currently have.
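Putting those two pieces together, a rough sketch could look like the following. Only the EclipseFormatSimulator and SummaryResults calls are the ones shown above; the polling loop, the interval values and the WaitForSummaryResults helper are my own assumptions, since Petrel does not raise an event when the results have actually been loaded.
// 1. Ask Petrel to auto-load results for the case before exporting/running it.
EclipseFormatSimulator.Arguments args =
    EclipseFormatSimulator.GetEclipseFormatSimulatorArguments(myCase);
args.Runtime.AutoLoadResults = true;
args.Runtime.AutoLoadResultsInterval = 60; // seconds between Petrel's polls of the simulation directory

CaseRunner runner = SimulationSystem.GetCaseRunner(myCase);
runner.Export();
runner.Run(myMonitor); // a CaseRunnerMonitor, as in the earlier example

// 2. After the monitor's RunCompleted() has fired, keep checking SummaryResults
//    until Petrel has loaded something (or give up after a while).
//    Hypothetical helper - the timing values are guesses, not a documented contract.
void WaitForSummaryResults(SimulationRoot simroot)
{
    for (int attempt = 0; attempt < 30; attempt++)
    {
        foreach (Slb.Ocean.Petrel.DomainObject.Simulation.SummaryResult sr in simroot.SummaryResults)
        {
            return; // at least one summary result is available - read what you need
        }
        System.Threading.Thread.Sleep(10000); // nothing loaded yet; wait 10 s and re-check
    }
    PetrelLogger.InfoOutputWindow("Summary results were not loaded within the polling window.");
}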

Related

How to Take Screenshot when TestNG Assert fails?

String Actualvalue = d.findElement(By.xpath("//*[@id=\"wrapper\"]/main/div[2]/div/div[1]/div/div[1]/div[2]/div/table/tbody/tr[1]/td[1]/a")).getText();
Assert.assertEquals(Actualvalue, "jumlga");
captureScreen(d, "Fail");
The assert should not be put before your capture-screen call, because a failed assertion immediately ends the test, so your
captureScreen(d, "Fail");
call will never be reached.
This is how I usually do it:
boolean result = false;
try {
    // do stuff here
    result = true;
} catch (Exception_class_Name ex) {
    // code to handle the error and capture a screenshot
    captureScreen(d, "Fail");
}
// then use the assert
Assert.assertEquals(result, true);
1.
A good solution is to use a reporting framework like allure-reports.
Read here: allure-reports
2.
We don't want our tests to be ugly by adding a try/catch to every test, so we will use Listeners, which use an annotation system to "listen" to our tests and act accordingly.
Example:
public class listeners extends commonOps implements ITestListener {
    public void onTestFailure(ITestResult iTestResult) {
        System.out.println("------------------ Starting Test: " + iTestResult.getName() + " Failed ------------------");
        if (platform.equalsIgnoreCase("web"))
            saveScreenshot();
    }
}
Please note I only used the method relevant to your question, and I suggest you read here:
TestNG Listeners
Now we want to take a screenshot every time a test fails, using the attachment mechanism built into allure-reports, so we will add this method inside our listeners class.
Example:
@Attachment(value = "Page Screen-Shot", type = "image/png")
public byte[] saveScreenshot() {
    return ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);
}
Test example
@Listeners(listeners.class)
public class myTest extends commonOps {
    @Test(description = "Test01: Add numbers and verify")
    @Description("Test Description: Using Allure reports annotations")
    public void test01_myFirstTest() {
        Assert.assertEquals(result, true);
    }
}
Note that at the beginning of the class we use the @Listeners(listeners.class) annotation, which allows our listener to listen to our test; mind that (listeners.class) can be whatever class you named your listener.
The @Description annotation is related to allure-reports and, as the code snippet suggests, lets you add additional info about the test.
Finally, Assert.assertEquals(result, true) will take a screenshot if the assertion fails, because we registered our listeners class for the test.

Abort/ignore parameterized test in JUnit 5

I have some parameterized tests
@ParameterizedTest
@CsvFileSource(resources = "testData.csv", numLinesToSkip = 1)
public void testExample(String parameter, String anotherParameter) {
    // testing here
}
In case one execution fails, I want to ignore all following executions.
AFAIK there is no built-in mechanism to do this. The following does work, but is a bit hackish:
@TestInstance(Lifecycle.PER_CLASS)
class Test {

    boolean skipRemaining = false;

    @ParameterizedTest
    @CsvFileSource(resources = "testData.csv", numLinesToSkip = 1)
    void test(String parameter, String anotherParameter) {
        Assumptions.assumeFalse(skipRemaining);
        try {
            // testing here
        } catch (AssertionError e) {
            skipRemaining = true;
            throw e;
        }
    }
}
In contrast to a failed assertion, which marks a test as failed, a failed assumption results in the test being aborted. In addition, the lifecycle is switched from per-method to per-class:
When using this mode, a new test instance will be created once per test class. Thus, if your test methods rely on state stored in instance variables, you may need to reset that state in @BeforeEach or @AfterEach methods.
Depending on how often you need that feature, I would rather go with a custom extension.

I need the answer of one JADE agent to depend on information from other agents and don't know how to do it

I'm new to JADE and I have 5 agents in Eclipse that share a formula for finding an average; the question is how to send information from an agent to this formula for calculation.
I'll be glad if someone can help me with this.
For example, here is one of my agents. There's no formula in it yet, because I don't know how to represent it. This is the mathematical expression for it: n += alfa(y(1,2) - y(1,1))
public class FirstAgent extends Agent {
private Logger myLogger = Logger.getMyLogger(getClass().getName());
public class WaitInfoAndReplyBehaviour extends CyclicBehaviour {
public WaitInfoAndReplyBehaviour(Agent a) {
super(a);
}
public void action() {
ACLMessage msg = myAgent.receive();
if(msg != null){
ACLMessage reply = msg.createReply();
if(msg.getPerformative()== ACLMessage.REQUEST){
String content = msg.getContent();
if ((content != null) && (content.indexOf("What is your number?") != -1)){
myLogger.log(Logger.INFO, "Agent "+getLocalName()+" - Received Info Request from "+msg.getSender().getLocalName());
reply.setPerformative(ACLMessage.INFORM);
try {
reply.setContentObject(7);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
else{
myLogger.log(Logger.INFO, "Agent "+getLocalName()+" - Unexpected request ["+content+"] received from "+msg.getSender().getLocalName());
reply.setPerformative(ACLMessage.REFUSE);
reply.setContent("( UnexpectedContent ("+content+"))");
}
}
else {
myLogger.log(Logger.INFO, "Agent "+getLocalName()+" - Unexpected message ["+ACLMessage.getPerformative(msg.getPerformative())+"] received from "+msg.getSender().getLocalName());
reply.setPerformative(ACLMessage.NOT_UNDERSTOOD);
reply.setContent("( (Unexpected-act "+ACLMessage.getPerformative(msg.getPerformative())+") )");
}
send(reply);
}
else {
block();
}
}
}
}
So from what I can make out, you want to (1) send a formula/task to multiple platforms, (2) have it performed locally, and (3) have the results communicated back.
I think there are at least two ways of doing this:
The first is sending an object in an ACLMessage using Java serialisation. This is a more OOP approach and not very "agenty".
The second is cloning or creating a local task agent.
Using Java serialisation (Solution 1)
Create an object for the calculation:
class CalculationTask implements Serializable {
    int n;
    int calculate() {
        n += alfa(y(1, 2) - y(1, 1)); // placeholder: alfa and y stand for your formula terms
        return n;
    }
}
Send the calculation object via ACLMessage from the senderAgent:
request.setContentObject(new CalculationTask());
Receive the calculation object in the receiverAgent and perform the calculation on it, then reply, setting the completed task as the content of the response:
CalculationTask myTask = (CalculationTask) request.getContentObject();
myTask.calculate();
ACLMessage response = request.createReply();
response.setContentObject(myTask);
response.setPerformative(ACLMessage.INFORM);
send(response);
The senderAgent then receives the completed job:
ACLMessage inform = receive(); // or blockingReceive() to wait for the reply
CalculationTask completeTask = (CalculationTask) inform.getContentObject();
completeTask.process(); // or simply read the fields you need, e.g. completeTask.n
Creating local task agents (Solution 2)
The agent-oriented way of doing it would be to launch a task agent on each platform, have each task agent complete the task, and respond appropriately.

Problem attaching WatiN to IE

I am experimenting with WatiN for our UI testing. I can get tests to work, but I can't get IE to close afterwards.
I'm trying to close IE in my class clean up code, using WatiN's example IEStaticInstanceHelper technique.
The problem seems to be attaching to the IE thread, which times out:
_instance = IE.AttachTo<IE>(Find.By("hwnd", _ieHwnd));
(_ieHwnd is the handle to IE stored when IE is first launched.)
This gives the error:
Class Cleanup method Class1.MyClassCleanup failed.
Error Message: WatiN.Core.Exceptions.BrowserNotFoundException: Could not find an IE window matching constraint: Attribute 'hwnd' equals '1576084'. Search expired after '30' seconds.
Stack Trace: at WatiN.Core.Native.InternetExplorer.AttachToIeHelper.Find(Constraint findBy, Int32 timeout, Boolean waitForComplete)
I'm sure I must be missing something obvious; has anyone got any ideas about this one?
Thanks
For completeness, the static helper looks like this:
public class StaticBrowser
{
    private IE _instance;
    private int _ieThread;
    private string _ieHwnd;

    public IE Instance
    {
        get
        {
            var currentThreadId = GetCurrentThreadId();
            if (currentThreadId != _ieThread)
            {
                _instance = IE.AttachTo<IE>(Find.By("hwnd", _ieHwnd));
                _ieThread = currentThreadId;
            }
            return _instance;
        }
        set
        {
            _instance = value;
            _ieHwnd = _instance.hWnd.ToString();
            _ieThread = GetCurrentThreadId();
        }
    }

    private int GetCurrentThreadId()
    {
        return Thread.CurrentThread.GetHashCode();
    }
}
And the clean up code looks like this:
private static StaticBrowser _staticBrowser;
[ClassCleanup]
public static void MyClassCleanup()
{
_staticBrowser.Instance.Close();
_staticBrowser = null;
}
The problem is that when MSTEST executes the method with the [ClassCleanup] attribute, it will be run on a thread that isn't part of the STA.
If you run the following code it should work:
[ClassCleanup]
public static void MyClassCleanup()
{
    var thread = new Thread(() =>
    {
        _staticBrowser.Instance.Close();
        _staticBrowser = null;
    });
    thread.SetApartmentState(ApartmentState.STA);
    thread.Start();
    thread.Join();
}
The WatiN website briefly mentions here that WatiN won't work with threads that are not in the STA, but it isn't obvious that [TestMethod]s run in the STA while methods like [ClassCleanup] and [AssemblyCleanupAttribute] do not.
By default, when IE objects are destroyed, they auto-close the browser.
Your cleanup code may be trying to find a browser that has already closed, which is why you get the error.
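If the auto-close behaviour is what is biting you, one option (assuming the version of WatiN you are using exposes the IE.AutoClose property; treat this as a sketch rather than a verified fix) is to switch it off when you store the instance, so the browser stays open until the explicit Close() in your cleanup, e.g. in the StaticBrowser setter:
set
{
    _instance = value;
    _instance.AutoClose = false; // assumption: stops WatiN closing IE when the wrapper is disposed/finalized
    _ieHwnd = _instance.hWnd.ToString();
    _ieThread = GetCurrentThreadId();
}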
Fixed this myself by dumping MSTest and using MbUnit instead. I also found that I didn't need to use any of the IEStaticInstanceHelper stuff either; it just worked.

How can I use nested Async (WCF) calls within foreach loops in Silverlight?

The following code contains a few nested async calls within some foreach loops. I know the Silverlight/WCF calls are made asynchronously, but how can I ensure that my wcfPhotographers, wcfCategories and wcfSubCategories objects are ready before the foreach loops start? I'm sure I am going about this all the wrong way and would appreciate any help you could give.
private void PopulateControl()
{
List<CustomPhotographer> PhotographerList = new List<CustomPhotographer>();
proxy.GetPhotographerNamesCompleted += proxy_GetPhotographerNamesCompleted;
proxy.GetPhotographerNamesAsync();
//for each photographer
foreach (var eachPhotographer in wcfPhotographers)
{
CustomPhotographer thisPhotographer = new CustomPhotographer();
thisPhotographer.PhotographerName = eachPhotographer.ContactName;
thisPhotographer.PhotographerId = eachPhotographer.PhotographerID;
thisPhotographer.Categories = new List<CustomCategory>();
proxy.GetCategoryNamesFilteredByPhotographerCompleted += proxy_GetCategoryNamesFilteredByPhotographerCompleted;
proxy.GetCategoryNamesFilteredByPhotographerAsync(thisPhotographer.PhotographerId);
// for each category
foreach (var eachCatergory in wcfCategories)
{
CustomCategory thisCategory = new CustomCategory();
thisCategory.CategoryName = eachCatergory.CategoryName;
thisCategory.CategoryId = eachCatergory.CategoryID;
thisCategory.SubCategories = new List<CustomSubCategory>();
proxy.GetSubCategoryNamesFilteredByCategoryCompleted += proxy_GetSubCategoryNamesFilteredByCategoryCompleted;
proxy.GetSubCategoryNamesFilteredByCategoryAsync(thisPhotographer.PhotographerId,thisCategory.CategoryId);
// for each subcategory
foreach(var eachSubCatergory in wcfSubCategories)
{
CustomSubCategory thisSubCatergory = new CustomSubCategory();
thisSubCatergory.SubCategoryName = eachSubCatergory.SubCategoryName;
thisSubCatergory.SubCategoryId = eachSubCatergory.SubCategoryID;
}
thisPhotographer.Categories.Add(thisCategory);
}
PhotographerList.Add(thisPhotographer);
}
PhotographerNames.ItemsSource = PhotographerList;
}
void proxy_GetPhotographerNamesCompleted(object sender, GetPhotographerNamesCompletedEventArgs e)
{
wcfPhotographers = e.Result.ToList();
}
void proxy_GetCategoryNamesFilteredByPhotographerCompleted(object sender, GetCategoryNamesFilteredByPhotographerCompletedEventArgs e)
{
wcfCategories = e.Result.ToList();
}
void proxy_GetSubCategoryNamesFilteredByCategoryCompleted(object sender, GetSubCategoryNamesFilteredByCategoryCompletedEventArgs e)
{
wcfSubCategories = e.Result.ToList();
}
Yes, before you can proceed with the next step of the algorithm, you need to have gotten the result of the previous step, which can be hard when you have to use the async methods.
If this is not happening on the UI thread, then you could just block and wait for the response. For example, have each "completed" callback signal a synchronization primitive (using whatever is available in Silverlight; I don't know offhand whether ManualResetEvent is there, but if so, have the completed callback call .Set()), and then have your main PopulateControl method invoke the FooAsync() call and block until the ManualResetEvent is signalled (by calling .WaitOne()).
If this is on the UI thread and you really need to write a non-blocking solution, then it is much, much harder to code this up correctly in C#. You might consider using F# instead, where asyncs provide a nice programming model for non-blocking calls.
EDIT:
Pseudo-code example to block for results:
// class-level
ManualResetEvent mre = new ManualResetEvent(false);
// some method that needs to make WCF call and use results
void Blah() {
// yadda yadda
proxy.FooCompleted += (o,ea) => { ... mre.Set(); };
proxy.FooAsync(...);
mre.WaitOne(); // block until FooCompleted
// use results from FooCompleted now that they're here
// mre.Reset() if you intend to use it again later
}
I used a lambda for FooCompleted, but using a separate method like you have is fine too.
Alternatively, for each async method you are using to populate the collection, you can create a helper method that returns an IObservable, and then use a LINQ query to group the results.
E.g.:
private IObservable<Photographer> GetPhotographerNames()
{
    var photographers = Observable
        .FromEvent<GetPhotographerNamesCompletedEventArgs>(proxy, "GetPhotographerNamesCompleted")
        .Prune()
        .SelectMany(e => e.EventArgs.Result.ToObservable());
    proxy.GetPhotographerNamesAsync();
    return photographers;
}
And similarly:
private IObservable<Category> GetCategoryNamesFilteredByPhotographer(int photographerId) { ... }
private IObservable<SubCategory> GetSubCategoryNamesFilteredByCategory(int photographerId, int categoryId) { ... }
Now you can write a Linq query:
var pcs = from p in GetPhotographerNames()
from c in GetCategoryNamesFilteredByPhotographer(p.PhotographerId)
from s in GetSubCategoryNamesFilteredByCategory(p.PhotographerId, c.CategoryId)
select new {p, c, s};
This query will return a sequence of triplets (Photographer, Category, SubCategory). Now all you have to do is Subscribe to it and aggregate the triplets into the objects you use on the client, which should be pretty straightforward.
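As an illustration of that last step, the subscription and aggregation could look roughly like the sketch below. The property names (ContactName, PhotographerID, CategoryID, SubCategoryID, …) are taken from the proxy types in the question, while the dictionary-based grouping and the Dispatcher call are my own assumptions about how you want to shape and bind the data:
var photographerList = new List<CustomPhotographer>();
var photographersById = new Dictionary<int, CustomPhotographer>();
var categoriesByKey = new Dictionary<string, CustomCategory>();

pcs.Subscribe(
    t =>
    {
        // create the photographer the first time we see it
        CustomPhotographer photographer;
        if (!photographersById.TryGetValue(t.p.PhotographerID, out photographer))
        {
            photographer = new CustomPhotographer
            {
                PhotographerName = t.p.ContactName,
                PhotographerId = t.p.PhotographerID,
                Categories = new List<CustomCategory>()
            };
            photographersById.Add(t.p.PhotographerID, photographer);
            photographerList.Add(photographer);
        }

        // create the category under that photographer the first time we see it
        string categoryKey = t.p.PhotographerID + "/" + t.c.CategoryID;
        CustomCategory category;
        if (!categoriesByKey.TryGetValue(categoryKey, out category))
        {
            category = new CustomCategory
            {
                CategoryName = t.c.CategoryName,
                CategoryId = t.c.CategoryID,
                SubCategories = new List<CustomSubCategory>()
            };
            categoriesByKey.Add(categoryKey, category);
            photographer.Categories.Add(category);
        }

        // every triplet contributes one sub-category
        category.SubCategories.Add(new CustomSubCategory
        {
            SubCategoryName = t.s.SubCategoryName,
            SubCategoryId = t.s.SubCategoryID
        });
    },
    () =>
    {
        // all triplets have arrived - bind on the UI thread
        Deployment.Current.Dispatcher.BeginInvoke(
            () => PhotographerNames.ItemsSource = photographerList);
    });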