I am wondering what the best approach to toggling App.config settings for C# would be. This involves our test suite: we would like the option to choose either a remote or a local environment when kicking the tests off. We use LeanFT and NUnit as our testing frameworks, and currently, in order to run the tests remotely, we have to add a <leanft></leanft> section to the App.config file. How can I specify different configurations at run time when I kick these tests off through the command line? Thanks!
Any LeanFT configuration can be set at runtime in code, by using the SDK namespace or the Report namespace instead of App.config.
Here's an example using NUnit 3 showing how you can achieve this:
using NUnit.Framework;
using HP.LFT.SDK;
using HP.LFT.Report;
using System;
namespace LeanFtTestProject
{
[TestFixture]
public class LeanFtTest
{
[OneTimeSetUp]
public void TestFixtureSetUp()
{
// Initialize the SDK
SDK.Init(new SdkConfiguration()
{
AutoLaunch = true,
ConnectTimeoutSeconds = 20,
Mode = SDKMode.Replay,
ResponseTimeoutSeconds = 20,
ServerAddress = new Uri("ws://127.0.0.1:5095") // local or remote, decide at runtime
});
// Initialize the Reporter (if you want to use it, of course)
Reporter.Init(new ReportConfiguration()
{
Title = "The Report title",
Description = "The report description",
ReportFolder = "RunResults",
IsOverrideExisting = true,
TargetDirectory = "", // which means the current parent directory
ReportLevel = ReportLevel.All,
SnapshotsLevel = CaptureLevel.All
});
}
[SetUp]
public void SetUp()
{
// Before each test
}
[Test]
public void Test()
{
Reporter.ReportEvent("Doing something", "Description");
}
[TearDown]
public void TearDown()
{
// Clean up after each test
}
[OneTimeTearDown]
public void TestFixtureTearDown()
{
// If you used the reporter, invoke this at the end of the tests
Reporter.GenerateReport();
// And perform this cleanup as the last leanft step
SDK.Cleanup();
}
}
}
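To drive the local-versus-remote choice from the command line, one option (a sketch assuming the NUnit 3 console runner; the "ServerAddress" run parameter name is just an example, not a LeanFT or NUnit built-in) is to pass a run parameter and read it with TestContext.Parameters before calling SDK.Init:
// Sketch: pick the LeanFT server address from an NUnit run parameter.
// Example invocation: nunit3-console MyTests.dll --params:ServerAddress=ws://remote-host:5095
[OneTimeSetUp]
public void TestFixtureSetUp()
{
    // Fall back to the local engine when no parameter is supplied.
    string serverAddress = TestContext.Parameters.Get("ServerAddress", "ws://127.0.0.1:5095");
    SDK.Init(new SdkConfiguration()
    {
        AutoLaunch = true,
        ServerAddress = new Uri(serverAddress)
    });
}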
I am trying to develop an IntelliJ plugin which provides a Language Server with the help of lsp4intellij by Ballerina.
The thing is, I've got a special condition: the list of completion items should be editable at runtime.
But I've not found any way to communicate new completion items to the LanguageServer process once it's running.
My current idea is to add an action to the plugin which builds a new jar with the Java Compiler API and then restarts the server with that new jar.
The problem with that is I need the source code from the plugin project, including the Gradle dependencies, to be accessible from the running plugin... any ideas?
If your requirement is to modify the completion items (coming from the language server) before displaying them in the IntelliJ UI, you can do that by implementing LSP4IntelliJ's
LSPExtensionManager in your plugin.
Currently we do not have proper documentation for LSP4IntelliJ's extension points, but you can refer to our Ballerina IntelliJ plugin as a reference implementation: it implements the Ballerina LSP extension manager to override/modify completion items at client runtime here.
For those who might stumble upon this - it is indeed possible to change the set of CompletionItems the LanguageServer provides at runtime.
I simply edited TextDocumentService.java (the library I used is LSP4J).
It works like this:
The main function of the LanguageServer needs to be started with an additional argument, which is the path to the config file in which you define the CompletionItems.
Called from LSP4IntelliJ, it would look like this:
String[] command = new String[]{"java", "-jar",
"path\\to\\LangServer.jar", "path\\to\\config.json"};
IntellijLanguageClient.addServerDefinition(new RawCommandServerDefinition("md,java", command));
The path String is then passed through to the constructor of your CustomTextDocumentService.java, which parses the config.json in a new Timer thread.
An Example:
public class CustomTextDocumentService implements TextDocumentService {
private List<CompletionItem> providedItems;
private String pathToConfig;
public CustomTextDocumentService(String pathToConfig) {
this.pathToConfig = pathToConfig;
Timer timer = new Timer();
timer.schedule(new ReloadCompletionItemsTask(), 0, 10000);
loadCompletionItems();
}
@Override
public CompletableFuture<Either<List<CompletionItem>, CompletionList>> completion(CompletionParams completionParams) {
return CompletableFuture.supplyAsync(() -> {
List<CompletionItem> completionItems;
completionItems = this.providedItems;
// Return the list of completion items.
return Either.forLeft(completionItems);
});
}
@Override
public void didOpen(DidOpenTextDocumentParams didOpenTextDocumentParams) {
}
@Override
public void didChange(DidChangeTextDocumentParams didChangeTextDocumentParams) {
}
@Override
public void didClose(DidCloseTextDocumentParams didCloseTextDocumentParams) {
}
@Override
public void didSave(DidSaveTextDocumentParams didSaveTextDocumentParams) {
}
private void loadCompletionItems() {
providedItems = new ArrayList<>();
CustomParser customParser = new CustomParser(pathToConfig);
ArrayList<String> variables = customParser.getTheParsedItems();
for(String variable : variables) {
String itemTxt = "$" + variable + "$";
CompletionItem completionItem = new CompletionItem();
completionItem.setInsertText(itemTxt);
completionItem.setLabel(itemTxt);
completionItem.setKind(CompletionItemKind.Snippet);
completionItem.setDetail("CompletionItem");
providedItems.add(completionItem);
}
}
class ReloadCompletionItemsTask extends TimerTask {
@Override
public void run() {
loadCompletionItems();
}
}
}
Whenever I try to restart the Avalonia application from the base application, I get an exception on the SetupWithLifetime() call: "Setup was already called on one of AppBuilder instances."
Application Startup code is:
public static void Start()
{
lifeTime = new ClassicDesktopStyleApplicationLifetime()
{
ShutdownMode = ShutdownMode.OnLastWindowClose
};
BuildAvaloniaApp().SetupWithLifetime(lifeTime);
lifeTime.Start(new[] { "" });
}
public static AppBuilder BuildAvaloniaApp()
=> AppBuilder.Configure<App>()
.UsePlatformDetect()
.LogToTrace()
.UseReactiveUI();
Application shutdown code is:
lifeTime.Shutdown();
lifeTime.Dispose();
Here's a link to functional example code, which produces this error: https://pastebin.com/J1jqppPv
Has anyone encountered such a problem? Thank you.
SetupWithLifetime calls Setup, which can only be called once. A possible solution is to call SetupWithoutStarting on BuildAvaloniaApp (which can also only be called once) and then reuse that builder, for example:
private static AppBuilder s_builder;
static void Main(string[] args)
{
s_builder = BuildAvaloniaApp().SetupWithoutStarting();
}
public static void Start()
{
lifeTime = new ClassicDesktopStyleApplicationLifetime()
{
ShutdownMode = ShutdownMode.OnLastWindowClose
};
s_builder.Instance.Lifetime = lifeTime;
s_builder.Instance.OnFrameworkInitializationCompleted();
lifeTime.Start(new[] { "" });
}
private static AppBuilder BuildAvaloniaApp()
=> AppBuilder.Configure<App>()
.UsePlatformDetect()
.LogToTrace()
.UseReactiveUI();
Additional note: Restarting the app probably won't work on macOS.
My scenario reads a file with hundreds of lines. Each line calls an API service, but the service may not be running. If I get a non-200 response (available inside the 'Then' method), I want to abandon the scenario and save time.
How can I tell TechTalk SpecFlow not to carry on with the other tests?
You can use a concept like this.
public static FeatureContext _featureContext;
public binding(FeatureContext featureContext)
{
    _featureContext = featureContext;
}
[Given(@"user login")]
public void login()
{
    // do test
    bool testPassed = true; // set based on the test result: true or false
    binding._featureContext["testPass"] = testPassed;
}
Then in BeforeScenario()
[BeforeScenario(Order = 1)]
public void BeforeScenario()
{
    Assert.IsTrue((bool)binding._featureContext["testPass"]);
}
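If the goal is specifically to skip every remaining scenario once the service is known to be down, a related sketch (the hook class, flag, and message below are illustrative, not SpecFlow built-ins) is to set a static flag from the step that receives the non-200 response and call NUnit's Assert.Ignore from a BeforeScenario hook:
[Binding]
public class ServiceAvailabilityHooks
{
    // Illustrative flag; set it to true from the Then step that sees a non-200 response.
    public static bool ServiceIsDown;

    [BeforeScenario(Order = 0)]
    public void SkipIfServiceDown()
    {
        if (ServiceIsDown)
        {
            // With the NUnit runner this should surface the scenario as ignored
            // instead of executing its remaining steps.
            Assert.Ignore("API service unavailable - skipping remaining scenarios.");
        }
    }
}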
Currently, openGoogle() does get called for each test case with the correct parameters. The problem is that setBrowser() does not appear to be working properly. It sets the browser the first time and completes the test successfully. However, when openGoogle() is invoked the second time, it continues to use the first browser instead of the new browser specified.
using NFramework = NUnit.Framework;
...
[NFramework.TestFixture]
public class SampleTest : FluentAutomation.FluentTest
{
string path;
private Action<TinyIoCContainer> currentRegistration;
public TestContext TestContext { get; set; }
[NFramework.SetUp]
public void Init()
{
FluentAutomation.Settings.ScreenshotOnFailedExpect = true;
FluentAutomation.Settings.ScreenshotOnFailedAction = true;
FluentAutomation.Settings.DefaultWaitTimeout = TimeSpan.FromSeconds(1);
FluentAutomation.Settings.DefaultWaitUntilTimeout = TimeSpan.FromSeconds(30);
FluentAutomation.Settings.MinimizeAllWindowsOnTestStart = false;
FluentAutomation.Settings.ScreenshotPath = path = "C:\\ScreenShots";
}
[NFramework.Test]
[NFramework.TestCase(SeleniumWebDriver.Browser.Firefox)]
[NFramework.TestCase(SeleniumWebDriver.Browser.InternetExplorer)]
public void openGoogle(SeleniumWebDriver.Browser browser)
{
setBrowser(browser);
I.Open("http://www.google.com/");
I.WaitUntil(() => I.Expect.Exists("body"));
I.Enter("Unit Testing").In("input[name=q]");
I.TakeScreenshot(browser + "EnterText");
I.Click("button[name=btnG]");
I.WaitUntil(() => I.Expect.Exists(".mw"));
I.TakeScreenshot(browser + "ClickSearch");
}
public SampleTest()
{
currentRegistration = FluentAutomation.Settings.Registration;
}
private void setBrowser(SeleniumWebDriver.Browser browser)
{
switch (browser)
{
case SeleniumWebDriver.Browser.InternetExplorer:
case SeleniumWebDriver.Browser.Firefox:
FluentAutomation.SeleniumWebDriver.Bootstrap(browser);
break;
}
}
}
Note: Doing it the way shown below DOES work correctly - opening a separate browser for each test.
public class SampleTest : FluentAutomation.FluentTest {
string path;
private Action currentRegistration;
public TestContext TestContext { get; set; }
private void ie()
{
FluentAutomation.SeleniumWebDriver.Bootstrap(FluentAutomation.SeleniumWebDriver.Browser.InternetExplorer);
}
private void ff()
{
FluentAutomation.SeleniumWebDriver.Bootstrap(FluentAutomation.SeleniumWebDriver.Browser.Firefox);
}
public SampleTest()
{
//ff
FluentAutomation.SeleniumWebDriver.Bootstrap();
currentRegistration = FluentAutomation.Settings.Registration;
}
[TestInitialize]
public void Initialize()
{
FluentAutomation.Settings.ScreenshotOnFailedExpect = true;
FluentAutomation.Settings.ScreenshotOnFailedAction = true;
FluentAutomation.Settings.DefaultWaitTimeout = TimeSpan.FromSeconds(1);
FluentAutomation.Settings.DefaultWaitUntilTimeout = TimeSpan.FromSeconds(30);
FluentAutomation.Settings.MinimizeAllWindowsOnTestStart = false;
path = TestContext.TestResultsDirectory;
FluentAutomation.Settings.ScreenshotPath = path;
}
[TestMethod]
public void OpenGoogleIE()
{
ie();
openGoogle("IE");
}
[TestMethod]
public void OpenGoogleFF()
{
ff();
openGoogle("FF");
}
private void openGoogle(string browser)
{
I.Open("http://www.google.com/");
I.WaitUntil(() => I.Expect.Exists("body"));
I.Enter("Unit Testing").In("input[name=q]");
I.TakeScreenshot(browser + "EnterText");
I.Click("button[name=btnG]");
I.WaitUntil(() => I.Expect.Exists(".mw"));
I.TakeScreenshot(browser + "ClickSearch");
} }
Dev branch: The latest bits in the Dev branch play nicely with NUnit's parameterized test cases in my experience.
Just move the Bootstrap call inside the test case itself and be sure to manually call I.Dispose() at the end. This allows for proper browser creation when run in this context.
Here is an example that you should be able to copy/paste and run if you pull the latest from GitHub on the dev branch:
[TestCase(FluentAutomation.SeleniumWebDriver.Browser.InternetExplorer)]
[TestCase(FluentAutomation.SeleniumWebDriver.Browser.Chrome)]
public void CartTest(FluentAutomation.SeleniumWebDriver.Browser browser)
{
FluentAutomation.SeleniumWebDriver.Bootstrap(browser);
I.Open("http://automation.apphb.com/forms");
I.Select("Motorcycles").From(".liveExample tr select:eq(0)"); // Select by value/text
I.Select(2).From(".liveExample tr select:eq(1)"); // Select by index
I.Enter(6).In(".liveExample td.quantity input:eq(0)");
I.Expect.Text("$197.70").In(".liveExample tr span:eq(1)");
// add second product
I.Click(".liveExample button:eq(0)");
I.Select(1).From(".liveExample tr select:eq(2)");
I.Select(4).From(".liveExample tr select:eq(3)");
I.Enter(8).In(".liveExample td.quantity input:eq(1)");
I.Expect.Text("$788.64").In(".liveExample tr span:eq(3)");
// validate totals
I.Expect.Text("$986.34").In("p.grandTotal span");
// remove first product
I.Click(".liveExample a:eq(0)");
// validate new total
I.WaitUntil(() => I.Expect.Text("$788.64").In("p.grandTotal span"));
I.Dispose();
}
It should find its way to NuGet in the next release, which I'm hoping happens this week.
NuGet v2.0: Currently only one call to Bootstrap is supported per test. In v1 we had built-in support for running the same test against all the browsers supported by a provider, but found that users preferred to split it out into multiple tests.
The way I manage it with v2 is to have a 'Base' TestClass that has the TestMethods in it. I then extend it once per browser I want to target and override the constructor to call the appropriate Bootstrap method, as sketched below.
A bit more verbose, but very easy to manage.
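A minimal sketch of that base-class pattern (the class names and the search steps below are illustrative, not part of FluentAutomation's API):
// Shared test logic lives in an abstract base fixture.
public abstract class GoogleSearchTestsBase : FluentAutomation.FluentTest
{
    [NUnit.Framework.Test]
    public void OpenGoogle()
    {
        I.Open("http://www.google.com/");
        I.WaitUntil(() => I.Expect.Exists("body"));
        I.Enter("Unit Testing").In("input[name=q]");
    }
}
// One subclass per browser; each constructor bootstraps the provider for that browser.
public class GoogleSearchTestsFirefox : GoogleSearchTestsBase
{
    public GoogleSearchTestsFirefox()
    {
        FluentAutomation.SeleniumWebDriver.Bootstrap(FluentAutomation.SeleniumWebDriver.Browser.Firefox);
    }
}
public class GoogleSearchTestsInternetExplorer : GoogleSearchTestsBase
{
    public GoogleSearchTestsInternetExplorer()
    {
        FluentAutomation.SeleniumWebDriver.Bootstrap(FluentAutomation.SeleniumWebDriver.Browser.InternetExplorer);
    }
}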
The following TestNG (6.3) test case generates the error "Invalid context for the recording of expectations"
@Listeners({ Initializer.class })
public final class ClassUnderTestTest {
private ClassUnderTest cut;
@SuppressWarnings("unused")
@BeforeMethod
private void initialise() {
cut = new ClassUnderTest();
}
@Test
public void doSomething() {
new Expectations() {
MockedClass tmc;
{
tmc.doMethod("Hello"); result = "Hello";
}
};
String result = cut.doSomething();
assertEquals(result, "Hello");
}
}
The class under test is below.
public class ClassUnderTest {
MockedClass service = new MockedClass();
MockedInterface ifce = new MockedInterfaceImpl();
public String doSomething() {
return (String) service.doMethod("Hello");
}
public String doSomethingElse() {
return (String) ifce.testMethod("Hello again");
}
}
I am making the assumption that, because I am using the @Listeners annotation, I do not require the -javaagent command line argument. This assumption may be wrong....
Can anyone point out what I have missed?
The JMockit-TestNG Initializer must run once for the whole test run, so using #Listeners on individual test classes won't work.
Instead, simply upgrade to JMockit 0.999.11, which works transparently with TestNG 6.2+, without any need to specify a listener or the -javaagent parameter (unless running on JDK 1.5).