Log4net & NUnit: Test only succeeds once, subsequent tests fail because log file "used by another process" - locking

My open source .NET software uses Log4net with zero problems.
But when I test it with NUnit, I get the error below from the second test onwards. For instance, if I run a test twice, it succeeds the first time and fails the second time, whichever test it is:
System.IO.IOException : The process cannot access the file 'C:\Users\win7pro32bit\AppData\Roaming\cmissync\debug_log.txt' because it is being used by another process.
The log file is created by a static call to log4net.Config.XmlConfigurator.Configure(path)
I guess I should somehow close the log file in TearDown, but I can't see any log4net "close" method.
Adding <lockingModel type="log4net.Appender.FileAppender+MinimalLock" /> to the log4net configuration fixes the problem, but it degrades performance, so I would prefer a test-side solution that has no impact on production code.

As suggested by adrianbanks and Cole W in the comments, I don't really need to log to a file.
So this solved the problem:
[SetUp]
public void Init()
{
    // TraceAppender logs to System.Diagnostics.Trace rather than a file,
    // so no file handle is held across test runs.
    log4net.Config.BasicConfigurator.Configure(new log4net.Appender.TraceAppender());
}
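If file logging must stay enabled during the tests, another test-side option is log4net's global shutdown call, which closes all appenders and releases the file handle. A minimal sketch:

[TearDown]
public void Cleanup()
{
    // Closes every configured appender, releasing the FileAppender's lock
    // on the log file so the next test can reconfigure and reopen it.
    log4net.LogManager.Shutdown();
}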

How can an uncalled test affect another in Go?

I have a test function TestJobqueue() in https://github.com/VertebrateResequencing/wr/blob/develop/jobqueue/jobqueue_test.go that I can call in isolation: go test -tags netgo ./jobqueue -v -run 'TestJobqueue$'.
I recently started getting test failures related to boltdb (one of my dependencies) panicking with signal SIGBUS: bus error, or just ordinary test failures because the database couldn't be opened. But only when working off an NFS-mounted directory. Fair enough; I or boltdb have some kind of NFS-related bug.
But the thing I can't wrap my head around is that I only get these errors when an entirely different test function exists.
As per the comments in TestREST() in https://github.com/VertebrateResequencing/wr/blob/92fb61ccd7819c8f1edfa8cce8468c4250d40ea7/jobqueue/rest_test.go, if I call Serve(serverConfig) (a function in the package being tested, a call made many times in TestJobqueue() and other test functions) in that test function, TestJobqueue() fails. If I don't, it doesn't.
In short, the failure of tests in one test function can be controlled by the value of a boolean in a test function that I'm not running.
How is this possible?
Edit: to address some points brought up by the first answer: TestJobqueue() is being run in isolation; no other test runs before or after it. If the database file already exists, Serve() deletes it first and then creates a new one for the new set of tests. The odd thing I'm seeking an answer for is how an unexecuted function can have this side effect. I can demonstrate that it really is unexecuted by beginning or ending TestREST() with a panic call: the output of that panic is never seen, yet the TestJobqueue() failure can still be controlled by the boolean in TestREST() (if the panic comes at the end).
Edit2: this turns out to be caused by an unusual thing I do in TestJobqueue(), which is to call go test on itself. Needless to say, if you do this, strange things can happen...
"In short, the failure of tests in one test function can be controlled by the value of a boolean in a test function that I'm not running."
This is not a great summary. Your test starts a server; the other test clearly starts a server too, and that is where the problem lies. You appear to have commented out the bit of code that stops the server at the end of the test? You can't run two servers on the same port.
You probably have a port conflict or some network condition that is triggered by running the two servers at once, because they both appear to use a similar (identical?) config loaded like this:
config := internal.ConfigLoad("development", true)
Running with no config uses default values, avoiding the conflict, running with config causes the conflict. So to pin it down, try creating a config with one setting at a time till you find the config setting that causes the problem (most likely Port or WebPort). Alternatively, make sure the tests stop the server at the end.
[EDIT] Looks like you have narrowed it down to the DBFile config setting by changing one setting at a time. This implies the server starts a new db instance; if both tests try to use the same file for a new db, that would cause contention, and the second test to run would fail.
It's not entirely clear from your description what you're doing or what the problem is, so you could try improving it to state the exact sequence of actions and the resulting problem. If, for example, you have previously run a test which creates a db, it could affect later test runs because of the presence of a db file, so your tests are not completely independent.
[EDIT 2 - after further edits to question]
If commenting out TestREST completely solves your problem (as does a panic before it starts), and given that changing it breaks the other test, you are executing TestREST somehow.
Looking at your code for jobqueue_test, it appears to invoke go test itself, so you might be running more tests than you assume. Given you don't see the panic output, I'd suspect your use of exec.Command in this big test. Try removing bits of the failing test till it works, to narrow down exactly which invocation runs the other test. Calling go test within a test is pretty unusual!
https://github.com/VertebrateResequencing/wr/blob/develop/jobqueue/jobqueue_test.go#L2445

Is there a way to see if a Selenium test is being run via nunit or nunit-console?

So, I have a reasonable number of Selenium tests. I want them to run quietly in the background via a batch script, nunit-console, and RemoteWebDriver; I have that set up already. I also want to be able to run the same tests (with me watching, debugging, writing new tests, etc.) with other drivers in Visual Studio 2013 using nunit; I have that set up too. The problem is that I want to be able to run them at the same time.
I'm thinking of putting a check in to see if the calling program is nunit vs nunit-console to determine which driver to use, but I am a little uncertain how I should set this up.
I've considered:
bool isConsole = Process.GetProcessesByName("nunit-console")
    .FirstOrDefault(p => p.MainModule.FileName.StartsWith(@"C:\Program Files (x86)\NUnit 2.6.4\bin")) != default(Process);

if (isConsole)
{
    // remote
}
else
{
    // ff, chrome, etc...
}
This, however, would not allow me to run the suite quietly in the background WHILE running individual tests in Visual Studio.
I'm not sure if there's any difference when you're running a Selenium test, but with a normal nunit test you could do:

if ("nunit" == Process.GetCurrentProcess().ProcessName)
{
    ...
}
This gets you the name of the process that's actually executing the tests, rather than just checking whether the process happens to be running on the machine.
Running from within visual studio, I get a process name of "vstest.executionengine.x86", from the console, I get "nunit-console" and from the gui I get "nunit".
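Tying those names back to the original goal, here is a sketch of a driver factory keyed on the runner's process name; the hub URL and the exact set of names to match are assumptions to verify against your own environment:

using System;
using System.Diagnostics;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
using OpenQA.Selenium.Remote;

public static class DriverFactory
{
    public static IWebDriver Create()
    {
        string runner = Process.GetCurrentProcess().ProcessName;
        if (runner == "nunit-console" || runner == "nunit-agent")
        {
            // Quiet background run: go through the grid via RemoteWebDriver.
            return new RemoteWebDriver(
                new Uri("http://localhost:4444/wd/hub"),  // assumed hub URL
                DesiredCapabilities.Firefox());
        }
        // Interactive run (Visual Studio, NUnit GUI): local browser you can watch.
        return new FirefoxDriver();
    }
}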
Depending on the process model you're running your tests under, it's possible that you might need to check the parent process rather than the current process. With NUnit configured to run tests in a separate process, the process name reported by the above code is "nunit-agent". For some reason I can't get nunit-console to run in this mode at the moment, so I don't know whether it has a different process name that you can use instead.
If you do need to trace up the process tree to see what the parent process is, there are some excellent answers on how to do it on this question.
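For completeness, one known way to find the parent is a WMI query via the System.Management assembly; a sketch (the helper name is mine, not a library API):

using System;
using System.Diagnostics;
using System.Management; // add a reference to System.Management.dll

static class ProcessTree
{
    // Returns the parent of the given process via a WMI query, or null
    // if the parent has already exited.
    public static Process GetParent(Process process)
    {
        string query =
            "SELECT ParentProcessId FROM Win32_Process WHERE ProcessId = " + process.Id;
        using (var searcher = new ManagementObjectSearcher(query))
        {
            foreach (ManagementBaseObject obj in searcher.Get())
            {
                int parentId = Convert.ToInt32(obj["ParentProcessId"]);
                try { return Process.GetProcessById(parentId); }
                catch (ArgumentException) { return null; } // parent gone
            }
        }
        return null;
    }
}

With that, a test running under nunit-agent could inspect ProcessTree.GetParent(Process.GetCurrentProcess()).ProcessName to see which runner launched it.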

Selenium grid runs out of free slots

I have a large suite of SpecFlow tests executing against a Selenium grid running locally. The grid has a single host configured for a maximum of 10 Firefox instances. The tests are run from NUnit serially, so I would expect to need only a single session at a time.
However, when approximately half of the test cases have been run, the console window reporting output from the hub starts reporting
INFO: Node host [url] has no free slots
Why?
All the test cases are associated with a TearDown method that closes and disposes the WebDriver, although I haven't verified that absolutely every test gets to this method without failing. I would expect a maximum of one session to be active at once. How can I find out what is stopping the host from recycling those sessions?
edit #1:
I think I've narrowed down the cause of the issue - it is indeed to do with not closing the WebDriver. There are [AfterScenario] attributes on the teardown methods that are meant to do this, but they only match a subset of scenarios because they have parameters on them. Removing the parameters so that the teardown applies to every scenario fixes the session exhaustion (or seems to), but some tests expect to reacquire an existing session, so I'll have to fix those separately.
A bit of background: This test suite was inherited as part of a 'complete' solution and it's been left untouched and never run since delivery. I'm putting it back into service and have had to discover its quirks as I go - I didn't write any of this. I've had brief encounters with both Selenium and SpecFlow but never used the two together.
The issue turned out to be a facepalm-level fail - mostly in the sense that I didn't spot it. Some logging code was trying to write to a file that wasn't there; the thrown exception bypassed the call to Dispose() on the WebDriver and was then swallowed with no error reporting. The sessions were therefore hanging around. Removing the logging code fixed the session exhaustion.
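For reference, a minimal sketch of the unconditional hook described above, with the driver disposal wrapped so that a failure elsewhere in teardown can no longer skip it (the static Driver field is a stand-in for however the real suite shares its WebDriver):

using OpenQA.Selenium;
using TechTalk.SpecFlow;

[Binding]
public class WebDriverHooks
{
    // Stand-in for the suite's real shared driver storage.
    public static IWebDriver Driver;

    // No tag parameters on the attribute, so this runs after *every*
    // scenario, and the grid session is released even when the test fails.
    [AfterScenario]
    public static void CleanUpDriver()
    {
        if (Driver == null) return;
        try
        {
            Driver.Quit(); // ends the browser and frees the grid slot
        }
        finally
        {
            Driver = null; // never let a throwing Quit() leak the reference
        }
    }
}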
Look on the node (via remote desktop) and see what is happening on the box. It does sound like your test isn't closing out its session properly.

Dynamically change connection string for UI tests

I'm using the WebAii library for UI testing. I want to test whether my component displays the same records as there are in the database, so I need to switch my application's connection string to point at the test database just for the duration of the test run. What is the best way to do this? How can I dynamically change the connection string prior to running the app? Thanks
Are you storing the connection string in the Web.config file? If so, I would deploy a new Web.config just before starting the test and then use the command line to send an IISRESET.
FYI, these are the kinds of questions we answer all day long on our public forum dedicated to WebAii.
Cody
Telerik Technical Support
What kind of application is it? First, this is probably an indication of poorly factored code. Next, it is common to have a separate environment for testing code.
If you are, for example, deploying to ASP.NET with Visual Studio, you can use Web.config file transformations to set a different value when you deploy to e.g. test.contoso.com vs. www.contoso.com. The transformation syntax allows you to define a new connection string, or change an existing one from the base Web.config, when deploying a different configuration.
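For illustration, a transform in (say) Web.Test.config that swaps in a test connection string might look like the following; the AppDb name and the server values are placeholders, not anything from your project:

<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Replace the attributes of the entry whose name matches "AppDb". -->
    <add name="AppDb"
         connectionString="Server=testdb;Database=MyAppTest;Integrated Security=True"
         xdt:Transform="SetAttributes"
         xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>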
If you have a single environment, and control over it, you could probably write a couple of (Power)shell scripts to copy a web.config with "test" connection strings to your app root prior to the test. Then run a second script to reset the original web.config after the test is run.
If you have access to your deploy directory within the context you will be running your tests, you could even simply have a Web.test.config file included in your unit test project. In [AssemblyInitialize]:
File-copy \\{your app server}\{your app directory}\Web.config to \\{your app server}\{your app directory}\Web.config.orig.
File-copy Web.test.config to \\{your app server}\{your app directory}\Web.config.
Sleep for a few seconds?
Then do the reverse in [AssemblyCleanup].
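A sketch of those steps in code, using MSTest attributes since the answer names [AssemblyInitialize]; the UNC paths are placeholders to substitute:

using System.IO;
using System.Threading;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ConfigSwap
{
    // Placeholder paths; substitute the real app server and directory.
    private const string Deployed = @"\\appserver\appdir\Web.config";
    private const string Backup   = @"\\appserver\appdir\Web.config.orig";

    [AssemblyInitialize]
    public static void SwapInTestConfig(TestContext context)
    {
        File.Copy(Deployed, Backup, true);            // keep the original safe
        File.Copy("Web.test.config", Deployed, true); // deploy test settings
        Thread.Sleep(5000);                           // give the app time to recycle
    }

    [AssemblyCleanup]
    public static void RestoreOriginalConfig()
    {
        File.Copy(Backup, Deployed, true);
    }
}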
Other strategies exist, too. You could build an override into your application in debug mode that checks various things (a special file, additional config, cookies, an extra query string). Or you could have a Settings manager in your app that you can instrument in test setup when arranging your test (click through the UI to change DB settings).
Very likely, however, you may get the best compounding rewards by factoring your code to reduce dependencies. Then you can write unit tests which stub/mock/fake the database. You can use code coverage tools to verify that you've tested specific scenarios, or to see that additional integration tests would be duplication of coverage at that point.

C++ routine to delete data from database

Is there a mechanism in the googletest framework that allows the test to clear the data even after a test fails? (The code throws an exception and stops further execution, skipping the clean-up, if a test fails.)
Thanks!
Run the tests on a temporary, in-memory database.
Since SQLite operates on a single file, you can use SetUp() in a test fixture to copy a pre-configured database file to where your program expects the database to be, overwriting the "runtime" database file with the pre-configured one before every test.
That way every test gets a completely fresh database, initialized with all tables and any base data of your choice, without running any database creation scripts. That should keep test runs speedy.
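A minimal googletest sketch of that approach (the file names are assumed; requires C++17 for <filesystem>):

#include <gtest/gtest.h>
#include <filesystem>

class DatabaseTest : public ::testing::Test {
protected:
    // Copy a pristine, pre-built SQLite file over the one the code under
    // test opens. Because this runs before *every* test, state left over
    // by a failed (or crashed) predecessor is wiped automatically.
    void SetUp() override {
        std::filesystem::copy_file(
            "testdata/pristine.db", "runtime.db",
            std::filesystem::copy_options::overwrite_existing);
    }
};

TEST_F(DatabaseTest, SeesOnlyBaseData) {
    // ... open runtime.db and exercise the code under test ...
}

Note that the clean-up effectively moves from the end of a test (which a thrown exception can skip) to the start of the next one, which sidesteps the original problem entirely.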