Debug a test that only fails on TeamCity - asp.net-mvc-4

I have several randomly failing integration tests running on TeamCity. These tests do not fail locally. I've even tried making my local site hit the dev database.
Does anyone have experience debugging weird issues like this?
We are using MVC 4 (C#) and MSBuild.

As commented, you don't give a lot of information, but the straightforward answer to the question "how do I debug this?" would be to put a System.Diagnostics.Debugger.Break(); statement right before the failing test. Then you'll have the chance to attach the debugger, and off you go.
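For instance, in an NUnit test this might look like the following (a minimal sketch; the fixture and test names are placeholders for your failing test):

```csharp
using System.Diagnostics;
using NUnit.Framework;

[TestFixture]
public class FlakyIntegrationTests
{
    [SetUp]
    public void AttachDebugger()
    {
        // Pauses execution on the build agent so you can attach a
        // debugger to the test-runner process; remove once diagnosed.
        Debugger.Break();
    }

    [Test]
    public void TheTestThatOnlyFailsOnTeamCity()
    {
        Assert.Pass(); // stand-in for the real test body
    }
}
```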

The issue I was having was with a load-balanced solution. The two web servers' clocks were out of sync, causing odd timing issues.
One web server would save a record, then the other would pick it up for processing and see a timestamp that appeared to be in the past.
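If clock skew between load-balanced machines turns out to be the cause, one common mitigation is to compare timestamps with a small allowance rather than exactly. A hedged sketch (the Record type and the 30-second tolerance are hypothetical):

```csharp
using System;

public class Record
{
    public DateTime CreatedUtc { get; set; } // always store and compare UTC
}

public static class RecordProcessor
{
    // Allow for a little clock drift between load-balanced servers, so a
    // record saved "in the future" by the other box is still picked up.
    private static readonly TimeSpan ClockSkewTolerance = TimeSpan.FromSeconds(30);

    public static bool IsReadyForProcessing(Record record)
    {
        return record.CreatedUtc <= DateTime.UtcNow + ClockSkewTolerance;
    }
}
```

Syncing both servers against the same NTP source is of course the real fix; the tolerance just keeps the processing from being timing-sensitive.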

Related

Should E2E tests be run in production?

Apologies if this is open ended.
Currently my team and I are working on our end-to-end (E2E) testing strategy, and we are unsure whether we should be executing our E2E tests against our staging site or against our production site. We have gathered that there are pros and cons to both.
Pro Staging Tests
- Won't corrupt analytics data on production.
- Can detect failures before they hit production.
Pro Production Tests
- Will exercise the actual components of the system, including the database and other configuration, and may capture issues with prod configs.
I am sometimes not sure if we are conflating E2E with monitoring services (if such a thing exists). Does someone have an opinion on the matter?
In addition, when are E2E tests run? Since every part of the system is being tested, there doesn't seem to be an owner of the test suite, making it hard to determine when the E2E suite should be run. We were hoping that we could run E2E in some sort of pipeline before hitting production. Does that mean I should run these tests whenever either the front end or the back end changes? Or would you rather just run the E2E suite on an interval, regardless of any change?
In my team, experience has shown that test automation is best done on a dedicated test server periodically, with new code deployed only after it has been tested successfully several sessions in a row.
Local test run is for test automation development and debugging.
Test server - for scheduled runs, because no matter how good you are at writing tests, at some point the suite will take many hours in a row to run, and you need reliable statistics on it over time, with fake data that won't break the production server.
I disagree with @MetaWhirledPeas on the point of pursuing only fast test runs. Your priority should always be better coverage and reduced flakiness. You can always reduce the run time through parallelization, as in the sketch below.
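For instance, if the suite happens to be NUnit-based (an assumption; most runners have an equivalent), parallel execution is just a couple of assembly-level attributes:

```csharp
// AssemblyInfo.cs - a minimal sketch, assuming an NUnit 3 test project.
using NUnit.Framework;

// Run independent test fixtures in parallel to cut wall-clock time.
[assembly: Parallelizable(ParallelScope.Fixtures)]

// Cap the number of simultaneous worker threads; tune to your agent.
[assembly: LevelOfParallelism(4)]
```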
Running in production - I have seen many situations where a test leaves the official site in a funny state that makes the company's reputation go down. Other dangers are:
- Breaking your database.
- Making purchases from non-existent users and starting to lose money.
- Creating unnecessary strain on the official site's API, which makes the client experience bad during the run or can even cause the server to stop completely.
So, in our team we have a dedicated manual tester for the production site.
You might not have all the best options at your disposal depending on how your department/environment/projects are set up, but ideally you do not want to test in production.
I'd say the general desire is to use fake data as often as possible, and to curate it to cover real-world scenarios. If your prod configs and setup are different from your testing environment, do the hard work to ensure your testing environment configuration matches prod as much as possible. This is easier to accomplish if you're using CI tools, but discipline is required no matter what your setup may be.
When the tests run is going to depend on some things.
If you've made your website and dependencies trivial to spin up, and if you are already using a continuous integration workflow, you might be able to have the code build and launch tests during the pull request evaluation. This is the ideal.
If you have a slow build/deploy process you'll probably want to keep a permanent test environment running. You can then launch the tests after each deployment to your test environment, or run them ad hoc.
You could also schedule the tests to run periodically, but usually this indicates that the tests are taking too long. Strive to create quick tests to leave the door open for integration with your CI tools at some point. Parallelization will help, but your biggest gains will come from using cy.request() to fly through repetitive tasks like logging in, and using cy.intercept() to stub responses instead of waiting for a service.

How do I run Cucumber tests when testing a REST or GraphQL API?

This is my first time playing with Cucumber and also creating a suite which tests an API. My question is: when testing the API, does it need to be running?
For example, I've got this in my head:
- Start the Express server as a background task.
- Then, when that has booted up (how would I know if that happened?), run the Cucumber tests.
I don't really know the best practices for this, which I think is the main problem here, sorry.
It would be helpful to see a .travis.yml file or a bash script.
I can't offer you a working example. But I can outline how I would approach the problem.
Your goal is to automate the verification of a REST API or similar. That is, making sure that a web application responds in the expected way to a specific request.
For some reason you want to use Cucumber.
The first thing I would like to mention is that Behaviour-Driven Development, BDD, and Cucumber are not testing tools. The purpose of BDD and Cucumber is to act as a communication tool between those who know what the system should do, those who write code to make it happen, and those who verify the behaviour. That’s why the examples are written in an almost natural language.
How would I approach the problem then?
I would verify the vast majority of the behaviour by calling the methods that make up the API from a unit test or a Cucumber scenario. That is, verify that they work properly without a running server. And without a database. This is fast and speed is important. I would probably verify more than 90% of the logic this way.
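In .NET terms (borrowing C# from elsewhere in this thread; the controller and test below are hypothetical), that means calling the action method directly, with no server or HTTP involved:

```csharp
using System.Web.Mvc;
using NUnit.Framework;

// Hypothetical MVC controller standing in for one of the API's methods.
public class StatusController : Controller
{
    public JsonResult Ping()
    {
        return Json(new { status = "ok" }, JsonRequestBehavior.AllowGet);
    }
}

[TestFixture]
public class StatusControllerTests
{
    [Test]
    public void Ping_ReturnsAPayload()
    {
        // No server, no database: instantiate the controller and call it.
        var controller = new StatusController();

        JsonResult result = controller.Ping();

        Assert.IsNotNull(result.Data);
    }
}
```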
I would verify the wiring by firing up a server and verify that it is possible to reach the methods verified in the previous step. This is slow so I would do as little as possible here. I would, if possible, fire up the server from the code used to implement the verification. I would start the server as a part of the test setup.
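A sketch of that setup in C# (the question is about Express, but the same pattern applies from a cucumber-js Before hook; the executable name and ping URL here are hypothetical): start the server process, then poll a known endpoint until it answers, which also answers "how would I know it has booted?".

```csharp
using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading;

public static class TestServer
{
    private static Process _serverProcess;

    // Call from the test setup; returns once the server is reachable.
    public static void Start()
    {
        // Launch the application under test as a background process.
        _serverProcess = Process.Start("MyApi.exe"); // hypothetical path

        using (var client = new HttpClient())
        {
            for (var attempt = 0; attempt < 30; attempt++)
            {
                try
                {
                    var response = client.GetAsync("http://localhost:5000/ping").Result;
                    if (response.IsSuccessStatusCode)
                    {
                        return; // the server is up; tests can run now
                    }
                }
                catch (Exception)
                {
                    // Not listening yet (connection refused); retry below.
                }
                Thread.Sleep(1000);
            }
        }
        throw new TimeoutException("Server did not start within 30 seconds.");
    }

    // Call from the test teardown.
    public static void Stop()
    {
        _serverProcess?.Kill();
    }
}
```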
This didn’t involve any external tools. It only involved your programming language and some libraries. The reason for doing it this way is that I want it to be as portable as possible. The fewer tools you use, the easier it gets to work with something.
It has happened that I have done some of the setup in my build tool and had it start a server before running the integration tests. This is usually more heavyweight and something I avoid if possible.
So, verify the behaviour without a server. Verify the wiring with a server. It is important to only verify the wiring in this step. The logic has been verified earlier, there is no need to repeat it.
Speed, as in a fast feedback loop, is very important. Building and testing the entire system should, in a good world, take seconds rather than minutes.
I have a working example if you're interested (running on travis).
I use docker-compose to launch the API & required components such as database, then I run cucumber-js tests against the running stack.
docker-compose is also used for local development & testing.
I've also released a library to help write Cucumber tests for APIs: https://github.com/ekino/veggies.

ASP.NET MVC 4, Azure Caching: Error on both local and remote - "role discovery data is unavailable"

Whew...ok, been wrestling with this for a while and I can't figure out what is going on.
I am new to Azure caching, but at this point I have read a good bit and I think I have it set up right, but something is obviously wrong, so what do I know?
Ok, so first I setup a dedicated caching web worker role using this fine tutorial: http://berniecook.wordpress.com/2013/01/27/distributed-caching-in-azure-cache-worker-role/
I have an ASP.net MVC 4 website that is supposed to be using it.
I have my solution set to multiple startup projects, with my cloud caching project set to start first, but no matter what I do, I get the "role discovery data is unavailable" error.
Sometimes in my output log I get that the Role Environment failed to initialize, but not very often. Most of the time the output log says that it succeeds. Regardless of that, I still get the error above.
I was thinking that maybe the issue was because I was running on local azure storage and compute emulators, so I reconfigured and published the Cloud Service to Azure to see if that helped.
It didn't...
The fun part is that there have been exactly 2 times when it suddenly worked (both when I was working locally). 2 times out of about 100. I didn't do anything different...just ran the debugger and poof, it all worked. This at least lends a bit of credence to the idea that it is actually set up correctly.
Needless to say, this is putting a huge damper on my productivity so any advice would be appreciated.
Update
Ok, I have figured out a workaround of sorts...I have learned that the reason it consistently failed was that the development web server was holding onto a file, which prevented the caching server from launching correctly.
The workaround is to stop the web server each and every time I want to recompile and run the code. This is obviously not ideal, so any ways to make this more reliable would be appreciated.
Thanks,
David
I don't know if this helps, but I find that if I don't shut down both the storage and compute emulators, I get weird errors, so after doing an F5 and closing the browser down, I manually shut down both emulators.

Newbie question about continuous integration and Selenium tests

I'm very new to CI, but I have recently inherited a project where TeamCity has just been implemented, and I'm slowly getting my head around it. One thing we would like to do is run some Selenium tests as part of the build process. I've created the Selenium tests and can run them successfully via nunit-console on my development machine. The build server builds the project and then deploys it (a Web Forms application, as it happens) to a staging server.
Before each Selenium test we set the database to a known state, i.e. to only have certain records in place - that way each test is independent of the others (a sketch of this setup is below). The problem is that the staging server will be used by real "human" testers, so this would cause them a problem with the database continually being reset (records being removed, etc.). The question is: should I also deploy the application to a virtual directory on the build server, run the Selenium tests against that, and only deploy to the staging server if those tests pass?
Or have I got this stuff completely wrong? If so, how do you do it in your organisation?
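For reference, the per-test database reset mentioned above might look like this with NUnit (a sketch; the connection string and seed data are hypothetical):

```csharp
using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class CheckoutPageTests
{
    // Hypothetical connection string for the automated-test environment.
    private const string ConnectionString =
        "Server=build-server;Database=AutomatedTestDb;Integrated Security=true";

    [SetUp]
    public void ResetDatabaseToKnownState()
    {
        // Wipe and reseed the tables each test depends on, so every test
        // starts from the same data regardless of run order.
        using (var connection = new SqlConnection(ConnectionString))
        {
            connection.Open();
            using (var command = connection.CreateCommand())
            {
                command.CommandText =
                    "DELETE FROM Orders; " +
                    "INSERT INTO Orders (Id, Status) VALUES (1, 'Pending');"; // hypothetical seed
                command.ExecuteNonQuery();
            }
        }
    }

    // Selenium tests for the page would follow here.
}
```

Running the suite against a throwaway copy of the site (your virtual-directory idea) keeps this reset from disturbing the manual testers.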
I suggest that you do not mix your automated and manual testing by allowing your testers to access the server that is staged for your automated tests. This can cause false negatives in both your automated and your manual tests. These 'bugs' are nondeterministic and more than likely never reproducible (very bad news). This will cause you a lot of unnecessary 'bug reports' and build failures.
So here is what you can do...
In addition to your current setup, you can create an extra staged server for your manual testers. This is the least you should do. You should probably create several of them, one for each tester.
And here comes the rant...
In my current project we recently found out that our testers (we had ~10 of them) reused one server. They claimed that since our app is going to have multiple concurrent users, it was a good idea that while they were testing individual functionalities, they were also testing how those functionalities behave while multiple users work on the same server. WRONG!
If multiple users are a concern, there should be test cases for the specific concerns. If functionality#1 can interfere with functionality#2, it should be specifically tested and not just be 'tested-by-luck'.
Before this was explained to our manual testers, we had many false bug reports because one tester was simply stepping on another tester's toes (e.g. tester1 deleted a record that tester2 introduced to the system, etc.). This created a lot of unnecessary bug reports, and these bugs were never reproducible.
Sorry about the rant, I hope this still helps :)

IIS reset details

What exactly happens when we do an IISReset? What resources get released? We have an ASP.NET website (.NET 1.1) which uses Crystal Reports 11. Lately, running reports throws several Crystal Reports-specific exceptions, and then the users can't run reports anymore. Resetting IIS lets the users log back in and run the reports until it fails the next time. Knowing exactly what resources are released when IIS is reset will help us dig deeper to find the root cause. Any help?
Pretty much everything. All thread pools, ASP, ASP.NET, shared memory, etc. will all be purged. Doing an IISReset is basically the same as going to Services->WWW Service->Restart. Also, it will affect SMTP and FTP if you are running these services as well.
To narrow your problem down slightly (and to reduce impact), you should try putting your website in its own app pool. Then, when it next hangs, see if restarting the app pool fixes the problem. That way you are limiting things to just one running web application, not completely taking down IIS. If the problem persists and still requires an IISReset, you at least have one more data point to work with.
EDIT: In response to your additional comment, I would suggest you do as much logging as possible and see if the problem becomes obvious. http://learn.iis.net/page.aspx/579/advanced-logging-for-iis-70---custom-logging/
Obviously, a quick run through Event Viewer is probably a good idea.