Self-Test Support For Audiobooks

We are currently in the self-testing phase of developing a Sonos music API and encountered an issue with one of the test conditions, test_combinatorial_test_browse_to_leaf. Whenever we run the self-test, this test fails with an Instance message of "local variable 'response' referenced before assignment."
The structure of our content is very simple, with the root consisting of a list of mediaCollections of type "audiobook," each containing a list of mediaMetadata tracks. Our mediaCollections include tags for id, itemType, title, summary, author, narrator, canPlay and canResume (both always true), and albumArtUri.
We also looked into the test suite and found that the Instance message about "local variable 'response'" was due to the if-statement on ln. 57 of browse.py in the self-test code, which handles the types TRACK, PROGRAM, and STREAM, but not AUDIOBOOK, so "response" is never assigned. Is this intended behavior, or is this a bug in the self-test suite?
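For reference, the failing pattern presumably looks something like the sketch below. This is a reconstruction in Python for illustration only; the function name and client calls are placeholders, not the actual self-test code.

# Reconstructed sketch of the suspected pattern around ln. 57 (illustrative only)
def browse_to_leaf(item_type, item_id, client):
    if item_type == "track":
        response = client.get_media_uri(item_id)
    elif item_type == "program":
        response = client.get_media_metadata(item_id)
    elif item_type == "stream":
        response = client.get_media_uri(item_id)
    # No branch handles "audiobook", so for an audiobook leaf nothing assigns
    # 'response' and the return below raises:
    # UnboundLocalError: local variable 'response' referenced before assignment
    return response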

This is a bug in the self-test suite. Please submit your service with your configuration file and we will work with you to resolve any issues as part of the application review process.

Related

Run automated tests on a schedule to serve as a health check

I was tasked with creating a health check for our production site. It is a .NET MVC web application. There are a lot of dependencies and therefore points of failure, e.g. a document repository, Java web services, a SiteMinder policy server, etc.
Management wants us to be the first to know if ever any point fails. Currently we are playing catch up if a problem arises, because it is the client that informs us. I have written a suite of simple Selenium WebDriver based integration tests that test the sign-in and a few light operations, e.g. retrieving documents via the document API. I am happy with the result but need to be able to run them in a loop and notify IT when any of them fails.
We have a TFS build server but I'm not sure if it is the right tool for the job. I don't want to continuously build the tests, just run them. Also it looks like I can't define a build schedule more frequently than on a daily basis.
I would appreciate any ideas on how best to achieve this. Thanks in advance
What you want to do is called a suite of "Smoke Tests". Smoke Tests are basically very short and sweet, independent tests that test various pieces of the app to make sure it's production ready, just as you say.
I am unfamiliar with TFS, but I'm sure the information I can provide you will be useful and transferable.
When you say "I don't want to build the tests, just run them": any CI that you use needs to build them in order to run them. Basically, "building" equates to "compiling"; in order for your CI to actually run the tests, it needs to compile them first.
As far as running them goes, if the TFS build system has any use whatsoever, it will have a periodic build option. In Jenkins, I can specify a cron schedule to run. For example:
0 0 * * *
means "run at 00:00 every day (midnight)"
or,
30 5 * * 1-5
which means, "run at 5:30 every weekday"
Since you are making Smoke Tests, it's important to remember to keep them short and sweet. Smoke tests should test one thing at a time, for example (a short sketch follows this list):
testLogin()
testLogout()
testAddSomething()
testRemoveSomething()
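For illustration, here is a minimal sketch of what such short, independent checks could look like, written in Python with the requests library; the base URL, endpoints, and credentials are hypothetical, and Selenium-based tests would follow the same one-check-per-test shape.

import requests

BASE_URL = "https://example.com"  # hypothetical base URL of the site under test

def test_login():
    # One thing only: can a known user sign in?
    resp = requests.post(BASE_URL + "/login",
                         data={"user": "smoketest", "password": "secret"},
                         timeout=30)
    assert resp.status_code == 200

def test_get_document():
    # One thing only: can a document be retrieved via the document API?
    resp = requests.get(BASE_URL + "/api/documents/1", timeout=30)
    assert resp.status_code == 200

Each test stands alone, so a scheduled job can run the whole suite and report exactly which check failed.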
A web application health check is a very important feature. Smoke tests can be very useful in working out whether your website is running or not, and they can be automated to run at intervals to notify you that there is something wrong with your site, preferably before the customer notices.
However, where smoke tests fall short is that they only tell you that the website does not work; they do not tell you why. Because you are making external calls as the client would, you cannot see the internals of the application: is the database down, is it a network issue, is the disk full, is a remote endpoint not functioning correctly?
Now, some of these things should be identifiable from other monitoring, and you should definitely have an error log, but sometimes you want to hear it from the horse's mouth, and the best thing that can tell you how your application is behaving is your application itself. That is why a number of applications have a baked-in health check that can be called on demand.
Health Check as a Service
The health check services I have implemented in the past are all very similar and they do the following (a rough sketch follows this list):
Expose an endpoint that can be called on demand, e.g. /api/healthcheck. Normally this is private and is not accessible externally.
It returns a JSON response containing:
the overall state
the host that returned the result (if behind a load balancer)
the application version
a set of subsystem states (these will indicate which component is not performing)
The service should be resilient, any exception thrown whilst checking should still end with a health check result being returned.
Some sort of aggregate that can present a number of health check endpoints into one view
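As a rough, framework-agnostic sketch of that shape (written here in Python rather than the .NET stack I actually use; the subsystem check, field names, and version string are illustrative):

import socket

def check_database():
    # Illustrative subsystem check; a real one would open a connection,
    # run a trivial query, etc., and raise or return False on failure.
    return True

def health_check():
    checks = {"database": check_database}
    subsystems = {}
    for name, check in checks.items():
        try:
            subsystems[name] = "OK" if check() else "Degraded"
        except Exception:
            # Resilience: a failing check still ends with a result being returned.
            subsystems[name] = "Failed"
    overall = "OK" if all(s == "OK" for s in subsystems.values()) else "Degraded"
    return {
        "state": overall,              # the overall state
        "host": socket.gethostname(),  # which node answered (useful behind a load balancer)
        "version": "1.0.0",            # the application version (illustrative)
        "subsystems": subsystems,      # per-component states
    }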
Here is one I made earlier
After doing this a number of times I have started a library to take care of the main wire-up of the health check and expose it as a service. Feel free to use it as an example or use the NuGet packages.
https://github.com/bronumski/HealthNet
https://www.nuget.org/packages/HealthNet.WebApi
https://www.nuget.org/packages/HealthNet.Owin
https://www.nuget.org/packages/HealthNet.Nancy

$model->validate() method is printing 'test'

I am building a website on Yii and PostgreSQL. It keeps exiting inside the $model->validate() method after printing 'test'. I have searched the whole codebase; there is no exit call in the controller, the model, beforeValidate(), afterValidate(), or anywhere else in the project.
Question
How can I debug such a scenario? I only have access to FTP and NetBeans as an IDE, but no localhost. How can I find which file is printing and exiting, or do you have any other ideas?
Thank you,
Ram
Messages can be logged by calling either Yii::log or Yii::trace. The difference between these two methods is that the latter logs a message only when the application is in debug mode.
Yii::log($message, $level, $category);
Yii::trace($message, $category);
When logging a message, we need to specify its category and level. Category is a string in the format xxx.yyy.zzz, which resembles a path alias. For example, if a message is logged in CController, we may use the category system.web.CController. Message level should be one of the levels predefined by Yii (trace, info, profile, warning, or error).
For more details, refer to this Yii tutorial.

How can I test error messages on MVVMCross?

I'm using the ReportError method found in this sample, in a Portable Class Library project:
http://slodge.blogspot.com.br/2012/05/one-pattern-for-error-handling-in.html
What are you actually looking to test?
If you are looking to unit test the 'hub' that listens for and republishes errors, then you should be able to do that easily - just create an NUnit test and one or more mock subscribers - then you can test sending one or more messages.
If you are looking to test some of your services or view models to check that they correctly report errors, then create unit tests for them with mocks which simulate the error conditions - in the test, assert that the errors are correctly reported.
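For example, a sketch of that second case using mocks; this is plain Python with unittest.mock rather than MvvmCross/NUnit code, and the service, data source, and error reporter names are hypothetical, but the same shape translates directly to an NUnit test with a mocking library.

from unittest.mock import Mock

class CustomerService:
    # Hypothetical service that reports errors instead of letting them escape.
    def __init__(self, data_source, error_reporter):
        self.data_source = data_source
        self.error_reporter = error_reporter

    def load_customers(self):
        try:
            return self.data_source.get_customers()
        except Exception as error:
            self.error_reporter.report_error(error)
            return []

def test_load_customers_reports_errors():
    data_source = Mock()
    data_source.get_customers.side_effect = IOError("network down")  # simulate the error condition
    reporter = Mock()

    result = CustomerService(data_source, reporter).load_customers()

    assert result == []
    reporter.report_error.assert_called_once()  # the error was correctly reported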
If you are looking to just manually trigger the UI as part of development, then either find some way to reliably trigger an error (e.g. providing an incorrect password or switching the phone to flight mode) or consider adding a debug UI to your app with buttons to trigger mock errors.
If you are looking for integration or acceptance level testing, then either write manual test steps to trigger errors - or consider automated solutions like Frank, Calabash, Telerik UI testing or the excellent Windows Phone Test Framework (http://m.youtube.com/watch?v=2JkJfHZDd2g) - although I might be biased on the last one
As an update on the error reporting mechanism itself, I have now moved in some of my apps to the more general messenger plugin for this type of reporting. But other apps still use the old mechanism.

Send Jenkins notification only when new test fails

I have a Jenkins project that performs a sanity check on a couple of independent documents. The check result is written in JUnit XML format.
When one document test fails, the entire build fails. Jenkins can easily be configured to send an email to the committer in this situation, but I want to notify committers only when a new test fails or a previously failing test is fixed by the commit. They are not interested in failed tests for documents they have not edited. The email should contain only information about changes in test results, not the full test report. Is it possible to send this kind of notification with any currently available Jenkins plugin? What would be the simplest way to achieve this?
I had the same question today. I wanted to configure Jenkins sending notifications only when new tests fail.
What I did was to install email-ext plugin.
You can find there a special trigger that is called Regression (An email will be sent any time there is a regression. A build is considered to regress whenever it has more failures than the previous build.)
Regarding fixed tests, there is the Improvement trigger (An email will be sent any time there is an improvement. A build is considered to have improved whenever it has fewer failures than the previous build.)
I guess that this is what you are looking for.
Hope it helps
There's the email-ext plugin. I don't think it does exactly what you want (e.g. sending only emails to committers who have changed a file that is responsible for a failure). You might be able to work around that/extend the plugin though.
Also have a look at the new Emailer, which talks about new email functionality in core Hudson that is based on the aforementioned plugin.

Error handling: show error message or not?

Generally, in software design, which of the options below is preferred when there is a problem or error with a resource such as a database or file?
Show an error message
Do not show an error message and act as though the resource was empty (e.g. do not populate a GUI component)
For example, should the user see an empty DataGrid following which they complain, or should there be an error message? Which is better?
I don't see this as an either/or. Also, we need to consider all "users" of the system.
First consider the UI. Let's consider a contrived general case: you are populating a UI by calling a service which in turn uses a couple of databases (for example, a "current data" database and a "historic data" database).
There are at least these possibilities:
It all works, data is retrieved
It all works but as it happens there's no data for this particular query
Can't reach the service
Service is invoked, but one database is down
Service is invoked, but both databases are down
Then also consider your application's semantics. Can your application proceed in a "degraded" mode if all the data cannot be retrieved? For example, we can't query the history, but that doesn't stop us creating a new item.
Now also consider the roles here. There's the person using the UI, there's also support and maintenance people who need to know about and fix problems.
My general rules (a small sketch follows them):
First Failure Data Capture: whichever component first detects an error should log it in some detail. So, if the service is up but a database is down, the service should log the problem; if the service is down, the UI should log the problem. This log should be a technical record targeting the support roles.
UIs should be tolerant: if at all possible, run in a degraded mode. So if the service is down but (for example) local working is possible, put up an empty screen and continue. BUT ...
Always indicate a problem: the "no data for this query" and "databases unavailable" cases may both result in an empty screen. The user needs to know the status of the display: is it showing complete information, partial information (e.g. because one DB is down), or no information at all (e.g. the service or both DBs are down)? So have a "Status" field somewhere on the screen, giving messages such as
Historic data not currently available
or
There are problems retrieving information; if these persist please contact support ...
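A small sketch of those three rules working together (Python for brevity; the service call, logger name, and status strings are illustrative):

import logging

logger = logging.getLogger("ui")  # technical log targeting the support roles

def load_history(history_service):
    # Returns (rows, status_message) so the screen can always show its status.
    try:
        return history_service.query(), "Showing complete information"
    except Exception:
        # First Failure Data Capture: log the technical detail where the
        # failure is first seen...
        logger.exception("History service unavailable")
        # ...then degrade rather than die: an empty screen plus an honest status.
        return [], "Historic data not currently available"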
There are some pitfalls to each of the options
Showing error message
This is especially helpful when your application is in a testing stage or in public testing. Also, when a client meets an error, he or she can copy down the details and forward them to you.
However, sometimes this error message gets very ugly (call stacks and so on - remember ASP.NET?) and so large that it becomes difficult for clients to copy down the details.
Do not show an error message and act as though nothing happened =)
This is useful when you don't want error messages to clog up your software's UI design. But be reminded that it becomes difficult, and more error prone, when clients can't differentiate between an actual error and genuinely empty data on the GUI. The error stays there and nothing gets fixed.
My stand
Get the best of both worlds. In fact, most modern applications now have a very good error handling process. I'll take the example of Mozilla Firefox 3.
A fatal error occurs and Firefox crashes
The error is captured and stored in a file as an error report
The error-reporting application pops up, apologizing to the user
It asks the user whether they want to send the error report to the software dev team
Then it asks the user whether they want to restart the application
Or if the error is a warning or of lesser severity:
Show a simple error code and tell the user that there was an error with that action. Something like: "Error 123 at RequestSalary() Line 2"
The practice I usually use is:
If the error didn't happen due to user error, then you should try to handle the error quietly.
If the error occurred because of some external problem (such as no internet connection) then you should alert the user.
IMO you should show a message (albeit a user-friendly one and not something like "java.io.IOException: Connection timed out"). You could have a message box telling the user that an error occurred while getting the data and provide helpful tips like: try again after some time, check the network cable, etc.
Also allow the user to report that error to you (error reporting built into the app) so that it sends you the actual error and stack trace.