Can't analyze load test results in Visual Studio 2013 - testing

What does the Count field (the one shown alongside the Avg. Page Time field) in the Page Results section of the load test results represent, and what is its relation to Avg. Page Time? I can't understand what the count shows, because every time I run the test it shows a different count figure. What exactly is this count representing here? I am using Visual Studio 2013 Ultimate.
I am new to load testing, so any help would be appreciated.
Thanks.

The count is the number of times that page was requested - by any user - during the test.
Your test will have requested a particular page many times. The response time for each request for that page is saved in the test result set. How would you tell someone what the response time for that page is?
You could provide a list of each and every response time, but this would be hard to read. This is where statistics comes in handy - it's a language for summarising large sets of data.
You can condense the long list of values to a single value by stating an average. To give a better picture of your result set you could also say how big the list of response times is by stating the count.
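To make the relationship concrete, here is a purely illustrative sketch (the PageRequests table and its columns are assumptions, not the actual Visual Studio load test results schema): the count is simply how many response-time values went into the average.

SELECT PageUrl,
       COUNT(*)          AS [Count],         -- how many times this page was requested during the run
       AVG(ResponseTime) AS [Avg Page Time]  -- the mean of those individual response times
FROM PageRequests
GROUP BY PageUrl;

The number of requests issued varies from run to run (it depends on the user load and think times), which is why the Count figure is different each time you run the test while still describing the same page.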

Related

Summarize browsing records for a specific URL and user into one record per date in SQL Server

I'm facing a problem with my report. The report calculates how many times specific URLs (the company's applications) were viewed/opened, ignoring user data.
What I need is for the query to count the views while taking into account that a user may not only have opened the application but also browsed within it (e.g. filtered something, which is still the same application). The data shows the same user opening the same application within minutes or seconds of a previous entry: every click, filter, page refresh, etc. creates a new log record, which is misleading, because the report treats each record as a separate opening of the application. The applications (which are the same in every country) are used in different countries, so the log data sits on different servers.
There are 4 tables from different servers holding log entry data for the applications (URLs), and they have to be merged into one table containing the already summarized log entry data.
A small piece of one table with the data:
A small piece of the second table, just to show that the only difference between the tables is litintranet vs. wokintranet:
There you can see that for the LogApp IFP the same user browsed within seconds of a previous entry. It should have only one record (just for opening the app), but it has 3 records because the user probably filtered something or refreshed the page.
I need a query that summarizes this information and inserts the new summarized/reduced records into a new table. The new table will be used for reports as the correct record data.
The output should look like this:
How can the summarizing be done?
Thank you for your help
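One way to do the summarizing, sketched under assumptions (the table and column names LogEntries, UserId, LogApp and EntryTime are placeholders for your real schema, and the 30-minute gap that separates two distinct "openings" is an arbitrary choice), is to flag the first entry of each burst with LAG and then count only the flagged rows per user, application and date:

WITH Flagged AS (
    SELECT UserId, LogApp, EntryTime,
           -- 1 marks the start of a new "opening"; rows within 30 minutes of the previous
           -- entry for the same user and application are treated as the same opening
           CASE WHEN DATEDIFF(MINUTE,
                              LAG(EntryTime) OVER (PARTITION BY UserId, LogApp
                                                   ORDER BY EntryTime),
                              EntryTime) <= 30
                THEN 0 ELSE 1
           END AS IsNewOpen
    FROM LogEntries
)
SELECT UserId,
       LogApp,
       CAST(EntryTime AS date) AS EntryDate,
       SUM(IsNewOpen)          AS TimesOpened
INTO SummarizedLogEntries       -- the new table holding the reduced records
FROM Flagged
GROUP BY UserId, LogApp, CAST(EntryTime AS date);

The four per-country tables can be combined with UNION ALL (or a linked-server query) before this step so the summarizing runs over one unified set of log entries.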

BIRT Filter by Parameters Returns a Blank Report When It Shouldn't

let me further elaborate on my concern:
I am working on some test reports on BIRT to be familiarized with it and came across an unsettling problem.
I created a data source that connects to a test SQL Server database, a data set that returns the building, floor, room, and the number of employees for rooms that contain more than one employee, and two String parameters that let the user choose the building and floor so the report filters on just those.
The problem happens when I test it with a building and floor that I know are in the result set. For some reason the report returns a blank result, as if the building and floor are not present in the data set result.
I tried filtering by just the building first and then the floor but the same thing happens. If I take out the filter then the report shows up without a problem.
Why does this happen? I am assuming it's the way I input the parameters, but I am not sure.
Can anyone help me? Thanks!

Splunk Failed Login Report

I am relatively new to Splunk and I am trying to create a report that will display a hostname and the number of times that host failed to log in within the past five minutes, when it failed 3 or more times. The only way I was able to get the initial search results I want is to look only within the past 5 minutes, as you can see in my query:
index="wineventlog" EventCode=4625 earliest=-5min | stats count by host,_time | stats count by host | search count > 2
This returns the host and the count. The issue is that if I use this query in my report, it can run every five minutes, but the hosts that were listed previously get removed because they are no longer included in the search results.
I found ways to generate logs that I can then search for separately (http://docs.splunk.com/Documentation/Splunk/6.6.2/Alert/LogEvents) but it didn't work the way I expected.
I am looking for an answer to any of these questions that can help me get the intended results:
Can my original search be improved to still only get results where the failed logins were within 5 minutes but be able to search over any time period?
Is there a way to send the results from the query I already have to a report, where the results will not be cleared out when the search is run again?
Is there any other option I haven't considered to achieve the desired result?
If you only care about the last 5 minutes then search only the last 5 minutes. Searching more is just wasting resources.
Consider writing your results to a summary index (using collect) with a scheduled search and have your report/dashboard display values from the summary index.
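A minimal sketch of that approach, assuming a summary index named failed_login_summary (the name is made up and the index has to exist before collect can write to it). Schedule this to run every five minutes:

index="wineventlog" EventCode=4625 earliest=-5min
| stats count by host, _time
| stats count by host
| search count > 2
| rename count AS failed_count
| collect index=failed_login_summary

The report or dashboard then reads from the summary index over whatever time range you like, and earlier results are never cleared out because they are stored as indexed events:

index=failed_login_summary
| stats sum(failed_count) AS failed_logins by host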

Advanced MDX query

I am in the process of making OLAP cubes for data mining purposes.
The domain is instruments, which run tests, and the tests have status IDs 1, 2, 3, which mean OK, warning, and error. I have already deployed the cube and it is working perfectly.
My measure is the Sum of my tests. I have a time table associated with the test table, recording when each test was run.
I have four dimensions:
Instrument: holds information about the instrument.
Test: contains all the tests, with information about when each ran.
Status: contains the three statuses mentioned above.
Time: sorts the tests in time.
My question is about another status called 'NotRun'. Unlike the other statuses, NotRun tests are not saved in the database; they are calculated with a query.
NotRun is calculated by selecting all instruments from the instrument table and then excluding those instruments that appear in the test table within a given time period.
I want to use MDX to do what I described above, but instead of supplying a time period I want the cube to handle that for me dynamically.
I don't want to pick a specific year, as in the slicer below; instead I would like to handle that dynamically with my time dimension.
where ([Date].[Calendar Year].&[2002])
I am really stuck. Any idea how we can achieve that in Business Intelligence Development Studio 2008?
All the best,
Hassan.
To answer the question "how do I get MDX to pick a date member by itself" see my answer at enter link description here
Or did you want Business Intelligence Development Studio to pop up a box and ask for the dates each time you run the report?
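For what it's worth, here is a rough MDX sketch of both pieces, under assumptions about your cube (the cube name [Tests], the measure [Measures].[Test Count] and the hierarchy names are all placeholders): NotRun is counted as the instruments that have no tests, and the slicer picks the last calendar year that actually contains data instead of a hard-coded &[2002].

WITH MEMBER [Measures].[NotRun Count] AS
    Count(
        Except(
            [Instrument].[Instrument].[Instrument].Members,           // all instruments
            NonEmpty([Instrument].[Instrument].[Instrument].Members,  // instruments that do have tests
                     [Measures].[Test Count])
        )
    )
SELECT { [Measures].[Test Count], [Measures].[NotRun Count] } ON COLUMNS
FROM [Tests]
WHERE (
    Tail(
        NonEmpty([Date].[Calendar Year].[Calendar Year].Members,      // years that contain test data
                 [Measures].[Test Count]),
        1                                                              // keep only the most recent one
    ).Item(0)
)

If instead you want to be prompted for the dates each time, that is normally handled by a report or client-tool parameter rather than inside the MDX itself.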

hiding unnecessary fields in Access Report

At my workplace there is a "Daily Feedback" database where details are entered of any errors made by Customer Service Officers (CSOs), such as who made the mistake, when, and what service it was for. This is so we can gather data showing the areas where CSOs are repeatedly making mistakes, so we can feed this back to them and train them in those areas if need be.
I currently have a report where a CSO's name is entered along with a date range, and it produces the report showing the number of errors made for each service in that date range.
The code used is:
=Sum(IIf([Service]="Housing",1,0))
=Sum(IIf([Service]="Environmental Health",1,0))
and so on for each service.
The problem I have is that not every CSO does EVERY service, so there are usually a few results showing as "0", and I cannot be sure whether that's because they don't do the service or because they are just very good at that service.
Being absolutely useless at SQL (or any other possible way of fixing this), I cannot figure out how to HIDE the entries that produce the zero value.
Any help here would be greatly appreciated!
Assuming you have a table with the fields CSO, Service, and FeedbackComments, you could modify the report's record source to:
SELECT [CSO], [Service], Count([FeedbackComments]) AS [ErrorCount]
FROM [FeedbackTable]
GROUP BY [CSO], [Service];
Then services which have no records will not appear on the report.
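If you also want the record source to respect the CSO name and date range the report is run for, something along these lines may help (the [ErrorDate] field and the parameter names are assumptions about your table):

PARAMETERS [Which CSO] Text ( 255 ), [Start Date] DateTime, [End Date] DateTime;
SELECT [CSO], [Service], Count([FeedbackComments]) AS [ErrorCount]
FROM [FeedbackTable]
WHERE [CSO] = [Which CSO]
AND [ErrorDate] BETWEEN [Start Date] AND [End Date]
GROUP BY [CSO], [Service];

Because only combinations that actually have feedback rows survive the GROUP BY, the services a CSO never handles simply don't appear, which removes the ambiguous zeros.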
I don't understand exactly what you want, but I want to mention that you can use the COUNT() function along with SUM(). A count > 0 will reveal whether 0 means '0' instances or '0' errors.