Get multiple HIT submissions or auto-generated answers in MTurk Worker Sandbox - mechanicalturk

Using MTurk during the testing phase, it can be useful to execute the same HIT multiple times to collect enough data for a web application. However, in the Worker Sandbox, only one assignment can be submitted.
Is there a way to get multiple results for the same HIT? For example, a way to fill in the form multiple times from the same user account, or a method to auto-generate inputs and submit the crowd-sourced forms?

Related

JMeter - Understanding the differences between thread groups, users and loop count

I've been asked to carry out some testing to assess the response time under varying load on an API that is being developed at my current job. I have only a very basic knowledge of JMeter, so I'm looking at it as my tool of choice.
Here's some quick background on what I'm trying to achieve:
I want to test the API by simulating a number of unique JSON calls, each made by a different user. Each unique JSON call contains a unique user ID and item number.
Let's say, for example, I want to test 10 different users, all making calls for different items to the API at once.
I'm a little confused by the terminology and the difference between thread groups, users, and loop count (which are found inside thread groups), and how best to use them. I couldn't find an answer on the JMeter site or on here, so I have come up with what I think may be a possible approach.
From my understanding of how JMeter works, achieving my aim would require me to create 10 separate thread groups within the test plan, each with its own unique JSON body data configured.
For example: thread group 1 containing request 1, thread group 2 containing request 2, thread group 3 containing request 3, thread group 4 containing request 4, thread group 5 containing request 5... and so on.
(Note: I would uncheck the option 'Run Thread Groups consecutively' so that all requests get sent at once.)
If the number of users and the loop count in each thread group are set to 1, then each call is made only once, meaning a total of 10 unique calls would be made during a single test run?
I guess what I'm really asking is: in my scenario, does the term "thread group" actually represent unique users, and if so, what does the number of threads (users) actually represent?
I think I've got this right, but it would be good if someone more experienced could confirm this or point me in the right direction if I've got it wrong.
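As a quick sanity check on that arithmetic: the total number of calls in one test run is the sum over all thread groups of threads × loops × samplers. A small Python sketch (the helper name is invented for illustration):

```python
# Hypothetical helper: total calls in a single run is the sum over all
# thread groups of threads x loops x samplers in that group.
def total_calls(thread_groups):
    """Each entry is a (threads, loops, samplers_in_group) tuple."""
    return sum(threads * loops * samplers
               for threads, loops, samplers in thread_groups)

# 10 thread groups, each with 1 user, loop count 1 and a single sampler:
print(total_calls([(1, 1, 1)] * 10))  # 10 unique calls per run
```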
A Thread Group is an element in a JMeter test plan which is responsible for executing the scenario/business workflow for a given set of users and for a given duration (or iteration/loop count).
Let's say I have a business flow like this:
users login
search for product
select a product
order the product
Another business flow for the same application would be:
Admin users login
Search for users
modify the order/cancel the order/issue refund
I would create 2 thread groups for 2 different work flows.
In production/live environment, I might have 1000 users searching for products and 500 users might be placing the order.
I might have only 10 admin users to provide support for 1000 users - to provide refund/modify the order etc. [some might do refund / some cancel / some modify]
So,
Thread Group 1 [users=1000, ramp-up=3600, loop count=forever, duration=7200 (2 hrs)]
  users login
  search for product
  If Controller [let only 500 users order a product]
    select a product
    order product
Thread Group 2 [users=10, ramp-up=600, loop count=forever, duration=7200 (2 hrs)]
  Admin users login
  Search for users
  Throughput Controller [40%]
    modify the order
  Throughput Controller [30%]
    cancel the order
  Throughput Controller [30%]
    some other support
As you mention you are new to JMeter, I would suggest you check these:
JMeter Tips/Tricks for beginners / Best Practices
JMeter - REST API Testing
I believe using a Synchronizing Timer to ensure all 10 threads fire at exactly the same moment will be the easier and smarter choice.
And here is some general information on how JMeter works:
JMeter starts the specified number of threads during the ramp-up period.
Each thread executes its samplers from top to bottom (or according to the Logic Controllers).
When there are no more samplers to execute and no loops left to iterate, the thread is shut down. Otherwise it starts over until the desired test duration or loop count is met or exceeded.
Hopefully this clarifies things.
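That lifecycle can be modelled with a minimal Python sketch (JMeter itself is configured through the GUI or a .jmx file; the names below are hypothetical and this only illustrates threads, ramp-up and loop count):

```python
import threading
import time

def run_user(user_id, samplers, loops):
    """A JMeter thread executes its samplers top to bottom, then loops."""
    results = []
    for _ in range(loops):
        for sampler in samplers:
            results.append((user_id, sampler))
    return results

def run_test_plan(num_threads, ramp_up_seconds, samplers, loops):
    """Threads are started evenly spread across the ramp-up period."""
    delay = ramp_up_seconds / num_threads
    threads, all_results = [], []
    lock = threading.Lock()

    def worker(user_id):
        user_results = run_user(user_id, samplers, loops)
        with lock:
            all_results.extend(user_results)

    for user_id in range(num_threads):
        t = threading.Thread(target=worker, args=(user_id,))
        t.start()
        threads.append(t)
        time.sleep(delay)  # stagger thread starts, like JMeter's ramp-up

    for t in threads:
        t.join()
    return all_results

# 10 users, 1-second ramp-up, each sending one request once:
results = run_test_plan(10, 1.0, ["POST /api/order"], loops=1)
print(len(results))  # 10 samples in total
```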

What to do when an acceptance test has varying user choices and you want to test each of them

I'm writing some acceptance tests for a donation form. I'm using Codeception. For the sake of this example, let's say that the donation form has 3 parts:
Enter your personal information
Enter either credit card or direct transfer details
Submit and receive e-mail confirmation
For the acceptance test I'd like to test the whole process, for both credit card AND direct transfer. Steps 1 and 3 are essentially the same between the two donation processes, but obviously you can't run the second step by itself (the donation form wouldn't submit without step 1).
So I'm wondering, would it be "normal" in this case to write two tests (e.g. canDonateWithCreditCard() and canDonateWithDirectTransfer()) that both test all three parts of the process? Even though that's partly testing the same thing twice?
If not, what would be the preferred way to do it?
This is perfectly acceptable. At my work we have a sizable automation suite where the same pages get exercised multiple times because of scenarios similar to the one you outlined above.
The only caveat I would mention is that when building your tests (I don't know how Codeception works), look to build them along the lines of the Page Object model (http://martinfowler.com/bliki/PageObject.html). This means that even though you have multiple tests that implement the same scenarios, each test doesn't have its own implementation of those steps.
This depends on your approach.
1. You can create two different test cases performing the action.
2. You can have logic in your test that takes the mode of transfer as an argument to the method and performs the activities accordingly.
It's always ideal to use the Page Object model to encapsulate all the actions of each page in its page class and to avoid redundancy.
If both the credit card and direct transfer actions navigate to a new page, create the page object according to the argument passed and call its method to perform the transfer.
A simple page object class can be created like this:
http://testautomationlove.blogspot.in/2016/02/page-object-design-pattern.html
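To make the idea concrete, here is a hedged sketch in Python (Codeception itself is PHP; the class and method names below are invented for illustration):

```python
# Hypothetical page objects: step 2 varies with the payment method,
# while steps 1 and 3 are shared by both test scenarios.
class CreditCardPage:
    def pay(self, amount):
        return f"paid {amount} by credit card"

class DirectTransferPage:
    def pay(self, amount):
        return f"paid {amount} by direct transfer"

PAYMENT_PAGES = {
    "credit_card": CreditCardPage,
    "direct_transfer": DirectTransferPage,
}

def donate(method, amount):
    """One parameterised flow: the shared steps live in one place, and only
    the payment page object is chosen per scenario."""
    steps = ["personal info entered"]            # step 1 (shared)
    page = PAYMENT_PAGES[method]()               # step 2 (varies)
    steps.append(page.pay(amount))
    steps.append("email confirmation received")  # step 3 (shared)
    return steps

print(donate("credit_card", 50)[1])  # paid 50 by credit card
```

Writing canDonateWithCreditCard() and canDonateWithDirectTransfer() as two thin wrappers over a shared flow like this keeps the duplication out of the test code, even though both tests still exercise all three steps.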

How to consolidate API calls for the ASANA API

I'm a freelance web dev and I work with a lot of clients across many different workspaces in Asana. Not being able to get a consolidated view makes this a tedious and difficult thing to manage, so I'm putting together my own little utility to help me get a 'superview' of tasks assigned to me in order of the due date. In order to make this easier for me to scan, I need to have the project name next to the task details.
The easiest way, in my mind, would be a single API call for all tasks assigned to me and request the project name, task name, task id, due date, and workspace name all at once.
The API doesn't seem to allow this consolidated type of request, however, so instead the workflow goes something like this:
API call to get all my workspaces
Loop through the workspaces, making an API call for each to get all tasks
Use PHP to sort those tasks accordingly
Loop through those tasks, making an API call for the first instance of each project in order to get the project name (I cache the data as I go so that I'm only making one call per project)
The issue I'm getting is a 500 error when I start making API calls to get the project details. I doubt I'm hitting the 100-calls-per-minute limit, but I'm still getting the errors nonetheless. In light of this, I'm looking for a way to make a consolidated call that contains all the data I need, but I can't seem to figure it out.
Anyone have some guidance on this?
Good news! We actually do support Input/Output options that allow you to specify which fields you want, including nested fields. So, while you still need to make separate calls for each workspace, you can do something like this:
workspaces = GET /workspaces
for id in workspaces:
    tasks = GET /workspaces/:id/tasks?assignee=me&opt_fields=name,due_on,projects.name
(If you're only interested in incomplete tasks, you can add &completed_since=now - or, if you want incomplete and recently completed tasks, &completed_since=... with the timestamp before which completed tasks should be excluded.)
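In any HTTP client the per-workspace call is just a GET with those query parameters. A hedged Python sketch (the helper name is invented; the endpoint and parameters follow the Asana REST call shown above):

```python
# Hypothetical helper that builds the per-workspace task request
# described above, including the nested projects.name opt_field.
def build_task_request(workspace_id, completed_since=None):
    params = {
        "assignee": "me",
        "opt_fields": "name,due_on,projects.name",  # nested field: project name
    }
    if completed_since is not None:
        params["completed_since"] = completed_since
    url = f"https://app.asana.com/api/1.0/workspaces/{workspace_id}/tasks"
    return url, params

url, params = build_task_request("12345", completed_since="now")
print(url)  # https://app.asana.com/api/1.0/workspaces/12345/tasks
```

Passing `url` and `params` to your HTTP library of choice (with your auth header) then replaces the per-project follow-up calls entirely.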
Additionally, 500 is not the code we send for rate limiting - it's likely an issue with the request itself. How are you requesting the project details?

Present variable information within a single mturk HIT

I'd like to use mturk to have 10 workers visit my website, log in with a test account, and enter some information on their profile. I don't want them to see each other's entries, so each worker should get login information for a different test account when they view the HIT.
This almost looks like what mturk's template feature is for -- I could upload a CSV with the information for each test account. But if I understand correctly, that will make 10 separate HITs, and allow one worker to do all 10 of them. Is there any way to have mturk put information that varies between workers into a single HIT?
Here are the solutions I'm currently aware of:
Use the CLI to automate creation of a bunch of different HITs. This would be a lot of work, and also make approving and retrieving the results cumbersome.
Direct workers to a survey website that's capable of doing what I want, and have them get the login information there.
Dynamically fill in part of the HIT using an AJAX request to an external website and database. That seems like crazy overkill for something so simple.
Are there other options?
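For what it's worth, the external-database variant of the third option can be very small. A hedged Python sketch (all names hypothetical) of an allocator that hands each worker its own test account:

```python
# Hypothetical allocator: hands each worker the next unused test account.
# In practice this would sit behind a small web endpoint that the HIT queries
# (e.g. with the workerId that MTurk appends to an external question URL).
class AccountAllocator:
    def __init__(self, accounts):
        self.available = list(accounts)
        self.assigned = {}  # worker_id -> account

    def account_for(self, worker_id):
        # Idempotent: a worker who reloads the HIT gets the same account back.
        if worker_id in self.assigned:
            return self.assigned[worker_id]
        if not self.available:
            raise RuntimeError("no test accounts left")
        account = self.available.pop(0)
        self.assigned[worker_id] = account
        return account

allocator = AccountAllocator([("user1", "pw1"), ("user2", "pw2")])
print(allocator.account_for("WORKER_A"))  # ('user1', 'pw1')
```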

Bumping an Amazon Mechanical Turk HIT

We have a web-based game for two players, which we offer via Amazon Mechanical Turk. For each game we need two players who will enter simultaneously, or at most 1 minute apart. We noticed that in the first few minutes after we publish the HIT we get many workers, because the HIT is on the first page of their search results, but later the rate drops as the HIT moves to a later page. So, in order to get enough simultaneous workers, we had to remove the HIT and open a new one.
Is it possible, instead of deleting a HIT and opening a new one, to somehow "bump" / "poke" the old HIT to make it appear new?
This is possible in many ad websites, when after you publish an ad, you can bump it to the head of the ad list.
Using TurkPrime.com it is possible to bump a HIT by using the "Restart" feature. It closes the first HIT and opens a new one, excluding all workers who completed the first HIT. The effect is that it bumps the HIT to the top of Mechanical Turk when sorted by date.
With TurkPrime you use your own Amazon MTurk account, but they have an option for requesters with no MTurk account.
The only surefire way I know of to bump a HIT is to create new HITs of the same HIT Type. Adding HITs to a type bumps the creation date of the entire HIT Type for sorting purposes, which is what workers often sort by. See my projects, which use this approach:
https://github.com/HarvardEconCS/turkserver-meteor
https://github.com/HarvardEconCS/TurkServer
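A hedged Python sketch of that approach (assuming `client` is a boto3 MTurk client; the IDs, question XML, and default values are placeholders):

```python
# Hedged sketch of "bumping" by adding a fresh HIT to an existing HIT Type.
# `client` is assumed to be a boto3 MTurk client, e.g.
#     client = boto3.client("mturk", region_name="us-east-1")
def bump_hit(client, hit_type_id, question_xml, assignments=2, lifetime=3600):
    """Create a new HIT under the same HIT Type; since workers often sort by
    creation date, the whole HIT Type is effectively bumped to the top."""
    return client.create_hit_with_hit_type(
        HITTypeId=hit_type_id,
        Question=question_xml,
        MaxAssignments=assignments,
        LifetimeInSeconds=lifetime,
    )
```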
The other way to get concurrent workers for your HITs is to post on worker forums:
http://www.mturkforum.com/
http://www.cloudmebaby.com/
http://www.reddit.com/r/mturk
http://www.mturkgrind.com/
http://www.turkernation.com/
Generally, if you are going to do things that require more than two workers at the same time, you need to establish communication with the workers and schedule something in advance.