How to Create Approval Rate Qualification in Mechanical Turk Command Line Tools

I am using Mechanical Turk's Command Line Tools interface to create a set of HITs. I would like to require that people accepting my HITs have an approval rate of at least 95% with at least 1,000 HITs completed already. I believe I need to create a qualification type and then somehow add it to my HIT properties file (see these excellent slides), but I was unable to find an example of precisely how to specify it. How can I use the Command Line Tools to specify such a requirement?

Nope, that's not true: you don't need to create a qualification type for Amazon's built-in system qualifications.
Using the command line tools, you just put this information in your HIT type .properties file, like this:
# System qualification: number of HITs approved (ID 00000000000000000040)
qualification.1:00000000000000000040
# GreaterThanOrEqualTo matches "at least 1,000"; plain greaterThan would exclude exactly 1,000
qualification.comparator.1:GreaterThanOrEqualTo
qualification.value.1:1000
# System qualification: percentage of assignments approved (ID 000000000000000000L0)
qualification.2:000000000000000000L0
qualification.comparator.2:GreaterThanOrEqualTo
qualification.value.2:95
When you run loadHITs with this .properties file as the -properties argument, these will be the required qualifications.
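For example, a minimal invocation might look like this (the file names are placeholders; -input and -question are the CLT's usual companion arguments):
loadHITs.sh -label my_hits -input my_hits.input -question my_hits.question -properties my_hits.properties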
Creating qualification types is only necessary for HITs with custom qualifications.
Shameless plug, but the command line tools are quite old and difficult to deal with. You might have more success using one of the APIs, like the Python mTurk API I wrote.
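For comparison, here is a minimal sketch of the same two requirements using boto3, Amazon's current Python SDK (not the library mentioned above); everything except the QualificationRequirements is a placeholder:

import boto3

# Sandbox endpoint shown for safety; drop endpoint_url to create real HITs.
client = boto3.client(
    "mturk",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

qualification_requirements = [
    {   # System qualification: number of HITs approved >= 1000
        "QualificationTypeId": "00000000000000000040",
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [1000],
    },
    {   # System qualification: percent of assignments approved >= 95
        "QualificationTypeId": "000000000000000000L0",
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [95],
    },
]

response = client.create_hit(
    Title="Example HIT",                      # placeholder
    Description="Example description",        # placeholder
    Reward="0.05",                            # placeholder, in dollars
    AssignmentDurationInSeconds=600,
    LifetimeInSeconds=86400,
    Question=open("my_hit.question").read(),  # placeholder question XML
    QualificationRequirements=qualification_requirements,
)
print(response["HIT"]["HITId"])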

Related

How do I design a Gherkin/SpecFlow/Selenium solution to have easily parametrizable logins

I am developing a solution for validating exams built on top of a web application. This implies that:
Multiple users, each with separate logins and tenants, will implement an application to match exam standards
The exam proctor will have to run a validator that checks the implemented application against the definition of what is correct for each step (e.g. in a given step, the unit price times the ordered quantity is the dollar amount to be ordered).
The validator should give exact reports of what occurred so the exam can be rated.
For this, we decided to implement a stack using Selenium for browser automation, and SpecFlow/Gherkin/Cucumber to interact with Selenium.
Right now the main issue I'm having is how to let the person who administers the exam easily validate, for 20 students, that each exam is correct. My current way of running things is having an NUnit console runner invoked by a PowerShell script, which then uses SpecFlow to create a detailed execution report.
Should my PowerShell script edit the feature files with tables containing the logins for each of the students, obtained from a CSV or something? Is there any way I can pass the CSV file to NUnit so it can be used in the tests?
Thanks,
JM
I would put the login information into the app.config or another file. Before you start the test run, change the values for that run. In the step definitions, you then read the values from it.
I agree with all the responses provided earlier. However, if you don't want to do any of those, you can set an environment variable with the student's login key (or even the credentials) and save login+password in a file, database, or even a CSV. At runtime, you just need to read this key and insert whatever logic you want. This also works on non-Windows machines, build machines, etc.
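A minimal C# sketch of the environment-variable-plus-CSV idea (the STUDENT_LOGIN variable name and the "login,password" file format are assumptions, not part of any framework):

using System;
using System.IO;
using System.Linq;

public static class Credentials
{
    // Looks up the row in a "login,password" CSV whose login matches the
    // STUDENT_LOGIN environment variable (hypothetical name) set per test run.
    public static (string Login, string Password) Current(string csvPath)
    {
        var key = Environment.GetEnvironmentVariable("STUDENT_LOGIN");
        var row = File.ReadLines(csvPath)
                      .Select(line => line.Split(','))
                      .First(cols => cols[0] == key);
        return (row[0], row[1]);
    }
}

Your step definitions can then call Credentials.Current(...) instead of hard-coding a login, and the PowerShell script only has to set the environment variable before each run.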

Automating Sequence of Manual Steps

I have a sequence of steps that a user performs manually, e.g. logging on to a remote UNIX shell, creating files/directories, changing permissions, running remote shell scripts and commands, deleting and moving files, and running DB queries. Based on the query results, the user performs further tasks: exporting the results to a file, running further shell commands/scripts, issuing DB insert statements, and so on.
By carrying out these steps, users accomplish various processes, such as data processing and validation.
What is the best way to automate the above scenario? Should we go for a workflow tool like Activiti, or is there a better framework/way to achieve these requirements?
My requirement is to work with open-source tools, preferably Java-based.
I am completely new to this, so any pointers would be appreciated.
The scenario you describe is certainly possible with a workflow tool like Activiti. Apache Camel or Spring Integration would be another possibility (as all the steps you mention are automatic system tasks).
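For a flavour of the Camel option, here is a minimal sketch of one such chain: run a DB query on a schedule and export the result to a file. The endpoint URIs, the SQL, and the paths are placeholder assumptions, and it needs a DataSource registered as "myDataSource" plus camel-jdbc and camel-csv on the classpath:

import org.apache.camel.builder.RouteBuilder;

public class NightlyJobRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("timer:nightly?period=86400000")          // fire roughly once a day
            .setBody(constant("SELECT * FROM orders")) // placeholder query
            .to("jdbc:myDataSource")                   // run it; rows come back as a list of maps
            .marshal().csv()                           // turn the rows into CSV
            .to("file:/data/exports");                 // placeholder output directory
    }
}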
A workflow framework would be a good option if you need one of the following:
you want to store the history data for 'audit purposes': who did what/when/how long did it take.
you want to visually model your steps, perhaps to discuss it with business people.
there is a need for human interaction between some of the steps
Your description reminds me of a software/account provisioning process.
There are a large number of provisioning tools on the market both Open Source or otherwise (Dell Crowbar is one options).
However, a couple of the comments you made in your response to Joram indicate that a more general-purpose tool such as Activiti may be an option:
"Swivel Chair" tasks - User tasks that may one day be automated
Visual model of process state
Most provisioning tools don't allow for generic user tasks and don't provide a (good) visual model of the process state.
However, they generally include remote script execution, which would otherwise need to be cobbled together as a service task if using a BPM tool.
I would certainly expand my research to include provisioning tools, as they sound like a better fit; however, if you can't find anything that works for you, a BPM platform provides a generic framework to build what you need.

Is there any way to create roles and profiles in bulk in SAP?

I need to create a lot of SAP roles and profiles with only small differences between them.
Is there any way to do this using ABAP, or is there a template for the file to be uploaded using the PFCG transaction?
I'm pretty new to SAP, so if you have any documentation about this, please share it.
Thanks in advance.
Quite often you can use the Legacy System Migration Workbench (transaction LSMW). The workbench works like a sort of macro recorder: you record the steps in a transaction and replay that recording any number of times, replacing the values you used in your recorded transaction with new ones, for instance read from a text file. There are a few limitations though:
handling table controls is quite tricky
the steps for all iterations have to be the same. You can't just omit some part of your recording because you only need it for some of the records.
A lot more complex would be creating your own batch input (that is the technology used to replay recorded transactions) using ABAP code you write yourself. There you would be more flexible, for instance adding different numbers of privileges to different roles. That batch input would then be executed using the CALL TRANSACTION ... USING statement (see here).
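A minimal ABAP sketch of that approach (the program, dynpro, and field names are placeholders; the real values come out of your recording):

DATA: lt_bdc TYPE TABLE OF bdcdata,
      ls_bdc TYPE bdcdata.

* First screen of the recorded transaction (placeholders, taken from the recording)
CLEAR ls_bdc.
ls_bdc-program  = 'ZPFCG_SCREEN'.    " placeholder
ls_bdc-dynpro   = '0100'.            " placeholder
ls_bdc-dynbegin = 'X'.
APPEND ls_bdc TO lt_bdc.

* Fill one field on that screen (placeholder field name and value)
CLEAR ls_bdc.
ls_bdc-fnam = 'ROLE_NAME_FIELD'.     " placeholder
ls_bdc-fval = 'Z_MY_ROLE_001'.
APPEND ls_bdc TO lt_bdc.

* Replay the recording in background mode
CALL TRANSACTION 'PFCG' USING lt_bdc MODE 'N' UPDATE 'S'.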
If you can manage to restrict the differences to organizational hierarchy fields, you can use the built-in function to derive roles. This way, you can create a master role and a number of derived roles that only differ in specific values. You should be able to use the LSMW mentioned by Dirk Trilsbeek to create the derived roles, if necessary.
If this is not possible, you could try to create the role once, download it, and check the contents of the file - it's basically a line-based fixed-width format with the first field of each line describing the line type, IIRC - just compare the contents of each line to the structures named. If you are familiar with any programming environment that can handle text output, it's not too hard to generate files containing the new roles with any toolkit you're comfortable with. I've successfully used Xtext/Xpand for this, but it doesn't really matter. You can then upload the roles from the generated text files.

Setting permissions based on the program trying to access a kernel module

I have written a kernel module that creates a /proc file and reads values written into it from a user program, say user.c.
Now I want to restrict permissions for this /proc file. I have restricted permissions based on user ID using the 'current' kernel variable, by checking current->euid.
My question: is there a way to restrict this based on the program too? I.e., only user.c should be able to write to this proc file and not any other program. I could not find any parameters in task_struct that would help me do this. Can you please suggest a way to do this?
In your proc writer implementation (that is, inside the kernel module), the best you can do is check the value of current (a struct task_struct *), which holds (among other things) valuable fields such as comm (the 16-character executable name, roughly argv[0]), pid, uid, etc. (basically, everything you see in /proc/<pid>/status). You can also check the original exe name (like you see in /proc/<pid>/exe) to see if it's a well-known path. You can then return an error.
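A minimal sketch of that check inside the module's write handler (the "user" comm value is an assumption for illustration; get_task_comm() takes a safe copy of current->comm):

#include <linux/proc_fs.h>
#include <linux/sched.h>
#include <linux/string.h>
#include <linux/uaccess.h>

static ssize_t my_proc_write(struct file *file, const char __user *buf,
                             size_t count, loff_t *ppos)
{
    char comm[TASK_COMM_LEN];

    get_task_comm(comm, current);   /* copy of the caller's 16-char comm */
    if (strcmp(comm, "user") != 0)  /* only the program named "user" may write */
        return -EPERM;

    /* ... copy_from_user() and handle the written value as before ... */
    return count;
}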
Caveat: Anyone could rename their opening process to be one of your "allowed" programs, if you go by "comm", and there are ways to defeat the "exe" protection. This will only make it slightly harder, but not impossible for someone to get around. A more comprehensive and stronger solution would require you to peek at the user mode memory of the program, which is possible, but too complicated for a brief answer.
Note: Permission parameters won't work, don't even bother. They go by classic UNIX ACL, which is u/g/o - so you can't filter by PID.

How to access results of Sonar metrics for use with applications like PowerPivot

I'm trying to run a number of applications with known failure rates through Sonar, with hopes of deciding which metrics are most valuable in determining whether a particular application will fail. Ultimately I'll be making some sort of algorithm that will look at the outputs of whatever metrics I'm using and generate a score from 1 to 100. I've got about 21 applications put through Sonar, and the results have been stored in a MySQL database. I originally planned to use PowerPivot to find relationships in the data, but it seems like the formatting of the tables doesn't lend itself well to that. Other questions on Stack Overflow have told me that Sonar's tables are unformatted and that I should instead use the Web Service API to get the information. I'm unfamiliar with web service APIs and was unsuccessful in trying to do what I wanted by reading Sonar's API documentation.
From an answer to another question:
http://nemo.sonarsource.org/api/timemachine?resource=org.apache.cxf:cxf&format=csv&metrics=ncloc,violations_density,comment_lines_density,public_documented_api_density,duplicated_lines_density,blocker_violations,critical_violations,major_violations,minor_violations
This looks very similar to what I'd like to have, except that I'm only looking at each application once (I'm analyzing a sample of all the live applications on a grid), which means the timemachine service isn't really what I'm looking for. Would it be possible to generate a similar table, except that, instead of the stats for a particular application per date, it showed the statistics for an application and all of its classes, etc.?
If you're not familiar with the WS API, you can also create your own Sonar plugin to achieve whatever you want: it is written in Java and it will execute on every analysis you run. This way, in the code of this custom plugin, you can do whatever you want: write the metrics you need to an output file, push them into a third-party system, etc.
Just take a look at how to write a plugin (most probably you will create a Decorator). There are also concrete examples to help you get started faster.
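A minimal sketch of such a Decorator against the plugin API of that Sonar generation (the metric choice and the output sink are assumptions):

import org.sonar.api.batch.Decorator;
import org.sonar.api.batch.DecoratorContext;
import org.sonar.api.measures.CoreMetrics;
import org.sonar.api.measures.Measure;
import org.sonar.api.resources.Project;
import org.sonar.api.resources.Resource;

// Visits every resource during analysis and dumps one metric per resource.
public class MetricsDumpDecorator implements Decorator {

    public boolean shouldExecuteOnProject(Project project) {
        return true; // run on every analyzed project
    }

    public void decorate(Resource resource, DecoratorContext context) {
        Measure ncloc = context.getMeasure(CoreMetrics.NCLOC);
        if (ncloc != null) {
            // Placeholder sink: write "resourceKey;value" wherever you need it
            System.out.println(resource.getKey() + ";" + ncloc.getValue());
        }
    }
}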