I have several projects created in the web interface, each with several batches that have already ended. The max assignments per task is 3. I need to add 2 more assignments for each HIT; is it possible?
I've tried using the API on an in-progress batch:
mturk.create_additional_assignments_for_hit(HITId=HIT_ID, NumberOfAdditionalAssignments=2)
and the response:
{'ResponseMetadata': {'RequestId': '.....some id ...',
                      'HTTPStatusCode': 200,
                      'HTTPHeaders': {'x-amzn-requestid': '.....some id ...',
                                      'content-type': 'application/x-amz-json-1.1',
                                      'content-length': '2',
                                      'date': 'Thu, 20 Jan 2022 12:20:02 GMT'},
                      'RetryAttempts': 0}}
But I can't see any update in the web interface for the 2 extra assignments.
Found the answer.
First, the Requester UI (RUI) and the API are not fully connected, such that not all changes in the API will be visible in the RUI.
Here is my answer using the Python API.
To "revive" old HITs and add new assignments to them, first create the client:
import boto3
import datetime

mturk = boto3.client('mturk',
                     aws_access_key_id='xxxxxxxxxxxx',
                     aws_secret_access_key='xxxxxxxxxxxxx',
                     region_name='us-east-1',
                     endpoint_url='https://mturk-requester.us-east-1.amazonaws.com')
Extend the expiration date for the HIT (even if it's already passed):
mturk.update_expiration_for_hit(HITId=HIT_ID_STRING, ExpireAt=datetime.datetime(2022, 1, 23, 20, 0, 0))
Then increase the max assignments; here we add 2 more:
mturk.create_additional_assignments_for_hit(HITId=HIT_ID_STRING, NumberOfAdditionalAssignments=2)
That's it. You can see that NumberOfAssignmentsAvailable increased by 2 and that MaxAssignments increased as well:
mturk.get_hit(HITId=HIT_ID_STRING)
'MaxAssignments': 5,
'NumberOfAssignmentsPending': 0,
'NumberOfAssignmentsAvailable': 2,
'NumberOfAssignmentsCompleted': 3
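If you want to do this for every HIT rather than one at a time, you can loop over them. A minimal sketch, reusing the mturk client from above and assuming you simply want to revive everything the account can list (list_hits pages through results via NextToken):

import datetime

new_expiry = datetime.datetime(2022, 1, 23, 20, 0, 0)  # example date from above
next_token = None
while True:
    kwargs = {'MaxResults': 100}
    if next_token:
        kwargs['NextToken'] = next_token
    page = mturk.list_hits(**kwargs)
    for hit in page['HITs']:
        # same two calls as above, applied to each HIT
        mturk.update_expiration_for_hit(HITId=hit['HITId'], ExpireAt=new_expiry)
        mturk.create_additional_assignments_for_hit(
            HITId=hit['HITId'], NumberOfAdditionalAssignments=2)
    next_token = page.get('NextToken')
    if not next_token:
        break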
I want to know if this is possible. I have 2 APIs I am testing.
API 1 gives a list of all jobs posted by the user.
Response =
"jobId": 15596, "jobTitle": "PHP developer"
API 2 gives the following response.
"total CVs": 19, "0-7days": 12,"status": "New Resume"
Meaning: in the "New Resume" bucket we have a total of 19 CVs, and of those 19 CVs, 12 have an age of 0-7 days. This response relates to the jobs posted.
When I hit the APIs directly I get the correct numbers, but on the front end API 1 will be used as a dropdown to select a job, and then the New Resume status, ageing, and total CVs will be shown for that job.
What I want to know: is it possible to test the two APIs together, sort of using a filter like on the front end, or is the only way to test to check whether each response I get is correct on its own?
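For what it's worth, you can mimic that front-end filter in an integration test by chaining the calls: take each jobId from API 1 and feed it to API 2, then assert the numbers are internally consistent. A minimal Python sketch; the base URL, paths, and query parameter are placeholders, and only the response field names come from the examples above:

import requests

BASE = 'https://example.com/api'  # placeholder

def test_cv_stats_per_job():
    jobs = requests.get(BASE + '/jobs').json()       # API 1: job list
    for job in jobs:
        stats = requests.get(BASE + '/cv-stats',     # API 2: CV buckets
                             params={'jobId': job['jobId']}).json()
        # a bucket total can never be smaller than one of its age slices
        assert stats['total CVs'] >= stats['0-7days']

This only works if API 2 actually accepts a job identifier as a parameter; if it does not, the UI-style filtering cannot be reproduced at the API level, and checking each response on its own is all you can do.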
I have a react-native application and I'm trying to get Glucose Measurements from an Accu-Chek Guide device.
I have limited knowledge of BLE, and this Stack Overflow question helped me a lot to understand Bluetooth and retrieving glucose measurements:
Reading from a Notify Characteristic (Ionic - Bluetooth)
So, what I'm doing in my code:
1. Connect to the BLE peripheral
2. Monitor the Glucose Measurement & Record Access Control Point characteristics
3. Send 0x0101 (Report stored records | All records) to the Record Access Control Point
4. Decode the response
So far I have steps 1-3 working, but I don't know how to decode the notification from the Glucose Measurement characteristic:
Notification response of Glucose Measurement
[27, 4, 0, 195, 164, 7, 7, 14, 11, 6, 5, 90, 2, 119, 194, 176, 195, 184, 0, 0]
Notification of Record Access Control Point
[6, 0, 1, 1]
I am assuming this is the Bluetooth SIG adopted Glucose Service (the standard service for blood-glucose meters), the specification of which is available from:
https://www.bluetooth.com/specifications/gatt/
Looking at the XML for the Glucose Measurement characteristic gives more detail on how the data is structured.
https://www.bluetooth.com/wp-content/uploads/Sitecore-Media-Library/Gatt/Xml/Characteristics/org.bluetooth.characteristic.glucose_measurement.xml
There will be a little bit of work to do to unpack the data.
For example, the first byte stores the information for the first field, named Flags; you need to look at the individual bits (only the first 5 are used) to see which optional fields are present.
The next field is Sequence Number, which is a uint16 and so takes two bytes. Worth noting here that Bluetooth typically uses little endian.
Next is the Base Time field, which refers to https://www.bluetooth.com/wp-content/uploads/Sitecore-Media-Library/Gatt/Xml/Characteristics/org.bluetooth.characteristic.date_time.xml and takes the next 7 bytes.
Because some of the 9 fields in the characteristic take more than one byte, you end up seeing 20 bytes for the 9 fields.
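As a rough illustration, here is a minimal Python sketch of that unpacking, following the field layout from the XML (the same byte logic ports directly to JavaScript). The function and field names are my own, and the trailing fields are only read when the corresponding flag bit says they are present:

import struct

def parse_sfloat(raw):
    # IEEE-11073 16-bit SFLOAT: 12-bit signed mantissa, 4-bit signed exponent
    mantissa = raw & 0x0FFF
    if mantissa >= 0x0800:
        mantissa -= 0x1000
    exponent = raw >> 12
    if exponent >= 0x8:
        exponent -= 0x10
    return mantissa * (10 ** exponent)

def parse_glucose_measurement(values):
    data = bytes(values)
    flags = data[0]                                      # only bits 0-4 are used
    sequence, year = struct.unpack_from('<HH', data, 1)  # little-endian uint16s
    month, day, hours, minutes, seconds = data[5:10]     # rest of Base Time
    result = {'flags': flags, 'sequence': sequence,
              'base_time': (year, month, day, hours, minutes, seconds)}
    offset = 10
    if flags & 0x01:                        # Time Offset present (sint16)
        result['time_offset_min'] = struct.unpack_from('<h', data, offset)[0]
        offset += 2
    if flags & 0x02:                        # Concentration + Type/Sample Location
        raw = struct.unpack_from('<H', data, offset)[0]
        result['concentration'] = parse_sfloat(raw)
        result['units'] = 'mol/L' if flags & 0x04 else 'kg/L'
        result['type'] = data[offset + 2] & 0x0F
        result['sample_location'] = data[offset + 2] >> 4
        offset += 3
    if flags & 0x08:                        # Sensor Status Annunciation (uint16)
        result['sensor_status'] = struct.unpack_from('<H', data, offset)[0]
    return result

As for the Record Access Control Point notification [6, 0, 1, 1]: that is simply the success confirmation, i.e. opcode 0x06 (Response Code), operator 0x00, request opcode 0x01 (Report stored records), and response value 0x01 (Success).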
I want to apply custom logic over a dataset stored in Redshift.
Example of input data:
userid, event, fileid, timestamp, ....
100000, start, 120, 2018-09-17 19:11:40
100000, done, 120, 2018-09-17 19:12:40
100000, done, 120, 2018-09-17 19:13:40
100000, start, 500, 2018-09-17 19:13:50
100000, done, 120, 2018-09-17 19:14:40
100000, done, 500, 2018-09-17 19:14:50
100000, done, 120, 2018-09-17 19:15:40
This means something like:
file 120: start-----done-----done-----done-----done
file 500: start-----done
time : 11:40----12:40----13:40-----14:40-----15:40
But it should look like:
file 120: start-----done-----done
file 500: start-----done
time : 11:40----12:40----13:40-----14:40-----15:40
File 120 was interrupted once file 500 started.
Keep in mind that there are a lot of different users and many different files here.
Cleaned data should be:
userid, event, fileid, timestamp, ....
100000, start, 120, 2018-09-17 19:11:40
100000, done, 120, 2018-09-17 19:12:40
100000, done, 120, 2018-09-17 19:13:40
100000, start, 500, 2018-09-17 19:13:50
100000, done, 500, 2018-09-17 19:14:50
A user shouldn't be able to have multiple files in progress at once, so after the second one has started, subsequent events from the first one should be removed from the dataset (as in the cleaned data above).
The code is simple in Python and scales easily on Google Dataflow, for example, but moving 100GB+ from AWS to GCP is not a good idea. A minimal sketch of the logic is below.
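For reference, here is that per-user cleaning logic (a hypothetical helper; it assumes the rows of a single user arrive sorted by timestamp):

def clean_user_events(rows):
    # rows: iterable of (event, fileid, timestamp) tuples for one user,
    # ordered by timestamp
    current = None                # fileid of the most recent 'start'
    kept = []
    for event, fileid, ts in rows:
        if event == 'start':
            current = fileid      # a new start interrupts the previous file
            kept.append((event, fileid, ts))
        elif fileid == current:
            kept.append((event, fileid, ts))  # 'done' for the active file
        # else: drop 'done' events of a file that was interrupted
    return kept

On the input above this keeps exactly the five rows of the cleaned dataset.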
Question #1:
Is it possible to do this in SQL (using Postgres/Redshift-specific features), or is it better to use Spark? (I'm not sure how to implement it there, though.)
Question #2:
Any suggestions on whether it's better to use AWS Batch or something else? With Apache Beam it's easy and pretty much obvious, but how AWS Batch works and how to divide the dataset into chunks (like one group per user) is a big question.
My suggestion is to somehow unload the data from Redshift into an S3 bucket, divided so that each file corresponds to one user; then, if AWS Batch supports this, just feed it the bucket, and each file would be processed concurrently on already-created instances. Not sure if this makes sense.
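On the per-user unload idea: newer Redshift versions let UNLOAD write one S3 prefix per partition value, which would give you roughly the layout you describe. A hypothetical sketch (connection details, bucket, table, and IAM role are all placeholders):

import psycopg2

# run an UNLOAD that splits the events table into one S3 prefix per userid
conn = psycopg2.connect('host=PLACEHOLDER dbname=PLACEHOLDER '
                        'user=PLACEHOLDER password=PLACEHOLDER')
with conn.cursor() as cur:
    cur.execute("""
        unload ('select userid, event, fileid, timestamp from events')
        to 's3://my-bucket/events/'
        iam_role 'arn:aws:iam::ACCOUNT:role/PLACEHOLDER'
        partition by (userid)
        format as csv;
    """)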
If you want to remove rows where the fileid does not match the most recent start for the user, you can use lag(ignore nulls):
select t.*
from (select t.*,
             lag(case when event = 'start' then fileid end)
                 ignore nulls
                 over (partition by userid order by timestamp) as start_fileid
      from t
     ) t
where event = 'start' or start_fileid = fileid;
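If you do end up in Spark for Question #1, the same window trick ports over, since last() there can also ignore nulls. A sketch, assuming a DataFrame df with the columns userid, event, fileid, and timestamp:

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# fileid of the most recent 'start' per user, up to and including each row
w = Window.partitionBy('userid').orderBy('timestamp')
start_fileid = F.last(
    F.when(F.col('event') == 'start', F.col('fileid')), ignorenulls=True
).over(w)

cleaned = (df
           .withColumn('start_fileid', start_fileid)
           .where((F.col('event') == 'start') |
                  (F.col('fileid') == F.col('start_fileid')))
           .drop('start_fileid'))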
I'm working with RabbitMQ instances hosted at CloudAMQP. I'm calling the management API to get detailed queue statistics. About 1 in 10 calls to the API return invalid numbers.
The endpoint is /api/queues/[vhost]/[queue]?msg_rates_age=600&msg_rates_incr=30. I'm looking for average message rates at 30 second increments over a 10 minute span of time. Usually that returns valid data for the stats I'm interested in, e.g.
{
  "messages": 16,
  "consumers": 30,
  "message_stats": {
    "ack_details": {
      "avg_rate": 441
    },
    "publish_details": {
      "avg_rate": 441
    }
  }
}
But sometimes I get incorrect results for one or both "avg_rate" values, often 714676 or higher. If I then wait 15 seconds and call the same API again, the numbers go back down to normal. There's no way the average over 10 minutes jumps by a factor of 200 and then comes back down seconds later.
I haven't been able to reproduce the issue with a local install, only in production where the queue is always very busy. The data displayed on the admin web page always looks correct. Is there some other way to get the same stats accurately, the way the UI does?
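If it helps, you can cross-check a suspicious reading without the UI: when you pass msg_rates_age/msg_rates_incr, each *_details object should also carry a samples list of raw counter points ({"sample": ..., "timestamp": ...}), so you can recompute the average rate yourself and discard responses whose avg_rate disagrees wildly. A rough sketch (host, vhost, queue, and credentials are placeholders):

import requests

url = ('https://HOST/api/queues/VHOST/QUEUE'
       '?msg_rates_age=600&msg_rates_incr=30')
stats = requests.get(url, auth=('user', 'password')).json()

details = stats['message_stats']['ack_details']
pts = sorted(details['samples'], key=lambda s: s['timestamp'])
span_s = (pts[-1]['timestamp'] - pts[0]['timestamp']) / 1000.0  # ms -> seconds
recomputed = (pts[-1]['sample'] - pts[0]['sample']) / span_s    # msgs/second
print(details['avg_rate'], recomputed)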
Hello everyone.
Sorry for my noob question, as I'm just a non-programmer trying to learn to program with Lua.
I'm very attracted to Lua since it's indeed very simple, both in size and in syntax.
And I decided to experiment further with this Brazilian-born language, like playing with sound, as I did in Python and Ruby.
So I found ProteaAudio and tried to run the sample scripts that come with the package I downloaded from here.
The package comes with two sample scripts:
the first, named example.lua, plays the ogg sample file (also included in the package);
the second, named scale.lua, plays a function-generated sound.
The first script runs just fine on both my Win 7 and my Ubuntu 12.04 x86 machines.
But the second script only runs on Windows; when I tried to run it on Ubuntu, it failed with this message:
../lua52: scale.lua:13: bad argument #1 to 'soundLoop' (number expected, got nil)
stack traceback:
[C]: in function 'soundLoop'
scale.lua:13: in function 'playNote'
scale.lua:29: in main chunk
[C]: in ?
The full original source code of scale.lua is:
-- function creating a sine wave sample:
function sampleSine(freq, duration, sampleRate)
  local data = { }
  for i = 1, duration*sampleRate do
    data[i] = math.sin( (i*freq/sampleRate)*math.pi*2 )
  end
  return proAudio.sampleFromMemory(data, sampleRate)
end

-- plays a sample shifted by a number of halftones for a definable period of time
function playNote(sample, pitch, duration, volumeL, volumeR, disparity)
  local scale = 2^(pitch/12)
  local sound = proAudio.soundLoop(sample, volumeL, volumeR, disparity, scale)
  proAudio.sleep(duration)
  proAudio.soundStop(sound)
end

-- create an audio device using default parameters and exit in case of errors
require("proAudioRt")
if not proAudio.create() then os.exit(1) end

-- generate a sample:
local sample = sampleSine(440, 0.5, 88200)

-- play scale (a major):
local duration = 0.5
for i,note in ipairs({ 0, 2, 4, 5, 7, 9, 11, 12 }) do
  playNote(sample, note, duration)
end

-- cleanup
proAudio.destroy()
And since I'm confused by this ProteaAudio Lua API, I really can't figure out why this error occurs.
Please help.
This is actually just a guess, but...
To play a major scale upwards (8 notes, stepping: whole, whole, half, whole, whole, whole, half) the original code does:
local duration = 0.5
for i,note in ipairs({ 0, 2, 4, 5, 7, 9, 11, 12 }) do
  playNote(sample, note, duration)
end
where sample is a handle to a pre-generated sample created by proAudio.sampleFromMemory and returned by function sampleSine, which passed it a calculated table representing a 440 Hz sine wave (concert pitch, the frequency of note 'A4', the first A above middle 'C').
Thus it plays an 'A major scale' by changing (increasing) the 'pitch' (frequency) of that sample in 8 steps (notes). That pitch calculation is done by function playNote.
Function playNote accepts the following arguments:
sample, pitch, duration, volumeL, volumeR, disparity,
but it currently does not receive the arguments:
volumeL, volumeR, disparity (which will then be nil).
So when function playNote tries to call:
proAudio.soundLoop(sample, volumeL, volumeR, disparity, scale),
then the call will end up like:
proAudio.soundLoop(sample, nil, nil, nil, scale),
where the sample is passed on and scale is the 'playback-pitch' of that sample, as just calculated (according to specified note) by function playNote.
Your error message states: bad argument #1 to 'soundLoop' (number expected, got nil).
Hmm, that seems consistent with what is happening (assuming that 'bad argument #1' actually refers to the second argument, in this case volumeL).
So, you might want to try specifying some values for volumeL, volumeR, and disparity, like:
local duration = 0.5
local volumeL = 1.0
local volumeR = 1.0
local disparity = 0.0
for i,note in ipairs({ 0, 2, 4, 5, 7, 9, 11, 12 }) do
  playNote(sample, note, duration, volumeL, volumeR, disparity)
end
From the proteaAudio documentation one can read about soundLoop's arguments:
sample - A sample handle returned by a previous load() call
volumeL - (optional) Left volume
volumeR - (optional) Right volume
disparity - (optional) Time difference between left and right channel in seconds.
Use negative values to specify a delay for the left
channel, positive for the right.
pitch - (optional) Pitch factor for playback. 0.5 corresponds to one octave
below, 2.0 to one above the original sample.
If that should do the trick, then the arguments might not be so optional on Ubuntu.
Hope this helps!