Trimming video for better results with Affectiva? - affdex-sdk

I have a video recording of a person in a sitting position (the camera was placed in front of the person). The recording lasts around 60 minutes. At the beginning and end of the recording (around 10 minutes each) there is a second person in the video, and the faces might not always be visible because the people were talking. We are only interested in the main portion, not in these parts at the beginning and end.
Would the results from Affectiva be different if I cut the 10 minutes at the beginning and end? Of course, the easier solution would be to just run it over the whole recording without trimming…

No, the results shouldn't differ: our SDK gives a prediction of the emotional/expression state on a per-frame basis, so the extra footage does not change the predictions for the frames you care about.
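If you do want to trim anyway, for example to save processing time, a stream-copy cut with ffmpeg avoids re-encoding (it cuts at the nearest keyframes). A minimal sketch driving it from Python; the 10-minute offset, 60-minute duration, and file names are assumptions taken from the question:

    import subprocess

    # Skip the first 10 minutes and keep the next 60 minutes.
    # -ss seeks to the start offset, -t limits the duration,
    # and "-c copy" copies the streams without re-encoding.
    subprocess.run([
        "ffmpeg",
        "-ss", "600",        # start offset: 10 minutes (600 s)
        "-i", "input.mp4",   # hypothetical input file name
        "-t", "3600",        # duration to keep: 60 minutes (3600 s)
        "-c", "copy",        # stream copy, no re-encode
        "output.mp4",        # hypothetical output file name
    ], check=True)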

Related

Is there an algorithm for a near-optimal partition of the Travelling Salesman Problem, creating routes that need the same time to complete?

I have a problem to solve. I need to visit 7962 places with a vehicle. The vehicle travels at 10 km/h, and each time I visit a place I stay there for 1 minute. I want to divide those 7962 places into subsets that will take up to 8 hours each. So let's say 200 places take 8 hours: I visit them and come back the next day to visit maybe another 250 places (the 200-place subsets will require more distance travelled). For distances I only care about the Euclidean distance; there is no need to take the road network into account.
[Image: a map of the 7962 places]
What I have done so far is use the k-means clustering algorithm to get good-enough subsets, then the Lin-Kernighan heuristic (the Concorde program) to find the tour distances, and then compute the times. But my subset times range from 4 hours to 12 hours. Any ideas to make it better? Or code that does the whole task in one go (see the sketch after the coordinate links below)? Propose anything; I am not a programmer, I just use Python sometimes.
Set of coordinates:
http://www.filedropper.com/wholesetofcoordinates
Coordinate subsets (40 clusters produced with the k-means algorithm):
http://www.filedropper.com/kmeans40clusters
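One way to push the subsets toward the 8-hour budget is to estimate each cluster's service time and re-split the clusters that exceed it. A minimal Python sketch, assuming coords is an (N, 2) NumPy array of positions in kilometres and scikit-learn is installed; the greedy nearest-neighbour tour is only a crude stand-in for the Lin-Kernighan/Concorde result:

    import numpy as np
    from sklearn.cluster import KMeans

    SPEED_KMH = 10.0     # vehicle speed from the question
    STOP_MIN = 1.0       # minutes spent at each place
    BUDGET_MIN = 8 * 60  # 8-hour daily budget, in minutes

    def tour_length_km(points):
        """Greedy nearest-neighbour tour length (km); a rough,
        fast proxy for the Lin-Kernighan / Concorde tour."""
        pts = [np.asarray(p, dtype=float) for p in points]
        cur, total = pts.pop(0), 0.0
        while pts:
            dists = np.linalg.norm(np.array(pts) - cur, axis=1)
            i = int(dists.argmin())
            total += float(dists[i])
            cur = pts.pop(i)
        return total

    def cluster_time_min(points):
        """Estimated time for one subset: driving plus 1 min per stop."""
        return tour_length_km(points) / SPEED_KMH * 60 + STOP_MIN * len(points)

    def balanced_subsets(coords, k0=40):
        """Start from k-means (as in the question) and re-split any
        cluster whose estimated service time exceeds the budget."""
        labels = KMeans(n_clusters=k0, n_init=10).fit_predict(coords)
        queue = [coords[labels == i] for i in range(k0)]
        done = []
        while queue:
            c = queue.pop()
            if len(c) == 0:
                continue
            if len(c) > 1 and cluster_time_min(c) > BUDGET_MIN:
                sub = KMeans(n_clusters=2, n_init=10).fit_predict(c)
                queue += [c[sub == 0], c[sub == 1]]
            else:
                done.append(c)
        return done

This only splits over-budget clusters; the 4-hour subsets would additionally need a merging pass that combines adjacent under-filled clusters while they stay below the budget.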

Chess engine alpha-beta: expected time to calculate depth 20

I am working on a chess-like game engine (it's the same as chess except each player gets to make 2 moves per turn), and I would like to be able to search to around depth 8 (which I guess translates to depth 16 or more for regular chess, since there is no pruning between the 2 moves). I am using alpha-beta pruning.
Currently I seem to be able to reach depth 6 (12+ for regular chess) within 20-30 minutes. Relatively speaking, how bad is this performance?
Any tips would be appreciated.
Each additional ply multiplies the search time by roughly the number of moves being considered.
If you need 20-30 minutes to reach only depth 6, it will take exponentially more time to reach depth 8, so the answer is no: depth 8 is out of reach at the current speed.
You should go back to your algorithm and check for any possible improvements. Null-move reductions, heavy pruning, etc. are required.
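A back-of-envelope model makes the growth concrete: with an effective branching factor b after alpha-beta pruning, each extra ply multiplies the time by roughly b. The numbers below are illustrative assumptions, not measurements:

    # Rough scaling estimate: time(depth) ~ t6 * b**(depth - 6)
    t6_minutes = 25.0  # observed: depth 6 takes 20-30 minutes
    b = 6.0            # assumed effective branching factor after pruning

    for depth in (7, 8):
        est = t6_minutes * b ** (depth - 6)
        print(f"depth {depth}: ~{est / 60:.1f} hours")
    # depth 7: ~2.5 hours, depth 8: ~15.0 hours

Hence the advice to improve pruning and move ordering rather than simply waiting longer.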

CMSensorRecord Processing Large List Troubles?

I am using the new CMSensorRecorder with watchOS 2 and have no problems up until I try to get data out of CMRecordedAccelerometerData. I believe I am using fast enumeration, as the docs recommend, to go through the list:
for (CMRecordedAccelerometerData *data in list) {
    NSLog(@"Sample: (%f),(%f),(%f)", data.acceleration.x, data.acceleration.y, data.acceleration.z);
}
However, going through even 60 minutes of data takes a long time to process (the accelerometer records at 50 Hz, i.e. 50 data points per second). The WWDC video on Core Motion recommends decimating the data to decrease processing time; how can this be implemented? Or am I misunderstanding the intention here: are we meant to send the CMSensorDataList to the iPhone for processing? I would like to add each axis to an array on the Apple Watch without the iPhone's help.
If, for example, I recorded for the maximum 12 hours, there would be over 2 million data points to read (12 h × 3600 s/h × 50 Hz ≈ 2.16 million); even in this case, would it be possible for the Apple Watch to read through them all?
At the moment it is taking minutes to read through a couple million data points.
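Decimation in its simplest form is just keeping every Nth sample: at 50 Hz, a factor of 10 leaves a still-useful 5 Hz series and cuts the work tenfold. A minimal sketch of the idea in Python; in the Objective-C loop above, the equivalent is a counter that skips every sample where counter % N != 0:

    DECIMATION_FACTOR = 10  # 50 Hz -> 5 Hz; tune to your needs

    def decimate(samples, factor=DECIMATION_FACTOR):
        """Keep every `factor`-th sample. Plain subsampling like this
        is fine for trends and visualisation; for frequency analysis
        you would low-pass filter first to avoid aliasing."""
        return samples[::factor]

    # 12 h at 50 Hz is ~2.16 million samples; decimated by 10, ~216k.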

What is the average consumption of a GPS app (data-wise)?

I'm currently working on a school project to design a network, and we're asked to assess the traffic on it. In our solution (dealing with taxi drivers), each driver will have a smartphone whose position can be tracked so as to assign him the best ride possible (through Google Maps, for instance).
What would be the size of data sent and received by a single app during one day? (I need a rough estimate, no real need for a precise answer to the closest bit)
Thanks
GPS positions stored compactly, but not compressed, need this number of bytes:
- time: 8 (4 bytes is possible too)
- latitude: 4 (if stored as an integer or float) or 8
- longitude: 4 or 8
- speed: 2-4 (short: 2; int: 4)
- course: 2-4
So, stored in binary in main memory, one location including the most important attributes will need 20-24 bytes.
If you store them in main memory as individual location objects, an additional ~16 bytes per object are needed in a simple (Java) solution.
The maximum recording frequency is usually once per second (1/s). Per hour this needs 3600 s × 40 bytes = 144 kB, so a smartphone easily stores that, even in main memory.
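As a sanity check of the byte counts, here is a minimal sketch packing one fix with Python's struct module; the field widths follow the list above (8-byte time, 4-byte latitude/longitude, 2-byte speed and course), and the sample values are made up:

    import struct

    # < : little-endian, no padding; d = 8-byte time,
    # f f = 4-byte lat/lon, h h = 2-byte speed and course
    FIX_FORMAT = "<dffhh"  # 8 + 4 + 4 + 2 + 2 = 20 bytes per fix

    packed = struct.pack(FIX_FORMAT, 1700000000.0, 48.8566, 2.3522, 35, 270)
    print(len(packed))         # 20
    print(3600 * len(packed))  # 72000 bytes = ~72 kB per hour at 1 fix/s

The 144 kB/hour figure above corresponds to the 40-byte case, i.e. a wider layout plus the ~16 bytes of per-object overhead.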
I'm not sure whether you want to transmit the data. When transmitting it to a server, the volume usually grows, depending on the transmission protocol used; it mainly depends on how you transmit the data and how often.
If you transmit a position every 5 minutes, you don't have to care, even if you use a simple solution that transmits 100 times more bytes than necessary. For your school project, try to transmit no more often than every 5, or better 10, minutes.
Encryption adds a huge overhead.
To save bytes:
- Collect as long as feasible, then transmit all at once.
- Favor binary protocols over text-based ones (BSON is better than JSON); this might be out of scope for your school project.
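To put a rough daily number on the original question under these assumptions (one fix per second, 24 bytes per fix, and a generous factor of 2 for protocol and encryption overhead; all of these are assumptions, not measurements):

    FIX_BYTES = 24             # upper end of the per-fix estimate above
    FIXES_PER_DAY = 24 * 3600  # one fix per second, all day
    OVERHEAD = 2.0             # assumed protocol/encryption multiplier

    daily = FIX_BYTES * FIXES_PER_DAY * OVERHEAD
    print(f"~{daily / 1e6:.1f} MB per driver per day")  # ~4.1 MB

So even a naive always-on tracker stays in the low megabytes per day, and batching every 5-10 minutes only reduces the overhead further.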

Measure upload and download speed on iPhone

I would like to measure the upload and download speed of data on the iPhone. Is there any API available to achieve this? Is it correct to measure it by dividing the total bytes received by the time taken for the response?
Yes, it is correct to measure total bytes / time taken; that is exactly what the speed is. You might want to take a running average if you want to continuously show the download speed, e.g. using the last 500 bytes and the time it took to download those particular ones.
To do this you could keep an NSMutableData as a buffer, which you empty, say, every 2 seconds. Then [buffer length] / 2 gives you the bytes per second over those 2 seconds. When you empty the buffer, of course, append its contents to the data you are downloading.
There is no direct API to get the speed.
Total data received (or sent) divided by total time only gives you the average speed. There tends to be a lot of variation in speed over time, so if you want a more accurate value, do the speed calculation based on sampling: (data transferred in 1 minute) / (60 seconds). Use this only if you need greater accuracy in the speed calculation; the sampling duration can be changed based on the level of accuracy required.
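Both answers amount to windowed sampling. A minimal sketch of the idea in Python (on iOS the same windowing applies to the buffer described above; the class name and structure here are hypothetical):

    import time
    from typing import Optional

    class SpeedMeter:
        """Windowed throughput: accumulate chunk sizes and report
        bytes/second once per window (2 s here; the second answer
        suggests 60 s for a smoother figure)."""

        def __init__(self, window_s: float = 2.0):
            self.window_s = window_s
            self.bytes_in_window = 0
            self.window_start = time.monotonic()

        def add_chunk(self, chunk: bytes) -> Optional[float]:
            """Record a received chunk; return bytes/second when a
            full window has elapsed, otherwise None."""
            self.bytes_in_window += len(chunk)
            elapsed = time.monotonic() - self.window_start
            if elapsed >= self.window_s:
                speed = self.bytes_in_window / elapsed
                self.bytes_in_window = 0
                self.window_start = time.monotonic()
                return speed
            return None

A shorter window reacts faster but is noisier; a longer window approaches the overall average.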