affdex-sdk: How many frames can be processed per second?

I used the Affdex Android SDK; our device's CPU is an MSM8994 running Android 6.0. But when we run the sample APK FrameDetectorDemo, we only get 10 frames per second. We want to know the official figure: how many frames can be processed per second? The Affdex web site says it can reach 20+ processed frames per second.

The stated minimum requirements are a quad-core 1.5 GHz Cortex-A53, 1 GB of RAM, and Android 4.4 or better. Since the MSM8994 uses a big.LITTLE configuration of two quad-core clusters, and the fast cluster is a 2.0 GHz Cortex-A57, you should be able to achieve good performance. If you don't want to modify the sample APK, you can download AffdexMe from Google Play: https://play.google.com/store/apps/details?id=com.affectiva.affdexme&hl=en and test the performance there. Remember that AffdexMe can only track up to 6 detectors, so if you are trying to benchmark performance with a larger number of metrics, the results will differ.

Related

Stopping when the solution is good enough?

I successfully implemented a solver that fits my needs. However, I need to run the solver on 1500+ different "problems" at 0:00 precisely, every day. Because my web app is in Ruby, I built a Quarkus "micro-service" that takes the data, calculates a solution, and returns it to my main app.
In my application.properties, I set:
quarkus.optaplanner.solver.termination.spent-limit=5s
which means each request takes ~5 s to solve. But sending 1500 requests at once will saturate the CPU on my machine.
Is there a way to tell OptaPlanner to stop when the solution is good enough (for example, when the score has been stable for a while)? That way I could perhaps reduce the time from 5 s to 1-2 s depending on the problem.
What are your recommendations for my specific scenario?
The SolverManager will automatically queue solver jobs if too many come in, based on its parallelSolverCount configuration:
quarkus.optaplanner.solver-manager.parallel-solver-count=3
In this case, it will run 3 solvers in parallel. So if 7 datasets come in, it will solve 3 of them and the other 4 later, as the earlier solvers terminate. However, if you use moveThreadCount=2, each solver uses at least 2 CPU cores, so you're using at least 6 CPU cores.
By default, parallelSolverCount is currently set to half your CPU cores (it currently ignores moveThreadCount). In containers, it's important to use JDK 11+: the CPU count inside a container is often different from that of the bare-metal machine.
You can indeed tell the OptaPlanner solvers to stop when the solution is good enough, for example when a certain score is attained or when the score hasn't improved for some amount of time, or a combination thereof. See the OptaPlanner termination docs. Quarkus already exposes some of these terminations (the rest currently still need a solverConfig.xml file). Some Quarkus examples:
quarkus.optaplanner.solver.termination.spent-limit=5s
quarkus.optaplanner.solver.termination.unimproved-spent-limit=2s
quarkus.optaplanner.solver.termination.best-score-limit=0hard/-1000soft
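For terminations not yet exposed as Quarkus properties, a solverConfig.xml file is needed. A minimal sketch, with element names following the OptaPlanner termination docs (multiple terminations in one list are combined with OR by default, so the solver stops as soon as any of them is met):

```xml
<solver>
  <termination>
    <!-- stop once this score is reached... -->
    <bestScoreLimit>0hard/-1000soft</bestScoreLimit>
    <!-- ...or when the score hasn't improved for 2 seconds -->
    <unimprovedSecondsSpentLimit>2</unimprovedSecondsSpentLimit>
  </termination>
</solver>
```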

Laravel server hardware requirement

I have developed a Laravel API and am looking into picking a server to deploy the project. There is no heavy business logic running on the server; it's a simple application. But the application will be accessed by ~100 users per second at its peak time. In that case, which parameters of the server should I be looking at when selecting one (from a hardware perspective: RAM, storage, processor, etc.)?
API will be used for shop floor time reporting. Every hour (when the hour completes), ~150 users will access the system to report time.
You say you will have 100 users per second, yet you also say ~150 employees will access it per hour.
Either way, while you may well see 100 writes within 30 seconds, that's nothing for a modern database.
I would recommend getting the lowest VPS package from a hosting provider you like and upgrading to a higher plan if needed.
If you want to run a dedicated server on premises, even an office PC with a low-end SSD will do the job.
I'm going to round up my estimates, because it's better to have slightly more than you need than less. Also, I'm more used to bigger databases, so these estimates may be slightly overkill, but based on my understanding of your requirements they shouldn't be too excessive. I'll explain everything as well, so feel free to adjust the numbers to your situation.
RAM: for 150 people, a minimum of 10 GB. But RAM doesn't come in 10 GB, so you might as well go for 16 GB.
Storage: 50 GB is a safe bet for small databases and the like; feel free to use more or less based on your numbers.
OS requirements: if your app takes up 40 GB, you do not want only 41 GB of space; that will slow everything down.
A good rule of thumb is to reserve 1 GB of RAM for the OS by default, plus an additional 1 GB for each 4 GB up to 16 GB, and another 1 GB for every 8 GB installed above 16 GB. On a server with 32 GB of RAM, that works out to 7 GB for the OS, with the remaining 25 GB dedicated to your application.
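As a sanity check, that rule of thumb can be written out as code. This is just a sketch of the heuristic as stated above, not an official formula:

```python
def os_ram_reservation_gb(total_gb: int) -> int:
    """Rule of thumb: 1 GB base, plus 1 GB per 4 GB up to 16 GB,
    plus 1 GB per 8 GB installed above 16 GB."""
    base = 1
    mid = min(total_gb, 16) // 4        # 1 GB per 4 GB up to 16 GB
    high = max(0, total_gb - 16) // 8   # 1 GB per 8 GB above 16 GB
    return base + mid + high

print(os_ram_reservation_gb(32))  # 1 + 4 + 2 = 7 GB for the OS, 25 GB left
```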
CPU: whenever I talk about this, people always think it's not a big deal. It kind of is; more servers than you'd expect end up bottlenecked by their CPU. Now, you said there will be lots of interactions (150) but small ones (just logging hours), so core count is what you want to look at. Just find something within budget that has a fair few cores; the Intel Xeon E3-1270 v3 is pretty good for its price, I would say. That's all I can think of right now; don't hesitate to follow up if I've missed anything.
I would recommend taking a look at this as well:
Choose your version and see if you want to make any modifications based on what's shown in the official documentation below:
https://laravel.com/docs/master/installation

Xcode shows up to 400% CPU Usage - but iPhone is only 2-core

The following screenshot of the Xcode CPU Report shows that my application (while number crunching) is maxing out one of the CPUs:
The gauge above shows a maximum of 400%. However, the iPhone has a 2-core CPU, so I am wondering why the gauge doesn't top out at 200% instead.
Furthermore, by using concurrency and splitting my number crunching across multiple threads I can max out at 400%; however, my algorithm only runs twice as fast, again indicating that the work is divided across 2 CPU cores.
Does anyone know why Xcode shows 400% and how this relates to the physical hardware?
If you are testing in the simulator, the report is based on your Mac's processor; that's why it is showing 400% (for a quad-core processor).
The iPhone has only 2 cores (although some iPads have more). The Mac running the simulator apparently has four cores, or two cores plus Hyper-Threading.
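In other words, the gauge ceiling simply scales with the number of logical cores on the machine actually executing the code (the Mac for a simulator run, the iPhone for a device run). A quick illustration in Python, purely to show the arithmetic:

```python
import os

# 100% per logical core: a quad-core host tops out at 400%,
# a 2-core iPhone at 200%.
logical_cores = os.cpu_count() or 1
gauge_max_percent = logical_cores * 100
print(f"{logical_cores} logical cores -> gauge ceiling {gauge_max_percent}%")
```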

Does quad-core perform substantially better than a dual-core for web development?

First, I could not ask this on most hardware forums, because they are mostly populated by gamers. Additionally, it is difficult to get an opinion from sysadmins, because they have a fairly different perspective as well. So perhaps, amongst developers, I might be able to deduce a realistic trend.
What I want to know is: if I regularly fire up NetBeans/Eclipse, MySQL Workbench, and 3 to 5 browsers with multiple tabs, with Apache/PHP and MySQL running in the background, and perhaps GIMP/Adobe Photoshop from time to time, does a quad-core perform considerably faster than a dual-core, assuming the quad has a slower clock speed (~2.8 GHz vs. 3.2 GHz for the dual-core)?
My only relevant experience is that an old Core 2 Duo at 2.8 GHz with 4 GB of RAM performed considerably slower than my new quad-core Core i5 at 2.8 GHz (both desktops). That is only one data point, so I can't tell whether it holds true in general.
The end purpose of all this is to help me decide on buying a new laptop (4-core and 2-core models currently differ quite a bit).
http://www.intel.com/content/www/us/en/processor-comparison/comparison-chart.html
I did a comparison for you using that chart.
Here the quad-core runs at 2.20 GHz while the dual-core runs at 2.3 GHz.
Now check the "Max Turbo Frequency" in the comparison. You will notice that even though the quad-core has a lower base clock, once it hits turbo it passes the dual-core.
The second thing to consider is cache size, which does make a huge difference. The quad-core will generally have more cache; in this example it has 6 MB, and some have up to 8 MB.
Third is max memory bandwidth: the quad-core offers 25.6 GB/s vs. the dual-core's 21.3 GB/s, meaning faster memory access on the quad-core.
The fourth factor is graphics: the graphics base frequency is 650 MHz on the quad and 500 MHz on the dual.
Fifth, the graphics max dynamic frequency is 1.30 GHz for the quad and 1.10 GHz for the dual.
Bottom line: if you can afford it, the quad not only packs more punch but also lets you add more memory later, since the max memory size is 16 GB for the quad while the dual restricts you to 8 GB. Just to be future-proof, I would go with the quad.
One more thing to add: the simultaneous thread count is 4 on the dual-core and 8 on the quad, which does make a difference.
The problem with multi-processor and multi-core systems has been, and still is, memory bandwidth. Most applications in daily use have not been written to economize on memory bandwidth. This means that in typical, everyday use you'll run out of bandwidth whenever your apps are doing something (i.e. not waiting for user input).
Some applications, such as games and parts of operating systems, attempt to address this. Their parallelism loads a chunk of data into a core, spends some time processing it without touching memory further, and finally writes the modified data back to memory. During the processing itself the memory bus is free, so other cores can load and store data.
In well-designed parallel code, essentially any number of cores can work on different parts of the same task, so long as the total bus time per chunk (number of cores * (read time + write time)) does not exceed one chunk's full cycle of read, process, and write; beyond that, the shared memory bus becomes the bottleneck.
Code designed and balanced for a specific number of cores will be efficient for fewer cores, but not for more.
Some processors have multiple data buses to increase the overall memory bandwidth. This works up to a certain point, after which the next level of the memory hierarchy (the L3 cache) becomes the bottleneck.
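That balance can be sketched with a rough back-of-the-envelope model (hypothetical numbers; real memory systems overlap transfers in far more complex ways). While one core computes, the others can use the bus, so a single bus can feed roughly (read + process + write) / (read + write) cores:

```python
def max_useful_cores(read_s: float, process_s: float, write_s: float) -> int:
    """Cores one shared memory bus can keep fed, assuming each core's
    cycle is read -> process -> write and the bus is busy only during
    read and write: N * (read + write) <= read + process + write."""
    memory_s = read_s + write_s
    return int((read_s + process_s + write_s) / memory_s)

# A chunk needing 1 ms of total bus time and 7 ms of compute
# can keep about 8 cores busy before the bus saturates.
print(max_useful_cores(0.0005, 0.007, 0.0005))  # -> 8
```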
Even at equivalent clock speeds, the quad-core executes twice as many instructions per cycle as the dual-core; a 0.4 GHz difference isn't going to matter much.

What's the best way to 'indicate/numerate' performance of an application?

In the old (single-threaded) days we instructed our testing team to always report the CPU time, not the real time, of an application. That way, if they said an action took 5 CPU seconds in version 1 and 10 CPU seconds in version 2, we knew we had a problem.
Now, with more and more multi-threading, this no longer seems to make sense. Version 1 of an application might take 5 CPU seconds and version 2 might take 10, yet version 2 can still be faster if version 1 is single-threaded and version 2 uses 4 threads (each consuming 2.5 CPU seconds).
On the other hand, using real time to compare performance isn't reliable either, since it can be influenced by lots of other factors (other applications running, network congestion, a very busy database server, a fragmented disk, ...).
What is, in your opinion, the best way to 'numerate' performance?
Hopefully it's not intuition, since that is not an objective 'value' and would probably lead to conflicts between the development team and the testing team.
Performance needs to be defined before it is measured.
Is it:
memory consumption?
task completion times?
disk space allocation?
Once defined, you can decide on metrics.
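To make the CPU-time vs. real-time distinction from the question concrete, here is a small Python sketch: `time.process_time` counts CPU seconds consumed by the process, while `time.perf_counter` measures wall-clock time, so a call that mostly waits shows almost no CPU time:

```python
import time

def timed(fn, *args):
    """Return (wall_seconds, cpu_seconds) for one call."""
    wall0, cpu0 = time.perf_counter(), time.process_time()
    fn(*args)
    return time.perf_counter() - wall0, time.process_time() - cpu0

# A wait consumes real time but almost no CPU time...
wall, cpu = timed(time.sleep, 0.2)
assert cpu < wall

# ...while a multi-threaded, CPU-bound job can report *more* CPU seconds
# than wall seconds, which is why neither number alone tells the story.
```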