Timer / Prescaler in microcontrollers - embedded

Maybe it's a silly question:
I have an internal 20 MHz oscillator, a 16-bit timer, and prescalers (1, 2, 4, 8, 16, 32, 64, 128), and I want to generate a 1 ms delay. I know how to do that: (20 000 000 / 1) / 1000 = 20 000 -> put this value into the 16-bit register and it works.
With prescalers 2 and 4 I get the same 1 ms result: (20 000 000 / 2) / 1000 = 10 000 and (20 000 000 / 4) / 1000 = 5 000.
My question is: how do I determine which prescaler to use? Maybe I should choose prescaler 4, because this value (5 000) is closer to 0 and my timer counts from 0 to 5 000; if I choose 10 000, the timer counts 2 x 5 000.
Thank you in advance!

You use a higher prescaler when the counter is otherwise not large enough for the required period. For example, if you wanted a 10 ms period, you would necessarily need a prescaler of at least 4, for a reload value of 50 000.
The smallest prescaler that results in a reload value less than 2^16 is what you would normally aim for. That gives you the highest possible counter resolution, which is useful if you are using the timer for, say, PWM or input capture. If you are just using the reload interrupt, on the other hand, it is not critical.
If the period you need is not exactly achievable, because it is not an exact multiple of the clock period, then using a lower prescaler will minimise the error.
Finally, if the prescaler is too high (and the reload value therefore too low), you may not be able to achieve the precise period. For example, for your 1 ms period a prescaler of 64 would require a reload of 312.5; choosing 312 results in a period of ~998.4 µs, and 313 in ~1001.6 µs.
So a reload value as close as possible to 65535, rather than close to zero as you have suggested, is what you would generally aim for.
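
To make the selection rule concrete, here is a minimal sketch (host-side Python; the 20 MHz clock and the prescaler list come from the question, and pick_prescaler is just an illustrative name):

F_CLK = 20_000_000                        # 20 MHz timer input clock
PRESCALERS = (1, 2, 4, 8, 16, 32, 64, 128)

def pick_prescaler(period_s):
    # Smallest prescaler whose reload value fits in the 16-bit counter.
    for p in PRESCALERS:
        reload = F_CLK * period_s / p
        if reload <= 0xFFFF:
            return p, reload
    raise ValueError("period too long for a 16-bit timer")

print(pick_prescaler(0.001))              # -> (1, 20000.0)  exact
print(pick_prescaler(0.010))              # -> (4, 50000.0)  needs at least /4

# Quantization error at prescaler 64 for the 1 ms case (ideal reload = 312.5):
for r in (312, 313):
    print(r, r * 64 / F_CLK * 1e6, "us")  # -> ~998.4 us and ~1001.6 us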

Mathematically it makes no difference which combination you choose.
The prescaler and the counter are both implemented in hardware, so you won't see a performance impact.
However, depending on your architecture, the prescaler may run permanently, which introduces an inaccuracy: the first timer tick after starting can land anywhere within a prescaler period.
So you should feed the highest possible frequency (lowest prescaler, highest precision) into your timer/counter.

Related

Need to achieve 120 hits per second in Jmeter during loadtest

I have 5 thread groups, each with 3 API requests, and each thread group should execute one after another. Over a 1-hour load test I should achieve 120 hits per second.
Pacing: 5 sec
Think time: 8 sec
Each thread's single iteration time: 20 sec
So how many users do I need to achieve the required 120 hits per second, and how can I run the load test across the 5 thread groups, given that each one should execute one after another?
It's a matter of simple arithmetic, and I believe the question should go to https://math.stackexchange.com/ (or alternatively you can catch a student from the nearest school and ask them).
Each thread's single iteration time: 20 sec
means that each user executes 3 requests per 20 seconds, i.e. 1 request per ~6.67 seconds.
So you need ~6.67 users to get 1 request per second, or 800 users to reach 120 requests per second.
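
As a sanity check, the same arithmetic in a couple of lines of Python (all figures come straight from the question):

requests_per_iteration = 3      # API requests per iteration
iteration_time_s = 20           # pacing + think time + response time
target_rps = 120

per_user_rps = requests_per_iteration / iteration_time_s  # 0.15 req/s per user
print(target_rps / per_user_rps)                          # -> 800.0 users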
Also, the "pacing" concept is for "dumb" tools which don't support setting the desired throughput directly; JMeter provides:
Constant Throughput Timer
Precise Throughput Timer
Throughput Shaping Timer
Any of them provides the possibility to define the number of requests per second, especially the latter one, which can be connected with the Concurrency Thread Group.
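
One detail worth remembering if you go with the Constant Throughput Timer: its target is expressed in samples per minute, not per second, so for 120 requests per second you would configure:

target_rps = 120
print(target_rps * 60)          # -> 7200 samples per minute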

Recommended VLF File Count in SQL Server

What is the recommended VLF count for a 120 GB database in SQL Server?
I'd appreciate a quick response.
Thanks,
Govarthanan
There are many excellent articles on managing VLFs in SQL Server, but the crux of all of them is: it depends!
Some people may need really quick recovery, and for them allocating large VLFs upfront is better.
DB size and VLF count are not really correlated.
You may have a small DB and be doing a large amount of updates. Imagine a DB storing daily stock values: it deletes all data every night and inserts new data every day. This will generate a large amount of log data but may not affect the mdf file size at all.
Here's an article about VLF auto-growth settings, quoting the important section:
Up to 2014, the algorithm for how many VLFs you get when you create, grow, or auto-grow the log is based on the size in question:
Less than 1 MB, complicated, ignore this case.
Up to 64 MB: 4 new VLFs, each roughly 1/4 the size of the growth
64 MB to 1 GB: 8 new VLFs, each roughly 1/8 the size of the growth
More than 1 GB: 16 new VLFs, each roughly 1/16 the size of the growth
So if you created your log at 1 GB and it auto-grew in chunks of 512 MB to 200 GB, you’d have 8 + ((200 – 1) x 2 x 8) = 3192 VLFs. (8 VLFs from the initial creation, then 200 – 1 = 199 GB of growth at 512 MB per auto-grow = 398 auto-growths, each producing 8 VLFs.)
IMHO 3000+ VLFs is not a catastrophic number, but it is alarming. Since you have some idea of your DB size, and assuming you know that your logs typically run about n times your DB size, you can put in the right auto-growth settings to keep your VLF count in a range you are comfortable with.
I personally would be comfortable starting the log at 10 GB with 5 GB auto-growth.
So for 120 GB of logs (n = 1) this gives me 16 + 22 x 16 = 368 VLFs.
And if my logs grow to 500 GB, I'll have 16 + 98 x 16 = 1584 VLFs.
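
If you want to play with the numbers yourself, here is a quick Python sketch of the pre-2014 algorithm quoted above (sizes in MB; the function names are mine, and the sub-1-MB case is ignored as the quote suggests):

def vlfs_per_growth(growth_mb):
    # VLFs created by one log creation or growth event.
    if growth_mb <= 64:
        return 4
    elif growth_mb <= 1024:
        return 8
    return 16

def total_vlfs(initial_mb, growth_mb, final_mb):
    growths = (final_mb - initial_mb) // growth_mb
    return vlfs_per_growth(initial_mb) + growths * vlfs_per_growth(growth_mb)

print(total_vlfs(1024, 512, 200 * 1024))    # 1 GB + 512 MB growths -> 3192
print(total_vlfs(10240, 5120, 120 * 1024))  # 10 GB + 5 GB growths  -> 368
print(total_vlfs(10240, 5120, 500 * 1024))  # grown to 500 GB       -> 1584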

What is the average consumption of a GPS app (data-wise)?

I'm currently working on a school project to design a network, and we're asked to assess the traffic on it. In our solution (dealing with taxi drivers), each driver will have a smartphone whose position can be tracked so he can be assigned the best ride possible (through Google Maps, for instance).
What would be the size of data sent and received by a single app during one day? (I need a rough estimate, no real need for a precise answer to the closest bit)
Thanks
GPS positions stored compactly, but not compressed, need this number of bytes:
time: 8 (4 bytes is possible too)
latitude: 4 (if stored as integer or float) or 8
longitude: 4 or 8
speed: 2-4 (short: 2, int: 4)
course: 2-4
So stored in binary in main memory, one location including the most important attributes will need 20-24 bytes.
If you store them in main memory as individual location objects, an additional 16 bytes per object are needed in a simple (Java) solution.
The maximum recording frequency is usually once per second (1/s). Per hour this needs 3600 s * 40 bytes = 144 KB, so a smartphone easily stores that even in main memory.
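
To illustrate the per-fix figure, here is a sketch using Python's struct module (field sizes as in the list above, values made up; note this is the raw payload, without the per-object overhead included in the 40-byte estimate):

import struct

# One fix: time (8 bytes), latitude (4), longitude (4), speed (2), course (2).
# "<" means little-endian with no padding between fields.
FIX_FORMAT = "<dffhh"
print(struct.calcsize(FIX_FORMAT))      # -> 20 bytes per fix

fix = struct.pack(FIX_FORMAT, 1.7e9, 48.8566, 2.3522, 50, 270)
print(len(fix) * 3600)                  # 1 fix/s for an hour -> 72000 bytes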
Not sure if you want to transmit the data:
When transmitting this to a server, the data volume will usually grow, depending on the transmission protocol used.
But it mainly depends on how you transmit the data and how often.
If you transmit a position every 5 minutes, you don't have to care, even when you use a simple solution that transmits 100 times more bytes than necessary.
For your school project, try to transmit no more often than every 5, or better 10, minutes.
Encryption adds a huge overhead.
To save bytes:
- Collect as long as feasible, then transmit in one batch.
- Favor binary protocols over text-based ones (BSON is better than JSON). (This might be out of scope for your school project.)
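
For a rough daily upload volume under that advice (the 5-minute interval and the deliberately pessimistic 100x overhead are assumptions, not measurements):

fixes_per_day = 24 * 60 // 5    # one transmission every 5 minutes -> 288
payload_bytes = 20              # one packed fix, from the sketch above
overhead_factor = 100           # very pessimistic protocol overhead
print(fixes_per_day * payload_bytes * overhead_factor / 1024)  # ~562 KB/day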

Check for minutes not evenly divided by 5

I have a time stored as a decimal(9,2) column in a SQL Server 2005 database.
The time is represented like
Time      timeInDecimal
1H 20Min  1.33
1H 30Min  1.50
and so on
I'm looking for an easy way to check whether the number of minutes, excluding whole hours, is not evenly divisible by 5.
The values I'm hoping to find are those where the time is, say, 1H 23Min but not 1H 25Min.
I just want to compare the minute part of the time.
The way I do it now is:
RIGHT(CONVERT(varchar(5),DATEADD(minute,ROUND(timeInDecimal * 60,0),0),108),1) not in ('0','5')
But it hardly seems to be the ideal way to deal with this.
Feels like I can use the modulo operator for this, but how?
Or is there an even better way?
Hope for a quick answer.
Kind Regards
Andreas
Using the modulus operator, twice:
ROUND((timeInDecimal % 1) * 60, 0) % 5 <> 0
That will:
Get the fractional part and convert it to minutes.
Round it to the nearest minute (.33 hours -> 20 minutes, not 19.80).
Check whether that's divisible by 5.
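
For completeness, the same two steps mirrored in Python (hypothetical sample values, just to show the behaviour):

def minutes_not_multiple_of_5(time_in_decimal):
    # Fractional hours -> minutes, then check divisibility by 5.
    minutes = round((time_in_decimal % 1) * 60)
    return minutes % 5 != 0

print(minutes_not_multiple_of_5(1.33))  # False: 0.33 h -> 20 min
print(minutes_not_multiple_of_5(1.38))  # True:  0.38 h -> 23 min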

How should I manage expensive reporting crons for users who visit infrequently?

Let's say I have 5M users (for easy math) who vary widely in their visits per month.
User loyalty, in visits per month
1. 1M <1 visits/month
2. 1M 1-10 visits/month
3. 1M 10-50 visits/month
4. 1M 50-100 visits/month
5. 1M >100 visits/month
The goal for each user is to access data that takes (let's say) 1 CPU cycle to fetch (for example... the reality in our situation is much much less, but it's easier math with 1).
Each data fetch takes too long to load inline, so it's preferred to have data ready for them when they come. (via crons)
Let's say that in order to satisfy our most active users, we would need to run the cron 10 times a day to have data ready when they want it. (I say "when they want it" because typically that's 4 times within an 8-hour work day, not 4 times spread evenly over 24 hours.) That's 1M (users) * 10 (data fetches) per day, or, at 1 CPU cycle per fetch, 10M CPU cycles for these 1M most active users. The good news is that they're at least using the fetched data.
However, what about our less active users? What strategy do you recommend to still provide relevant fetched data results while protecting from wasted CPU cycles fetching data that will never or rarely be seen?
Here's a chart of the minimum cycles required based on the chart above. The ideal answer would get as close to this as possible.
Group  # Users  Visits/Month  CPU Cycles/Month
1.     1M       0.1           0.1M
2.     1M       1             1M
3.     1M       10            10M
4.     1M       50            50M
5.     1M       100           100M
-------------------------------------------------
       5M       161.1         161.1M
If I did the same cron necessary to keep Group 5 happy for everyone, that'd be 500M CPU cycles (roughly 70% wasted).
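
To make that waste figure reproducible, a small sketch using the table above (treating visits/month as the minimum cycles actually consumed):

groups = [              # (users, visits per month) from the table
    (1_000_000, 0.1),
    (1_000_000, 1),
    (1_000_000, 10),
    (1_000_000, 50),
    (1_000_000, 100),
]

needed = sum(users * visits for users, visits in groups)  # 161.1M cycles
flat = sum(users for users, _ in groups) * 100            # 500.0M cycles
print(f"wasted: {1 - needed / flat:.0%}")                 # -> wasted: 68%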
What do you recommend to minimize wasted CPU cycles, while still keeping infrequent users happy (because we still want them to turn into active users)?