Calculate seconds (or milliseconds) between two locations - objective-c

Currently my function goes like this:
#define kKILOMETER 1000
#define kSeconds_To_Milliseconds 1000
#define kHOUR 60
#define kMINUTE 60
#define kLongDistance_3KM 3
- (long)calculateTimeToMilliSecondsWithDistance:(double)distance andSpeed:(float)theSpeed
{
    // Distance arrives in meters
    double kmDistance = distance / kKILOMETER;
    long time = (long)(((kmDistance - kLongDistance_3KM) / [self.ref integerForKey:@"MAX_SPEED"]) * kHOUR * kMINUTE * kSeconds_To_Milliseconds);
    NSLog(@"calculateTimeToMilliSeconds -> sleep to (sec): %li, maxSpeed: %li",
          time / 1000, (long)[self.ref integerForKey:@"MAX_SPEED"]);
    [self startSendSleepTime:time];
    return time;
}
My "MAX_SPEED" value refers to 200 km/h, but I get a wrong value back.
What I want is to calculate the time for my distance minus 3 kilometers at a speed of 200 km/h, but somehow it gets messed up.
Also, it would be even better to get the result in seconds; for that I should just remove the kSeconds_To_Milliseconds factor, right?
I can't figure out what is going wrong.
EDIT 1:
For example:
2014-03-19 18:12:39.624 Cellular Radar[3338:60b] Distance to car: 4713, Trap Radius: 900
2014-03-19 18:12:39.673 Cellular Radar[3338:60b] calculateTimeToMilliSeconds -> sleep to (sec): 2055, maxSpeed: 3
2014-03-19 18:12:39.684 Cellular Radar[3338:60b] Sleep for (2055.000000 seconds | 34.250000 minutes | 0.570833 hours)

Related

What is the difference of "dom_content_loaded.histogram.bin.start/end" in Google's BigQuery?

I need to build a histogram of DOMContentLoaded times for a webpage. When I used BigQuery, I noticed that apart from density there are two more attributes (start and end). In my head there should only be one attribute, since the DOMContentLoaded event is fired only once, when the DOM has loaded.
Can anyone help clarify the difference between .start and .end? These attributes always have a 100-millisecond difference between them (if start = X ms, then end = X + 100 ms; see the query example posted below).
I cannot understand what these attributes represent exactly:
dom_content_loaded.histogram.bin.START
AND
dom_content_loaded.histogram.bin.END
Q: Which one of them represents the time that the DOMContentLoaded event
is fired in a user's browser?
SELECT
  bin.START AS start,
  bin.END AS endd
FROM
  `chrome-ux-report.all.201809`,
  UNNEST(dom_content_loaded.histogram.bin) AS bin
WHERE
  origin = 'https://www.google.com'
Output:
Row | start | end
----+-------+-----
  1 |     0 | 100
  2 |   100 | 200
  3 |   200 | 300
  4 |   300 | 400
[...]
The following explains the meaning of bin.start, bin.end, and bin.density. Run the SELECT statement below:
SELECT
origin,
effective_connection_type.name type_name,
form_factor.name factor_name,
bin.start AS bin_start,
bin.end AS bin_end,
bin.density AS bin_density
FROM `chrome-ux-report.all.201809`,
UNNEST(dom_content_loaded.histogram.bin) AS bin
WHERE origin = 'https://www.google.com'
You will get 1550 rows in the result; below are the first 5:
Row | origin                 | type_name | factor_name | bin_start | bin_end | bin_density
----+------------------------+-----------+-------------+-----------+---------+------------
  1 | https://www.google.com | 4G        | phone       |         0 |     100 | 0.01065
  2 | https://www.google.com | 4G        | phone       |       100 |     200 | 0.01065
  3 | https://www.google.com | 4G        | phone       |       200 |     300 | 0.02705
  4 | https://www.google.com | 4G        | phone       |       300 |     400 | 0.02705
  5 | https://www.google.com | 4G        | phone       |       400 |     500 | 0.0225
You can read them as: for phones on 4G, DOMContentLoaded completed within 100 milliseconds for 1.065% of loads, between 100 and 200 milliseconds for 1.065% of loads, between 200 and 300 milliseconds for 2.705% of loads, and so on.
To summarize: for each origin, connection type, and form factor you get a histogram, represented as a repeated record with the start and end of each bin along with a density, which is the percentage of user experiences falling in that bin.
Note: if you add up the dom_content_loaded densities across all dimensions for a single origin, you will get 1 (or a value very close to 1 due to approximations).
For example
SELECT SUM(bin.density) AS total_density
FROM `chrome-ux-report.all.201809`,
UNNEST(dom_content_loaded.histogram.bin) AS bin
WHERE origin = 'https://www.google.com'
returns
Row total_density
1 0.9995999999999978
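The bins can also be combined into cumulative figures. A minimal Python sketch using the five rows quoted above (values copied from the result table; real totals would of course need all 1550 rows):

```python
# Bins quoted from the query result above: (bin_start, bin_end, bin_density).
# These are the 4G/phone rows only; densities are normalized across all
# dimensions per origin.
bins = [
    (0, 100, 0.01065),
    (100, 200, 0.01065),
    (200, 300, 0.02705),
    (300, 400, 0.02705),
    (400, 500, 0.0225),
]

def cumulative_density(bins, upper_ms):
    """Fraction of the origin's loads covered by bins ending at or before upper_ms."""
    return sum(d for _, end, d in bins if end <= upper_ms)

print(cumulative_density(bins, 300))  # 0.01065 + 0.01065 + 0.02705
```

So about 4.8% of this origin's loads were 4G phone loads that fired DOMContentLoaded within 300 ms.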
Hope this helped

Solar energy conversion W/m^2 to MJ/m^2

I am new here. I am using MERRA monthly solar radiation data and want to convert W/m^2 to MJ/m^2.
I am a bit confused about how to convert the monthly average solar radiation data from W/m^2 to MJ/m^2. As far as I understand from reading different sources, first I have to convert W/m^2 to kW/m^2, and then kW/m^2 to MJ/m^2.
Am I doing this correctly? Taking one instance: for the month of May I have a value of 294 W/m^2, so:
294 * 0.001 = 0.294 kW/m^2
0.294 * 24 (kW to kWh per m^2 per day) = 7.056 kWh/m^2/day
7.056 * 3.6 (kWh to MJ) = 25.40 MJ/m^2/day
I am not sure whether I am doing this right or wrong.
Not sure why you would take the kWh step in between.
Your panels deliver 294 watts per m², i.e. 294 joules per second per m². That is 24 * 60 * 60 * 294 = 25401600 joules per m² per day, or 25.4016 MJ per m² per day.
So if:
1 W/m2 = 1 J/m2·s
then:
294 W/m2 = 294 J/m2·s
If you want it per day:
1 day = 60 s * 60 min * 24 h = 86400 s
294 J/m2·s x 86400 s/day = 25401600 J/m2·day
25401600 J/m2·day x 1 MJ/1000000 J = 25.4016 MJ/m2·day
All together:
294 W/m2 = 294 / (1000000/86400) = 25.4016 MJ/m2·day
A watt is a unit of power and joules are units of energy; they are related by time: 1 watt is 1 joule per second (1 W = 1 J/s), so by extension 1 J = 1 W x 1 s = 1 Ws. A loose analogy: a litre is a unit of volume, while L/s is a unit of flow. So your calculation needs to consider how long you are gathering the solar energy. If the sunlight shines at 90 degrees onto the solar panel for 1 hour, the energy is 294 W/m2 x 3600 s ≈ 1.06 x 10^6 joules per square metre. Of course, as the inclination (the angle of the light) moves away from 90 degrees, the effective power, and hence the energy absorbed, drops as a function of the sine of the angle to the sun; 90 degrees gives a sine of 1 and full power.
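Whichever route you take, the whole conversion is one multiplication and one division. A minimal Python sketch of the direct W/m^2 to MJ/m^2/day conversion, using the May value from the question:

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86400

def w_per_m2_to_mj_per_m2_day(power_w_m2: float) -> float:
    """Convert an average power in W/m^2 to energy in MJ/m^2 per day.

    1 W = 1 J/s, so multiply by seconds per day, then scale J -> MJ.
    """
    joules_per_day = power_w_m2 * SECONDS_PER_DAY
    return joules_per_day / 1_000_000

print(w_per_m2_to_mj_per_m2_day(294))  # 25.4016
```

The kWh detour in the question gives the same number (7.056 kWh/m²/day * 3.6 MJ/kWh = 25.4016 MJ/m²/day); it is just an extra round trip through another energy unit.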

predefined preemption points among processes

I have forked many child processes and assigned a priority and a core to each of them. Process A executes with a period of 3 seconds and process B with a period of 6 seconds. I want the higher-priority process to preempt the lower-priority one only at predefined points, and I tried to achieve this with semaphores. I use the same code snippet in both processes, with different array values in each.
bubblesort_desc() sorts the array in descending order and prints it; bubblesort_asc() sorts in ascending order and prints.
while (a < 3)
{
    printf("time in sort1.c: %d %ld\n", (int)request.tv_sec, (long int)request.tv_nsec);
    int array[SIZE] = {5, 1, 6, 7, 9};

    semaphore_wait(global_sem);
    bubblesort_desc(array, SIZE);
    semaphore_post(global_sem);

    semaphore_wait(global_sem);
    bubblesort_asc(array, SIZE);
    semaphore_post(global_sem);

    semaphore_wait(global_sem);
    a++;
    request.tv_sec = request.tv_sec + 6;
    request.tv_nsec = request.tv_nsec; // if I add a 1 ms offset here for the lower-priority process, it works
    semaphore_post(global_sem);

    semaphore_close(global_sem); // close the semaphore file

    // sleep for the rest of the time after the process finishes execution, until the period of 6
    clk = clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &request, NULL);
    if (clk != 0 && clk != EINTR)
        printf("ERROR: clock_nanosleep\n");
}
I get output like this whenever the two processes activate at the same time, for example at time units of 6, 12, and so on:
time in sort1.c: 10207 316296689
time now in sort.c: 10207 316296689
9
99
7
100
131
200
256
6
256
200
5
131
100
99
1
1
5
6
7
9
One process is not supposed to preempt the other while a set of sorted output is printing, but it behaves as if there were no semaphores. I defined the semaphores as per this link: http://linux.die.net/man/3/pthread_mutexattr_init
Can anyone tell me the reason for this? Is there a better alternative to semaphores?
It's printf that's causing the scrambled output. If the results are printed without '\n' you get a more accurate result, but it is always better to avoid printf statements in real-time applications. I used trace-cmd and kernelshark to visualize the behaviour of the processes.

Daemon to monitor query and send mail conditionally in SQL Server

I've been melting my brains over a peculiar request: execute a certain query every two minutes and, if it returns rows, send them by e-mail. This was already done and delivered, so far so good. The result set of the query looks like this:
+----+---------------------+
| ID | last_update         |
+----+---------------------+
| 21 | 2011-07-20 13:03:21 |
| 32 | 2011-07-20 13:04:31 |
| 43 | 2011-07-20 13:05:27 |
| 54 | 2011-07-20 13:06:41 |
+----+---------------------+
The trouble starts when the user asks me to modify the solution so that, e.g., the first time ID 21 is caught being more than 5 minutes old, the e-mail goes to one set of recipients; the second time, when ID 21 is between 5 and 10 minutes old, another set of recipients is chosen. So far, so good. The gotcha for me is from the third time onwards: the e-mails must now be sent every half-hour instead of every five minutes.
How should I keep track of the status of Mr. ID = 43? How do I know whether he has already received one, two, or three e-mails? And how do I ensure that from the third e-mail onwards the mails are sent every half-hour instead of the usual 5 minutes?
I get the impression that you think this can be solved with a simple mathematical formula. And it probably can be, as long as your system is reliable.
Every thirty minutes can be seen as 360 degrees, or 2 pi radians, on a harmonic function graph; that makes 12 degrees = 1 minute. Let's take cosine, for instance:
f(x) = cos(x)
f(x) = cos(elapsedMinutes * 12 degrees)
where elapsedMinutes is the time since the first 30-minute update was due to go out. This should be a constant number of minutes added to the value of last_update.
Since you have a two-minute window of error, it is time to transmit the 30-minute update if the value of f(x) above is greater than the value you would get one minute before or after the scheduled update, which is cos(1 * 12 degrees) = 0.9781476007338056379285667478696.
Bringing it all together, it's time to send a thirty minute update if this SQL expression is true:
COS(RADIANS( 12 * DATEDIFF(minutes,
DATEADD(minutes, constantNumberOfMinutesBetweenSecondAndThirdUpdate, last_update),
CURRENT_TIMESTAMP))) > 0.9781476007338056379285667478696
If you need a wider window than exactly two minutes, just lower this number slightly.
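The threshold check is easy to sanity-check outside SQL. A minimal Python sketch of the window test described above (the 12-degrees-per-minute mapping and the one-minute threshold come from the answer; the function name is mine):

```python
import math

DEG_PER_MINUTE = 12                      # 30 minutes mapped onto 360 degrees
THRESHOLD = math.cos(math.radians(12))   # cosine value 1 minute off schedule

def time_for_half_hour_update(elapsed_minutes: float) -> bool:
    """True when elapsed_minutes is within about 1 minute of a 30-minute mark."""
    return math.cos(math.radians(DEG_PER_MINUTE * elapsed_minutes)) > THRESHOLD

print(time_for_half_hour_update(30))   # True  (exactly on schedule)
print(time_for_half_hour_update(15))   # False (mid-cycle)
```

Because cosine is periodic, the same comparison fires again at 60, 90, 120... minutes with no per-row state, which is the appeal of this approach; it does still assume the polling job runs reliably every two minutes.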

Is there any short hand on formatting the String?

I am using Objective-C, and I have some variables like this:
1100
920
845
1439
But I want to change them to:
11:00
09:20
08:45
14:39
The problem is how to fill in the leading zero when I have only three digits. I know the logic is simple if I check the string length, but is there a more efficient way to do it? Thank you.
unsigned int time = 1100;
NSString *strTime = [NSString stringWithFormat:@"%02u:%02u", time / 100, time % 100];