I'm executing some code and then waiting somewhere between 1 second and 1 minute. I'm currently using random 0:01:00 (seeded with random/seed), but what I really need is to be able to set a floor so that it waits between 30 seconds and 1 minute.
If you want 0:0:30 to be the minimum and 0:1:0 to be the maximum, try the formula:
0:0:29 + random 0:0:31
This formula yields a "discretely distributed (pseudo)random value". If you want a "continuously distributed (pseudo)random value", you can use (in R3 only) the formula:
0:0:30 + random 30.0
R2 does not have native support for "continuously distributed (pseudo)random values".
Not my area of expertise, but:
00:00:30 + to time! (random 100% * (to integer! 00:00:30))
...appears to work, I think.
>> random/seed now/precise
>> t1: now wait 30 + random 30 difference now t1
== 0:00:39
How about the following:
0:00:30 + random 0:00:30
You could generate a whole number from 1 to 30 and subtract that number in seconds from 1 minute and 1 second.
(and as for seeding: do use it, but only once, not before every call)
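For what it's worth, here is the same floor-plus-random-span idea sketched in Python rather than REBOL, purely to make the arithmetic explicit (the variable names are my own):

import random
import time

# a floor of 29 s plus a random 1..31 s gives an integral wait of 30..60 s,
# mirroring the REBOL formula 0:0:29 + random 0:0:31
delay = 29 + random.randint(1, 31)

# for a continuously distributed wait of 30..60 s instead:
# delay = 30 + random.uniform(0, 30)

time.sleep(delay)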
I have a column containing measurement values in meters.
I want to round them up (ceil) to the next 100m and return the result as a km value.
Special thing is: if the original value is a "round" number (100m increment) it should be ceiled up to the next 100m increment (see line 3 in the example below).
Example:
meter_value  kilometer_value
1111         1.2
111          0.2
1000         1.1
I think I can get the first two lines by doing:
ceil(meter_value/1000,1) as kilometer_value
The solution I thought of to fix the edge case in line three is to just add 1 meter always:
ceil((meter_value+1)/1000,1) as kilometer_value
It seems a bit clumsy; is there a better way or an alternative function to achieve this?
You can check to see if it's divisible by 100 and only add one if it is:
ceil(((meter_value + iff(meter_value % 100 = 0, 1, 0))/1000), 1)
This avoids the problem that, if decimal values are allowed, always adding 1 would be inaccurate for a value like 999.5.
Greg's answer is good; simpler to read, to me, would be to:
divide by 100
floor
add 1
ceil
divide by 10
select
column1 as meter_value
,ceil(((meter_value + iff(meter_value % 100 = 0, 1, 0))/1000), 1) as greg
,ceil(floor(meter_value/100)+1)/10 as simeon
from values
(1111)
,(111)
,(1000)
,(1)
,(0)
;
METER_VALUE  GREG  SIMEON
1,111        1.2   1.2
111          0.2   0.2
1,000        1.1   1.1
1            0.1   0.1
0            0.1   0.1
Do we want to mention negative values? I mean, it's a distance, so it's a directionless magnitude, right?
Anyway, with negative values, in both our methods the +1 forces the boundary case to be wrong.
Actually:
Once you have floored, adding the 1 or 0.1 (depending on whether you divide by 100 or by 1000 first) means you don't need to ceil at all.
Thus two shorter forms can be:
,ceil(floor(meter_value/100)+1)/10 as version_a
,(floor(meter_value/100)+1)/10 as version_b
,floor(meter_value/1000,1)+0.1 as version_c
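If you want to sanity-check the boundary behaviour outside SQL, here is a small Python sketch of the same floor-then-step-up arithmetic as version_b above (Python and the function name are mine, purely for illustration):

import math

def to_next_100m_km(meters):
    # floor to whole hundreds of metres, step up one increment, express in km
    return (math.floor(meters / 100) + 1) / 10

for m in (1111, 111, 1000, 1, 0):
    print(m, "->", to_next_100m_km(m))
# 1111 -> 1.2, 111 -> 0.2, 1000 -> 1.1, 1 -> 0.1, 0 -> 0.1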
How to handle decimal numbers in Solidity?
If you want to find a percentage of some amount and then do some calculation on that number, how do you do that?
Suppose I compute 15% of 45 and then need to divide that value by 7; how do I get the answer?
I have done research, but the answers I get say it is not possible to do that calculation. Please help.
You have a few options. To just multiply by a percentage (but truncate to an integer result), 45 * 15 / 100 = 6 works well. (45 * 15%)
If you want to keep some more digits around, you can just scale everything up by, e.g., some exponent of 10. 4500 * 15 / 100 = 675 (i.e. 6.75 * 100).
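To combine the two steps (15% of 45, then divide by 7) while keeping extra digits, multiply everything before dividing and carry a scale factor. A minimal sketch, written in Python but using only integer operations so it maps onto Solidity's uint arithmetic (the scale of 10**4 is an arbitrary choice of mine):

SCALE = 10 ** 4            # keep 4 "decimal places" as extra integer digits

amount = 45
pct = 15                   # 15%

part = amount * pct * SCALE // 100   # 15% of 45 at scale: 67500, i.e. 6.75
result = part // 7                   # still at scale: 9642, i.e. ~0.9642

print(part / SCALE, result / SCALE)  # 6.75 0.9642 (exact value is 0.96428...)

In Solidity you would do the same on a uint256: multiply first, divide last, and remember that the final value is scaled by 10**4.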
How do I make a biased random number generator (RNG) in VB.NET?
I know I could make it by fiddling with the output of the Randomize()/Rnd methods, but is there a built-in way of doing this?
I want the biased RNG to give me either a 2 or 4 (though using 1 or 2 as a substitute is also OK by me), with 2 occurring on average 90% of the time and 4 occurring on average 10% of the time.
Create a random number generator that returns values from 1 to 10; if the value is between 1 and 9, send a 2, and if it is 10, send a 4.
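A minimal sketch of that threshold approach, in Python just to show the logic (the function name is my own); the VB.NET version would use Random.Next the same way:

import random

def biased_two_or_four():
    # draw 1..10: values 1-9 (90% of draws) give 2, value 10 (10%) gives 4
    return 2 if random.randint(1, 10) <= 9 else 4

sample = [biased_two_or_four() for _ in range(10000)]
print(sample.count(2) / len(sample))   # should be close to 0.9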
You might want to look at this
http://msdn.microsoft.com/en-us/library/vstudio/ctssatww(v=vs.100).aspx?cs-save-lang=1&cs-lang=vb#code-snippet-2
If you want to use a lookup table (a "mask") to generate your values, here is what I think you can do:
Dim numbers() As Integer = {2, 2, 2, 2, 4, 2, 2, 2, 2, 2} ' set 10% for 4, 90% for 2
Dim r As New Random()
Return numbers(r.Next(0, 10))
I have some irregularly stamped time series data in pandas, with timestamps and an observation at every timestamp. Irregular basically means that the timestamps are unevenly spaced: the gap between two successive timestamps is not constant.
For instance the data may look like
Timestamp Property
0 100
1 200
4 300
6 400
6 401
7 500
14 506
24 550
.....
59 700
61 750
64 800
Here the timestamp is, say, seconds elapsed since a chosen origin time. As you can see, we can have data at the same timestamp, 6 secs in this case. Basically the underlying times are strictly different; it is just that second resolution cannot capture the difference.
Now I need to shift the timeseries data ahead, say I want to shift the entire data by 60 secs, or a minute. So the target output is
Timestamp Property
0 750
1 800
So the 0 point got matched to the 61 point and the 1 point got matched to the 64 point.
Now I can do this by writing something dirty, but I am looking to use inbuilt pandas features as much as possible. If the timeseries were regular, or evenly spaced, I could have just used the shift() function, but the fact that the series is uneven makes it a bit tricky. Any ideas from pandas experts would be welcome. I feel this must be a commonly encountered problem. Many thanks!
Edit: added a second, more elegant way to do it. I don't know what will happen if you have a timestamp at 1 and two timestamps of 61; I think it will choose the first 61 timestamp, but I'm not sure.
new_stamps = pd.Series(range(df['Timestamp'].max() + 1))   # a regular grid of timestamps
shifted = pd.DataFrame(new_stamps)
shifted.columns = ['Timestamp']
merged = pd.merge(df, shifted, on='Timestamp', how='outer')
merged['Timestamp'] = merged['Timestamp'] - 60
merged = merged.sort_values('Timestamp').bfill()            # .sort(columns=...) on older pandas
results = pd.merge(df, merged, on='Timestamp')
[Original Post]
I can't think of an inbuilt or elegant way to do this. Posting this in case it's more elegant than your "something dirty", which I guess is unlikely. How about:
lookup_dict = {}

def assigner(row):
    lookup_dict[row['Timestamp']] = row['Property']

df.apply(assigner, axis=1)

sorted_keys = sorted(lookup_dict.keys())
df['Property_Shifted'] = None

def get_shifted_property(row, shift_amt):
    # take the first observation whose timestamp is at least shift_amt ahead
    for i in sorted_keys:
        if i >= row['Timestamp'] + shift_amt:
            row['Property_Shifted'] = lookup_dict[i]
            return row
    return row  # nothing far enough ahead; Property_Shifted stays None

df = df.apply(get_shifted_property, shift_amt=60, axis=1)
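As an aside, on a newer pandas (0.19+) pd.merge_asof does this kind of as-of matching natively, which may be the inbuilt feature you were hoping for. A hedged sketch using the example data from the question (the column names "lookup" and "Property_Shifted" are my own):

import pandas as pd

df = pd.DataFrame({
    "Timestamp": [0, 1, 4, 6, 6, 7, 14, 24, 59, 61, 64],
    "Property":  [100, 200, 300, 400, 401, 500, 506, 550, 700, 750, 800],
})

shift = 60
left = df[["Timestamp"]].copy()
left["lookup"] = left["Timestamp"] + shift     # where each row should look

right = df.rename(columns={"Timestamp": "lookup", "Property": "Property_Shifted"})

# direction="forward" takes the first observation at or after the lookup time;
# allow_exact_matches=False makes it strictly after, which reproduces the
# example output above (1 -> 64 -> 800 rather than 61 -> 750)
out = pd.merge_asof(
    left.sort_values("lookup"),
    right.sort_values("lookup"),
    on="lookup",
    direction="forward",
    allow_exact_matches=False,
)
print(out[["Timestamp", "Property_Shifted"]].dropna())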
I'm looking for a way to generate some (6 by default) equations where all subsums are unique.
For example,
a+b+c=50
d+e+f=50
g+h+i=50
a, d and g have to be distinct.
a+b and d+e have to be distinct.
e+f and h+i have to be distinct.
a+c and d+f have to be distinct.
But a+b and e+f can be the same, so I only care about the subsums of aligned parameters.
I could only find ways to check whether a sequence is subsum-distinct, but I found nothing on how to generate such a sequence.
You didn't state whether you need it to be a random sequence, so let's suppose that this is not required.
One simple approach is this:
1 + 2 + 47 = 50
3 + 4 + 43 = 50
5 + 6 + 39 = 50
7 + 8 + 35 = 50
9 + 10 + 31 = 50
11 + 12 + 27 = 50
The first two numbers are the two smallest numbers not yet used; the third number is the final sum minus those two.
a and b are always increasing, c is always decreasing
a + b is always increasing, b + c and a + c are always decreasing
You can generate it this way in a loop.
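A minimal sketch of that loop in Python (the function and variable names are my own):

def generate_triples(n=6, total=50):
    # first two summands are the smallest numbers not used yet,
    # the third tops the row up to `total`
    triples = []
    next_small = 1
    for _ in range(n):
        a, b = next_small, next_small + 1
        c = total - a - b
        triples.append((a, b, c))
        next_small += 2
    return triples

for a, b, c in generate_triples():
    print(a, "+", b, "+", c, "=", a + b + c)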
EDIT after comment that it has to be a random sequence:
Possibly you could create several sets (some sort of hashset/hashmap would be the most appropriate):
set of first summands
set of sums of first and second summands
set of sums of second and third summands
set of sums of first and third summands
set of previously generated triples
You would generate random triples this way:
1. If the demanded number of triples has not yet been reached, generate a random triple; otherwise finish.
2. Check that the triple was not previously generated; if it is new, proceed with step 3, otherwise go back to step 1.
3. Conduct the checks against the first four sets. If none of the sums is already contained in those sets, add the triple (and its sums) and go back to step 1.
However, I am not sure whether this approach guarantees that you will get results (especially for small final sums).
So I would add a counter: if too many consecutive attempts are unsuccessful, I would switch to a brute-force approach (which should not be a problem if the final sum is small, and on the other hand is very unlikely to be needed if the final sum is large).
Overall, performance should be good.
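A rough Python sketch of the randomized procedure above (the names are my own, and the brute-force fallback is only hinted at by returning None when the attempt budget runs out):

import random

def random_subsum_distinct_triples(n=6, total=50, max_attempts=10000):
    firsts, first_pairs, last_pairs, outer_pairs = set(), set(), set(), set()
    seen = set()                      # previously generated triples
    triples = []
    attempts = 0
    while len(triples) < n and attempts < max_attempts:
        attempts += 1
        a = random.randint(1, total - 2)
        b = random.randint(1, total - a - 1)
        c = total - a - b
        triple = (a, b, c)
        if triple in seen:
            continue
        seen.add(triple)
        # reject if any aligned subsum clashes with an earlier triple
        if a in firsts or a + b in first_pairs or b + c in last_pairs or a + c in outer_pairs:
            continue
        firsts.add(a)
        first_pairs.add(a + b)
        last_pairs.add(b + c)
        outer_pairs.add(a + c)
        triples.append(triple)
    return triples if len(triples) == n else None   # None -> switch to brute force

print(random_subsum_distinct_triples())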