This might be a duplicate question, but I didn't find a conclusive answer in the existing ones.
I have vehicle data, i.e., velocity (m/s), yaw rate (rad/s), and sampling times. From these I calculated the curvature of the road using the equation curvature = YawRate / Velocity.
mSec   Speed (km/h)   YawRate (with offset 500)   Velocity (m/s)
22     113            513                         31.38888889
53     113            513                         31.38888889
84     113            513                         31.38888889
115    113            513                         31.38888889
915    110            510                         30.55555556
946    110            510                         30.55555556
978    110            510                         30.55555556
24     109            510                         30.27777778
56     109            510                         30.27777778
87     109            511                         30.27777778
118    109            511                         30.27777778
Now I want to plot the road curvature on an image of the road (something like showing the trail of the vehicle). I have the equation for curvature:
Curvature = YawRate / Velocity
Remember: I have to plot this trajectory on an image. How can I do it?
P.S. At high speeds the steering angle is not significant, so I am ruling out steering angle as an input.
What you want to plot are the known positions of the vehicle across time.
From the available data set we can infer that the polar coordinates are given by the steering angle (which seems to be absolute, so deduct a quarter turn) and the instantaneous radius of curvature.
Convert from polar to Cartesian.
Without better information, you have to assume that the instantaneous center of rotation is fixed over each sampling interval.
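Under that assumption, a minimal dead-reckoning sketch in Python/matplotlib is below: it integrates yaw rate into heading and speed into position, then draws the trail over the road image. The numbers and the file name road.png are placeholders; it assumes you have already converted the yaw rate to rad/s (removing the 500 offset and any sensor scaling) and the sample times to seconds, and the image extent (how pixels map to metres) is something you must calibrate yourself:

import numpy as np
import matplotlib.pyplot as plt

# placeholder data: sample times (s), velocity (m/s), yaw rate (rad/s)
t = np.array([0.022, 0.053, 0.084, 0.115, 0.915, 0.946])
v = np.array([31.39, 31.39, 31.39, 31.39, 30.56, 30.56])
w = np.array([0.013, 0.013, 0.013, 0.013, 0.010, 0.010])

dt = np.diff(t, prepend=t[0])              # time step per sample
heading = np.cumsum(w * dt)                # integrate yaw rate -> heading (rad)
x = np.cumsum(v * dt * np.cos(heading))    # integrate speed -> x position (m)
y = np.cumsum(v * dt * np.sin(heading))    # integrate speed -> y position (m)

img = plt.imread('road.png')               # your road image
# extent maps image pixels onto world coordinates; calibrate for your image
plt.imshow(img, extent=[x.min() - 10, x.max() + 10, y.min() - 10, y.max() + 10])
plt.plot(x, y, 'r-', linewidth=2)          # the vehicle's trail
plt.show()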
I have logs of mouse movement, i.e., coordinates and timestamps. I want to plot the mouse movement using this log, but I have no idea what API or library can be used to do this. I want to know how to start, if some way exists.
My log is as follows:
Date       hr:min:sec  ms   x    y
13/6/2020  13:13:33    521  291  283
13/6/2020  13:13:33    638  273  234
13/6/2020  13:13:33    647  272  233
13/6/2020  13:13:33    657  271  231
13/6/2020  13:13:33    667  269  230
13/6/2020  13:13:33    677  268  229
13/6/2020  13:13:33    687  267  228
13/6/2020  13:13:33    697  264  226
You're looking for geom_path() from ggplot2. The geom connects all your observations with a line, in the order they appear in the data frame. So, here's some x,y data that's been expanded a bit:
library(ggplot2)

df <- data.frame(
  x = c(291, 273, 272, 271, 269, 268, 267, 264, 262, 261, 261, 265, 268, 280, 290),
  y = c(283, 234, 233, 231, 230, 229, 228, 226, 230, 235, 237, 248, 252, 246, 235)
)
And some code to make a simple plot using geom_path():
p <- ggplot(df, aes(x=x,y=y)) + theme_classic() +
geom_path(color='blue') + geom_point()
p
If you want, you can even save that as an animation based on your time points. See the code below using the gganimate package:
library(gganimate)

df$time <- 1:15  # one time stamp per row, in order
# rebuild the plot so it picks up the new time column
p <- ggplot(df, aes(x = x, y = y)) + theme_classic() +
  geom_path(color = 'blue') + geom_point()
a <- p + transition_reveal(time)
animate(a, fps = 20)
I am using Keras to create an LSTM neural network that can predict the concentration of a certain drug in the blood. I have a dataset with time stamps at which a drug dosage was administered and at which the concentration in the blood was measured. These dosage and measurement time stamps are disjoint. Furthermore, several other variables are measured at all time steps (both dosages and measurements). These variables, along with the dosages (0 when no dosage was given at time t), are the input for my model. The observed concentration in the blood is the response variable.
I have normalized all input features using the MinMaxScaler().
Q1:
Now I am wondering, do I need to normalize the time variable that corresponds with all rows as well and give it as input to the model? Or can I leave this variable out since the time steps are equally spaced?
The data looks like:
PatientID  Time  Dosage  DosageRate  DrugConcentration
1             0     100          12                 NA
1           309     100          12                 NA
1           650     100          12                 NA
1          1030     100          12                 NA
1          1320      NA          NA                 12
1          1405     100          12                 NA
1          1812      90           8                 NA
1          2078      90           8                 NA
1          2400      NA          NA                  8
2             0     120        13.5                 NA
2           800     120        13.5                 NA
2           920      NA          NA                 16
2          1515     120        13.5                 NA
2          1832     120        13.5                 NA
2          2378     120        13.5                 NA
2          2600     120        13.5                 NA
2          3000     120        13.5                 NA
2          4400      NA          NA                  2
As you can see, the time between two consecutive dosages and measurements differs for a patient and between patients, which makes the problem difficult.
Q2:
One approach I can think of is aggregating over measurement intervals, taking the average dosage and SD between two measurements. Then we only predict on time stamps at which we know the observed drug concentration. Would this work, or would we lose too much information?
Q3:
A second approach I can think of is to create new data points so that all intervals between dosages are the same, and to set the dosage and dosage rate at those new time points to zero (as sketched below). The disadvantage is that we can then only calculate the error on the time stamps at which we know the observed drug concentration. How should we tackle this?
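For what it's worth, that second approach could be sketched with pandas along these lines; the 5-minute grid, the function name and the aggregation choices are only illustrative assumptions, not a recommendation:

import pandas as pd

# df has columns PatientID, Time (s), Dosage, DosageRate, DrugConcentration
def regularize(df, step='300s'):
    out = []
    for pid, g in df.groupby('PatientID'):
        g = g.set_index(pd.to_timedelta(g['Time'], unit='s'))
        r = g.resample(step).agg({
            'Dosage': 'sum',              # bins with no dose sum to 0
            'DosageRate': 'mean',
            'DrugConcentration': 'mean',  # stays NaN where never measured
        })
        r['PatientID'] = pid
        out.append(r)
    return pd.concat(out)

The error can then be computed only on the rows where DrugConcentration is not NaN, e.g. by masking the loss or giving the unobserved rows a sample weight of zero.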
I am computing my fuel consumption from an OBD2 parameter, MAF to be specific, and I am receiving data on a per-second basis. Here is a section of my data.
TS        RS  EngS  MAF R  MAP  EL     TD Travel
14:41:22  31   932   1056   98  23978      12130
14:41:23  29  2084   2639  107  23210      12130
14:41:24  32  2154   3867  149  38826      12130
14:41:25  36  2426   4683  184  36266      12130
14:41:26  39  2391   3031  133    682      12130
14:41:27  40  1784   2794  132  30634      12130
14:41:28  42  1864   2853  140  30378      12130
14:41:29  43  1953   2900  132  29098      12130
14:41:30  46  2031   3017  135  29098      12130
14:41:31  45  2027   2969  126  20138      12130
14:41:32  47  2122   4253  174  42154      12130
14:41:33  51  2220   4722  183  20906      12130
Where:
TS : Time Stamp
RS : Road Speed
EngS : Engine Speed
MAF R : Mass Air Flow Rate
MAP : Manifold Absolute Pressure
EL : Engine Load
TD Travel : Total Distance Traveled
So basically, from this data I am trying to compute my instantaneous fuel consumption and the mileage in KMPL.
For that, since the data is per second, I am taking the MAF of each row and using this formula:
Fuel Consumption = MAF / (14.7 * 710)
where 14.7 is the ideal air/fuel ratio and 710 is the density of gasoline in grams/L.
So this should give my consumption. I am calculating the distance (in km) as RS / 3600, and further dividing distance by fuel consumption to get the mileage. However the calculation is coming out horribly wrong: the mileage of my car is around 14 KMPL. Here are my results.
TS        Distance (km)  Fuel Consum. (L)  Mileage (KMPL)
14:41:22  0.0086111111   0.1008355216      0.0853975957
14:41:23  0.0080555556   0.2519933158      0.0319673382
14:41:24  0.0088888889   0.369252805       0.0240726374
14:41:25  0.01           0.4471711626      0.0223628016
14:41:26  0.0108333333   0.2894246837      0.0374305785
14:41:27  0.0111111111   0.2667939842      0.0416467828
14:41:28  0.0116666667   0.2724277871      0.0428248043
14:41:29  0.0119444444   0.2769157317      0.0431338602
14:41:30  0.0127777778   0.2880878491      0.0443537546
14:41:31  0.0125         0.2835044163      0.0440910239
14:41:32  0.0130555556   0.4061112437      0.0321477323
14:41:33  0.0141666667   0.4508952017      0.0314189785
Can someone tell me what I am doing so wrong that the computation is this far off? As the formulas are simple, there isn't much scope for error. Thank you.
MAF is in g/s:
MAF (g/s) * (1 / 14.7) * (1 L / 710 g) = fuel consumption FC in L/s
Speed V is in km/h, so V (km/h) * (1 h / 3600 s) = v in km/s
so FC (L/s) / v (km/s) = L/km
You want km/L, i.e. v / FC, so your final formula is
KMPL = V * (1/3600) * (1/MAF) * 14.7 * 710
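In code, that chain of unit conversions looks like the sketch below. One thing worth checking against your logger's documentation: the standard OBD-II MAF PID reports the value in hundredths of a gram per second, so raw readings like 1056 may need to be divided by 100 first. That scaling is an assumption about your data, not a certainty, but it would bring your first row from 0.085 to roughly 8.5 KMPL, much closer to the expected 14:

AIR_FUEL_RATIO = 14.7   # ideal air/fuel ratio for gasoline
FUEL_DENSITY = 710.0    # density of gasoline, grams per litre

def mileage_kmpl(speed_kph, maf_raw, maf_scale=0.01):
    # maf_scale=0.01 assumes the raw MAF is in hundredths of g/s (standard
    # OBD-II PID scaling); use 1.0 if your logger already reports g/s
    maf_gps = maf_raw * maf_scale                         # g of air per second
    fuel_lps = maf_gps / (AIR_FUEL_RATIO * FUEL_DENSITY)  # litres of fuel per second
    dist_kmps = speed_kph / 3600.0                        # km per second
    return dist_kmps / fuel_lps                           # km per litre

print(mileage_kmpl(31, 1056))  # first row above -> about 8.5 KMPL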
Divide the MAF by 14.7 to get grams of fuel per second.
Next divide by 454 to get lbs of fuel per second.
Next divide by 6.701 (lbs per gallon of gasoline) to get gallons of fuel per second.
Multiply by 3600 to get gallons per hour.
Put another way: GPH = MAF * 0.0805, and then MPG = MPH / GPH.
I am trying to query the bitcoin daemon to find out the total amount of bitcoins mined/produced so far, in order to calculate the market capitalization. However, I can't seem to find any command that does that.
I've checked the following link to no avail:
https://en.bitcoin.it/wiki/Original_Bitcoin_client/API_calls_list
You can find that information from the number of blocks. 50 BTC per block were created for the first 210 000 blocks, then 25 BTC per block for the next 210 000 blocks, etc.
If I take, say:
http://bitcoincharts.com/
as I write this SO answer I can read:
Blocks 279383
Total BTC 12.235M
Starting from 279 383 blocks you can find:
210 000 * 50 = 10 500 000
(279 383 - 210 000) * 25 = 1 734 575
10 500 000 + 1 734 575 = 12 234 575
which that site rounded up as "12.235M".
This is not 100% correct, as the first 50 bitcoins were not usable, etc.; moreover, it's a fact that quite a lot of the early bitcoins mined are lost forever.
But that approximation should be "close enough" and seems to be what most sites are using.
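The same reward schedule is easy to compute for any block height; here is a small Python sketch (it ignores the unspendable genesis output, lost coins, and satoshi-level rounding in later halving eras, as noted above):

def total_btc(height):
    # the subsidy starts at 50 BTC and halves every 210 000 blocks
    total, reward, remaining = 0.0, 50.0, height
    while remaining > 0:
        blocks = min(remaining, 210000)
        total += blocks * reward
        remaining -= blocks
        reward /= 2.0
    return total

print(total_btc(279383))  # -> 12234575.0, matching the hand calculation above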
I have the following Excel file:
W1000x554  1032  408  52.1  29.5  70700  12300
W1000x539  1030  407  51.1  28.4  68700  12000
W1000x483  1020  404  46    25.4  61500  10700
W1000x443  1012  402  41.9  23.6  56400   9670
W1000x412  1008  402  40    21.1  52500   9100
W1000x371  1000  400  36.1  19    47300   8140
W1000x321   990  400  31    16.5  40900   6960
W1000x296   982  400  27.1  16.5  37800   6200
W1000x584  1056  314  64    36.1  74500  12500
I want to define a function that asks the user for one of the first column's names and then reads all the relevant data in that row.
For example, if the user enters W1000x412, it should read: 1008 402 40 21.1 52500 9100.
Any ideas?
I suspect what @Marc means is that a formula such as the one in J2 of the example (copied across and down as necessary) will 'pick out' the values you want. It is not clear to me from your question whether these should be kept separate (as in Row 2 of the example) or strung together (CONCATENATE [&], as in J7 of the example, where they are space [" "] delimited).
I am also not entirely sure about your 'define a function', but have assumed you do not require a UDF.
I have used Row 1 to provide the column offset for VLOOKUP, to save adjusting the formula manually for each column.
Column I holds the expected user input, which might best come from a selection in a Data Validation list with Source $A$2:$A$10.
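If you are open to doing the lookup outside Excel, the same row retrieval is a few lines of pandas; the file name and the column labels here are guesses, since the sheet has no header row:

import pandas as pd

# name the columns ourselves because the sheet has no header row
cols = ['Section', 'c1', 'c2', 'c3', 'c4', 'c5', 'c6']  # placeholder labels
df = pd.read_excel('sections.xlsx', header=None, names=cols)

def lookup(section):
    row = df.loc[df['Section'] == section]
    return row.iloc[0, 1:].tolist() if not row.empty else None

print(lookup('W1000x412'))  # -> [1008, 402, 40.0, 21.1, 52500, 9100]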