About Nyquist sampling example - frequency

If I have eight voltage levels (0 V, 2 V, 4 V, 6 V, 8 V, 10 V, 12 V, 14 V) and a bandwidth of 125 MHz, what is my maximum data rate in Mbps? And if I increase the bandwidth to 200 MHz?

To find the maximum data rate or channel capacity (C) of a noiseless channel, given the bandwidth (B) and the number of signal levels (M), Nyquist's theorem gives C = 2*B*log2(M).
For 125 MHz: C = 2*125*log2(8) = 750 Mbps.
For 200 MHz: C = 2*200*log2(8) = 1200 Mbps.
Refer to http://computernetworkingsimplified.in/physical-layer/relationship-bandwidth-data-rate-channel-capacity/ for details.
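As a quick sanity check, here is a minimal Python sketch of the same formula (the function name is just for illustration):

import math

def nyquist_capacity_mbps(bandwidth_mhz, levels):
    # Noiseless-channel capacity per Nyquist: C = 2 * B * log2(M)
    return 2 * bandwidth_mhz * math.log2(levels)

print(nyquist_capacity_mbps(125, 8))  # 750.0
print(nyquist_capacity_mbps(200, 8))  # 1200.0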

Related

Code to obtain the maximum value out of a list and the three consecutive values after the maximum value

I am writing a model to calculate the maximum production capacity for a machine in a year based on 15-min data. As the maximum capacity is not the sum of the required capacity for all 15-min intervals over the year, I want to write a piece of code that determines the maximum value in the list and then adds this maximum value and the three next consecutive values after it to a new variable. A simplified example would be:
fifteen_min_capacity = [10, 12, 3, 4, 8, 12, 10, 9, 2, 10, 4, 3, 15, 8, 9, 3, 4, 10]
The piece of code I want to write would be able to determine the maximum capacity in this list (15) and then add this capacity plus the three consecutive ones (8, 9, 3) to a new variable:
hourly_capacity = 35
Does anyone know the code that would give this output?
I have tried using max(), sum(), and a combination of both. However, I do not get working code. Any help would be much appreciated!
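One minimal sketch (assuming that when the maximum sits within the last three entries the shorter slice is acceptable; list.index returns the first occurrence if the maximum appears more than once):

fifteen_min_capacity = [10, 12, 3, 4, 8, 12, 10, 9, 2, 10, 4, 3, 15, 8, 9, 3, 4, 10]

# Position of the (first) maximum value
i = fifteen_min_capacity.index(max(fifteen_min_capacity))

# Sum the maximum and the up-to-three values that follow it
hourly_capacity = sum(fifteen_min_capacity[i:i + 4])
print(hourly_capacity)  # 35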

Count (or sum) the number of gridpoints from high-resolution 2-D data that are closest to the nearest gridpoints of coarse-resolution 2-D data?

I have two datasets. The first one has a high spatial resolution and its values are 0 and 1, and the second dataset has coarse spatial resolution data (its values are not important in my case).
I would like to count the number of gridpoints from the high-resolution data which are closest to the gridpoints of the coarse-resolution data, where the values of the high-resolution data are 1.
In other words, count the number of high-resolution gridpoints with the value of 1, that fall within the pixels of the coarse-resolution data.
Example of the data for the coarse spatial resolution data:
import numpy as np
import xarray as xr

lon = [176.25, 176.75, 177.25, 177.75, 178.25, 178.75, 179.25, 179.75]
lat = [-87.25, -87.75, -88.25, -88.75, -89.25, -89.75]
temperature = np.random.rand(6, 8)
coarse_res = xr.DataArray(temperature, coords={'lat': lat, 'lon': lon}, dims=["lat", "lon"])
Example of the data for high spatial resolution data
lon = [176.125,176.375,176.625,176.875,177.125,177.375,177.625,177.875,178.125,178.375,178.625,178.875,179.125,179.375,179.625,179.875]
lat = [-87.125, -87.375, -87.625, -87.875, -88.125, -88.375, -88.625, -88.875, -89.125, -89.375, -89.625, -89.875]
ds_2 = np.random.randint(0, 2, size=(12, 16))
high_res = xr.DataArray(ds_2, coords={'lat': lat,'lon': lon}, dims=["lat", "lon"])
In the end, I would like to calculate the fraction of the high_res gridpoints/pixels with the value of 1 surrounding the coarse-resolution gridpoint. For example, if the first gridpoint of the coarse_res data is surrounded by 4 high-res gridpoints and these values are 0, 1, 1, 1 the fraction should be 0.75.
You can do this with xr.Dataset.groupby_bins:
low_lon_edges = np.arange(176., 180.001, 0.5)  # edges must span the full coarse grid
low_lat_edges = np.arange(-90., -86.9, 0.5)
low_lon_centers = (low_lon_edges[:-1] + low_lon_edges[1:]) / 2
low_lat_centers = (low_lat_edges[:-1] + low_lat_edges[1:]) / 2
aggregated = (
    high_res
    .groupby_bins('lon', bins=low_lon_edges, labels=low_lon_centers)
    .sum(dim="lon")
    .groupby_bins('lat', bins=low_lat_edges, labels=low_lat_centers)
    .sum(dim="lat")
)
Additionally, if the cells nest perfectly (it looks like you're dealing with 1/4-degree and 1/2-degree data which are both centered on the half cell, so this should work fine) you can just use xr.Dataset.coarsen:
aggregated = high_res.coarsen(lat=2, lon=2, boundary="exact").sum()
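To get the fraction described at the end of the question rather than the raw count, the same aggregations work with mean() in place of sum(); for the nested case, for example:

# Fraction of high-res gridpoints equal to 1 within each coarse cell
fraction = high_res.coarsen(lat=2, lon=2, boundary="exact").mean()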

Dask aggregate value into fixed range with start and end time?

In Dask, or even pandas, how would you go about grouping a Dask DataFrame that has 3 columns of time / level / spread into a set of fixed ranges?
Time only moves in one direction, like a loop counting up. The end result would be the start time and end time, with the high of level, the low of level, the first value of level, and the last value of level over each fixed range. Example:
12:00:00, 10, 1
12:00:01, 11, 1
12:00:02, 12, 1
12:00:03, 11, 1
12:00:04, 9, 1
12:00:05, 6, 1
12:00:06, 10, 1
12:00:07, 14, 1
12:00:08, 11, 1
12:00:09, 7, 1
12:00:10, 13, 1
12:00:11, 8, 1
For a fixed level range of 7: the level cannot move more than 7 in total distance from high to low within a single bin. The bins are not fixed in time; in the example below the first bin happens to span 8 seconds and the second only 2, but that does not matter. What matters is that the distance from high to low never goes past 7, the fixed bin size. The first bin could just as well have spanned 5 seconds and the next one 200. So the first few rows in Dask would look something like this:
First Time, Last Time, High Level, Low Level, First Level, Last Level, Spread
12:00:00, 12:00:07, 13, 6, 10, 13, 1
12:00:07, 12:00:09, 14, 7, 13, 7, 1
12:00:09, X, 13, 7, X, X, X
How could this be aggregated in Dask with a fixed window of level moving forward in time, starting a new bin each time the level moves outside the high/low range of X?
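A minimal pandas sketch of one possible interpretation (assumptions: columns named time / level / spread, and a bin closes when the next row would push high minus low strictly above the fixed range; because the bin boundaries depend on a running high/low, the logic is sequential, so under Dask it would need map_partitions per partition or a compute() on the full frame first):

import pandas as pd

def bin_by_level_range(df, max_range=7):
    # Walk rows in time order, tracking the running high/low of 'level';
    # close the current bin when the next row would stretch high-low past max_range.
    bins, start = [], 0
    hi = lo = df['level'].iloc[0]
    for i, lvl in enumerate(df['level']):
        new_hi, new_lo = max(hi, lvl), min(lo, lvl)
        if new_hi - new_lo > max_range:
            bins.append((start, i - 1, hi, lo))
            start, hi, lo = i, lvl, lvl
        else:
            hi, lo = new_hi, new_lo
    bins.append((start, len(df) - 1, hi, lo))
    return pd.DataFrame([
        {'first_time': df['time'].iloc[s], 'last_time': df['time'].iloc[e],
         'high': h, 'low': l,
         'first_level': df['level'].iloc[s], 'last_level': df['level'].iloc[e],
         'spread': df['spread'].iloc[s]}
        for s, e, h, l in bins
    ])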

Using DAX for Production Planning

My question is based on building a ramp up for planning production lines.
I have a WIP where a ramp up category is selected for each MSO (Master Sew Order). The ramp up is based on hour fences (for example 1-6 hours, 6-12 hours, etc.).
On the WIP, an MSO will have units (for example 1,920 units), divided by the capacity per hour (80 pcs/hr), to give the time needed: 24 hours. This then needs to be calculated based on the ramp up, for hours 1-6, 6-12, 12-18, and 18-24, and multiplied by the related efficiency.
For example:
Hours 1-6: 20% efficiency * 80 units = 16 units/hr (6 x 16 = 96 units produced)
Hours 6-12: 40% efficiency * 80 units = 32 units/hr (192 units)
Hours 12-18: 60% efficiency * 80 Units = 48 units/hr (288 units)
Hours 18-24: 80% efficiency * 80 units = 64 units/hr (384 units)
Hours 24+: 100% efficiency * 80 units = 80 units/hr; (1920 - 960) / 80 = 12 hours remaining
TOTAL TIME = 36 hours to produce
How would Power BI know to divide up the original 24 hour estimate into parts, multiply by respective efficiency, and return a new result of 36 hours?
Thank you so much in advance!
Kurt
I'm not sure how to do this in DAX but you tagged PowerQuery so here's a custom query that computes 36 based on your parameters:
let
    MSO = 1920,
    Capacity = 80,
    Efficiency = {
        {6, 0.2},
        {12, 0.4},
        {18, 0.6},
        {24, 0.8},
        {#infinity, 1.0}
    },
    Accumulated = List.Accumulate(
        Efficiency,
        [Remaining = MSO, RunningHours = 0],
        (state, current) =>
            let
                until = current{0},
                eff = current{1},
                currentCapacity = eff * Capacity,
                RemainingHours = state[Remaining] / currentCapacity,
                CappedHours = List.Min({RemainingHours, until - state[RunningHours]})
            in
                [
                    Remaining = state[Remaining] - currentCapacity * CappedHours,
                    RunningHours = state[RunningHours] + CappedHours
                ]
    ),
    Result = if Accumulated[Remaining] = 0
        then Accumulated[RunningHours]
        else error "Not enough time to finish!"
in
    Result
The inner lists for Efficiency are of the form {hour-at-which-the-efficiency-ends, efficiency-value}. Plug in #infinity to mean the last efficiency never stops.
In a normal iterative programming language you could update state with a for-loop, but in M you need to use List.Accumulate and package all your state into one value.
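For comparison, here is the same accumulation written as a plain Python loop (a sketch mirroring the M logic above, not part of the original answer):

mso, capacity = 1920, 80
efficiency = [(6, 0.2), (12, 0.4), (18, 0.6), (24, 0.8), (float("inf"), 1.0)]

remaining, running_hours = mso, 0.0
for until, eff in efficiency:
    rate = eff * capacity                              # units per hour in this fence
    hours = min(remaining / rate, until - running_hours)
    remaining -= rate * hours
    running_hours += hours

print(running_hours)  # 36.0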
In your data model you may have MSO in one table containing 2 fields, [Units] and [UnitsPerHour], and another table called EffTable which stores the efficiencies broken out by the hour fences.
Create 4 new calculated columns in your MSO table, one for each hour fence, e.g. [1--6Hours]:
=
6 * LOOKUPVALUE ( EffTable[Efficiency], EffTable[Hours], "1--6" )
* [UnitsPerHour]
These are fields that hold how many units you would produce in the 4 time slots. Create a new calculated field for the total, [RampUpUnits]:
=
[1--6Hours] + [6--12Hours] + [12--18Hours] + [18--24Hours]
Finally calculate the total time as:
=
24
+ ( [Units] - [RampUpUnits] )
/ [UnitsPerHour]
This calculates the number of hours required for the remaining units and adds it to 24 for the ramp up time.

Money Denominations

I have a payroll database. On my payroll payslip I would like to make money denominations for the salary of each employee, i.e. if an employee has got 759 dollars, then the cashier will withdraw 7 one-hundreds, 1 fifty, and 9 tens from the bank.
Please give me code in VB.NET.
Salary  Hundred  Fifty  Ten
759     7        1      9
Please help me, thanks a lot.
Here's an answer in Python:
# Target amount
amount = 759
# The denominations to be used, sorted from largest to smallest
denoms = [100, 50, 20, 10, 5, 1]
# Take as many of each denomination as possible, then move to the next
for d in denoms:
    count = amount // d
    amount -= count * d
    print("%ix%i" % (count, d))
Sample output:
7x100
1x50
0x20
0x10
1x5
4x1
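This greedy approach yields the fewest bills for standard denominations like these. The same loop translates almost line for line to VB.NET, using integer division (\) and Mod for the count and remainder.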