I have a worked example of how to compute the capacity of a hard disk. Could anyone explain where the bold figures came from?
RPM: 7200
Number of sectors (per track): 400
Number of platters: 6
Number of heads: 12
Cylinders: 17000
Average seek time: 10 ms
Time to move between adjacent cylinders: 1 ms
The first line of the answer given to me is:
12 x 17 x 4 x 512 x 10^5
I just want to know where the parts in bold came from. The 512 I don't know. The 10 seems to come from the seek time, but why is it to the power of 5?
The answer is
heads x cylinders x sectors x 512 (typical size of one sector in bytes)
so this is
12 x 17000 x 400 x 512
which is the same as
12 x 17 x 1000 x 4 x 100 x 512
and
100 = 10^2
1000 = 10^3
10^2 x 10^3 = 10^5
As you want the capacity, you don't need any of the timing figures here.
A reference for the 512 bytes can be found on Wikipedia, for example (which also has a similar example with the same formula a bit further down).
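Putting the arithmetic together in a few lines of Python (the 512 bytes per sector is the conventional figure; note that the RPM and seek times play no role in the capacity):

```python
heads = 12
cylinders = 17_000
sectors_per_track = 400
bytes_per_sector = 512  # conventional sector size

# capacity = heads x cylinders x sectors per track x bytes per sector
capacity_bytes = heads * cylinders * sectors_per_track * bytes_per_sector
print(capacity_bytes)          # 41779200000
print(capacity_bytes / 10**9)  # ≈ 41.8 GB
```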
This is a hard question and I hope I can get the answer here.
The task is to find the right size box for each product, so that the logistics business can save money on shipping.
We have 2 tables: boxes and products.
The boxes table contains an ID and the dimensions of each box: 'w' for width, 'd' for depth and 'h' for height. Please assume we have just 3 box sizes for convenience.
The products table also includes a product ID and dimensions, with the same meaning as in the boxes table.
'layable' means the product can be packed not only upright but also lying down. For instance, product 'g' is a fragile bottle that cannot be laid horizontally in the box, so it has 'n' in the layable column.
The query should return each product ID with the right size box, where the right size box means the smallest box the product fits in.
Hoping for your kind help. Thanks.
boxes:

size  w    d    h
S     353  250  25
M     450  350  160
L     610  460  460
products:

ID  w    d    h    layable
a   350  250  25   y
b   450  250  160  y
c   510  450  450  y
d   350  250  25   y
e   550  350  160  y
f   410  400  430  n
g   350  240  25   n
h   450  350  160  n
i   310  360  430  n
Expected output:

ID  size
a   S
b   M
...
g   S
Hmmm . . . I'm not quite sure how "layable" fits in. But you want the smallest box that is as big as or bigger than the product in each dimension. The basic idea is:
select p.*,
       (select b.size
        from boxes b
        where b.w >= p.w and b.d >= p.d and b.h >= p.h
        order by b.size desc  -- happens to work because 'S' > 'M' > 'L'
        limit 1
       ) as size
from products p
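One way to fold the layable flag in is to test dimension permutations: a layable product may be rotated freely, while a non-layable one must keep its height upright (only w and d may swap). A sketch of that idea in Python with the sample data, picking the smallest fitting box by volume (this is an illustration of the logic, not the SQL itself):

```python
from itertools import permutations

boxes = {'S': (353, 250, 25), 'M': (450, 350, 160), 'L': (610, 460, 460)}
# Try smaller boxes first, ordered by volume
box_order = sorted(boxes, key=lambda k: boxes[k][0] * boxes[k][1] * boxes[k][2])

def fits(prod, box, layable):
    if layable == 'y':
        orientations = set(permutations(prod))   # any rotation allowed
    else:
        w, d, h = prod
        orientations = {(w, d, h), (d, w, h)}    # height must stay upright
    return any(all(p <= b for p, b in zip(o, box)) for o in orientations)

def smallest_box(prod, layable):
    for name in box_order:
        if fits(prod, boxes[name], layable):
            return name
    return None  # no box is big enough

print(smallest_box((350, 250, 25), 'y'))   # product a -> S
print(smallest_box((450, 250, 160), 'y'))  # product b -> M
print(smallest_box((350, 240, 25), 'n'))   # product g -> S
```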
I need some guidance on how to approach this problem. I've simplified a real-life example, and if you can help me crack this by giving me some guidance, it'll be awesome.
I've been looking at public optimization libraries (https://www.cvxpy.org/), but I'm a beginner and I'm not able to figure out which algorithm would help me (or whether I really need one).
Problem:
x1 to x4 are items with certain properties (a, b, c, y, z).
I have certain needs:

Parameter  My Needs
a          150
b          800
c           80
My goal is to get all optimal coefficient sets for x1 to x4 (fractions allowed) so as to get as much of a, b and c as possible to satisfy the needs from the smallest possible y.
These conditions must always be met:
1) The individual values of z (each item's z scaled by its coefficient) must stay within that item's threshold (between the minimum and maximum for x1, x2, x3 and x4).
2) The total y must stay within limits (y >= 1000 and y <= 2000).
To illustrate, each of x1 to x4 has the following per-unit properties:

item  a   b    c   y    z    min z  max z
x1    20  200  0   300  20   10     50
x2    30  5    20  50   40   60     160
x3    20  200  15  200  40   100    200
x4    5   30   20  500  200  100    300
One possible arrangement can be (not the optimal solution, as I'm trying to keep y as low as possible while above 1000, but it illustrates the output):
2x1 + 2x2 + 3x3 + 0.5x4
In this instance:

Coeff x1  2
Coeff x2  2
Coeff x3  3
Coeff x4  0.5
This set of coefficients yields:

           Value   Optimal?
total y    1550    Yes
total a    162.5   Yes
total b    1025    Yes
total c    95      Yes
z of x1    40      Yes
z of x2    80      Yes
z of x3    120     Yes
z of x4    100     Yes
Lowest y?          No
Can anyone help me out?
Thanks!
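This is a plain linear program (linear objective, linear constraints), so cvxpy can certainly handle it, but no special algorithm is needed. As a minimal sketch, here is the same model with scipy.optimize.linprog instead (my substitution, not the asker's tool): the per-item z thresholds become simple bounds on each coefficient (e.g. 20·x1 must lie in [10, 50], so x1 ∈ [0.5, 2.5]), and all data is taken from the example table above.

```python
import numpy as np
from scipy.optimize import linprog

# Per-unit properties of x1..x4, from the example table
a = np.array([20, 30, 20, 5])
b = np.array([200, 5, 200, 30])
c = np.array([0, 20, 15, 20])
y = np.array([300, 50, 200, 500])
z = np.array([20, 40, 40, 200])
z_min = np.array([10, 60, 100, 100])
z_max = np.array([50, 160, 200, 300])

# z_i * x_i must stay in [z_min_i, z_max_i]  ->  bounds on each x_i
bounds = list(zip(z_min / z, z_max / z))

# Inequalities in "A_ub @ x <= b_ub" form:
# a >= 150, b >= 800, c >= 80 (negated), and 1000 <= total y <= 2000
A_ub = np.vstack([-a, -b, -c, -y, y])
b_ub = np.array([-150, -800, -80, -1000, 2000])

# Minimize total y
res = linprog(c=y, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, res.fun)
```

The solver returns one optimal vertex; if you need all optimal coefficient sets, note that any convex combination of optimal vertices is also optimal.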
I am using Keras with the TensorFlow backend.
I am facing a batch size limitation due to high memory usage.
My data is composed of 4 1D signals, treated with a sample size of 801 per channel, so the global sample size is 3204.
Input data:
4 channels of N 1D signals of length 7003
Input generated by applying a sliding window on the 1D signals
Giving an input data shape of (N*6203, 801, 4)
N is the number of signals used to build one batch
My Model:
Input 801 x 4
Conv2D 5 x 1, 20 channels
MaxPooling 2 x 1
Conv2D 5 x 1, 20 channels
MaxPooling 2 x 1
Conv2D 5 x 1, 20 channels
MaxPooling 2 x 1
Conv2D 5 x 1, 20 channels
Flatten
Dense 2000
Dense 5
With my GPU (Quadro K6000, 12189 MiB) I can fit only N=2 without warnings.
With N=3 I get an out-of-memory warning.
With N=4 I get an out-of-memory error.
It sounds like batch_size is limited by the space used by all the tensors.
Input 801 x 4 x 1
Conv 797 x 4 x 20
MaxPooling 398 x 4 x 20
Conv 394 x 4 x 20
MaxPooling 197 x 4 x 20
Conv 193 x 4 x 20
MaxPooling 96 x 4 x 20
Conv 92 x 4 x 20
Dense 2000
Dense 5
With a 1D signal of length 7001 with 4 channels -> 6201 samples
Total = N*4224 MiB.
N=2 -> 8448 MiB: fits in the GPU
N=3 -> 12672 MiB: works, but with a warning: failed to allocate 1.10 GiB, then 3.00 GiB
N=4 -> 16896 MiB: fails, with only one message: failed to allocate 5.89 GiB
Does it work like that? Is there any way to reduce the memory usage?
To give a time estimate: 34 batches run in 40 s, and I have N total = 10^6.
Thank you for your help :)
Example with python2.7: https://drive.google.com/open?id=1N7K_bxblC97FejozL4g7J_rl6-b9ScCn
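As a back-of-the-envelope check, the N*4224 MiB figure can be reproduced by summing the activation tensor sizes listed above, assuming float32 activations (4 bytes each) and the 6201-samples-per-signal count:

```python
import math

# Activation tensor shapes per input window, from the layer list above
shapes = [
    (801, 4, 1),    # input
    (797, 4, 20),   # conv 5x1
    (398, 4, 20),   # maxpool 2x1
    (394, 4, 20),   # conv
    (197, 4, 20),   # maxpool
    (193, 4, 20),   # conv
    (96, 4, 20),    # maxpool
    (92, 4, 20),    # conv (then flattened)
    (2000,),        # dense
    (5,),           # dense
]

floats_per_sample = sum(math.prod(s) for s in shapes)
bytes_per_sample = 4 * floats_per_sample       # float32
samples_per_signal = 7001 - 801 + 1            # sliding window -> 6201
mib_per_signal = bytes_per_sample * samples_per_signal / 2**20
print(round(mib_per_signal))  # 4224 MiB per signal, matching the estimate
```

If this accounting is right, the working set scales linearly with N, so the practical fix is to put fewer windows in each batch, e.g. by feeding windows from a generator instead of whole signals.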
Given the following data frame:
import numpy as np
import pandas as pd

df = pd.DataFrame()
df['A'] = np.random.randint(1, 100, size=1000)
df['B'] = np.random.randint(1, 100, size=1000)
I would like to compute some statistics based on a rolling window:
that has a 50% overlap
within this window, I would like to break it into 10 smaller non-overlapping windows, compute statistics for each of the 10 windows, and save this information to a list.
This is what I mean:
0 100
____________________
0 10
10 20
20 30
30 40
40 50
50 60
60 70
70 80
80 90
90 100
____________________
50 150
____________________
50 60
60 70
70 80
80 90
90 100
100 110
110 120
120 130
130 140
140 150
____________________
100 200
____________________
100 110
110 120
...
1. Take a window of size 100 data points.
2. Break that into small windows of 10 data points each.
3. Compute statistics for each small window.
4. Back to 1: move the window by 50%.
5. Repeat steps 2 and 3.
6. Back to 1: ...
I have the following code that works:
def rolling_window(series, size=100):
    start = 0
    while start + size <= len(series):
        yield start, start + size
        start += size // 2

stats = []
for start, end in rolling_window(df['A']):
    step = 10
    time_range = np.arange(start, end + step, step)
    times = zip(time_range[:-1], time_range[1:])
    for s, e in times:
        this_drange = df['B'].iloc[s:e].max() - df['B'].iloc[s:e].min()
        stats.append(this_drange)
But the two for loops take 9 hours for 0.5 million rows. How do I modify the code so that it is really fast? Is there a way to vectorize this?
I tried looking at pd.rolling(), but I have no idea how to set it up with a 50% overlap. Also, this involves much more than just the 50% overlap.
This should give you some inspiration. I'm not sure it handles all edge cases correctly though...
def thing2(window=100, step=50, subwindow=10, substep=10):
    # Calculate stats for all possible subwindows
    rolled = df['B'].rolling(window=subwindow)
    stats = rolled.max() - rolled.min()
    # Only take the stats of complete subwindows
    stats = stats[subwindow-1:]
    # Collect the subwindow stats for every "macro" window
    idx, subidx = np.ogrid[:len(df)-window+1:step, :window:substep]
    linidx = (idx + subidx).ravel()
    return stats.iloc[linidx]
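For what it's worth, a self-contained sanity check on small made-up data: `vectorized` repeats the indexing idea above (with df passed in explicitly) and agrees with a plain double loop restricted to complete windows.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({'B': rng.integers(1, 100, size=200)})

def naive(df, window=100, step=50, subwindow=10, substep=10):
    # Straightforward double loop over complete macro windows
    out = []
    for start in range(0, len(df) - window + 1, step):
        for s in range(start, start + window, substep):
            chunk = df['B'].iloc[s:s + subwindow]
            out.append(chunk.max() - chunk.min())
    return out

def vectorized(df, window=100, step=50, subwindow=10, substep=10):
    # Same indexing trick as above: precompute all subwindow stats,
    # then gather the ones belonging to each macro window
    rolled = df['B'].rolling(window=subwindow)
    stats = (rolled.max() - rolled.min())[subwindow - 1:]
    idx, subidx = np.ogrid[:len(df) - window + 1:step, :window:substep]
    return stats.iloc[(idx + subidx).ravel()]

assert np.allclose(naive(df), vectorized(df).to_numpy())
```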
I am using the gnuplot script command set key autotitle columnhead to make the labels for my data. The only issue is that the column head data is numeric, so it doesn't really mean much on its own.
Is there a way to add a fixed string to the autotitle, e.g. "Year " + columnhead, or alternatively, to give my key a title?
String concatenation using . operator with columnhead() works in gnuplot v4.6 (documentation):
set terminal pngcairo enhanced truecolor size 480,320 fontscale 0.8
set output 'autotitle.png'
set key left Left
plot for [i=2:4] 'data.txt' u 1:i w l t 'f(x) = '.columnhead(i)
Also, yes, you can set a title for the key instead, like this: set key title 'f(x)'.
Input file data.txt used in this example:
x 100x x^3 2^x
1 100 1 2
2 200 8 4
3 300 27 8
4 400 64 16
5 500 125 32
6 600 216 64
7 700 343 128
8 800 512 256
9 900 729 512
10 1000 1000 1024