How do I calculate the sum of values in a column with different units?

Date      From      To        Upload    Download  Total
03/12/15  00:53:52  01:53:52  407 KB    4.55 MB   4.94 MB
          01:53:51  02:53:51  68.33 MB  1.60 GB   1.66 GB
          02:53:51  03:53:51  95.39 MB  2.01 GB   2.10 GB
          03:53:50  04:53:50  0 KB      208 KB    209 KB
          04:53:50  05:53:50  0 KB      10 KB     11 KB
          05:53:49  06:53:49  0 KB      7 KB      7 KB
          06:53:49  07:53:49  370 KB    756 KB    1.10 MB
          07:53:48  08:53:48  2.69 MB   64.05 MB  66.74 MB
I have this data in a spreadsheet. The last column contains the total data usage per hour. I would like to add up all the data used in a day, in GB. As you can see, the totals use a mix of units: KB, MB, and GB.
How can I do it in LibreOffice Calc?

Converting all the totals into kilobytes and then summing the column of kilobytes seems like the most straightforward method.
Assuming your "Total" column is column F, and the entries in this column are text (and not numbers formatted to show the various byte-size indicators on the end), this formula will convert GB into KB:
=IF(RIGHT(F2,2)="GB",1048576*VALUE(LEFT(F2,LEN(F2)-3)),"Not a GB entry")
The IF function takes parameters IF(Test is True, Then Do This, Else Do That). In this case we are telling Calc:
IF the right two characters in this string are "GB"
THEN take the left characters minus three, convert the string into a number with VALUE, and multiply by 1,048,576
ELSE give an error message
You want to handle GB, MB, and KB, which requires nested IF statements like so:
=IF(RIGHT(F2,2)="GB",1048576*VALUE(LEFT(F2,LEN(F2)-3)),IF(RIGHT(F2,2)="MB",1024*VALUE(LEFT(F2,LEN(F2)-3)),IF(RIGHT(F2,2)="KB",VALUE(LEFT(F2,LEN(F2)-3)),"No byte size given")))
Copy and paste the formula down however long your column is. Then SUM over the calculated KB values.
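For comparison, the same unit-parsing logic can be sketched outside Calc in Python (`to_kb` is a hypothetical helper, not part of any spreadsheet API; the sample values come from the table above):

```python
def to_kb(s):
    # Parse a string like "4.55 MB" and convert it to kilobytes,
    # mirroring the nested-IF formula (1 GB = 1024 MB = 1048576 KB).
    value, unit = s.split()
    factor = {"KB": 1, "MB": 1024, "GB": 1024 ** 2}[unit]
    return float(value) * factor

totals = ["4.94 MB", "1.66 GB", "2.10 GB", "209 KB",
          "11 KB", "7 KB", "1.10 MB", "66.74 MB"]
day_total_gb = sum(to_kb(t) for t in totals) / 1024 ** 2  # about 3.83 GB
```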

This is the correct formula for G, M, and K suffixes, taking the value from cell B2:
=IF(RIGHT(B2;1)="G";1048576*VALUE(LEFT(B2;LEN(B2)-1));IF(RIGHT(B2;1)="M";1024*VALUE(LEFT(B2;LEN(B2)-1));IF(RIGHT(B2;1)="K";VALUE(LEFT(B2;LEN(B2)-1));"No byte size given")))

Related

How many bytes in BigQuery types

How many bytes do the following types take up in BigQuery:
Timestamp
Datetime
Date
My guess was that date could be stored in 2 bytes, and a timestamp perhaps 8, but I wasn't sure about that and it is not mentioned on the https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types page.
The size of BigQuery's data types is as follows:
Data type Size
INT64/INTEGER 8 bytes
FLOAT64/FLOAT 8 bytes
NUMERIC 16 bytes
BOOL/BOOLEAN 1 byte
STRING 2 bytes + the UTF-8 encoded string size
BYTES 2 bytes + the number of bytes in the value
DATE 8 bytes
DATETIME 8 bytes
TIME 8 bytes
TIMESTAMP 8 bytes
STRUCT/RECORD 0 bytes + the size of the contained fields
GEOGRAPHY 16 bytes + 24 bytes * the number of vertices in the geography type (you can verify the number of vertices using the ST_NumPoints function)
Null values for any data type are calculated as 0 bytes.
A repeated column is stored as an array, and the size is calculated
based on the number of values. For example, an integer column (INT64)
that is repeated (ARRAY) and contains 4 entries is calculated
as 32 bytes (4 entries x 8 bytes).
See more details in Data size calculation section of Pricing documentation
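As a sketch, the sizes in the table above can be used to estimate a row's billed size. The schema below is made up for illustration:

```python
# Fixed-size types from the table above (bytes)
SIZES = {"INT64": 8, "FLOAT64": 8, "NUMERIC": 16, "BOOL": 1,
         "DATE": 8, "DATETIME": 8, "TIME": 8, "TIMESTAMP": 8}

def string_size(s):
    # STRING: 2 bytes + the UTF-8 encoded string size
    return 2 + len(s.encode("utf-8"))

# Hypothetical row: two INT64 columns, a TIMESTAMP, and a short STRING
row_bytes = 2 * SIZES["INT64"] + SIZES["TIMESTAMP"] + string_size("hello")
# 8 + 8 + 8 + (2 + 5) = 31 bytes
```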

Reverse Engineering Fixed Point Numbers

I am currently putting an engine into another car and I want to keep the fuel economy calculation in the board computer working. I managed to recode this part successfully, but I have been trying without success to figure out the (simple?) two-byte data format they used. I assume it is fixed-point notation, but no matter how I shift it around, it does not line up. How do the two bytes represent the right number?
Some examples:
Bytes (Dec) --> Result
174,10 -> 2.67
92,11 -> 2.84
128,22 -> 3.75
25,29 -> 4.85
225,23 -> 3.98
00,40 -> 5.00
128,34 -> 5.75
Here's a partial solution:
First, swap the bytes, then join them. The result (in hex) is:
0AAE
0B5C
1680
1D19
17E1
2800
2280
Then split them into the first digit (4 bits) and the remaining three digits (12 bits), and keep the entire number (16 bits) as well. The result (in decimal) is:
0 2734 2734
0 2908 2908
1 1664 5760
1 3353 7449
1 2017 6113
2 2048 10240
2 640 8832
The first digit seems to be a multiplication factor: 0 stands for 1024, 1 for 1536, 2 for 2048. The formula is possibly f = 1024 + n * 512.
Now divide the entire number by the multiplication factor. The result, rounded to two decimal places, is:
2734 / 1024 = 2.67
2908 / 1024 = 2.84
5760 / 1536 = 3.75
7449 / 1536 = 4.85
6113 / 1536 = 3.98
10240 / 2048 = 5.00
8832 / 2048 = 4.31
It works for all except the last number, which might contain a mistake.
So it seems to be some sort of floating-point number, but I don't recognize the specific format. Possibly there is a simpler formula that explains the numbers.
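The steps above can be collected into a small decoder. This is a hypothetical reconstruction of the format, not code from the board computer:

```python
def decode(lo, hi):
    # Swap the bytes and join them into a 16-bit value
    raw = (hi << 8) | lo
    # The top hex digit n selects the divisor: 1024 + n * 512
    n = raw >> 12
    return round(raw / (1024 + n * 512), 2)
```

This reproduces every sample pair except 128,34, which the answer suspects contains a mistake.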

How number of MFCC coefficients depends on the length of the file

I have voice data 1.85 seconds long, and I extract its features using MFCC (with the library from James Lyson). It returns 184 x 13 features. I am using a 10-millisecond frame step, 25-millisecond frames, and 13 coefficients from the DCT. How can it return 184 frames? I still cannot understand, because the last frame's length is not 25 milliseconds. Is there a formula that explains why it returns 184? Thank you in advance.
Basically, the last window extends past the previous ones and takes more of the signal, so the frame count is rounded up rather than down.
If you have 184 windows, the region you cover is 183 * 10 + 25 = 1855 ms, slightly more than the 1850 ms of audio.
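Assuming the library pads the signal so the last frame is complete, the frame count follows this formula (a sketch; `num_frames` is a made-up name):

```python
import math

def num_frames(signal_ms, frame_ms=25, step_ms=10):
    # One initial frame, then one more per step; rounding up means the
    # last (padded) frame covers the tail of the signal
    return 1 + math.ceil((signal_ms - frame_ms) / step_ms)

num_frames(1850)  # 1 + ceil(1825 / 10) = 184
```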

SQL Server 2008, how much space does this occupy?

I am trying to calculate how much space (MB) this would occupy. The database table has 7 bit columns, 2 tinyint columns, and 1 GUID column.
I'm trying to calculate the amount of space that 16,000 rows would occupy.
My line of thought was that the 7 bit columns consume 1 byte, the 2 tinyints consume 2 bytes, and the GUID consumes 16 bytes, for a total of 19 bytes per row. That would mean 304,000 bytes for 16,000 rows, or ~0.3 MB. Is that correct? Is there a metadata byte as well?
There are several estimators out there which take away the donkey work.
You have to take into account the null bitmap (which will be 3 bytes in this case), plus the number of rows per page, the row header, the row version, pointers, and all the stuff described here:
Inside the Storage Engine: Anatomy of a record
Edit:
Your 19 bytes of actual data
has 11 bytes overhead
total 30 bytes per row
around 269 rows per page (8096 / 30)
requires 60 pages (16000 / 269)
around 490k space (60 x 8192)
a few KB for the index structure of the primary
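The arithmetic in the edit above can be checked step by step (Python used purely as a calculator; the 11-byte overhead figure is taken from the answer):

```python
import math

data_bytes = 19      # 7 bit cols packed into 1 byte + 2 tinyint + 16-byte GUID
overhead = 11        # row header, null bitmap, pointers, ...
row_bytes = data_bytes + overhead           # 30 bytes per row
rows_per_page = 8096 // row_bytes           # 269 rows per page
pages = math.ceil(16000 / rows_per_page)    # 60 pages
space_bytes = pages * 8192                  # 491,520 bytes, around 490 KB
```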

Storing a 30KB BLOB in SQL Server 2005

My data is 30 KB on disk (a serialized object). What size should the binary field in T-SQL be?
Is the number in brackets bytes?
... so is binary(30000) 30 KB?
Thanks
You need to use the varbinary(max) data type; the maximum allowed size for binary is 8,000 bytes. Per the MSDN page on binary and varbinary:
varbinary [ ( n | max) ]
Variable-length binary data. n can be a value from 1 through 8,000. max indicates that the maximum storage size is 2^31-1 bytes. The storage size is the actual length of the data entered + 2 bytes. The data that is entered can be 0 bytes in length.
The number after binary() is the number of bytes, see MSDN:
binary [ ( n ) ]
Fixed-length binary data of n bytes. n
must be a value from 1 through 8,000.
Storage size is n+4 bytes.
Whether 30 KB is 30,000 or 30,720 bytes depends on whether decimal or binary prefixes are being used.
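Either way, the value is well over the 8,000-byte limit of binary(n)/varbinary(n), which is why varbinary(max) is needed. A quick check:

```python
decimal_kb = 30 * 1000   # 30 KB with decimal prefixes: 30,000 bytes
binary_kb = 30 * 1024    # 30 KB with binary prefixes: 30,720 bytes
fits_in_n = binary_kb <= 8000            # False: too big for binary(n)
fits_in_max = binary_kb <= 2 ** 31 - 1   # True: fits in varbinary(max)
```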