What InfluxDB schema is suitable for these measurements?

I have data about the status of my server collected over the years: temperatures, fan speeds, CPU load, SMART data. It is stored in an SQLite database in various tables, each one specific to one type of data.
I'm switching to InfluxDB for easier graphing (Grafana) and future expansion: the data will include values from another server and also UPS data (voltages, battery, ...).
I read the guidelines about schemas in InfluxDB, but I'm still confused because I have no experience with the topic. I found another question about a schema recommendation, but I cannot apply it to my case.
How should I approach the problem, and how do I design an appropriate schema for the time series? What should I put in tags and what in fields? Should I use a single "measurement" series or create multiple ones?
These are the data I am starting with:
CREATE TABLE "case_readings"(date, sensor_id INTEGER, sensor_name TEXT, Current_Reading)
CREATE TABLE cpu_load(date, load1 REAL, load2 REAL, load3 REAL)
CREATE TABLE smart_readings(date, disk_serial TEXT, disk_long_name TEXT, smart_id INTEGER, value)
Examples of actual data:
case_readings:
"1478897100" "4" "01-Inlet Ambient" "20.0"
"1478897100" "25" "Power Supply 1" "0x0"
cpu_load:
"1376003998" "0.4" "0.37" "0.36"
smart_readings:
"1446075624" "50026B732C022B93" "KINGSTON SV300S37A60G" "194" "26 (Min/Max 16/76)"
"1446075624" "50026B732C022B93" "KINGSTON SV300S37A60G" "195" "0/174553172"
"1446075624" "50026B732C022B93" "KINGSTON SV300S37A60G" "196" "0"
"1446075624" "50026B732C022B93" "KINGSTON SV300S37A60G" "230" "100"
This is my idea for an InfluxDB schema. I use uppercase to indicate the actual values, and spaces only when a string actually contains spaces:
case_readings,server=SERVER_NAME,sensor_id=SENSOR_ID "sensor name"=CURRENT_READING DATE
cpu_readings,server=SERVER_NAME load1=LOAD1 load2=LOAD2 load3=LOAD3 DATE
smart_readings,server=SERVER_NAME,disk_serial=SERIAL,disk_name="DISK LONG NAME" smart_id=VALUE DATE

I found the schema used by an official Telegraf plugin for the same IPMI readings I have:
ipmi_sensor,server=10.20.2.203,unit=degrees_c,name=ambient_temp \
status=1i,value=20 1458488465012559455
I will convert my old data into that format; I have all the required fields stored in my old SQLite DB. I will modify the plugin to save the name of the server instead of the IP, which here at home is more volatile than the name itself. I will also probably reduce the precision of the timestamps to simple milliseconds or seconds.
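For example, the first case_readings row above would map to something like the following in that format (the normalized sensor name, the unit tag, and the second-precision timestamp are my assumptions for illustration, not values taken from the plugin):
ipmi_sensor,server=SERVER_NAME,unit=degrees_c,name=inlet_ambient value=20 1478897100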
Using that one as an example, I understand that the schema I proposed for CPU readings could be improved:
cpu,server=SERVER_NAME,name=load1 value=LOAD1 DATE
cpu,server=SERVER_NAME,name=load2 value=LOAD2 DATE
cpu,server=SERVER_NAME,name=load3 value=LOAD3 DATE
However, I am still considering the one I proposed, where the individual values are not indexed:
cpu,server=SERVER_NAME load1=LOAD1 load2=LOAD2 load3=LOAD3 DATE
For SMART data my proposal was also not optimal, so I will use:
smart_readings,server=SERVER_NAME,serial=SERIAL,name="DISK LONG NAME",\
smart_id=SMART_ID,smart_description=SMART_DESCRIPTION \
value=VALUE value_raw=VALUE_RAW DATE

Related

Limitations in using all string columns in BigQuery

I have an input table in BigQuery that has all fields stored as strings. For example, the table looks like this:
name dob age info
"tom" "11/27/2000" "45" "['one', 'two']"
And in the query, I'm currently doing the following
WITH
table AS (
SELECT
"tom" AS name,
"11/27/2000" AS dob,
"45" AS age,
"['one', 'two']" AS info )
SELECT
EXTRACT( year from PARSE_DATE('%m/%d/%Y', dob)) birth_year,
ANY_value(PARSE_DATE('%m/%d/%Y', dob)) bod,
ANY_VALUE(name) example_name,
ANY_VALUE(SAFE_CAST(age AS INT64)) AS age
FROM
table
GROUP BY
EXTRACT( year from PARSE_DATE('%m/%d/%Y', dob))
Additionally, I tried doing a very basic GROUP BY operation casting an item to a string vs. not, and I didn't see any performance degradation on a data set of ~1M rows (actually, in this particular case, casting to a string was faster).
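A comparison along those lines might look roughly like the following; the table and column names here are made up for illustration, not the ones I actually used:
-- GROUP BY on the value as stored (string)
SELECT num_col, COUNT(*) AS cnt
FROM `my_dataset.my_table`
GROUP BY num_col;

-- GROUP BY on the value cast to its native type
SELECT SAFE_CAST(num_col AS INT64) AS num_val, COUNT(*) AS cnt
FROM `my_dataset.my_table`
GROUP BY num_val;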
Other than it being bad practice to "keep" this all-string table and not convert it into its proper type, what are some of the limitations (either functional or performance-wise) that I would encounter by keeping a table all-string instead of storing it as their proper type. I know there would be a slight increase in size due to storing strings instead of number/date/bool/etc., but what would be the major limitations or performance hits I'd run into if I kept it this way?
Off the top of my head, the only limitations I see are:
Queries would become more complex (though wouldn't really matter if using a query-builder).
A bit more difficult to extract non-string items from array fields.
Inserting data becomes a bit trickier (for example, need to keep track of what the date format is).
But these all seem like very small items that can be worked around. Are there other, "bigger" reasons why using all string fields would be a huge limitation, either in limiting query-ability or causing a huge performance hit in various cases?
First of all, I don't really see any bigger show-stoppers than those you already know and have listed.
Meanwhile, regarding:
though wouldn't really matter if using a query-builder ...
Based on the above excerpt, I wanted to touch on one aspect of this approach (storing everything as strings).
While we are usually concerned about CASTing from string to a native type in order to apply the relevant functions and so on, I realized that building a complex, generic query with some sort of query builder in some cases requires the opposite: casting a native type to string in order to apply a function like STRING_AGG, just as a quick example.
So, my thoughts are:
When a table is designed for direct user access with trivial or even complex queries, having native types is beneficial both performance-wise and for being friendlier for users to understand.
Meanwhile, if you are developing your own query builder and you design the table so that users will query it via that query builder, with some generic logic implemented, having all fields as strings can be helpful in building the query builder itself.
So it is a balance: you can lose a little in performance, but you can win by being better able to implement a generic query builder. That balance depends on the nature of your business, both from a data perspective and in terms of what kinds of queries you envision supporting.
Note: your question is quite broad and opinion-based (which, by the way, is not well regarded on SO), so obviously my answer is entirely my opinion, but it is based on quite a lot of experience with BigQuery.
Are you OK to store string "33/02/2000" as a date in one row and "21st of December 2012" in another row and "22ое октября 2013" in another row?
Are you OK to store string "45" as age in one row and "young" in another row?
Are you OK when age "10" is less than age "9"?
Data types provide some basic data validation mechanism at the database level.
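With everything stored as strings, that validation has to happen at query time instead. A minimal sketch of what that looks like, using invented values:
-- Strings that are not valid integers come back as NULL instead of being rejected on insert
SELECT age, SAFE_CAST(age AS INT64) AS age_int
FROM UNNEST(['45', 'young', '9', '10']) AS age;
-- And as strings, '10' sorts before '9', because the comparison is lexicographic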
Do BigQuery databases have a notion of indexes?
If yes, then most likely these indexes become useless as soon as you start casting your strings to proper types, such as
SELECT
...
WHERE
age > 10 and age < 30
vs
SELECT
...
WHERE
SAFE_CAST(age AS INT64) > 10
and SAFE_CAST(age AS INT64) < 30
It is normal that with fewer columns/rows you don't feel the problems. You start to feel the problems when your data gets huge.
Major concerns:
Maintenance of the code: Think of future requirements you may receive. Every conversion for data manipulation will add extra complexity to your code. For example, if your customer asks you to retrieve teenagers in the future, you'll need to convert the string to a date to get the age before you can do the manipulation (see the sketch after this list).
Data size: The data size has broader impacts that cannot be seen at the start. For example, if you have N parallel test teams which require their own test systems, you'll need to allocate more disk space.
Read performance: When you have more bytes to read in huge tables, it will cost you considerable time. For example, telco operators typically have a couple of billion rows of data per month.
If your code complexity increases, you'll need to replicate conversions in multiple places.
Even a single one of the above items should push one away from using strings for everything.
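To make the maintenance point concrete, the hypothetical "retrieve teenagers" request would need something like this when the date of birth is kept as a string (table, column, and date format are assumed for illustration):
-- The dob string must be parsed before any age arithmetic is possible
SELECT name
FROM `my_dataset.customers`
WHERE DATE_DIFF(CURRENT_DATE(), PARSE_DATE('%m/%d/%Y', dob), YEAR) BETWEEN 13 AND 19;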
I would think the biggest issue with this would be if there are other users of this table/data. For instance, if someone is trying to write reports with it and do calculations, charts, or date ranges, it could be a big headache having to always cast or convert the data with whatever tool they are using. You, or someone, would likely get a lot of complaints about it.
And if someone decided to build a layer between this data and the reporting tool which converted all of the data, then you may as well just do it one time to the table/data and be done with it.
With this approach you might face some storage and performance problems; you can find some guidance in the official documentation.
The main performance problem will come from the CAST operation; remember that the BigQuery engine will have to deal with a CAST operation for each value per row.
In order to test the compute cost of this operation, I used the following query:
SELECT
street_number
FROM
`bigquery-public-data.austin_311.311_service_requests`
LIMIT
5000
Inspecting the stages executed in the execution details we are able to see the following:
READ
$1:street_number
FROM bigquery-public-data.austin_311.311_service_requests
LIMIT
5000
WRITE
$1
TO __stage00_output
Only the Read, Limit and Write operations are required. However, if we execute the same query adding the CAST operator:
SELECT
CAST(street_number AS int64)
FROM
`bigquery-public-data.austin_311.311_service_requests`
LIMIT
5000
We see that a compute operation is also required in order to perform the cast operation:
READ
$1:street_number
FROM bigquery-public-data.austin_311.311_service_requests
LIMIT
5000
COMPUTE
$10 := CAST($1 AS INT64)
WRITE
$10
TO __stage00_output
Those compute operations will consume some time, which might cause problems as the size of the operation scales up.
Also, remember that each time you want to use the properties of a given data type, you will have to cast your value and pay for the compute time required.
Finally, regarding storage, as you mentioned, strings do not have a fixed size, and that might cause a size increase.

How many bytes does a BigQuery Date require

I've got a timestamp column in BigQuery, and now I realize I could have used a date data type to represent this column instead (I don't need fine time granularity). My table is large and expensive to query so I'm wondering whether I'll save money by converting it to a new column of type DATE instead.
However, the official BigQuery documentation on data types doesn't seem to indicate how many bytes a date object requires. Does anyone here know?
DATE and TIMESTAMP both require 8 bytes
You can see more details at Data size calculation
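So switching the column type will not save storage by itself. If you still want a DATE column for convenience, producing it is just a matter of materializing the conversion, along these lines (table and column names are placeholders):
-- Write a copy of the table with the TIMESTAMP column replaced by a DATE
CREATE TABLE `my_dataset.my_table_new` AS
SELECT * EXCEPT(event_ts), DATE(event_ts) AS event_date
FROM `my_dataset.my_table`;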

Find out the amount of space each field takes in Google Big Query

I want to optimize the space of my Big Query and google storage tables. Is there a way to find out easily the cumulative space that each field in a table gets? This is not straightforward in my case, since I have a complicated hierarchy with many repeated records.
You can do this in the Web UI by simply typing (and not running) the query below, changing <column_name> to the field of your interest,
SELECT <column_name>
FROM YourTable
and looking at the validation message, which shows the respective size.
Important: you do not need to run it. Just check the validation message for bytesProcessed, and this will be the size of the respective column.
Validation is free and invokes a so-called dry run.
If you need to do such "column profiling" for many tables, or for a table with many columns, you can code it in your preferred language: use the Tables.get API to get the table schema, then loop through all fields, build the respective SELECT statement, and finally dry-run it (within the loop, for each column) and read totalBytesProcessed, which, as you already know, is the size of the respective column.
I don't think this is exposed in any of the metadata.
However, you may be able to easily get good approximations based on your needs. The number of rows is provided, so for some of the data types, you can directly calculate the size:
https://cloud.google.com/bigquery/pricing
For types such as STRING, you could get the average length by querying, e.g., the first 1000 rows, and use this for your storage calculations.
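For example, a rough per-column estimate for a string field could be obtained with something like this (table and column names are placeholders):
-- Average character length of a string column over a sample of rows
SELECT AVG(LENGTH(my_string_col)) AS avg_chars
FROM (
  SELECT my_string_col
  FROM `my_dataset.my_table`
  LIMIT 1000
);
Per the pricing page, each STRING value is counted as 2 bytes plus the UTF-8 encoded length.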

SQL Database Design for Test Data

So, I am trying to learn how to set up good, usable databases. I have run into a problem involving storing large amounts of data correctly. The database I am using is MSSQL 2008. For example:
We test about 50,000 devices a week. Each one of these devices has a lot of data associated with it. Overall, we are just looking at the summary calculated from the raw data. The summary is easy to handle; it's the raw data I'm trying to enter into a database for future use in case someone wants more details.
For the summary, I have a database full of tables for each set of 50,000 devices. But for each device there is data similar to this:
("DevID") I,V,P I,V,P I,V,P ...
("DevID") WL,P WL,P WL,P ...
That totals 126 data points (~882 chars) for the first line and 12,000 data points (~102,000 chars) for the second line. What would be the best way to store this information? Create a table for each and every device (this seems unruly)? Is there a data type that can handle this much info? I am just not sure.
Thanks!
EDIT: Updated ~char count and second line data points.
You could just normalize everything into one table:
CREATE TABLE device
( id BIGINT IDENTITY(1,1) PRIMARY KEY
, DevID INT
, DataPoint VARCHAR(255)
);
CREATE INDEX IX_device_DevID ON device (DevID);
Pseudocode, obviously, since I don't know your exact requirements.
Does this data represent a series of readings over time? Time-series data tends to be highly repetitive, so a common strategy is to compress it in ways that avoid storing every single value: for example, use run-length encoding, or associate time intervals with each value instead of single points.
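As a sketch of the interval idea, instead of one row per raw sample you could store one row per run of identical values; the table layout below is only an illustration:
-- One row per contiguous interval during which a reading held the same value
CREATE TABLE device_reading_interval
( DevID     INT      NOT NULL
, StartTime DATETIME NOT NULL
, EndTime   DATETIME NOT NULL
, Value     REAL     NOT NULL
);
CREATE INDEX IX_device_reading_interval ON device_reading_interval (DevID, StartTime);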

List of Best Practice MySQL Data Types

Is there a list of best practice MySQL data types for common applications. For example, the list would contain the best data type and size for id, ip address, email, subject, summary, description content, url, date (timestamp and human readable), geo points, media height, media width, media duration, etc
Thank you!!!
I don't know of any, so let's start one!
Numeric ID/auto_increment primary keys: use an unsigned integer. Do not use 0 as a value, and keep in mind the maximum value of the various sizes, i.e. don't use INT if you don't need 4 billion values when the 16 million offered by MEDIUMINT will suffice.
Dates: unless you specifically need dates/times that are outside the supported range of MySQL's DATE and TIME types, use them! If you instead use Unix timestamps, you have to convert them to use the built-in date and time functions. If your app needs Unix timestamps, you can always convert the standard date and time data types on the way out using UNIX_TIMESTAMP().
IP addresses: use INET_ATON() and INET_NTOA(), since this easily compacts an IP address into 4 bytes and gives you the ability to do range searches that utilize indexes.
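For example, a range search that can use an index might look like this (table and column names are placeholders):
-- ip_num is an INT UNSIGNED column holding INET_ATON(address)
SELECT *
FROM access_log
WHERE ip_num BETWEEN INET_ATON('192.168.0.0') AND INET_ATON('192.168.255.255');
-- INET_NTOA(ip_num) turns the stored integer back into dotted-quad form for display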
Integer Display Width You likely define your integers something like this "INT(4)" but have been baffled by the fact that (4) has no real effect on the stored numbers. In other words, you can store numbers like 999999 just fine. The reason is that for integers, (4) is the display width, and only has an effect if used with the ZEROFILL modifier. Further, this is for display purposes only, so you could define a column as "INT(4) ZEROFILL" and store 99999. If you stored 999, the mysql REPL (console) would output 0999 when you've selected this column.
In other words, if you don't need the ZEROFILL stuff, you can leave off the display width.
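A quick illustration of the display-width behaviour described above:
CREATE TABLE t (n INT(4) ZEROFILL);
INSERT INTO t VALUES (999), (99999);
SELECT n FROM t;  -- shown as 0999 and 99999; the (4) never limits what can be stored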
Money: Use the Decimal data type. Based on real-world production scenarios I recommend (19,8).
EDIT: My original recommendation was (19,4); however, I've recently run into a production issue where the client reported that they absolutely needed decimal with a "scale" of "8"; thus "4" wasn't enough and was causing improper tax calculations. I now recommend (19,8) based on a real-world scenario. I would love to hear stories needing a more granular scale.
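For example, a minimal money column following that recommendation (table and column names are just illustrative):
-- DECIMAL is exact, so no binary floating-point rounding surprises in tax math
CREATE TABLE invoice_line
( id         INT UNSIGNED AUTO_INCREMENT PRIMARY KEY
, unit_price DECIMAL(19,8) NOT NULL
, tax_rate   DECIMAL(19,8) NOT NULL
);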