BigQuery pricing for temporary storage - google-bigquery

Background
I am loading data to BigQuery. Each time I load data I use the schema auto-detect option, for which BigQuery creates a table with a schema. I then download the auto-generated schema and delete the created table.
I need to do this task a few times on a scheduled basis.
I have read in the documentation that storing 100 MB for a month costs some amount, and that loading data is free.
Query
Will this storage cost me any amount?
Apart from storage, will I be charged for this activity?
I need your suggestions on these!

There is no such term as temporary storage. As you say, you pay for storage for the time that you keep the data. There is a cost if you do streaming inserts, but if you do a load job from a file there is no cost for the bandwidth; you pay the storage price as normal. You also don't pay for cached queries.
The following table summarizes BigQuery pricing. BigQuery's quota policy applies for these operations.
+---------------------+-------------------------+-----------------------------------------------------------------+
| Action              | Cost                    | Notes                                                           |
+---------------------+-------------------------+-----------------------------------------------------------------+
| Storage             | $0.02 per GB, per month | See Storage pricing.                                            |
+---------------------+-------------------------+-----------------------------------------------------------------+
| Long Term Storage   | $0.01 per GB, per month | See Long term storage pricing.                                  |
+---------------------+-------------------------+-----------------------------------------------------------------+
| Streaming Inserts   | $0.01 per 200 MB        | See Storage pricing.                                            |
+---------------------+-------------------------+-----------------------------------------------------------------+
| Queries             | $5 per TB               | First 1 TB per month is free, subject to query pricing details. |
+---------------------+-------------------------+-----------------------------------------------------------------+
| Loading data        | Free                    | See Loading data into BigQuery.                                 |
+---------------------+-------------------------+-----------------------------------------------------------------+
| Copying data        | Free                    | See Copying an existing table.                                  |
+---------------------+-------------------------+-----------------------------------------------------------------+
| Exporting data      | Free                    | See Exporting data from BigQuery.                               |
+---------------------+-------------------------+-----------------------------------------------------------------+
| Metadata operations | Free                    | List, get, patch, update and delete calls.                      |
+---------------------+-------------------------+-----------------------------------------------------------------+
All of this is explained in more detail on the official page: https://cloud.google.com/bigquery/pricing
You can check out your current usage on this page:
https://console.cloud.google.com/billing/unbilledinvoice
And there is a pricing calculator here:
https://cloud.google.com/products/calculator/
Update
Storage pricing is prorated per MB, per second. For example, if you store:
100 MB for half a month, you pay $0.001 (a tenth of a cent)
500 GB for half a month, you pay $5
1 TB for a full month, you pay $20
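To make the proration concrete, here is the same arithmetic written out as a quick query (a sketch in BigQuery Standard SQL; it simply restates the three examples above, treating 1 TB as 1,000 GB as in the example):
-- cost = size_in_GB * fraction_of_month * $0.02 (active storage rate)
SELECT 0.1  * 0.5 * 0.02 AS hundred_mb_half_month,       -- => $0.001
       500  * 0.5 * 0.02 AS five_hundred_gb_half_month,  -- => $5
       1000 * 1.0 * 0.02 AS one_tb_full_month;           -- => $20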

Related

How to track day wise change in Power BI?

I am creating a Power BI report using data from the https://www.mohfw.gov.in/ website, which provides the latest coronavirus data for all Indian states/union territories.
The data is in the format below:
+-----+-----------------------------+-----------+-------+-------+
| SNo | State                       | Confirmed | Cured | Death |
+-----+-----------------------------+-----------+-------+-------+
| 1   | Andaman and Nicobar Islands | 14        | 11    | 0     |
| 2   | Andhra Pradesh              | 603       | 42    | 15    |
| 3   | Arunachal Pradesh           | 1         | 0     | 0     |
| 4   | Assam                       | 35        | 12    | 1     |
| 5   | Bihar                       | 86        | 37    | 2     |
The website is refreshed with new data every day, so there is no date-wise tracker. I want to track the day-wise change (increments/decrements) in cases for every state. Is there any way I can model this in Power BI to achieve it?
For now, what I am doing is downloading the table from the web page every day, adding a date column with today's date (getdate()), and loading the data into a SQL table. So every day I insert a row for each state with that day's date stamp, and then I can subtract the previous day's figures to see the changes. But this feels inefficient, and the table size keeps increasing every day.
Any suggestion to improve this, either through changes in the Power BI data model or in SQL, would be much appreciated.
Context
Considering that the data source is updated according to SCD 1 (overwriting), the only way to track day-wise change is to historize the data every day. In practice, schedule a daily load of the data source and store that day's new data.
Answer
You are implementing SCD 2 (create a new record on change) in the correct way. It is important to add a technical field to each record with the timestamp of when it was generated, so you can study the trend later.
Extra
You could eventually optimize this approach by normalizing the model, in order to reduce the size of the table to which you are applying SCD 2 (create a new record on change).
Let me give a simple example. Consider a table with:
only 1 record
1,000 fields, of which only 1 field (LAST_UPDATE) can change, tracked with SCD 2 (create a new record on change)
If LAST_UPDATE changes 100,000 times a day, every day it triggers the creation of 100,000 new versions of the same record (because we track its changes). Therefore, after one year the table would still have 1,000 fields and 36,500,000 records. Instead, if we normalize the model so that the LAST_UPDATE field (historized with SCD 2) is stored in a separate table, after one year we would have one table with 1 record and 999 columns, and a different table with 1 column and 36,500,000 records.
If your database is a row-oriented database, you would benefit greatly from normalizing the model. If it is a columnar database, this is already taken care of, because each column is compressed individually instead of compressing row-wise.
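For the use case in the question, a rough sketch of such a normalized layout (T-SQL style, since getdate() was mentioned; all table and column names here are illustrative assumptions): the attributes that never change stay in a small table, and only the daily-changing counts are historized.
-- Static attributes: one row per state, never versioned.
CREATE TABLE State (
    StateID   INT PRIMARY KEY,
    StateName VARCHAR(100)
);

-- SCD 2 history: one narrow row per state per daily load.
CREATE TABLE StateDailyCases (
    StateID   INT REFERENCES State(StateID),
    LoadDate  DATE,            -- technical field: date of the daily load
    Confirmed INT,
    Cured     INT,
    Death     INT,
    PRIMARY KEY (StateID, LoadDate)
);

-- Day-wise change: compare each day's load with the previous day's load.
SELECT s.StateName,
       t.LoadDate,
       t.Confirmed - y.Confirmed AS ConfirmedChange
FROM StateDailyCases t
JOIN StateDailyCases y
  ON  y.StateID  = t.StateID
  AND y.LoadDate = DATEADD(DAY, -1, t.LoadDate)
JOIN State s
  ON  s.StateID  = t.StateID;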

Identifying newest records in parallel

We're using U-SQL to extract sensor data from a set of .csv files. Each record contains a sensor ID, time of measurement and value, as well as a time for when the record was received:
+----------+---------------------+------------------+---------------------+
| SensorID | MeasurementTime     | MeasurementValue | ReceivedTime        |
+----------+---------------------+------------------+---------------------+
| xxx      | 2017-09-10 11:00:00 | 12.342           | 2017-09-19 14:25:17 |
| xxx      | 2017-09-10 12:00:00 | 14.654           | 2017-09-19 14:25:17 |
| yyy      | 2017-09-10 11:00:00 | 1.054            | 2017-09-19 14:25:17 |
| yyy      | 2017-09-10 12:00:00 | 1.354            | 2017-09-19 14:25:17 |
...
| xxx      | 2017-09-10 11:00:00 | 10.261           | 2017-09-19 15:25:17 |
+----------+---------------------+------------------+---------------------+
The files are stored in ADLS in a path based on the date-portion of the measurement time, so the data seen above would be found in /Data/2017/09/10/measurements.csv, where the first four rows were written at 14:25:17 on the 19th of September, and the last row was appended one hour later, at 15:25:17.
As the above example illustrates, new values for the same SensorID and MeasurementTime can be received at a later time. Each partition holds a few million rows, with a few thousand rows being appended to a small number of partitions every day. We want to run a batch job, say every 24 hours, that outputs only the newest value for any given SensorID and MeasurementTime. For this, we use a U-SQL script that looks similar to this:
@newestMeasurements_addRN =
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY PDate,
                                           SensorId,
                                           MeasurementTime
                              ORDER BY ReceivedTime DESC) AS MeasurementRN
    FROM @measurements; // rowset produced by the preceding EXTRACT (name assumed here)

@newestMeasurements =
    SELECT SensorId,
           MeasurementTime,
           MeasurementValue
    FROM @newestMeasurements_addRN
    WHERE MeasurementRN == 1;
Here, PDate is a virtual column inferred from the yyyy/MM/dd in the path of the CSV file (equals the date-portion of MeasurementTime).
Now, since we use PDate in the PARTITION BY part of the window function, I expected that this operation could be parallelised, since we don't have to consider different days (partitions) when trying to find the newest record for any given SensorID and MeasurementTime. Unfortunately, that does not seem to be the case, looking at a job graph:
Here, we are extracting data from 4 different days. Each of the Extract vertices outputs the full number of records, leaving the task of identifying only the newest records to the Combine vertex at the bottom, indicating that the ROW_NUMBER and subsequent filtering do not happen in parallel.
Is this a bug in the implementation of ROW_NUMBER?
Is there a different U-SQL technique we can use to ensure parallelism?
I managed to find a usable solution, in which I encapsulated the U-SQL that detects the latest measurements inside a U-SQL stored procedure, which takes a value corresponding to pdate as an input parameter.
Then, I simply execute this stored proc several times, with a list of dates that I want to process in parallel:
DetectLatestMeasurements(20170910);
DetectLatestMeasurements(20170911);
DetectLatestMeasurements(20170912);
DetectLatestMeasurements(20170913);
The stored procedure handles the EXTRACT, transformation and OUTPUT of one day's worth of data, so this does the job, and it is parallelised the way I expect.
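For reference, a minimal sketch of what such a procedure could look like (the file-set pattern, the DateTime parameter, the output path and the column types are my assumptions here, not the original code; the original calls pass the date as a yyyyMMdd number):
CREATE PROCEDURE IF NOT EXISTS DetectLatestMeasurements(@pdate DateTime)
AS
BEGIN
    // Extract the day's files; PDate is the virtual column taken from the path pattern.
    @measurements =
        EXTRACT SensorId         string,
                MeasurementTime  DateTime,
                MeasurementValue double,
                ReceivedTime     DateTime,
                PDate            DateTime
        FROM "/Data/{PDate:yyyy}/{PDate:MM}/{PDate:dd}/measurements.csv"
        USING Extractors.Csv();

    // Restrict to the requested day so each invocation reads a single partition,
    // then keep only the newest value per SensorId and MeasurementTime.
    @newestMeasurements_addRN =
        SELECT SensorId,
               MeasurementTime,
               MeasurementValue,
               ROW_NUMBER() OVER (PARTITION BY SensorId, MeasurementTime
                                  ORDER BY ReceivedTime DESC) AS MeasurementRN
        FROM @measurements
        WHERE PDate == @pdate;

    @newestMeasurements =
        SELECT SensorId, MeasurementTime, MeasurementValue
        FROM @newestMeasurements_addRN
        WHERE MeasurementRN == 1;

    // Output path is a placeholder; in practice each day would get its own output file.
    OUTPUT @newestMeasurements
    TO "/Output/latestMeasurements.csv"
    USING Outputters.Csv();
END;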

How to join between table DurationDetails and Table cost per program

How do I design a database for a tourism company to calculate the cost of flight and hotel for every program tour based on date?
What I did is:
Table - program
+-----------+-------------+
| ProgramID | ProgramName |
+-----------+-------------+
| 1         | Alexia      |
| 2         | Amon        |
| 3         | Sfinx       |
+-----------+-------------+
Every program has more than one duration; it may be 8 days or 15 days only.
It has two periods only, 8 days or 15 days.
So I made a ProgramDuration table that has a one-to-many relation with Program.
Table - ProgramDuration
+------------+-----------+---------------+
| DurationNo | programID | Duration      |
+------------+-----------+---------------+
| 1          | 1         | 8 for Alexia  |
| 2          | 1         | 15 for Alexia |
+------------+-----------+---------------+
And the same thing for the Amon and Sfinx programs: 8 and 15 days.
Every program (8 or 15 days) has fixed details for every day, as follows:
Table Duration Details
+------+--------+--------------------+-------------------+
| Days | Hotel  | Flight             | Transfers         |
+------+--------+--------------------+-------------------+
| Day1 | Hilton | Amsterdam to Luxor | Airport to hotel  |
| Day2 | Hilton |                    | Abu Simbel museum |
| Day3 | Hilton |                    |                   |
| Day4 | Hilton |                    |                   |
| Day5 | Hilton | Luxor to Amsterdam |                   |
+------+--------+--------------------+-------------------+
Every program's start is determined by the flight date, so
if the flight date is 25/06/2017 for the Alexia 8-day program, it will be as follows:
+------------+-------+--------+----------+
| Date       | Hotel | Flight | Transfer |
+------------+-------+--------+----------+
| 25/06/2017 | 25    | 500    | 20       |
| 26/06/2017 | 25    |        | 55       |
| 27/06/2017 | 25    |        |          |
| 28/06/2017 | 25    |        |          |
| 29/06/2017 | 25    | 500    |          |
+------------+-------+--------+----------+
And this is actually what I need: how do I make a relationship to join costs with the program,
for flight and hotel costs as above?
For the 5 days the cost will be 1200:
25 is the cost per day for the hotel Hilton
500 is the cost for the flight
20 and 55 are the costs per transfer
The image displays what I need:
the relation between duration and cost.
Truthfully, I don't fully understand exactly what you're trying to accomplish. Your description is not clear, your tables seem to be missing information / contain information that should not be in your tables, and the way that I'm understanding your description doesn't really make sense based on the UI screenshot that you shared.
It looks like you're working on an application for a travel agency which will allow agents to create an itinerary for a trip. They can give this trip a name (so if a particular package is a hit with customers, they can just offer the "Alexia" package), and the utility will calculate the total estimated cost of the trip. If I understand correctly, the trips will be either 8 or 15 days long.
Personally, I would delete the "ProgramDuration" table altogether. If there are two versions of the Alexia trip at index 1, then you're going to run into all manner of issues. I can get into the details of why this is a bad idea, but unless you're really hung up on having this ProgramDuration table, it's not worth the time. You should add a "Duration" field to your "Program" table, and assign a new ProgramID for each different duration version of the "Alexia" program.
Your table "Duration details" also misses the mark. Your fields in this table will make it harder to add new features to your application down the line. You should have a field "ProgramID," which we will use to join this table against the program table later. You should have a field "Day" which obviously indicates the day in the itinerary. You should have only one more field "ItemID." We're going to use the "ItemID" field to join your itinerary against a new items table we're going to create.
Your items table is where you define all of the items that can possibly appear in an itinerary. Your current itinerary table has three possible "types" of expenses: flights, hotels, and transfers. What if your travel agents want to start adding meal expenditures into their itineraries/budgets? What about activities that cost money? What about currency exchange fees? What about items that your clientele will need before their trip (wall adapters, luggage, etc.)? In your items table, you will have fields for an ItemID, ItemName, ItemUnitPrice, and ItemType. A possible item is as follows:
ItemID: 1, ItemName: Night At The Hilton, ItemUnitPrice: 300, ItemType: Lodging
Using the "SELECT [Column] AS [Alias]" syntax with some CTEs or subqueries and the JOIN operator, we can easily reconstitute a table that looks like your "Program Duration Details" table, but we will be afforded considerably more flexibility to add or remove things later down the line.
In the interests of security and programmability, I would also add a table called "ItemTypeTable" with a single field "TypeName." You can use this table to prevent unauthorized users from defining new item types, and you can use this table to create drop-down menus, navigation, and all manner of other useful features. There might be cleaner implementations, but this shouldn't represent a serious performance or size hit.
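Putting the above together, here is a rough sketch of the suggested tables and the join that reconstitutes a per-day cost view (generic SQL; every name and type here is illustrative, not a finished design):
-- Allowed item types, e.g. Lodging, Flight, Transfer, Meal.
CREATE TABLE ItemTypeTable (
    TypeName      VARCHAR(50) PRIMARY KEY
);

-- Every item that can appear in an itinerary, with its unit price.
CREATE TABLE Item (
    ItemID        INT PRIMARY KEY,
    ItemName      VARCHAR(100),
    ItemUnitPrice DECIMAL(10, 2),
    ItemType      VARCHAR(50) REFERENCES ItemTypeTable(TypeName)
);

-- One row per program/duration combination (no separate ProgramDuration table).
CREATE TABLE Program (
    ProgramID     INT PRIMARY KEY,
    ProgramName   VARCHAR(100),
    Duration      INT              -- 8 or 15
);

-- The itinerary: which item happens on which day of which program.
CREATE TABLE ProgramItineraryItem (
    ProgramID     INT REFERENCES Program(ProgramID),
    Day           INT,             -- day number within the itinerary
    ItemID        INT REFERENCES Item(ItemID)
);

-- Reconstituting a per-day cost view similar to the tables in the question.
SELECT p.ProgramName,
       pii.Day,
       i.ItemType,
       i.ItemName,
       i.ItemUnitPrice AS Cost
FROM ProgramItineraryItem pii
JOIN Program p ON p.ProgramID = pii.ProgramID
JOIN Item    i ON i.ItemID    = pii.ItemID
ORDER BY p.ProgramName, pii.Day;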
All in all, at the risk of being somewhat rude, it seems like you're trying to take on a rather large, sophisticated task with a very rudimentary understanding of basic relational database design and implementation. If you are doing this in a professional context, I would strongly encourage you to consider consulting with another professional that may be more experienced in this area.

CQRS and Race : how to handle race requirements

While there are articles saying that race conditions do not occur in the business world, and that this is where we need to look for the solution, I am not sure that is the case.
I have a capacity requirement and do event ticketing. When demand for an event is high, many concurrent bookingCommands arrive in the same microsecond. The traditional way to handle this is to use locking to prevent race conditions; otherwise we end up selling tickets for seats that are not available, which is a strict business no-no.
Below table shows the sequence of steps that occur concurrently.
Time | Total Capacity | Consumed | Available | Customer1       | Customer2
1    | 100            | 99       | 1         | seat available? | -
2    |                |          |           | apply           | seat available?
3    |                |          |           | event handle    | apply
4    |                | 100      | 0         | update state    | event handle
5    |                | 101      | -1        |                 | update state
If "selling tickets for seats that are not available is a strict business no-no." then model it this way. What this requirement tells you is that "selling/reserving a seat" and "number of seats available" should end up in the same transaction and be consistent. You can't take reservation and fire an event to change the number of available seats, it has to be in single transaction. This way when you try to decrease "number of seats available" (Time-5 from your table) you will receive optimistic concurrency exception, because someone modified it in the meantime. Then you can try to process it again and this time number of available seats has been exhausted so you can publish "application/reservation rejected" event and notify the user.
Project "a CQRS Journey" is something you should have a look at:
The reference implementation will be a conference management system
that you will be able to easily deploy and run in your own
environment. This will enable you to explore and experiment with a
realistic application built following a CQRS-based approach.
Especially have a look at SeatsAvailability.MakeReservation and SeatsAvailabilityHandler.Handle(MakeSeatReservation command)

MySQL Performance inquiry

I have moved my Drupal site from one MySQL server to another one.
The old machine has 1 CPU and 1 GB RAM.
The new machine has 4 CPUs and 4 GB RAM.
I see a huge negative difference in performance on this query (2 minutes vs 2 seconds):
SELECT DISTINCT c.client
FROM client_table c
LEFT JOIN reps r ON (c.client = r.client)
WHERE r.user_id IS NULL
  AND c.client NOT IN (SELECT DISTINCT client FROM billing WHERE first_purchase = 1)
Variable                    NEW                    OLD
connect_timeout             10                     5
have_federated_engine       DISABLED               YES
max_connections             100                    400
max_seeks_for_key           18446744073709551615   4294967295
max_write_lock_count        18446744073709551615   4294967295
myisam_max_sort_file_size   9223372036853727232    2147483647
max_binlog_cache_size       18446744073709547520   4294967295
myisam_recover_options      BACKUP                 OFF
range_alloc_block_size      4096                   2048
table_cache                 457                    307
version                     5.0.67-0ubuntu6-log    5.0.51a-3ubuntu5.4-log
version_compile_machine     x86_64                 i486

ONLY on NEW: relay_log
ONLY on NEW: relay_log_index
ONLY on NEW: relay_log_info_file = relay-log.info
ONLY on NEW: innodb_adaptive_hash_index = ON
Any ideas on how to identify what is causing the problem or how to fix it?
You might need to rebuild your indexes on the new instance.
Make triple-sure you've rebuilt your indices; they don't really carry over.
Try using the MySQL Query Profiler.
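For example, a rough sketch of a profiling session from the mysql client (SHOW PROFILES should be available in the 5.0.x versions listed above); running it on each server shows where the time goes:
SET profiling = 1;

SELECT DISTINCT c.client
FROM client_table c
LEFT JOIN reps r ON (c.client = r.client)
WHERE r.user_id IS NULL
  AND c.client NOT IN (SELECT DISTINCT client FROM billing WHERE first_purchase = 1);

SHOW PROFILES;              -- lists the profiled statements with their total durations
SHOW PROFILE FOR QUERY 1;   -- per-stage breakdown (sending data, sorting, etc.)

-- EXPLAIN on both servers will also show whether the same indexes are being used.
EXPLAIN SELECT DISTINCT c.client
FROM client_table c
LEFT JOIN reps r ON (c.client = r.client)
WHERE r.user_id IS NULL
  AND c.client NOT IN (SELECT DISTINCT client FROM billing WHERE first_purchase = 1);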
I would profile in both environments.
So how do you go about analyzing database performance? There are three forms of performance analysis that are used to troubleshoot and tune database systems:
Bottleneck analysis - focuses on answering the questions: What is my database server waiting on; what is a user connection waiting on; what is a piece of SQL code waiting on?
Workload analysis - examines the server and who is logged on to determine the resource usage and activity of each.
Ratio-based analysis - utilizes a number of rule-of-thumb ratios to gauge performance of a database, user connection, or piece of code.