I want to create a CSV of all Bitcoin transactions after timestamp 1572491526, with four columns: transaction_id, timestamp, input, output. For example:
transaction_id  timestamp  input  output
1               1          aaa    bbb
1               1          abc    cde
2               2          pqr    xyz
This is what I have tried so far:
from google.cloud import bigquery
client = bigquery.Client()
QUERY = """
SELECT timestamp, transaction_id, inputs, outputs
FROM `bigquery-public-data.bitcoin_blockchain.transactions`
WHERE timestamp > 1572491526
LIMIT 1
"""
queryjob = client.query(QUERY) # API request
rows = queryjob.result()
row = list(rows)
import pandas as pd
headlines = pd.DataFrame(data=[list(x.values()) for x in row], columns=list(row[0].keys()))
headlines
But the output I am getting is incorrect. How do I solve this?
timestamp transaction_id inputs outputs
0 1237254030000 8425ac5096ff2b55e0feefa7c78ba609a245e6f185ecde... [{'input_script_bytes': b'\x04\xff\xff\x00\x1d... [{'output_satoshis': 5000000000, 'output_scrip...
The output you are getting is correct; I tested your query in the BigQuery UI and it returns the same thing you are seeing. Please note that the fields inputs and outputs are arrays, which is probably where the confusion lies.
I also tested your code and got the same output:
from google.cloud import bigquery
client = bigquery.Client()
QUERY = """
SELECT timestamp, transaction_id, inputs, outputs
FROM `bigquery-public-data.bitcoin_blockchain.transactions`
WHERE timestamp > 1572491526
LIMIT 1
"""
queryjob = client.query(QUERY) # API request
rows = queryjob.result()
row = list(rows)
import pandas as pd
headlines = pd.DataFrame(data=[list(x.values()) for x in row], columns=list(row[0].keys()))
headlines.to_csv('output.csv', index=False, header=True)
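To get the four-column shape you describe, one option is to flatten the inputs and outputs arrays with UNNEST before exporting. The sketch below is not tested against the live table: the field names input_script_string and output_script_string are assumptions based on the record printed above, and the timestamp column appears to hold milliseconds (the sample row shows 1237254030000), so the cutoff may need to be multiplied by 1000. Also be aware that unnesting both arrays in the same FROM clause yields one row per (input, output) pair, i.e. a cross join within each transaction.
QUERY = """
SELECT
  t.transaction_id,
  t.timestamp,
  i.input_script_string AS input,    -- assumed field name, adjust as needed
  o.output_script_string AS output   -- assumed field name, adjust as needed
FROM `bigquery-public-data.bitcoin_blockchain.transactions` AS t,
  UNNEST(t.inputs) AS i,
  UNNEST(t.outputs) AS o
WHERE t.timestamp > 1572491526 * 1000   -- timestamp appears to be in milliseconds
LIMIT 100
"""
df = client.query(QUERY).result().to_dataframe()
df.to_csv('output.csv', index=False, header=True)
If you instead want inputs and outputs flattened independently rather than paired, unnest them in two separate queries.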
I am very new to pandas. How do I convert the following query into pandas syntax? I am no longer querying an MS Access table; I am now querying a pandas DataFrame called df.
The query is:
SELECT
Short_ID,
SUM(IIF(Status = 'Completed', 1, 0)) / COUNT(Status) AS completion_metric
FROM
PROMIS_LT_Long_ID
GROUP BY
Short_ID;
The query results would be something like this:
Short_ID | completion_metric
---------+------------------
1004 | 0.125
1005 | 0
1004 | 0.5
I have created the pandas df with the following code and now I would like to query the pandas DataFrame and obtain the same result as the query above.
import pyodbc
import pandas as pd

def connect_to_db():
    db_name = "imuscigrp"
    conn = pyodbc.connect(r'DRIVER={SQL Server};SERVER=tcp:SQLDCB301P.uhn.ca\SQLDCB301P;DATABASE=imucsigrp'
                          r';UID=imucsigrp_data_team;PWD=Kidney123!')
    cursor = conn.cursor()
    return cursor, conn

def completion_metric():
    cursor, conn = connect_to_db()
    SQL_Query = pd.read_sql_query('SELECT PROMIS_LT_Long_ID.Short_ID, PROMIS_LT_Long_ID.Status FROM PROMIS_LT_Long_ID', conn)
    # converts SQL_Query into a pandas DataFrame
    df = pd.DataFrame(SQL_Query, columns=["Short_ID", "Status"])
    # querying the df to obtain longitudinal completion metric values
    return
Any contributions will help, thank you
You can use some NumPy functions to perform the equivalent operations.
For example, numpy.where to set a value based on a condition:
import numpy as np
df = pd.DataFrame(SQL_Query, columns = ["Short_ID", "Status"])
df["completion_metric"] = np.where(df.Status == "Completed", 1, 0)
Then use numpy.average to compute the average of the grouped data:
completion_metric = df.groupby("Short_ID").agg({"completion_metric": np.average})
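If you prefer to stay entirely within pandas, the same metric can be computed in one grouped expression; this is just an equivalent sketch of the answer above, not a different result:
completion_metric = (
    df.assign(completed=(df["Status"] == "Completed").astype(int))
      .groupby("Short_ID", as_index=False)["completed"]
      .mean()
      .rename(columns={"completed": "completion_metric"})
)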
I am trying to collect two user_id DataFrames that have no user_id in common, in PySpark.
So I wrote the code below:
import pyspark.sql.functions as f
query = "select * from tb_original"
df_original = spark.sql(query)
df_original = df_original.select("user_id").distinct()
df_a = df_original.sort(f.rand()).limit(10000)
df_a.count()
# df_a: 10000
df_b = df_original.join(df_a,on="user_id",how="left_anti").sort(f.rand()).limit(10000)
df_b.count()
# df_b: 10000
df_a.join(df_b,on="user_id",how="left_anti").count()
# df_a - df_b = 9998
# What?????
As a result, df_a and df_b share 2 user_ids... sometimes 1, or 0.
There seems to be nothing wrong with the code itself; this probably happens because of Spark's lazy evaluation, since f.rand() may be re-evaluated on each action, so df_a is not necessarily the same set of rows every time it is used.
I need to solve this so that I can collect two user_id DataFrames that have no user_id in common.
Since you want to generate two different sets of users from a given pool of users with no overlap, you can use this simple trick:
from pyspark.sql.functions import monotonically_increasing_id
import pyspark.sql.functions as f

# Creation of the original DF
query = "select * from tb_original"
df_original = spark.sql(query)
df_original = df_original.select("user_id").distinct()
df_original = df_original.withColumn("UNIQUE_ID", monotonically_increasing_id())

number_groups_needed = 2  # adjust the number of groups you need for your use case
dfa = df_original.filter(df_original.UNIQUE_ID % number_groups_needed == 0)
dfb = df_original.filter(df_original.UNIQUE_ID % number_groups_needed == 1)

# dfa and dfb will not have any overlap on user_id
PS: if your user_id is itself an integer, you don't need to create a new UNIQUE_ID column; you can use it directly.
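On why the original approach drifts: because f.rand() is evaluated lazily, df_a can be recomputed with a different random order on each action. If you want to keep the rand()-based sampling, a common workaround (a sketch, and not a hard guarantee since cached blocks can be evicted) is to materialize df_a before reusing it:
import pyspark.sql.functions as f

# Cache df_a so rand() is not re-evaluated by later actions
df_a = df_original.sort(f.rand()).limit(10000).cache()
df_a.count()  # force the cached result to be computed once

df_b = df_original.join(df_a, on="user_id", how="left_anti").sort(f.rand()).limit(10000)
Writing df_a out to storage (or checkpointing it) gives a stronger guarantee than cache().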
I chose the randomSplit function that PySpark provides.
df_a,df_b = df_original.randomSplit([0.6,0.4])
df_a = df_a.limit(10000)
df_a.count()
# 10000
df_b = df_b.limit(10000)
df_b.count()
# 10000
df_a.join(df_b,on="user_id",how="left_anti").count()
# 10000
No more conflicts between df_a and df_b!
I have two DataFrames:
Df_Address, which has 347k distinct addresses, and Df_Project, which has 24k records with Project_Id, Project_Start_Date and Project_Address.
I want to check whether there is a fuzzy match of my Project_Address in Df_Address. If there is a match, I want to extract the Project_Id and Project_Start_Date for it. Below is the code I am trying:
import pandas as pd
import numpy as np
from fuzzywuzzy import fuzz
from fuzzywuzzy import process

Df_Address = pd.read_csv("Cantractor_Addresses.csv")
Df_Project = pd.read_csv("Project_info.csv")

#address = list(Df_Project["Project_Address"])

def fuzzy_match(x, choices, cutoff):
    print(x)
    return process.extractOne(
        x, choices=choices, score_cutoff=cutoff
    )

Matched = Df_Address["Address"].apply(
    fuzzy_match,
    args=(
        Df_Project["Project_Address"],
        80,
    ),
)
This code does provide output in the form of a tuple ('matched_string', score), but it also returns strings that are only similar. I also need to extract Project_Id and Project_Start_Date for each match. Can someone help me achieve this, ideally using parallel processing, as the data is huge?
You can convert the tuples into a DataFrame and then join it back to your base DataFrame:
import pandas as pd

Df_Address = pd.DataFrame({'address': ['abc', 'cdf'], 'random_stuff': [100, 200]})
Matched = (('abc', 10), ('cdf', 20))

dist = pd.DataFrame(Matched)
dist.columns = ['address', 'distance']

final = Df_Address.merge(dist, how='left', on='address')
print(final)
Output:
address random_stuff distance
0 abc 100 10
1 cdf 200 20
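To then bring back Project_Id and Project_Start_Date, one approach (a sketch, going back to the question's Df_Address, Df_Project and Matched, and assuming Matched holds ('matched_string', score) tuples as shown in the question) is to expand the tuples into columns and merge on the matched address:
# Expand the (matched_string, score) tuples returned by fuzzy_match
matched_df = Df_Address[["Address"]].copy()
matched_df["Project_Address"] = Matched.apply(lambda m: m[0] if m else None)
matched_df["score"] = Matched.apply(lambda m: m[1] if m else None)

# Bring in Project_Id and Project_Start_Date for the matched address
result = matched_df.merge(
    Df_Project[["Project_Address", "Project_Id", "Project_Start_Date"]],
    on="Project_Address",
    how="left",
)
For speed on 347k x 24k comparisons, the rapidfuzz package exposes largely the same API as fuzzywuzzy and is much faster; its process.cdist also takes a workers argument for parallel scoring.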
My data is as below. I want to sort by the timestamp and use the latest sample of each userid as the test data. How should I do the train/test split? What I have tried is using pandas to sort_values by timestamp and then groupby 'userid', but I only get a GroupBy object. What is the correct way to do that? Is PySpark a better tool?
Once I have the DataFrame of the test data, how should I split off the rest? Obviously I cannot use sklearn's train_test_split.
You could do the following:
# Sort the data by time stamp
df = df.sort_values('timestamp')
# Group by userid and get the last entry from each group
test_df = df.groupby(by='userid', as_index=False).nth(-1)
# The rest of the values
train_df = df.drop(test_df.index)
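An equivalent pandas variant (just a sketch of the same idea) uses drop_duplicates, which keeps the last row per userid after sorting by timestamp:
# Latest sample per userid becomes the test set
test_df = df.sort_values('timestamp').drop_duplicates('userid', keep='last')
# Everything else becomes the training set
train_df = df.drop(test_df.index)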
You can do the following:
import pyspark.sql.functions as F
max_df = df.groupby("userid").agg(F.max("timestamp"))
# join it back to the original DF
df = df.join(max_df, on="userid")
train_df = df.filter(df["timestamp"] != df["max(timestamp)"])
test_df = df.filter(df["timestamp"] == df["max(timestamp)"])
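Another common PySpark pattern for the same split, sketched here under the assumption that the DataFrame is called df as above, is a window function that ranks each user's rows by timestamp:
from pyspark.sql import Window
import pyspark.sql.functions as F

w = Window.partitionBy("userid").orderBy(F.col("timestamp").desc())
ranked = df.withColumn("rn", F.row_number().over(w))

test_df = ranked.filter(F.col("rn") == 1).drop("rn")   # latest sample per user
train_df = ranked.filter(F.col("rn") > 1).drop("rn")   # the rest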
I have a fairly large table in BigQuery (approx. 9M rows) and I would like to read it via pandas.
I've tried the pd.read_gbq() function, which works fine on small tables.
On the large table it gets stuck after 50 seconds or so (the logs show elapsed .. 50s) without giving an error or anything.
My question is: how can I read that table using pandas (in chunks?). Any conventions on scaling up these BigQuery reads would be helpful.
EDIT / resolution
Adding to Khan's answer, I ended up implementing chunks, writing 500,000 rows at a time to a file, then reading those files into a DataFrame like so:
def download_gbq_table(self):
    if not os.path.exists(self.tmp_dir):
        os.makedirs(self.tmp_dir)
    increment = 100000
    # offsets 0, 100000, 200000, ... already cover the whole table,
    # because the final LIMIT simply returns fewer rows
    intervals = list(range(0, self.table_size, increment))
    df = pd.DataFrame()
    for offset in intervals:
        query = f"select * from `<table_name>` limit {increment} offset {offset};"
        logger.info(f"running query: {query}")
        start_time = time.time()
        tmp_df = pd.read_gbq(query,
                             project_id=self.connection_parameters['project_id'],
                             private_key=self.connection_parameters['service_account'],
                             dialect='standard'
                             )
        df = pd.concat([df, tmp_df])
        logger.info(f'time took: {str(round(time.time() - start_time, 2))}')
        if len(df) % 500000 == 0:
            df.to_csv(os.path.join(self.tmp_dir, f'df_{str(offset + increment)}.csv'))
            df = pd.DataFrame()
    # write whatever is left over after the last full 500,000-row batch
    if len(df) > 0:
        df.to_csv(os.path.join(self.tmp_dir, f'df_{str(offset + increment)}.csv'))

def read_df_from_multi_csv(self):
    all_files = glob.glob(os.path.join(self.tmp_dir, "df_*"))
    df_list = []
    for f in all_files:
        start_time = time.time()
        df_list.append(pd.read_csv(f))
        logger.info(f'time took for reading {f}: {str(round(time.time() - start_time, 2))}')
    return pd.concat(df_list)
Pandas' read_gbq function currently does not provide a chunksize parameter (even though its counterpart to_gbq does).
Anyway, you can solve your problem by adding LIMIT and OFFSET to your SQL query and reading from BigQuery iteratively. Something along the lines of:
project_id = "xxxxxxxx"
increment=100000
chunks=range(0, 9000000, 100000)
chunks[-1]+=increment
intervals=[[chunks[i-1], chunks[i]+1] for i, e in enumerate(chunks) if i > 0]
query_str="select * from `mydataset.mytable` limit {end} offset {start};"
for start, end in intervals:
query = query_str.format(start=start, end=end)
df = pd.read_gbq(query, project_id)
#-- do stuff with your df here..
Not sure if this existed back when the question was originally asked, but now you can use python-bigquery-sqlalchemy to read data from BigQuery, which lets you use the built-in chunking ability of pandas.read_sql(). You just create a SQLAlchemy connection engine using "bigquery://{project-name}" and pass that as con to pandas.read_sql().
For example:
from sqlalchemy.engine import create_engine
import pandas as pd

read_project = "my-cool-project"

query = f"""
select * from `{read_project}.cool-dataset.cooltable`
"""

bq_engine = create_engine(f"bigquery://{read_project}")

for df in pd.read_sql(query, con=bq_engine, chunksize=100_000):
    # do stuff with each chunk of rows...
    print(len(df))
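If you are on a recent google-cloud-bigquery client, another option is to stream the query result page by page instead of paging with LIMIT/OFFSET. This is only a sketch: it assumes a client version that provides RowIterator.to_dataframe_iterable(), and my-cool-project is a placeholder.
from google.cloud import bigquery

client = bigquery.Client(project="my-cool-project")  # placeholder project id
query = "select * from `my-cool-project.cool-dataset.cooltable`"

rows = client.query(query).result(page_size=100_000)
for chunk in rows.to_dataframe_iterable():
    # each chunk is a pandas DataFrame holding one page of results
    print(len(chunk))
Depending on the client version, installing google-cloud-bigquery-storage can speed this up considerably by pulling the data over the BigQuery Storage API.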