Apache HBase - Fetching a large row is extremely slow

I'm running an Apache HBase cluster on AWS EMR. I have a table with a single column family, 75,000 columns, and 50,000 rows. I'm trying to get all the column values for a single row, and when the row is not sparse and has all 75,000 values, the return time is extremely slow: it takes almost 2.5 seconds to fetch the data from the DB. I'm querying the table from a Lambda function running Happybase.
import time

import happybase

# Connection setup shown for completeness; the host is a placeholder.
connection = happybase.Connection('my-hbase-thrift-host')

start = time.time()
table = connection.table('mytable')
row = table.row('myrow')  # fetches all ~75,000 column/value pairs for this row key
end = time.time() - start
print("Time taken to fetch row from database:")
print(end)
What can I do to make this faster? This seems incredibly slow: the return payload is 75,000 key/value pairs and is only ~2 MB. It should take much less than 2 seconds; I'm looking for millisecond-range return times.
The table has a BLOCKCACHE size of 8194 KB, a BLOOMFILTER of type ROW, and SNAPPY compression enabled.

Related

Set the number of records per file when unloading from Athena to S3

I am using a CTAS command and I was wondering if there is a way to set the number of records per file in S3. All I have found so far is how to set the file size, as described in this link:
https://aws.amazon.com/premiumsupport/knowledge-center/set-file-number-size-ctas-athena/
However, I will not know the size beforehand.
I will be using this command:
CREATE TABLE "historic_climate_gz_20_files"
WITH (
    external_location = 's3://awsexamplebucket/historic_climate_gz_20_files/',
    format = 'TEXTFILE',
    bucket_count = 20,
    bucketed_by = ARRAY['yearmonthday']
) as
select * from historic_climate_gz
I don't see any option to set the number of records.
How can I do this?
Thanks in advance.
Nobody knows beforehand exactly how many buckets should be created to get the desired size or number of rows per file.
That is why (as described in the link you provided) they measure the total size and divide it by the desired bucket size to get the number of buckets. You cannot specify the size, only the number of buckets, when creating the table. After you have calculated the number of buckets, create a new bucketed table and reload the data from the initial table.
So, if you want each of your files to contain a desired number of rows instead of being of a desired size, you need to calculate the number of buckets using the same approach.
Calculate the total number of rows in the initial dataset:
select count(*) as cnt from historic_climate_gz
For example, suppose your table contains 1000000 (1M) rows and you want buckets of 10K rows each. Calculate bucket_count = 1000000 / 10000 = 100.
Create a new bucketed table with 100 buckets and reload the data; each file in it will contain approximately 10K rows (if the bucket key is evenly distributed and has enough cardinality).
CREATE TABLE "historic_climate_gz_20_files"
WITH (
    external_location = 's3://awsexamplebucket/historic_climate_gz_20_files/',
    format = 'TEXTFILE',
    bucket_count = 100, -- 100 buckets
    bucketed_by = ARRAY['yearmonthday']
) as
select * from historic_climate_gz
As you can see, with a bucketed table you can only control the number_of_buckets and the bucket key. The size or number of rows per file is an approximation (an expectation), not something exact. With a bucket key that is not evenly distributed you will of course get buckets of different sizes, some bigger and some smaller. With an evenly distributed key you will get approximately the same number of rows in each bucket.
Which bucket a row goes to is decided by this function:
`hash(bucket_key) MOD number_of_buckets`, where hash is an integer.
MOD yields integer bucket numbers in the range [0, number_of_buckets - 1]. The number of buckets and the bucket key are what you can specify before the load.
Rows with the same bucket key value are written to the same bucket. If the bucket key distribution is skewed, your buckets will be skewed accordingly, with differing sizes and numbers of rows.
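To make the bucket assignment and the effect of key skew concrete, here is a toy Python sketch. Note that Athena/Presto uses its own internal hash function, and the date-like keys below are made up; Python's built-in hash() is only a stand-in for illustration.
# Toy simulation of hash(bucket_key) MOD number_of_buckets bucket assignment.
from collections import Counter

def assign_bucket(bucket_key, number_of_buckets):
    # Return the bucket number in [0, number_of_buckets - 1] for a key.
    return hash(bucket_key) % number_of_buckets

total_rows = 1_000_000
rows_per_file = 10_000
number_of_buckets = total_rows // rows_per_file   # 100, as calculated above

# Evenly distributed key: buckets end up roughly the same size.
even_keys = [f"2020{m:02d}{d:02d}" for m in range(1, 13) for d in range(1, 29)]
even_counts = Counter(assign_bucket(k, number_of_buckets) for k in even_keys * 3000)

# Skewed key: one value dominates, so one bucket is far larger than the rest.
skewed_keys = ["20200101"] * 500_000 + even_keys * 1500
skewed_counts = Counter(assign_bucket(k, number_of_buckets) for k in skewed_keys)

print("even   min/max bucket size:", min(even_counts.values()), max(even_counts.values()))
print("skewed min/max bucket size:", min(skewed_counts.values()), max(skewed_counts.values()))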

Pandas to_sql() performance related to number of columns

I noticed some odd behaviour in a script of mine which uses pandas' to_sql function to insert large numbers of rows into one of my MSSQL servers.
The performance decreases dramatically when the number of columns exceeds 10.
For example:
34484 rows x 10 columns => ~10k records per second
34484 rows x 12 columns => ~500 records per second
I use the fast_executemany flag when establishing the connection. Anyone got any idea?!
engine = sqlalchemy.create_engine("mssql+pyodbc:///?odbc_connect=%s?charset=utf8" % params, fast_executemany=True)
sqlalchemy_connection = engine.connect()
....
df.to_sql(name='TEST', con=sqlalchemy_connection, if_exists='append', index=False)
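For reference, here is a minimal timing harness along the lines of the numbers quoted above. It is a sketch, not the original script: the connection string (server, database, credentials, ODBC driver name) is a placeholder and the data is random, so treat it only as one way to reproduce the rows-per-second measurement for different column counts.
import time
import numpy as np
import pandas as pd
import sqlalchemy

# Placeholder connection string; adjust server, database, credentials and driver.
engine = sqlalchemy.create_engine(
    "mssql+pyodbc://user:password@myserver/mydb?driver=ODBC+Driver+17+for+SQL+Server",
    fast_executemany=True,
)

for n_cols in (10, 12):
    df = pd.DataFrame(
        np.random.rand(34484, n_cols),
        columns=[f"col{i}" for i in range(n_cols)],
    )
    start = time.time()
    df.to_sql(name="TEST", con=engine, if_exists="replace", index=False)
    elapsed = time.time() - start
    print(f"{n_cols} columns: {len(df) / elapsed:,.0f} records per second")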

Redis `SCAN`: how to balance newly incoming keys that might match against ensuring a result in a reasonable time?

I am not that familiar with Redis. At the moment I am designing a realtime service and I'd like to rely on it. I expect ~10000-50000 keys per minute to be SET with some reasonable EX, and to match over them using SCAN rarely enough not to worry about performance bottlenecks.
What I'm unsure about is the "in/out rate" and possible flooding with keys that might match some SCAN query, so that the scan never terminates (i.e. it always replies with the latest cursor position and forces you to continue; that could easily happen if one consumes x items per second while x + y items per second are coming in, with y > 0).
Obviously, I could set the desired SCAN size large enough; but I wonder if there is a better solution, or whether Redis itself guarantees that SCAN will grow its size automatically in such a case?
First some context, solution at the end:
From SCAN command > Guarantee of termination
The SCAN algorithm is guaranteed to terminate only if the size of the
iterated collection remains bounded to a given maximum size, otherwise
iterating a collection that always grows may result into SCAN to never
terminate a full iteration.
This is easy to see intuitively: if the collection grows there is more
and more work to do in order to visit all the possible elements, and
the ability to terminate the iteration depends on the number of calls
to SCAN and its COUNT option value compared with the rate at which the
collection grows.
But in The COUNT option it says:
Important: there is no need to use the same COUNT value for every
iteration. The caller is free to change the count from one iteration
to the other as required, as long as the cursor passed in the next
call is the one obtained in the previous call to the command.
Important to keep in mind, from Scan guarantees:
A given element may be returned multiple times. It is up to the
application to handle the case of duplicated elements, for example
only using the returned elements in order to perform operations that
are safe when re-applied multiple times.
Elements that were not
constantly present in the collection during a full iteration, may be
returned or not: it is undefined.
The key to a solution is in the cursor itself. See Making sense of Redis’ SCAN cursor. It is possible to deduce the percentage of progress of your scan because the cursor is really a bit-reversed index relative to the table size.
Using the DBSIZE or INFO keyspace command you can get how many keys you have at any time:
> DBSIZE
(integer) 200032
> info keyspace
# Keyspace
db0:keys=200032,expires=0,avg_ttl=0
Another source of information is the undocumented DEBUG htstats <index> command, just to get a feeling for the table:
> DEBUG htstats 0
[Dictionary HT]
Hash table 0 stats (main hash table):
table size: 262144
number of elements: 200032
different slots: 139805
max chain length: 8
avg chain length (counted): 1.43
avg chain length (computed): 1.43
Chain length distribution:
0: 122339 (46.67%)
1: 93163 (35.54%)
2: 35502 (13.54%)
3: 9071 (3.46%)
4: 1754 (0.67%)
5: 264 (0.10%)
6: 43 (0.02%)
7: 6 (0.00%)
8: 2 (0.00%)
[Expires HT]
No stats available for empty dictionaries
The table size is the power of 2 following your number of keys:
Keys: 200032 => Table size: 262144
The solution:
We will calculate a desired COUNT argument for every scan.
Say you will be calling SCAN with a frequency (F in Hz) of 10 Hz (every 100 ms) and you want it done in 5 seconds (T in s). So you want this finished in N = F*T calls, N = 50 in this example.
Before your first scan, you know your current progress is 0, so your remaining percent is RP = 1 (100%).
Before every SCAN call (or every few calls, if you want to save the round-trip time (RTT) of a DBSIZE call), you call DBSIZE to get the current number of keys, K.
You will use COUNT = K*RP/N
For the first call, this is COUNT = 200032*1/50 = 4000.
For any other call, you need to calculate RP = 1 - ReversedCursor/NextPowerOfTwo(K).
For example, let's say you have done 20 calls already, so now N = 30 (the remaining number of calls). You called DBSIZE and got K = 281569. This means NextPowerOfTwo(K) = 524288, which is 2^19.
Your next cursor is 14509 in decimal = 000011100010101101 in binary. As the table size is 2^19, we represent it with 18 bits.
You reverse the bits and get 101101010001110000 in binary = 185456 in decimal. This means we have covered 185456 out of 524288. And:
RP = 1 - ReversedCursor/NextPowerOfTwo(K) = 1 - 185456 / 524288 = 0.65 or 65%
So you have to adjust:
COUNT = K*RP/N = 281569 * 0.65 / 30 = 6100
So in your next SCAN call you use 6100. It makes sense that the value increased, because:
The number of keys has increased from 200032 to 281569.
Although only 60% of our initial estimate of calls remain, progress is behind: 65% of the keyspace is still pending to be scanned.
All of this assumed you are getting all keys. If you're pattern-matching, you need to use past results to estimate the remaining number of keys to be found. We add a factor PM (percent of matches) to the COUNT calculation:
COUNT = PM * K*RP/N
PM = keysFound / ( K * ReversedCursor/NextPowerOfTwo(K))
If after 20 calls, you have found only keysFound = 2000 keys, then:
PM = 2000 / ( 281569 * 185456 / 524288) = 0.02
This means only 2% of the keys are matching our pattern so far, so
COUNT = PM * K*RP/N = 0.02 * 6100 = 122
This algorithm can probably be improved, but you get the idea.
Make sure to run some benchmarks with the COUNT number you'll start with, to measure how many milliseconds your SCAN is taking. You may need to moderate your expectations about how many calls (N) you need to do this in a reasonable time without blocking the server, and adjust your F and T accordingly.
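For reference, here is a minimal Python sketch of this adaptive-COUNT loop using redis-py. The helper names (next_power_of_two, reversed_cursor, adaptive_scan), the pacing parameters, and the floor of 10 on COUNT are my own illustrative choices, and the sketch assumes the cursor can be bit-reversed within log2(table size) bits; benchmark and adjust it for your own setup rather than treating it as a drop-in implementation.
import time
import redis

def next_power_of_two(n):
    # Smallest power of two >= n: the hash table size Redis will be using.
    p = 1
    while p < n:
        p *= 2
    return p

def reversed_cursor(cursor, table_size):
    # Reverse the cursor bits within log2(table_size) bits (table_size is a power of two).
    bits = max(table_size.bit_length() - 1, 1)
    return int(format(cursor, "0{}b".format(bits))[::-1], 2)

def adaptive_scan(r, pattern="*", freq_hz=10, target_seconds=5):
    # SCAN all matching keys, recomputing COUNT before each call as described above.
    n_calls = freq_hz * target_seconds            # N = F * T
    cursor, found, calls = 0, [], 0
    while True:
        k = r.dbsize()                            # K: current number of keys
        table_size = next_power_of_two(k)
        covered = reversed_cursor(cursor, table_size) / table_size
        rp = 1.0 - covered                        # RP: remaining percent
        remaining_calls = max(n_calls - calls, 1)
        count = max(int(k * rp / remaining_calls), 10)
        if calls and covered:                     # PM: percent of matches so far
            pm = len(found) / max(k * covered, 1)
            count = max(int(pm * count), 10)
        cursor, keys = r.scan(cursor=cursor, match=pattern, count=count)
        found.extend(keys)
        calls += 1
        if cursor == 0:                           # full iteration finished
            return found
        time.sleep(1.0 / freq_hz)                 # pace the calls at F Hz

# Example usage (host/port are placeholders):
# r = redis.Redis(host="localhost", port=6379)
# keys = adaptive_scan(r, pattern="user:*")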

Which statistics are calculated faster in SAS proc summary?

I need a theoretical answer.
Imagine that you have a table with 1.5 billion rows (the table is created as column-based with DB2-Blu).
You are using SAS, and you will compute some statistics with PROC SUMMARY, such as min/max/mean values, the standard deviation, and the 10th and 90th percentiles, across your peer groups.
For instance, you have 30,000 peer groups and 50,000 values in each peer group (1.5 billion values in total).
In the other case you have 3 million peer groups with 50 values in each peer group, so you again have 1.5 billion values in total.
Would it be faster with fewer peer groups but more values in each peer group, or with more peer groups but fewer values in each?
I could test the first case (30,000 peer groups and 50,000 values per peer group) and it took around 16 minutes, but I can't test the second case.
Can you give an approximate run-time estimate for the case where I have 3 million peer groups with 50 values in each?
One more dimension to the question: would it be faster to compute those statistics with PROC SQL instead?
Example code is below:
proc summary data = table_blu missing chartype;
    class var1 var2; /* var1 and var2 together form the peer group */
    var values;
    output out = stattable(rename = (_type_ = type) drop = _freq_)
        n=n min=min max=max mean=mean std=std q1=q1 q3=q3 p10=p10 p90=p90 p95=p95
    ;
run;
So there are a number of things to think about here.
The first point, and quite possibly the largest in terms of performance, is getting the data from DB2 into SAS. (I'm assuming this is not an in-database instance of SAS -- correct me if it is.) That's a big table, and moving it across the wire takes time. Because of that, if you can calculate all these statistics inside DB2 with an SQL statement, that will probably be your fastest option.
So assuming you've downloaded the table to the SAS server:
A table sorted by the CLASS variables will be MUCH faster to process than an unsorted table. If SAS knows the table is sorted, it doesn't have to scan the table for records belonging to a group; it can do block reads instead of random IO.
If the table is not sorted, then the larger the number of groups, the more table scans that have to occur.
The point is, the speed of getting data from the disk to the CPU will be paramount in an unsorted process.
From there, you get into memory and CPU issues. PROC SUMMARY is multithreaded and SAS will read N groups at a time. If a group can fit into the memory allocated for that thread, you won't have an issue. If the group size is too large, then SAS will have to page.
I scaled down the problem to a 15M row example:
%let grps=3000;
%let pergrp=5000;
UNSORTED:
NOTE: There were 15000000 observations read from the data set
WORK.TEST.
NOTE: The data set WORK.SUMMARY has 3001 observations and 9
variables.
NOTE: PROCEDURE SUMMARY used (Total process time):
real time 20.88 seconds
cpu time 31.71 seconds
SORTED:
NOTE: There were 15000000 observations read from the data set
WORK.TEST.
NOTE: The data set WORK.SUMMARY has 3001 observations and 9
variables.
NOTE: PROCEDURE SUMMARY used (Total process time):
real time 5.44 seconds
cpu time 11.26 seconds
=============================
%let grps=300000;
%let pergrp=50;
UNSORTED:
NOTE: There were 15000000 observations read from the data set
WORK.TEST.
NOTE: The data set WORK.SUMMARY has 300001 observations and 9
variables.
NOTE: PROCEDURE SUMMARY used (Total process time):
real time 19.26 seconds
cpu time 41.35 seconds
SORTED:
NOTE: There were 15000000 observations read from the data set
WORK.TEST.
NOTE: The data set WORK.SUMMARY has 300001 observations and 9
variables.
NOTE: PROCEDURE SUMMARY used (Total process time):
real time 5.43 seconds
cpu time 10.09 seconds
I ran these a few times and the run times were similar. The sorted times are about equal and much faster.
The more-groups / fewer-per-group case was faster unsorted, but look at the total CPU usage: it is higher. My laptop has an extremely fast SSD, so IO was probably not the limiting factor -- the disk was able to keep up with the multi-core CPU's demands. On a system with a slower disk, the total run times could be different.
In the end, it depends too much on how the data is structured and on the specifics of your server and DB.
Not a theoretical answer but still relevant IMO...
To speed up your PROC SUMMARY on large tables, add the groupinternal option to your CLASS statement. This assumes, of course, that you don't want the variables formatted prior to being grouped.
e.g.:
class age / groupinternal;
This tells SAS that it doesn't need to apply a format to each value before calculating which class to group the value into. Every value has a format applied to it even if you have not specified one explicitly. This doesn't make a large difference on small tables, but on large tables it can.
From this simple test, it reduces the time from 60 seconds on my machine to 40 seconds (YMMV):
data test;
    set sashelp.class;
    do i = 1 to 10000000;
        output;
    end;
run;

proc summary data=test noprint nway missing;
    class age / groupinternal;
    var height;
    output out=smry mean=;
run;

Store matrix data in SQLite for fast retrieval in R

I have 48 matrices of dimensions 1,000 rows and 300,000 columns where each column has a respective ID, and each row is a measurement at one time point. Each of the 48 matrices is of the same dimension and their column IDs are all the same.
The way I have the matrices stored now is as RData objects and also as text files. I guess for SQL I'd have to transpose and store by ID, in which case the matrix would be of dimensions 300,000 rows and 1,000 columns.
I guess if I transpose it, a small version of the data would look like this:
id1 1.5 3.4 10 8.6 .... 10 (with 1,000 columns and 300,000 rows now)
I want to store them in a way such that I can use R to retrieve a few of the rows (~ 5 to 100 each time).
The general strategy I have in mind is as follows:
(1) Create a database in sqlite3 using R that I will use to store the matrices (in different tables)
For file 1 to 48 (each file is of dim 1,000 rows and 300,000 columns):
(2) Read in file into R
(3) Store the file as a matrix in R
(4) Transpose the matrix (now it's of dimensions 300,000 rows and 1,000 columns). Each row now corresponds to a unique ID in the SQLite table.
(5) Dump/write the matrix into the sqlite3 database created in (1) (dump it into a new table probably?)
Steps 1-5 are to create the DB.
Next, I need step 6 to read-in the database:
(6) Read some rows (at most 100 or so at a time) into R as a (sub)matrix.
A simple example code doing steps 1-6 would be best.
Some Thoughts:
I have used SQL before, but it was mostly to store tabular data where each column had a name. In this case each column is just one point of the data matrix. I guess I could just name them col1 ... col1000? Or are there better tricks?
If I look at: http://sandymuspratt.blogspot.com/2012/11/r-and-sqlite-part-1.html they show this example:
dbSendQuery(conn = db,
"CREATE TABLE School
(SchID INTEGER,
Location TEXT,
Authority TEXT,
SchSize TEXT)")
But in my case this would look like:
dbSendQuery(conn = db,
"CREATE TABLE mymatrixdata
(myid TEXT,
col1 float,
col2 float,
.... etc.....
col1000 float)")
I.e., I would have to type in col1 ... col1000 manually, which doesn't sound very smart. This is where I am mostly stuck; a code snippet would help me.
Then, I need to dump the text files into the SQLite database? Again, unsure how to do this from R.
Seems I could do something like this:
setwd(<directory where to save the database>)
db <- dbConnect(SQLite(), dbname="myDBname")
mymatrix.df = read.table(<full name to my text file containing one of the matrices>)
mymatrix = as.matrix(mymatrix.df)
Here I need to know the code for how to dump this into the database...
Finally,
How can I quickly retrieve the values (without having to read the entire matrices each time) for some of the rows (by ID) using R?
From the tutorial it'd look like this:
sqldf("SELECT id1,id2,id30 FROM mymatrixdata", dbname = "Test2.sqlite")
But there the id1, id2, id30 are hardcoded in the query, and I need to obtain them dynamically. I.e., sometimes I may want id1, id2, id10, id100; and another time I may want id80, id90, id250000, etc.
Something like this would be more appropriate for my needs:
cols.i.want = c("id1","id2","id30")
sqldf("SELECT cols.i.want FROM mymatrixdata", dbname = "Test2.sqlite")
Again, unsure how to proceed here. Code snippets would also help.
A simple example would help me a lot here, no need to code the whole 48 files, etc. just a simple example would be great!
Note: I am using Linux server, SQlite 3 and R 2.13 (I could update it as well).
In the comments the poster explained that it is only necessary to retrieve specific rows, not columns:
library(RSQLite)
m <- matrix(1:24, 6, dimnames = list(LETTERS[1:6], NULL)) # test matrix
con <- dbConnect(SQLite()) # could add dbname= arg. Here use in-memory so not needed.
dbWriteTable(con, "m", as.data.frame(m)) # write
dbGetQuery(con, "create unique index mi on m(row_names)")
# retrieve submatrix back as m2
m2.df <- dbGetQuery(con, "select * from m where row_names in ('A', 'C')
order by row_names")
m2 <- as.matrix(m2.df[-1])
rownames(m2) <- m2.df$row_names
Note that relational databases are set-based and the order in which the rows are stored is not guaranteed. We have used order by row_names to get them out in a specific order. If that is not good enough, then add a column giving the row index: 1, 2, 3, ....
REVISED based on comments.