Hive/SQL queries for funnel analysis

I need to perform funnel analysis on data with the following schema:
A(int X) Matched_B(int[] Y) Filtered_C(int[] Z)
Where,
A refers to the client ID; a client can send multiple requests. Instead of storing a request ID, only the client ID is stored per request in the data pipeline (I don't know why).
Matched_B refers to the list of items returned for a query.
Filtered_C is a subset of Matched_B and refers to the items which successfully passed the filter.
All the data is stored in Avro files in HDFS, written at roughly 12,000 QPS.
I need to prepare the following reports:
For each combination of (X,Y[i]), the number of times Y[i] appears in Matched_B.
For each combination of (X,Y[i]), the number of times Y[i] appears in Filtered_C.
Basically I would like to know whether this task can be performed using Hive only?
Currently, I am thinking of the following architecture.
HDFS(avro_schema)--> Hive_Script_1 --> HDFS(avro_schema_1) --> Java Application --> HDFS(avro_schema_2) --> Hive_Script_2(external_table) --> result
Where,
avro_schema is the schema described above.
avro_schema_1 is generated by Hive_Script_1 by transforming avro_schema (using LATERAL VIEW explode(Matched_B)) and is described as follows:
A(int X) Matched_B_1(int Y) Filtered_C(int[] Z)
avro_schema_2 is generated by the Java Application and is described as follows:
A(int X) Matched_B(int Y) Matched_Y(1 if Y is matched, else 0) Filtered_Y(1 if Y is filtered, 0 otherwise)
Finally we can run a Hive script to process this data for events generated each day.
An alternative architecture would be to drop the avro_schema_1 step and have the Java application process avro_schema directly to generate the result.
However, I would like to avoid writing a Java application for this task. Could someone point me to a Hive-only solution to the above problem?
I would also welcome an architectural point of view on an efficient solution to this problem.
Note: kindly suggest a solution that takes the ingest rate (~12,000 QPS) into account.
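For reference, here is a minimal sketch of what a Hive-only version of the two reports could look like, using the LATERAL VIEW explode mentioned above plus array_contains() instead of the Java step. The table name funnel_events, the lower-cased column names, and the Python/`hive -e` submission wrapper are assumptions, not part of the original pipeline.
import subprocess

# Sketch only: table and column names are assumptions; adjust to the real Avro-backed table.
HIVE_QUERY = """
SELECT
    x,
    y,
    COUNT(*)                                                        AS matched_count,
    SUM(CASE WHEN array_contains(filtered_c, y) THEN 1 ELSE 0 END)  AS filtered_count
FROM funnel_events
LATERAL VIEW explode(matched_b) mb AS y
GROUP BY x, y
"""

# `hive -e` runs an inline query string; the same statement could also live in a Hive script file.
subprocess.run(["hive", "-e", HIVE_QUERY], check=True)
Because Filtered_C is a subset of Matched_B, a single pass over the exploded Matched_B yields both counts (assuming items are not duplicated within a single array), so neither avro_schema_1 nor the Java application is strictly needed.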

Related

Azure Data Factory Limits

I have created a simple pipeline that operates as follows:
Generates an access token via an Azure Function. No problem.
Uses a Lookup activity to create a table to iterate through the rows (4 columns by 0.5M rows). No problem.
For Each activity (sequential off, batch-size = 10):
(within For Each): Set some variables for checking important values.
(within For Each): Pass values through web activity to return a json.
(within For Each): Copy Data activity mapping parts of the json to the sink-dataset (postgres).
Problem: The pipeline slows to a crawl after approximately 1000 entries/inserts.
I was looking at this documentation regarding the limits of ADF.
ForEach items: 100,000
ForEach parallelism: 20
I would expect that this falls within those limits unless I'm misunderstanding it.
I also cloned the pipeline and tried it by offsetting the query in one, and it tops out at 2018 entries.
Can anyone with more experience give me some idea of what is going on here?
As a suggestion, whenever I have to fiddle with variables inside a ForEach, I make a new pipeline for the ForEach process and call it from within the ForEach. That way I make sure the variables get their own context for each iteration.
Have you already checked that the bottleneck is not at the source or sink? If the database or web service is under some stress, then going sequential may help if your scenario allows that.
Hope this helped!

Airflow: BigQueryOperator vs BigQuery Quotas and Limits

Is there any practical way to control quotas and limits in Airflow?
I'm especially interested in controlling BigQuery concurrency.
There are different levels of quotas in BigQuery, so depending on the operator's inputs there should be a way to check whether the conditions are met and, if not, wait until they are.
It seems like it could be a composition of sensors and operators, querying against a store such as Redis, for example:
QuotaSensor(Project, Dataset, Table, Query) >> QuotaAddOperator(Project, Dataset, Table, Query)
QuotaAddOperator(Project, Dataset, Table, Query) >> BigQueryOperator(Project, Dataset, Table, Query)
BigQueryOperator(Project, Dataset, Table, Query) >> QuotaSubOperator(Project, Dataset, Table, Query)
The Sensor must check conditions like:
- Global running queries <= 300
- Project running queries <= 100
- .. etc
Is there any lib that already does that for me? A plugin perhaps?
Or any other easier solution?
Otherwise, following the sensor/operator approach: how can I encapsulate all of it under a single operator (say, QuotaBigQueryOperator) to avoid code repetition?
Currently, it is only possible to get the Compute Engine quotas programmatically. However, there is an open feature request to get/set other project quotas via API. You can post there about the specific case you would like to have implemented, follow it to track progress, and ask for updates.
Meanwhile, as a workaround, you can try using the PythonOperator. With it you can define your own custom code and implement retries for queries that fail with a quotaExceeded error (or whichever specific error you are getting). That way you don't have to explicitly check the quota levels; you just run the queries and retry until they get executed. This is simplified code for the strategy I have in mind:
for query in QUERIES_TO_RUN:
    while True:
        try:
            run(query)
        except quotaExceededException:
            continue  # Jumps to the next cycle of the nearest enclosing loop.
        break
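To give that strategy a concrete (and purely illustrative) shape, here is a rough sketch of a DAG that wraps the retry loop in a PythonOperator. The DAG/task names, the 60-second backoff, and the assumption that quotaExceeded surfaces as an HTTP 403 Forbidden are mine, not from the original answer:
import time
from datetime import datetime

from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from google.api_core.exceptions import Forbidden  # assumption: quotaExceeded arrives as a 403
from google.cloud import bigquery

def run_with_quota_retry(sql, **kwargs):
    """Run one BigQuery query, retrying whenever a quota error comes back."""
    client = bigquery.Client()
    while True:
        try:
            client.query(sql).result()  # submit the query and wait for it to finish
            return
        except Forbidden:
            time.sleep(60)  # back off before retrying the same query

with DAG("bq_quota_retry_example",
         start_date=datetime(2019, 1, 1),
         schedule_interval="@daily") as dag:
    run_query = PythonOperator(
        task_id="run_query",
        python_callable=run_with_quota_retry,
        op_kwargs={"sql": "SELECT 1"},
    )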

How to map column-wise data in a flowfile in NiFi?

I have a CSV file with the following structure:
Alfreds,Centro,Ernst,Island,Bacchus
Germany,Mexico,Austria,UK,Canada
01,02,03,04,05
Now I have to move that data into a database like below:
Name,City,ID
Alfreds,Germany,01
Centro,Mexico,02
Ernst,Austria,03
Island,UK,04
Bacchus,Canada,05
I tried to map those columns, but I'm not able to extract the data column-wise.
My input data is column-wise, but I need to insert it row-wise into SQL Server.
Can anyone suggest a way to transform column-wise data into row-wise data for SQL Server?
Thanks
There is no existing Apache NiFi processor to perform column transposition. One of the problems is that this is difficult to do in the streaming manner most NiFi components are designed for, because a naïve implementation needs to hold the entire contents of the flowfile in memory at once.
I would recommend using an ExecuteScript processor to do this (here's a 6 line Python example). Be careful doing this because you can easily end up overflowing your heap if it is not set properly/you read unexpectedly large files into memory.
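The linked example is not reproduced here, but as a rough illustration (not the original 6-line script), an ExecuteScript body in Jython along these lines would transpose a small CSV flowfile; note that it reads the whole content into memory, which is exactly the heap concern mentioned above:
from java.nio.charset import StandardCharsets
from org.apache.commons.io import IOUtils
from org.apache.nifi.processor.io import StreamCallback

class TransposeCallback(StreamCallback):
    def process(self, inputStream, outputStream):
        text = IOUtils.toString(inputStream, StandardCharsets.UTF_8)
        rows = [line.split(',') for line in text.splitlines() if line.strip()]
        transposed = zip(*rows)  # columns become rows
        output = '\n'.join(','.join(col) for col in transposed)
        outputStream.write(bytearray(output.encode('utf-8')))

# `session` and REL_SUCCESS are provided to the script by the ExecuteScript processor.
flowFile = session.get()
if flowFile is not None:
    flowFile = session.write(flowFile, TransposeCallback())
    session.transfer(flowFile, REL_SUCCESS)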
You could write a custom processor which performs a streaming transpose operation by iterating over each of n rows and reading up to your delimiter, storing a byte counter per row, combining the n elements as a single output row, and repeating the process starting from the respective byte counter of each row. (Given m columns, this is O(m * n)).
Another solution would be splitting the CSV input into individual rows using the SplitText processor, using an ExecuteScript or custom processor to transpose a single row into a single column, and then using a custom merge operation (either extend the existing MergeContent processor or write a script to do this) which laterally concatenates the incoming columns into a reconstructed matrix. (O(n) + O(n) + O(m) => O(2n + m) but the individual transposition operations can be performed in parallel so with x threads it's O(n + n/x + m)).
Any of these approaches will require some level of custom development. If you are really hesitant to pursue that, you could try using ExecuteStreamCommand and one of the many bash solutions to do the transposition on the command-line.
@Andy,
It is also possible to do this in NiFi without using ExecuteScript.
I extracted the three input rows as input.1, input.2, input.3 with ExtractText. Then I counted the number of columns in "input.1" using anyDelineatedValue in Expression Language and stored that in a "TotalCount" attribute.
Initially I set "Count" = 1.
Using a loop, I get the first column by using "Count", then increment "Count" and check "Count" in RouteOnAttribute with
"le(totalcount)"
Then I form the insert query with the "Count" attribute.
It worked well for me. It could be useful for someone.

Imperative USQL

How do I write control code for USQL? For example, move records between rowsets depending on the result of a query, and do this iteratively? Do I need to write a C# wrapper that dynamically generates and submits USQL?
For example, imagine I am writing a type of divisive hierarchical clustering algorithm. I have a large data set, where each row represents a point I want to segment. I iteratively repeat the following procedure:
For next cluster of rows X on stack:
    For each potential disjoint subsegment Y of X:
        Test if Y is different than (X - Y)
        If yes:
            mark Y as a cluster and push it to the stack
            let X = X - Y
Obviously, this procedure involves a lot of control flow. How could I implement such a routine in USQL? If it is not possible, how would I implement it with ADLA?
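U-SQL scripts themselves are declarative, with no loops or mutable rowsets, so the usual pattern is to keep the control flow in an external driver that generates and submits one job per iteration. Below is a hedged sketch of such a driver (in Python for illustration, though the question mentions a C# wrapper); submit_usql_and_wait and read_split_result are placeholders for whatever submission mechanism you use (Azure CLI, the ADLA SDK, or C# code), and the paths and script body are assumptions:
def submit_usql_and_wait(script):
    # Placeholder: submit the U-SQL job (CLI, SDK, or a C# wrapper) and block until it finishes.
    raise NotImplementedError

def read_split_result(cluster_id):
    # Placeholder: read back the list of accepted subsegments the job wrote out.
    raise NotImplementedError

USQL_TEMPLATE = """
@points =
    EXTRACT id int, feature double
    FROM "/clusters/{cluster_id}.csv"
    USING Extractors.Csv();

// ... per-iteration logic: test each candidate subsegment Y of this cluster X,
//     keep Y only if it differs from (X - Y), and write the accepted split ...

OUTPUT @split
TO "/clusters/{cluster_id}_split.csv"
USING Outputters.Csv();
"""

stack = ["root"]                      # clusters still waiting to be examined
while stack:
    cluster_id = stack.pop()
    submit_usql_and_wait(USQL_TEMPLATE.format(cluster_id=cluster_id))
    for sub_id in read_split_result(cluster_id):
        stack.append(sub_id)          # accepted subsegments go back on the stack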

Caching of Map applications in Hadoop MapReduce?

Looking at the combination of MapReduce and HBase from a data-flow perspective, my problem seems to fit. I have a large set of documents which I want to Map, Combine and Reduce. My previous SQL implementation was to split the task into batch operations, cumulatively storing what would be the result of the Map into a table and then performing the equivalent of a reduce. This had the benefit that at any point during execution (or between executions), I had the results of the Map up to that point in time.
As I understand it, running this job as a MapReduce would require all of the Map functions to run each time.
My Map functions (and indeed any function) always give the same output for a given input. There is simply no point in re-calculating output if I don't have to. My input (a set of documents) will be continually growing, and I will run my MapReduce operation periodically over the data. Between executions I should only really have to calculate the Map functions for newly added documents.
My data will probably be HBase -> MapReduce -> HBase. Given that Hadoop is a whole ecosystem, it may be able to know that a given function has been applied to a row with a given identity. I'm assuming immutable entries in the HBase table. Does / can Hadoop take account of this?
I'm aware from the documentation (especially the Cloudera videos) that re-calculation (of potentially redundant data) can be quicker than persisting and retrieving for the class of problem Hadoop is being used for.
Any comments / answers?
If you're looking to avoid running the Map step each time, break it out as its own step (either by using the IdentityReducer or setting the number of reducers for the job to 0) and run later steps using the output of your map step.
Whether this is actually faster than recomputing from the raw data each time depends on the volume and shape of the input data vs. the output data, how complicated your map step is, etc.
Note that running your mapper on new data sets won't append to previous runs - but you can get around this by using a dated output folder. This is to say that you could store the output of mapping your first batch of files in my_mapper_output/20091101, and the next week's batch in my_mapper_output/20091108, etc. If you want to reduce over the whole set, you should be able to pass in my_mapper_output as the input folder, and catch all of the output sets.
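As a concrete, hypothetical illustration of that map-only/dated-folder pattern using Hadoop Streaming (the streaming jar location, input paths, and mapper script name below are assumptions):
import datetime
import subprocess

batch = datetime.date.today().strftime("%Y%m%d")            # e.g. 20091101

subprocess.check_call([
    "hadoop", "jar", "/usr/lib/hadoop/contrib/streaming/hadoop-streaming.jar",
    "-D", "mapred.reduce.tasks=0",                           # map-only: skip the reduce phase
    "-input", "new_documents/%s" % batch,                    # just this batch's new documents
    "-output", "my_mapper_output/%s" % batch,                # dated output folder
    "-mapper", "my_mapper.py",
    "-file", "my_mapper.py",
])

# A later reduce-only job can take the whole my_mapper_output/ tree as its input,
# so it sees every dated batch produced so far.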
Why not apply your SQL workflow in a different environment? Meaning, add a "processed" column to your input table. When time comes to run a summary, run a pipeline that goes something like:
map (map_function) on (input table filtered by !processed); store into map_outputs either in hbase or simply hdfs.
map (reduce function) on (map_outputs); store into hbase.
You can make life a little easier, assuming you are storing your data in HBase sorted by insertion date: if you record the timestamps of successful summary runs somewhere and open the filter only on inputs dated later than the last successful summary, you'll save some significant scanning time.
Here's an interesting presentation that shows how one company architected their workflow (although they do not use HBase):
http://www.scribd.com/doc/20971412/Hadoop-World-Production-Deep-Dive-with-High-Availability