I am using the catalog method to read data from HBase into a DataFrame, as described in Read HBase table with where clause using Spark, but I am wondering whether there is a more efficient way to do this.
The problem statement is:
scan HBase table_a
scan HBase table_b (the mapping table)
if the col_1 value is present in table_b, get the parent_id from the mapping table
otherwise, if the col_2 value is present in table_b, get the parent_id from the mapping table
save the result to a file
I am able to do this with the method above, but because I am using a join like the one below, it takes forever:
select * from a join b where (case when a.duns is null then a.ig else a.duns end) = b.rowkey
Please help.
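One rewrite I am considering is to materialize the lookup key as coalesce(duns, ig) up front and broadcast the small mapping table, instead of computing the CASE inside the join condition. A rough sketch, with hypothetical DataFrame names dfA (with duns and ig) and dfB (the mapping table with rowkey and parent_id):
import org.apache.spark.sql.functions.{broadcast, coalesce, col}

// The CASE expression above is equivalent to coalesce(duns, ig):
// materialize it once as a real column, then join on that column.
val dfAWithKey = dfA.withColumn("join_key", coalesce(col("duns"), col("ig")))

// If the mapping table is small enough, broadcasting it avoids a full shuffle join.
val result = dfAWithKey
  .join(broadcast(dfB), dfAWithKey("join_key") === dfB("rowkey"), "left")
  .select(dfAWithKey("*"), dfB("parent_id"))
Below is the HBase access code I am using today.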
import org.apache.hadoop.hbase.{HBaseConfiguration, HTableDescriptor, HColumnDescriptor, HConstants, TableName, CellUtil}
import org.apache.hadoop.hbase.client.{HBaseAdmin, Result, Put, HTable, ConnectionFactory, Connection, Get, Scan}
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.hadoop.hbase.util.Bytes
val hconf = HBaseConfiguration.create()
hconf.set("hbase.zookee per.quorum","localhost")
hconf.set("hbase.zookeeper.property.clientPort","2181")
val admin = new HBaseAdmin(hconf)
val hconn=ConnectionFactory.createConnection(hconf)
var tabName_string= admin.getTableNames("student")(0) // enter table name
val table = new HTable(hconf,tabName_string) // create table connection
var data= table.get(new Get(Bytes.toBytes("row-id97"))) // row ID
def getHBaseRowData(x: org.apache.hadoop.hbase.Cell, hint: Int) = {
  // Returns pieces of the cell keyed by its row, depending on the hint
  if (hint == 1) {
    (Bytes.toString(CellUtil.cloneRow(x)), Bytes.toString(CellUtil.cloneQualifier(x)))
  } else if (hint == 2) {
    (Bytes.toString(CellUtil.cloneRow(x)), Bytes.toString(CellUtil.cloneValue(x)))
  } else if (hint == 3) {
    (Bytes.toString(CellUtil.cloneRow(x)), Bytes.toString(CellUtil.cloneFamily(x)))
  } else if (hint == 4) {
    (Bytes.toString(CellUtil.cloneRow(x)), Bytes.toString(CellUtil.cloneQualifier(x)), Bytes.toString(CellUtil.cloneFamily(x)), Bytes.toString(CellUtil.cloneValue(x)))
  } else {
    "Wrong Hint"
  }
}
data.rawCells().foreach(x => println(getHBaseRowData(x, 4)))
I am trying to create a DataFrame from a Hive table, but I am not comfortable with the Spark API yet. I need help optimizing the query in the method getLastSession so that its two Spark jobs are combined into one:
val pathTable = new File("/src/test/spark-warehouse/test_db.db/test_table").getAbsolutePath
val path = new Path(s"$pathTable${if(onlyPartition) s"/name_process=$processName" else ""}").toString
val df = spark.read.parquet(path)
def getLastSession: Dataset[Row] = {
  val lastTime = df.select(max(col("time_write"))).collect()(0)(0).toString
  val lastSession = df.select(col("id_session")).where(col("time_write") === lastTime).collect()(0)(0).toString
  val dfByLastSession = df.filter(col("id_session") === lastSession)
  dfByLastSession.show()
  /*
  +----------+----------------+------------------+-------+
  |id_session|      time_write|               key|  value|
  +----------+----------------+------------------+-------+
  |alskdfksjd|1639950466414000|schema2.table2.csv|Failure|
  +----------+----------------+------------------+-------+
  */
  dfByLastSession
}
P.S. My source table (for example):
+-------------+----------+-----------+------------------+-------+
| name_process|id_session| time_write|               key|  value|
+-------------+----------+-----------+------------------+-------+
|   OtherClass|jsdfsadfsf|43434883477|schema0.table0.csv|Success|
|   OtherClass|jksdfkjhka|23212123323|schema1.table1.csv|Success|
|   OtherClass|alskdfksjd|23343212234|schema2.table2.csv|Failure|
|ExternalClass|sdfjkhsdfd|34455453434|schema3.table3.csv|Success|
+-------------+----------+-----------+------------------+-------+
You can use row_number with Window like this:
import org.apache.spark.sql.expressions.Window

val dfByLastSession = df.withColumn(
  "rn",
  row_number().over(Window.orderBy(desc("time_write")))
).filter("rn = 1").drop("rn")
dfByLastSession.show()
However, since the Window is not partitioned by any column, all rows are shuffled into a single partition, which can degrade performance.
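If there were a natural grouping column you could partition the window by it, which avoids funnelling every row through one partition; note that this variant answers a slightly different question (the latest session per name_process rather than overall). A rough sketch, assuming name_process is such a column:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, desc, row_number}

// One row per name_process: the row with the greatest time_write in each group
val wPerProcess = Window.partitionBy(col("name_process")).orderBy(desc("time_write"))

val dfLastPerProcess = df
  .withColumn("rn", row_number().over(wPerProcess))
  .filter(col("rn") === 1)
  .drop("rn")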
Another thing you can change in your code is to use struct ordering to get the id_session associated with the most recent time_write in a single query (structs compare field by field, so the max is decided by time_write first):
val lastSession = df.select(max(struct(col("time_write"), col("id_session")))("id_session")).first.getString(0)
val dfByLastSession = df.filter(col("id_session") === lastSession)
I am using Databricks spark-avro to convert a DataFrame schema into an Avro schema. The returned Avro schema has no default values, which causes issues when I try to create a GenericRecord from it. Can anyone help with the right way of using this function?
Dataset<Row> sellableDs = sparkSession.sql("sql query");
SchemaBuilder.RecordBuilder<Schema> rb = SchemaBuilder.record("testrecord").namespace("test_namespace");
Schema sc = SchemaConverters.convertStructToAvro(sellableDs.schema(), rb, "test_namespace");
System.out.println(sc.toString());
System.out.println(sc.getFields().get(0).toString());
String schemaString = sc.toString();
sellableDs.foreach(
    (ForeachFunction<Row>) row -> {
      Schema scEx = new Schema.Parser().parse(schemaString);
      GenericRecord gr = new GenericData.Record(scEx);
      System.out.println("Generic record Created");
      int fieldSize = scEx.getFields().size();
      for (int i = 0; i < fieldSize; i++) {
        System.out.println("field: " + scEx.getFields().get(i).toString() + "::" + "value:" + row.get(i));
        // use the field's name as the key; Field.toString() returns the field's JSON, not its name
        gr.put(scEx.getFields().get(i).name(), row.get(i));
      }
    }
);
This is the df schema:
StructType(StructField(key,IntegerType,true), StructField(value,DoubleType,true))
This is the avro converted schema:
{"type":"record","name":"testrecord","namespace":"test_namespace","fields":[{"name":"key","type":["int","null"]},{"name":"value","type":["double","null"]}]}
The problem is that the SchemaConverters class does not include default values as part of schema creation. You have two options: modify the schema to add default values before creating the record, or fill every field of the record before building it with some value (it could be the actual values from your row, or simply null). This is an example of how to create a record with your schema:
import org.apache.avro.generic.GenericRecordBuilder
import org.apache.avro.Schema

val schema = new Schema.Parser().parse("{\"type\":\"record\",\"name\":\"testrecord\",\"namespace\":\"test_namespace\",\"fields\":[{\"name\":\"key\",\"type\":[\"int\",\"null\"]},{\"name\":\"value\",\"type\":[\"double\",\"null\"]}]}")
val builder = new GenericRecordBuilder(schema)

// Fill every field explicitly (here with null, which both unions allow),
// so the builder never has to fall back to a missing default value.
for (i <- 0 until schema.getFields().size()) {
  builder.set(schema.getFields().get(i).name(), null)
}

val record = builder.build()
print(record.toString())
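If you prefer the first option (a schema that actually carries default values), one way is to rebuild the record with Avro's SchemaBuilder and declare the fields as optional, which yields ["null", <type>] unions with a null default. A rough sketch for the two fields from the converted schema above:
import org.apache.avro.SchemaBuilder

// Each optionalXxx field becomes a ["null", <type>] union with a null default,
// so a GenericRecordBuilder no longer fails on fields that were not set.
val schemaWithDefaults = SchemaBuilder.record("testrecord").namespace("test_namespace")
  .fields()
  .optionalInt("key")
  .optionalDouble("value")
  .endRecord()
Note that the null branch comes first in these unions, which is what Avro requires for a null default value.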
Short question: I would like to split a BQ table into multiple small tables based on the distinct values of a column. So, if column country has 10 distinct values, it should split the table into 10 individual tables, each holding the respective country's data. Ideally this would be done from within a BQ query (using INSERT, MERGE, etc.).
What I am doing right now is exporting the data to Cloud Storage, downloading it to local storage, doing the splits locally, and then pushing the pieces into separate tables (which is a very time-consuming process).
Thanks.
If the data has the same schema, just leave it in one table and use the clustering feature: https://cloud.google.com/bigquery/docs/reference/standard-sql/data-definition-language#creating_a_clustered_table
#standardSQL
CREATE TABLE mydataset.myclusteredtable
PARTITION BY dateCol
CLUSTER BY country
OPTIONS (
  description="a table clustered by country"
) AS (
  SELECT ....
)
https://cloud.google.com/bigquery/docs/clustered-tables
The feature is in beta though.
You can use Dataflow for this. This answer gives an example of a pipeline that queries a BigQuery table, splits the rows based on a column and then outputs them to different PubSub topics (which could be different BigQuery tables instead):
Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).withValidation().create());
PCollection<TableRow> weatherData = p.apply(
BigQueryIO.Read.named("ReadWeatherStations").from("clouddataflow-readonly:samples.weather_stations"));
final TupleTag<String> readings2010 = new TupleTag<String>() {};
final TupleTag<String> readings2000plus = new TupleTag<String>() {};
final TupleTag<String> readingsOld = new TupleTag<String>() {};
PCollectionTuple collectionTuple = weatherData.apply(ParDo.named("tablerow2string")
    .withOutputTags(readings2010, TupleTagList.of(readings2000plus).and(readingsOld))
    .of(new DoFn<TableRow, String>() {
      @Override
      public void processElement(DoFn<TableRow, String>.ProcessContext c) throws Exception {
        if (c.element().getF().get(2).getV().equals("2010")) {
          c.output(c.element().toString());
        } else if (Integer.parseInt(c.element().getF().get(2).getV().toString()) > 2000) {
          c.sideOutput(readings2000plus, c.element().toString());
        } else {
          c.sideOutput(readingsOld, c.element().toString());
        }
      }
    }));
collectionTuple.get(readings2010)
.apply(PubsubIO.Write.named("WriteToPubsub1").topic("projects/fh-dataflow/topics/bq2pubsub-topic1"));
collectionTuple.get(readings2000plus)
.apply(PubsubIO.Write.named("WriteToPubsub2").topic("projects/fh-dataflow/topics/bq2pubsub-topic2"));
collectionTuple.get(readingsOld)
.apply(PubsubIO.Write.named("WriteToPubsub3").topic("projects/fh-dataflow/topics/bq2pubsub-topic3"));
p.run();
I want to process a CSV file stored in a Cloud Storage bucket and insert its data into a BQ table. I found the following piece of code, but I am not sure how to instantiate com.google.cloud.bigquery.Table for a given table name:
com.google.cloud.bigquery.Table table = null;
com.google.cloud.bigquery.Job job = table.load(FormatOptions.csv(), sourceUri);
com.google.cloud.bigquery.Job completedJob = job.waitFor(WaitForOption.checkEvery(1, TimeUnit.SECONDS),
    WaitForOption.timeout(3, TimeUnit.MINUTES));
if (!(completedJob != null && completedJob.getStatus().getError() == null)) {
  throw new InterruptedException("Unable to load file from bucket into BQ");
}
return job;
Snippet taken from here.
[imports]
BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
TableId tableId = TableId.of("dataset", "table");
Table table = bigquery.getTable(tableId);
[..]
Side note - that is an Alpha client library you are using. Just so you know.
I have existing ETL logic that I have to implement in Pig. The ETL logic creates a unique sequence number for a column whenever the incoming value is null or blank. I need to do this in Pig.
You can generate a sequence number using RANK, but your requirement is a little different: you only assign a sequence number when the value is '0' or null. In my opinion you should use a UDF for this.
package pig.test;

import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

public class SequenceNumber extends EvalFunc<Integer> {
    static int cnt = 0;

    public Integer exec(Tuple v) throws IOException {
        int a = (Integer) v.get(0);
        if (a == 0) {
            // value was null/blank (replaced by 0 in the Pig script): assign the next sequence number
            cnt++;
            return new Integer(cnt);
        } else {
            // keep the original value
            return new Integer(a);
        }
    }
}
In Pig (this assumes the UDF jar has been REGISTERed and an alias defined with DEFINE sqno pig.test.SequenceNumber();):
-- Step 1: replace all nulls with 0
A1 = foreach A generate *, (id is null ? 0 : id) as sq;
-- Step 2: put the sq column first so the UDF reads it as field 0
T1 = foreach A1 generate sq, <your_fields>, <your_fields>;
-- Step 3: apply the UDF
Result = foreach T1 generate sqno(*), <your_fields>, <your_fields>;
Hope this will help!!