I have a piece of code written in Spark that loads data from HDFS into Java classes generated from Avro IDL. On an RDD created that way I execute a simple operation whose result depends on whether I cache the RDD beforehand.
i.e. if I run the code below
val loadedData = loadFromHDFS[Data](path,...)
println(loadedData.map(x => x.getUserId + x.getDate).distinct().count()) // 200000
the program will print 200000; on the other hand, executing the next piece of code
val loadedData = loadFromHDFS[Data](path,...).cache()
println(loadedData.map(x => x.getUserId + x.getDate).distinct().count()) // 1
results in 1 being printed to stdout.
When I inspect the values of the fields after reading the cached data, it seems that all the records hold identical values.
I am pretty sure that the root cause of the described problem is an issue with serialization of the classes generated from Avro IDL, but I do not know how to resolve it. I tried using Kryo, registering the generated class (Data), and registering different serializers from chill_avro for that class (SpecificRecordSerializer, SpecificRecordBinarySerializer, etc.), but none of these ideas helped.
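For reference, a rough sketch of the kind of Kryo registration I tried, assuming chill_avro's AvroSerializer helpers (the registrator class name, package, and conf wiring below are illustrative, not my exact code):

import com.esotericsoftware.kryo.Kryo
import com.twitter.chill.avro.AvroSerializer
import org.apache.spark.SparkConf
import org.apache.spark.serializer.KryoRegistrator

// Illustrative registrator wiring the generated Data class to a chill_avro serializer
class DataKryoRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo): Unit = {
    kryo.register(classOf[Data], AvroSerializer.SpecificRecordBinarySerializer[Data])
  }
}

val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryo.registrator", "my.package.DataKryoRegistrator")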
How can I solve this problem?
Link to minimal, complete, and verifiable example.
Try the code below, which caches after the map instead of caching the loaded Avro records -
val loadedData = loadFromHDFS[Data](path,...)
println(loadedData.map(x => x.getUserId + x.getDate).cache().distinct().count())
I am working with Spark version 3.0.1. I am generating a large DataFrame, and at the end of the calculations I save the DataFrame's plan in JSON format, because I need it.
But there is one problem: if I persist a DataFrame, its plan in JSON format is completely truncated, i.e. all data lineage disappears.
For example, I do this:
val myDf: DataFrame = ???
val myPersistDf = myDf.persist
//toJSON method cuts down my plan
val jsonPlan = myPersistDf.queryExecution.optimizedPlan.toJSON
As a result, only information about the current columns remains.
But if you use Spark version 3.1.2, for example, there is no such problem: the plan is not cut.
It is also worth saying that if you do not call the toJSON method, then the plan is not cut:
// Plan is not being cut.
val textPlan = myPersistDf.queryExecution.optimizedPlan.toString
I made a small test project to make sure of this:
https://github.com/MinorityMeaning/CutPlanDataFrame
Please help me figure it out.
I need to get the full plan in json format.
UPD(1):
Now I'm trying to convert each node to JSON separately. It doesn't work perfectly yet, but I think this is the direction to go in.
import org.apache.spark.sql.catalyst.trees.TreeNode
import org.json4s.jackson.JsonMethods

def getJson(lp: TreeNode[_]): Seq[String] = {
  val children = (lp.innerChildren ++ lp.children.map(c => c.asInstanceOf[TreeNode[_]])).distinct
  // JSON of the current node, followed by the JSON of all (distinct) child nodes
  JsonMethods.compact(JsonMethods.render(JsonMethods.parse(lp.toJSON)(0))) +:
    children.flatMap(t => getJson(t))
}

val jsonPlan = s"[${getJson(result_df.queryExecution.optimizedPlan).mkString(",")}]"
UPD(2):
OK, here's how I finally solved this problem.
I downloaded the Spark 3.0.1 sources from GitHub, replaced the TreeNode class in that project with the file from Spark 3.1.2, and recompiled the project. As a result, I got a spark-catalyst_2.12-3.0.1.jar, which replaced the existing original jar.
There is no option to switch to another version of Spark, and I have not found any other solution to the problem.
Thank you guys for the pointers. Your advice helped.
You can cherry-pick the 2 commits below into Spark 3.0.1 to fix this issue.
* 1603775934 - [SPARK-35411][SQL][FOLLOWUP] Handle Currying Product while serializing TreeNode to JSON (8 months ago) <Tengfei Huang>
* 9804f07c17 - [SPARK-35411][SQL] Add essential information while serializing TreeNode to json (9 months ago) <Tengfei Huang>
Well, you already kinda have your answer.
TLDR: Upgrade.
You could look into the GitHub repo at the logical plan code and look for differences in how logical plans are calculated between 3.0.1 and 3.1.2 (or look at the code for persist and see how it has changed). You could then back-port a patch to 3.0.1, but you'd still need to build a new version of Spark and deploy it so that the full plan is returned. And if you are doing all that work, why not upgrade to 3.1.2 if you know it works? (Or some later version of Spark?)
(You must have some dependency on a sub-component that is only compatible with 3.0.1?)
I have a relatively large db that may take 1 to 2 minutes to initialise. Is it possible to load a pre-populated db when using SQLDelight (Kotlin Multiplatform) instead of initialising the db on app launch?
Yes, but it can be tricky. Not just for "Multiplatform". You need to copy the db to the db folder before trying to init sqldelight. That probably means i/o on the main thread when the app starts.
There is no standard way to do this now. You'll need to put the db file in assets on android and in a bundle on iOS and copy them to their respective folders before initializing sqldelight. Obviously you'll want to check if the db exists first, or have some way of knowing this is your first app run.
If you're planning on shipping updates that will have newer databases, you'll need to manage versions outside of just a check for the existence of the db.
Although not directly answering your question, 1 to 2 minutes is really, really long for sqlite. What are you doing? I would first make sure you're using transactions properly. 1-2 minutes of inserting data would (probably) result in a huge db file.
Sorry, but I can't add any comments yet, which would be more appropriate...
"Although not directly answering your question, 1 to 2 minutes is really, really long for sqlite. What are you doing? I would first make sure you're using transactions properly. 1-2 minutes of inserting data would (probably) result in a huge db file."
Alternatively: the problem that forced me to use a pre-populated database was the large size of the .sq files (more than 30 MB of INSERT statements per table), and SQLDelight silently aborted code generation without displaying any error message.
"You'll need to put the db file in assets on android and in a bundle on iOS and copy them to their respective folders before initializing sqldelight."
"Having to load a db from resources on both android and ios feels like a lot of work + it means the shared project won't be the only place where the data is initialised."
The Kotlin Multiplatform library Moko-resources solves the issue of a single source for the database in a shared module. It works for KMM the same way on Android and iOS.
Unfortunately, the use of this feature is barely covered in the library's samples. I added a second method (getDriver) to the expected class DatabaseDriverFactory to open the prepared database, and implemented it on each platform. For example, for androidMain:
import android.content.Context
import com.squareup.sqldelight.android.AndroidSqliteDriver
import com.squareup.sqldelight.db.SqlDriver
import java.io.File
import java.io.FileOutputStream
import java.io.InputStream

actual class DatabaseDriverFactory(private val context: Context) {

    actual fun createDriver(schema: SqlDriver.Schema, fileName: String): SqlDriver {
        return AndroidSqliteDriver(schema, context, fileName)
    }

    actual fun getDriver(schema: SqlDriver.Schema, fileName: String): SqlDriver {
        // Copy the prepared database from the moko-resources raw resource on first run
        val database: File = context.getDatabasePath(fileName)
        if (!database.exists()) {
            val inputStream = context.resources.openRawResource(MR.files.dbfile.rawResId)
            val outputStream = FileOutputStream(database.absolutePath)
            inputStream.use { input: InputStream ->
                outputStream.use { output: FileOutputStream ->
                    input.copyTo(output)
                }
            }
        }
        return AndroidSqliteDriver(schema, context, fileName)
    }
}
MR.files.dbfile is the FileResource from the class generated by the library; it is associated with the name of the file located in the resources/MR/files directory of the commonMain module. Its rawResId property represents the platform-side resource ID.
The only thing you need to do is specify the path to the DB file when creating the driver.
Let's assume your DB lies in /mnt/my_best_app_dbs/best.db. Now pass that path as the name property of the driver. Something like this:
val sqlDriver: SqlDriver = AndroidSqliteDriver(Schema, context, "/mnt/my_best_app_dbs/best.db")
Keep in mind that you might need to have permissions that allow you to read a given storage type.
We are using a pretty simple flow where messages are retrieved from PubSub, their JSON content is flattened into two types (for BigQuery and Postgres) and then inserted into both sinks.
But we are seeing duplicates in both sinks (Postgres was more or less fixed with a unique constraint and "ON CONFLICT... DO NOTHING").
At first we trusted the "insertId" UUID that Apache Beam/BigQuery supposedly creates.
Then we added a "unique_label" attribute to each message before queueing it into PubSub, using data from the JSON itself that gives it uniqueness (a device_id + a reading's timestamp), and subscribed to the topic using that attribute with the "withIdAttribute" method.
Finally we paid for GCP Support, and their "solutions" do not work. They have even told us to use the Reshuffle transform, which is deprecated by the way, and some windowing (which we do not want, since we want near-real-time data).
This is the main flow, pretty basic:
[UPDATED WITH LAST CODE]
Pipeline
val options = PipelineOptionsFactory.fromArgs(*args).withValidation().`as`(OptionArgs::class.java)
val pipeline = Pipeline.create(options)

var mappings = ""

// Value only available at runtime
if (options.schemaFile.isAccessible) {
    mappings = readCloudFile(options.schemaFile.get())
}

val tableRowMapper = ReadingToTableRowMapper(mappings)
val postgresMapper = ReadingToPostgresMapper(mappings)

val pubsubMessages =
    pipeline
        .apply("ReadPubSubMessages",
            PubsubIO
                .readMessagesWithAttributes()
                .withIdAttribute("id_label")
                .fromTopic(options.pubSubInput))

pubsubMessages
    .apply("AckPubSubMessages", ParDo.of(object : DoFn<PubsubMessage, String>() {
        @ProcessElement
        fun processElement(context: ProcessContext) {
            LOG.info("Processing readings: " + context.element().attributeMap["id_label"])
            context.output("")
        }
    }))

val disarmedMessages =
    pubsubMessages
        .apply("DisarmedPubSubMessages",
            DisarmPubsubMessage(tableRowMapper, postgresMapper)
        )

disarmedMessages
    .get(TupleTags.readingErrorTag)
    .apply("LogDisarmedErrors", ParDo.of(object : DoFn<String, String>() {
        @ProcessElement
        fun processElement(context: ProcessContext) {
            LOG.info(context.element())
            context.output("")
        }
    }))

disarmedMessages
    .get(TupleTags.tableRowTag)
    .apply("WriteToBigQuery",
        BigQueryIO
            .writeTableRows()
            .withoutValidation()
            .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_NEVER)
            .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND)
            .withFailedInsertRetryPolicy(InsertRetryPolicy.neverRetry())
            .to(options.bigQueryOutput)
    )

pipeline.run()
DisarmPubsubMessage is a PTransform that uses the FlatMapElements transform to obtain the TableRow and ReadingsInputFlatten (our own class for Postgres) outputs.
We expect zero duplicates, or at least a "best effort" (and we could append some cleaning cron job); we paid for these products to run statistics and big data analysis...
[UPDATE 1]
I even appended a new simple transform that logs our unique attribute through a ParDo, which supposedly should ack the PubsubMessage, but this is not the case:
new flow with AckPubSubMessages step
Thanks!!
Looks like you are using the global window. One technique would be to window this into an N-minute window, then process the keys in the window and drop any items with duplicate keys.
The supported programming languages are Python and Java; your code seems to be Kotlin and, as far as I know, that is not supported. I strongly recommend using Java to avoid any unsupported features of the programming language you use.
In addition, I would recommend the following approaches to deal with duplicates; option 2 could meet your need for near-real-time data:
message_id. You probably already read the FAQ on duplicates, which points to a deprecated doc. However, if you check the PubsubMessage object, you will notice that messageId is still available and it will be populated if not set by the publisher:
"ID of this message, assigned by the server when the message is published ... It must not be populated by the publisher in a topics.publish call"
BigQuery Streaming. To catch duplicates while loading the data, you can create a UUID right before inserting into BQ. Please refer to the section Example sink: Google BigQuery.
Try the Dataflow template PubSubToBigQuery and validate that there are no duplicates in BQ.
I'm using JMeter to design a test that requires data to be randomly read from text files. To save memory, I have set up a "setUp Thread Group" with a BeanShell PreProcessor containing the following:
//Imports
import org.apache.commons.io.FileUtils;
//Read data files
List items = FileUtils.readLines(new File(vars.get("DataFolder") + "/items.txt"));
//Store for future use
props.put("items", items);
I then attempt to read this in my other thread groups and am trying to access a random line in my text files with something like this:
(props.get("items")).get(new Random().nextInt((props.get("items")).size()))
However, this throws a "Typed variable declaration" error and I think it's because the get() method returns an object and I'm trying to invoke size() on it, since it's really a List. I'm not sure what to do here. My ultimate goal is to define some lists of data once to be used globally in my test so my tests don't have to store this data themselves.
Does anyone have any thoughts as to what might be wrong?
EDIT
I've also tried defining the variables in the setUp thread group as follows:
bsh.shared.items = items;
And then using them as this:
(bsh.shared.items).get(new Random().nextInt((bsh.shared.items).size()))
But that fails with the error "Method size() not found in class'bsh.Primitive'".
You were very close; just add a cast to List so the interpreter knows what the expected object is:
log.info(((List)props.get("items")).get(new Random().nextInt(((List)props.get("items")).size())));
Be aware that since JMeter 3.1 it is recommended to use Groovy for any form of scripting as:
Groovy performance is much better
Groovy supports more modern Java features while with Beanshell you're stuck at Java 5 level
Groovy has plenty of JDK enhancements, e.g. the File.readLines() function
So kindly find Groovy solution below:
In the first Thread Group:
props.put('items', new File(vars.get('DataFolder') + '/items.txt').readLines())
In the second Thread Group:
def items = props.get('items')
def randomLine = items.get(new Random().nextInt(items.size))
I want to write a test for my Spark Streaming application that consumes a Flume source.
http://mkuthan.github.io/blog/2015/03/01/spark-unit-testing/ suggests using ManualClock, but for the moment reading a file and verifying the outputs would be enough for me.
So I wish to use:
JavaStreamingContext streamingContext = ...
JavaDStream<String> stream = streamingContext.textFileStream(dataDirectory);
stream.print();
streamingContext.awaitTermination();
streamingContext.start();
Unfortunately it does not print anything.
I tried:
dataDirectory = "hdfs://node:port/absolute/path/on/hdfs/"
dataDirectory = "file://C:\\absolute\\path\\on\\windows\\"
adding the text file in the directory BEFORE the program begins
adding the text file in the directory WHILE the program runs
Nothing works.
Any suggestions on how to read from a text file?
Thanks,
Martin
The order of start and awaitTermination is indeed inverted.
In addition to that, the easiest way to pass data to your Spark Streaming application for testing is a QueueDStream. It's a mutable queue of RDDs of arbitrary data, which means you can create the data programmatically or load it from disk into an RDD and pass that to your Spark Streaming code.
E.g. to avoid the timing issues faced with the fileConsumer, you could try this:
import scala.collection.mutable.Queue
import org.apache.spark.rdd.RDD

val rdd = sparkContext.textFile(...)

val rddQueue: Queue[RDD[String]] = Queue()
rddQueue += rdd

val dstream = streamingContext.queueStream(rddQueue)
doMyStuffWithDstream(dstream)

streamingContext.start()
streamingContext.awaitTermination()
I am so stupid: I inverted the calls to start() and awaitTermination().
If you want to do the same kind of test, you should read from HDFS and add the file WHILE the program runs.
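For completeness, a minimal Scala sketch of the corrected call order (it assumes a StreamingContext named streamingContext and the same dataDirectory as in the question):

val stream = streamingContext.textFileStream(dataDirectory)
stream.print()
streamingContext.start()            // start the streaming computation first
streamingContext.awaitTermination() // then block until it terminates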