How do I create a singleton instance of HiveContext?
SQLContext has a getOrCreate method for obtaining a singleton instance of SQLContext, but I don't see the same kind of method on HiveContext.
Update: We are using Spark 1.6.1, so we cannot use the 2.0 SparkSession.
HiveContext and SQLContext were merged into SparkSession in Spark 2.0, which deprecates HiveContext.
Refer to:
https://issues.apache.org/jira/browse/SPARK-13737
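For Spark 1.6 itself, you can reproduce what SQLContext.getOrCreate does with a small holder object. This is only a sketch: the HiveContextSingleton object and its getOrCreate method are names I made up for illustration, not Spark API.

```scala
import org.apache.spark.SparkContext
import org.apache.spark.sql.hive.HiveContext

// Hypothetical holder mirroring what SQLContext.getOrCreate does,
// but for HiveContext (Spark 1.6.x).
object HiveContextSingleton {
  @transient private var instance: HiveContext = _

  def getOrCreate(sc: SparkContext): HiveContext = synchronized {
    if (instance == null) {
      instance = new HiveContext(sc)
    }
    instance
  }
}
```

Call HiveContextSingleton.getOrCreate(sc) wherever you would have called SQLContext.getOrCreate(sc); since HiveContext extends SQLContext, the instance can serve in both roles.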
Related
I have a non-Android app with many similar shard objects, and I want all objects inside each shard (DB client, DAOs...) to be singletons.
For this purpose, I have created a ShardSingleton annotation:
@Scope
@Retention(AnnotationRetention.RUNTIME)
annotation class ShardSingleton
and I am creating each shard object inside its own scope:
var shard1: Shard = KTP.openScopes("app", "shard1")
.supportScopeAnnotation(ShardSingleton::class.java)
.getInstance(Shard::class.java)
For my DAO to actually be a singleton inside its shard, I have to annotate it with both @ShardSingleton and @Singleton:
@Singleton // the FooDAO is not a singleton without this annotation
@ShardSingleton
@InjectConstructor
class FooDAO(val dbClient: DBClient)
At first sight (and probably out of ignorance), I thought @ShardSingleton alone would have been enough.
Is this expected behavior?
Here is a gist demonstrating the behavior:
https://gist.github.com/bfreuden/a866b21c5a6342a3ce1ed26aa636f9f6
I want a map between Int and any class. In Java it would be Map<Class<?>, Integer>. What's the Kotlin equivalent of that?
KClass is Kotlin's equivalent to java.lang.Class.
An instance of KClass can be obtained with ::class on either a type or a value (e.g. String::class, 3.8::class).
If you need a Java Class instance from a KClass, you can use the java extension property:
val kotlinClass: KClass<String> = String::class
val javaClass: Class<String> = String::class.java
Keep in mind that if you want to use Kotlin reflection's full feature set, you will need kotlin-reflect on the classpath.
So in your case, the equivalent would be Map<KClass<*>, Int>.
The equivalent declaration would be Map<Class<*>, Int>.
You're looking for KClass. You need to add the Kotlin reflection library in order to use it.
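To make the declaration concrete, here is a minimal example; the map contents are illustrative, not from the question:

```kotlin
import kotlin.reflect.KClass

fun main() {
    // Kotlin equivalent of Java's Map<Class<?>, Integer>.
    val codes: Map<KClass<*>, Int> = mapOf(
        String::class to 1,
        Int::class to 2
    )
    println(codes[String::class]) // prints 1
}
```

Basic use of KClass as a map key like this works without kotlin-reflect; only the richer reflection APIs require it.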
I want to add a new operation to Spark SQL. I have already used user-defined functions of the form
dataframe.filter(udf("$a", "$b"))
I need to add a similar function that operates on two DataFrames, for example:
dataframe1.udf(dataframe2)
To be more precise, the function is an optimized join of two DataFrames.
The actual code is
CustomJoin(dataframe1,dataframe2)
Is this possible using user-defined functions? Are there other solutions or examples?
You can use an implicit conversion for this:
class AugmentedDataFrame(val df: DataFrame) {
  def CustomJoin(df2: DataFrame): DataFrame = { /* ... */ }
}
object DataFrameImplicits {
  implicit def dfToAugmentedDataFrame(df: DataFrame): AugmentedDataFrame = new AugmentedDataFrame(df)
}
and then:
import DataFrameImplicits._
df.CustomJoin(df2)
To learn more about using implicits to add custom methods to an existing class, see:
Add Your Own Methods to the String Class
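As a side note, on Scala 2.10 and later the wrapper class plus implicit def pair can be collapsed into a single implicit value class. This is a sketch reusing the same names as above, with the join body elided:

```scala
import org.apache.spark.sql.DataFrame

object DataFrameImplicits {
  // One declaration replaces the wrapper class and the implicit def;
  // extending AnyVal avoids allocating the wrapper at runtime.
  implicit class AugmentedDataFrame(val df: DataFrame) extends AnyVal {
    def CustomJoin(df2: DataFrame): DataFrame = {
      ??? // optimized join logic goes here
    }
  }
}
```

Importing DataFrameImplicits._ then enables df.CustomJoin(df2) exactly as before.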
I am using CacheConfiguration with setIndexedTypes(Long.class, StructType.class), where StructType is a Spark class, and I am pushing values with igniteRDD.saveValues(df.rdd()). But when I try to query that cache, I get "Use setIndexedTypes or setTypeMetadata methods on CacheConfiguration to enable". I am aware of annotating POJO fields with @QuerySqlField, but the value here is a Spark object; how can I do this?
This doesn't work because the StructType class doesn't know anything about Ignite SQL. You should create your own key and value classes and convert each StructType instance to a key-value pair during loading (use the savePairs method). After that you will be able to configure SQL as described here: https://apacheignite.readme.io/docs/sql-queries
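To make that concrete, here is a sketch of what the value class and loading code might look like. The Person class, its fields, and the column positions are assumptions for illustration, not taken from the question:

```scala
import org.apache.ignite.cache.query.annotations.QuerySqlField

import scala.annotation.meta.field

// Hypothetical value class; annotate the fields you want to query.
// In Scala, QuerySqlField must be redirected to the underlying JVM
// field with the @field meta-annotation.
case class Person(
  @(QuerySqlField @field)(index = true) name: String,
  @(QuerySqlField @field) age: Int
) extends Serializable

// Configure the cache with this pair instead of StructType:
//   cacheCfg.setIndexedTypes(classOf[Long], classOf[Person])
//
// Then convert each Row to a key-value pair and load with savePairs:
//   val pairs = df.rdd.map(row =>
//     (row.getLong(0), Person(row.getString(1), row.getInt(2))))
//   igniteRDD.savePairs(pairs)
```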
Why does this line of unit test code work? groovy.sql.Sql doesn't have a no-argument constructor.
Sql.metaClass.constructor = { dataSource -> return new Sql(); }
That line appears among some others in a Grails app that mocks out a Sql object's constructor and one of its methods. It works great.
Looking at the API for the Sql object, I do not see a no argument constructor: http://groovy.codehaus.org/api/groovy/sql/Sql.html
This style of overriding the constructor using Sql.metaClass.constructor is something I found at:
http://manuel-palacio.blogspot.com/2010/07/groovy-tip-metaprogramming-1.html
Thanks!
groovy.sql.Sql has no public no-args constructor, but as can be seen in the source, it does have a private no-args constructor (I guess in order to support the syntax new Sql(connection: connection)?).
I'm kind of surprised, though, that that technique for stubbing doesn't generate an exception, e.g., when running sql.execute or the like.