How can I register classes to Kryo Serializer in Apache Spark?

I am using Spark 1.6.1 and Python. How can I enable Kryo serialization when working with PySpark?
I have the following settings in the spark-default.conf file:
spark.eventLog.enabled true
spark.eventLog.dir //local_drive/sparkLogs
spark.default.parallelism 8
spark.locality.wait.node 5s
spark.executor.extraJavaOptions -XX:+UseCompressedOops
spark.serializer org.apache.spark.serializer.KryoSerializer
spark.kryo.classesToRegister Timing, Join, Select, Predicate, Timeliness, Project, Query2, ScanSelect
spark.shuffle.compress true
And the following error:
py4j.protocol.Py4JJavaError: An error occurred while calling o35.load.
: org.apache.spark.SparkException: Failed to register classes with Kryo
at org.apache.spark.serializer.KryoSerializer.newKryo(KryoSerializer.scala:128)
at org.apache.spark.serializer.KryoSerializerInstance.borrowKryo(KryoSerializer.scala:273)
at org.apache.spark.serializer.KryoSerializerInstance.<init>(KryoSerializer.scala:258)
at org.apache.spark.serializer.KryoSerializer.newInstance(KryoSerializer.scala:174)
Caused by: java.lang.ClassNotFoundException: Timing
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:274)
at org.apache.spark.serializer.KryoSerializer$$anonfun$newKryo$4.apply(KryoSerializer.scala:120)
at org.apache.spark.serializer.KryoSerializer$$anonfun$newKryo$4.apply(KryoSerializer.scala:120)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at org.apache.spark.serializer.KryoSerializer.newKryo(KryoSerializer.scala:120)
The main script (Query2.py) contains:
from Timing import Timing
from Predicate import Predicate
from Join import Join
from ScanSelect import ScanSelect
from Select import Select
from Timeliness import Timeliness
from Project import Project
conf = SparkConf().setMaster(master).setAppName(sys.argv[1]).setSparkHome("$SPARK_HOME")
sc = SparkContext(conf=conf)
conf.set("spark.kryo.registrationRequired", "true")
sqlContext = SQLContext(sc)
I know that "Kryo won’t make a major impact on PySpark because it just stores data as byte[] objects, which are fast to serialize even with Java. But it may be worth a try to set spark.serializer and not try to register any classes"(Matei Zaharia, 2014). However, I need to register the classes.
Thanks in advance.

It is not possible. Kryo is a Java (JVM) serialization framework. It cannot be used with Python classes. To serialize Python objects, PySpark uses Python serialization tools, including the standard pickle module and an improved version of cloudpickle. You can find some additional information about PySpark serialization in "Tips for properly using large broadcast variables?".
So while you can enable Kryo serialization when working with PySpark, this won't affect the way Python objects are serialized. It will be used only for the serialization of Java or Scala objects.
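For reference, this is roughly what class registration looks like on the JVM side, where spark.kryo.classesToRegister and spark.kryo.registrationRequired actually take effect. It is only a sketch: MyRecord and MyJoinResult are placeholder Java classes, not the Python classes from the question.
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class KryoRegistrationExample {

    // Placeholder value classes standing in for your own Java/Scala types.
    public static class MyRecord implements java.io.Serializable { long ts; }
    public static class MyJoinResult implements java.io.Serializable { String key; }

    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("kryo-registration-example")
                .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
                // Fail fast if any unregistered class reaches Kryo.
                .set("spark.kryo.registrationRequired", "true");

        // Registration only accepts JVM classes on the driver/executor classpath,
        // which is why the Python classes from the question cannot be listed here.
        conf.registerKryoClasses(new Class<?>[]{ MyRecord.class, MyJoinResult.class });

        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... build RDDs of MyRecord / MyJoinResult here ...
        sc.stop();
    }
}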

Related

How to enable Apache Arrow in PySpark

I am trying to enable Apache Arrow for conversion to Pandas. I am using:
pyspark 2.4.4
pyarrow 0.15.0
pandas 0.25.1
numpy 1.17.2
This is the example code:
import pandas as pd

# 'spark' is an existing SparkSession
spark.conf.set("spark.sql.execution.arrow.enabled", "true")
x = pd.Series([1, 2, 3])
df = spark.createDataFrame(pd.DataFrame(x, columns=["x"]))
I got this warning message
c:\users\administrator\appdata\local\programs\python\python37\lib\site-packages\pyspark\sql\session.py:714: UserWarning: createDataFrame attempted Arrow optimization because 'spark.sql.execution.arrow.enabled' is set to true; however, failed by the reason below:
An error occurred while calling z:org.apache.spark.sql.api.python.PythonSQLUtils.readArrowStreamFromFile.
: java.lang.IllegalArgumentException
at java.nio.ByteBuffer.allocate(ByteBuffer.java:334)
at org.apache.arrow.vector.ipc.message.MessageSerializer.readMessage(MessageSerializer.java:543)
at org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$3.readNextBatch(ArrowConverters.scala:243)
at org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$3.<init>(ArrowConverters.scala:229)
at org.apache.spark.sql.execution.arrow.ArrowConverters$.getBatchesFromStream(ArrowConverters.scala:228)
at org.apache.spark.sql.execution.arrow.ArrowConverters$$anonfun$readArrowStreamFromFile$2.apply(ArrowConverters.scala:216)
at org.apache.spark.sql.execution.arrow.ArrowConverters$$anonfun$readArrowStreamFromFile$2.apply(ArrowConverters.scala:214)
at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2543)
at org.apache.spark.sql.execution.arrow.ArrowConverters$.readArrowStreamFromFile(ArrowConverters.scala:214)
at org.apache.spark.sql.api.python.PythonSQLUtils$.readArrowStreamFromFile(PythonSQLUtils.scala:46)
at org.apache.spark.sql.api.python.PythonSQLUtils.readArrowStreamFromFile(PythonSQLUtils.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Attempting non-optimization as 'spark.sql.execution.arrow.fallback.enabled' is set to true.
warnings.warn(msg)
We made a change in 0.15.0 that makes the default behavior of pyarrow incompatible with older versions of Arrow in Java -- your Spark environment seems to be using an older version.
Your options are
Set the environment variable ARROW_PRE_0_15_IPC_FORMAT=1 from where you are using Python
Downgrade to pyarrow < 0.15.0 for now.
When calling my pandas UDF on my Spark 2.4.4 cluster with pyarrow==0.15, I struggled to successfully set the ARROW_PRE_0_15_IPC_FORMAT=1 flag mentioned above.
I set the flag in (1) the command line via export on the head node, (2) spark-env.sh and yarn-env.sh on all nodes in the cluster, and (3) the pyspark code itself from my script on the head node. None of these actually set the flag inside the UDF, for unknown reasons.
The simplest solution I found was to call this inside the udf:
from pyspark.sql.functions import pandas_udf, PandasUDFType

@pandas_udf("integer", PandasUDFType.SCALAR)
def foo(*args):
    import os
    # Tell pyarrow >= 0.15 to use the legacy IPC format inside the worker.
    os.environ["ARROW_PRE_0_15_IPC_FORMAT"] = "1"
    # ...
Hopefully this saves someone else several hours.

Flink 1.9 SQL Client throws ClassNotFoundException: org.apache.kafka.clients.consumer.ConsumerRecord

I am trying to use the Flink 1.9 SQL Client with Kafka, without success.
After figuring out the required jar files and copying them into the lib directory, I get the following run-time exception when doing SELECT * FROM table-name:
Flink SQL> select * from default_catalog.default_database.member_customer_newsletters ;
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: org.apache.kafka.clients.consumer.ConsumerRecord
Running jar -tf on the jar file, I can see that the class ConsumerRecord is there:
jar -tf ./lib/flink-sql-connector-kafka-0.11_2.12-1.9.0.jar|grep 'ConsumerRecord'
org/apache/flink/kafka011/shaded/org/apache/kafka/clients/consumer/ConsumerRecord.class
So I am not sure why it is throwing a ClassNotFoundException when the class is already in the jar file.
I should add that the runtime is looking for "org.apache.kafka.clients.consumer.ConsumerRecord", but because this jar file is shaded, the fully qualified name of the class is "org.apache.flink.kafka011.shaded.org.apache.kafka.clients.consumer.ConsumerRecord".
But this should be true for any other class in this shaded jar as well!
There are two different classes:
org.apache.kafka.clients.consumer.ConsumerRecord
org.apache.flink.kafka011.shaded.org.apache.kafka.clients.consumer.ConsumerRecord
In flink-sql-connector-kafka-0.11_2.12-1.9.0.jar, you found the class
org.apache.flink.kafka011.shaded.org.apache.kafka.clients.consumer.ConsumerRecord
while Flink is complaining about:
org.apache.kafka.clients.consumer.ConsumerRecord
The shaded class is the one used internally by Flink; it is effectively a copy-paste of the Kafka class relocated into Flink's own namespace.
The plain org.apache.kafka.clients.consumer.ConsumerRecord is a class in kafka-clients-0.11.0.2.jar.
So Flink is right to complain about a missing library.

Using JNDI Connection pool as Datanucleus PersistenceManagerFactory

I am developing a web application using DataNucleus as the DAO layer (mainly for historical reasons). It runs inside Payara Server (a GlassFish 4 fork).
It works fine, but now I'd like to use a JNDI db connection pool to obtain the PersistenceManagerFactory for DataNucleus.
From the documentation, it seems that the following code would suffice:
pmf = JDOHelper.getPersistenceManagerFactory( "jdbc/HxWmDb", context );
but this way I obtain an error starting the application (DbSession is the class which implements the DAO layer, and the error line is exactly the one above):
Caused by: java.lang.ClassCastException
at com.sun.corba.ee.impl.javax.rmi.PortableRemoteObject.narrow(PortableRemoteObject.java:262)
at javax.rmi.PortableRemoteObject.narrow(PortableRemoteObject.java:150)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:1791)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:1755)
at ejb.DbSession.<init>(DbSession.java:119)
...
Caused by: java.lang.ClassCastException: com.sun.gjc.spi.jdbc40.DataSource40 cannot be cast to org.omg.CORBA.Object
at com.sun.corba.ee.impl.javax.rmi.PortableRemoteObject.narrow(PortableRemoteObject.java:245)
Any suggestion?
Little update, as requested by DN1:
As a first approach, I tried exactly what is described in the link:
Properties properties = new Properties();
properties.setProperty("datanucleus.ConnectionFactoryName","jdbc/HxWmDb");
PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(properties);
And the error is, as already said, that a connection URL is still required:
Caused by: org.datanucleus.exceptions.NucleusException: You haven't specified persistence property 'datanucleus.ConnectionURL'

org.apache.ignite.IgniteCheckedException: Failed to read class name from file

I have a 3-node Apache Ignite cluster. I created a cache with Integer as the key and a 'Subscriber' POJO as the value. When I connect to the cluster from inside a Java program and access the cache, I get the above-mentioned exception. I have the 'peerClassLoading' property set to false, and I have deployed the 'Subscriber' POJO binaries on all the nodes. Please find the complete stack trace below. What am I missing here? Why is it looking for some file inside my IGNITE_HOME when I am starting the client inside my Java program with Ignition.start()?
class org.apache.ignite.IgniteCheckedException: Failed to read class name from file [id=-1219769240, file=/home/benakaraj/Downloads/apache-ignite-fabric-1.5.0.final-bin/work/marshaller/-1219769240.classname]
at org.apache.ignite.internal.MarshallerContextImpl.className(MarshallerContextImpl.java:158)
at org.apache.ignite.internal.MarshallerContextAdapter.getClass(MarshallerContextAdapter.java:174)
at org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:483)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1443)
at org.apache.ignite.internal.binary.BinaryObjectImpl.deserializeValue(BinaryObjectImpl.java:537)
at org.apache.ignite.internal.binary.BinaryObjectImpl.value(BinaryObjectImpl.java:117)
at org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinary(CacheObjectContext.java:280)
at org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinaryIfNeeded(CacheObjectContext.java:145)
at org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinaryIfNeeded(CacheObjectContext.java:132)
at org.apache.ignite.internal.processors.cache.GridCacheContext.unwrapBinaryIfNeeded(GridCacheContext.java:1748)
at org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.setResult(GridPartitionedSingleGetFuture.java:598)
at org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.onResult(GridPartitionedSingleGetFuture.java:454)
at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.processNearSingleGetResponse(GridDhtCacheAdapter.java:153)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$1200(GridDhtAtomicCache.java:128)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$11.apply(GridDhtAtomicCache.java:295)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$11.apply(GridDhtAtomicCache.java:293)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:582)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:280)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:204)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$000(GridCacheIoManager.java:80)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:163)
at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:821)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$1600(GridIoManager.java:103)
at org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:784)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.FileNotFoundException: /home/benakaraj/Downloads/apache-ignite-fabric-1.5.0.final-bin/work/marshaller/-1219769240.classname (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileReader.<init>(FileReader.java:72)
at org.apache.ignite.internal.MarshallerContextImpl.className(MarshallerContextImpl.java:154)
... 26 more
Looks like the cache tries to deserialize the value after retrieving it from cache, but you don't have a class for it on the node where IgniteCache.get() was called. You can either deploy the class, or use IgniteCache.withKeepBinary() to avoid deserialization: https://apacheignite.readme.io/docs/binary-marshaller#binaryobject-cache-api
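For illustration, a minimal sketch of the withKeepBinary() approach; the configuration path, cache name, and field name are placeholders:
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;

public class SubscriberReader {
    public static void main(String[] args) {
        // Start a node using your existing client configuration (placeholder path).
        try (Ignite ignite = Ignition.start("client-config.xml")) {
            // Ask for values as BinaryObject so the Subscriber class does not
            // have to be present on this node's classpath.
            IgniteCache<Integer, BinaryObject> cache =
                    ignite.<Integer, BinaryObject>cache("subscriberCache").withKeepBinary();

            BinaryObject subscriber = cache.get(1);
            if (subscriber != null) {
                // Read individual fields without deserializing the whole POJO.
                String name = subscriber.field("name");
                System.out.println("Subscriber name: " + name);
            }
        }
    }
}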
The issue turned out to be pretty simple. Ignite looks for user-defined POJOs among the classes loaded by the default class loader; if it does not find the class there, it looks in the marshalled-classes folder (IGNITE_HOME/work/marshaller/). In my case, my value POJO was under the test resources, so the default class loader was not loading the class, causing Ignite to look in the marshaller folder.

HSQLDB throws Assert failed exception and file input/output error on db.script.new file during Checkpoint

Our application is a Java-based desktop application that downloads binary data from a source, parses it, and adds it to an HSQLDB database. When downloading from the sources individually, the application works perfectly. But when doing the same from multiple sources simultaneously, with each source in its own thread, I get the following error:
java.sql.SQLException: Assert failed: java.lang.ArrayIndexOutOfBoundsException: 23 in statement [CHECKPOINT]
at org.hsqldb.jdbc.Util.throwError(Unknown Source)
at org.hsqldb.jdbc.jdbcPreparedStatement.execute(Unknown Source)
or sometimes,
java.sql.SQLException: Assert failed: java.lang.ArrayIndexOutOfBoundsException: 1016 in statement [CHECKPOINT]
followed by
java.sql.SQLException: File input/output error: C:\ProgramData\test\data\database\db.script.new in statement [CHECKPOINT]
at org.hsqldb.jdbc.Util.throwError(Unknown Source)
at org.hsqldb.jdbc.jdbcPreparedStatement.execute(Unknown Source)
Java: 1.8;
HSQL version: 1.8.10
We are not in a position to migrate HSQLDB to the latest version, for various reasons.
HSQL Properties:
hsqldb.script_format=0
runtime.gc_interval=0
sql.enforce_strict_size=false
hsqldb.cache_size_scale=8
readonly=false
hsqldb.nio_data_file=true
hsqldb.cache_scale=14
version=1.8.0
hsqldb.default_table_type=memory
hsqldb.cache_file_scale=1
hsqldb.log_size=200
modified=yes
hsqldb.cache_version=1.7.0
hsqldb.original_version=1.8.0
hsqldb.compatible_version=1.8.0
Any help or hint will be appreciated.
This is a 7-year-old version which is not ideal for multi-threaded usage.
The simple solution is to perform the database updates with a single thread. You can retrofit your multi-threaded application with a synchronized block over a singleton object around the code that performs the database update.
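A minimal sketch of that retrofit, assuming each download thread funnels its JDBC writes through a single shared lock; the class, table, and column names are placeholders:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public final class DbWriteGate {

    // Single application-wide lock: all threads must funnel their writes through it.
    private static final Object DB_LOCK = new Object();

    private DbWriteGate() { }

    // Called from each download thread after it has parsed its binary data.
    public static void insertParsedData(Connection conn, int sourceId, byte[] payload)
            throws SQLException {
        synchronized (DB_LOCK) {
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO parsed_data (source_id, payload) VALUES (?, ?)")) {
                ps.setInt(1, sourceId);
                ps.setBytes(2, payload);
                ps.executeUpdate();
            }
        }
    }
}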