object circe is not a member of package io - circe

I am trying to create a predef.sc file for the Ammonite REPL. This is what I have written:
val fs2Version = "2.2.2"
val circeVersion = "0.13.0"
// fs2
interp.load.ivy("co.fs2" %% "fs2-core" % fs2Version)
import scala.collection.immutable.{Stream => _}
import scala.{Stream => _}
import _root_.fs2._
// circe
interp.load.ivy("io.circe" %% "circe-core" % circeVersion)
interp.load.ivy("io.circe" %% "circe-parser" % circeVersion)
interp.load.ivy("io.circe" %% "circe-generic" % circeVersion)
import _root_.io.circe._, _root_.io.circe.parser._, _root_.io.circe.syntax._, _root_.io.circe.optics.JsonPath._, _root_.io.circe.generic.auto._
But it gives me an error saying:
object circe is not a member of package io
I think it's because fs2 also has a subpackage called "io".

If you are using IntelliJ, check out this question: How to force IntelliJ IDEA to reload dependencies from build.sbt after they changed? It worked for me when I was getting the same error.
Or if you are using VS Code, see https://scalameta.org/metals/docs/editors/vscode/: you basically have to press Ctrl + Shift + P and type "import build", but you will need the Scala (Metals) extension to do that.

Works for me with the following predef.sc file:
// cats
import $ivy.`org.typelevel::cats-core:2.1.1`, cats._, cats.implicits._
import $ivy.`org.typelevel::cats-effect:2.1.1`
// fs2
import $ivy.`co.fs2::fs2-core:2.2.2`
// circe
import $ivy.`io.circe::circe-core:0.13.0`
import $ivy.`io.circe::circe-parser:0.13.0`
import $ivy.`io.circe::circe-generic:0.13.0`
import $ivy.`io.circe::circe-optics:0.13.0`
// standard library
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import scala.concurrent.Future
import scala.util.{Failure, Success}
import scala.concurrent.Await
// hide the standard library Stream so that Stream refers to fs2.Stream
import scala.collection.immutable.{Stream => _}
import scala.{Stream => _}
import _root_.fs2._
import _root_.io.circe._, _root_.io.circe.parser._, _root_.io.circe.syntax._, _root_.io.circe.optics.JsonPath._, _root_.io.circe.generic.auto._
and then add all your own imports after this.
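With that predef in place, a quick smoke test in the REPL could look like the following (a minimal sketch; Item is a hypothetical case class used only for illustration):
// Stream now refers to fs2.Stream, because scala.Stream was shadowed above
val doubled = Stream(1, 2, 3).map(_ * 2).toList  // List(2, 4, 6)
// circe: parse a JSON string, then decode it via circe-generic's auto derivation
case class Item(id: Int, name: String)
val decoded = parse("""{"id": 1, "name": "fs2"}""").flatMap(_.as[Item])  // Either[io.circe.Error, Item]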

Related

error: value show is not a member of Unit CaseFileDFTemp.show()

I ran the below code in a Databricks Scala notebook but I am getting an error.
Library added: azure-cosmosdb-spark_2.4.0_2.11-1.3.4-uber
Code:
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}
import spark.implicits._
import org.apache.spark.sql.functions._
import org.apache.spark.sql.Column
import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType,LongType,FloatType,DoubleType, TimestampType}
import org.apache.spark.sql.cassandra._
//datastax Spark connector
import com.datastax.spark.connector._
import com.datastax.spark.connector.cql.CassandraConnector
import com.datastax.driver.core.{ConsistencyLevel, DataType}
import com.datastax.spark.connector.writer.WriteConf
//Azure Cosmos DB library for multiple retry
import com.microsoft.azure.cosmosdb.cassandra
import sqlContext.implicits._
spark.conf.set("x","x")
spark.conf.set("x","x")
spark.conf.set("x","x")
spark.conf.set("x","x")
val CaseFileDFTemp = sqlContext
  .read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("table" -> "case_files", "keyspace" -> "shared"))
  .load().show()
CaseFileDFTemp.show()
Error:
error: value show is not a member of Unit
CaseFileDFTemp.show()
Can you please try creating the SQL context first and then calling the show function:
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._
Please let me know if it helps.
If you write
val CaseFileDFTemp = sqlContext
  .read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("table" -> "case_files", "keyspace" -> "shared"))
  .load().show()
Then CaseFileDFTemp will have type Unit, because show() prints the DataFrame and returns Unit, effectively "consuming" it. So remove show() from the assignment and it will work.
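A minimal corrected version, reusing the question's own options:
val CaseFileDFTemp = sqlContext
  .read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("table" -> "case_files", "keyspace" -> "shared"))
  .load()              // load() returns the DataFrame
CaseFileDFTemp.show()  // show() prints rows and returns Unit, so call it separately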

Facing issue while using SparkUDF with multiple arguments

I am trying to encrypt data using SHA-256 by passing the algorithm as an argument to a Spark UDF, but I am getting the error below. Please find the program snippet and error details below.
Code Snippet:
package com.sample
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import java.security.MessageDigest
import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.UserDefinedFunction
import javax.xml.bind.DatatypeConverter;
import org.apache.spark.sql.Column
object Customer {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Customer-data").setMaster("local[2]").set("spark.executor.memory", "1g")
    val sc = new SparkContext(conf)
    val spark = SparkSession.builder().config(sc.getConf).getOrCreate()
    //val hash_algm=sc.getConf.get("halgm")
    val hash_algm = "SHA-256"
    val df = spark.read.format("csv").option("header", "true").load("file:///home/tcs/Documents/KiranDocs/Data_files/sample_data")
    spark.udf.register("encriptedVal1", encriptedVal)
    //calling encription UDF function
    //val resDF1 = df.withColumn(("ssn_number"), encriptedVal(df("customer_id"))).show()
    val resDF2 = df.withColumn(("ssn_number"), encriptedVal(array("customer_id", hash_algm))).show()
    println("data set" + resDF2)
    sc.stop()
  }

  def encriptedVal = udf((s: String, s1: String) => {
    val digest = MessageDigest.getInstance(s1)
    val hash = digest.digest(s.getBytes("UTF-8"))
    DatatypeConverter.printHexBinary(hash)
  })
}
Error details are below:
2019-01-21 19:42:48 INFO SparkContext:54 - Invoking stop() from shutdown hook
Exception in thread "main" java.lang.ClassCastException:
com.sample.Customer$$anonfun$encriptedVal$1 cannot be cast to scala.Function1
  at org.apache.spark.sql.catalyst.expressions.ScalaUDF.<init>(ScalaUDF.scala:104)
  at org.apache.spark.sql.expressions.UserDefinedFunction.apply(UserDefinedFunction.scala:85)
  at com.sample.Customer$.main(Customer.scala:26)
  at com.sample.Customer.main(Customer.scala)
The problem here is how you call the defined UDF. You should call it like the following:
val resDF1 = df.withColumn("ssn_number", encriptedVal(df.col("customer_id"), lit(hash_algm)))
because it accepts two Column objects (and both columns must contain String values, as defined in your UDF).
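For context, the original call wrapped both arguments in array(...), which builds a single array Column and treats the string hash_algm as a column name; the two-argument UDF then receives only one argument, which is why the anonymous function cannot be cast to scala.Function1. A before/after sketch reusing the question's names:
// broken: array(...) passes ONE array Column (and reads "SHA-256" as a column name)
// to a UDF that expects TWO String arguments
val broken = df.withColumn("ssn_number", encriptedVal(array("customer_id", hash_algm)))
// fixed: one Column per UDF parameter; lit(...) lifts the constant algorithm name into a Column
val fixed = df.withColumn("ssn_number", encriptedVal(df.col("customer_id"), lit(hash_algm)))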

sparkSql no such method error

I'm new to Spark SQL, and I'm trying to run the examples provided by the Spark documentation, but I get a "no such method" error (screenshots of the error and of my program omitted).
What should I do?
Note: I'm using IntelliJ IDEA to edit my program.
All of the code:
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{Row, SQLContext, SparkSession}
import org.apache.spark.sql.types._
object SqlTest1 {
  case class Person(name: String, age: Long)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder()
      .getOrCreate()
    import spark.implicits._
    runBasicDataFrameExample(spark)
  }

  private def runBasicDataFrameExample(spark: SparkSession) = {
    val df = spark.read.json("resorces/people.json")
    df.show()
  }
}
val conf = new SparkConf(true).setAppName("appName")
val spark = SparkSession.builder().config(conf).getOrCreate()
val df = spark.read.option("timestampFormat", "yyyy/MM/dd HH:mm:ss ZZ").json(path)
It may also be that the Scala version does not match. You should check that your Scala version and your Spark version are compatible.
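In my experience a NoSuchMethodError often means the Scala binary version in the build does not match the one the Spark artifacts were compiled for. A hypothetical build.sbt that keeps them aligned (version numbers here are illustrative only):
// Spark 2.4.x artifacts are published for Scala 2.11 and 2.12;
// the %% operator appends the Scala binary version automatically,
// so these dependencies stay consistent with scalaVersion
scalaVersion := "2.11.12"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.4.5",
  "org.apache.spark" %% "spark-sql"  % "2.4.5"
)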

Why ‘on_<propname>’ in kivy is not called?

In the Kivy documentation (1.9.0-dev), it says:
Observe using 'on_<propname>': if you created the class yourself, you can use the 'on_<propname>' callback:
class MyClass(EventDispatcher):
    a = NumericProperty(1)

    def on_a(self, instance, value):
        print('My property a changed to', value)
My code is:
class MyClass(EventDispatcher):
    a = StringProperty('')

    def __init__(self, **kwargs):
        ...
        self.bind(a=self.on_a)  # <--- if I remove this

    def on_a(self, instance, value):
        print('My property a changed to', value)
This works. But if I remove self.bind(a=self.on_a), then the on_a function is not called. I thought that if I named the function on_<propname> I would not need to call bind(). Am I missing something?
=================================================
P.S. I simplified my code below. It is a complete, runnable example.
course_view.py:
from kivy.app import App
from kivy.properties import StringProperty
from kivy.event import EventDispatcher
from kivy.uix.listview import ListView, ListItemButton
from kivy.adapters.dictadapter import DictAdapter
from kivy.uix.boxlayout import BoxLayout
from kivy.factory import Factory
from kivy.uix.screenmanager import ScreenManager, Screen, FadeTransition
from kivy.properties import ObjectProperty
from kivy.properties import StringProperty
class ChangeTest(App):
    pass

class StartScreen(Screen):
    pass

    def load_view(self):
        self.course_view_object = CourseCnDetailListView(view_box=self.ids.view_box)
        self.clear_widgets()
        self.add_widget(self.course_view_object.master_item_list)

class CourseCnDetailListView(EventDispatcher):
    course_code = StringProperty('course_code_str')

    def __init__(self, **kwargs):
        self.view_box = kwargs.get('view_box', None)
        self.course_data = {"1": {"course_code": "it123"},
                            "2": {"course_code": "it456"}}
        list_item_args_converter = \
            lambda row_index, rec: {'text': rec["course_code"],
                                    'size_hint_y': None,
                                    'height': 25}
        dict_adapter = DictAdapter(sorted_keys=sorted(self.course_data.keys()),
                                   data=self.course_data,
                                   args_converter=list_item_args_converter,
                                   selection_mode='single',
                                   allow_empty_selection=False,
                                   cls=ListItemButton)
        self.master_item_list = ListView(adapter=dict_adapter,
                                         size_hint=(.3, 1.0))
        dict_adapter.bind(on_selection_change=self.course_changed)
        # self.bind(course_code=self.on_course_code)  <-- un-comment this will work

    def on_course_code(self, instance, value):
        print "on_course_code: update string value:", value

    def redraw(self, *args):
        pass

    def course_changed(self, list_adapter, *args):
        if len(list_adapter.selection) != 0:
            selection = list_adapter.selection[0]
            if type(selection) is str:
                self.course_code = selection
            else:
                self.course_code = selection.text
            self.redraw()

ChangeTest().run()
Factory.register('StartScreen', cls=StartScreen)
ChangeTest.kv:
#: kivy 1.9
#: import ScreenManager kivy.uix.screenmanager.ScreenManager
#: import Screen kivy.uix.screenmanager.ScreenManager

ScreenManager:
    id: screen_manager

    StartScreen:
        id: start_screen
        name: 'StartScreen'
        manager: screen_manager

<StartScreen>:
    BoxLayout:
        id: view_box

        Button:
            text: "load view"
            on_release: root.load_view()
Thank you zeeMonkeez. Yes, that is exactly the problem. After I added the call super(CourseCnDetailListView, self).__init__(**kwargs) to the constructor, it works perfectly. It was accidentally removed when I changed the structure. Thank you very much.
It is also good to know that it is EventDispatcher's default constructor that makes on_<propname> work.

tornado's AsyncHttpTestCase is not available outside self.fetch

I have an AsyncHTTPTestCase and I want to access it from methods besides self.fetch. Specifically, I have a SockJS handler that I want a sockjs-client to attach to for my tests.
I've discovered that even though self.get_url('/foo') returns a valid URL, that URL does not respond to anything except self.fetch(). What gives?
Is this just not possible with AsyncHTTPTestCase? What is the best pattern for doing this?
Here's tests.py
import urllib2
from tornado.httpclient import AsyncHTTPClient
import tornado.testing
from tornado.testing import AsyncTestCase, AsyncHTTPTestCase
from app import DebugApp
class TestDebug(AsyncHTTPTestCase):
    def get_app(self):
        return DebugApp()

    def test_foo(self):
        response = self.fetch('/foo')
        print response.body
        assert response.body == 'derp'

    def test_foo_from_urllib(self):
        response = urllib2.urlopen(self.get_url('/foo'), None, 2)
        print response
        assert response.body == 'derp'

    def runTest(self):
        pass
and app.py
import tornado.httpserver
import tornado.ioloop
import tornado.web
from tornado.options import options
class FooHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("derp")

url_patterns = [
    (r"/foo", FooHandler),
]

class DebugApp(tornado.web.Application):
    def __init__(self):
        tornado.web.Application.__init__(self, url_patterns, debug=True, xsrf_cookies=False)

def main():
    app = DebugApp()
    http_server = tornado.httpserver.HTTPServer(app)
    http_server.listen(6006)
    tornado.ioloop.IOLoop.instance().start()

if __name__ == "__main__":
    main()
and runtest.py
#!/usr/bin/env python
import unittest
from os import path
import sys
import tornado.testing
PROJECT_PATH = path.dirname(path.abspath(__file__))
sys.path.append(PROJECT_PATH)
def all():
    suite = unittest.defaultTestLoader.discover('./', 'tests.py', path.dirname(path.abspath(__file__)))
    print suite
    return suite

if __name__ == '__main__':
    # Print a nice message so that we can differentiate between test runs
    print ''
    print '%s %s' % ('Debug app', '0.1.0')
    print '\033[92m' + '-------------- Running Test Suite --------------' + '\033[0m'
    print ''
    tornado.testing.main()
The problem is that the IOLoop is not running when you call urlopen (and it cannot be, because urlopen is a blocking function). You must run the IOLoop (with @gen_test, self.wait(), or indirectly via methods like self.fetch()) for the server to respond, and you can only interact with it via non-blocking functions (or, if you must use blocking functions like urlopen, run them in a separate thread).