I'm playing with the ScalikeJDBC library. I want to retrieve data from a PostgreSQL database. The error I get is quite strange to me. Even if I configure the connection pool manually:
val poolSettings = new ConnectionPoolSettings(initialSize = 100, maxSize = 100)
ConnectionPool.singleton("jdbc:postgresql://localhost:5432/test", "user", "pass", poolSettings)
I still see the error. Here is my DAO:
class CustomerDAO {
  case class Customer(id: Long, firstname: String, lastname: String)
  object Customer extends SQLSyntaxSupport[Customer]

  val c = Customer.syntax("c")

  def findById(id: Long)(implicit session: DBSession = Customer.autoSession): Option[Customer] =
    withSQL {
      select.from(Customer as c).where.eq(c.id, id)
    }.map(rs =>
      Customer(
        rs.long("id"),
        rs.string("firstname"),
        rs.string("lastname")
      )
    ).single.apply()
}
The App:
object JdbcTest extends App {
  val dao = new CustomerDAO
  val res: Option[dao.Customer] = dao.findById(2)
}
My application.conf file:
# PostgreSQL
db.default.driver = "org.postgresql.Driver"
db.default.url = "jdbc:postgresql://localhost:5432/test"
db.default.user = "user"
db.default.password = "pass"
# Connection Pool settings
db.default.poolInitialSize = 5
db.default.poolMaxSize = 7
db.default.poolConnectionTimeoutMillis = 1000
The error:
Exception in thread "main" java.lang.IllegalStateException: Connection pool is not yet initialized.(name:'default)
at scalikejdbc.ConnectionPool$$anonfun$get$1.apply(ConnectionPool.scala:57)
at scalikejdbc.ConnectionPool$$anonfun$get$1.apply(ConnectionPool.scala:55)
at scala.Option.getOrElse(Option.scala:120)
at scalikejdbc.ConnectionPool$.get(ConnectionPool.scala:55)
at scalikejdbc.ConnectionPool$.apply(ConnectionPool.scala:46)
at scalikejdbc.NamedDB.connectionPool(NamedDB.scala:20)
at scalikejdbc.NamedDB.db$lzycompute(NamedDB.scala:32)
What did I miss?
To load application.conf, scalikejdbc-config's DBs.setupAll() should be called in advance.
http://scalikejdbc.org/documentation/configuration.html#scalikejdbc-config
https://github.com/scalikejdbc/hello-scalikejdbc/blob/9d21ec7ddacc76977a7d41aa33c800d89fedc7b6/test/settings/DBSettings.scala#L3-L22
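For completeness, a minimal sketch of the fix applied to the App above, assuming the scalikejdbc-config artifact is on the classpath:

import scalikejdbc.config._

object JdbcTest extends App {
  DBs.setupAll() // reads the db.default.* settings from application.conf
  val dao = new CustomerDAO
  val res: Option[dao.Customer] = dao.findById(2)
  DBs.closeAll() // shut the pools down when done
}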
In my case, I had omitted play.modules.enabled += "scalikejdbc.PlayModule" in conf/application.conf while using ScalikeJDBC's Play support...
I am using Ignite 2.7 and I get the exception below while querying the Ignite cache:
19/10/30 05:33:14 ERROR yarn.ApplicationMaster: User class threw exception: javax.cache.CacheException: Failed to find SQL table for type: CanonicalXXXXX
javax.cache.CacheException: Failed to find SQL table for type: CanonicalXXXXX
This is my code to start Ignite:
def getIgnite(): Ignite = {
  val clusterName = clusterProperties.getProperty("XXXXXXXX")
  Ignition.setClientMode(true)
  try {
    val ignite = Ignition.ignite(clusterName)
    if (clusterName == ignite.name()) {
      logInfo("#### Found and returning client for cluster: " + clusterName)
      return ignite
    }
  } catch {
    case e: Exception => e.printStackTrace()
  }
  val configFilePath = clusterProperties.getProperty("XXXXXXXX")
  logInfo("#### configFilePath: " + configFilePath)
  val configInputStream = FileSystem.get(new Configuration()).open(new Path(configFilePath))
  logInfo("#### Starting Ignite Client")
  Ignition.start(configInputStream)
}
I load data into the cache; the data loads successfully and the cache size is printed as below:
INFO dataloader.IgniteDataLoader: #### OFFICIAL_NAME_CACHE SIZE => 51016471
The following code creates and accesses the cache:
def createXXXXXXCache: IgniteCache[String, CanonicalXXXXX] = {
  val orgCacheCfg: CacheConfiguration[String, CanonicalXXXXX] =
    new CacheConfiguration[String, CanonicalXXXXX](OFFICIAL_NAME_CACHE)
  orgCacheCfg.setIndexedTypes(classOf[String], classOf[CanonicalXXXXX])
  orgCacheCfg.setCacheMode(CacheMode.PARTITIONED)
  orgCacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC)
  orgCacheCfg.setBackups(3)
  getIgnite().getOrCreateCache(orgCacheCfg)
}
But when I try to query the cache, I get the exception. The following code queries the cache:
val companyName = "SUFFOLK CONST CO"
val queryString = "orgName = '" + companyName + "'"
val companyNameQuery = new SqlQuery[String, CanonicalXXXXX](classOf[CanonicalXXXXX], queryString)
val queryCursor = igniteXXXXXX.createXXXXXXCache.query(companyNameQuery)
val queryResults = Future {
  queryCursor.getAll()
}
try {
  val companyResults = extractXXXXXVO(queryResults)
  logInfo(s"Ignite Results returned - $companyResults")
} catch {
  case exe: Exception =>
    logInfo(s"OfficialXXXX exact search timed out")
    queryCursor.close()
    Vector.empty
}
The exception is thrown by the query() method call on the line below:
val queryCursor = igniteXXXXXX.createXXXXXXCache.query(companyNameQuery)
19/10/30 05:33:14 INFO dataloader.IgniteServerXXXXXX: Examplelogger1 - 'SqlQuery [type= CanonicalXXXXX, alias=null, sql=orgName = 'SUFFOLK CONSTRUCTION CO INC', args=null, timeout=0, distributedJoins=false, replicatedOnly=false]'
19/10/30 05:33:14 INFO dataloader.igniteXXXXXX: #### Found and returning client for cluster: JalaDalaXXXXX
19/10/30 05:33:14 ERROR yarn.ApplicationMaster: User class threw exception: javax.cache.CacheException: Failed to find SQL table for type: CanonicalXXXXX
javax.cache.CacheException: Failed to find SQL table for type: CanonicalXXXXX
at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:697)
at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:376)
at xx.xx.dataloader.IgniteServerXXXXXX$.loadOAData(IgniteServerXXXXXX.scala:70)
at xx.xx.StartXXXXXXX$.startIgniteAndDataloading(StartXXXXXXX.scala:49)
at xx.xx.StartXXXXXXX$delayedInit$body.apply(StartXXXXXXX.scala:13)
at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:71)
at scala.App$$anonfun$main$1.apply(App.scala:71)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)
at scala.App$class.main(App.scala:71)
at xx.xx.StartXXXXXXX$.main(StartXXXXXXX.scala:12)
at xx.xx.StartXXXXXXX.main(StartXXXXXXX.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:567)
Caused by: class org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to find SQL table for type: CanonicalNameOAVO
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryDistributedSql(IgniteH2Indexing.java:1843)
at org.apache.ignite.internal.processors.query.GridQueryProcessor$7.applyx(GridQueryProcessor.java:2289)
at org.apache.ignite.internal.processors.query.GridQueryProcessor$7.applyx(GridQueryProcessor.java:2287)
at org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2707)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.queryDistributedSql(GridQueryProcessor.java:2286)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.querySql(GridQueryProcessor.java:2267)
at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:682)
Could someone please suggest how to resolve the issue?
Thanks,
Chandan
I'm not sure about the root cause of the issue, but first of all you should replace
val queryString = "orgName = '" + companyName + "'"
val companyNameQuery = new SqlQuery[String, CanonicalXXXXX](classOf[CanonicalXXXXX], queryString)
with
val queryString = "orgName = ?"
val companyNameQuery = new SqlQuery[String, CanonicalXXXXX](classOf[CanonicalXXXXX], queryString)
companyNameQuery.setArgs(companyName)
That's the correct way of using this API. By the way, SqlQuery is effectively deprecated; prefer SqlFieldsQuery instead.
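For illustration, a hedged, untested sketch of the same lookup with SqlFieldsQuery; it assumes orgName is a query-visible field of CanonicalXXXXX, as implied by the setIndexedTypes call above:

import org.apache.ignite.cache.query.SqlFieldsQuery

// The SQL table name is derived from the indexed value type.
val fieldsQuery = new SqlFieldsQuery(
  "select _key, orgName from CanonicalXXXXX where orgName = ?").setArgs(companyName)
val cursor = igniteXXXXXX.createXXXXXXCache.query(fieldsQuery)
val rows = cursor.getAll() // each row is a java.util.List of the selected fields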
I need to fetch region_name by region_code from an Oracle DB.
I use Exposed in my program, but I get this error:
Exception in thread "main" java.lang.AbstractMethodError
at org.jetbrains.exposed.sql.Transaction.closeExecutedStatements(Transaction.kt:181)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.inTopLevelTransaction(ThreadLocalTransactionManager.kt:137)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.transaction(ThreadLocalTransactionManager.kt:75)
The code is:
object Codes : Table("REGIONS") {
    val region_code = varchar("region_code", 32)
    val region_name = varchar("region_name", 32)
}
The main fun contains
.......
val conn = Database.connect("jdbc:oracle:thin:@//...", driver = "oracle.jdbc.OracleDriver",
        user = "...", password = "...")
transaction(java.sql.Connection.TRANSACTION_READ_COMMITTED, 1, conn) {
    addLogger(StdOutSqlLogger)
    Codes.select { Codes.region_code eq "a" }.limit(1).forEach {
        print(it[Codes.region_name])
    }
}
AbstractMethodError usually means that you compiled the code with one version of a library, but are running it against a different (incompatible) version. (See for example these questions.)
So I'd check your dependencies etc. carefully.
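As a hedged illustration, if the project were built with sbt (the build tool here is unknown; Gradle's dependencies task is the analogue), the classpath conflict can be audited from the sbt shell:

> evicted          // lists library versions evicted in favour of another
> dependencyTree   // full tree; built into sbt 1.4+, earlier via the sbt-dependency-graph plugin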
We are on: akka-stream-experimental_2.11 1.0.
Inspired by the example, we wrote a TCP receiver as follows:
def bind(address: String, port: Int, target: ActorRef)
        (implicit system: ActorSystem, actorMaterializer: ActorMaterializer): Future[ServerBinding] = {
  val sink = Sink.foreach[Tcp.IncomingConnection] { conn =>
    val serverFlow = Flow[ByteString]
      .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 256, allowTruncation = true))
      .map { message =>
        target ? new Message(message)
        ByteString.empty
      }
    conn handleWith serverFlow
  }
  val connections = Tcp().bind(address, port)
  connections.to(sink).run()
}
However, our intention was to have the receiver not respond at all and only sink the message. (The TCP message publisher does not care about the response.)
Is it even possible not to respond at all, given that akka.stream.scaladsl.Tcp.IncomingConnection takes a flow of type Flow[ByteString, ByteString, Unit]?
If yes, some guidance would be much appreciated. Thanks in advance.
One attempt, as follows, passes my unit tests, but I'm not sure it's the best idea:
def bind(address: String, port: Int, target: ActorRef)
        (implicit system: ActorSystem, actorMaterializer: ActorMaterializer): Future[ServerBinding] = {
  val sink = Sink.foreach[Tcp.IncomingConnection] { conn =>
    val targetSubscriber = ActorSubscriber[Message](system.actorOf(Props(new TargetSubscriber(target))))
    val targetSink = Flow[ByteString]
      .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 256, allowTruncation = true))
      .map(Message(_))
      .to(Sink(targetSubscriber))
    conn.flow.to(targetSink).runWith(Source(Promise().future))
  }
  val connections = Tcp().bind(address, port)
  connections.to(sink).run()
}
You are on the right track. To keep the possibility of closing the connection at some point, you may want to keep the promise and complete it later on. Once the promise is completed with an element, that element is published by the source. However, as you don't want any element to be published on the connection, you can use drop(1) to make sure the source will never emit any element.
Here's an updated version of your example (untested):
val promise = Promise[ByteString]()

// this source will complete when the promise is fulfilled,
// or it will complete with an error if the promise is completed with an error
val completionSource = Source(promise.future).drop(1)

completionSource      // only used to complete later
  .via(conn.flow)     // I reordered the flow for better readability (arguably)
  .runWith(targetSink)

// to close the connection later, complete the promise:
def closeConnection() = promise.success(ByteString.empty) // dummy element, will be dropped

// alternatively, to fail the connection later, complete with an error:
def failConnection() = promise.failure(new RuntimeException)
On: akka-stream-experimental_2.11 1.0.
We are using Framing.delimiter in a TCP server. When a message arrives with length greater than maximumFrameLength, a FramingException is thrown, and we can capture it from OnError of the ActorSubscriber.
Server Code:
def bind(address: String, port: Int, target: ActorRef, maxInFlight: Int, maxFrameLength: Int)
        (implicit system: ActorSystem, actorMaterializer: ActorMaterializer): Future[ServerBinding] = {
  val sink = Sink.foreach[Tcp.IncomingConnection] { conn =>
    val targetSubscriber = ActorSubscriber[Message](system.actorOf(Props(new TargetSubscriber(target, maxInFlight))))
    val targetSink = Flow[ByteString]
      .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = maxFrameLength, allowTruncation = true))
      .map(raw => Message(raw))
      .to(Sink(targetSubscriber))
    conn.flow.to(targetSink).runWith(Source(Promise().future))
  }
  val connections = Tcp().bind(address, port)
  connections.to(sink).run()
}
Subscriber code:
class TargetSubscriber(target: ActorRef, maxInFlight: Int) extends ActorSubscriber with ActorLogging {
  private var inFlight = 0

  override protected def requestStrategy = new MaxInFlightRequestStrategy(maxInFlight) {
    override def inFlightInternally = inFlight
  }

  override def receive = {
    case OnNext(msg: Message) =>
      target ! msg
      inFlight += 1
    case OnError(t) =>
      inFlight -= 1
      log.error(t, "Subscriber encountered error")
    case TargetAck(_) =>
      inFlight -= 1
  }
}
Problem:
Messages under the max frame length do not flow after this exception for that incoming connection. Killing the client and re-running it works fine.
ActorSubscriber does not honor supervision.
What is the correct way to skip the bad message and continue with the next good message?
Have you tried putting supervision on the targetSink flow instead of the whole materializer? I don't see it anywhere here, and I believe it should be set on that flow directly.
Still, this is more a guess than science ;)
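If you want to try that, here is a hedged, untested sketch of flow-level supervision applied to the targetSink from the question (API names as of the 1.0-era supervision support; whether the framing stage recovers cleanly on Resume is, per the above, a guess):

import akka.stream.{ActorAttributes, Supervision}

// Resume on framing errors so one oversized frame doesn't fail the whole connection.
val decider: Supervision.Decider = {
  case _: Framing.FramingException => Supervision.Resume
  case _                           => Supervision.Stop
}

val targetSink = Flow[ByteString]
  .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = maxFrameLength, allowTruncation = true))
  .withAttributes(ActorAttributes.supervisionStrategy(decider))
  .map(raw => Message(raw))
  .to(Sink(targetSubscriber))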
I had the same exception reading from a file, and for me it was solved by adding a trailing newline after the last line.
I am trying to create a Pig UDF that extracts the locations mentioned in a tweet using the Stanford CoreNLP package, interfaced through the sista Scala API. It works fine when run locally with sbt run, but throws a java.lang.NoSuchMethodError when called from Pig:
Loading default properties from tagger edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger
Reading POS tagger model from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger
Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz
2013-06-14 10:47:54,952 [communication thread] INFO org.apache.hadoop.mapred.LocalJobRunner - reduce > reduce done [7.5 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.muc.7class.distsim.crf.ser.gz ...
2013-06-14 10:48:02,108 [Low Memory Detector] INFO org.apache.pig.impl.util.SpillableMemoryManager - first memory handler call - Collection threshold init = 18546688(18112K) used = 358671232(350264K) committed = 366542848(357952K) max = 699072512(682688K)
done [5.0 sec]. Loading classifier from edu/stanford/nlp/models/ner/english.conll.4class.distsim.crf.ser.gz ...
2013-06-14 10:48:10,522 [Low Memory Detector] INFO org.apache.pig.impl.util.SpillableMemoryManager - first memory handler call - Usage threshold init = 18546688(18112K) used = 590012928(576184K) committed = 597786624(583776K) max = 699072512(682688K)
done [5.6 sec].
2013-06-14 10:48:11,469 [Thread-11] WARN org.apache.hadoop.mapred.LocalJobRunner - job_local_0001
java.lang.NoSuchMethodError: org.joda.time.Duration.compareTo(Lorg/joda/time/ReadableDuration;)I
    at edu.stanford.nlp.time.SUTime$Duration.compareTo(SUTime.java:3406)
    at edu.stanford.nlp.time.SUTime$Duration.max(SUTime.java:3488)
    at edu.stanford.nlp.time.SUTime$Time.difference(SUTime.java:1308)
    at edu.stanford.nlp.time.SUTime$Range.<init>(SUTime.java:3793)
    at edu.stanford.nlp.time.SUTime.<clinit>(SUTime.java:570)
Here is the relevant code:
object CountryTokenizer {
  def tokenize(text: String): String = {
    val locations = TweetEntityExtractor.NERLocationFilter(text)
    println(locations)
    locations.map(x => Cities.country(x)).flatten.mkString(" ")
  }
}

class PigCountryTokenizer extends EvalFunc[String] {
  override def exec(tuple: Tuple): java.lang.String = {
    val text: java.lang.String = Util.cast[java.lang.String](tuple.get(0))
    CountryTokenizer.tokenize(text)
  }
}

object TweetEntityExtractor {
  val processor: Processor = new CoreNLPProcessor()

  def NERLocationFilter(text: String): List[String] = {
    val doc = processor.mkDocument(text)
    processor.tagPartsOfSpeech(doc)
    processor.lemmatize(doc)
    processor.recognizeNamedEntities(doc)
    val locations = doc.sentences.map(sentence => {
      val entities = sentence.entities.map(List.fromArray(_)) match {
        case Some(l) => l
        case _ => List()
      }
      val words = List.fromArray(sentence.words)
      (words zip entities).filter(x => {
        x._1 != "" && x._2 == "LOCATION"
      }).map(_._1)
    })
    List.fromArray(locations).flatten
  }
}
I am using sbt-assembly to construct a fat-jar, and so the joda-time jar file should be accessible. What is going on?
Pig ships with its own version of joda-time (1.6), which is incompatible with 2.x.
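A hedged workaround sketch (it assumes sbt-assembly 0.14+, which is newer than this thread's era): shade joda-time inside the fat jar so the UDF keeps its own 2.x classes while Pig keeps its 1.6:

// build.sbt sketch: rename joda-time's packages inside the assembled jar so
// they cannot collide with the joda-time 1.6 that Pig puts on the classpath.
assemblyShadeRules in assembly := Seq(
  ShadeRule.rename("org.joda.time.**" -> "shadedjoda.@1").inAll
)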