Fetch data from Google Analytics in Scalajs - intellij-idea

I am trying to fetch the data tracked by Google Analytics for my web page into my Scala.js program, using the Google Analytics Java API. I followed its documentation and tried the following code.
Import statements:
import java.io.File

import com.google.api.client.googleapis.auth.oauth2.GoogleCredential
import com.google.api.client.http.javanet.NetHttpTransport
import com.google.api.client.json.jackson2.JacksonFactory
import com.google.api.services.analytics.{Analytics, AnalyticsScopes}
and the code is:
def gaTracker(): Unit = {
  val HTTP_TRANSPORT = new NetHttpTransport()
  val JSON_FACTORY = new JacksonFactory()
  val file = new File("/home/ishan/IdeaProjects/client_secret.json")
  val apiEmail = "abc@developer.gserviceaccount.com"
  val analyticsPrivateKeyFile: String = "/home/ishan/IdeaProjects/client_secret.p12"
  val credential: GoogleCredential = new GoogleCredential.Builder()
    .setTransport(HTTP_TRANSPORT)
    .setJsonFactory(JSON_FACTORY)
    .setServiceAccountId(apiEmail)
    .setServiceAccountPrivateKeyFromP12File(new File(analyticsPrivateKeyFile))
    .setServiceAccountScopes(AnalyticsScopes.all())
    .build()
  val analyticsService = new Analytics.Builder(HTTP_TRANSPORT, JSON_FACTORY, credential)
    .setApplicationName("tracker")
    .build()
  val startDate: String = "2016-01-12"
  val endDate: String = "2016-12-15" // Analytics dates must be YYYY-MM-DD
  val metrics: String = "ga:sessions,ga:timeOnPage"
  val get = analyticsService.data().ga().get("ga:xxxxxx111", startDate, endDate, metrics)
  val data = get.execute()
  if (data.getRows() != null) {
    println("Fetched")
  } else {
    println("Not Fetched")
  }
}
I added a dependency in the build.sbt file:
libraryDependencies += "com.google.apis" % "google-api-services-analytics" % "v3-rev136-1.22.0"
But it is showing errors:
Referring to non-existent class com.google.api.client.json.jackson2.JacksonFactory
Referring to non-existent class com.google.api.client.http.javanet.NetHttpTransport
I also tried adding the following dependency, but nothing changed:
libraryDependencies += "com.google.http-client" % "google-http-client-jackson" % "1.18.0-rc"
Can someone guide me from here? Thanks in advance.
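For reference, a sketch of what the build.sbt dependencies could look like, assuming the Google Analytics call runs in a JVM (server) module rather than in Scala.js-compiled code, since these are JVM-only libraries; note that the jackson2 JacksonFactory lives in google-http-client-jackson2, not google-http-client-jackson:
libraryDependencies ++= Seq(
  "com.google.apis"        % "google-api-services-analytics" % "v3-rev136-1.22.0",
  "com.google.api-client"  % "google-api-client"             % "1.22.0", // GoogleCredential; pulls in google-http-client (NetHttpTransport)
  "com.google.http-client" % "google-http-client-jackson2"   % "1.22.0"  // com.google.api.client.json.jackson2.JacksonFactory
)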

Related

MapStruct Property has no write accessor in MetaData for target name

I have a FlatDTO that needs to be mapped to a nested Response containing InfoData and MetaData. The code for the response is generated by OpenAPI, so the definitions below can't be changed.
package org.mapstruct.example.kotlin.openapi

import com.fasterxml.jackson.annotation.JsonProperty
import javax.validation.Valid

data class Response(
    @field:Valid @field:JsonProperty("infoData", required = true) val infoData: InfoData,
    @field:Valid @field:JsonProperty("metaData", required = true) val metaData: MetaData
)

data class InfoData(
    @field:JsonProperty("id", required = true) val id: kotlin.String,
)

data class MetaData(
    @field:JsonProperty("firstProperty") val firstProperty: String? = null,
)
I have defined FlatDTO as follows.
package org.mapstruct.example.kotlin.models

data class FlatDTO(
    var id: String? = null,
    var firstProperty: String,
)
Here is my mapper class which maps FlatDTO to Response
package org.mapstruct.example.kotlin.mapper

import org.mapstruct.Mapper
import org.mapstruct.Mapping
import org.mapstruct.Mappings
import org.mapstruct.example.kotlin.models.FlatDTO
import org.mapstruct.example.kotlin.openapi.Response

@Mapper
interface DataMapper {
    @Mappings(
        Mapping(target = "infoData.id", source = "id"),
        Mapping(target = "metaData.firstProperty", source = "firstProperty")
    )
    fun flatToResponse(flatDTO: FlatDTO): Response
}
When I try to build the code using mvn clean install, I get the following error:
error: Property "firstProperty" has no write accessor in MetaData for target name "metaData.firstProperty".
[ERROR] @org.mapstruct.Mappings(value = {@org.mapstruct.Mapping(target = "infoData.id", source = "id"), @org.mapstruct.Mapping(target = "metaData.firstProperty", source = "firstProperty")})
I understand that this message is saying there is no setter for firstProperty because it is defined as val, but that code cannot be edited. I can write my own custom mapper that works just fine.
I wanted to understand whether there is a way to use MapStruct in this scenario.
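One possible direction (a sketch, not from the original post; it assumes MapStruct 1.4+, which can populate immutable targets through their constructors): map each nested object with its own method so that no setters are needed, and build Response from those pieces.
@Mapper
interface DataMapper {
    // MapStruct 1.4+ constructs immutable Kotlin data classes via their constructors.
    fun toInfoData(flatDTO: FlatDTO): InfoData

    fun toMetaData(flatDTO: FlatDTO): MetaData

    @Mappings(
        Mapping(target = "infoData", source = "flatDTO"),
        Mapping(target = "metaData", source = "flatDTO")
    )
    fun flatToResponse(flatDTO: FlatDTO): Response
}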

How to convert spark dataframe[double , String] to LabeledPoint?

Following is the code I am experimenting with. I am trying to convert SalesData from CSV to a DataFrame and then to LabeledPoints. However, in the last step I am getting the following compilation error:
package macros contains object and package with same name: blackbox
Can you please give me pointers on what I am doing wrong here? Thank you.
--EDIT--
The compilation issue was solved by adding the 2.11 mllib dependency to build.gradle, but mlData.show now fails with:
ERROR: java.lang.ClassCastException: java.lang.String cannot be cast to org.apache.spark.ml.linalg.Vector
val path = "SalesData.csv"
val conf = new SparkConf().setMaster("local[2]").set("deploy-mode", "client").set("spark.driver.bindAddress", "127.0.0.1")
  .set("spark.broadcast.compress", "false")
  .setAppName("local-spark-kafka-consumer-client")
val sparkSession = SparkSession
  .builder()
  .config(conf)
  .getOrCreate()

val data = sparkSession.read.format("csv").option("header", "true").option("inferSchema", "true").load(path)
data.cache()

import org.apache.spark.sql.DataFrameNaFunctions
data.na.drop()
data.show

// get monthly sales totals
val summary = data.select("OrderMonthYear", "SaleAmount").groupBy("OrderMonthYear").sum().orderBy("OrderMonthYear").toDF("OrderMonthYear", "SaleAmount")
summary.show

// convert OrderMonthYear to integer type
//val results = summary.map(df => (df.getAs[String]("OrderMonthYear").replace("-", ""), df.getAs[String]("SaleAmount"))).toDF(["OrderMonthYear","SaleAmount"])
import org.apache.spark.sql.functions._
val test = summary.withColumn("OrderMonthYear", regexp_replace(col("OrderMonthYear").cast("String"), "-", "")).toDF("OrderMonthYear", "SaleAmount")
test.printSchema()
test.show

import sparkSession.implicits._
val mlData = test.select("OrderMonthYear", "SaleAmount")
  .map(row => org.apache.spark.ml.feature.LabeledPoint(
    row.getAs[Double](1),
    row.getAs[org.apache.spark.ml.linalg.Vector](0)))
  .toDF
mlData.show
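A sketch of what the last step could look like instead (an assumption, not code from the original post): LabeledPoint expects a Double label and a Vector of features, so the String column needs to be converted to a number and wrapped with Vectors.dense rather than cast to a Vector.
import org.apache.spark.ml.feature.LabeledPoint
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.functions.col
import sparkSession.implicits._

// Cast both columns to double first, then build the LabeledPoint explicitly.
val mlData = test
  .select(col("OrderMonthYear").cast("double"), col("SaleAmount").cast("double"))
  .map(row => LabeledPoint(row.getDouble(1), Vectors.dense(row.getDouble(0))))
  .toDF()
mlData.show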

Facing issue while using SparkUDF with multiple arguments

I am trying to encrypt the data using SHA-256 by passing it as an argument to a Spark UDF, but I am getting the error below. Please find the program snippet and error details below.
Code Snippet:
package com.sample

import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import java.security.MessageDigest
import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.UserDefinedFunction
import javax.xml.bind.DatatypeConverter
import org.apache.spark.sql.Column

object Customer {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Customer-data").setMaster("local[2]").set("spark.executor.memory", "1g")
    val sc = new SparkContext(conf)
    val spark = SparkSession.builder().config(sc.getConf).getOrCreate()
    //val hash_algm = sc.getConf.get("halgm")
    val hash_algm = "SHA-256"
    val df = spark.read.format("csv").option("header", "true").load("file:///home/tcs/Documents/KiranDocs/Data_files/sample_data")
    spark.udf.register("encriptedVal1", encriptedVal)
    // calling encryption UDF function
    //val resDF1 = df.withColumn(("ssn_number"), encriptedVal(df("customer_id"))).show()
    val resDF2 = df.withColumn(("ssn_number"), encriptedVal(array("customer_id", hash_algm))).show()
    println("data set" + resDF2)
    sc.stop()
  }

  def encriptedVal = udf((s: String, s1: String) => {
    val digest = MessageDigest.getInstance(s1)
    val hash = digest.digest(s.getBytes("UTF-8"))
    DatatypeConverter.printHexBinary(hash)
  })
}
Error details are below:
Exception in thread "main" 2019-01-21 19:42:48 INFO SparkContext:54 - Invoking stop() from shutdown hook
java.lang.ClassCastException: com.sample.Customer$$anonfun$encriptedVal$1 cannot be cast to scala.Function1
    at org.apache.spark.sql.catalyst.expressions.ScalaUDF.<init>(ScalaUDF.scala:104)
    at org.apache.spark.sql.expressions.UserDefinedFunction.apply(UserDefinedFunction.scala:85)
    at com.sample.Customer$.main(Customer.scala:26)
    at com.sample.Customer.main(Customer.scala)
The problem here is how you call the defined UDF. You should use it like the following:
val resDF1 = df.withColumn("ssn_number", encriptedVal(df.col("customer_id"), lit(hash_algm)))
because it accepts two Column objects (both columns must be of String type, as defined in your UDF); lit comes from org.apache.spark.sql.functions, which is already imported.
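A slightly fuller sketch in context (assuming the same column names as in the question); note that show() returns Unit, so keep the DataFrame in a val and call show() separately if you want to reuse it:
import org.apache.spark.sql.functions.lit

val resDF2 = df.withColumn("ssn_number", encriptedVal(df.col("customer_id"), lit(hash_algm)))
resDF2.show()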

Case Class serialization in Spark

In a Spark app (Spark 2.1), I'm trying to send a case class as an input parameter to a function that is meant to run on executors:
object TestJob extends App {

  val appName = "TestJob"
  val out = "out"
  val p = Params("my-driver-string")

  val spark = SparkSession.builder()
    .appName(appName)
    .getOrCreate()

  import spark.implicits._

  (1 to 100).toDF.as[Int].flatMap(i => Dummy.process(i, p))
    .write
    .option("header", "true")
    .csv(out)
}

object Dummy {
  def process(i: Int, v: Params): Vector[String] = {
    Vector { if (i % 2 == 1) v + "_odd" else v + "_even" }
  }
}

case class Params(v: String)
When I run it with master local[*] everything goes well, but when running in a cluster the Params class state does not get serialized and the output results in
null_even
null_odd
...
Could you please help me understand what I'm doing wrong?
Googling around, I found this post that gave me the solution: Spark broadcasted variable returns NullPointerException when run in Amazon EMR cluster.
In the end the problem is due to extending App: vals of an object that extends App are initialized via delayedInit, i.e. only when its main runs, so on the executors (where main never runs) they show up as null.
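A minimal sketch of the usual fix (an assumption, not code from the original post; Dummy and Params stay as defined above): define a main method instead of extending App, so the values are ordinary locals captured by the closure.
import org.apache.spark.sql.SparkSession

object TestJob {
  def main(args: Array[String]): Unit = {
    val appName = "TestJob"
    val out = "out"
    val p = Params("my-driver-string")

    val spark = SparkSession.builder()
      .appName(appName)
      .getOrCreate()

    import spark.implicits._

    // p is now a local val, so it is serialized with the closure instead of
    // being a not-yet-initialized field of an App object.
    (1 to 100).toDF.as[Int].flatMap(i => Dummy.process(i, p))
      .write
      .option("header", "true")
      .csv(out)
  }
}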

Contract verification failure in corda Hello World pt 2

I'm following the Corda Hello World tutorial, part 2, using Kotlin. Every time I try to start a new flow via the CRaSH shell on PartyA using the following command:
start IOUFlow iouValue: 30, otherParty: "C=US, L=New York, O=PartyB"
I get a Contract Verification Failure:
Done
Contract verification failed: List has more than one element., contract: com.template.IOUContract#7c109db7, transaction: D08920023D788F80F289527BD9C27BCD54B7DAC6C53866BFA7B90B23E0E4749B
IOUFlow class:
@InitiatingFlow
@StartableByRPC
class IOUFlow(val iouValue: Int,
              val otherParty: Party) : FlowLogic<Unit>() {

    override val progressTracker = ProgressTracker()

    @Suspendable
    override fun call() {
        val notary = serviceHub.networkMapCache.notaryIdentities[0]
        val outputState = IOUState(iouValue, ourIdentity, otherParty)
        val outputContract = IOUContract::class.jvmName
        val outputContractAndState = StateAndContract(outputState, outputContract)
        val cmd = Command(IOUContract.Create(), listOf(ourIdentity.owningKey, otherParty.owningKey))
        val txBuilder = TransactionBuilder(notary = notary)
            .addOutputState(outputState, TEMPLATE_CONTRACT_ID)
            .addCommand(cmd)
        txBuilder.withItems(outputContractAndState, cmd)
        txBuilder.verify(serviceHub)
        val signedTx = serviceHub.signInitialTransaction(txBuilder)
        val otherpartySession = initiateFlow(otherParty)
        val fullySignedTx = subFlow(CollectSignaturesFlow(signedTx, listOf(otherpartySession), CollectSignaturesFlow.tracker()))
        subFlow(FinalityFlow(fullySignedTx))
    }
}
I've tried modifying App.kt to deal with this problem without luck. Does anyone know what the problem is?
Thanks in advance for your help.
The issue is that you're adding the command and output to the transaction twice:
val txBuilder = TransactionBuilder(notary = notary)
    .addOutputState(outputState, TEMPLATE_CONTRACT_ID)
    .addCommand(cmd)
txBuilder.withItems(outputContractAndState, cmd)
This causes the contract verification to fail, as you have two outputs instead of one.
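A minimal sketch of the fix (an assumption based on the answer above, not code from the original post): add the output state and the command exactly once, for example by dropping the withItems call.
// Add the output and the command only once.
val txBuilder = TransactionBuilder(notary = notary)
        .addOutputState(outputState, TEMPLATE_CONTRACT_ID)
        .addCommand(cmd)
// no txBuilder.withItems(outputContractAndState, cmd) afterwards
txBuilder.verify(serviceHub)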