JSON: How to validate a field based on the value of another field - playframework-2.1

I have a class
/**
 * If stub == true, data == length of the binary data
 * If stub == false, data == binary data
 */
case class Attachment(contentType: String,
                      stub: Boolean,
                      data: Either[Int, Array[Byte]])
And I'm trying to write a Format for it:
implicit val attachmentFormat = new Format[Attachment] {
  def reads(json: JsValue): JsResult[Attachment] = (
    (__ \ "content_type").read[String] ~
    (__ \ "stub").read[Boolean] ~
    // ??? How do I validate data based on the value of stub?
  )(Attachment.apply)

  def writes(o: Attachment): JsValue = obj(
    "content_type" -> toJson(o.contentType),
    "stub" -> toJson(o.stub),
    "data" -> o.data.fold(
      length => toJson(length),
      bytes => toJson(new String(Base64.encode(bytes)))
    )
  )
}
I can do the conditional write for data but I don't understand how to conditionally validate data on read based on the value of stub.
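A minimal sketch of one possible approach (not from the original thread): read stub first, then branch on it when validating data. JsResult supports flatMap, so a for-comprehension works; Base64.decode is assumed to mirror the Base64.encode used in writes.
implicit val attachmentReads = new Reads[Attachment] {
  def reads(json: JsValue): JsResult[Attachment] = for {
    contentType <- (json \ "content_type").validate[String]
    stub        <- (json \ "stub").validate[Boolean]
    data        <- if (stub)
                     // stub == true: data is the length of the binary payload
                     (json \ "data").validate[Int].map(n => Left(n): Either[Int, Array[Byte]])
                   else
                     // stub == false: data is the Base64-encoded payload itself
                     (json \ "data").validate[String].map(s => Right(Base64.decode(s)): Either[Int, Array[Byte]])
  } yield Attachment(contentType, stub, data)
}
A failed validate on any field surfaces as a JsError, so the value of stub also controls which shape of data is accepted and which error is reported.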

Groovy: Class.forName().newInstance() error

I have the following method, which returns a List<ImField> built from a List<GPathResult> called filteredList. Inside a filteredList.each closure I instantiate a class at runtime and cast it to the static type ImField.
static List<ImField> getFields(GPathResult root, String fieldClass, String fieldType) {
    List<GPathResult> filteredList = root.children().findAll {
        XMLSlurperUtil.name(it as GPathResult) == fieldType
    } as List<GPathResult>

    List<ImField> fields = []
    filteredList.each { GPathResult it, int index ->
        fields.add(Class.forName(fieldClass).newInstance() as ImField)
        fields[index].set(it)
    }
    fields
}
The function call would look like this:
ImStageUtil.getFields(root, ImFieldFactory.SOURCE_FIELD, ImParserConstants.SOURCE_FIELD)
where ImFieldFactory.SOURCE_FIELD = "com.dto.fields.SourceField"
and ImParserConstants.SOURCE_FIELD = "SOURCEFIELD"
The error occurs at the .each closure line:
groovy.lang.MissingMethodException: No signature of method: com.extractor.ImStageUtil$_getFields_closure11.doCall() is applicable for argument types: (groovy.util.slurpersupport.NodeChild) values: []
Possible solutions: doCall(groovy.util.slurpersupport.GPathResult, int), findAll(), findAll(), isCase(java.lang.Object), isCase(java.lang.Object)
I created a similar script based on your example, and there are two things I had to modify (assuming your filteredList is not empty, which you should check first):
1- You need to call collect() after the findAll{} closure; this collects all matching entries into your filteredList.
2- You are using .each{} while providing a closure that takes both an element and an index; replace it with .eachWithIndex{}, because .each doesn't pass an index to its closure.
Here is a simplified version of your code:
import groovy.util.slurpersupport.GPathResult

def text = '''
<list>
    <technology>
        <name>Groovy</name>
    </technology>
</list>
'''

def list = new XmlSlurper().parseText(text)

List getFields(GPathResult root, String fieldClass, String fieldType) {
    List<GPathResult> filteredList = root.children().findAll {
        //println(it)
        it != null
    }.collect() as List<GPathResult>
    println('list: ' + filteredList.getClass() + ', ' + filteredList.size())
    filteredList.eachWithIndex { GPathResult it, int index ->
        println('it: ' + it)
    }
}

getFields(list, '', '')
This last example doesn't raise any exception for me.
Hope this helps.

VarcharType mismatch Spark dataframe

I'm trying to change the schema of a dataframe: every time I have a column of string type, I want to change its type to VarcharType(max), where max is the maximum length of a string in that column. I wrote the following code. (I want to export the dataframe to SQL Server later, and I don't want nvarchar there, so I'm trying to limit the length on the Spark side.)
val df = spark.sql(s"SELECT * FROM $tableName")
var l: List[StructField] = List()
val schema = df.schema
schema.fields.foreach(x => {
  if (x.dataType == StringType) {
    val dataColName = x.name
    val maxLength = df.select(dataColName).reduce((x, y) => {
      if (x.getString(0).length >= y.getString(0).length) {
        x
      } else {
        y
      }
    }).getString(0).length
    val dataType = VarcharType(maxLength)
    l = l :+ StructField(dataColName, dataType)
  } else {
    l = l :+ x
  }
})
val newSchema = StructType(l)
val newDf = spark.createDataFrame(df.rdd, newSchema)
However, when running it I get this error:
20/01/22 15:29:44 ERROR ApplicationMaster: User class threw exception: scala.MatchError: VarcharType(9) (of class org.apache.spark.sql.types.VarcharType)
scala.MatchError: VarcharType(9) (of class org.apache.spark.sql.types.VarcharType)
Can a dataframe column be of type VarcharType(n)?
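(An aside, not from the original post: the maximum length can also be computed with an aggregate instead of a reduce, which copes with null values and an empty column as well.)
import org.apache.spark.sql.functions.{col, length, max}

// max(length(...)) ignores nulls; the result is null when the column is empty
val row = df.agg(max(length(col(dataColName)))).head
val maxLength = if (row.isNullAt(0)) 0 else row.getInt(0)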
The data mapping between a database and a dataframe happens in the dialect class. For MS SQL Server the class is org.apache.spark.sql.jdbc.MsSqlServerDialect. You can inherit from it and override getJDBCType to influence the datatype mapping from a dataframe to a table, then register your dialect for it to take effect.
I have done this for Oracle (not SQL Server), but it can be done similarly.
//Change this
override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
  case TimestampType => Some(JdbcType("DATETIME", java.sql.Types.TIMESTAMP))
  case StringType    => Some(JdbcType("NVARCHAR(MAX)", java.sql.Types.NVARCHAR))
  case BooleanType   => Some(JdbcType("BIT", java.sql.Types.BIT))
  case _             => None
}
You can't use VarcharType because it is not a DataType. Also, you can't check the length of the actual data because it is not exposed; you only have access to dt: DataType, so you can set a default size for NVARCHAR if MAX is not acceptable.
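For completeness, a minimal sketch of the wiring (assuming Spark 2.x; SqlServerVarcharDialect and the VARCHAR(4000) cap are illustrative, not from the original answer). In many Spark versions the built-in MsSqlServerDialect object is package-private, so the sketch extends JdbcDialect directly:
import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects, JdbcType}
import org.apache.spark.sql.types.{DataType, StringType}

// Illustrative dialect mapping Spark strings to a bounded VARCHAR.
object SqlServerVarcharDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:sqlserver")

  override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
    case StringType => Some(JdbcType("VARCHAR(4000)", java.sql.Types.VARCHAR))
    case _          => None // defer to the default mapping
  }
}

JdbcDialects.registerDialect(SqlServerVarcharDialect)
Once registered, df.write.jdbc(...) against a jdbc:sqlserver URL creates string columns with the custom type instead of NVARCHAR(MAX).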

How can we pass multiple arguments to background functions in a Karate feature file

I am passing two arguments to my custom function, but when I pass them in the Background it skips the first one and takes only the second argument.
Here is the sample code:
* def LoadToTigerGraph =
  """
  function(args1, args2) {
    var CustomFunctions = Java.type('com.optum.graphplatform.util.CareGiverTest');
    var cf = new CustomFunctions();
    return cf.testSuiteTrigger(args1, args2);
  }
  """
#*eval if (karate.testType == "component") karate.call(LoadToTigerGraph '/EndTestSample.json')
* def result = call LoadToTigerGraph "functional","/EndTestSample.json"
Output:
test type is ************/EndTestSample.json
path is *************undefined
When you want to pass two arguments, you need to send them as a single JSON object with two key/value pairs:
* def result = call LoadToTigerGraph { var1: "functional", var2: "/EndTestSample.json" }
Then declare your function with a single parameter, function(args), and read args.var1 and args.var2 inside it.

Behat: Pass a value from a test step

I'm trying to assert that the random text entered in one field appears on the next (confirmation) page.
I do it like this:
When I fill in "edit-title" with random value of length "8"
/**
 * Fills in form field with specified id|name|label|value with random string
 * Example: And I fill in "bwayne" with random value of length "length"
 *
 * @When /^(?:|I )fill in "(?P<field>(?:[^"]|\\")*)" with random value of length "(?P<length>(?:[^"]|\\")*)"$/
 */
public function fillFieldWithRandomValue($field, $length)
{
    $field = $this->fixStepArgument($field);
    $value = $this->generateRandomString($length);
    $this->getSession()->getPage()->fillField($field, $value);
}
Then I want to make an assertion, something like this:
Then I should see text matching "<RANDOM VALUE ENTERED IN THE PREVIOUS STEP>"
Is it possible?
UPDATE:
But how would it look with setters and getters if I want to use the generateRandomString method multiple times and then read the values back one after another? Do I have to create variables and functions for every test step? Like this:
When I fill in "x" with random value of length "8"
And I fill in "y" with random value of length "12"
And I go to other page
Then I should see text matching "VALUE ENTERED TO X"
And I should see text matching "VALUE ENTERED TO Y"
You can create a property, set it in one step, and use it in the next one, asserting that it has a value.
It is also nicer and more readable to declare that property with the proper visibility:
/**
 * @var string
 */
private $randomString;

/**
 * Fills in form field with specified id|name|label|value with random string
 * Example: And I fill in "bwayne" with random value of length "length"
 *
 * @When /^(?:|I )fill in "(?P<field>(?:[^"]|\\")*)" with random value of length "(?P<length>(?:[^"]|\\")*)"$/
 */
public function fillFieldWithRandomValue($field, $length)
{
    $field = $this->fixStepArgument($field);
    $this->randomString = $this->generateRandomString($length);
    $this->getSession()->getPage()->fillField($field, $this->randomString);
}

/**
 * @Then /^(?:|I )should see that page contains random generated text$/
 */
public function assertPageContainsRandomGeneratedText()
{
    // Assertion from phpunit
    $this->assertNotNull($this->randomString);
    $this->assertPageContainsText($this->randomString);
}
NOTE: Depending on your Behat setup, assertions from phpunit might not work.
Since you will call the generateRandomString method in multiple places, you should also have a method for reading the value back, like getRandomString, in the style of setters and getters.
My recommendation would be to have a class with related methods that handles all the data, rather than saving a variable in every place you use the data: generate+save and read from the same place anywhere you need it.
Tip: You could be more flexible in the step definition and use a default length for the random string in case one is not provided.
High level example:
class Data
{
    public static $data = array();

    public static function generateRandomString($length = null, $name = null)
    {
        if ($name === null) {
            $name = 'random';
        }
        if ($length === null) {
            $length = 8;
        }
        // generate a random string here and assign it to $string
        return self::$data[$name] = $string;
    }

    public static function getString($name = null)
    {
        if ($name === null) {
            $name = 'random';
        }
        // exception handling
        if (array_key_exists($name, self::$data) === false) {
            return null;
        }
        return self::$data[$name];
    }
}
In context:
/**
 * @Then /^I fill in "x" with random value as (.*?)( and length (\d+))?$/
 */
public function iFillInWithRandomValue($selector, $name, $length = null)
{
    $string = Data::generateRandomString($length, $name);
    // fill method
}

/**
 * @Then /^I should see text matching "first name"$/
 */
public function iShouldSeeTextMatching($variableName)
{
    $string = Data::getString($variableName);
    // assert/check method
}
This is a high-level example; you might need to make some adjustments.
If you have the validation in the same class, then you can also keep all of these together, meaning generateRandomString and getString in the same class as the steps.

Mapping Slick query to default projection after modifying column value

When creating a table query, I would like to modify my select statement by mapping the default table query. However, I cannot find a way to transform the value of a column and still map to my case class:
case class MyRecord(id: Int, name: String, value: Int)

class MyTable(tag: Tag) extends Table[MyRecord](tag, "MYTABLE") {
  def id = column[Int]("id")
  def name = column[String]("name")
  def value = column[Int]("value")
  def * = (id, name, value) <> (MyRecord.tupled, MyRecord.unapply)
}

lazy val tableQuery = TableQuery[MyTable]
I would like to trim the value of name with this function:
def trimLeading0: (Rep[String]) => Rep[String] = SimpleExpression.unary[String, String] {
  (str, queryBuilder) =>
    import slick.util.MacroSupport._
    import queryBuilder._
    b"TRIM(LEADING 0 FROM $str)"
}
Now I am at a loss about what to do here:
val trimmedTableQuery: Query[MyTable, MyRecord, Seq] = tableQuery.map(s => ???)
I have tried mapping the Rep like I would do with a case class:
val trimmedTableQuery = tableQuery.map(s => s.copy(name = trimLeading0(s.name)))
This refuses to compile with "value copy is not a member of MyTable".
My current workaround is to use a custom function instead of MyRecord.tupled for the default projection:
def trimming(t: (Int, String, Int)) = MyRecord(t._1, t._2.dropWhile(_ == '0'), t._3)
def * = (id, name, value) <> (trimming, MyRecord.unapply)
Alternatively, I could map the returned result of the DBIOAction returning a tuple to the case class, which is much less elegant:
val action = tableQuery.map { s => (s.id, trimLeading0(s.name), s.value) }.result
val futureTuples: Future[Seq[(Int, String, Int)]] = db.run(action)
val records = futureTuples map (s => s.map(MyRecord.tupled))
But how can I do it inside the map method while building the query? Or would it be better to change the def name column definition?
You can't mess with the default projection (i.e. def *) in MyTable as it needs to be symmetric. It's used for query and insert. But you can create a trimmedTableQuery based on a specialisation of MyTable with an overridden default projection. Then you can also have tableQuery based on the symmetric default projection. You will get an error if you try to do inserts based on the trimmedTableQuery (but you shouldn't need to do that, just use tableQuery for inserts).
lazy val tableQuery = TableQuery[MyTable]

lazy val trimmedTableQuery = new TableQuery(new MyTable(_) {
  override def * = (id, trimLeading0(name), value) <> (MyRecord.tupled, MyRecord.unapply)
})
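A brief usage sketch (assuming the usual Slick db handle and execution context are in scope): reads go through the trimmed projection, inserts through the symmetric one.
// Query: names come back with leading zeros trimmed by the database.
val trimmed: Future[Seq[MyRecord]] = db.run(trimmedTableQuery.result)

// Insert: the symmetric default projection stores values untouched.
db.run(tableQuery += MyRecord(1, "007", 42))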