Gson API parsing issue in Kotlin

I'm trying to parse the JSON returned by the following API call (recipe and ingredientLines only):
https://api.edamam.com/search?q=khachapuri&app_id=xxx&app_key=yyy
My model for Gson looks like this:
class FoodModel {
    var label: String = "Yummy"
    var image: String = "https://agenda.ge/files/khachapuri.jpg"
    var ingredientLines = ""
}
After launching the app, I'm facing the following error:
com.google.gson.JsonSyntaxException: java.lang.IllegalStateException: Expected BEGIN_ARRAY but was BEGIN_OBJECT at line 1 column 2 path $
I think I'm writing the model class incorrectly, because the structure of the JSON isn't clear to me. This is how I'm trying to use Gson: val foodItems = Gson().fromJson(response, Array<FoodModel>::class.java). Can anyone help?

The JSON object returned by the API has a slightly different structure compared to your model.
In particular, the API returns a complex object that you need to traverse in order to extract the information you are interested in. A high-level example (I'm not able to test it, but hopefully you'll get the gist of it):
data class Response(
    val hits: List<Hit>
)

data class Hit(
    val recipe: Recipe
)

data class Recipe(
    val label: String,
    val image: String,
    // the API returns ingredientLines as a list of strings, not a single string
    val ingredientLines: List<String>
)

val foodItems = Gson().fromJson(response, Response::class.java)
Just be aware that Gson may create instances in an unsafe manner (it can bypass constructors entirely), which means you may experience NullPointerExceptions thrown for no apparent reason. If you want to prove it, just rename image to anything else (you can also try other fields, it doesn't matter), and you'll see its value is null even though the type is non-nullable.
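One way to defend against that is to declare the deserialized fields as nullable and fall back explicitly. A minimal sketch, reusing the model above (the fallback values are my own choice, not something the API dictates):

data class Recipe(
    val label: String? = null,
    val image: String? = null,
    val ingredientLines: List<String>? = null
)

val foodItems = Gson().fromJson(response, Response::class.java)
foodItems.hits.forEach { hit ->
    // Gson populates fields reflectively, so a missing or renamed field
    // arrives as null even though the Kotlin type system can't see it.
    println(hit.recipe.label ?: "unknown label")
    hit.recipe.ingredientLines?.forEach(::println)
}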

How to create object in Kotlin without getting "Null can not be a value of a non-null type Long"?

I am totally new to Kotlin, coming from Java, and I'm trying to understand some errors.
I am fetching a record from the database (Postgres) using JpaRepository. I want to create a new record that differs from the fetched one in only two fields, and insert it as a new row (not update the previous one).
val otp = notificationRepository.findById(id).get()
val newOtp = otp
newOtp.delivery = newOtp.delivery.plusMinutes(OTP_RESEND_TIME_IN_MINUTES)
newOtp.status = NotifyStatus.PENDING
newOtp.id = null // HERE I AM GETTING THE ERROR
// save logic here
I cannot set the id of the new record to null before saving it, because the IDE gives the error "Null can not be a value of a non-null type Long".
How can I create a new object by just updating some fields and setting id to null so I can store it?
This is how you should define your class:
class Notification(
    var id: Long?, // nullable so it can be cleared before inserting
    var status: NotifyStatus,
    var delivery: DateTime
)
As I don't know what the class looks like, I'm guessing you have a class or data class similar to this (the fields are var because your snippet mutates them). If you want to declare a variable as nullable, you add ? after its type.
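If Notification is a data class, another option is to copy it instead of mutating the instance you loaded; note that val newOtp = otp in your snippet only copies the reference, so you were still mutating the entity fetched from the database. A sketch under that assumption (field and repository names taken from the question):

val otp = notificationRepository.findById(id).get()

// copy() builds a brand-new instance; clearing id makes the
// subsequent save an insert rather than an update.
val newOtp = otp.copy(
    id = null,
    status = NotifyStatus.PENDING,
    delivery = otp.delivery.plusMinutes(OTP_RESEND_TIME_IN_MINUTES)
)
notificationRepository.save(newOtp)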

Parse-server result.attributes vs get()

Been using Parse Server for about a month in our project, and it's been a fantastic addition to our tech stack. However, there is one thing that has slowed down our team a bit: when using Parse.Query, we can only access the fields of the returned object by using get('fieldName'), which seems very redundant and error-prone (using strings to get the fields). In Firebase, there is a method to get all the data: .data(). I haven't seen this feature in Parse.
We found out about a property on the query result called attributes. It seems to be an object that we can destructure to directly get all the fields of the Parse Object. For example:
const query = new Parse.Query('Movie');
const result = await query.first();
const { title, price } = result.attributes;
There is only a slight reference to it in the docs : https://parseplatform.org/Parse-SDK-JS/api/master/Parse.Object.html under Members, only with the description Prototype getters/setters.
If this property makes things much easier and more convenient than the get() method, is there any reason it isn't included in the getting started guide of Parse-SDK-JS? Or am I missing something?
Thanks
As you can see in this line, the get method just accesses the attributes property and returns the value for the specified key, so you should be fine with const { title, price } = result.attributes.
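A quick illustration of that equivalence (class and field names taken from the question, assuming the query matches at least one Movie):

const query = new Parse.Query('Movie');
const result = await query.first();

// get() looks the key up in the same attributes object under the hood,
// so these two reads return the same value.
const viaGet = result.get('title');
const { title } = result.attributes;
console.log(viaGet === title); // true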

RepoDb cannot find mapping configuration

I'm trying to use RepoDb to query the contents of a table (in an existing Sql Server database), but all my attempts result in an InvalidOperationException (There are no 'contructor parameter' and/or 'property member' bindings found between the resultset of the data reader and the type 'MyType').
The query I'm using looks like the following:
public async Task<ICollection<MyType>> GetAllAsync()
{
    using (var db = new SqlConnection(connectionString).EnsureOpen())
    {
        // ExecuteQueryAsync maps each row of the resultset to MyType by name.
        return (await db.ExecuteQueryAsync<MyType>("select * from mytype")).ToList();
    }
}
I'm trying to run this via a unit test, similar to the following:
[Test]
public async Task MyTypeFetcher_returns_all()
{
    SqlServerBootstrap.Initialize();
    var sut = new MyTypeFetcher("connection string");
    var actual = await sut.GetAllAsync();
    Assert.IsNotNull(actual);
}
The Entity I'm trying to map to matches the database table (i.e. class name and table name are the same, property names and table column names also match).
I've also tried:
putting annotations on the class I am trying to map to (both at the class level and the property level)
using the ClassMapper to map the class to the db table
using the FluentMapper to map the entire class (i.e. entity-table, all columns, identity, primary; see the sketch after this list)
putting all mappings into a static class which holds all mapping and configuration and calling that in the test
providing mapping information directly in the test via both ClassMapper and FluentMapper
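For reference, here is what a minimal FluentMapper setup of the kind described above looks like, based on RepoDb's documented fluent API (the property and column names here are illustrative, not taken from the real schema):

FluentMapper
    .Entity<MyType>()
    .Table("[dbo].[MyType]")
    .Primary(e => e.Id)
    .Identity(e => e.Id)
    .Column(e => e.Name, "[Name]");

This needs to run once at startup, before the first query is executed.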
From the error message it seems like RepoDb cannot find the mappings I'm providing. Unfortunately I have no idea how to go about fixing this. I've been through the documentation and the sample tutorials, but I haven't been able to find anything of use. Most of them don't seem to need any mapping configuration (similar to what you would expect when using Dapper). What am I missing, and how can I fix this?

Sorted table with a map in Apache Ignite

I initially want to accomplish something simple with Ignite. I have a type like this (simplified):
case class Product(version: Long, attributes: Map[String, String])
I have a key for each one to store it by (it's one of the attributes).
I'd like to store them such that I can retrieve a subset of them between two version numbers or, at the very least, WHERE version > n. The problem is that the cache API only seems to support either retrieval by key or table scan. On the other hand, SQL99 doesn't seem to have any kind of map type.
I was thinking I'd need to use a binary marshaler, but the docs say:
There is a set of 'platform' types that includes primitive types, String, UUID, Date, Timestamp, BigDecimal, Collections, Maps and arrays of thereof that will never be represented as a BinaryObject.
So... maps are supported?
Here's my test code. It fails with java.lang.IllegalArgumentException: Cache is not configured: ignite-sys-cache, though. Any help getting a simple test working would really aid my understanding of how this is supposed to work.
Oh, and also, do I need to configure the schema in the Ignite config file? Or are the field attributes a sufficient alternative to that?
import org.apache.ignite.Ignition
import org.apache.ignite.cache.query.SqlQuery
import org.apache.ignite.cache.query.annotations.QuerySqlField
import org.apache.ignite.configuration.CacheConfiguration
import scala.annotation.meta.field
import scala.collection.JavaConverters._

case class Product(
  @(QuerySqlField @field)(index = true) version: Long,
  attributes: java.util.Map[String, String]
)

object Main {
  val TestProduct = Product(2L, Map("pid" -> "123", "foo" -> "bar", "baz" -> "quux").asJava)

  def main(args: Array[String]): Unit = {
    Ignition.setClientMode(true)
    val ignite = Ignition.start()
    val group = ignite.cluster.forServers

    val cacheConfig = new CacheConfiguration[String, Product]
    cacheConfig.setName("inventory1")
    cacheConfig.setIndexedTypes(classOf[String], classOf[Product])

    val cache = ignite.getOrCreateCache(cacheConfig)
    cache.put("P123", TestProduct)

    val query = new SqlQuery[String, Product](classOf[Product], "select * from Product where version > 1")
    val resultSet = cache.query(query)
    println(resultSet)
  }
}
Ignite supports querying by indexed fields. Since version is a regular indexed field, the queries you describe should be feasible.
I've checked your code and it works on my side.
Please check that the Ignite version is consistent across all the nodes.
If you provide the full logs, I can take a look at it.
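For completeness, one way to consume the cursor returned by the query, reusing the cache and query from the question (getAll materializes the full result set, which is fine for a small test):

val resultSet = cache.query(query)
resultSet.getAll.asScala.foreach { entry =>
  // each entry is a javax.cache.Cache.Entry[String, Product]
  println(s"${entry.getKey} -> ${entry.getValue}")
}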

Google Cloud Dataflow, BigQueryIO and NullPointerException on TableRow.get

I'm new to GC Dataflow and didn't find a relevant answer here. Apologies if I should have found this already answered.
I'm trying to create a simple pipeline using the v2.0 SDK and am having trouble reading data into my PCollection using BigQueryIO. I am using the .withQuery method, and I have tested the query in the BigQuery interface and it seems to be working fine. The initial PCollection seems to get created without any issues, but when I then set up a simple ParDo function to convert the values from the TableRow objects into a PCollection of strings, I get a NullPointerException on the line of code that does the .get on the TableRow object.
Here is my code. (I'm probably missing something simple. I'm a total newbie at Pipeline programming. Any input would be most appreciated.)
public class ClientAutocompletePipeline {
    private static final Logger LOG = LoggerFactory.getLogger(ClientAutocompletePipeline.class);

    public static void main(String[] args) {
        // create the pipeline
        Pipeline p = Pipeline.create(
                PipelineOptionsFactory.fromArgs(args).withValidation().create());

        // A step to read in the product names from a BigQuery table
        p.apply(BigQueryIO.read().fromQuery("SELECT name FROM [beaming-team-169321:Products.raw_product_data]"))
            .apply("ExtractProductNames", ParDo.of(new DoFn<TableRow, String>() {
                @ProcessElement
                public void processElement(ProcessContext c) {
                    // Grab a row from the BigQuery results
                    TableRow row = c.element();
                    // Get the value of the "name" column from the table row.
                    // NOTE: This is the line that is giving me the NullPointerException
                    String productName = row.get("name").toString();
                    // Make sure it isn't empty
                    if (!productName.isEmpty()) {
                        c.output(productName);
                    }
                }
            }));

        p.run();
    }
}
The query definitely works in the BigQuery UI and the column called "name" is returned when I test the query. Why am I getting a NullPointerException on this line:
String productName = row.get("name").toString();
Any ideas?
This is a common problem when working with BigQuery and Dataflow; most likely the field is indeed null. If you are OK with using Scala, you could take a look at Scio (a Scala DSL for Dataflow) and its BigQuery IO.
Just make your code null safe. Replace this:
String productName = row.get("name").toString();
With something like this:
String productName = String.valueOf(row.get("name"));
I think I'm late to this, but you can do something like if (row.containsKey("column-name")).
This will basically tell you whether the field is null or not.
What happens in BigQuery is that, while reading data, if a column value is null, it is not available as part of that particular TableRow. Hence you are getting that error. You can also do something like if (row.get("column-name") == null) to check whether the field is null.
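Putting those suggestions together, a null-safe version of the DoFn body from the question might look like this (only the "name" column is assumed, as in the original pipeline):

@ProcessElement
public void processElement(ProcessContext c) {
    TableRow row = c.element();
    // A NULL column value is simply absent from the TableRow,
    // so row.get("name") can return null.
    Object name = row.get("name");
    if (name != null && !name.toString().isEmpty()) {
        c.output(name.toString());
    }
}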