Dynamic table columns - kotlin

How should I proceed when I want to generate a table from a list of lists that contains only strings (e.g. data from a CSV)? The names of the columns don't matter. All the examples I have seen bind table items to a specific model, which doesn't fit here, as I have an unknown number of columns with unknown names.

If you already know the column names and data types, I would suggest hard-coding them. If you know nothing about the format and simply want to create a TableView with completely dynamic columns, you can use the index into the CSV data as an extractor to create StringProperty values for your data:
import javafx.beans.property.SimpleStringProperty
import javafx.collections.FXCollections
import tornadofx.*

class MyView : View() {
    val data = FXCollections.observableArrayList<List<String>>()
    val csvController: CsvController by inject()

    init {
        runAsync {
            csvController.loadData()
        } ui { entries ->
            // Generate columns based on the first row (the headers)
            entries.first().forEachIndexed { colIndex, name ->
                root.column(name, String::class) {
                    value { row ->
                        SimpleStringProperty(row.value[colIndex])
                    }
                }
            }
            // Assign the remaining entries to our list, skipping the header row
            data.setAll(entries.drop(1))
        }
    }

    override val root = tableview(data)
}

class CsvController : Controller() {
    // Load data from a CSV file here; we'll use some static data
    // where the first row is the headers
    fun loadData() = listOf(
        listOf("Name", "Age"),
        listOf("John", "42"),
        listOf("Jane", "24")
    )
}
This approach would only be good for visualizing the data in a CSV file. If you need to edit or manipulate the data, knowledge of the data types up front would yield a less flimsy application IMO :)

Related

Jetpack compose and Kotlin, dynamic UI losing values on recomp

I am making a dynamic UI using Kotlin and Jetpack Compose, and storing the information in an ObjectBox database.
The aim is to have a composable that starts off with one empty initial item; once the contents of the text field have been filled in, the red "+" button can be clicked and another text field appears. These values need to stay editable all the way until the final composable value is stored. The button changes colour correctly and its state handling works, so I can add and remove rows.
The data comes in as a string and is converted into a HashMap<Int, String>. The Int stores the position in the map being edited and the String holds the text value.
Using log messages I can see that the information is updated in the list, and for recomposition's sake I immediately store the value of the edited list as a converted JSON string.
At the moment:
When I scroll past the composable, it resets and looks like the initial state (even if I have added multiple rows).
Log messages show that my HashMap still has the values from before, e.g. {"0":"asdfdsa"}, but the previous positions are ignored: the previous information is still present but no longer shown on the UI, so when I enter text into the first field again (the others are not visible at the time) I end up with {"0":"asdfdsa","0":"hello"}. This later causes an error when trying to save new data to the list, because of the duplicate key.
In the composables my HashMap is called textFields and is defined like this; number is used to determine how many text fields to draw on the screen:
val textFields = remember { getDataStringToMap(data.dataItem.dataValue) }
val number = remember { mutableStateOf(textFields.size) }
The method getDataMapToString is defined like this:
private fun getDataMapToString(textFieldsMap: HashMap<Int, String>): String {
    val gson = Gson()
    val newMap = hashMapOf<Int, String>()
    for (value in textFieldsMap) {
        if (value.value.isNotBlank()) {
            newMap[value.key] = value.value
        }
    }
    return gson.toJson(newMap)
}
and the method getDataStringToMap is defined like this (I explicitly declare the empty HashMap's type because it's more readable for me when I can see it):
private fun getDataStringToMap(textsFieldsString: String): HashMap<Int, String> {
    val gson = Gson()
    return if (textsFieldsString.isBlank()) {
        hashMapOf<Int, String>(0 to "")
    } else {
        val mapType = HashMap<Int, String>().javaClass
        gson.fromJson(textsFieldsString, mapType)
    }
}
The composables for the text fields are called like this:
items(number.value) { index ->
    listItem(
        itemValue = textFields[index].orEmpty(),
        changeValue = {
            textFields[index] = it
            setDataValue(getDataMapToString(textFields))
        },
        addItem = {
            columnHeight.value += itemHeight
            scope.launch {
                scrollState.animateScrollBy(itemHeight)
            }
        },
        deleteItem = {
            columnHeight.value -= itemHeight
            scope.launch {
                scrollState.animateScrollBy(-itemHeight)
            }
        },
        lastItem = index == number.value - 1,
        index = index
    )
}
Edited 30/12/2022
The answer from @Arthur Kasparian solved the issues. Changing to rememberSaveable retains the UI state even on scroll and recomposition.
Now I just need to sort out which specific elements are removed and shown afterwards :D
The problem is that remember alone does not retain values across configuration changes (or when a composable leaves the composition, e.g. when it is scrolled out of a lazy list), whereas rememberSaveable does.
You can read more about this here.
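As an illustration (a minimal sketch with hypothetical names, not the asker's actual composables): state held with rememberSaveable survives both configuration changes and the item being scrolled out of a LazyColumn and disposed, while plain remember is reset when the item leaves the composition.

import androidx.compose.foundation.lazy.LazyColumn
import androidx.compose.material.Text
import androidx.compose.material.TextField
import androidx.compose.runtime.*
import androidx.compose.runtime.saveable.rememberSaveable

@Composable
fun EditableRows(rowCount: Int) {
    LazyColumn {
        items(rowCount) { index ->
            // var text by remember { mutableStateOf("") }        // lost when this item is disposed
            var text by rememberSaveable { mutableStateOf("") }   // restored when the item reappears
            TextField(
                value = text,
                onValueChange = { text = it },
                label = { Text("Item $index") }
            )
        }
    }
}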

Filter a Map with mutableList values

I have the following map object:
var myStringMap = mapOf(10 to mutableListOf<String>(), 11 to mutableListOf<String>(), 12 to mutableListOf<String>())
I want to append values from a source to the corresponding key, as follows:
myStringMap.keys.forEach { key ->
    getStringFromSource(source, user).let {
        if (it != null) {
            myStringMap[key]!!.add(it)
        }
    }
}
The thing is that I need to add !!, otherwise the editor complains about needing a safe call on a nullable object. Why is that?
After that, when I want to filter out the keys whose values are empty, I get a type error because GetBytes is given a nullable MutableList:
myStringMap.filter { (_: Int, value) -> value.isNotEmpty() }.let {
    it.keys.forEach { key ->
        val bytes = GetBytes(it[key])
        allBytes.add(bytes)
    }
}
Why is that? The it context should be a Map<Int, MutableList<String>>, shouldn't it?
Perhaps I should convert the MutableList to a List?
Why does it[key] still return a nullable type even when you pass in something from its keys collection? Well, because that is what the get method of Map is declared to return. It doesn't depend on the value you pass in.
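To illustrate that point (a small sketch, not from the original answer): get is declared as returning V? no matter where the key came from, while the getValue extension returns a non-nullable value and throws if the key is missing.

val map = mapOf(10 to mutableListOf<String>())

val maybe: MutableList<String>? = map[10]            // get() is declared to return V?
val certain: MutableList<String> = map.getValue(10)  // throws NoSuchElementException if the key is absent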
The key thing to realise here is that you are trying to do something to each of the values of the map, which are all mapped to a key. You are trying to add an element to each of the values, which are lists. You should access values, not keys.
If getStringFromSource has side effects, e.g. returns a different string every time you call it, you can do:
myStringMap.values.forEach { value ->
    getStringFromSource(source, user)?.let { value.add(it) }
}
If getStringFromSource does not have side effects, you can call it first:
getStringFromSource(source, user)?.let { string ->
    myStringMap.values.forEach { it.add(string) }
}
If you actually meant getStringFromSource(source, key), then you should operate on entries, which gives you both the key and value to work with:
myStringMap.entries.forEach { (key, value) ->
    getStringFromSource(source, key)?.let { value.add(it) }
}
This applies to the second situation too. It seems like you are trying to do something with each of the values, so just access the values, not the keys.
myStringMap.filterValues(List<String>::isNotEmpty).values.forEach {
    allBytes.add(GetBytes(it))
}
Use whichever filtering operation you need:
filter
first
...
and then let, for example:
mutableList.firstOrNull { !it.isSelected }?.let { it.isSelected = true }

can't find all fields of a pdf (acroform)

Considering this pdf
With this code I expect to retrieve all the fields, but I only get half of them:
pdfOriginal.getDocumentCatalog().getAcroForm().getFields().forEach(field -> {
    System.out.println(field.getValueAsString());
});
What is wrong here? It seems that not all annotations are referenced in the AcroForm. What is the correct way to add form field annotations to the AcroForm object?
Update 1
The weird thing here: if I try to set the value of a field which is not referenced/found in getAcroForm().getFields(), like this:
doc.getDocumentCatalog().getAcroForm().getField("fieldNotInGetFields").setValue("a");
This works
Update 2
It seems that using doc.getDocumentCatalog().getAcroForm().getFieldTree() retrieves all fields. I don't understand why doc.getDocumentCatalog().getAcroForm().getFields() does not.
What is the correct way to retrieve all fields of a PDF: acroform.getFieldTree() or acroform.getFields()? (I need to retrieve them to set their partialValue.)
From the Javadoc of the method public List<PDField> getFields() we can read:
A field might have children that are fields (non-terminal field) or does not have children which are fields (terminal fields).
In my case some fields are non-terminal fields, so to print them all we need to check whether we are dealing with a PDNonTerminalField, like this:
document.getDocumentCatalog().getAcroForm().getFields().forEach(f -> {
    listFields(f);
});

// Recurse into a PDNonTerminalField, otherwise print the field value
public static void listFields(PDField f) {
    if (f instanceof PDNonTerminalField) {
        ((PDNonTerminalField) f).getChildren().forEach(ntf -> {
            listFields(ntf);
        });
    } else {
        System.out.println(f.getValueAsString());
    }
}
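For reference, PDAcroForm.getFieldTree() (mentioned in Update 2) already walks the whole field hierarchy, so the same listing can be done without explicit recursion. A minimal sketch, in Kotlin to match the rest of this page, assuming PDFBox 2.x:

import org.apache.pdfbox.pdmodel.PDDocument
import org.apache.pdfbox.pdmodel.interactive.form.PDNonTerminalField
import java.io.File

fun printAllFieldValues(pdf: File) {
    PDDocument.load(pdf).use { doc ->
        val acroForm = doc.documentCatalog.acroForm ?: return
        // The field tree iterates over terminal and non-terminal fields alike
        for (field in acroForm.fieldTree) {
            if (field !is PDNonTerminalField) {
                println("${field.fullyQualifiedName} = ${field.valueAsString}")
            }
        }
    }
}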

Use filtered dataProvider contents when FileDownloader is called in Vaadin

I'm trying to download a csv file after applying filters to the DataProvider.
For some reason the filtered results are shown in the Grid, but the downloaded csv file still contains all data.
@AutoView
class FinancialTransactionsView : VerticalLayout(), View {
    private val grid: Grid<FinancialTransaction>
    private val yearField: ComboBox<Int>
    private val dataProvider = DataProvider.ofCollection(FinancialTransaction.findAll())
    private val fileDownloader: FileDownloader

    init {
        label("Financial Transactions") {
            styleName = ValoTheme.LABEL_H1
        }
        yearField = comboBox("Select Year") {
            setItems(listOf(2016, 2017, 2018))
            addSelectionListener {
                // Filter the data based on the selected year
                if (it.value != it.oldValue) setDataProvider()
            }
        }
        // Create FileDownloader and initialize with all contents in the DataProvider
        fileDownloader = FileDownloader(createCsvResource())
        val downloadButton = button("Download csv") {
            styleName = ValoTheme.BUTTON_PRIMARY
            onLeftClick {
                // The idea here is to assign values from the filtered DataProvider to the FileDownloader
                fileDownloader.fileDownloadResource = createCsvResource()
            }
        }
        fileDownloader.extend(downloadButton)
        fileDownloader.fileDownloadResource = createCsvResource()
        grid = grid(dataProvider = dataProvider) {
            expandRatio = 1f
            setSizeFull()
            addColumnFor(FinancialTransaction::companyId)
            addColumnFor(FinancialTransaction::fiscalYear)
            addColumnFor(FinancialTransaction::fiscalPeriod)
            addColumnFor(FinancialTransaction::currency)
            addColumnFor(FinancialTransaction::finalizedDebitAmountInCurrency)
            addColumnFor(FinancialTransaction::finalizedCreditAmountInCurrency)
            appendHeaderRow().generateFilterComponents(this, FinancialTransaction::class)
        }
    }

    private fun createCsvResource(): StreamResource {
        return StreamResource(StreamResource.StreamSource {
            val csv = dataProvider.items.toList().toCsv()
            try {
                return@StreamSource csv.byteInputStream()
            } catch (e: IOException) {
                e.printStackTrace()
                return@StreamSource null
            }
        }, "financial_transactions.csv")
    }

    private fun setDataProvider() {
        dataProvider.clearFilters()
        if (!yearField.isEmpty)
            dataProvider.setFilterByValue(FinancialTransaction::fiscalYear, yearField.value)
    }
}
toCsv() is an extension function on List<FinancialTransaction> which returns a string containing the CSV data.
What can I do to get the filtered results in my csv file?
val csv = dataProvider.items.toList().toCsv()
I am not a Kotlin guy, but I assume dataProvider.items is shorthand for dataProvider.getItems() in Java, i.e. this method (and you are using a ListDataProvider):
https://vaadin.com/download/release/8.4/8.4.1/docs/api/com/vaadin/data/provider/ListDataProvider.html#getItems--
In Vaadin, getItems() returns all items, bypassing all filters.
So instead you should do either of the following:
dataProvider.fetch(..)
https://vaadin.com/download/release/8.4/8.4.1/docs/api/com/vaadin/data/provider/DataProvider.html#fetch-com.vaadin.data.provider.Query-
Where you give the filters you want to apply in the query, or
grid.getDataCommunicator.fetchItemsWithRange(..)
https://vaadin.com/download/release/8.4/8.4.1/docs/api/com/vaadin/data/provider/DataCommunicator.html#fetchItemsWithRange-int-int-
This returns a list of items with the filters you have set applied, which I think is ideal for you.
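For illustration, a minimal sketch (an untested assumption, not part of the original answer) of createCsvResource() rewritten to use fetch(), so that the filters configured on the ListDataProvider (as setDataProvider() above does) are applied:

import com.vaadin.data.provider.Query
import com.vaadin.server.StreamResource
import java.util.stream.Collectors

private fun createCsvResource(): StreamResource =
    StreamResource(StreamResource.StreamSource {
        // fetch() applies the filters configured on the data provider
        // (plus any filter passed in the Query), unlike getItems()
        val filtered = dataProvider.fetch(Query()).collect(Collectors.toList())
        filtered.toCsv().byteInputStream()
    }, "financial_transactions.csv")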
Thank you for using Vaadin-on-Kotlin!
I've just updated the Databases Guide which should hopefully answer all of your questions. If not, just let me know and I'll update the guides accordingly.
The ListDataProvider.items will not apply any filters and will always return all items.
You need to use the getAll() extension function in order to obey the filters set by the Grid.
This is now explained in the Exporting data from DataProviders chapter of the Databases Guide.
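As a rough sketch of that change (assuming the getAll() extension mentioned above is available on the data provider), the line building the CSV in createCsvResource() would become something like:

// getAll() obeys the filters set on the data provider, unlike the items property
val csv = dataProvider.getAll().toCsv()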
In your code, both the grid and the yearField set the filter on the same data provider, thus overwriting the values set by each other. Please read the Chaining Data Providers chapter in the Databases Guide to learn how to AND multiple filters set by multiple components.
When you use private val dataProvider = DataProvider.ofCollection(FinancialTransaction.findAll()), that will load all transactions from the database in-memory. You can use a more memory-efficient way: private val dataProvider = FinancialTransaction.dataProvider (given that FinancialTransaction is an Entity)
Please let me know if this answers your questions. Thanks!

Apache Beam: Transform an object having a list of objects into multiple TableRows to write to BigQuery

I am working on a Beam pipeline to process a JSON document and write it to BigQuery. The JSON looks like this:
{
    "message": [{
        "name": "abc",
        "itemId": "2123",
        "itemName": "test"
    }, {
        "name": "vfg",
        "itemId": "56457",
        "itemName": "Chicken"
    }],
    "publishDate": "2017-10-26T04:54:16.207Z"
}
I parse this using Jackson into the structure below.
class Feed {
    List<Message> messages;
    TimeStamp publishDate;
}

public class Message implements Serializable {

    private static final long serialVersionUID = 1L;
    private String key;
    private String value;
    private Map<String, String> eventItemMap = new HashMap<>();
}
The eventItemMap property collects the list of maps into a single map with all the key-value pairs together, because the messages property is parsed as a list of HashMap objects, one per key/value; these are translated into a single map.
Now in my pipeline, I convert the collection into a
PCollection<KV<String, Feed>>
to write it to different tables based on a property in the class. I have written a transform to do this.
The requirement is to create multiple TableRows based on the number of message objects. I have a few more properties in the JSON, along with publishDate, which would be added to the TableRow together with each message's properties.
So the table would be as follows.
id, name, field1, field2, message1.property1, message1.property2...
id, name, field1, field2, message2.property1, message2.property2...
I tried to create the transformation below, but I am not sure how it can output multiple rows based on the message list.
private class BuildRowListFn extends DoFn<KV<String, Feed>, List<TableRow>> {

    @ProcessElement
    public void processElement(ProcessContext context) {
        Feed feed = context.element().getValue();
        List<Message> messages = feed.getMessage();
        List<TableRow> rows = new ArrayList<>();
        messages.forEach((message) -> {
            TableRow row = new TableRow();
            row.set("column1", feed.getPublishDate());
            row.set("column2", message.getEventItemMap().get("key1"));
            row.set("column3", message.getEventItemMap().get("key2"));
            rows.add(row);
        });
    }
}
But this too will be a List, to which I won't be able to apply the BigQueryIO.write transformation.
Updated as per the comment from "Eugene" aka @jkff
Thanks @jkff. I have now changed the code as you mentioned in the second paragraph, calling context.output(row) inside messages.forEach after setting the table row:
List<Message> messages = feed.getMessage();
messages.forEach((message) -> {
    TableRow row = new TableRow();
    row.set("column2", message.getEventItemMap().get("key1"));
    context.output(row);
});
Now, when I try to write this collection to BigQuery as
rows.apply(BigQueryIO.writeTableRows()
    .to(getTable(projectId, datasetId, tableName))
    .withSchema(getSchema())
    .withCreateDisposition(CreateDisposition.CREATE_IF_NEEDED)
    .withWriteDisposition(WriteDisposition.WRITE_APPEND));
I am getting the below exception.
Exception in thread "main" org.apache.beam.sdk.Pipeline$PipelineExecutionException: java.lang.NullPointerException
at org.apache.beam.runners.direct.DirectRunner$DirectPipelineResult.waitUntilFinish(DirectRunner.java:331)
at org.apache.beam.runners.direct.DirectRunner$DirectPipelineResult.waitUntilFinish(DirectRunner.java:301)
at org.apache.beam.runners.direct.DirectRunner.run(DirectRunner.java:200)
at org.apache.beam.runners.direct.DirectRunner.run(DirectRunner.java:63)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:297)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:283)
at com.chefd.gcloud.analytics.pipeline.MyPipeline.main(MyPipeline.java:284)
Caused by: java.lang.NullPointerException
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.insertAll(BigQueryServicesImpl.java:759)
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.insertAll(BigQueryServicesImpl.java:809)
at org.apache.beam.sdk.io.gcp.bigquery.StreamingWriteFn.flushRows(StreamingWriteFn.java:126)
at org.apache.beam.sdk.io.gcp.bigquery.StreamingWriteFn.finishBundle(StreamingWriteFn.java:96)
Please help.
Thank you.
It seems that you are assuming that a DoFn can output only a single value per element. This is not the case: it can output any number of values per element - no values, one value, many values, etc. A DoFn can even output values to multiple PCollections.
In your case, you simply need to call c.output(row) for every row in your @ProcessElement method, for example: rows.forEach(c::output). Of course you'll also need to change the type of your DoFn to DoFn<KV<String, Feed>, TableRow>, because the type of elements in its output PCollection is TableRow, not List<TableRow> - you're just producing multiple elements into the collection for every input element, but that doesn't change the type.
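For illustration, a minimal sketch of that change (written in Kotlin to match the rest of this page, and assuming the Feed/Message classes from the question), emitting one TableRow per message:

import com.google.api.services.bigquery.model.TableRow
import org.apache.beam.sdk.transforms.DoFn
import org.apache.beam.sdk.values.KV

class BuildRowsFn : DoFn<KV<String, Feed>, TableRow>() {
    @ProcessElement
    fun processElement(context: ProcessContext) {
        val feed = context.element().value
        feed.message.forEach { message ->
            // One output element per message; the output type is TableRow, not List<TableRow>
            val row = TableRow()
                .set("column1", feed.publishDate)
                .set("column2", message.eventItemMap["key1"])
                .set("column3", message.eventItemMap["key2"])
            context.output(row)
        }
    }
}

The resulting PCollection<TableRow> can then be passed to BigQueryIO.writeTableRows() as before.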
An alternative method would be to do what you currently did, also do c.output(rows) and then apply Flatten.iterables() to flatten the PCollection<List<TableRow>> into a PCollection<TableRow> (you might need to replace List with Iterable to get it to work). But the other method is easier.