How to write data to a file in Kotlin

A little while ago I started learning Kotlin, and I have covered the basics: variables, classes, lists, arrays, and so on. But the book I was learning from seemed to skip one important topic: reading from and writing to a file, something like "fwrite" in C++.
So I searched Google, and yes, reading and writing bytes was easy enough. However, being used to C++'s openness, I wanted to make a "kind of" database.
In C++ I would simply define a struct and keep appending it to a file, then read the stored objects back one by one by placing "fread" in a loop, or read them into an array of the struct in one go, since a struct is just the bytes allocated to the variables inside it.
In Kotlin, however, there is no struct; instead we use data classes to group data. I was hoping there was an equally easy way to store data in a file in the form of a data class and read it back into, say, a List of that class, or, if that is not possible, some other way to store grouped data that is easy to read and write.

The easiest way is to use a serialization library, and Kotlin already provides one for that.
TL;DR:
Add kotlinx.serialization to your project, choose the format you prefer (Protobuf or CBOR will fit; go for JSON if you want something more human-readable, although larger in size), use the proper serializer to generate your ByteArray, and write it to a file using Kotlin's file APIs.
Generating the ByteArray might be tricky; I'm not sure, as I'm writing this from memory. What I can say for sure is that if you choose JSON you can get the string representation and write it to a file, so I assume the same holds for the binary formats (writing binary to the file instead of a string).
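For instance, a minimal sketch with the JSON format (assuming the kotlinx-serialization compiler plugin and the kotlinx-serialization-json dependency are added to the build; the Player class and file name are just placeholders):

```kotlin
import kotlinx.serialization.Serializable
import kotlinx.serialization.decodeFromString
import kotlinx.serialization.encodeToString
import kotlinx.serialization.json.Json
import java.io.File

// Stand-in for the C++ struct: a serializable data class.
@Serializable
data class Player(val name: String, val score: Int)

fun main() {
    val players = listOf(Player("Alice", 10), Player("Bob", 7))

    // Write the whole list as JSON text.
    File("players.json").writeText(Json.encodeToString(players))

    // Read it back into a List<Player>.
    val restored: List<Player> = Json.decodeFromString(File("players.json").readText())
    println(restored)
}
```

The binary formats follow the same pattern: the CBOR or Protobuf serializer produces a ByteArray, which you would write with File.writeBytes and read back with File.readBytes.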

What you need can be fulfilled by Room.
It is officially recommended by Google and uses your Android application's internal database, which is built on SQLite.
You can read more about Room at
https://developer.android.com/jetpack/androidx/releases/room
It provides Data Access Objects (DAOs) and entity classes through which you can access the database tables using SQL queries.
It will also check your queries at compile time for errors.
Note: you need basic SQL knowledge to build the queries for CRUD operations.
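As a rough sketch of what that looks like in Kotlin (assuming the Room runtime, compiler, and KTX dependencies are configured; the entity, DAO, and table names below are purely illustrative):

```kotlin
import androidx.room.Dao
import androidx.room.Database
import androidx.room.Entity
import androidx.room.Insert
import androidx.room.PrimaryKey
import androidx.room.Query
import androidx.room.RoomDatabase

// One table row per grouped record, much like a data-class "struct".
@Entity(tableName = "players")
data class PlayerEntity(
    @PrimaryKey(autoGenerate = true) val id: Long = 0,
    val name: String,
    val score: Int
)

// DAO: Room verifies these SQL queries at compile time.
@Dao
interface PlayerDao {
    @Insert
    suspend fun insert(player: PlayerEntity)

    @Query("SELECT * FROM players")
    suspend fun getAll(): List<PlayerEntity>
}

@Database(entities = [PlayerEntity::class], version = 1)
abstract class AppDatabase : RoomDatabase() {
    abstract fun playerDao(): PlayerDao
}
```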

Related

Data serialization format for Standard ML

I am looking for a way to write compound data (e.g. lists) to a file that can later be read back into the program. In Lisps, this is simply a matter of writing s-expressions to a file and reading them back in later (using write and read for Scheme; prin1 and read for Common Lisp). Is there a similar method for doing this in Standard ML? Is there anything built-in that can help? (By "built-in", I mean something that is part of the language or basis library.)

How to write a cluster to a file in LabVIEW?

I am trying to use the LIBSVM software in LabVIEW. Luckily, wrappers already exist on GitHub. I don't want to train the model again each time I open LabVIEW, but I got stuck with my limited knowledge of writing more complex structures to files.
The structure contains integers, doubles, booleans, enums, 1D arrays, and an array of arrays; the sizes of the arrays may change.
What is a proper way to save and load such a cluster? Or do I have to unbundle everything and write it to an XML file?
If the cluster isn't going to change, then you can simply wire it directly to a Write to Binary File node and read it back later.
If you want it to be more readable, you could probably use the built-in XML functions to flatten it to XML, save that, and then unflatten it back, but I'm not sure how cleanly that handles changes.
If you're willing to install things, there are libraries that serialize arbitrary clusters to INI files, such as the OpenG variant configuration VIs or the MGI read/write anything VIs. These are easy to use and survive changes, although they do have limitations with some data types, like classes. I believe there are also some JSON options.

Is it safer putting data in implementation files instead of in header files or other data files?

The purpose is to raise the cost for users who cheat by hacking local game data; security is the main concern. There is no need to consider workflow issues between designers and programmers.
Situation: iOS game development, Objective-C.
To save some game-setting data with a simple structure, such as the max HP value for a boss, I have three plans:
1. Use plist files (or XML, SQLite, etc.; base64 encoding is optional).
2. Use #define macros and put the data in a dedicated header file, say constants.h.
3. Write them in Objective-C code in an implementation file. For example, use a singleton GameData, put the data in GameData.m, and fetch it by calling its methods.
My questions are:
Is plan 3 the safest one here?
Are there other better plans that are not too complicated?
When you use plans 1 and 2 to save data, is it right to write the code assuming that "all the data, and even the code here, is visible to users"? For example, is #define kABC 100.0f a little safer (more confusing to hackers) than #define kEnemy01_HP_Max 100.0f?
None of these methods is safe, and none is safer than another, unless you encrypt the data. You are confusing data security/integrity with private encapsulation. They are not related: a hacker won't be kind enough to use your pre-defined setter/getter functions; they will inspect the binary executable that is your program. Anyone with a basic hex editor for your platform will be able to see that data if they know where to look.
EDIT:
Also, please note that variable/function/macro names etc are only present in your source code, they are not present in your executable. So giving them cryptic names will serve one purpose, and that is to confuse you, the programmer.
Use the GameData singleton you mentioned. Add three methods:
Make GameData capable of reading its data from an unencrypted data file.
Make GameData capable of writing its data encrypted to a data file.
Make GameData capable of reading its data from an encrypted data file.
Refer to: http://www.codeproject.com/Articles/831481/File-Encryption-Decryption-Tutorial-in-Cplusplus
For development use an unencrypted data file and use GameData to encrypt the data (methods 1 and 2).
Ship the encrypted data file and use GameData to decrypt it (method 3).
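The shape of those three methods, sketched here in Kotlin with the JDK crypto APIs purely to illustrate the pattern (the class name, raw-key handling, and cipher mode are simplifying assumptions; a real game would derive and hide the key more carefully and prefer an authenticated cipher mode):

```kotlin
import java.io.File
import javax.crypto.Cipher
import javax.crypto.spec.SecretKeySpec

object GameDataCodec {
    // AES/ECB/PKCS5Padding is used only to keep the sketch short; it is not a secure choice.
    private fun cipher(mode: Int, key: ByteArray): Cipher =
        Cipher.getInstance("AES/ECB/PKCS5Padding").apply {
            init(mode, SecretKeySpec(key, "AES")) // key must be 16, 24, or 32 bytes
        }

    // 1. Read plain (development) data.
    fun readPlain(file: File): ByteArray = file.readBytes()

    // 2. Write the data encrypted.
    fun writeEncrypted(file: File, data: ByteArray, key: ByteArray) =
        file.writeBytes(cipher(Cipher.ENCRYPT_MODE, key).doFinal(data))

    // 3. Read the encrypted (shipped) data back.
    fun readEncrypted(file: File, key: ByteArray): ByteArray =
        cipher(Cipher.DECRYPT_MODE, key).doFinal(file.readBytes())
}
```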

Why doesn't Visual Studio 2012 allow you to select Structure when creating a new VB.NET file? Are structures deprecated?

This is not a subjective question; I am mainly asking to see if structures are now deprecated or something in VB.NET.
It is also not generally a duplicate of a question asking when to use a structure or a class, as this is largely checking to see if such information has become outdated. Furthermore it is certainly not a duplicate of questions relating specifically to C#, as these are two different (albeit similar) languages. The difference between classes and structures is language-dependent, as can be demonstrated by VB.NET and C++.
In Visual Studio 2012, when creating a new VB.NET file, you get options for Module and Class, among other things, but there is no option for Structure.
If you simply select Add New Item, the much more complete menu doesn't list it either.
This seems like an awfully big oversight, especially when there are meaningful differences between classes and structures in VB.NET, so I'm certainly suspicious that it's not really an oversight at all.
Are structures a deprecated practice now? Has the language been revised in some way that has made the difference between a structure and a class much more meaningless? Is there any technical or widely-held convention that I am unaware of here? Or is it just an oversight after all? Thanks.
EDIT
To make a long story short, my understanding is that, among one or two other things, structures tend to be more efficient for smaller amounts of data, and classes tend to be more efficient for larger amounts. This is because of differences in the ways their memory is managed. Even though a lot of people always think in terms of classes in a language-agnostic kind of way, I thought it was common practice among fluent VB.NET developers to use structures as well.
No, structures are not deprecated. They have just never been on the Add Item list.
That is probably because people haven't been willing to reserve a whole file for a single structure, preferring to put structures inside classes and modules. But you can if you want to.
If you are concerned with class vs structure differences, you probably want to see Structs versus classes.
Just to add some more information: the type that you choose from the "Add" dialog only affects the initial template you get in the editor. It is perfectly valid to add a Class file and then edit it to turn it into a structure, a form, or even a module. Typically, if you want to create something that isn't in the list, you would choose "Code File" to get a blank document to customize as you want.
You can even create your own templates to add to that list. If you find yourself wanting to add a template for a structure you can do it fairly easily.
Here are some basic instructions on how to do that.
http://msdn.microsoft.com/en-us/library/tsyyf0yh.aspx
Some structure types, like List<T>.Enumerator, are used essentially the same way as objects, but a more common use case for structures is as simple aggregate types that hold some data for the use of other types. The behavior of a type like KeyValuePair<TKey, TValue> is simply "Key and Value are properties of type TKey and TValue, which hold whatever outside code asked them to hold." While some companies' policies may require that every type reside in its own file with its own associated documentation, placing utility structures into a file with a package's static utility functions, static constants, etc. may make more sense than splitting them into separate files, especially if they don't have any substantial logic of their own.
Nothing prevents a programmer from placing a structure in a file by itself, but the use case was not considered frequent enough to justify a special template for that purpose.

Object serialization practical uses?

How many of the software projects you have worked on used object serialization? I personally have never come across a scenario where object serialization was used. One use case I can think of is server software storing objects to disk to save memory. Are there other types of software where object serialization is essential or preferred over a database?
I've used object serialization in a lot of my projects. Sometimes we use it to store computer-specific settings locally. I have also used XML serialization to simplify interaction and generation of XML documents. It is also very beneficial in communication protocols. Serialize on one end and re-inflate on the other end.
Well, converting objects to XML or JSON is a form of serialization that is quite common on the web. I've also worked on a project where objects were created and serialized to a binary file in one application and then imported into another custom application (though that's fragile since it uses C# and serialization has broken in the past between versions of the .NET framework). Also, application settings that have a complex structure may be useful to serialize. I also think remoting APIs use serialization to communicate. Basically, serialization in general is simply a way to store the states of your objects, and this has many different uses.
Here are a few uses I can think of:
Sending an object across the network; the most common example is serializing objects across a cluster.
Serializing objects for (a sort of) caching, i.e. saving the state to a file and reading it back later.
Serializing passive/huge data to a file to minimize memory consumption, reading it back whenever required.
I'm using serialization to pass objects across a TCP socket. You put XmlSerializers on either side, and they parse your data into readily available objects. If you do a little groundwork, you can get to the point where you're basically passing objects back and forth, which makes socket communication extremely easy, reducing it to little more than socket.Send(myObject);.
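The same idea, as a minimal Kotlin sketch using kotlinx.serialization over a plain java.net.Socket (the Message type, port, and one-JSON-object-per-line framing are assumptions made up for the example):

```kotlin
import kotlinx.serialization.Serializable
import kotlinx.serialization.decodeFromString
import kotlinx.serialization.encodeToString
import kotlinx.serialization.json.Json
import java.io.BufferedReader
import java.io.BufferedWriter
import java.net.Socket

@Serializable
data class Message(val id: Int, val body: String)

// One JSON object per line keeps the framing trivial.
fun BufferedWriter.sendMessage(message: Message) {
    write(Json.encodeToString(message))
    newLine()
    flush()
}

fun BufferedReader.receiveMessage(): Message =
    Json.decodeFromString(readLine())

fun main() {
    // Assumes some server is listening on localhost:9000 and echoes messages back.
    Socket("localhost", 9000).use { socket ->
        val writer = socket.getOutputStream().bufferedWriter()
        val reader = socket.getInputStream().bufferedReader()
        writer.sendMessage(Message(1, "hello"))
        println(reader.receiveMessage())
    }
}
```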
Interprocess communication is a biggie.
You can also combine a database and serialization, for example when you have to store an object with a lot of attributes (often dynamic, i.e. one object's attribute set will differ from another's) in a relational DB and you don't want to create a new column for each attribute.
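A small Kotlin sketch of that idea (assuming kotlinx.serialization; the column is just a TEXT/VARCHAR field holding the JSON, and the attribute names are invented for the example):

```kotlin
import kotlinx.serialization.decodeFromString
import kotlinx.serialization.encodeToString
import kotlinx.serialization.json.Json

// Pack a dynamic attribute set into one JSON column instead of one column per attribute.
fun attributesToColumn(attributes: Map<String, String>): String =
    Json.encodeToString(attributes)

fun columnToAttributes(column: String): Map<String, String> =
    Json.decodeFromString(column)

fun main() {
    val stored = attributesToColumn(mapOf("hp" to "100", "color" to "red"))
    println(stored)                     // {"hp":"100","color":"red"}
    println(columnToAttributes(stored)) // {hp=100, color=red}
}
```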
We started out with a system that serialized all of the thousands of in-memory objects to disk every 15 minutes or so. When that started taking too long, we switched to a mixed mode of saving the objects into a relational database and a pickle file (this was a Python system, by the way). Eventually the majority of the data was stored in the relational database. Interestingly, the system was written in such a way that the application code couldn't care less what was going on down there. It was all done using XP and thousands of automated tests.
Document based applications such as word processors and vector graphics editors will often serialize the document model to disk when the user invokes the Save command. Serialization is often preferred over complex databases in these apps.
Using serialization saves you time whenever you want to implement import/export functionality.
Every time you need to export your system's data, create backups, or store some kind of settings, you can use serialization instead and just save the state of the objects that represent the actual config, data, or whatever else.
Only when you need a specific format for the exported/imported data does it make sense to build a custom parser and exporter/importer.
Serialization is also relatively change-proof: whenever you change the format of an object involved in the exchange functionality, it remains exportable and you don't have to change the logic behind your export/import code.
We used it for backup and update functionality. Essentially, serialized Hibernate objects were backed up, then the DB schema was altered by the update, and we delivered a helper class that "converted" the old objects to the new DB schema. This way we had a pretty solid update mechanism that wouldn't break easily and did an automatic backup at the same time.
I've used XML serialization heavily on one project. The technique was used to persist, to the database, data structures that had no common shape, so the data couldn't be stored directly. I also used serialization to separate out application settings that could be changed at runtime.