Difference between properties file, YAML & JSON? - selenium

I'm a beginner in software testing. I'm working with Selenium using the page object design pattern, and I want to keep the test data separate, but I'm confused about how to do it.
I want to know the difference between using a properties file, YAML, and JSON. Which is most useful in software testing?
Which should I choose for keeping the test data separate: YAML, a properties file, or JSON? Which are more people using nowadays, and as a tester, is it better to know all three well or to follow one particular, easier pattern? What's your suggestion?

XML (Extensible Markup Language) offers flexible and powerful markup capabilities. It is often used in configuration and preference files, like those used by the Eclipse IDE. Most web browsers have XML viewers, although XML is designed for structured data, so viewing it raw is a bit like looking at the internals of a database.
JavaScript Object Notation (JSON) is used with JavaScript, of course. It will be familiar to web developers, who use it for client/server communication.
YAML stands for YAML Ain't Markup Language. It uses line and whitespace delimiters instead of the explicitly marked blocks, possibly spanning multiple lines, that XML and JSON use. This approach also appears in programming languages such as Python.
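To make the comparison concrete, here is the same small, made-up record in each of the three formats:

    XML:
      <user>
        <name>alice</name>
        <roles><role>admin</role><role>tester</role></roles>
      </user>

    JSON:
      { "user": { "name": "alice", "roles": ["admin", "tester"] } }

    YAML:
      user:
        name: alice
        roles:
          - admin
          - tester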
So it comes down to YAML or JSON:
Technically YAML is a superset of JSON. That is, in theory at least, a YAML parser can understand JSON, but not necessarily the other way around.
In general, there are certain things I like about YAML that are not available in JSON.
1) YAML is visually easier to look at. In fact, the YAML homepage is itself valid YAML, yet it is easy for a human to read.
2) YAML has the ability to reference other items within a YAML file using "anchors," so it can handle relational information as one might find in a MySQL database (see the sketch after this list).
3) YAML is more robust about embedding other serialization formats, such as JSON or XML, within a YAML file.
4) YAML, depending on how you use it, can be more readable than JSON.
5) JSON is often faster and is probably still interoperable with more systems.
6) Duplicate keys, which are potentially valid JSON, are definitely invalid YAML.
7) YAML has a ton of features, including comments and relational anchors. YAML syntax is accordingly quite complex, and can be hard to understand.
8) YAML can be used directly for complex tasks like grammar definitions, and is often a better choice than inventing a new language.
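For example (item 2 above), an anchor lets a hypothetical test-data file define a record once and reuse it, and a comment can explain why; neither feature exists in plain JSON:

    # Default credentials shared by several tests (comments are legal in YAML)
    defaults: &admin
      username: admin
      password: secret

    login-test:
      user: *admin    # alias: reuses the anchored mapping above
    checkout-test:
      user: *admin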
If you don't need any features that YAML has and JSON doesn't, I would prefer JSON because it is very simple and widely supported (it has a lot of libraries in many languages). YAML is more complex and has less support. I don't think parsing speed or memory use will differ very much, or be a big part of your program's performance. But JSON is the winner for performance (if relevant) and interoperability, while YAML is better for human-maintained files. So basically, choose based on your requirements, not on what most people are using.
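Whichever format you pick, the main point is keeping the data out of your page objects. As a minimal sketch, here is how a test could read values from a hypothetical testdata.properties file in Java (the file name and key are invented for the example):

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Properties;

    public class TestDataLoader {
        public static void main(String[] args) throws IOException {
            Properties data = new Properties();
            // Load the (hypothetical) test data file from the classpath
            try (InputStream in =
                    TestDataLoader.class.getResourceAsStream("/testdata.properties")) {
                if (in != null) {
                    data.load(in);
                }
            }
            // Tests read named values instead of hard-coding them
            String username = data.getProperty("login.username", "defaultUser");
            System.out.println("login.username = " + username);
        }
    }

A JSON or YAML file would be read the same way, just through a parser library (e.g. Jackson or SnakeYAML) instead of java.util.Properties.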

Related

Is it possible to use Perl6 grammars on raster data? (Use case: Cloud Optimized GeoTIFF validation)

A few questions to scratch an itch around Perl6 grammars and raster (binary, in general) data. From what I understand, the textual approach is to work at the grapheme level through grammars; can we approach raster data that way? Can we make a custom grapheme definition, or some basic unit of binary data, so that grammars can parse it?
Seeing that Perl6 is itself defined by Perl6 grammars, can we define similar grammars as a kind of "validation" test, the basic case being: if the grammar can parse the data, the data is well-formed and structurally validated? Using this approach with text data is fairly obvious, since the basic units of grammars are text-oriented, but can we customize those back-end definitions (for example, it is already possible to override :sigspace so that rules and tokens parse with another separator) to bring the power of grammars into binary-data territory?
Thanks!
For the background part:
Over the past few weeks, I have begun learning Perl6 out of personal interest. After seeing this talk at FOSDEM 2019, I began to ask myself (and the people around me) about using grammars to inspect/parse binary data. My use case would be, for example, to replicate the Cloud Optimized GeoTIFF validator without a GDAL binding (I haven't seen one for Perl6 yet). It's clearly a learning project for me.
The spec for Cloud Optimized GeoTIFF
For now, the basic idea is to parse the binary structure with the help of Perl6 grammars, if that is possible, as a first basic step, hoping to be able to inspect the data and metadata as the main goal.
Note: I'm not a native speaker; if some parts need rewriting or more precision, feel free to point them out.
As only comments were posted, I will summarize here all the answers I got from the comments, my further research, and the #perl6 IRC channel.
Concerning support for a binding to library X (in this test case, GDAL), the strategy in the Perl6 community is one of the following:
Use the Inline::Foo modules, which aim at launching and accessing the ecosystem of the Foo language (for example Inline::Perl5, Inline::Python, and so on). See the list of Inline::X modules in the Perl6 Module Directory;
Use or write a binding using NativeCall to bind to dynamic libraries that follow the C calling convention;
Use or write a native Perl6 module.
Concerning the parsing of binary data, I'll split the subject into two parts:
Generally speaking;
Leveraging grammars.
1. Generally speaking
Leveraging the P5pack module, or using Inline::Perl5 to call pack/unpack, is currently (with perl6.c) the best way to parse binary data structures (the former seems favored, as it is a native module).
See the first comment from #raiph, pointing to a SO answer that shows a basic use case.
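To illustrate the byte-level work that pack/unpack performs, here is a minimal sketch in Java (not Perl6, purely as an illustration) of reading the 8-byte TIFF header a GeoTIFF validator would start from; the sample bytes are invented:

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class TiffHeaderSketch {
        public static void main(String[] args) {
            // Invented little-endian TIFF header:
            // "II" byte-order mark, magic number 42, first IFD at offset 8
            byte[] header = { 'I', 'I', 42, 0, 8, 0, 0, 0 };

            ByteBuffer buf = ByteBuffer.wrap(header);
            boolean littleEndian = header[0] == 'I' && header[1] == 'I';
            buf.order(littleEndian ? ByteOrder.LITTLE_ENDIAN : ByteOrder.BIG_ENDIAN);

            buf.position(2);                               // skip the byte-order mark
            int magic = buf.getShort();                    // 42 for a valid TIFF
            long ifdOffset = buf.getInt() & 0xFFFFFFFFL;   // offset of the first IFD
            System.out.println("magic=" + magic + ", first IFD at " + ifdOffset);
        }
    }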
2. Leveraging the grammars
With perl6.c, grammars can only parse text.
However, the question of parsing binary data seems to be moderately hot (based on feedback seen on the #perl6 IRC channel), and a few documented, but not yet implemented, proposals seem to pave the way, with hope of seeing it happen in the (near or distant?) future.
The last part of #raiph's answer lists a lot of resources pointing in that direction. Moreover, in Synopsis 05 - Regexes and Rules, line 432, a :bytes modifier is mentioned. We will have to see at which point those modifiers get implemented and what is missing to bring them into the language.
On the #perl6 IRC channel, MasterDuke said: « also, i think the nqp binary reading/writing ops that jnthn recently specced and nine implemented were a prerequisite for anything further ». I still have to investigate what exactly he is talking about, but it seems to point in the right direction.
One of the main points, IMO, is related to the grapheme definition, which is based on Unicode. If we were able to override the grapheme definition with a custom one for a specialized grammar, as we can now override the :sigspace modifier to change what counts as a separator for rules and tokens, we would gain a new way to operate on data structures with grammars. For now, the grapheme is defined at the string level, not at the grammar or meta level. See #timotimo's comments linking to the Unicode document describing the Grapheme Cluster Boundary Rules.
A way to bend the rules was linked by #jjmererlo: parsing the GFX3 format with Perl6 grammars.

How to describe a MessagePack data structure used in an internal binary protocol? Is ASN.1 or BNF suited for it?

My goal is to write the specification of a simple client-server application protocol for our project, which will have a few kinds of clients: iOS (Swift), Android (Java), and probably Web (HTTP/WebSocket). The server is Python. Our team decided to use MessagePack as the serializer for the various requests/responses.
So now I am thinking about how to describe such data structures. I don't want to write the whole specification by hand and spend time inventing rules and conventions; I would rather point my colleagues doing client development to a description in some established notation system.
My question is a general one.
How do you approach such a task? Do you write plain prose in your native language, or use some notation system? Is it reasonable to use a notation system together with an existing serializer? I am thinking of ASN.1; it seems clear.
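For example, I imagine pointing them at something like this hypothetical ASN.1 sketch (module, type, and field names invented):

    -- A made-up ASN.1 module describing one of our requests
    OurProtocol DEFINITIONS ::= BEGIN
        LoginRequest ::= SEQUENCE {
            userId  INTEGER,
            token   UTF8String
        }
    END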

How to choose the right markup language or serialization format?

So far, my use of markup languages and data serialization formats has been limited to JSON, plus a little XML to store data in my early attempts at learning video game development. Recently, though, I have been writing different types of applications, and in the cases where I need to store data I have mainly used JSON, because that is what I am most comfortable with. I am starting to feel that I have been blindly throwing JSON at my problems, which I doubt is any better than blindly throwing XML at them.
Mainly, my question is: what do you have to consider when choosing a language or format, and what are some common abuses or misuses of these tools that I should watch out for?

Compilable IDLs that serialize to JSON

I've used Protobuf before, and I was looking into Thrift, but I was wondering what the options are for IDLs that compile to (at least) C#, JS, Objective-C, and Java, and also serialize/deserialize JSON in all of those languages. Thrift mostly does that, but doesn't support JSON in Objective-C, and I was concerned (perhaps unwarranted) about the maturity of its JSON interfaces. Are there any IDLs that use JSON as their primary serialization but also compile to strongly typed bindings in all of the languages listed above?
Thanks!
Regarding Thrift: if any serialization protocol could be considered "primary", it would certainly be the binary format. However, we strive to introduce a common minimum set of protocols and transports for each language, one of which is JSON.
Next, please keep in mind that Thrift's JSON format might not be what you expect. That JSON format is designed especially for Thrift; the main goal is a compact representation of the data. The SimpleJSON protocol, also available for some languages, is more verbose, but was initially designed to be write-only (although that viewpoint is slowly changing).
I was concerned (perhaps unwarranted) about the maturity of its JSON interfaces
There is nothing to be concerned about, honestly. There are a few PHP-related issues with regard to proper string encoding, but otherwise it works just fine, when it is available for the language of choice. If yours is missing, it is not that hard to write a JSON transport, and we always welcome quality contributions. If you need help during that process, ask the mailing lists.
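To make that concrete, a rough sketch: given a hypothetical IDL struct, struct User { 1: i32 id; 2: string name }, compiled with the Thrift compiler, the Java side can serialize to Thrift's JSON format along these lines (class and values invented):

    import org.apache.thrift.TSerializer;
    import org.apache.thrift.protocol.TJSONProtocol;

    public class ThriftJsonSketch {
        public static void main(String[] args) throws Exception {
            // User is a hypothetical class generated by the Thrift compiler
            User user = new User();
            user.setId(42);
            user.setName("alice");

            // TJSONProtocol produces Thrift's compact, Thrift-specific JSON
            TSerializer serializer = new TSerializer(new TJSONProtocol.Factory());
            System.out.println(serializer.toString(user));
        }
    }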

XStream <-> Alternative binary formats (e.g. protocol buffers)

We currently use XStream to encode our web service inputs/outputs in XML. However, we are considering switching to a binary format with code generators for multiple languages (Protobuf, Thrift, Hessian, etc.) to make supporting new clients easier and less reliant on hand-coding (and also to better support our message formats, which include binary data).
However, most of our objects on the server are POJOs, with XStream handling the serialization via reflection and annotations, and most of these libraries assume they will be generating the POJOs themselves. I can think of a few ways to interface with an alternative library:
Write an XStream marshaler for the target format.
Write custom code to marshal the POJOs to/from the classes generated by the alternative library.
Subclass the generated classes to implement the POJO logic. May require some rewriting. (Also did I mention we want to use Terracotta?)
Use another library that supports both reflection (like XStream) and code generation.
However I'm not sure which serialization library would be best suited to the above techniques.
(1) might not be that much work since many serialization libraries include a helper API that knows how to read/write primitive values and delimiters.
(2) probably gives you the widest choice of tools: https://github.com/eishay/jvm-serializers/wiki/ToolBehavior (some are language-neutral). Flawed but hopefully not totally useless benchmarks: https://github.com/eishay/jvm-serializers/wiki
Many of these tools generate classes, which would require writing code to convert to/from your POJOs. Tools that work with POJOs directly typically aren't language-neutral.
(3) seems like a bad idea (not knowing anything about your specific project). I normally keep my message classes free of any other logic.
(4) The Protostuff library (which supports the Protocol Buffer format) lets you write a "schema" to describe how you want your POJOs serialized. But writing this schema might end up being more work and more error-prone than just writing code to convert between your POJOs and some tool's generated classes.
Protostuff can also automatically generate a schema via reflection, but this might yield a message format that feels a bit Java-centric.
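For what it's worth, a minimal sketch of that reflection route with Protostuff (package names vary between Protostuff versions; the Order POJO is invented):

    import io.protostuff.LinkedBuffer;
    import io.protostuff.ProtobufIOUtil;
    import io.protostuff.Schema;
    import io.protostuff.runtime.RuntimeSchema;

    public class ProtostuffSketch {
        // A hypothetical existing POJO, kept free of serialization logic
        public static class Order {
            int id;
            String customer;
        }

        public static void main(String[] args) {
            // Derive a schema from the POJO via reflection; no generated classes
            Schema<Order> schema = RuntimeSchema.getSchema(Order.class);

            Order order = new Order();
            order.id = 7;
            order.customer = "ACME";

            // Serialize to the Protocol Buffer wire format
            LinkedBuffer buffer = LinkedBuffer.allocate(512);
            byte[] bytes;
            try {
                bytes = ProtobufIOUtil.toByteArray(order, schema, buffer);
            } finally {
                buffer.clear();
            }

            // Deserialize back into a fresh instance
            Order copy = schema.newMessage();
            ProtobufIOUtil.mergeFrom(bytes, copy, schema);
            System.out.println(copy.id + " " + copy.customer);
        }
    }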