This is about a typical problem in larger embedded system projects:
Data from a text file or database must be hard-coded into C code. The data determines control flow, table dimensions, etc.
What is the solution of choice?
Background:
In our (large) embedded software we have to connect hundreds of signals (outputs of finite state machines) with buses (e.g. a CAN bus). We use Simulink/Stateflow as our model-based development tool (state machines) with auto-coding.
The connection has to scale values, do datatype conversions, etc. Usually all the information for the conversion/connection is stored in a database or file (e.g. a DBC text file).
The usual dynamic way - reading the database at runtime and connecting/converting accordingly - is not an option if hard, deterministic real-time behaviour must be ensured. This data has to be hard coded.
We have not found a more realistic or practical way than to use the Simulink API: write external M-code which reads the data from the file and automates "drawing a picture" of the entangled connections into the Simulink model! This is finally auto-coded to C.
Needless to say, this scripted "painting code" - while effective - is not very reusable or maintainable. I can't find an effective solution even taking compiler/code generation, model-driven architecture, AUTOSAR, and model transformations into consideration. Constructing a dedicated compiler for each external data document, which transforms the data to C code, seems unrealistic ...
Is there a practical alternative? Is this a fundamental weakness of the Simulink "language" (i.e. it has no underlying high-level language, unlike other model-driven embedded tools such as Modelica)?
Your actual problem isn't really well described by your question.
The way I read it is that you have a complex set of data, that requires a complex code generation process. Whenever one needs a complex translation, you'll end up building complicated analysis and code generation machinery. That's generally not an easy task.
You've done it in two stages: 1) read the database and build the Simulink model, 2) let MATLAB compile the Simulink model to source code. Your "database" content is in effect a specification (e.g., a DSL). You have a front end that reads the "spec" and interprets it into a Simulink model; you seem to be complaining that there are no underlying semantics for Simulink. Well... are there robust underlying semantics for your specification DSL? That is probably part of your problem. Simulink itself has poorly defined semantics, too. The combination means that whatever transformations you are doing from your DSL to Simulink, and from Simulink to C, are in effect ad hoc and likely difficult to maintain. We can argue about whether C has cleanly defined semantics; it sort of does, but it isn't easy to see in the standards documents.
In any case, you are building a staged translator. Your first stage probably needs more structure and better analyses. Ideally you'd transform to an intermediate representation that was more formal; I always thought that Colored Petri nets were much better than Simulink, partly because they do have clear, formal semantics. (Modelica is pretty nice, too.) Ideally you'd transform the intermediate stage into C using transformation rules you can control, rather than whatever Simulink happens to do.
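For a rough sense of what such a controllable data-to-C stage can look like, here is a minimal sketch in Python; the line-oriented signal-spec format, the SignalMapEntry type, and the field names are all invented for illustration (this is not the DBC format):

# Sketch of a tiny data-to-C generator: reads a made-up line-oriented signal
# spec ("name, source_type, target_type, scale, offset") and emits a constant
# table that hand-written, deterministic runtime code can walk.
import sys

HEADER = '/* Generated file - do not edit. */\n#include "signal_map.h"\n'

def emit_c(spec_lines):
    rows = []
    for line in spec_lines:
        line = line.split("#", 1)[0].strip()   # strip comments and whitespace
        if not line:
            continue
        name, src, dst, scale, offset = (f.strip() for f in line.split(","))
        rows.append(f'    {{ "{name}", SRC_{src.upper()}, DST_{dst.upper()}, '
                    f'{float(scale)}f, {float(offset)}f }},')
    return (HEADER
            + "const SignalMapEntry signal_map[] = {\n"
            + "\n".join(rows)
            + "\n};\n"
            + f"const unsigned signal_map_len = {len(rows)}u;\n")

if __name__ == "__main__":
    with open(sys.argv[1]) as spec, open(sys.argv[2], "w") as out:
        out.write(emit_c(spec))

Whether something that small is enough depends on how much of the connection logic is really table-driven, but it shows that a first stage under your own control does not have to start out large.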
Building such translators is easier if you have good foundation machinery on which to build them. The best class of machinery for this purpose IMHO is program transformation systems. (I happen to build one of them [see bio]; it knows C and Simulink already, and could be taught Modelica.) You can read a bit more about what is needed in this SO answer on what it takes to build translators.
It's difficult to make out what your exact question is, but here are two suggestions:
If you're looking for a code generator that can generate understandable code from your state diagrams, try fsmpro.io.
For the individual logic, you'll have to paste in code rather than using any inbuilt utilities to design the units.
I am looking for recommendations for a very generic automation/task execution tool. The scope is somewhere between a script, a build system like make, and orchestration tools like Ansible or Puppet. The best I can do is describe my rather vague 'requirements' and hope for clues as to how others have solved these problems. Sorry for the long description; I guess I don't really know exactly what I want the solution to do. I profit from programming answers on SO all the time, but I am not entirely sure if my open-ended question is acceptable here.
--
We work as data analysts/system validators in a corporate setting. We perform a range of diverse tasks and interact with lots of ever-changing systems. Each little step we do is arguably mundane/easy, but the bigger picture only forms if lots of iterations with slightly different inputs or combinations are repeated. It is a bit like looking for a needle in a haystack, but the concrete problem is slightly different every time. This makes it hard to use a normal script or automation tool, which requires more structure to work. But doing things semi-manually without a big team does not allow us to cover all the analyses/cases we want/need.
To give an applied example: a typical task could involve setting up a big calculation in a vendor system, extracting its ASCII output from a web server and parsing it. Then we would pull raw input data from a set of configuration files and databases. This is piped into some of our home-grown replication tools/models living in C++. Then both the system's results and our replication are scanned for interesting outliers (e.g. regression tested) and only this subset is uploaded for human analysts to investigate, nicely presented in an Excel sheet.
We can do all these things easily by hand as a one-off, or maybe using ad-hoc tools/scripts. We just can't do it repeatedly for ever so slightly different settings. We seem to need a library of 'common tasks' that are just specialised by a few inputs (e.g. a task to download a time series and scan for outliers - parameters would be db access/login and maybe parameters defining what an outlier is in that context). And then I need to chain these tasks together to make complex tasks repeatable and simple to build up from atomic steps.
I have not found anything that really does something like this. There seems to be specialist scripting or tooling for each niche, but nothing combining all the different tasks I need to perform.
I have been toying on and off with a minimalist SQLite database which controls a set of Python 'scripts'/wrappers. These scripts take their input parameters from the database, and they are chained/piped based on the database. The scripts write their results back to the database, mostly as plain text and floats/ints. This kind of db interface is very error-prone and complicated for humans; the idea is to have (template) scripts writing (concrete/parametrised) scripts to the db for execution, like rolling itself out before executing. Not sure if this is a smart idea, but the db drives the scripts, without much interaction among these building-block scripts, rather than the conventional bunch of scripts calling each other and dumping some data into a db as an afterthought. So far we have lots of separate wrappers (scripts) to talk to all the systems and do the work; what is really missing is something tying it all together and controlling it.
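For what it's worth, the database-driven layout described above can be sketched in a few dozen lines of Python (table layout and task names are invented for illustration): the control table holds a task name plus JSON parameters, wrappers register themselves by name, and a small runner executes pending rows and writes results back.

# Sketch of a SQLite-driven runner: the 'tasks' table is the single source of
# truth, and wrappers are plain functions registered by name.
import json, sqlite3

TASKS = {}

def task(fn):                          # register a wrapper under its own name
    TASKS[fn.__name__] = fn
    return fn

@task
def download_series(series_id, start, end):
    return {"points": []}              # placeholder for the real vendor call

@task
def scan_outliers(points, threshold):
    return {"outliers": []}            # placeholder for the real check

def run_pending(db_path):
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS tasks
                   (id INTEGER PRIMARY KEY, name TEXT, params TEXT,
                    status TEXT DEFAULT 'pending', result TEXT)""")
    pending = con.execute("SELECT id, name, params FROM tasks "
                          "WHERE status = 'pending'").fetchall()
    for tid, name, params in pending:
        result = TASKS[name](**json.loads(params))
        con.execute("UPDATE tasks SET status = 'done', result = ? WHERE id = ?",
                    (json.dumps(result), tid))
    con.commit()

Chaining then becomes a matter of one task inserting follow-up rows (with its result as their parameters) rather than calling the next script directly, which keeps the whole flow visible in the database.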
I am (obviously) more interested in data/flow transparency, repeatability and chaining mini-programs together into bigger units than in speed or scaling to larger data sets. All the heavy lifting is either done in the systems we interact with, or it is delegated to C++ called from these Python scripts. This is not a production system with stability requirements and fixed goals, but rather a flexible analysis/investigation helper.
I really hope someone here has previously run into exactly this problem, which severely limits our productivity, and we can just piggyback on your solution or ideas.
I would suggest that you consider STAF (Software Testing Automation Framework). It's open source, distributed, and cross-platform. It will run just about any task on just about any platform. It has a variety of plugin "Services" available for specific purposes, or you can create your own custom Service. You can also extend the functionality through scripting (Jython). It's also well documented and reasonably well supported through user forums by IBM.
I've been thinking about this writing (apparently) by Mark Twain in which he starts off writing in English but throughout the text makes changes to the rules of spelling so that by the end he ends up with something probably best described as pseudo-German.
This made me wonder if there is an interpreter for some established language in which one has access to the interpreter itself, so that you can change the syntax and structure of the language as you go along. For example, `if` is often a keyword; is there a language that would let you change or redefine it on the fly? Imagine beginning a console session in one language and, by the end, working in another.
Clearly one could write an interpreter and run it, and perhaps there is no concrete distinction between doing this and modifying the interpreter. I'm not sure about this. Perhaps there are limits to the modifications you can make dynamically to any given interpreter?
These more open questions aside, I would simply like to know if there are any known interpreters that allow this at all. Or perhaps this ability is just a matter of degree and my question is badly posed.
There are certainly languages in which this kind of self-modifying behavior at the level of the language syntax itself is possible. Lisp programs can contain macros, which allow among other things the creation of new control constructs on the fly, to the extent that two Lisp programs that depend on extensive macro programming can look almost as if they are written in two different languages. Forth is somewhat similar in that a Forth interpreter provides a core set of just a dozen or so primitive operations on which a program must be built in the language of the problem domain (frequently some kind of real-world interaction that must be done precisely and programmatically, such as industrial robotics). A Forth programmer creates an interpreter that understands a language specific to the problem he or she is trying to solve, then writes higher-level programs in that language.
In general the common idea here is that of languages or systems that treat code and data as equivalent and give the user just as much power to modify one as the other. Every Lisp program is a Lisp data structure, for example. This is in contrast to a language such as Java, in which a sharp distinction is made between the program code and the data that it manipulates.
A related subject is that of self-modifying low-level code, which was a fairly common technique among assembly-language programmers in the days of minicomputers with complex instruction sets, and which spilled over somewhat into the early 8-bit and 16-bit microcomputer worlds. In this programming idiom, for purposes of speed or memory savings, a program would be written with the "awareness" of the location where its compiled or interpreted instructions would be stored in memory, and could alter in place the actual machine-level instructions byte by byte to affect its behavior on the fly.
Forth is the most obvious thing I can think of. It's concatenative and stack based, with the fundamental atom being a word. So you write a stream of words and they are performed in the order in which they're written with the stack being manipulated explicitly to effect parameter passing, results, etc. So a simple Forth program might look like:
6 3 + .
That is the words 6, 3, + and . (full stop). The two numbers push their values onto the stack. The plus symbol pops the last two items from the stack, adds them and pushes the result. The full stop outputs whatever is at the top of the stack.
A fundamental part of Forth is that you define your own words. Since all words are first-class members of the runtime, in effect you build an application-specific grammar. Having defined the relevant words you might end up with code like:
red circle draw
That would draw a red circle.
Forth interprets each sequence of words as it encounters them. However, it distinguishes between compile-time and ordinary words. Compile-time words do things like compile a sequence of words and store it as a new word. That's the equivalent of defining subroutines in a classic procedural language. They're also the means by which control structures are implemented. But you can also define your own compile-time words.
As a net result a Forth program usually defines its entire grammar, including relevant control words.
You can read a basic introduction here.
Prolog is a homoiconic language, allowing meta-interpreters (MIs) to be written in a variety of ways. A meta-interpreter - an interpreter that interprets the interpreter - is a common and useful native construct in Prolog.
See this page for an introduction to the topic. An interesting and practical technique illustrated there is partial execution:
The overhead incurred by implementing these things using MIs can be compiled away using partial evaluation techniques.
I have designed a control system in Simulink for my project. Now I need to convert this design into C code, but no specific hardware processor has yet been decided for the code to run on, so I need to run my code from within MATLAB. I am very new to the industry, so I am unaware of the steps that are followed to turn a Simulink control design into embedded C.
Since I have no practical experience with the workflow I am supposed to follow, can I please get some guidance on the general steps that have to be taken to achieve this?
Workflow recommendation:
Make sure your design is tested enough in simulation. You don't want to be finding simple errors when you are controlling real hardware.
Investigate/decide on target requirements. If you have limited resources (memory/speed) and must customize the generated code to fit a target interface, you should use Embedded Coder. Otherwise Simulink Coder could be enough (if you have Embedded Coder, use it anyway).
Make sure your model interfaces match what you expect on the target, considering datatypes, sizes, logged data and states. If you have special requirements for how to interface the code, you need to set storage classes on signals and other data. If you can live with the default code interface, your life will be a lot easier.
Set the proper target in Configuration Parameters / Code Generation / System target file: grt.tlc for rapid-prototyping code and ert.tlc for embedded code. Then you can look through the optimization and code generation properties and set them as you would like. If your target has specific datatypes, you should also change the embedded hardware implementation settings to match the datatypes of your target.
Generate code (Ctrl+B).
Integrate the code into your target project. Call _initialize once first; then, in a time-based loop, set inputs, call _step, and read outputs.
It is also possible to make your own custom target to customize the code interface and provide the desired output directly, including compiling and downloading to the target. This is mainly for rapid prototyping, and I recommend doing it manually a few times first and then deciding whether it is worth the effort to automate.
You might want to start looking at some of the examples or videos of Simulink Coder and Embedded Coder. Simulink Coder is for generating C/C++ code, but not necessarily optimised for running on embedded processors (it may be for Rapid Prototyping or Hardware-in-the-Loop purposes for example). Embedded Coder is an add-on to Simulink Coder for optimising the generated code to run on embedded hardware.
You might also want to register for some of their webinars on that topic or look at some recorded ones (there are plenty to choose from).
What do you suggest for Data Access layer? Using ORMs like Entity Framework and Hibernate OR Code Generators like Subsonic, .netTiers, T4, etc.?
For me, this is a no-brainer, you generate the code.
I'm going to go slightly off topic here because there's a bigger underlying fallacy at play. The fallacy is that these ORM frameworks solve the object/relational impedance mismatch. This claim is a barefaced lie.
I find the best way to resolve the object/relational impedance mismatch is to either use OOP exclusively and use an object database or use the idioms of the relational database exclusively and ignore OOP.
The abstraction "everything is a table" is to me, much more powerful than the abstraction "everything is a class." It takes less code, less intellectual effort and leads to faster code when you code to the database rather than to an object model.
To me this seems obvious. If your application is data driven then surely your code should be data driven too? Yet to say this is hugely controversial.
The central problem here is that OOP becomes a really leaky abstraction when used in conjunction with a database. Code that looks perfectly sensible when written to the idioms of OOP looks completely insane when you see the traffic that code generates at the database. When that messiness becomes a performance problem, OOP is the first casualty.
There is really no way to resolve this. Databases work with sets of data. OOP focuses on instances of classes. Trying to marry the two is always going to end in divorce.
So to answer your question, I believe you should generate your classes and try to make them map the underlying database structure as closely as possible.
Perhaps controversially, I've always felt that using code generators for the ADO.NET plumbing is fundamentally solving the wrong problem.
At some point, hopefully not too long after learning about Connection Strings, SqlCommands, DataAdapters, and all that, we notice that:
Such code is ugly
It is very boring to write
It's very easy to miss something if you're doing it by hand
It has to be repeated every time you want to access the database
So, the problem to solve is "how to do the same thing lots of times fast"?
I say no.
Using code generators to make this process quick still means that you have a ton of code, all the same, all over your business classes (or your data access layer, if you separate that from your business logic).
And then, if you want to do something generically (like track stored procedure usage, for instance), you end up having to customise your code generator if it doesn't already have the feature you want. And even if it does, you still have to regenerate everything all the time.
I like to do things once, not many times, no matter how fast I can do them.
So I rolled my own Data Access class that knows how to add parameters, set up and close connections, manage transactions, and other cool stuff. It only had to be written once, and calling its methods from a business object that needs some database work done takes one line of code.
When I needed to make the application support multithreaded database accesses, it required a change to the Data Access class only, and all the business classes just do what they already did.
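The original is C# over ADO.NET; as a rough sketch of the same single-class idea in Python (sqlite3 standing in for the real provider, all names invented):

# Sketch of a thin data-access helper: it owns connections, parameters and
# transactions, so business code is reduced to one-line calls.
import sqlite3
from contextlib import contextmanager

class DataAccess:
    def __init__(self, db_path):
        self.db_path = db_path

    @contextmanager
    def _connection(self):
        con = sqlite3.connect(self.db_path)
        con.row_factory = sqlite3.Row
        try:
            yield con
            con.commit()               # commit on success ...
        except Exception:
            con.rollback()             # ... roll back on any error
            raise
        finally:
            con.close()

    def query(self, sql, params=()):
        with self._connection() as con:
            return con.execute(sql, params).fetchall()

    def execute(self, sql, params=()):
        with self._connection() as con:
            return con.execute(sql, params).rowcount

# A business object then needs exactly one line per database call, e.g.:
# orders = dal.query("SELECT * FROM orders WHERE customer_id = ?", (42,))

Cross-cutting changes (pooling, logging, thread safety) then live in one place, which is the point being made here.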
There is no right answer; it all depends on your project. As Simon points out, if your application is all data-driven, then it might make sense, depending on the size and complexity of the domain, to use a non-OOP paradigm. I had a lot of success building a system using the Transaction Script pattern, which passed XML messages around the system.
However, this system started to break down after five or six years as the application grew in size and complexity (5 or 6 web apps, several web services, tons of COM+ components, legacy and .NET code, 8+ databases with 800+ tables and 4,000+ procedures). No one knew what anything was, and duplication was running rampant.
There are other ways than OOP to alleviate the maintenance burden; however, if you have a very complex domain, then having a rich domain model is ideal IMHO, as it allows the business rules to be expressed in nicely encapsulated components.
To answer your question, avoid code generators if you can. Code generators are a recipe for disaster, but if you do go with code generation, do not modify the generated code. Also be sure to have a good process in place that makes it easy for developers to get the newly generated code.
I recommend either of the following: an ORM, or hand-rolling a lightweight DAL. I am currently transitioning a project from my hand-rolled DAL over to NHibernate and am having a lot of success; however, I like having the option of using either. Further, if you properly separate your concerns (data access from business layer from presentation) you can have a single service layer that talks to a DAO (Data Access Object) which for one object is an ORM and for another is hand-rolled. I like this flexibility as it allows you to apply the best tool to the job.
I like NHibernate over a hand-rolled DAL because, while my DAL does abstract away most of the ADO.NET code, you still have to write the code that maps a data reader to an object, or takes an object and creates the parameters.
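That mapping is exactly the boilerplate in question; here is a hand-written Python sketch (hypothetical customers table) of what it looks like:

# The mapping an ORM would otherwise generate: row -> object and
# object -> parameter tuple, written out by hand once per table.
import sqlite3
from dataclasses import dataclass

@dataclass
class Customer:
    id: int
    name: str
    email: str

def row_to_customer(row: sqlite3.Row) -> Customer:
    return Customer(id=row["id"], name=row["name"], email=row["email"])

def update_customer(con: sqlite3.Connection, c: Customer) -> None:
    # parameter order must match the statement - exactly the kind of
    # repetitive detail discussed above
    con.execute("UPDATE customers SET name = ?, email = ? WHERE id = ?",
                (c.name, c.email, c.id))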
I've always preferred to go the code generator route, especially in C# where you can make use of extended classes to add functionality to the basic data objects.
Hate to say this, but it depends. If you find an ORM tool that fits your needs, go for it. We wrote our own system in small steps while developing the application. We are using C++ and there are not that many tools out there anyway. Ours ended up being an XML description of the database, from which the SQL generation script, the basic object layer and the metadata were generated.
Do your homework and evaluate these tools, and you will find a good fit for your needs.
I've been a web developer for some time now, and have recently started learning some functional programming. Like others, I've had some significant trouble applying many of these concepts to my professional work. For me, the primary reason is that FP's goal of remaining stateless seems quite at odds with the fact that most web development work I've done has been heavily tied to databases, which are very data-centric.
One thing that made me a much more productive developer on the OOP side of things was the discovery of object-relational mappers like MyGeneration d00dads for .NET, Class::DBI for Perl, ActiveRecord for Ruby, etc. This allowed me to stay away from writing insert and select statements all day, and to focus on working with the data easily as objects. Of course, I could still write SQL queries when their power was needed, but otherwise it was abstracted nicely behind the scenes.
Now, turning to functional programming, it seems like many of the FP web frameworks, such as Links, require writing a lot of boilerplate SQL code, as in this example. Weblocks seems a little better, but it seems to use a kind of OOP model for working with data, and still requires code to be manually written for each table in your database, as in this example. I suppose you could use some code generation to write these mapping functions, but that seems decidedly un-Lisp-like.
(Note: I have not looked at Weblocks or Links extremely closely; I may just be misunderstanding how they are used.)
So the question is: for the database access portions (which I believe are pretty large) of a web application, or of other development requiring an interface to an SQL database, we seem to be forced down one of the following paths:
Don't Use Functional Programming
Access data in an annoying, un-abstracted way that involves manually writing a lot of SQL or SQL-like code, à la Links
Force our functional language into a pseudo-OOP paradigm, thus removing some of the elegance and stability of true functional programming.
Clearly, none of these options seem ideal. Has anyone found a way to circumvent these issues? Is there really even an issue here?
Note: I personally am most familiar with Lisp on the FP front, so if you want to give any examples and know multiple FP languages, Lisp would probably be the preferred choice.
PS: For issues specific to other aspects of web development, see this question.
Coming at this from the perspective of a database person, I find that front-end developers try too hard to find ways to make databases fit their model, rather than considering the most effective ways to use a database, which are not object-oriented or functional but relational, based on set theory. I have seen this generally result in poorly performing code. And, further, it creates code that is difficult to performance-tune.
When considering database access there are three main considerations: data integrity (why all business rules should be enforced at the database level, not through the user interface), performance, and security. SQL is written to manage the first two considerations more effectively than any front-end language because it is specifically designed to do that. The task of a database is far different from the task of a user interface. Is it any wonder that the type of code that is most effective at managing that task is conceptually different?
And databases hold information critical to the survival of a company. Is it any wonder that businesses aren't willing to experiment with new methods when their survival is at stake? Heck, many businesses are unwilling even to upgrade to new versions of their existing database. So there is an inherent conservatism in database design. And it is deliberately that way.
I wouldn't try to write T-SQL or use database design concepts to create your user-interface, why would you try to use your interface language and design concepts to access my database? Because you think SQL isn't fancy (or new) enough? Or you don't feel comfortable with it? Just because something doesn't fit the model you feel most comfortable with, doesn't mean it is bad or wrong. It means that it is different and probably different for a legitimate reason. You use a different tool for a different task.
First of all, I would not say that CLOS (Common Lisp Object System) is "pseudo-OO". It is first class OO.
Second, I believe that you should use the paradigm that fits your needs.
You cannot store data statelessly, while a function is a flow of data and does not really need state.
If you have several needs intermixed, mix your paradigms. Do not restrict yourself to only using the lower right corner of your toolbox.
You should look at the paper "Out of the Tar Pit" by Ben Moseley and Peter Marks, available here: "Out of the Tar Pit" (Feb. 6, 2006)
It is a modern classic which details a programming paradigm/system called Functional-Relational Programming. While not directly relating to databases, it discusses how to isolate interactions with the outside world (databases, for example) from the functional core of a system.
The paper also discusses how to implement a system where the internal state of the application is defined and modified using a relational algebra, which obviously is related to relational databases.
This paper will not give you an exact answer on how to integrate databases and functional programming, but it will help you design a system that minimizes the problem.
Functional languages do not have the goal to remain stateless, they have the goal to make management of state explicit. For instance, in Haskell, you can consider the State monad as the heart of "normal" state and the IO monad a representation of state which must exist outside of the program. Both of these monads allow you to (a) explicitly represent stateful actions and (b) build stateful actions by composing them using referentially transparent tools.
You reference a number of ORMs, which, per their name, abstract databases as sets of objects. Truly, this is not what the information in a relational database represents! Per its name, it represents relational data. SQL is an algebra (language) for handling relationships on a relational data set and is actually quite "functional" itself. I bring this up so as to consider that (a) ORMs are not the only way to map database information, (b) SQL is actually a pretty nice language for some database designs, and (c) functional languages often have relational algebra mappings which expose the power of SQL in an idiomatic (and in the case of Haskell, typechecked) fashion.
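To make the "SQL is an algebra" point concrete, here is a toy Python sketch - not any particular library's API - in which relations are plain lists of dicts and the algebra operators are ordinary composable functions, which is roughly what the relational mappings in functional languages give you (plus type checking, in Haskell's case):

# Relations as lists of dicts; select/project/join as ordinary functions.
def select(rel, pred):
    return [r for r in rel if pred(r)]

def project(rel, cols):
    return [{c: r[c] for c in cols} for r in rel]

def join(rel_a, rel_b, key):
    return [{**a, **b} for a in rel_a for b in rel_b if a[key] == b[key]]

people = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Bob"}]
orders = [{"id": 1, "total": 30}, {"id": 1, "total": 5}]

# SELECT name, total FROM people JOIN orders USING (id) WHERE total > 10
result = project(select(join(people, orders, "id"),
                        lambda r: r["total"] > 10),
                 ["name", "total"])
# result == [{"name": "Ada", "total": 30}]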
I would say most Lisps are a poor man's functional language. They are fully capable of being used according to modern functional practices, but since they don't require them, the community is less likely to use them. This leads to a mixture of methods which can be highly useful, but it certainly obscures how pure functional interfaces can still use databases meaningfully.
I don't think the stateless nature of FP languages is a problem for connecting to databases. Lisp is a non-pure functional programming language, so it shouldn't have any problem dealing with state. And pure functional programming languages like Haskell have ways of dealing with input and output that can be applied to using databases.
From your question it seems like your main problem lies in finding a good way to abstract away the record-based data you get back from your database into something lisp-y (lisp-ish?) without having to write a lot of SQL code. This seems more like a problem with the tooling/libraries than with the language paradigm. If you want to do pure FP, maybe Lisp isn't the right language for you. Common Lisp seems more about integrating good ideas from OO, FP and other paradigms than about pure FP. Maybe you should use Erlang or Haskell if you want to go the pure FP route.
I do think the 'pseudo-OO' ideas in Lisp have their merits too. You might want to try them out. If they don't fit the way you want to work with your data, you could try creating a layer on top of Weblocks that allows you to work with your data the way you want. That might be easier than writing everything yourself.
Disclaimer: I'm not a Lisp expert. I'm mostly interested in programming languages and have been playing with Lisp/CLOS, Scheme, Erlang, Python and a bit of Ruby. In daily programming life I'm still forced to use C#.
If your database doesn't destroy information, then you can work with it in a functional manner consistent with "pure functional" programming values by working in functions of the entire database as a value.
If at time T the database states that "Bob likes Suzie", and you have a function likes which accepts a database and a liker, then as long as you can recover the database at time T you have a pure functional program that involves a database. For example:
# Start: Time T
likes(db, "Bob")
=> "Suzie"
# Change who Bob likes; db now refers to the updated database value
...
likes(db "Bob")
=> "Alice"
# Recover the database from T
db = getDb(T)
likes(db, "Bob")
=> "Suzie"
To do this you can't ever throw away information you might use (which in practice means you cannot throw away information), so your storage needs will increase monotonically. But you can start to work with your database as a linear series of discrete values, where subsequent values are related to prior ones through transactions.
This is the major idea behind Datomic, for example.
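A toy sketch of the idea in Python (nothing to do with Datomic's actual API): every transaction produces a new immutable snapshot, and a query such as likes is a pure function of whichever snapshot you hand it:

# Toy append-only store: each transaction yields a new immutable snapshot,
# so a query is a pure function of whichever snapshot you pass it.
from types import MappingProxyType

class History:
    def __init__(self):
        self._snapshots = [MappingProxyType({})]    # snapshot 0: empty db

    def transact(self, **changes):
        new = dict(self._snapshots[-1])
        new.update(changes)
        self._snapshots.append(MappingProxyType(new))
        return len(self._snapshots) - 1             # the new "time" T

    def db_at(self, t):
        return self._snapshots[t]

def likes(db, person):
    return db.get(person)

h = History()
t1 = h.transact(Bob="Suzie")
t2 = h.transact(Bob="Alice")
assert likes(h.db_at(t1), "Bob") == "Suzie"         # the past is still queryable
assert likes(h.db_at(t2), "Bob") == "Alice"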
Not at all. There is a genre of databases known as 'functional databases', of which Mnesia is perhaps the most accessible example. The basic principle is that functional programming is declarative, so it can be optimised. You can implement a join using list comprehensions on persistent collections, and the query optimiser can automagically work out how to implement the disk access.
Mnesia is written in Erlang, and there is at least one web framework (ErlyWeb) available for that platform. Erlang is inherently parallel, with a shared-nothing threading model, so in certain ways it lends itself to scalable architectures.
A database is the perfect way to keep track of state in a stateless API. If you subscribe to REST, then your goal is to write stateless code that interacts with a datastore (or some other backend) that keeps track of state information in a transparent way so that your client doesn't have to.
The idea of an Object-Relational Mapper, where you import a database record as an object and then modify it, is just as applicable and useful to functional programming as it is to object oriented programming. The one caveat is that functional programming does not modify the object in place, but the database API can allow you to modify the record in place. The control flow of your client would look something like this:
Import the record as an object (the database API can lock the record at this point),
Read the object and branch based on its contents as you like,
Package a new object with your desired modifications,
Pass the new object to the appropriate API call which updates the record on the database.
The database will update the record with your changes. Pure functional programming might disallow reassigning variables within the scope of your program, but your database API can still allow in-place updates.
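A sketch of that control flow in Python (the database calls and the accounts table are placeholders; only the shape of the flow matters):

# Read-copy-update at the database boundary: the program never mutates its
# value; it builds a new one and hands it to the (stateful) update call.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Account:
    id: int
    owner: str
    balance: float

def fetch_account(con, account_id):         # 1. import the record as a value
    row = con.execute("SELECT id, owner, balance FROM accounts WHERE id = ?",
                      (account_id,)).fetchone()
    return Account(*row)

def deposit(account, amount):               # 2.-3. branch and build a new value
    return replace(account, balance=account.balance + amount)

def store_account(con, account):            # 4. the API call updates in place
    con.execute("UPDATE accounts SET balance = ? WHERE id = ?",
                (account.balance, account.id))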
I'm most comfortable with Haskell. The most prominent Haskell web framework (comparable to Rails and Django) is called Yesod. It seems to have a pretty cool, type-safe, multi-backend ORM. Have a look at the Persistence chapter in their book.
Databases and Functional Programming can be fused.
for example:
Clojure is a functional programming language based on relational database theory.
Clojure -> DBMS, Super Foxpro
STM -> Transaction, MVCC
Persistent Collections -> db, table, col
hash-map -> indexed data
Watch -> trigger, log
Spec -> constraint
Core API -> SQL, Built-in function
function -> Stored Procedure
Meta Data -> System Table
Note: In the latest spec2, spec is more like RMDB.
see: spec-alpha2 wiki: Schema-and-select
I advocate building a relational data model on top of hash-maps to achieve a combination of NoSQL and RMDB advantages. This is actually a reverse implementation of PostgreSQL.
Duck Typing: If it looks like a duck and quacks like a duck, it must be a duck.
If Clojure's data model is like an RMDB, Clojure's facilities are like an RMDB, and Clojure's data manipulation is like an RMDB, then Clojure must be an RMDB.
Clojure is a functional programming language based on relational database theory
Everything is RMDB
Implement relational data model and programming based on hash-map (NoSQL)
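The analogy can be sketched outside Clojure as well; in Python terms (purely illustrative, not how Clojure implements it), a "table" is a dict of rows keyed by primary key, an "index" is another dict, and a "query" is just a function:

# A relational table modelled on plain hash-maps: rows keyed by primary key,
# a secondary index maintained alongside, and queries as functions.
people = {
    1: {"id": 1, "name": "Ada",   "city": "London"},
    2: {"id": 2, "name": "Bob",   "city": "Paris"},
    3: {"id": 3, "name": "Carol", "city": "London"},
}

by_city = {}                                 # index: city -> set of primary keys
for pk, row in people.items():
    by_city.setdefault(row["city"], set()).add(pk)

def select_by_city(city):
    return [people[pk] for pk in by_city.get(city, ())]

print(select_by_city("London"))              # both London rows, via the index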