Recently (January 2015) Microsoft open-sourced Bond, their framework for working with schematized data. In many respects it is similar to Google's Protocol Buffers.
What are the biggest differences between the two? What are the pros and cons of each, i.e., in which situations would I want to use one but not the other? Of course, I'm not talking about obvious things like consistency with other projects or already-existing APIs, but rather about the features of the libraries themselves. To give an example, Bond has bonded<T>, which, if I remember correctly, has no counterpart in Protocol Buffers.
In general, Bond has a richer type system and supports multiple serialization protocols.
In particular, the pros are:
Bond supports generics
Bond has distinct types for different collections: vector<T>, map<K, V>, list<T>
Bond supports type-safe lazy deserialization (bonded<T>)
Bond supports multiple formats (fast binary, compact binary, XML, JSON), plus marshaling and transcoding
Cons:
Bond doesn't have distinct types for fixed and variable integer encoding. In Bond, how integers are encoded is determined by the output protocol (fast or compact), whereas Protocol Buffers has integer types that always use a fixed size: fixed32 and fixed64.
Bond doesn't support union types (oneof in Protocol Buffers)
I did some tests, and it appears that the size of simple messages in the Bond and Protocol Buffers binary formats is about the same. I also compared serialization and deserialization times using Bond and a C# Protocol Buffers library: in my case Bond performed a bit better; you can find my source code on GitHub.
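For illustration, here is a minimal sketch of a Bond round-trip in C#, adapted from the pattern in Bond's README (the type and field names here are illustrative, not my benchmark code):

```csharp
using System.Collections.Generic;
using Bond;            // [Schema], [Id], Serialize, Deserialize
using Bond.Protocols;  // CompactBinaryWriter/Reader
using Bond.IO.Safe;    // OutputBuffer, InputBuffer

// A Bond-attributed C# type; the Id attributes mirror field ordinals
// in a .bond schema.
[Schema]
public class Record
{
    [Id(0)] public string Name { get; set; } = string.Empty;
    [Id(1)] public List<int> Values { get; set; } = new List<int>();
}

public static class BondRoundTrip
{
    public static Record Clone(Record src)
    {
        // Serialize with the Compact Binary protocol...
        var output = new OutputBuffer();
        var writer = new CompactBinaryWriter<OutputBuffer>(output);
        Serialize.To(writer, src);

        // ...then deserialize it back. Swapping the writer/reader pair
        // (e.g. for Fast Binary) changes the wire format without
        // touching the data model.
        var input = new InputBuffer(output.Data);
        var reader = new CompactBinaryReader<InputBuffer>(input);
        return Deserialize<Record>.From(reader);
    }
}
```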
To sum up, I think it's better to use Bond when you work with complex data types or when you need to represent the same data in different formats: e.g., store it as binary but expose it as JSON.
Context
This essay describes "objects" and "abstract data types" (ADTs) in detail (and here is an older explanation by the same author).
Here is an excerpt:
Despite 25 years of research, there is still widespread confusion about the two forms of data abstraction, abstract data types and objects. This essay attempts to explain the differences and also why the differences matter.

The typical response is a variant of "objects are a kind of abstract data type". This response is consistent with most programming language textbooks. [... But] the textbooks are wrong! Objects and abstract data types are not the same thing, and neither one is a variation of the other. They are fundamentally different and in many ways complementary, in that the strengths of one are the weaknesses of the other.

The issues are obscured by the fact that most modern programming languages support both objects and abstract data types, often blending them together into one syntactic form. But syntactic blending does not erase fundamental semantic differences which affect flexibility, extensibility, safety and performance of programs. Therefore, to use modern programming languages effectively, one should understand the fundamental difference between objects and abstract data types.
Question
Is there a concise explanation using modern, non-academic language and examples? (If not, it would be great if someone provided one here, or I might write my own answer when I have the time.)
Of particular interest are the definitions of and distinctions between objects and ADTs, and the practical implications when writing code (or designing a language).
Caveat
I strongly recommend looking at the linked essay before commenting or answering.
Here is an example of the type of insight I am looking for, also excerpted from the essay:
Abstract data types define operations that collect together the behaviors for a given action. Objects organize the matrix the other way, collecting together all the actions associated with a given representation. It is easier to add new operations in an ADT, and new representations using objects. [...] Object-oriented programs can use inheritance to add new operations.
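To make that matrix concrete, here is a small C# sketch of my own (not from the essay). The ADT-style version makes adding a new operation a local change but adding a new representation touches every switch; the object-style version is exactly the opposite:

```csharp
using System;

// ADT style: one closed data definition; operations pattern-match over it.
// Adding Perimeter() is one new method here; adding Triangle means
// editing every switch.
public abstract record Shape;
public sealed record Circle(double Radius) : Shape;
public sealed record Square(double Side) : Shape;

public static class ShapeOps
{
    public static double Area(Shape s) => s switch
    {
        Circle c => Math.PI * c.Radius * c.Radius,
        Square q => q.Side * q.Side,
        _ => throw new ArgumentOutOfRangeException(nameof(s))
    };
}

// Object style: each representation carries its own behavior.
// Adding a Triangle class is purely local; adding Perimeter() means
// editing every class.
public interface IShape
{
    double Area();
}

public sealed class CircleObject : IShape
{
    private readonly double _radius;
    public CircleObject(double radius) => _radius = radius;
    public double Area() => Math.PI * _radius * _radius;
}

public sealed class SquareObject : IShape
{
    private readonly double _side;
    public SquareObject(double side) => _side = side;
    public double Area() => _side * _side;
}
```

This trade-off is exactly what the literature calls the "expression problem".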
Note that, at least as far as the essay is concerned, as of Jan 3, 2014 Wikipedia is wrong (or at least incomplete), and so are most textbooks. The essay was written by a computer science professor after he noticed a lack of understanding of these concepts even among his academic peers.
Can anyone please explain the following terms to me in a simple way?
1. IDL (Interface Definition Language)
2. Interoperability
3. Portability
4. API
Thank you in advance.
Interoperability generally just means that the system has been designed in such a way that other systems can communicate with it (sending it information to store or process, requesting information from it, or both).
An IDL is a meta-language that allows a program (DLL, etc.) to describe its inputs and outputs. It's an interface definition language because that's all it provides: an interface. Many specific implementations exist, but they're all very similar in function, most are similar in syntax, and they're all entirely declarative (they specify the names of functions with their inputs and outputs, but not what those functions do). Often they're used specifically for calling functions via RPCs.
An API is more general than that: an IDL can specify an API, but so can a web service (SOAP or REST), or any other way for one application, DLL, etc. to call functions in another. An API is abstract in just that sense; it's the concept of having an interface for calling a set of related functions without knowing or caring about their implementation. It's completely language independent.
Portability is a different concept - that generally means being able to compile or run your program on different platforms without a lot of work. Of course, APIs can help with that, if they abstract away platform differences. If you wanted to read images from disk into memory, for instance, you would do that very differently on Windows vs. Linux, somewhat differently on Windows 8 vs. Windows 95, and perhaps slightly differently in x64 vs. x86 versions of the same OS. If someone gave you wrappers so that you could compile or link to different files based on your platform, such that you could always call the same functions in your code and get back the same data regardless of platform, the functions themselves would be the API, the wrappers would be implementations of the API, and your code would be considered portable.
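A minimal sketch of that wrapper idea (the names here are hypothetical, not a real library):

```csharp
// The interface is the API: portable code depends only on this contract.
public interface IImageLoader
{
    byte[] LoadImage(string path);
}

// Platform-specific wrappers implement the same API.
public sealed class WindowsImageLoader : IImageLoader
{
    public byte[] LoadImage(string path)
    {
        // A real implementation would call Windows-specific APIs here;
        // simplified for illustration.
        return System.IO.File.ReadAllBytes(path);
    }
}

public sealed class LinuxImageLoader : IImageLoader
{
    public byte[] LoadImage(string path)
    {
        // A real implementation would call Linux-specific APIs here.
        return System.IO.File.ReadAllBytes(path);
    }
}

// Application code like this is portable: it calls the same functions
// and gets back the same data regardless of platform.
public static class Thumbnailer
{
    public static int SizeInBytes(IImageLoader loader, string path) =>
        loader.LoadImage(path).Length;
}
```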
I'm not sure if this type of question is allowed here, but it is related to software projects.
Is there a difference between scalable and extensible?
Is extensible a subset of scalable? I.e., Scalable(Extensible)?
Some regard them as the same; others regard them as different. What are the differences?
I am led to believe:
Scalable - make the system withstand more usage (bandwidth etc...) AND make it larger.
Extensible - add more functionality to the system.
Are they not the same?
Edit: If extensible means adding more functionality to the system, and scalable can be deemed making a system larger, is that not theoretically the same thing, making extensible just a subset of scalable?
I am not a native speaker, but I do think there is a difference.
If something is scalable, that means it can adapt to growth. This does not say how it adapts (that is, either by being so well-fitted already that it could take more requests, or by adding more resources of the same type, or by easily changing components).
Wikipedia says:
[It is the] ability to be enlarged to accommodate [some kind of] growth.
In theory it might also refer to "downsizing", but that is normally not so interesting from an IT point of view.
You proposed:
Extensible - add more functionality to the system.
Possibly, but not necessarily. It might also refer to adding more capacity that serves the same purposes as before.
I'd say:
Scalability means a system is able to accommodate growth. I.e. the system grows.
Extensibility means you are able to (easily) add something to the system. I.e. something new is attached to the system - which does not have to be growth-related.
Agree with Observer. Just to add a few more examples:
Extensibility:
How easily your software can support 'hooks' for new functionality: interfaces, devices, input types, etc.
This might also refer to how easily your software can support new services with the least (or no) disruption to existing code and clients. For example, adding a new endpoint to an existing web service can be considered a dimension of extensibility.
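As a small sketch of such a hook (hypothetical names): a new input type plugs in without any change to existing code.

```csharp
using System.Collections.Generic;

// Existing code depends only on this hook, never on concrete handlers.
public interface IInputHandler
{
    bool CanHandle(string inputType);
    void Handle(string payload);
}

public sealed class InputPipeline
{
    private readonly List<IInputHandler> _handlers = new();

    // Extensibility point: supporting a new device or input type is just
    // one more Register() call; the pipeline itself never changes.
    public void Register(IInputHandler handler) => _handlers.Add(handler);

    public void Dispatch(string inputType, string payload)
    {
        foreach (var handler in _handlers)
            if (handler.CanHandle(inputType))
                handler.Handle(payload);
    }
}
```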
Scalability:
How easily your software will be able to deal with a growing user base, additional data, etc. Example: if your user base grows in the future, or you decide to save additional data for each entity, is your database scalable? Is your software scalable to user-base growth?
When it comes to scalability, we also start talking about horizontal vs. vertical scalability. Both of these primarily refer to whether the system can scale on the same infrastructure/instance/deployment (vertical scalability) or whether we need to add 'peers' to be able to take more load (horizontal scalability).
We're all familiar with basic ORM with relational databases: an object corresponds to a row and an attribute in that object to a column, though many ORMs add a lot of bells and whistles.
I'm wondering what other alternatives there are (besides raw access to the data). Alternatives that just work with relational databases would be great, but ones that could work with multiple types of backends besides just SQL (such as flat files, RSS, NoSQL, etc.) in a uniform manner would be even better. I'm more interested in ideas than in specific implementations and what languages/platforms they work with, but please link to anything you think is interesting.
Your basic choices are (a quick sketch contrasting the first two follows the list):
Just use raw SQL.
Pick an ORM that meets your needs. Most platforms have a variety of choices; for example, the .NET platform has LINQ, NHibernate, Entity Framework, etc.
Write your own ORM and/or data access framework.
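To make the first two options concrete, a minimal C# sketch (assuming a Users table, and an Entity Framework Core context for the ORM case):

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using Microsoft.EntityFrameworkCore;

public class User
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
    public int Age { get; set; }
}

public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options)
        : base(options) { }
    public DbSet<User> Users => Set<User>();
}

public static class UserQueries
{
    // Option 1: raw SQL via ADO.NET. You write the SQL and the
    // row-to-object mapping yourself.
    public static List<string> AdultNamesRaw(string connectionString)
    {
        var names = new List<string>();
        using var conn = new SqlConnection(connectionString);
        conn.Open();
        using var cmd = new SqlCommand(
            "SELECT Name FROM Users WHERE Age >= @age", conn);
        cmd.Parameters.AddWithValue("@age", 18);
        using var reader = cmd.ExecuteReader();
        while (reader.Read())
            names.Add(reader.GetString(0));
        return names;
    }

    // Option 2: an ORM. Rows map to objects and the query is LINQ;
    // the SQL is generated for you.
    public static List<string> AdultNamesOrm(AppDbContext db) =>
        db.Users.Where(u => u.Age >= 18).Select(u => u.Name).ToList();
}
```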
I'm working on something more or less along the lines you describe. I think that data-store-related tools have grown more complex over time instead of simpler and more usable.
So, instead of adding complexity, this kind of thing should be as simple as:
Get something that points to the data
Use it to do something with the data (query or modify)
The thing you use to interact with the data should do some kind of (transparent) adaptation to the data store you are working with, and that's it.
The translation part may sound a bit ORM-like, but I'm speaking of something more generic:
Some kind of internal implementation to communicate with whatever you are working with (something similar to a JDBC driver, but without the need to work with SQL)
Some kind of mapping to convert data to Java objects (more or less as in an ORM)
The implementation of these concepts I've developed is for Java, and you can see more of it at http://www.bryghts.com
Right now I've only developed an engine for SQL-related data sources, but it's designed to be independent of them.
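To sketch the driver-plus-mapping idea in code (hypothetical interfaces, not the actual bryghts API, and in C# rather than Java):

```csharp
using System.Collections.Generic;

// "Something that points to the data": a handle onto any data store.
public interface IDataSource
{
    // The locator might be a table name, a file path, a feed URL, etc.
    IDataSet Open(string locator);
}

// "Use it to do something with the data": uniform query/modify calls,
// with no SQL (or other store-specific language) leaking through.
public interface IDataSet
{
    IEnumerable<IDictionary<string, object>> Query(
        IDictionary<string, object> criteria);
    void Save(IDictionary<string, object> record);
}

// Each back end (SQL, flat file, RSS, NoSQL, ...) supplies a driver
// implementing these interfaces, analogous to a JDBC driver; a separate
// mapping layer can then turn the generic records into typed objects,
// much as an ORM does.
```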
I just wanted to know if you know of any projects that can help decide whether analyzed RPG source is good code or bad code.
I'm thinking in terms of software metrics, the McCabe cyclomatic number, and all those things.
I know those numbers are little more than a hunch or two, but if you can present your management with a point score, they are happy, and I get to modernize all those programs that otherwise work as specified but are painful to maintain.
So, yeah: know any code analyzers for (ILE) RPG?
We have developed a tool called SourceMeter that can analyze source code conforming to the RPG III and RPG IV versions (including free-form as well). It provides the McCabe cyclomatic number and many other source code metrics that you can use to rate your RPG code.
If the issue is that the programs are painful to maintain, then the metric should reflect how much pain is involved in maintaining them, such as "time to implement new feature X" vs. "estimated time if the codebase wasn't a steaming POS".
However, those are subjective (and always will be). IMO you're probably better off refactoring mercilessly to remove pain points from your development. You may want to look at the techniques of strangler applications to bring in a more modern platform to deliver new features without resorting to a Big Bang rewrite.
The SD Source Code Search Engine (SCSE) is a tool for rapidly searching very large sets of source code, using the language structure of each file to index the file according to code elements (identifiers, operators, constants, string literals, comments). The SD Source Code Search Engine is usable with a wide variety of languages such as C, C++, C#, Java ... and there's a draft version for RPG.
To the OP's original question, the SCSE engine happens to compute various metrics over files as it indexes them, including SLOC, comments, blank lines, and Halstead and cyclomatic complexity measures. The metrics are made available as a byproduct of the indexing step. Thus, various metrics for RPG could be obtained.
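For a rough sense of what the cyclomatic number measures, here is a naive sketch (my own illustration, not how SCSE works): for a single procedure it is approximately the number of decision points plus one.

```csharp
using System;
using System.Linq;

public static class NaiveComplexity
{
    // Decision-making opcodes in RPG; an illustrative list, not exhaustive.
    private static readonly string[] DecisionOps =
        { "IF", "ELSEIF", "WHEN", "DOW", "DOU", "FOR" };

    // McCabe's cyclomatic number for one procedure, approximated as
    // (number of decision points) + 1. A real analyzer would parse the
    // source properly instead of matching tokens.
    public static int Estimate(string[] sourceLines) =>
        sourceLines.Count(line =>
            line.ToUpperInvariant()
                .Split(' ', StringSplitOptions.RemoveEmptyEntries)
                .Any(token => DecisionOps.Contains(token))) + 1;
}
```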
I've never seen one, although I wrote a primitive analyser for RPG400. With the advent of free-form and subprocedures, it was too time-consuming to modify. I wish there were an API that let me access the compiler's lexical tables.
If you wanted to try it yourself, consider reading the cross-reference at the bottom of the compiler listing and using the line numbers to at least get an idea of how long a variable lives. For instance, a global variable is 'worse' than a local variable. It can only be a guess because of GOTO and EXSR.
Lot of work.
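As a rough illustration of that cross-reference idea, a sketch assuming a hypothetical listing format where each line is a variable name followed by the line numbers that reference it:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class VariableSpans
{
    // Input lines like "CUSTNAME 120 345 2210"; output maps each variable
    // to the distance between its first and last reference. A wide span
    // hints at a long-lived, global-ish variable, though GOTO and EXSR
    // make this a guess at best, as noted above.
    public static Dictionary<string, int> Compute(IEnumerable<string> xref)
    {
        var spans = new Dictionary<string, int>();
        foreach (var line in xref)
        {
            var parts = line.Split(' ', StringSplitOptions.RemoveEmptyEntries);
            if (parts.Length < 2)
                continue;
            var refs = parts.Skip(1).Select(int.Parse).ToList();
            spans[parts[0]] = refs.Max() - refs.Min();
        }
        return spans;
    }
}
```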