Programming language without integer type [closed] - language-design

What would be the shortcomings of a language without an integer type (only float)? Which of them would be very serious or unsolvable? Suppose the compiler is smart enough to print a rounded number when it knows that a number is really an integer. Has such an idea been discussed in scientific papers or implemented in some language? (I can find neither.)

JavaScript is such a language: all numbers are double-precision (64-bit) floating point. There are a number of unfortunate consequences of this design choice, but clearly the result can be made to work...
Note that double-precision floating point can precisely represent all integers of up to 53 bits. So it can be more practical than you might think to use it as an integer substitute.
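To see the 53-bit limit concretely, here is a minimal sketch in Rust, whose f64 is the same IEEE 754 double as JavaScript's Number:

    fn main() {
        // An f64 has a 53-bit significand, so every integer up to 2^53 is exact.
        let limit = 2f64.powi(53); // 9_007_199_254_740_992
        assert_ne!(limit - 1.0, limit); // below 2^53, consecutive integers are distinct
        assert_eq!(limit + 1.0, limit); // 2^53 + 1 is not representable and rounds back down

        // Within the exact range, integer arithmetic behaves perfectly.
        assert_eq!(3.0f64 * 7.0, 21.0);
    }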

I believe that the problem in question comes down to two points:
Floating-point data demands more of the processor than integer data, so depending on the size of the program this can cause overhead.
The other point is main-memory usage: along the same lines as the processor cost, floating-point data can require more space than integer data.
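A quick way to check the memory side of this in Rust (sizes shown are for a typical 64-bit target):

    use std::mem::size_of;

    fn main() {
        // A double-precision float is twice the size of a 32-bit integer,
        // though it matches a 64-bit integer exactly.
        println!("i32: {} bytes", size_of::<i32>()); // 4
        println!("i64: {} bytes", size_of::<i64>()); // 8
        println!("f64: {} bytes", size_of::<f64>()); // 8
    }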
I hope this contributes to answering your question.

There's a relatively new proposal for a decimal floating-point number format, http://dec64.com/ . It would be really interesting to see a language support it natively.
As for the shortcomings of only using floats, nothing that would be a real problem comes to mind. Some operations would be strange, as is the case in JavaScript with things like bit twiddling.
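To illustrate the bit-twiddling strangeness: JavaScript's bitwise operators first coerce their float operands to 32-bit integers. A rough Rust sketch of that coercion (a simplification; it is not exact for magnitudes beyond the i64 range):

    fn to_int32(x: f64) -> i32 {
        // NaN and infinities coerce to 0, as in JavaScript.
        if !x.is_finite() {
            return 0;
        }
        // Truncate toward zero, then keep only the low 32 bits.
        (x.trunc() as i64 & 0xFFFF_FFFF) as u32 as i32
    }

    fn main() {
        assert_eq!(to_int32(3.7), 3);
        assert_eq!(to_int32(f64::NAN), 0);
        // 2^31 wraps around to -2^31, just like (2**31) | 0 in JavaScript.
        assert_eq!(to_int32(2147483648.0), i32::MIN);
    }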


Why does OO combine code and data together? [closed]

I'm fairly new to programming, and I came to this question:
Why should an object carry code along with its data? Isn't packing the data enough?
For example:
Instead of having 5 employee objects that each have a getDataOfBirth() method (consuming more memory), have a single function in global space and 5 objects with only attributes (smaller objects).
Am I getting something wrong? Is this a general question that occurs to every newbie?
The linguistic aspect of it:
This is an idea that OOP skeptics have been talking about for a long time, but I would say it's largely a matter of preference. If you are new to programming and already thinking about these things, then maybe functional programming would make a lot of sense to you.
The memory aspect of it:
Functions are typically not stored inside the objects, so OO objects with many methods do not actually carry those functions around. This is an implementation detail, but it's how most OOP languages should be thought of.
Especially in natively compiled languages like C++, the code and the data are separated into different memory areas altogether and do not really mix. That is also a bit of an implementation detail, but as far as I know, all mainstream operating systems allocate memory for code separately from data. The functions of a class are allocated in one area, the data of the objects in another, and normally all objects of the same class share the same functions.
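You can check this directly. A small Rust sketch (the date struct is hypothetical): two types with identical fields, one with a method and one without, have exactly the same size, because the method lives in the code area, not in each object.

    #![allow(dead_code)]

    use std::mem::size_of;

    struct PlainDate {
        year: u16,
        month: u8,
        day: u8,
    }

    struct DateWithMethods {
        year: u16,
        month: u8,
        day: u8,
    }

    impl DateWithMethods {
        // Compiled once into the code segment and shared by every instance.
        fn date_of_birth(&self) -> (u16, u8, u8) {
            (self.year, self.month, self.day)
        }
    }

    fn main() {
        // Adding methods does not grow the objects: code and data live apart.
        assert_eq!(size_of::<PlainDate>(), size_of::<DateWithMethods>());
    }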

Binary serialisation of Rust data structures [closed]

What is the current state of serialisation-to-binary in Rust?
I have some large (1-10MB) data structures to be sent across a network, and I don't want to encode them as JSON or hex (the two serialisers I have found).
I have found #[repr(packed)]. Is this what I should use, or is there something more portable?
#[repr(packed)] only makes your data small. It does not offer any format guarantees or any serialization help.
You have a few choices here (ordered by my opinion, from best to worst solution):

1. You can use the Cap'n Proto implementation for Rust: https://github.com/dwrensha/capnproto-rust
   - It's not really serialization, more of an enforced format for structs that are then sent over the network without any conversion.
   - Fast.
2. You could write your own Serializer and Deserializer.
   - You have full control over the format.
   - Runtime overhead for every single datum.
   - You need to implement lots of stuff.
3. You can transmute your structs to a [u8] and send that (see the sketch after this list).
   - Probably the fastest solution.
   - You need to make sure the compiler for the program on both sides is exactly the same, otherwise the memory layouts don't match up.
   - Someone malicious may send you bad data, and when you transmute that back you get buffer overflows and the like.
   - References in your data structure will become wild pointers and cause undefined behaviour, so don't use references.
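A minimal sketch of option 3, with the caveats above in mind. Packet is a made-up type whose fields are chosen so that the struct contains no padding bytes (reading uninitialized padding as bytes would itself be undefined behaviour):

    use std::mem::size_of;

    // A plain-old-data struct with a fixed layout and no references.
    // u32 + u32 + f64 leaves no padding under #[repr(C)].
    #[repr(C)]
    struct Packet {
        id: u32,
        flags: u32,
        value: f64,
    }

    // View the struct's memory as raw bytes, ready to write to a socket.
    fn as_bytes(p: &Packet) -> &[u8] {
        unsafe { std::slice::from_raw_parts(p as *const Packet as *const u8, size_of::<Packet>()) }
    }

    fn main() {
        let p = Packet { id: 7, flags: 0, value: 1.5 };
        let bytes = as_bytes(&p);
        // The receiver must agree on the exact layout (same compiler, same
        // target endianness, same struct definition) for this to round-trip.
        assert_eq!(bytes.len(), 16);
    }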

If I'm the only developer on a project, do I still need to use encapsulation? [closed]

I always hear that we need to encapsulate whenever we write object-oriented code. If I'm the only developer on a project, do I still need to use encapsulation?
One way to put an answer: encapsulation, conceptually, exists for writing better, safer, less error-prone code. It doesn't exist primarily to facilitate teams working together on code (that may be a side effect, but it's not the purpose).
So the goods that encapsulation seeks to foster scale from one coder to many coders. They don't really depend on the number of coders, although they may find stronger expression the larger the project and the team become.
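A concrete illustration of those goods (a hypothetical Rust type; the invariant is mine, not anything standard): the private field below cannot be corrupted anywhere else in the program, whether one person maintains it or fifty.

    mod units {
        // The inner value is private; the only way to construct a Celsius
        // is through new(), which enforces the invariant.
        pub struct Celsius(f64);

        impl Celsius {
            pub fn new(degrees: f64) -> Option<Celsius> {
                // Reject temperatures below absolute zero.
                if degrees >= -273.15 {
                    Some(Celsius(degrees))
                } else {
                    None
                }
            }

            pub fn degrees(&self) -> f64 {
                self.0
            }
        }
    }

    fn main() {
        let t = units::Celsius::new(21.5).expect("valid temperature");
        println!("{}", t.degrees());
        // t.0 = -500.0;  // does not compile: the field is private
        assert!(units::Celsius::new(-300.0).is_none());
    }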
Encapsulation is there for a reason.
Someone has to maintain and manage your code after you are done, right? And what if the project gets bigger and you gain team members?
So the answer is "yes": it is always best to use encapsulation whenever possible.
The fact that you are asking this question makes me wonder whether you have really grasped the value of encapsulation as a means to reduce, and thus deal with, complexity.
My theoretical computer science professor used to tell me that, in the end, if you consider the whole binary representation of a program, any program is just a number. Very big, indeed, but only a number. And that is true: every construct we use beyond 0 and 1 (C++, Java, Python, functional programming, object-oriented programming, aspect-oriented programming, etc.) exists only because we need more abstract means to arrive at the one number we need.

Are there Ciphers that get smaller? [closed]

I'm playing around with text transformations - ciphers. From all that I have surveyed, it seems that these algorithms either break even in terms of transformed message length or make it larger. Are there any known algorithms/text transformations that, when applied to a message, actually make the message smaller (not counting the key, of course)?
For instance, RSA makes the encrypted message quite a bit larger than the original. Is there any transformation (encryption, encoding, or whatever you want to call it) after which the message becomes smaller instead of larger?
I'm not doing this for security, so whether or not it's breakable is of no interest to me.
P.S. I've done a lot of research in this area already through search engines (Google, Wikipedia, etc.) but have found no results. I don't want to say that such a technique doesn't exist without at least posting the question publicly first.
Thanks!
Compression tries to make the input smaller. Obviously, lossless compression cannot make every input smaller: that's impossible by a counting argument, since there are fewer short outputs than longer inputs to map onto them.
You can encrypt the compressed input if you want. In principle, compression and encryption are orthogonal concepts, but in some situations the length of the compressed text can be used to attack the system.
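To make the ordering concrete, here is a toy Rust sketch: run-length encoding stands in for a real compressor and repeating-key XOR for a real cipher. Neither is production-grade; the point is only that the shrinking happens in the compression step, while the encryption step preserves length.

    // Run-length encode: each run becomes a (count, byte) pair.
    fn rle_compress(input: &[u8]) -> Vec<u8> {
        let mut out = Vec::new();
        let mut i = 0;
        while i < input.len() {
            let byte = input[i];
            let mut run = 1u8;
            while i + (run as usize) < input.len() && input[i + run as usize] == byte && run < u8::MAX {
                run += 1;
            }
            out.push(run);
            out.push(byte);
            i += run as usize;
        }
        out
    }

    // Length-preserving stand-in for a cipher: XOR with a repeating key.
    fn xor_cipher(data: &[u8], key: &[u8]) -> Vec<u8> {
        data.iter().zip(key.iter().cycle()).map(|(d, k)| d ^ k).collect()
    }

    fn main() {
        let message = b"aaaaaaaaaaaaaaaaaaaabbbbbbbbbbcc"; // 32 bytes, highly repetitive
        let compressed = rle_compress(message);            // 6 bytes: the shrinking step
        let ciphertext = xor_cipher(&compressed, b"key");  // encryption keeps the length
        assert!(ciphertext.len() < message.len());
        println!("{} -> {} bytes", message.len(), ciphertext.len());
    }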
At first I thought about language translation: some English phrases translate to a single Chinese character. That's not a rigorous, mathematical example, but I suppose it qualifies.
Alternatively, from a bit-wise perspective, it wouldn't be possible to encode 2 bits of information in 1 bit.

Is there a point to "optimizing" types on iOS devices? [closed]

I was just writing some code that dealt with an integer value in the range -24 to +24, and I made my method return an int... And I thought to myself: should I really be using a short in this case? I know it might have mattered back in the day when you had 48K of memory to work with, but does it really matter in today's modern world?
Is it OK to just be "int happy", even when I know my numbers are going to be very small?
All ARM CPUs have 32-bit integer registers and at least a 32-bit-wide L1 bus, so using a short will give absolutely no advantage [1] and may in some cases be detrimental to performance.
Leave the variable as an int and be safe in the knowledge that you'll get an optimal register width pretty much wherever you run the code.
[1] The exception to this rule is when using the NEON unit, in which case a 16-bit operation offers more parallelism than a 32-bit operation.
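If you do want to see where width matters, it's bulk storage rather than scalar locals. A small Rust illustration (the same reasoning applies to C's short vs. int):

    use std::mem::size_of;

    fn main() {
        // A value in -24..=24 fits in 16 (even 8) bits, but as a lone local
        // or return value it occupies a full register regardless.
        let as_i32: Vec<i32> = (-24..=24).collect();
        let as_i16: Vec<i16> = (-24..=24).collect();

        // In large arrays the narrower type halves memory traffic, which is
        // also where SIMD units like NEON gain their extra parallelism.
        println!("i32 buffer: {} bytes", as_i32.len() * size_of::<i32>()); // 196
        println!("i16 buffer: {} bytes", as_i16.len() * size_of::<i16>()); // 98
    }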