What is the technical term for the input used to calculate a check digit?

For example:
code = '7777-5';
input = code.substring(0, 4); // Returns '7777'
checkdigit = f(input); // f() produces a checkdigit
assert.areEqual(code, input + "-" + checkdigit);
Is there a technical term for input used above?
Specifically, I'm calculating check digits for ISBNs, but that shouldn't affect the answer.

Is "original number excluding the check digit" technical enough? :)
Actually, it's often the case, as in the link you posted, that the check digit or checksum ensures a property of the full input:
...[the check digit] must be such that the sum of all the ten digits, each multiplied by the integer weight, descending from 10 to 1, is a multiple of the number 11.
Thus, you'd check the full number and see if it meets this property.
It's "backwards" when you're initially generating the check digit. In that case, the function would be named generate_check_digit or similar, and I'd just name its parameter as "input".

Although I am not sure if there is a well-known specific technical term for the input, what LukeH suggested (message/data) seems common enough.
Wiki for checksum:
With this checksum, any transmission error that flips a single bit of the message, or an odd number of bits, will be detected as an incorrect checksum
Wiki for check digit:
A check digit is a form of redundancy check used for error detection, the decimal equivalent of a binary checksum. It consists of a single digit computed from the other digits in the message.

Related

The set of atomic irrational numbers used to express the character table and corresponding (unitary) representations

I want to calculate the irrational number expressed by the following formula in GAP:
3^(1/7). I've read through the related description here, but still can't figure out the trick. Will numbers like this appear in the computation of the character table and corresponding (unitary) representations?
P.S. Basically, I want to figure out the following question: For the computation of the character table and corresponding (unitary) representations, what is the minimum complete set of atomic irrational numbers used to express the results?
Regards,
HZ
You can't do that with GAP's standard cyclotomic numbers, as seventh roots of 3 are not cyclotomic. Indeed, suppose $r$ is such a root, i.e. a root of the polynomial $f = x^7 - 3 \in \mathbb{Q}[x]$. Then $r$ is cyclotomic if and only if the field $\mathbb{Q}(r)$ is a subfield of a cyclotomic field. By the Kronecker–Weber theorem this is equivalent to $\mathbb{Q}(r)$ being an abelian extension of $\mathbb{Q}$, i.e. to the Galois group of its Galois closure (the splitting field of $f$) being abelian. One can check that this is not the case here: that Galois group is a semidirect product of $C_7$ with $C_6$, which is not abelian.
So, $r$ is not cyclotomic.
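To spell out why the Galois group is nonabelian (a sketch, not GAP output): the splitting field of $f$ is $K = \mathbb{Q}(\zeta_7, 3^{1/7})$ with $[K:\mathbb{Q}] = 7 \cdot 6 = 42$. Its Galois group is generated by $\sigma\colon 3^{1/7} \mapsto \zeta_7\,3^{1/7}$ (fixing $\zeta_7$) and $\tau\colon \zeta_7 \mapsto \zeta_7^3$ (fixing $3^{1/7}$), and a direct check gives $\tau\sigma\tau^{-1} = \sigma^3 \neq \sigma$, so the group is the nonabelian semidirect product $C_7 \rtimes C_6$. Hence $\mathbb{Q}(r)$ cannot sit inside any cyclotomic field.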

Kotlin: Convert Hex String to signed integer via signed 2's complement?

Long story short, I am trying to convert strings of hex values to signed 2's-complement integers. I was able to do this in a single line of code in Swift, but for some reason I can't find anything analogous in Kotlin. String.toInt or String.toUInt just gives the straight base-16 to base-10 conversion. That works for some positive values, but not for any negative numbers.
How do I know I want the signed 2's complement? I've used this online converter and according to its output, what I want is the decimal from signed 2's complement, not the straight base 16 to base 10 conversion that's easy to do by hand.
So, "FFD6" should go to -42 (correct, confirmed in Swift and C#), and "002A" should convert to 42.
I would appreciate any help or even any leads on where to look, because yes, I've searched; I've googled the problem a bunch and, no, I haven't found a good answer.
I actually tried writing my own code to do the signed 2's complement, but so far it's not giving me the right answers and I'm pretty much at a loss. I'd really hope for a built-in command that does it instead; I feel like if other languages have that capability, Kotlin should too.
For 2's complement, you need to know how big the type is.
Your examples of "FFD6" and "002A" both have 4 hex digits (i.e. 2 bytes).  That's the same size as a Kotlin Short.  So a simple solution in this case is to parse the hex to an Int and then convert that to a Short.  (You can't convert it directly to a Short, as that would give an out-of-range error for the negative numbers.)
"FFD6".toInt(16).toShort() // gives -42
"002A".toInt(16).toShort() // gives 42
(You can then convert back to an Int if needed.)
You could similarly handle 8-digit (4-byte) values as Ints (parsing to a Long first), and 2-digit (1-byte) values as Bytes (parsing to an Int first).
For other sizes, you'd need to do some bit operations.  Based on this answer for Java, if you have e.g. a 3-digit hex number, you can do:
("FD6".toInt(16) xor 0x800) - 0x800 // gives -42
(Here 0x800 is the three-digit number with the top bit (i.e. sign bit) set.  You'd use 0x80000 for a five-digit number, and so on.  Also, for 9–16 digits, you'd need to start with a Long instead of an Int.  And if you need >16 digits, it won't fit into a Long either, so you'd need an arbitrary-precision library that handled hex…)
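If you need several widths, the same idea can be wrapped up in one place. Here's a sketch (hexToSigned is just a made-up name; it assumes plain hex digits with no prefix or sign, and Kotlin 1.5+ for the unsigned types):
fun hexToSigned(hex: String): Long {
    require(hex.isNotEmpty() && hex.length <= 16) { "expected 1..16 hex digits" }
    // A full 16-digit value can overflow toLong(16), so parse that case as unsigned.
    if (hex.length == 16) return hex.toULong(16).toLong()
    val value = hex.toLong(16)                   // plain base-16 parse, always non-negative here
    val signBit = 1L shl (hex.length * 4 - 1)    // top bit of the parsed width (4 bits per digit)
    return (value xor signBit) - signBit         // sign-extend via the xor trick above
}
hexToSigned("FFD6")   // gives -42
hexToSigned("002A")   // gives 42
hexToSigned("FD6")    // gives -42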

Is format ####0.000000 different to 0.000000?

I am working on some legacy code at the moment and have come across the following:
FooString = String.Format("{0:####0.000000}", FooDouble)
My question is: is the format string here, ####0.000000, any different from simply 0.000000?
I'm trying to generalize the return type of the function that sets FooDouble, so I'm checking to make sure I don't break existing functionality; hence I'm trying to work out what the # characters add here.
I've run a couple of tests in a toy program and couldn't see how the result was any different, but maybe there's something I'm missing?
From MSDN:
The "#" custom format specifier serves as a digit-placeholder symbol. If the value that is being formatted has a digit in the position where the "#" symbol appears in the format string, that digit is copied to the result string. Otherwise, nothing is stored in that position in the result string.
Note that this specifier never displays a zero that is not a significant digit, even if zero is the only digit in the string. It will display zero only if it is a significant digit in the number that is being displayed.
Because you use one 0 before the decimal separator (0.000000), both formats should return the same result.
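Not the original .NET call, but as a quick cross-check: Java's DecimalFormat (callable from Kotlin) uses the same '#' digit-placeholder and '0' zero-placeholder idea, and the two patterns behave identically there as well:
import java.text.DecimalFormat
import java.text.DecimalFormatSymbols
import java.util.Locale
fun main() {
    val symbols = DecimalFormatSymbols(Locale.US)                  // pin '.' as the decimal separator
    println(DecimalFormat("####0.000000", symbols).format(42.5))   // 42.500000
    println(DecimalFormat("0.000000", symbols).format(42.5))       // 42.500000
    println(DecimalFormat("####0.000000", symbols).format(0.25))   // 0.250000
    println(DecimalFormat("0.000000", symbols).format(0.25))       // 0.250000
}
The # placeholders before the mandatory 0 only mean "show extra integer digits if they exist", and both patterns show those digits anyway, so the output is the same.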

FORTRAN77: How to throw an error for the following: division symbol, timeout, very big floating value

1. For certain symbols like (/), (,) and (;) while taking input.
2. Timeout error while taking input.
3. Very big floating value as input.
4. Improper inputs like 4/3.
I found out how to time-out the program after a specific time:
https://gcc.gnu.org/onlinedocs/gcc-4.3.2/gfortran.pdf (find: alarm)
If I interpret your question correctly, you want to check user input for correct values and interpret lists and fractions. I'm assuming you mean from console, a la
read(*,*) character-variable
The other option is to use formatted input, for example
read(*,'(i4)') integer-variable
which would read an integer with four digits.
This method would possibly already remove some of your problems, because the user input has to match the specified format or the program reports a runtime error. It is possible to specify the number of input values as well (separated by whitespace, ',' or ';'). Hence if you know beforehand how many values you are getting, the user can enter lists. If you make the requirements clearer, it will be easier to help. Fortran is a bit finicky for I/O.
If you really need the input to be of a general not-defined-at-compile-time type, you'll have to parse the string. This is also true if you want the user to be able to enter fractions like '4/3'.
I'm not aware of a method to restrict the time which a user has to enter values. It may be possible, but I've never seen it.
For too big or improper values you can just, for example, wrap the read statement in an endless do loop and exit if the number(s) is/are correct:
do
  read(*,'(i6)') x
  if ( (x.lt.1e5).and.(x.ge.0) ) exit
end do
This would request an integer x from the user until the input is smaller than 100 000 and at least 0.
edit after discussion in comments:
The following code may be what you want:
implicit none
integer :: x
character(len=10) :: y

y = ''
print *, 'Enter one integer:'
do
  read(*,'(i10,a)') x, y
  if ( (y.eq.'').and.(x.lt.1e5) ) exit
  print *, 'Enter one valid integer, smaller than 100 000, only:'
end do
print *, x
end
It just reads until there is exactly one integer smaller than 100 000 in the input. If you want a better user experience, you can catch 'very' invalid input (input that makes the program complain and stop) with the iostat parameter.
One thing, though: on my two available compilers (GCC 4.4.7 and Intel Fortran compiler 11.0) the forward slash '/' is not a valid integer input and the program stops. If that is different for your compiler, the code above should still work, but I can't test that.

Give state diagrams of DFAs recognizing the following languages. In all parts the alphabet is {0,1}

I'm trying to get the hang of drawing DFAs. I have the following problem, along with my attempt at it; I was wondering if anyone could tell me whether I'm correct or, if not, what I'm doing wrong. Thanks! Also, if anyone has a good resource for learning more about how to do these, it would be greatly appreciated.
Give state diagrams of DFAs recognizing the following languages. In all parts the alphabet is {0,1}
{w | the length of w is at most 5}
Here are some clues.
"At most 5": this implies you must do some counting. In state machines, counting is accomplished by the context of each node. In other words, you will require a number of nodes, each with a special meaning, and that meaning will be your "counter value."
"At most 5": This means you must accept words of length 0, 1, 2, 3, 4, and 5. (All of which have unique values, hint hint.)
Your alphabet is {0,1}, but there are no requirements of the language of the frequency, ordering, or anything related to 0 and 1. This means every time there is a transition for 0, the same transition must be available to 1, and vice versa. (Or some equivalent relation that reduces to this rule - but this is in parentheses because it's not something you need to think about.)
Here are your errors:
You have no marked start state.
The strings "0", "" (the empty string), "1" are rejected, but are within the prescribed language. In other words, you are accepting only words that are exactly length 5, not all words that are length 5 and less.
Since the alphabet is {0, 1}, you must specify at EACH state what happens when either a 0 or a 1 is encountered. If you encounter an input character whose edge is NOT specified, by convention you are going to the dead state, a state that always returns to itself and is never accepted, but is left undrawn. This is why your right-most state is unnecessary, but your left states are incomplete.
Final, big hint: You can have more than one "Accept" or "Final" state.
I think the DFA shown above is wrong. It should accept strings up to length 5, so you should make all of the first six states final states. You are accepting only '1's, but it should also accept '0's, so add a 0 transition alongside every 1 transition.
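If it helps to double-check a corrected diagram, here is a tiny simulation of such a DFA in Kotlin (the state numbering q0..q5 plus an explicit dead state q6 is just one valid way to set it up, not part of the original question):
fun acceptsAtMostFive(w: String): Boolean {
    var state = 0                                 // start state q0 = "0 symbols read so far"
    for (c in w) {
        require(c == '0' || c == '1') { "alphabet is {0,1}" }
        state = if (state < 6) state + 1 else 6   // both 0 and 1 take the same transition; q6 is the dead state
    }
    return state <= 5                             // q0..q5 are all accepting
}
acceptsAtMostFive("")        // gives true (length 0)
acceptsAtMostFive("10101")   // gives true (length 5)
acceptsAtMostFive("010101")  // gives false (length 6)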