OWL: What's the difference between the int and integer types?

I have a datatype property that represents the number of parking spaces inside a building. I set its domain to Building, but when I tried to set the range of that property, I found that I have two options: int and integer.
I couldn't find the difference between them. Could you help, please?

OWL uses the XSD datatypes in its specification, so this question has the same answer as the XSD question "What is the difference between xs:integer and xs:int?", which says:
"The difference is the following: xs:int is a signed 32-bit integer, while xs:integer is an unbounded integer value."

Related

RedisGraph: Specifying the integer type of values? Max int?

Is there a way to specify which specific integer type a property can use: int16, uint32, etc.? Or is it just NUMBER?
Second: what is the biggest integer value that we can use in RedisGraph?
Currently, all integers are stored as 64-bit signed integers, so the maximum value will always be INT64_MAX, which is 0x7fffffffffffffff, or 9,223,372,036,854,775,807.
Since RedisGraph does not use a schema to enforce the types of properties (a.val can be an integer on one node and a string on another), values are stored in a 16-byte struct with type data, so being able to specify smaller integer types would not result in space savings.
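As a quick illustration from redis-cli (the graph key g and the label Num are made up for the example), the 64-bit maximum stores and reads back fine:

GRAPH.QUERY g "CREATE (:Num {val: 9223372036854775807})"
GRAPH.QUERY g "MATCH (n:Num) RETURN n.val"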

How to tell if an identifier is being assigned or referenced? (FLEX/BISON)

So, I'm writing a language using flex/bison and I'm having difficulty implementing identifiers, specifically knowing whether I'm looking at an assignment or a reference.
For example:
1) A = 1+2
2) B + C (where B and C have already been assigned values)
Example one I can work out by returning an ID token from flex to bison, and just following a grammar that recognizes that 1+2 is an integer expression, putting A into the symbol table, and setting its value.
Example two is more difficult for me because, after going through my lexer, what's being returned to bison is ID PLUS ID. I have a grammar that recognizes arithmetic expressions for numerical values, like INT PLUS INT (which would produce an INT) or DOUBLE MINUS INT (which would produce a DOUBLE). If I have ID PLUS ID, how do I know what type the result is?
Here's the best idea I've come up with so far: when tokenizing, every time an ID comes up, I look up its value and type in the symbol table and swap out the ID token for that information. For example: while tokenizing, I come across B, which a regex matches as an ID. I look in my symbol table and see that it has a value of 51.2 and is a DOUBLE, so instead of returning ID with a value of B to bison, I return DOUBLE with a value of 51.2.
I have two different solutions that contradict each other. Here's why: if I want to assign a value to an ID, I would say to my compiler A = 5. In this situation, if I'm using the solution just described, what I get after everything is tokenized might be INT ASGN INT, or STRING ASGN INT, etc. So in this case I would use the former solution (return a plain ID), as opposed to the latter.
My question would be: what kind of logical device do I use to help my compiler know which solution to use?
NOTE: I didn't think it necessary to post source code to describe my conundrum, but I will if anyone could use it effectively as a reference to help me understand their input on this topic.
Thank you.
The usual way is to have a yacc/bison rule like:
expr: ID { $$ = lookupId($1); }
where the lookupId function looks up a symbol in the symbol table and returns its type and value (or type and storage location if you're writing a compiler rather than a strict interpreter). Then, your other expr rules don't need to care whether their operands come from constants or symbols or other expressions:
expr: expr '+' expr { $$ = DoAddition($1, $3); }
The function DoAddition takes the types and values (or locations) for its two operands and either adds them, producing a result, or produces code to do the addition at run time.
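Putting both rules together, here is a sketch of how the pieces might fit (the Value struct, the T_* tags, and storeId are illustrative assumptions, not part of the answer above):

/* Tagged value used for every semantic value on the parser stack;
   assumes  #define YYSTYPE Value  in the grammar prologue. */
typedef struct {
    enum { T_INT, T_DOUBLE, T_NAME } type;
    union { long i; double d; char *name; } v;
} Value;

/* Assumed helpers (sketch only):
   Value lookupId(const char *name);           -- reads the symbol table
   void  storeId(const char *name, Value v);   -- writes the symbol table */

expr: ID            { $$ = lookupId($1.v.name); }  /* resolve uses here, once */
    | INT                                          /* default action: $$ = $1 */
    | DOUBLE
    | expr '+' expr { $$ = DoAddition($1, $3); }   /* dispatches on .type */
    ;

stmt: ID '=' expr   { storeId($1.v.name, $3); }    /* LHS is stored, never looked up */
    ;

Note that flex still returns a plain ID token in both positions; whether that ID is being assigned or referenced falls out of where it appears in the grammar, so the lexer never needs to consult the symbol table at all.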
If possible, redesign your language so that the situation is unambiguous. This is why even JavaScript has var.
Otherwise you're going to need to disambiguate via semantic rules, for example that the first use of an identifier is its declaration. I don't see what the problem is with your case (2): just generate the appropriate code. If B and C haven't been used yet, a value-reading use like this should be illegal, but that involves you in control flow analysis if taken to the Nth degree of accuracy, so you might prefer to assume initial values of zero.
In any case you can see that it's fundamentally a language design problem rather than a coding problem.

Datatype character declaration (0.05D): why is the D not redundant after declaration?

I'm taking a Visual Basic class and I've been taught to use a type character after declaring a constant variable that is a decimal, like so:
Const VARIABLE_NAME As Decimal = 0.06D
It seems redundant to me to add the D at the end, as I have already declared the data type. I'm afraid to ask my teacher, because I assume she probably won't be able to give me a clear answer in front of the class. I previously took a class on microprocessors, so I have some (little) understanding of how floats are stored in memory using binary. Can anyone give me a clear explanation so I can share it with my other classmates?
The data type you declare for your entity (a constant) is not necessarily the data type of the expression used to initialize that entity. You declare the type on the left side of the =, and it does not extend to the right. If the data types do not match, a conversion will need to happen upon assignment.
As documented, the type of a literal expression is dictated by its shape. A literal that falls under "Numeric, fractional part" is interpreted as a Double by default.
If you enable Option Strict On (which you should), the declaration
Const VARIABLE_NAME As Decimal = 0.06
will fail with the error:
Option Strict On disallows implicit conversions from 'Double' to 'Decimal'.
This is because there is no implicit conversion from Double to Decimal, as the Double data type can possibly contain values that Decimal cannot represent.
To avoid the conversion, you provide a type character D that makes the literal Decimal in the first place.
Compare this to
Const VARIABLE_NAME As Decimal = 42
The left part is Decimal and the right part is Integer, but no compile error occurs even with Option Strict On: there is an implicit widening conversion from Integer to Decimal, because Decimal can represent every value an Integer can possibly hold.
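You can verify the literal types for yourself with a quick sketch (a hypothetical console snippet, not from the class material):

Module LiteralTypes
    Sub Main()
        ' Without a type character, a fractional literal is Double.
        Console.WriteLine((0.06).GetType().Name)   ' prints Double
        ' With the D type character it is Decimal from the start.
        Console.WriteLine((0.06D).GetType().Name)  ' prints Decimal
        ' An integer literal widens implicitly to Decimal.
        Dim score As Decimal = 42
        Console.WriteLine(score.GetType().Name)    ' prints Decimal
    End Sub
End Module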

Casting from a Packed(8) type to a TMSTMP (DEC15) type in a Unicode system (and back)

Background:
I have several tables that are connected for maintenance in a view cluster (SE54). Each of these tables has the standard Created/Changed By/On fields. For created data, updating the fields is easy: I use event 05 (On Create) in the Table Maintenance Generator. Defaulting the changed fields is a little more involved: I have to use event 01 (Before Save) and then update the internal tables TOTAL[] and EXTRACT[] with the field values as needed.
When maintaining the table in SM30, the format of TOTAL[] and EXTRACT[] is the same as the view I'm maintaining, with an additional flag identifying the type of change being made (update/create/delete).
However, when maintaining in SM34 (which is the business requirement), the format of TOTAL[] and EXTRACT[] is just an internal table of character lines.
Problem:
I can figure out the type of the table being edited. But when I try to move the character line to the typed line, I get one of the following runtime errors, depending on how I try to move/assign it:
ASSIGN_BASE_TOO_SHORT
UC_OBJECTS_NOT_CONVERTIBLE
UC_OBJECTS_NOT_CHAR
All my structures are in the following format:
*several generic (flat) types
CREATED TYPE TMSTMP, "not a flat type
CHANGED TYPE TMSTMP, "not a flat type
CREATED_BY TYPE ERNAM,
CHANGED_BY TYPE AENAM,
The root of the problem is that the two timestamp fields are not flat types. I can see in the character line that each timestamp is represented by 8 characters.
Edit: Only after finding the solution could I identify the Length(8) field as packed. (That matches a DEC15 field: packed decimal stores two digits per byte plus a sign nibble, so 15 digits fit in exactly 8 bytes.)
I have tried the following in vain:
"try the entire structure - which would be ideal
assign ls_table_line to <fs_of_the_correct_type> casting.
"try isolating just the timestamp field(s)
assign <just_the_8char_representation> to <fs_of_type_tmpstmp> casting.
I've tried a few other variations on the "single field only" option with no luck.
Any ideas how I can cast from the Character type to type TMSTMP and then back again in order to update the internal table values?
I've found that the following works:
Instead of using:
field-symbols: <structure> type ty_mystructure,
<changed> type tmstmp.
assign gv_sapsingle_line to <structure> casting. "causes a runtime error
assign gv_sap_p8_field to <changed> casting. "ditto
I used this:
field-symbols: <structure> type any,
<changed> type any.
assign gv_sapsingle_line to <structure> casting type ty_mystructure.
assign gv_sap_p8_field to <changed> casting type ty_tmstmp.
For some reason it didn't like that I predefined the field symbols.
I find that odd as the documentation states the following:
Casting with an Implicit Type Declaration
Provided the field symbol is either fully typed or has one of the generic built-in ABAP types – C, N, P, or X – you can use the following statement:
ASSIGN ... TO <FS> CASTING.
When the system accesses the field symbol, the content of the assigned data object is interpreted as if it had the same type as the field symbol.
I can only assume that my structure wasn't compatible (due to the P8 -> TMSTMP conversion). The documentation goes on:
The length and alignment of the data object must be compatible with the field symbol type. Otherwise the system returns a runtime error. If the type of either the field symbol or the data object is – or contains – a string, reference type, or internal table, the type and position of these components must match exactly. Otherwise, a runtime error occurs.
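For the "and back" direction, the same CASTING TYPE overlay can be used to write a fresh timestamp into the character line. This is an untested sketch along the lines of the solution above (gv_sapsingle_line, ty_mystructure, and the CHANGED component are the names used earlier; lv_timestamp is new):

field-symbols: <line>    type any,
               <changed> type any.
data lv_timestamp type timestamp. "standard DEC15 dictionary type

"overlay the structured view on the character-based line
assign gv_sapsingle_line to <line> casting type ty_mystructure.
"address the packed timestamp component through the overlay
assign component 'CHANGED' of structure <line> to <changed>.
get time stamp field lv_timestamp.
<changed> = lv_timestamp.
"gv_sapsingle_line now carries the updated 8-byte packed value
"and can be modified back into TOTAL[] / EXTRACT[]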

Is it possible to create a custom byte type in .NET?

I am creating a WCF service in VB.NET inside VS 2010. I have a handful of properties that are currently Bytes (0-255) and represent different test scores. Is it possible for me to create my own type, based on this, that will only allow values between 0 and 110? For example, if I have
Dim a as Byte
a = 256
I will get "Constant expression not representable in type 'Byte'." before the code is compiled. I want to have something like this for my own type so the below code would give me "Constant expression not representable in type 'myByte'."
Dim a as myByte
a = 111
You can only build on the predefined (native) types, such as Byte, and implement some features, like overloading operators to check minimum and maximum values. However, not every operator can be overloaded, which, in this case, includes the assignment operator '='.
Check http://msdn.microsoft.com/en-us/library/8edha89s%28v=vs.71%29.aspx and the related tutorials to see if they help.
To assign a value to your type you can make use of properties or methods that set the value while checking for boundaries and other conditions; that is perfectly doable. But defining it as a native type... negative, sir.
Nope, I don't think that's possible. You'll have to use a constructor to initialize your myByte instance and do the range check at runtime (not sure how useful that would be).
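To make the runtime-check idea from the first answer concrete, here is a sketch of such a structure (TestScore and its members are invented names; the compile-time literal check that Byte enjoys is simply not available for user-defined types):

Public Structure TestScore
    Private ReadOnly _value As Byte

    Private Sub New(ByVal value As Byte)
        _value = value
    End Sub

    ' Narrowing conversion: validates the 0-110 range, but only at runtime.
    Public Shared Narrowing Operator CType(ByVal value As Integer) As TestScore
        If value < 0 OrElse value > 110 Then
            Throw New ArgumentOutOfRangeException("value", "Score must be between 0 and 110.")
        End If
        Return New TestScore(CByte(value))
    End Operator

    ' Widening conversion: reading the score back out is always safe.
    Public Shared Widening Operator CType(ByVal score As TestScore) As Integer
        Return score._value
    End Operator
End Structure

Usage would look like Dim a As TestScore = CType(110, TestScore), which succeeds, while CType(111, TestScore) throws at runtime; nothing can make the assignment fail at compile time the way a Byte literal overflow does.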