Scodec: How to create a codec for an optional byte - scodec

I must create a codec for a message that has the following specification
The message length is indicated by a byte of which the least significant bit is an extension flag that, when set, indicates that the following (optional) byte must be used as the most significant byte. (Hope it makes sense.) It can be depicted as follows:
+----------------------------------------------------------------------------------+
|                                      LENGTH                                      |
|                                                                                  |
+----------------------------------+-----+-+---------------------------------------+
|                                  |     | |                                       |
|            Len1 (LSB)            | Ext | |         Len2 (MSB) - Optional         |
+----+----+----+----+----+----+----+-----+ +----+----+----+----+----+----+----+----+
|    |    |    |    |    |    |    |     | |    |    |    |    |    |    |    |    |
|    |    |    |    |    |    |    |  +  | |    |    |    |    |    |    |    |    |
+----+----+----+----+----+----+----+--|--+ +----+----+----+----+----+----+----+----+
                                      |
                                      |
                                      v
                     Boolean: if true then Len2 is used
                              else only Len1
The length of the data that will follow is determined by this field (or fields). I would like to use the codec along with predefined codecs and combinators.
I guess it will involve using flatZip, but I am not clear on how to incorporate flatZip into an HList combinator.
Any pointers to examples or documentation will be much appreciated.

One way to do this is using the scodec.codecs.optional combinator, which returns a Codec[Option[A]] given a Codec[Boolean] and a Codec[A].
val structure: Codec[(Int, Option[Int])] = uint(7) ~ optional(bool, uint8)
This gives us a codec of (Int, Option[Int]) - we need to convert this to a codec of Int. To do so, we'll need to provide a conversion from Int to (Int, Option[Int]) and another conversion in the reverse direction. We know the size field is at most 2^15 - 1 (7 LSB bits and 8 MSB bits), so converting from (Int, Option[Int]) to Int is total, whereas converting in the reverse direction could possibly fail -- for example, 2^16 cannot be represented in this structure. Hence, we can use widen to do the conversion:
val size: Codec[Int] = structure.widen[Int](
  { case (lsb, msb) => lsb + msb.map(_ << 7).getOrElse(0) },
  { sz =>
    val msb = sz >>> 7
    if (msb > 255 || msb < 0) Attempt.failure(Err(s"invalid size $sz"))
    else Attempt.successful((sz & 0x7F, if (msb > 0) Some(msb) else None))
  })
Finally, we can use this size codec to encode a variable length structure via variableSizeBytes:
val str: Codec[String] = variableSizeBytes(size, ascii)
This gives us a Codec[String] which prefixes the encoded string bytes with the size in bytes, encoded according to the scheme defined above.
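The widen conversion above is plain bit arithmetic, so it can be sketched in ordinary Scala without scodec at all (the object and method names below are mine for illustration, not part of the scodec API):

```scala
// Plain-Scala sketch of the same arithmetic, independent of scodec.
// LengthField is an illustrative name, not part of the scodec API.
object LengthField {
  // Split a size into (len1, extension flag, optional len2),
  // mirroring the reverse direction of the widen above.
  def encode(sz: Int): Either[String, (Int, Boolean, Option[Int])] = {
    val msb = sz >>> 7
    if (msb > 255 || msb < 0) Left(s"invalid size $sz")
    else Right((sz & 0x7f, msb > 0, if (msb > 0) Some(msb) else None))
  }

  // Reassemble the size, mirroring the forward direction of the widen above.
  def decode(len1: Int, len2: Option[Int]): Int =
    len1 + len2.map(_ << 7).getOrElse(0)
}
```

For example, LengthField.encode(300) yields Right((44, true, Some(2))), and LengthField.decode(44, Some(2)) restores 300.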

Related

Clang-format: don't align long variable assignment to equals sign

Clang-format is wrapping my long variable assignments. In doing so, however, it's also aligning any continuation lines relative to the equals sign, like this:
long long variable = 1000 | 1100000000001 | 122365781 | 1256523983472934
                     | 2452346256 | 1646478592 | 126234753952359
                     | 435234523425345;
However, I want this to respect ContinuationIndentWidth:
long long variable = 1000 | 1100000000001 | 122365781 | 1256523983472934
    | 2452346256 | 1646478592 | 126234753952359
    | 435234523425345;
Is there any way to achieve this with clang-format?
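For what it's worth, a .clang-format fragment along these lines may produce the ContinuationIndentWidth-based layout. This is an untested sketch, and the DontAlign value for AlignOperands requires clang-format 11 or newer:

```yaml
# Sketch: stop aligning wrapped operands to the '=' and fall back to
# ContinuationIndentWidth (AlignOperands: DontAlign needs clang-format >= 11).
AlignOperands: DontAlign
ContinuationIndentWidth: 4
```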

How To Check Numerical Format in SQL Server 2008

I am converting some existing Oracle queries to MSSQL Server (2008) and can't figure out how to replicate the following Regex check:
SELECT SomeField
FROM SomeTable
WHERE NOT REGEXP_LIKE(TO_CHAR(SomeField), '^[0-9]{2}[.][0-9]{7}$');
That finds all results where the format of the number starts with 2 positive digits, followed by a decimal point, and 7 decimal places of data: 12.3456789
I've tried using STR, CAST, CONVERT, but they all seem to truncate the decimal to 4 decimal places for some reason. The truncating has prevented me from getting reliable results using LEN and CHARINDEX. Manually adding size parameters to STR gets slightly closer, but I still don't know how to compare the original numerical representation to the converted value.
SELECT SomeField
     , STR(SomeField, 10, 7)
     , CAST(SomeField AS VARCHAR)
     , LEN(SomeField)
     , CHARINDEX(STR(SomeField), '.')
FROM SomeTable
+------------------+------------+---------+-----+-----------+
| Orig             | STR        | Cast    | LEN | CHARINDEX |
+------------------+------------+---------+-----+-----------+
| 31.44650944      | 31.4465094 | 31.4465 | 7   | 0         |
| 35.85609         | 35.8560900 | 35.8561 | 7   | 0         |
| 54.589623        | 54.5896230 | 54.5896 | 7   | 0         |
| 31.92653899      | 31.9265390 | 31.9265 | 7   | 0         |
| 31.4523333333333 | 31.4523333 | 31.4523 | 7   | 0         |
| 31.40208955      | 31.4020895 | 31.4021 | 7   | 0         |
| 51.3047869443893 | 51.3047869 | 51.3048 | 7   | 0         |
| 51               | 51.0000000 | 51      | 2   | 0         |
| 32.220633        | 32.2206330 | 32.2206 | 7   | 0         |
| 35.769247        | 35.7692470 | 35.7692 | 7   | 0         |
| 35.071022        | 35.0710220 | 35.071  | 6   | 0         |
+------------------+------------+---------+-----+-----------+
What you want to do does not make sense in SQL Server.
Oracle supports a number data type that has a variable precision:
if a precision is not specified, the column stores values as given.
There is no corresponding data type in SQL Server. You can have a variable-precision number (float/real) or a fixed-precision number (decimal/numeric). However, both apply to ALL values in a column, not to individual values within a row.
The closest you could do is:
where somefield >= 0 and somefield < 100
Or if you wanted to insist that there is a decimal component:
where somefield >= 0 and somefield < 100 and floor(somefield) <> somefield
However, you might have valid integer values that this would filter out.
This answer gave me an option that works in conjunction with checking the decimal position first.
SELECT SomeField
FROM SomeTable
WHERE SomeField IS NOT NULL
  AND CHARINDEX('.', SomeField) = 3
  AND LEN(CAST(CAST(REVERSE(CONVERT(VARCHAR(50), SomeField, 128)) AS FLOAT) AS BIGINT)) = 7
While I understand this is terrible by nearly all metrics, it satisfies the requirements.
The basis of checking formatting on this data type is inherently flawed, as pointed out by several posters; however, for this very isolated use case I wanted to document the workaround.

Not able to provide decimal fields with fewer digits than defined in fraction-digits

I am working with YANG (RFC 6020). I have a leaf node 'Frequency' in YANG. The Frequency field is of type decimal64, with fraction-digits defined as 6 and a range of -90.000000 to 90.000000.
While trying to validate and save, the following happens:
A number with 6 decimals gets saved, e.g. 34.000001
A number with no decimals gets saved, e.g. 34
But when I try to save a number with fewer than 6 decimal digits,
it doesn't get saved. It gives the following error:
eg.
34.1:
"wrong fraction-digits 1 for type decimal64"
34.001 :
"wrong fraction-digits 3 for type decimal64"
34.00001 :
"wrong fraction-digits 5 for type decimal64"
I tried searching the net; not much documentation is available on this.
I tried playing around with the range parameter, but it does not work.
leaf Frequency {
  description "Frequency";
  type decimal64 {
    fraction-digits 6;
    range "-90.000000..90.000000";
  }
  default 0;
}
I expect to be able to save values with/without decimal values where no of decimal values can vary from 0 to 6 digits. eg. 34, 34.1, 34.0004, 34.000001 etc
The value space for a decimal64 YANG type with fraction-digits set to 6 is the set of real numbers in the following range: -9223372036854.775808..9223372036854.775807. 34, 34.1, 34.001, 34.004, and 34.00001 are all within this range and are therefore valid values.
This is what the RFC says about decimal64 built-in type (RFC6020, Section 9.3, p1):
The decimal64 type represents a subset of the real numbers, which can
be represented by decimal numerals. The value space of decimal64 is
the set of numbers that can be obtained by multiplying a 64-bit
signed integer by a negative power of ten, i.e., expressible as
"i x 10^-n" where i is an integer64 and n is an integer between 1 and
18, inclusively.
So basically, d x 10^f, where d is a decimal64 value and f is fraction-digits, must result in a valid int64 value, which ranges from -9223372036854775808 to 9223372036854775807, inclusively.
Here is how fraction-digits is defined (RFC6020, Section 9.3.4, p1):
The "fraction-digits" statement, which is a substatement to the
"type" statement, MUST be present if the type is "decimal64". It
takes as an argument an integer between 1 and 18, inclusively. It
controls the size of the minimum difference between values of a
decimal64 type, by restricting the value space to numbers that are
expressible as "i x 10^-n" where n is the fraction-digits argument.
The following table lists the minimum and maximum value for each
fraction-digit value:
+----------------+-----------------------+----------------------+
| fraction-digit | min                   | max                  |
+----------------+-----------------------+----------------------+
| 1              | -922337203685477580.8 | 922337203685477580.7 |
| 2              | -92233720368547758.08 | 92233720368547758.07 |
| 3              | -9223372036854775.808 | 9223372036854775.807 |
| 4              | -922337203685477.5808 | 922337203685477.5807 |
| 5              | -92233720368547.75808 | 92233720368547.75807 |
| 6              | -9223372036854.775808 | 9223372036854.775807 |
| 7              | -922337203685.4775808 | 922337203685.4775807 |
| 8              | -92233720368.54775808 | 92233720368.54775807 |
| 9              | -9223372036.854775808 | 9223372036.854775807 |
| 10             | -922337203.6854775808 | 922337203.6854775807 |
| 11             | -92233720.36854775808 | 92233720.36854775807 |
| 12             | -9223372.036854775808 | 9223372.036854775807 |
| 13             | -922337.2036854775808 | 922337.2036854775807 |
| 14             | -92233.72036854775808 | 92233.72036854775807 |
| 15             | -9223.372036854775808 | 9223.372036854775807 |
| 16             | -922.3372036854775808 | 922.3372036854775807 |
| 17             | -92.23372036854775808 | 92.23372036854775807 |
| 18             | -9.223372036854775808 | 9.223372036854775807 |
+----------------+-----------------------+----------------------+
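As a sanity check (my own sketch, not part of the original answer), the fraction-digits 6 row of this table can be reproduced by scaling the int64 extremes:

```scala
// The decimal64 bounds for fraction-digits 6 are the int64 extremes
// multiplied by 10^-6, i.e. divided by 10^6.
val scale = BigDecimal(10).pow(6)
val min6 = BigDecimal(Long.MinValue) / scale // -9223372036854.775808
val max6 = BigDecimal(Long.MaxValue) / scale //  9223372036854.775807
```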
The tool you are using is wrong. The following is valid YANG:
typedef foobar {
  type decimal64 {
    fraction-digits 6;
    range "-90.000000..90.000000";
  }
  default 34.00001;
}
YANG 1.1 (RFC7950) did not change this aspect of the language (the same applies).

More than 3 options are not working while declaring Xtext grammar

Particularly when I use more than 3 OR symbols.
datatype:
  Integer | Float | Char | Blah | Blah
entity:
  Class | Struct | Enumeration | Union
the complete grammar can be found here: https://gist.github.com/Mrprofessor/7b8df3f00c75ef2ac67bffd0a20e983c
The problem is that your grammar is ambiguous.
Consider this model:
Bla;
Blubb;
Pling;
Are these Bits | Pointers | Labels | Entrys | Logicals | HwordLogicals | Bytes?

Understanding the distance relationships in a Longitude and Latitude equation from an SQL query

In a PHP program that I did not develop, I am able to enter via a form a distance (radius in miles) from a given US zipcode, from which to do a proximity search.
Let's take, for example, the city Gastonia, NC with a zipcode of 28054 and a radius distance of 10 miles.
The PHP code generates the SQL query dynamically. Before it gets to that point it does its calculations behind the scenes and gives these values:
:minlat (Float) 35.084832880851
:maxlat (Float) 35.374297119149
:minlon (Float) -81.305653802747
:maxlon (Float) -80.951286197253
It also gives this distance:
:distance (Float) 16093.47
I cannot see or manipulate the code that generates these values given the distance I entered into the form. However, I can override the values of each of these calculated variables.
I understand the :minlat and :minlon; it's the central point of my zipcode. What I don't understand is: what is :distance, and what relationship does it have to :maxlat and :maxlon?
What type of measurement is :distance given that it started as 10 miles in the form?
Obviously :distance added to :maxlat or :maxlon doesn't make any sense.
What I ultimately want to be able to do is take a :minlat and :minlon, for which I have a database of points, and then search within a certain :distance.
So, if I wanted to search 20 miles, that would be a :distance of roughly 32186, but how does that affect :maxlat and :maxlon?
If you are interested in the entire SQL query, it's:
SELECT node.title AS node_title,
node.nid AS nid,
node.created AS node_created,
'node' AS field_data_field_item_photos_node_entity_type,
(COALESCE(ACOS(0.81684734668492*COS(RADIANS(location.latitude))*(0.15421945466762*COS(RADIANS(location.longitude)) + -0.9880366186544*SIN(RADIANS(location.longitude))) + 0.57685389156511*SIN(RADIANS(location.latitude))), 0.00000)*6370997.0816549) AS location_distance
FROM
{node} node
LEFT JOIN {location_instance} location_instance ON node.vid = location_instance.vid
LEFT JOIN {location} location ON location_instance.lid = location.lid
WHERE (( (node.status = '1')
AND (location.latitude > '35.084832880851'
AND location.latitude < '35.374297119149'
AND location.longitude > '-81.305653802747'
AND location.longitude < '-80.951286197253')
AND ((COALESCE(ACOS(0.81684734668492*COS(RADIANS(location.latitude))*(0.15421945466762*COS(RADIANS(location.longitude)) + -0.9880366186544*SIN(RADIANS(location.longitude))) + 0.57685389156511*SIN(RADIANS(location.latitude))), 0.00000)*6370997.0816549) < '16093.47') ))
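Piecing the numbers together (this is my own reconstruction, not the hidden PHP): :distance looks like the form's radius converted to metres (10 mi is about 16093.47 m), and :minlat/:maxlat/:minlon/:maxlon look like a bounding box around the zipcode's centre point, obtained by converting that distance to degrees with the same 6370997 m earth radius the query uses:

```scala
// Reconstruction sketch: distM is the radius in metres; 6370997 m is the
// earth radius from the generated SQL, so one degree of latitude spans
// pi * R / 180 metres.
val earthRadiusM = 6370997.0
val metresPerDegLat = math.Pi * earthRadiusM / 180.0 // about 111195

def boundingBox(lat: Double, lon: Double, distM: Double): (Double, Double, Double, Double) = {
  val dLat = distM / metresPerDegLat
  // Longitude degrees shrink with latitude, hence the cos(lat) factor.
  val dLon = distM / (metresPerDegLat * math.cos(math.toRadians(lat)))
  (lat - dLat, lat + dLat, lon - dLon, lon + dLon)
}

// Centre of zip 28054 implied by the generated values, radius 10 mi = 16093.47 m:
val (minLat, maxLat, minLon, maxLon) =
  boundingBox(35.229565, -81.12847, 16093.47)
```

Under this reading, a 20-mile search is the same box computed with distM of roughly 32186.9, and :maxlat/:maxlon are simply the centre plus the radius converted to degrees.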
Table structure
+-----------+---------------+------+-----+----------+-------+
| Field     | Type          | Null | Key | Default  | Extra |
+-----------+---------------+------+-----+----------+-------+
| zip       | varchar(16)   | NO   | MUL | 0        |       |
| city      | varchar(30)   | NO   |     |          |       |
| state     | varchar(30)   | NO   |     |          |       |
| latitude  | decimal(10,6) | NO   | MUL | 0.000000 |       |
| longitude | decimal(10,6) | NO   | MUL | 0.000000 |       |
| timezone  | tinyint(4)    | NO   |     | 0        |       |
| dst       | tinyint(4)    | NO   |     | 0        |       |
| country   | char(2)       | NO   | MUL |          |       |
+-----------+---------------+------+-----+----------+-------+