Context-Free Grammar: Kleene plus

I am defining the set of natural numbers with a context-free grammar.
N ::= 0
| 1
| 2
| 3
| 4
| 5
| 6
| 7
| 8
| 9
| ... (one or more of the digits above, expressed with Kleene plus)
How can I express a natural number, for example 1495, without Kleene plus?

You could express a natural number recursively: keep the digit alternatives and add a production in which one natural number is followed by another.
N ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | N N
For 1495: it is the natural number 1 followed by the natural number 495; 495 is 4 followed by 95; and 95 is 9 followed by the single natural number 5.
A cleaner formulation separates the digits into their own nonterminal:

DIGIT ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
N ::= DIGIT | DIGIT N
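For example, 1495 can then be derived step by step:
N => DIGIT N => 1 N => 1 DIGIT N => 1 4 N => 1 4 DIGIT N => 1 4 9 N => 1 4 9 DIGIT => 1 4 9 5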


Not able to provide decimal values with fewer fraction digits than defined in fraction-digits

I am working with YANG (RFC 6020). I have a leaf node 'Frequency' of type decimal64, with fraction-digits defined as 6 and a range of -90.000000 to 90.000000.
While trying to validate and save, the following happens:
A number with 6 decimals gets saved, e.g. 34.000001.
A number with no decimals gets saved, e.g. 34.
But when I try to save a number with fewer than 6 decimal digits, it does not get saved and the following error is reported:
34.1: "wrong fraction-digits 1 for type decimal64"
34.001: "wrong fraction-digits 3 for type decimal64"
34.00001: "wrong fraction-digits 5 for type decimal64"
I tried searching the net, but there is not much documentation available on this. I also tried playing around with the range parameter, but it did not help.
leaf Frequency {
    description "Frequency";
    type decimal64 {
        fraction-digits 6;
        range "-90.000000..90.000000";
    }
    default 0;
}
I expect to be able to save values with or without decimals, where the number of decimal digits can vary from 0 to 6, e.g. 34, 34.1, 34.0004, 34.000001, etc.
The value space for a decimal64 YANG type with fraction-digits set to 6 is the set of real numbers in the range -9223372036854.775808..9223372036854.775807. 34, 34.1, 34.001, 34.004, and 34.00001 are all within this range and are therefore valid values.
This is what the RFC says about decimal64 built-in type (RFC6020, Section 9.3, p1):
The decimal64 type represents a subset of the real numbers, which can
be represented by decimal numerals. The value space of decimal64 is
the set of numbers that can be obtained by multiplying a 64-bit
signed integer by a negative power of ten, i.e., expressible as
"i x 10^-n" where i is an integer64 and n is an integer between 1 and
18, inclusively.
So basically, d x 10^f, where d is a decimal64 value and f is fraction-digits, must result in a valid int64 value, which ranges from -9223372036854775808 to 9223372036854775807, inclusively.
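To make that concrete for fraction-digits 6 (an illustrative check, not text from the RFC):
34.1     = 34100000 x 10^-6   (34100000 fits comfortably in an int64)
34.001   = 34001000 x 10^-6   (valid)
34.00001 = 34000010 x 10^-6   (valid)
The lexical form does not have to spell out all six fraction digits; the value only has to be expressible as i x 10^-6.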
Here is how fraction-digits is defined (RFC6020, Section 9.3.4, p1):
The "fraction-digits" statement, which is a substatement to the
"type" statement, MUST be present if the type is "decimal64". It
takes as an argument an integer between 1 and 18, inclusively. It
controls the size of the minimum difference between values of a
decimal64 type, by restricting the value space to numbers that are
expressible as "i x 10^-n" where n is the fraction-digits argument.
The following table lists the minimum and maximum value for each
fraction-digit value:
+----------------+-----------------------+----------------------+
| fraction-digit | min | max |
+----------------+-----------------------+----------------------+
| 1 | -922337203685477580.8 | 922337203685477580.7 |
| 2 | -92233720368547758.08 | 92233720368547758.07 |
| 3 | -9223372036854775.808 | 9223372036854775.807 |
| 4 | -922337203685477.5808 | 922337203685477.5807 |
| 5 | -92233720368547.75808 | 92233720368547.75807 |
| 6 | -9223372036854.775808 | 9223372036854.775807 |
| 7 | -922337203685.4775808 | 922337203685.4775807 |
| 8 | -92233720368.54775808 | 92233720368.54775807 |
| 9 | -9223372036.854775808 | 9223372036.854775807 |
| 10 | -922337203.6854775808 | 922337203.6854775807 |
| 11 | -92233720.36854775808 | 92233720.36854775807 |
| 12 | -9223372.036854775808 | 9223372.036854775807 |
| 13 | -922337.2036854775808 | 922337.2036854775807 |
| 14 | -92233.72036854775808 | 92233.72036854775807 |
| 15 | -9223.372036854775808 | 9223.372036854775807 |
| 16 | -922.3372036854775808 | 922.3372036854775807 |
| 17 | -92.23372036854775808 | 92.23372036854775807 |
| 18 | -9.223372036854775808 | 9.223372036854775807 |
+----------------+-----------------------+----------------------+
The tool you are using is wrong. The following is valid YANG:
typedef foobar {
    type decimal64 {
        fraction-digits 6;
        range "-90.000000..90.000000";
    }
    default 34.00001;
}
YANG 1.1 (RFC7950) did not change this aspect of the language (the same applies).

How to find articulation points in a graph using SQL

I'm trying to write a Postgres function that returns every articulation point in an undirected graph, but I can't figure out how to approach the problem relationally. For example, if the graph is
select * from graph;
source | target
--------+--------
1 | 2
2 | 1
1 | 3
3 | 1
2 | 3
3 | 2
2 | 4
4 | 2
2 | 5
5 | 2
4 | 5
5 | 4
(12 rows)
then the result should be
select articulation_point();
articulation_point
--------------------
2
(1 row)
But I have no idea how to go about this. I've read some articles on how to find articulation points in a programming language like Python, but I don't know how to approach it in Postgres.
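One brute-force approach that can be expressed in plain SQL (a sketch, assuming the graph table stores both directions of every edge, as in the sample, and that the graph is connected): for each candidate vertex, walk the graph with a recursive CTE while never visiting that vertex; if the walk cannot reach all of the remaining vertices, that vertex is an articulation point.
with recursive vertices as (
    -- every vertex appears as a source because edges are stored in both directions
    select distinct source as v from graph
), reach (removed, node) as (
    -- start a walk from any vertex other than the one being removed
    select c.v, (select min(v) from vertices where v <> c.v)
    from vertices c
    union
    -- expand along edges, never stepping onto the removed vertex
    select r.removed, g.target
    from reach r
    join graph g on g.source = r.node
    where g.target <> r.removed
)
select removed as articulation_point
from reach
group by removed
having count(distinct node) < (select count(*) from vertices) - 1;
For the sample data this returns 2. It recomputes reachability once per vertex (roughly O(V*E)), so for large graphs a procedural implementation of Tarjan's algorithm in PL/pgSQL or client code will be much cheaper.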

Most efficient way to query a word & synonym table

I have a WORDTB table with words and their synonyms: ID, WORD1, WORD2, WORD3, WORD4, WORD5. The words are arranged according to their frequency. Given any word, I want to retrieve its most frequent synonym, which is the word in the WORD1 column.
This is the query I tried; it works, but I think it is inefficient.
SELECT WORD1
FROM WORDTB
WHERE WORD1='xxxx'
OR WORD2='xxxx'
OR WORD3='xxxx'
OR WORD4='xxxx'
OR WORD5='xxxx'
Can anyone suggest a more efficient way of doing this?
A more scalable solution would be to use a single row for each word.
synonym_words(word_id, synonym_id, word, popularity)
Fields:
word_id: The primary key for a word.
synonym_id: The word_id of the first synonym word.
word: The synonym text.
popularity: The sort order for the list of synonyms, 1 being the most popular.
Sample table data:
word_id | synonym_id | word | popularity
==============================================
1 | 1 | start | 1
2 | 1 | begin | 2
3 | 1 | originate | 3
4 | 1 | initiate | 4
5 | 1 | commence | 5
6 | 1 | create | 6
7 | 1 | startle | 7
8 | 1 | leave | 8
9 | 9 | end | 1
10 | 9 | ending | 2
11 | 9 | last | 3
12 | 9 | goal | 4
13 | 9 | death | 5
14 | 9 | conclusion | 6
15 | 9 | close | 7
16 | 9 | closing | 8
Assuming that the words themselves will not change but their popularity may over time, the query should keep working even if the popularity order changes and a different synonym becomes the most popular one. You want the query to return the most popular word (popularity = 1) that shares the same synonym_id as the word used in the search.
SQL query:
SELECT word FROM synonym_words
WHERE synonym_id = (SELECT synonym_id FROM synonym_words WHERE word = 'conclusion')
AND popularity = 1
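For the lookup to stay fast as the table grows, the single-row-per-word layout also indexes well. A sketch of the table definition (names and column sizes are illustrative, matching the layout above):
CREATE TABLE synonym_words (
    word_id    INT PRIMARY KEY,
    synonym_id INT NOT NULL,          -- word_id of the group's most popular word
    word       VARCHAR(100) NOT NULL, -- the synonym text
    popularity INT NOT NULL           -- 1 = most popular within the group
);

-- The query filters on word first, then on (synonym_id, popularity)
CREATE INDEX synonym_words_word_idx  ON synonym_words (word);
CREATE INDEX synonym_words_group_idx ON synonym_words (synonym_id, popularity);
With the sample data, searching for 'conclusion' resolves synonym_id 9 and returns 'end'.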

How to perform the following query with Postgres?

Here is an example:
banzai=# select letter_id, length_id, word from words;
letter_id | length_id | word
-----------+-----------+-------
1 | 1 | run
3 | 1 | tea
2 | 1 | cat
2 | 2 | cast
2 | 3 | coast
1 | 3 | roast
1 | 2 | rest
3 | 2 | team
3 | 3 | toast
(9 rows)
banzai=# select letter from letters;
letter
--------
R
C
T
(3 rows)
banzai=# select length from lengths;
length
--------
4
5
3
(3 rows)
banzai=# select length, letter, word from words, lengths, letters where words.length_id = lengths.id and words.letter_id = letters.id;
length | letter | word
--------+--------+-------
3 | C | cat
3 | R | run
3 | T | tea
4 | R | rest
4 | C | cast
4 | T | team
5 | R | roast
5 | C | coast
5 | T | toast
(9 rows)
I want to produce the following table in HTML
R T C
3 run tea cat
4 rest team cast
5 roast toast coast
I have a method in my Java (backend) code that will produce the data as JSON. AngularJS (frontend) will take the JSON and present the table in HTML.
As you want JSON, this will return a single object:
select json_object_agg(length, o)
from (
    select length, json_object_agg(letter, word) as o
    from words w
    inner join lengths l on w.length_id = l.id
    inner join letters t on w.letter_id = t.id
    group by length
) s;
json_object_agg
----------------------------------------------------------------------------------------------------------------------------------------------------------------
{ "4" : { "R" : "rest", "C" : "cast", "T" : "team" }, "5" : { "R" : "roast", "C" : "coast", "T" : "toast" }, "3" : { "C" : "cat", "R" : "run", "T" : "tea" } }
The above query is for 9.4. In 9.3 it is a bit more difficult but it can be done as well.
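For reference, one way to get the same shape on 9.3 is to build the JSON text by hand with string_agg and format (a sketch; json_object_agg only appeared in 9.4, and to_json takes care of the quoting and escaping):
select ('{ ' || string_agg(format('%s : %s', to_json(length::text), o), ', ') || ' }')::json
from (
    select l.length,
           '{ ' || string_agg(format('%s : %s', to_json(t.letter), to_json(w.word)), ', ') || ' }' as o
    from words w
    inner join lengths l on w.length_id = l.id
    inner join letters t on w.letter_id = t.id
    group by l.length
) s;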

How to get top 3 frequencies in MySQL?

In MySQL I have a table called "meanings" with three columns:
"person" (int),
"word" (byte, 16 possible values)
"meaning" (byte, 26 possible values).
A person assigns one or more meanings to each word:
person word meaning
-------------------
1 1 4
1 2 19
1 2 7 <-- Note: second meaning for word 2
1 3 5
...
1 16 2
Then another person, and so on. There will be thousands of persons.
I need to find for each of the 16 words the top three meanings (with their frequencies). Something like:
+--------+-----------------+------------------+-----------------+
| Word | 1st Most Ranked | 2nd Most Ranked | 3rd Most Ranked |
+--------+-----------------+------------------+-----------------+
| 1 | meaning 5 (35%) | meaning 19 (22%) | meaning 2 (13%) |
| 2 | meaning 8 (57%) | meaning 1 (18%) | meaning 22 (7%) |
+--------+-----------------+------------------+-----------------+
...
Is it possible to solve this with a single MySQL query?
Well, if you group by word and meaning, you can easily get the percentage of people who use each word/meaning combination out of the dataset.
In order to limit the number of meanings returned for each word, you will need to create some sort of filter per word/meaning combination.
It seems like you just want the answer to your homework, so I won't post more than this, but it should be enough to get you on the right track.
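A bare sketch of just that grouping step (table and column names as described in the question; the percentage denominator counts distinct persons rather than rows, as the note further down explains; the top-3 filter is deliberately left out):
SELECT word, meaning,
       COUNT(DISTINCT person) AS people,
       100.0 * COUNT(DISTINCT person)
             / (SELECT COUNT(DISTINCT person) FROM meanings) AS pct
FROM meanings
GROUP BY word, meaning
ORDER BY word, people DESC;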
Of course you can do
SELECT * FROM meanings WHERE word = 2 ORDER BY meaning DESC LIMIT 3
but this is cheating, since you would need a loop to run it for every word. I'm working on a better solution.
I believe a problem I had a while ago was similar. I ended up with the counter approach; a sketch follows the note below.
Note about the problem
Let's suppose there is only one person, who says:
+--------+------+---------+
| Person | Word | Meaning |
+--------+------+---------+
|      1 |    1 |       7 |
|      1 |    1 |       3 |
|      1 |    2 |       8 |
+--------+------+---------+
The report should read:
+--------+------------------+------------------+-----------------+
| Word | 1st Most Ranked | 2nd Most Ranked | 3rd Most Ranked |
+--------+------------------+------------------+-----------------+
| 1 | meaning 7 (100%) | meaning 3 (100%) | NULL |
| 2 | meaning 8 (100%) | NULL | NULL |
+--------+------------------+------------------+-----------------+
The following is not OK (50% frequency is absurd in a population of one person):
+--------+------------------+------------------+-----------------+
| Word | 1st Most Ranked | 2nd Most Ranked | 3rd Most Ranked |
+--------+------------------+------------------+-----------------+
| 1 | meaning 7 (50%) | meaning 3 (50%) | NULL |
| 2 | meaning 8 (100%) | NULL | NULL |
+--------+------------------+------------------+-----------------+
The intended meaning of the frequencies is: "How many people think this meaning corresponds to that word?"
So it is not merely about counting cases, but about counting persons in the table.
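To make the counter idea concrete while counting persons rather than rows, here is a sketch using MySQL user variables as a per-word row counter (names are illustrative; on MySQL 8+, ROW_NUMBER() OVER (PARTITION BY word ORDER BY cnt DESC) does the same ranking more safely):
SELECT word, meaning, cnt
FROM (
    SELECT word, meaning, cnt,
           @rn := IF(@prev_word = word, @rn + 1, 1) AS rn,
           @prev_word := word AS prev_word
    FROM (
        -- count persons, not rows, per word/meaning pair
        SELECT word, meaning, COUNT(DISTINCT person) AS cnt
        FROM meanings
        GROUP BY word, meaning
        ORDER BY word, cnt DESC
        -- some MySQL versions need a large LIMIT here so the ORDER BY is not optimized away
    ) AS counted
    CROSS JOIN (SELECT @rn := 0, @prev_word := NULL) AS vars
) AS ranked
WHERE rn <= 3;
Dividing cnt by the total number of distinct persons then yields the percentages shown in the desired output, including the 100%/100% case from the one-person example above.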