Construct Binary Search Tree - binary-search-tree

Construct Binary Search Tree in the following order:
JAN, MAR, JUN, FEB, JUL, MAY, APR, SEP, AUG, OCT, NOV, DEC.
How am I supposed to solve the problem? How do I know which is greater than whom?

Compare the three-letter abbreviations alphabetically, as plain strings (so APR < AUG < DEC < FEB < JAN < ...). I think you would have something like this:
                JAN
              /     \
           FEB       MAR
          /         /    \
       APR       JUN      MAY
          \      /           \
         AUG   JUL           SEP
            \                /
            DEC           OCT
                          /
                       NOV
So, you would start by creating a new node for the root of the tree and setting its value to the first element in the list, which is "JAN" in this case.
For each subsequent element in the list, compare the element to the root node. If the element is less than the root node, go to the left child; if it is greater, go to the right child.
If that child position is empty, insert the new element there as a new node; otherwise repeat the comparison with the child node, descending until you find an empty spot.
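For reference, here is a minimal sketch of that insertion procedure in Python (the Node class and insert helper are just illustrative names, not part of the original exercise), using plain string comparison on the month abbreviations:

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    # Insert key into the BST rooted at root; keys compare as plain strings.
    if root is None:
        return Node(key)
    if key < root.key:          # alphabetically smaller goes to the left subtree
        root.left = insert(root.left, key)
    else:                       # greater (or equal) goes to the right subtree
        root.right = insert(root.right, key)
    return root

months = ["JAN", "MAR", "JUN", "FEB", "JUL", "MAY",
          "APR", "SEP", "AUG", "OCT", "NOV", "DEC"]
root = None
for m in months:
    root = insert(root, m)

# An in-order traversal of the finished tree prints the months in
# alphabetical order, which is a quick way to check the structure:
def inorder(node):
    if node:
        inorder(node.left)
        print(node.key, end=" ")
        inorder(node.right)

inorder(root)   # APR AUG DEC FEB JAN JUL JUN MAR MAY NOV OCT SEP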

What is the difference between the permissions tags -rwxr-xr-x and -rwxrwxrwx?

Before trying to assemble sequence data, I get a file size estimate for my raw READ1/READ2 files by running the command ls -l -h from the directory the files are in. The output looks something like this:
-rwxrwxrwx# 1 catharus2021 staff 86M Jun 11 15:03 pluvialis-dominica_JJW362-READ1.fastq.gz
-rwxrwxrwx# 1 catharus2021 staff 84M Jun 11 15:03 pluvialis-dominica_JJW362-READ2.fastq.gz
For a previous run using the identical command, but a different batch of data, the output was as such:
-rwxr-xr-x 1 catharus2021 staff 44M Mar 16 2018 lagopus_lagopus_alascensis_JJW1970_READ1.fastq.gz
-rwxr-xr-x 1 catharus2021 staff 52M Mar 16 2018 lagopus_lagopus_alascensis_JJW1970_READ2.fastq.gz
It doesn't seem to be affecting any downstream commands, but does anyone know why the strings at the very beginning (-rwxrwxrwx# vs. -rwxr-xr-x) are different? I assume that they're permissions flags, but Google has been less than informative when I type them in and search.
Thanks in advance for your time.
The string describes who can access a file, and in which way. The permission bits are ordered:
owner - group - world
rwxr-xr-x
owner can read, write and execute
group can only read and execute
world can only read and execute
This prevents other people from overwriting your data. If you change it to
rwxrwxrwx
everybody can overwrite your data.
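If it helps to see the mapping programmatically, here is a small Python sketch (the path data.fastq.gz is just a placeholder, not one of your files) that decodes numeric modes into the same ls-style strings and sets a file back to owner-only write access:

import os
import stat

# Decode numeric modes into the familiar ls-style permission strings.
print(stat.filemode(0o100755))   # -rwxr-xr-x : owner rwx, group r-x, world r-x
print(stat.filemode(0o100777))   # -rwxrwxrwx : everybody may read, write and execute

# Make the file writable only by its owner again
# (equivalent to: chmod 755 data.fastq.gz).
os.chmod("data.fastq.gz", 0o755)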

AWK complains about number of fields when extracting variables

I have a script to parse a TeamCity directory map file. The script works, but I want to know why refactoring it into using variables breaks it with a seemingly unrelated error message and how I can still have it work using variables.
MAP=/opt/TeamCity/buildAgent/work/directory.map
sed -n -e '1,3d;1,/#/{/#/!p}' $MAP | \
awk ' {
n=split($0, array, "->");
printf(substr(array[1], 6) substr(array[2],2,16) "\n");
}
'
This prints
nicecorp::Master 652293808ace4eb5
nicecorp::Reset Database 652293808ace4eb5
nicecorp::test-single-steps 652293808ace4eb5
nicecorp::Develop 652293808ace4eb5
nicecorp::Pull Requests 652293808ace4eb5
Which is pretty much what I want.
The refactoring that breaks
But then I was trying to extract the sub strings into variables, and the script broke. I changed the last printf statement into this
proj=substr(array[1], 6);
tcdir=substr(array[2],2,16);
printf($proj" " $tcdir);
That just prints this error, although I thought it was more or less the same?
awk: program limit exceeded: maximum number of fields size=32767
FILENAME="-" FNR=1 NR=1
This error seems a bit weird, given that my total input is about 500 bytes, roughly 60 times smaller than the field limit it complains about.
AWK version: mawk (1994)
Data format ($ head -10 directory.map):
#Don't edit this file!
#Nov 5, 2019 1:49:26 PM UTC
--version=2
bt30=nicecorp::Master -> 652293808ace4eb5 |?| Oct 29, 2019 4:14:27 PM UTC |:| default
bt32=nicecorp::Reset Database -> 652293808ace4eb5 |?| Oct 30, 2019 1:01:48 PM UTC |:| default
bt33=nicecorp::test-single-steps -> b96874cc9acaf874 |?| Nov 4, 2019 4:20:13 PM UTC |:| default
bt33=nicecorp::test-single-steps -> 652293808ace4eb5 |?| Nov 5, 2019 9:00:37 AM UTC |:| default
bt28=nicecorp::Develop -> 652293808ace4eb5 |?| Nov 5, 2019 1:07:53 PM UTC |:| default
bt29=nicecorp::Pull Requests -> 652293808ace4eb5 |?| Nov 5, 2019 1:18:08 PM UTC |:| default
#
The source of the problem is that the print statement in the refactored code uses shell notation for variables ($proj instead of proj, $tcdir instead of tcdir).
In awk, $ is the field-access operator. When the value looks numeric (e.g. tcdir=652293808ace4eb5 for the first line, which awk converts to 652293808), awk (mawk in this case) tries to print the 652293808th field. Current versions of gawk will not fail here: they realize there are only a few fields and print an empty string for the out-of-range field (or the whole record, since a non-numeric string converts to 0 and $0 is the full line).
Older versions, such as this mawk, may attempt to extend the field array to match the requested number, resulting in the "program limit exceeded" message.
Also note two minor issues: the refactored code uses proj as the printf format string, so it will get confused if the data contains a '%', and the newline is missing. Did you really mean printf and not print?
Fix:
proj=substr(array[1], 6);
tcdir=substr(array[2],2,16);
# Should consider print, instead of printf
printf(proj " " tcdir "\n");
# print proj, tcdir
The problem was syntax. I was using the shell-style $tcdir to insert the value of the variable instead of simply tcdir. In awk, $ is the field-access operator, so $tcdir resolves to the field whose number is the (numeric) value of tcdir, meaning I was trying to print the value of a field, not the variable tcdir.

SchemaCrawler not able to find MSSQL tables

I'm trying to export a schema from MSSQL database using SchemaCrawler & jTDS driver (version 1.3.1).
The command is:
./schemacrawler.sh \
-server=sqlserver \
-password= \
-command=schema \
-outputformat=png \
-outputfile=./output/result.png \
-infolevel=standard \
-schemas=.*XYZ.*DOMAIN.user.* \
-tabletypes=TABLE \
-tables=.* \
-routinetypes= \
-loglevel=ALL \
-url=jdbc:jtds:sqlserver://server.com:1433/XYZ\;instance=dbinstance\;useNTLMv2=TRUE\;domain=DOMAIN\;user=user\;password=pwd
The DB tables were created under the user's schema, e.g.: DOMAIN\user.Table1
The connection URL and regex to match the schema work fine - when I launch the Database Manager with the same connection string, I can see all the tables listed as DOMAIN\user.table.
However, when I run the script from a Docker container, I'm getting:
Mar 14, 2017 8:53:18 PM schemacrawler.crawl.SchemaCrawler crawlTables
INFO: Crawling tables
Mar 14, 2017 8:53:18 PM schemacrawler.crawl.TableRetriever retrieveTables
INFO: Retrieving tables for schema: "XYZ"."DOMAIN\user"
Mar 14, 2017 8:53:18 PM schemacrawler.crawl.TableRetriever retrieveTables
FINER: Retrieving table types: [TABLE]
Mar 14, 2017 8:53:18 PM schemacrawler.crawl.MetadataResultSet close
INFO: Processed 0 rows for <retrieveTables>
Mar 14, 2017 8:53:18 PM schemacrawler.crawl.SchemaCrawler crawlTables
INFO: Retrieved 0 tables
Any ideas why it can't see the tables?
Please see Making Connections to a Database, on the SchemaCrawler website, and see if that helps. Please try -schemas=.*user\.dbo.* and see if that works for you.
Sualeh Fatehi, SchemaCrawler

Refresh my new plasmoid - reparse source

I'm learning how to develop KDE Plasma 5 plasmoids and am testing with a small widget consisting of just two QML files. I read some information sources, like https://techbase.kde.org or https://api.kde.org/frameworks/, and created a package structure and sources for my test plasmoid, which looks like this:
$ ls -lR test
test:
total 8
drwxr-xr-x 3 alberto alberto 4096 nov 26 14:28 contents
-rw-r--r-- 1 alberto alberto 459 nov 26 14:28 metadata.desktop
test/contents:
total 4
drwxr-xr-x 2 alberto alberto 4096 nov 26 14:33 ui
test/contents/ui:
total 8
-rw-r--r-- 1 alberto alberto 275 nov 26 14:28 main.qml
-rw-r--r-- 1 alberto alberto 465 nov 26 14:33 RootContainer.qml
The RootContainer is just the fullRepresentation of the widget and contains only a label with the text "prueba1". So, as I read in the documentation, I used the command plasmapkg2 to install the widget as follows:
$ plasmapkg2 --install test
pluginname: "org.matrixland.test"
Generated "/home/xxx/.local/share/plasma/plasmoids//kpluginindex.json" ( 3 plugins)
/home/xxx/Programación/proyectos/plasmoides/test installed successfully
Then I can use it in the KDE desktop and everything is fine. It is shown on the desktop, with the text label.
But now, if I change the text of the label to "prueba2" and remove and reinstall the plugin as follows
$ plasmapkg2 --remove test
Constructing a KPluginInfo object from old style JSON. Please use kcoreaddons_desktop_to_json() for "" instead of kservice_desktop_to_json() in your CMake code.
Calling KPluginInfo::property("X-KDE-PluginInfo-Name") is deprecated, use KPluginInfo::pluginName() in "/usr/lib/x86_64-linux-gnu/qt5/plugins/plasma/packagestructure/plasma_packagestructure_share.so" instead.
Constructing a KPluginInfo object from old style JSON. Please use kcoreaddons_desktop_to_json() for "" instead of kservice_desktop_to_json() in your CMake code.
Calling KPluginInfo::property("X-KDE-PluginInfo-Name") is deprecated, use KPluginInfo::pluginName() in "/usr/lib/x86_64-linux-gnu/qt5/plugins/plasma/packagestructure/plasma_packagestructure_share.so" instead.
Generated "/home/xxx/.local/share/plasma/plasmoids//kpluginindex.json" ( 2 plugins)
/home/xxx/Programación/proyectos/plasmoides/test uninstalled successfully
xxx@eleanor:~/Programación/proyectos/plasmoides$ plasmapkg2 --install test
pluginname: "org.matrixland.test"
Generated "/home/alberto/.local/share/plasma/plasmoids//kpluginindex.json" ( 3 plugins)
/home/alberto/Programación/proyectos/plasmoides/test installed successfully
If I now add it again to the desktop, I see the old text instead of the new one. I checked in the /home/xxx/.local/share/plasma/plasmoids/org.matrixland.test directory that the source is up to date and refreshed, so I can't see why I am getting the old text instead of the new one.
Obviously my problem is that none of the changes I make in the QML are reflected in the widget, not only text changes. I don't know if I am doing something wrong, or if I must do something else to update the widget. Can anybody help me with that?
This is because the old QML is still cached. You'll need to restart plasmashell in order to see the changes.
killall plasmashell; kstart5 plasmashell
I use this script to reinstall applets when I want to test live. When I want to test quickly, though, I'll use plasmoidviewer with:
plasmoidviewer -a package -l bottomedge -f horizontal
like in this script.

Dump incremental file location

How does dump create the incremental backup? It seems I should use the same file name when I create a level 1 dump:
Full backup:
dump -0aLuf /mnt/bkup/backup.dump /
and then for the incremental
dump -1aLuf /mnt/bkup/backup.dump /
What happens if I dump the level 1 to a different file:
dump -1aLuf /mnt/bkup/backup1.dump /
I am trying to understand how dump keeps track of the changes. I am using an ext3 file system.
This is my /etc/dumpdates:
# cat /etc/dumpdates
/dev/sda2 0 Wed Feb 13 10:55:42 2013 -0600
/dev/sda2 1 Mon Feb 18 11:41:00 2013 -0600
My level 0 dump for this system was around 11 GB; I ran the level 1 today with the same filename and it came out to around 5 GB.
I think I figured out the issue. It looks like dump records the date of the previous lower-level dump both in /etc/dumpdates (updated because of the -u flag) and in the dump file itself, so it knows when the previous level occurred regardless of the output filename.
Level 0 backup
# file bkup_tmp_0_20130220
bkup_tmp_0_20130220: new-fs dump file (little endian), This dump Wed Feb 20 14:29:31 2013, Previous dump Wed Dec 31 18:00:00 1969, Volume 1, Level zero, type: tape header, Label my-label, Filesystem /tmp, Device /dev/sda3, Host myhostname, Flags 3
Level 1 backup, after some change
# file bkup_tmp_1_20130220
bkup_tmp_1_20130220: new-fs dump file (little endian), This dump Wed Feb 20 14:30:48 2013, Previous dump Wed Feb 20 14:29:31 2013, Volume 1, Level 1, type: tape header, Label my-label, Filesystem /tmp, Device /dev/sda3, Host myhostname, Flags 3