SchemaCrawler not able to find MSSQL tables

I'm trying to export a schema from an MSSQL database using SchemaCrawler and the jTDS driver (version 1.3.1).
The command is:
./schemacrawler.sh \
-server=sqlserver \
-password= \
-command=schema \
-outputformat=png \
-outputfile=./output/result.png \
-infolevel=standard \
-schemas=.*XYZ.*DOMAIN.user.* \
-tabletypes=TABLE \
-tables=.* \
-routinetypes= \
-loglevel=ALL \
-url=jdbc:jtds:sqlserver://server.com:1433/XYZ\;instance=dbinstance\;useNTLMv2=TRUE\;domain=DOMAIN\;user=user\;password=pwd
The DB tables were created under the user's schema, e.g.: DOMAIN\user.Table1
The connection URL and regex to match the schema work fine - when I launch the Database Manager with the same connection string, I can see all the tables listed as DOMAIN\user.table.
However, when I run the script from a Docker container, I'm getting:
Mar 14, 2017 8:53:18 PM schemacrawler.crawl.SchemaCrawler crawlTables
INFO: Crawling tables
Mar 14, 2017 8:53:18 PM schemacrawler.crawl.TableRetriever retrieveTables
INFO: Retrieving tables for schema: "XYZ"."DOMAIN\user"
Mar 14, 2017 8:53:18 PM schemacrawler.crawl.TableRetriever retrieveTables
FINER: Retrieving table types: [TABLE]
Mar 14, 2017 8:53:18 PM schemacrawler.crawl.MetadataResultSet close
INFO: Processed 0 rows for <retrieveTables>
Mar 14, 2017 8:53:18 PM schemacrawler.crawl.SchemaCrawler crawlTables
INFO: Retrieved 0 tables
Any ideas why it can't see the tables?

Please see Making Connections to a Database on the SchemaCrawler website and check whether that helps. Also, please try -schemas=.*user\.dbo.* and see if that works for you.
Sualeh Fatehi, SchemaCrawler
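If it helps, here is a minimal sketch of the adjusted invocation with the suggested pattern; everything other than -schemas is carried over unchanged from the question:
./schemacrawler.sh \
-server=sqlserver \
-command=schema \
-outputformat=png \
-outputfile=./output/result.png \
-infolevel=standard \
-schemas=.*user\.dbo.* \
-tabletypes=TABLE \
-url=jdbc:jtds:sqlserver://server.com:1433/XYZ\;instance=dbinstance\;useNTLMv2=TRUE\;domain=DOMAIN\;user=user\;password=pwd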

Returning a map object in cypher

I need to create edges between a set of nodes, but it is not guaranteed that an edge doesn't already exist. I need to know which edges were actually created so I can increment the edge counter for the two connected nodes.
I want to know the edge count for every node without querying the graph each time.
Example:
MERGE (u:user {id:999049043279872})
MERGE (g1:group {id:346709075951616})
MERGE (g2:group {id:346709075951617})
MERGE (g1)-[m1:member]->(u)
MERGE (g2)-[m2:member]->(u)
Sometimes the user is already a member of the group, so I don't want to increment the counter in that case.
I tried to use the result statistics, but they only report the number of created relationships. I also thought about using a map and filling in its contents with ON CREATE SET after each MERGE:
WITH {g1:0, g2:0} as res
MERGE (u:user {id:999049043279872})
MERGE (g1:group {id:346709075951616})
MERGE (g2:group {id:346709075951617})
MERGE (g1)-[m1:member]->(u)
ON CREATE SET res.g1 = 1
MERGE (g2)-[m2:member]->(u)
ON CREATE SET res.g2 = 1
RETURN res
But it does not work; the server crashes immediately after executing the query.
Exception:
------ FAST MEMORY TEST ------
17235:M 28 Feb 2022 16:56:50.016 # main thread terminated
17235:M 28 Feb 2022 16:56:50.017 # Bio thread for job type #0 terminated
17235:M 28 Feb 2022 16:56:50.017 # Bio thread for job type #1 terminated
17235:M 28 Feb 2022 16:56:50.018 # Bio thread for job type #2 terminated
Fast memory test PASSED, however your memory can still be broken.
Please run a memory test for several hours if possible.
------ DUMPING CODE AROUND EIP ------
Symbol: (null) (base: (nil))
Module: /lib/x86_64-linux-gnu/libc.so.6 (base 0x7fbfe3dcc000)
$ xxd -r -p /tmp/dump.hex /tmp/dump.bin
$ objdump --adjust-vma=(nil) -D -b binary -m i386:x86-64 /tmp/dump.bin
=== REDIS BUG REPORT END. Make sure to include from START to END. ===
Please report the crash by opening an issue on github:
http://github.com/redis/redis/issues
Suspect RAM error? Use redis-server --test-memory to verify it.
Segmentation fault
Any ideas?
Thanks in advance
Neo4j already stores a counter inside each node that tracks the number of relationships, in order to provide fast count access. When you want to get the number of members of a group, you can simply do:
MATCH (g:group)
RETURN size((g)<-[:member]-())
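As a usage sketch, the same size() counter can be read for one of the groups from the question. This assumes a Neo4j instance reachable via cypher-shell; the credentials are placeholders:
echo 'MATCH (g:group {id:346709075951616})
RETURN g.id AS group, size((g)<-[:member]-()) AS members;' | cypher-shell -u neo4j -p <password>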

What is the difference between the permissions tags -rwxr-xr-x and -rwxrwxrwx?

Before trying to assemble sequence data, I get a file size estimate for my raw READ1/READ2 files by running the command ls -l -h from the directory the files are in. The output looks something like this:
-rwxrwxrwx# 1 catharus2021 staff 86M Jun 11 15:03 pluvialis-dominica_JJW362-READ1.fastq.gz
-rwxrwxrwx# 1 catharus2021 staff 84M Jun 11 15:03 pluvialis-dominica_JJW362-READ2.fastq.gz
For a previous run using the identical command but a different batch of data, the output was as follows:
-rwxr-xr-x 1 catharus2021 staff 44M Mar 16 2018 lagopus_lagopus_alascensis_JJW1970_READ1.fastq.gz
-rwxr-xr-x 1 catharus2021 staff 52M Mar 16 2018 lagopus_lagopus_alascensis_JJW1970_READ2.fastq.gz
It doesn't seem to be affecting any downstream commands, but does anyone know why the strings at the very beginning (-rwxrwxrwx# vs. -rwxr-xr-x) are different? I assume they're permission flags, but Google has been less than informative when I try to type them in and search.
Thanks in advance for your time.
The permission string describes who can access a file and in which way. It is ordered:
owner - group - world
rwxr-xr-x
the owner can read, write and execute
the group can only read and execute
the world can only read and execute
This prevents other people from overwriting your data. If you change it to
rwxrwxrwx
everybody can overwrite your data.
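If you want to bring the first batch of files back to the more restrictive mode, here is a small sketch (the glob in the second form assumes all the read files sit in the current directory):
# 755 = rwxr-xr-x: everyone keeps read/execute, but only the owner can write
chmod 755 pluvialis-dominica_JJW362-READ1.fastq.gz pluvialis-dominica_JJW362-READ2.fastq.gz
# equivalently, just remove group/world write on all the read files
chmod go-w *.fastq.gz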

AWK complains about number of fields when extracting variables

I have a script that parses a TeamCity directory map file. The script works, but I want to know why refactoring it to use variables breaks it with a seemingly unrelated error message, and how I can still make it work with variables.
MAP=/opt/TeamCity/buildAgent/work/directory.map
sed -n -e '1,3d;1,/#/{/#/!p}' $MAP | \
awk ' {
n=split($0, array, "->");
printf(substr(array[1], 6) substr(array[2],2,16) "\n");
}
'
This prints
nicecorp::Master 652293808ace4eb5
nicecorp::Reset Database 652293808ace4eb5
nicecorp::test-single-steps 652293808ace4eb5
nicecorp::Develop 652293808ace4eb5
nicecorp::Pull Requests 652293808ace4eb5
Which is pretty much what I want.
The refactoring that breaks
But then I tried to extract the substrings into variables, and the script broke. I changed the last printf statement to this:
proj=substr(array[1], 6);
tcdir=substr(array[2],2,16);
printf($proj" " $tcdir);
That just prints this error, although I thought it was more or less the same?
awk: program limit exceeded: maximum number of fields size=32767
FILENAME="-" FNR=1 NR=1
This error seems a bit weird, given that my total input is about 500 bytes, roughly 60 times smaller than the field limit it complains about.
AWK version: mawk (1994)
Data format ($ head -10 directory.map):
#Don't edit this file!
#Nov 5, 2019 1:49:26 PM UTC
--version=2
bt30=nicecorp::Master -> 652293808ace4eb5 |?| Oct 29, 2019 4:14:27 PM UTC |:| default
bt32=nicecorp::Reset Database -> 652293808ace4eb5 |?| Oct 30, 2019 1:01:48 PM UTC |:| default
bt33=nicecorp::test-single-steps -> b96874cc9acaf874 |?| Nov 4, 2019 4:20:13 PM UTC |:| default
bt33=nicecorp::test-single-steps -> 652293808ace4eb5 |?| Nov 5, 2019 9:00:37 AM UTC |:| default
bt28=nicecorp::Develop -> 652293808ace4eb5 |?| Nov 5, 2019 1:07:53 PM UTC |:| default
bt29=nicecorp::Pull Requests -> 652293808ace4eb5 |?| Nov 5, 2019 1:18:08 PM UTC |:| default
#
The source of the problem is that the print statement in the refactored code uses shell notation for variables ($proj instead of proj, $tcdir instead of tcdir).
When those values look numeric (e.g., tcdir=652293808ace4eb5 on the first line, which awk converts to the number 652293808), awk (mawk in this case) tries to print the 652293808-th field. Current versions of gawk will not fail here: they realize there are only a few fields and return an empty string for the missing ones (or the full line for $0, if the value is not numeric).
Older implementations may attempt to extend the field array to the requested size, resulting in the "program limit exceeded" message.
Also note two minor issues: the refactored code uses proj as the printf format string, so it will get confused if a '%' shows up in the data, and it drops the trailing newline. Did you really mean printf and not print?
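In case it helps, here is a tiny standalone demo of the field-access behaviour (any awk will do): $n means "field number n", so a huge numeric value like 652293808 asks mawk for a field far beyond the limit reported above.
$ echo 'a b c' | awk '{ n = 2; print $n }'
b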
Fix:
proj=substr(array[1], 6);
tcdir=substr(array[2],2,16);
# Should consider print, instead of printf
printf(proj " " tcdir "\n");
# print proj, tcdir
The problem was syntax. I was using the shell-style $tcdir to insert the value of the variable instead of simply tcdir. In awk, $tcdir resolves to the field whose number is the (numeric) value of tcdir, so I was trying to print a field, not the variable tcdir.
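For completeness, here is a sketch of the whole pipeline with the corrected awk body (same map file path as in the question):
MAP=/opt/TeamCity/buildAgent/work/directory.map
sed -n -e '1,3d;1,/#/{/#/!p}' "$MAP" | \
awk '{
    n = split($0, array, "->");
    proj  = substr(array[1], 6);
    tcdir = substr(array[2], 2, 16);
    # plain variable names (no $); print adds the newline and avoids printf format pitfalls
    print proj " " tcdir;
}'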

Modify Backup Script to Run 4x/week Instead of Daily

I'm looking at modifying a backup script that was set up for me on my server. The script currently runs each morning to back up all of my domains under the /var/www/vhosts/ directory, and I'd like to have it run only four times per week (Sun, Tue, Thu, Sat) instead of daily, if possible. I'm relatively new to scripting commands and was wondering if someone might be able to help me with this. Here is the current script:
umask 0077
BPATH="/disk2/backups/vhosts_backups/`date +%w`"
LOG="backup.log"
/bin/rm -rf $BPATH/*
for i in `ls /var/www/vhosts`
do
tar czf $BPATH/$i.tgz -C /var/www/vhosts $i 2>>$BPATH/backup.log
done
Thank you,
Jason
To answer my own question (in case it benefits anyone else), it turns out the backup script was scheduled through crontab, and that's what needed the adjustment. I ran crontab -e and changed the fifth field (day of week) below from * to 0,2,4,6 (Sun, Tue, Thu, Sat).
5 1 * * 0,2,4,6 /root/scripts/vhosts_backup.sh
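For reference, the crontab fields are minute, hour, day of month, month, and day of week; the entry above, annotated:
# min  hour  dom  mon  dow   command
# 01:05 on Sun (0), Tue (2), Thu (4) and Sat (6)
5 1 * * 0,2,4,6 /root/scripts/vhosts_backup.sh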

Dump incremental file location

How does dump create the incremental backup? It seems I should use the same file name when I create a level 1 dump:
Full backup:
dump -0aLuf /mnt/bkup/backup.dump /
and then for the incremental
dump -1aLuf /mnt/bkup/backup.dump /
What happens if I dump the level 1 to a different file:
dump -1aLuf /mnt/bkup/backup1.dump /
I am trying to understand how dump keeps track of the changes. I am using an ext3 file system.
This is my /etc/dumpdates:
# cat /etc/dumpdates
/dev/sda2 0 Wed Feb 13 10:55:42 2013 -0600
/dev/sda2 1 Mon Feb 18 11:41:00 2013 -0600
My level 0 for this system was around 11 GB; I ran a level 1 today with the same filename, and its size was around 5 GB.
I think I figured out the issue. It looks like dump records information in the dump file itself (in addition to the dates and levels kept in /etc/dumpdates via the -u flag), so it knows when the previous lower-level dump occurred.
Level 0 backup
# file bkup_tmp_0_20130220
bkup_tmp_0_20130220: new-fs dump file (little endian), This dump Wed Feb 20 14:29:31 2013, Previous dump Wed Dec 31 18:00:00 1969, Volume 1, Level zero, type: tape header, Label my-label, Filesystem /tmp, Device /dev/sda3, Host myhostname, Flags 3
Level 1 backup, after some change
# file bkup_tmp_1_20130220
bkup_tmp_1_20130220: new-fs dump file (little endian), This dump Wed Feb 20 14:30:48 2013, Previous dump Wed Feb 20 14:29:31 2013, Volume 1, Level 1, type: tape header, Label my-label, Filesystem /tmp, Device /dev/sda3, Host myhostname, Flags 3
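To double-check what actually went into each archive, the table of contents can be listed with restore (a sketch, assuming the restore utility that ships alongside dump is installed):
# list the files contained in the level 0 and level 1 archives
restore -tf /mnt/bkup/backup.dump
restore -tf /mnt/bkup/backup1.dump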