Dot Net Cisco Command Line Console Parser [closed] - vb.net

I'm trying to write a Cisco command-line parser to provide an automated graphical user interface replacement for the Cisco console output. I have been able to extract the ping time from ping output using regular expressions and graph it, but I am now stuck on the more detailed output of other commands such as "show interfaces".
Any ideas how I can parse the "show interfaces" output and extract all the useful information I need?
Here is a "show interfaces" output example:
FastEthernet0/0 is up, line protocol is up
Hardware is MV96340 Ethernet, address is 0018.189d.1df0 (bia 0018.189d.1df0)
Description: IP+ connection
Internet address is 164.128.251.50/24
MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 100Mb/s, 100BaseTX/FX
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output 00:00:00, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/3718/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 2000 bits/sec, 6 packets/sec
5 minute output rate 3000 bits/sec, 10 packets/sec
152817108 packets input, 1043050554 bytes
Received 77347880 broadcasts (67140888 IP multicasts)
0 runts, 0 giants, 3351 throttles
381823 input errors, 0 CRC, 0 frame, 0 overrun, 381823 ignored
0 watchdog
0 input packets with dribble condition detected
--More-- 99065802 packets output, 440637782 bytes, 0 underruns
0 output errors, 0 collisions, 2 interface resets
300246 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier
0 output buffer failures, 0 output buffers swapped out
FastEthernet0/1 is administratively down, line protocol is down
Hardware is MV96340 Ethernet, address is 0018.189d.1df1 (bia 0018.189d.1df1)
MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Auto-duplex, Auto Speed, 100BaseTX/FX
ARP type: ARPA, ARP Timeout 04:00:00
Last input never, output never, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
0 packets input, 0 bytes
Received 0 broadcasts (0 IP multicasts)
--More-- 0 runts, 0 giants, 0 throttles
--More-- 0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog
0 input packets with dribble condition detected
0 packets output, 0 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier
0 output buffer failures, 0 output buffers swapped out
Tunnel0 is up, line protocol is up
Hardware is Tunnel
Interface is unnumbered. Using address of FastEthernet0/0 (164.128.251.50)
MTU 17912 bytes, BW 100 Kbit/sec, DLY 50000 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation TUNNEL, loopback not set
Keepalive not set
Tunnel source 164.128.251.50 (FastEthernet0/0), destination 164.128.32.1
Tunnel Subblocks:
src-track:
Tunnel0 source tracking subblock associated with FastEthernet0/0
Set of tunnels with source FastEthernet0/0, 1 member (includes iterators), on interface
Tunnel protocol/transport PIM/IPv4
--More-- Tunnel TOS/Traffic Class 0xC0, Tunnel TTL 255
--More-- Tunnel transport MTU 1472 bytes
Tunnel is transmit only
Tunnel transmit bandwidth 8000 (kbps)
Tunnel receive bandwidth 8000 (kbps)
Last input never, output 28w1d, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/0 (size/max)
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
0 packets input, 0 bytes, 0 no buffer
Received 0 broadcasts (0 IP multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
44 packets output, 2464 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 unknown protocol drops
0 output buffer failures, 0 output buffers swapped out
Virtual-Access1 is up, line protocol is up
Hardware is Virtual Access interface
Description: Internally created by SSLVPN context TEST
MTU 1406 bytes, BW 100000 Kbit/sec, DLY 100000 usec,
--More-- reliability 255/255, txload 1/255, rxload 1/255
--More-- Encapsulation SSL
Internal vaccess
Vaccess status 0x0, loopback not set
Keepalive set (10 sec)
DTR is pulsed for 5 seconds on reset
Last input never, output never, output hang never
Last clearing of "show interface" counters 29w5d
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
0 packets input, 0 bytes, 0 no buffer
Received 0 broadcasts (0 IP multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
0 packets output, 0 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 unknown protocol drops
0 output buffer failures, 0 output buffers swapped out
0 carrier transitions
This is the code I have so far for splitting the long output into per-interface blocks:
Interface_Long_Split = Regex.Split(Result_Long, "(POS[0-9]/[0-9]/[0-9])|(POS[0-9]/[0-9])|(GigabitEthernet[0-9]/[0-9])|(FastEthernet[0-9]/[0-9])")
Dim count As Integer = 0
For i = 0 To Interface_Long_Split.Length - 1 ' Length - 1 avoids an off-by-one IndexOutOfRangeException
    If Regex.IsMatch(Interface_Long_Split(i), "(POS[0-9]/[0-9]/[0-9])|(POS[0-9]/[0-9])|(GigabitEthernet[0-9]/[0-9])|(FastEthernet[0-9]/[0-9])") Then
        ' Keep the captured interface names, growing the result array as needed
        ReDim Preserve Interfaces_List(count)
        Interfaces_List(count) = Interface_Long_Split(i)
        count = count + 1
    End If
Next

IMHO you are probably on a hiding to nothing.
You could try parsing those complex outputs a line at a time rather than as one big blob.
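To illustrate the line-at-a-time idea, here is a minimal sketch, in Python for brevity (the same regular expressions work with System.Text.RegularExpressions in VB.NET). The field names and patterns are assumptions based on the sample output above, not a complete parser:
import re

# Per-line patterns for a few "show interfaces" fields (guessed from the sample above)
PATTERNS = {
    "address":   re.compile(r"address is (\S+)"),
    "ip":        re.compile(r"Internet address is (\S+)"),
    "mtu_bw":    re.compile(r"MTU (\d+) bytes, BW (\d+) Kbit"),
    "in_rate":   re.compile(r"5 minute input rate (\d+) bits/sec, (\d+) packets/sec"),
    "out_rate":  re.compile(r"5 minute output rate (\d+) bits/sec, (\d+) packets/sec"),
    "in_errors": re.compile(r"(\d+) input errors, (\d+) CRC"),
}
HEADER = re.compile(r"^(\S+) is (.+), line protocol is (\S+)")

def parse_show_interfaces(text):
    interfaces, current = [], None
    for line in text.splitlines():
        line = line.replace("--More--", "").strip()      # drop pager residue
        m = HEADER.match(line)
        if m:                                            # a new interface block starts here
            current = {"name": m.group(1), "status": m.group(2), "protocol": m.group(3)}
            interfaces.append(current)
            continue
        if current is None:
            continue
        for key, pattern in PATTERNS.items():
            hit = pattern.search(line)
            if hit:
                current[key] = hit.groups()
    return interfaces

Applied to the dump above, this yields one dictionary per interface containing whichever fields appeared in that block, which you can then bind to your GUI.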

Related

Media and Data Integrity Errors

I was wondering if anyone can tell me what these mean. Most people posting about them report no more than double digits, but I have 1051556645921812989870080 Media and Data Integrity Errors on my SK hynix PC711 in my new HP Dev One. Thanks!
Here's my entire smartctl output
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-6.0.7-arch1-1] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Number: SK hynix PC711 HFS001TDE9X073N
Serial Number: KDB3N511010503A37
Firmware Version: HPS0
PCI Vendor/Subsystem ID: 0x1c5c
IEEE OUI Identifier: 0xace42e
Total NVM Capacity: 1,024,209,543,168 [1.02 TB]
Unallocated NVM Capacity: 0
Controller ID: 1
NVMe Version: 1.3
Number of Namespaces: 1
Namespace 1 Size/Capacity: 1,024,209,543,168 [1.02 TB]
Namespace 1 Formatted LBA Size: 512
Namespace 1 IEEE EUI-64: ace42e 00254f98f1
Local Time is: Wed Nov 9 13:58:37 2022 EST
Firmware Updates (0x16): 3 Slots, no Reset required
Optional Admin Commands (0x001f): Security Format Frmw_DL NS_Mngmt Self_Test
Optional NVM Commands (0x005f): Comp Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp
Log Page Attributes (0x1e): Cmd_Eff_Lg Ext_Get_Lg Telmtry_Lg Pers_Ev_Lg
Maximum Data Transfer Size: 64 Pages
Warning Comp. Temp. Threshold: 84 Celsius
Critical Comp. Temp. Threshold: 85 Celsius
Namespace 1 Features (0x02): NA_Fields
Supported Power States
St Op Max Active Idle RL RT WL WT Ent_Lat Ex_Lat
0 + 6.3000W - - 0 0 0 0 5 5
1 + 2.4000W - - 1 1 1 1 30 30
2 + 1.9000W - - 2 2 2 2 100 100
3 - 0.0500W - - 3 3 3 3 1000 1000
4 - 0.0040W - - 3 3 3 3 1000 9000
Supported LBA Sizes (NSID 0x1)
Id Fmt Data Metadt Rel_Perf
0 + 512 0 0
1 - 4096 0 0
=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 34 Celsius
Available Spare: 100%
Available Spare Threshold: 5%
Percentage Used: 0%
Data Units Read: 13,162,025 [6.73 TB]
Data Units Written: 3,846,954 [1.96 TB]
Host Read Commands: 156,458,059
Host Write Commands: 128,658,566
Controller Busy Time: 116
Power Cycles: 273
Power On Hours: 126
Unsafe Shutdowns: 15
Media and Data Integrity Errors: 1051556645921812989870080
Error Information Log Entries: 0
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0
Temperature Sensor 1: 34 Celsius
Temperature Sensor 2: 36 Celsius
Error Information (NVMe Log 0x01, 16 of 256 entries)
No Errors Logged
I encountered a similar SMART reading from the same model.
I'm seeing a reported Media and Data Integrity Errors value that's over 2^84.
It could just be an error with its SMART implementation or the utility reading from it.
Converting your reported value of 1051556645921812989870080 to hex, we get 0xdead0000000000000000 big endian and 0x0000000000000000adde little endian.
Similarly, when I convert my value to hex, I get 0xffff0000000000000000 big endian and 0x0000000000000000ffff little endian, where f just denotes a value other than 0.
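For reference, the conversion is easy to reproduce, e.g. in Python (purely illustrative):
value = 1051556645921812989870080
print(hex(value))                                        # 0xdead0000000000000000
swapped = int.from_bytes(value.to_bytes(10, "big"), "little")
print(hex(swapped))                                      # 0xadde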
I'm going to assume that the Media and Data Integrity Errors value has no actual meaning with regard to real errors. I doubt that both of us would have values that are padded with 16 0's when converted to hex. Something is sending/receiving/parsing bad data.
If you poke around the other reported SMART values in your post, and on my end, some of them don't seem to make much sense, either.

Output progress over time in hashcat

I am analysing the number of hashes cracked over a set period of time.
I am looking to save the current status of the crack every 10 seconds.
'''
Recovered........: 132659/296112 (44.80%) Digests, 0/1 (0.00%) Salts
Recovered/Time...: CUR:3636,N/A,N/A AVG:141703,8502198,204052756 (Min,Hour,Day)
Progress.........: 15287255040/768199139595 (1.99%)
'''
I want these 3 lines of the status saved every 10 seconds or so.
Is it possible to do this within hashcat or will I need to make a separate script in python?
Getting the status every 10 seconds
You can enable printing the status with --status, and you can make the status print every X seconds with --status-timer X. You can see these command line arguments on the hashcat options wiki page, or in hashcat --help.
Example: hashcat -a 0 -m 0 example.hash example.dict --status --status-timer 10
Saving all the statuses
I'm assuming that you just want to save everything that gets printed by hashcat while it's running. An easy way to do this is to copy everything from stdout into a file, and tee is the standard tool for that.
To be safe, let's use -a which appends to the file, so we don't accidentally overwrite previous runs. All we need to do is put | tee -a file.txt after our hashcat call.
Solution
Give this a shot, it should save all the statuses (and everything else from stdout) to output.txt:
hashcat -a A -m M hashes.txt dictionary.txt --status --status-timer 10 | tee -a output.txt
Just swap out A, M, hashes.txt, and dictionary.txt with the arguments you're using.
If you need help getting just the "Recovered" lines from this output file, or if this doesn't work on your computer (I'm on OSX), let me know in a comment.
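As a starting point, here is a minimal post-processing sketch in Python (assuming the output.txt produced above) that pulls just those status lines back out of the captured log:
# Filter the status lines of interest out of the captured hashcat output
wanted = ("Recovered", "Progress")       # "Recovered/Time..." also starts with "Recovered"
with open("output.txt") as log:
    for line in log:
        if line.lstrip().startswith(wanted):
            print(line.rstrip())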
In addition to Andrew Zick's answer, note that hashcat has native support for machine-readable status output - see the --machine-readable option. This produces tab-separated output like so:
STATUS 5 SPEED 111792 1000 EXEC_RUNTIME 0.007486 CURKU 1 PROGRESS 62 62 RECHASH 0 1 RECSALT 0 1 REJECTED 0 UTIL -1
STATUS 5 SPEED 14247323 1000 EXEC_RUNTIME 0.038953 CURKU 36 PROGRESS 2232 2232 RECHASH 0 1 RECSALT 0 1 REJECTED 0 UTIL -1
STATUS 5 SPEED 36929864 1000 EXEC_RUNTIME 1.661804 CURKU 1296 PROGRESS 80352 80352 RECHASH 0 1 RECSALT 0 1 REJECTED 0 UTIL -1
STATUS 5 SPEED 66538858 1000 EXEC_RUNTIME 3.237319 CURKU 46656 PROGRESS 2892672 2892672 RECHASH 0 1 RECSALT 0 1 REJECTED 0 UTIL -1
STATUS 5 SPEED 63562975 1000 EXEC_RUNTIME 3.480536 CURKU 1679616 PROGRESS 104136192 104136192 RECHASH 0 1 RECSALT 0 1 REJECTED 0 UTIL -1
... which is exactly what tools like Hashtopolis use to provide a front-end to hashcat output.
For machine-readable output, the options --outfile and --outfile-format are available. See the Format section of the output of hashcat --help for the options to --outfile-format:
- [ Outfile Formats ] -
# | Format
===+========
1 | hash[:salt]
2 | plain
3 | hex_plain
4 | crack_pos
5 | timestamp absolute
6 | timestamp relative

TEZ mapper resource request

We recently migrated from MapReduce to Tez for executing Hive queries on EMR. We are seeing cases where the exact same Hive query launches a very different number of mappers. See the Map 3 phase below: on the first run it requested 305 resources and on another run it requested 4534 mappers. (Please ignore the KILLED status because I manually killed the query.) Why does this happen? How can we change it to be based on the underlying data size instead?
Run 1
----------------------------------------------------------------------------------------------
VERTICES MODE STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
----------------------------------------------------------------------------------------------
Map 1 container KILLED 5 0 0 5 0 0
Map 3 container KILLED 305 0 0 305 0 0
Map 5 container KILLED 16 0 0 16 0 0
Map 6 container KILLED 1 0 0 1 0 0
Reducer 2 container KILLED 333 0 0 333 0 0
Reducer 4 container KILLED 796 0 0 796 0 0
----------------------------------------------------------------------------------------------
VERTICES: 00/06 [>>--------------------------] 0% ELAPSED TIME: 14.16 s
----------------------------------------------------------------------------------------------
Run 2
----------------------------------------------------------------------------------------------
VERTICES MODE STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
----------------------------------------------------------------------------------------------
Map 1 .......... container SUCCEEDED 5 5 0 0 0 0
Map 3 container KILLED 4534 0 0 4534 0 0
Map 5 .......... container SUCCEEDED 325 325 0 0 0 0
Map 6 .......... container SUCCEEDED 1 1 0 0 0 0
Reducer 2 container KILLED 333 0 0 333 0 0
Reducer 4 container KILLED 796 0 0 796 0 0
----------------------------------------------------------------------------------------------
VERTICES: 03/06 [=>>-------------------------] 5% ELAPSED TIME: 527.16 s
----------------------------------------------------------------------------------------------
This article explains the process by which Tez allocates resources; a rough worked example of the arithmetic is sketched after the quoted steps. https://cwiki.apache.org/confluence/display/TEZ/How+initial+task+parallelism+works
If Tez grouping is enabled for the splits, then a generic grouping logic is run on these splits to group them into larger splits. The idea is to strike a balance between how parallel the processing is and how much work is being done in each parallel process.
First, Tez tries to find out the resource availability in the cluster for these tasks. For that, YARN provides a headroom value (and in future other attributes may be used). Let's say this value is T.
Next, Tez divides T by the resource per task (say M) to find out how many tasks can run in parallel at one time (i.e. in a single wave): W = T/M.
Next, W is multiplied by a wave factor (from configuration - tez.grouping.split-waves) to determine the number of tasks to be used. Let's say this value is N.
If there are a total of X splits (input shards) and N tasks, then this would group X/N splits per task. Tez then estimates the size of data per task based on the number of splits per task.
If this value is between tez.grouping.max-size and tez.grouping.min-size, then N is accepted as the number of tasks. If not, then N is adjusted to bring the data per task in line with the max/min, depending on which threshold was crossed.
For experimental purposes, tez.grouping.split-count can be set in configuration to specify the desired number of groups. If this config is specified then the above logic is ignored and Tez tries to group splits into the specified number of groups. This is best effort.
After this the grouping algorithm is executed. It groups splits by node locality, then rack locality, while respecting the group size limits.
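To make the arithmetic concrete, here is an illustrative sketch in Python; the numbers and the 1.7 wave factor (the usual default for tez.grouping.split-waves) are assumptions, and this is not Tez's actual code:
# Illustrative Tez split-grouping arithmetic (assumed numbers, not Tez source)
headroom_mb   = 100_000      # T: resources YARN reports as currently available
task_size_mb  = 1_024        # M: resources requested per task
wave_factor   = 1.7          # tez.grouping.split-waves (assumed default)
total_splits  = 4_534        # X: raw input splits
avg_split_mb  = 64           # assumed average split size

tasks_per_wave   = headroom_mb // task_size_mb           # W = T / M
n_tasks          = int(tasks_per_wave * wave_factor)     # N = W * wave factor
splits_per_task  = max(1, total_splits // n_tasks)       # X / N
data_per_task_mb = splits_per_task * avg_split_mb

# N is kept only if data_per_task_mb lies between tez.grouping.min-size
# and tez.grouping.max-size; otherwise N is adjusted toward the crossed limit.
print(tasks_per_wave, n_tasks, splits_per_task, data_per_task_mb)

Because N depends on the headroom T at submission time, the same query can be planned with a very different mapper count from run to run; tying the count to data size means tuning tez.grouping.min-size / tez.grouping.max-size (or tez.grouping.split-count for experiments) rather than relying on the wave calculation.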

Fortran non advancing reading of a text file

I have a text file with a header of information followed by lines with just numbers, which are the data to be read.
I don't know how many lines are there in the header, and it is a variable number.
Here is an example:
filehandle: 65536
total # scientific data sets: 1
file description:
This file contains a Northern Hemisphere polar stereographic map of snow and ice coverage at 1024x1024 resolution. The map was produced using the NOAA/NESDIS Interactive MultisensorSnow and Ice Mapping System (IMS) developed under the directionof the Interactive Processing Branch (IPB) of the Satellite Services Division (SSD). For more information, contact: Mr. Bruce Ramsay at bramsay#ssd.wwb.noaa.gov.
Data Set # 1
Data Label:
Northern Hemisphere 1024x1024 Snow & Ice Chart
Coordinate System: Polar Stereographic
Data Type: BYTE
Format: I3
Dimensions: 1024 1024
Min/Max Values: 0 165
Units: 8-bit Flag
Dimension # 0
Dim Label: Longitude
Dim Format: Device Coordinates
Dim Units: Pixels
Dimension # 1
Dim Label: Latitude
Dim Format: Device Coordinates
Dim Units: Pixels
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0
2 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0
3 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0
..........................
I open the file using:
open(newunit=U, file = ValFile, STATUS = 'OLD', ACCESS = 'SEQUENTIAL', ACTION = 'READ')
Then, I read the file line by line and test for the type of line: header line or data line:
ios = 0
do while ( .NOT. is_iostat_end(ios) )
read(U, '(A)', iostat = ios, advance = 'NO') line ! Shouldn't advance to next line
if (is_iostat_end(ios)) stop "End of file reached before data section."
tol = getTypeOfLine(line, nValues) ! nValues = 1024, needed to test if line is data.
if ( tol > 0 ) then ! If the line holds data.
exit ! Exits the loop
else
read(U, '(A)', iostat = ios, advance = 'YES') line ! We advance to the next line
end if
end do
But the first read in the loop always advances to the next line, and this is a problem.
After exiting the above loop, I enter a new loop to read the data:
read(U, '(1024I1)', iostat = ios) Values(c,:)
The 1024 set of data can span some lines, but each set is a row in the matrix "Values".
The problem is that this second loop doesn't read the last line read in the testing loop (which is the first line of data).
A possible solution is to read the lines in the testing loop without advancing to the next line. For this I used advance='no', but it still advances to the next line. Why?
A non-advancing read will still position the file to just before the start of the next record if the end of the current record is encountered while reading from the file to satisfy the items in the input list of the read statement - non-advancing doesn't mean "never-advancing". You can use the value assigned to the variable nominated in an iostat specifier for the read statement to see whether the end of the current record was reached - use the IS_IOSTAT_EOR intrinsic or test against the equivalent value from ISO_FORTRAN_ENV.
(Implicit in the above is the fact that a non-advancing read still advances over the file positions that correspond to items actually read... hence once that getTypeOfLine procedure decides that it has a line of data at least part of that line has already been read. Unless you reposition the file subsequent "data" read statements will miss that part.)

Hiding Xvfb Terminal Logs

Every time I execute my tests in headless Firefox using Xvfb, I get a large chunk of logs. These logs display different parameters and their values.
I was wondering if I can disable these logs somehow. I googled a bit but could not find anything useful.
Following logs are displayed and I want to disable these.
5 XSELINUXs still allocated at reset
SCREEN: 0 objects of 168 bytes = 0 total bytes 0 private allocs
DEVICE: 4 objects of 96 bytes = 384 total bytes 0 private allocs
CLIENT: 0 objects of 152 bytes = 0 total bytes 0 private allocs
WINDOW: 0 objects of 32 bytes = 0 total bytes 0 private allocs
PIXMAP: 1 objects of 16 bytes = 16 total bytes 0 private allocs
GC: 0 objects of 56 bytes = 0 total bytes 0 private allocs
CURSOR: 0 objects of 8 bytes = 0 total bytes 0 private allocs
CURSOR_BITS: 0 objects of 8 bytes = 0 total bytes 0 private allocs
DBE_WINDOW: 0 objects of 24 bytes = 0 total bytes 0 private allocs
TOTAL: 5 objects, 400 bytes, 0 allocs
4 DEVICEs still allocated at reset
DEVICE: 4 objects of 96 bytes = 384 total bytes 0 private allocs
CLIENT: 0 objects of 152 bytes = 0 total bytes 0 private allocs
WINDOW: 0 objects of 32 bytes = 0 total bytes 0 private allocs
PIXMAP: 1 objects of 16 bytes = 16 total bytes 0 private allocs
GC: 0 objects of 56 bytes = 0 total bytes 0 private allocs
CURSOR: 0 objects of 8 bytes = 0 total bytes 0 private allocs
CURSOR_BITS: 0 objects of 8 bytes = 0 total bytes 0 private allocs
DBE_WINDOW: 0 objects of 24 bytes = 0 total bytes 0 private allocs
TOTAL: 5 objects, 400 bytes, 0 allocs
1 PIXMAPs still allocated at reset
PIXMAP: 1 objects of 16 bytes = 16 total bytes 0 private allocs
GC: 0 objects of 56 bytes = 0 total bytes 0 private allocs
CURSOR: 0 objects of 8 bytes = 0 total bytes 0 private allocs
CURSOR_BITS: 0 objects of 8 bytes = 0 total bytes 0 private allocs
DBE_WINDOW: 0 objects of 24 bytes = 0 total bytes 0 private allocs
TOTAL: 1 objects, 16 bytes, 0 allocs
Same problem here. Did not find a clean solution to disable them so I included a pre-build task in Jenkins to clean those log files before running the automated tests. And as my automated tests are launched very regularly by Jenkins, logs are often cleaned. This way I don't risk any disk-full issue.
There are a number of ways to resolve this problem. I would first suggest initializing Xvfb in a separate terminal from where you're running your code. Xvfb log messages will be dumped in the terminal it's running in.
Another solution would be to use a wrapper. If for example you're coding in python, you could try https://github.com/cgoldberg/xvfbwrapper
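A minimal sketch of how such a wrapper is typically used (based on the xvfbwrapper README; the resolution values are just examples):
from xvfbwrapper import Xvfb

vdisplay = Xvfb(width=1600, height=1200)   # starts Xvfb on a free display number
vdisplay.start()
try:
    # run your Selenium / Firefox tests here
    pass
finally:
    vdisplay.stop()

Whether this silences the allocation messages depends on how the wrapper launches Xvfb, so redirecting the output to /dev/null as described below remains the surest option.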
You can get those verbose messages to go away if you redirect the log output from Xvfb to /dev/null.
In my case, I was using the Xvfb plugin in Jenkins and running Selenium tests using Firefox on a CentOS machine, and got the same verbose messages.
I resolved it by un-checking the plugin's box for "Logging Xvfb log output", or you can also do it at the $ or # prompt:
/usr/bin/Xvfb :99 -ac -screen 0 1600x1200x16 2>/dev/null 1>&2 &