Getting oh-my-zsh 'history' to display command date and time - oh-my-zsh

The .zshrc has the following lines:
# Uncomment the following line if you want to change the command execution time
# stamp shown in the history command output.
# You can set one of the optional three formats:
# "mm/dd/yyyy"|"dd.mm.yyyy"|"yyyy-mm-dd"
# or set a custom format using the strftime function format specifications,
# see 'man strftime' for details.
# HIST_STAMPS="mm/dd/yyyy"
But uncommenting that line and running history does not show the timestamps.

The .zshrc comment text is misleading.
Use:
HIST_STAMPS="%d/%m/%y %T"
to show the day, month, year and time respectively.
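For context, a minimal sketch of where the setting goes (assuming the standard oh-my-zsh template; the sample output is illustrative):
# in ~/.zshrc, before oh-my-zsh is sourced
HIST_STAMPS="%d/%m/%y %T"
source $ZSH/oh-my-zsh.sh
# then, in a new shell, each history entry is prefixed with its timestamp:
history
# 1001  05/02/24 14:31:07  git status
oh-my-zsh wraps history in its own helper, and for a custom HIST_STAMPS value it should pass the string to fc as a strftime format, so any specifier from 'man strftime' ought to work here.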

I'm running zsh 5.7.1 (x86_64-apple-darwin19.0) with omz.
HIST_STAMPS="mm/dd/yyyy" now works as intended.

zsh declare PROMPT using multiple lines

I would like to declare my ZSH prompt using multiple lines and comments, something like:
PROMPT="
%n # username
#
%m # hostname
\ # space
%~ # directory
$
\ # space
"
(e.g. something like perl regex's "ignore whitespace mode")
I could swear I used to do something like this, but cannot find those old files any longer. I have searched for variations of "zsh declare prompt across multiple lines" but haven't quite found it.
I know that I can use \ for line continuation, but then we end up with newlines and whitespaces.
edit: Maybe I am misremembering about comments - here is an example without comments.
Not exactly what you are looking for, but you don't need to define PROMPT in a single assignment:
PROMPT="%n" # username
PROMPT+="#%m" # #hostname
PROMPT+=" %~" # directory
PROMPT+="$ "
Probably closer to what you wanted is the ability to join the elements of an array:
prompt_components=(
%n # username
" " # space
%m # hostname
" " # space
"%~" # directory
"$"
)
PROMPT=${(j::)prompt_components}
Or, you could let the j flag add the space delimiters, rather than putting them in the array:
# This is slightly different from the above, as it will put a space
# between the directory and the $ (which IMO would look better).
# I leave it as an exercise to figure out how to prevent that.
prompt_components=(
"%n#%m" # username#hostname
"$~" # directory
"$"
)
PROMPT=${(j: :)prompt_components}
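As a quick sanity check (illustrative, using the array from the last example), the joined value is a single flat prompt string:
print -r -- $PROMPT
# -> %n#%m %~ $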

JMeter non-GUI mode CSV report not showing latency

I am trying to get the JMeter HTML report for a file transfer over the SFTP protocol.
I am using the SSH SFTP Protocol plugin and have added a Simple Data Writer to that thread group.
I have created my own SFTP server using Apache MINA; the JMeter script hits that server and uploads the file.
Script Parameters:
Thread Group - 250
Ramp up period - 50
Loop Count - 1
After running the script in non-GUI mode as nohup sh jmeter.sh -n -t Singlepart_MultipleThread_RampUp.jmx -l Singlepart_MultipleThread_RampUp.jtl, I get a CSV file generated, which I convert into an HTML report with jmeter -g <csv> -o <destination_folder>.
The HTML report shows Latency vs Time and Latency vs Request as zero, and even the CSV report shows the latency column as zero.
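For reference, a consolidated sketch of that command sequence (assuming jmeter is on the PATH; file and folder names are the ones above or illustrative):
# run the test plan in non-GUI mode and write the results file
jmeter -n -t Singlepart_MultipleThread_RampUp.jmx -l Singlepart_MultipleThread_RampUp.jtl
# generate the HTML dashboard from the results file
jmeter -g Singlepart_MultipleThread_RampUp.jtl -o html_report
# or do both in one run: -e builds the dashboard when the test finishes
jmeter -n -t Singlepart_MultipleThread_RampUp.jmx -l results.jtl -e -o html_report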
Below is my user.properties file
user.properties
# Latencies Over Time graph definition
jmeter.reportgenerator.graph.latenciesOverTime.classname=org.apache.jmeter.report.processor.graph.impl.LatencyOverTimeGraphConsumer
jmeter.reportgenerator.graph.latenciesOverTime.title=Latencies Over Time
jmeter.reportgenerator.graph.latenciesOverTime.property.set_granularity=${jmeter.reportgenerator.overall_granularity}
# Latencies Vs Request graph definition
jmeter.reportgenerator.graph.latencyVsRequest.classname=org.apache.jmeter.report.processor.graph.impl.LatencyVSRequestGraphConsumer
jmeter.reportgenerator.graph.latencyVsRequest.title=Latencies Vs Request
jmeter.reportgenerator.graph.latencyVsRequest.exclude_controllers=true
jmeter.reportgenerator.graph.latencyVsRequest.property.set_granularity=${jmeter.reportgenerator.overall_granularity}
jmeter.properties
#---------------------------------------------------------------------------
# Results file configuration
#---------------------------------------------------------------------------
# This section helps determine how result data will be saved.
# The commented out values are the defaults.
# legitimate values: xml, csv, db. Only xml and csv are currently supported.
jmeter.save.saveservice.output_format=csv
# The below properties are true when field should be saved; false otherwise
#
# assertion_results_failure_message only affects CSV output
jmeter.save.saveservice.assertion_results_failure_message=true
#
# legitimate values: none, first, all
jmeter.save.saveservice.assertion_results=all
#
jmeter.save.saveservice.data_type=true
jmeter.save.saveservice.label=true
jmeter.save.saveservice.response_code=true
# response_data is not currently supported for CSV output
jmeter.save.saveservice.response_data=true
# Save ResponseData for failed samples
jmeter.save.saveservice.response_data.on_error=false
jmeter.save.saveservice.response_message=true
jmeter.save.saveservice.successful=true
jmeter.save.saveservice.thread_name=true
jmeter.save.saveservice.time=true
jmeter.save.saveservice.subresults=true
jmeter.save.saveservice.assertions=true
jmeter.save.saveservice.latency=true
# Only available with HttpClient4
#jmeter.save.saveservice.connect_time=true
jmeter.save.saveservice.samplerData=true
#jmeter.save.saveservice.responseHeaders=false
#jmeter.save.saveservice.requestHeaders=false
#jmeter.save.saveservice.encoding=false
jmeter.save.saveservice.bytes=true
# Only available with HttpClient4
jmeter.save.saveservice.sent_bytes=true
jmeter.save.saveservice.url=true
jmeter.save.saveservice.filename=false
jmeter.save.saveservice.hostname=false
jmeter.save.saveservice.thread_counts=true
jmeter.save.saveservice.sample_count=false
jmeter.save.saveservice.idle_time=true
# Timestamp format - this only affects CSV output files
# legitimate values: none, ms, or a format suitable for SimpleDateFormat
#jmeter.save.saveservice.timestamp_format=ms
#jmeter.save.saveservice.timestamp_format=yyyy/MM/dd HH:mm:ss.SSS
# For use with Comma-separated value (CSV) files or other formats
# where the fields' values are separated by specified delimiters.
# Default:
#jmeter.save.saveservice.default_delimiter=,
# For TAB, one can use:
#jmeter.save.saveservice.default_delimiter=\t
# Only applies to CSV format files:
# Print field names as first line in CSV
#jmeter.save.saveservice.print_field_names=true
# Optional list of JMeter variable names whose values are to be saved in the result data files.
# Use commas to separate the names. For example:
#sample_variables=SESSION_ID,REFERENCE
# N.B. The current implementation saves the values in XML as attributes,
# so the names must be valid XML names.
# By default JMeter sends the variable to all servers
# to ensure that the correct data is available at the client.
# Optional xml processing instruction for line 2 of the file:
# Example:
#jmeter.save.saveservice.xml_pi=<?xml-stylesheet type="text/xsl" href="../extras/jmeter-results-detail-report.xsl"?>
# Default value:
#jmeter.save.saveservice.xml_pi=
# Prefix used to identify filenames that are relative to the current base
#jmeter.save.saveservice.base_prefix=~/
# AutoFlush on each line written in XML or CSV output
# Setting this to true will result in less test results data loss in case of Crash
# but with impact on performances, particularly for intensive tests (low or no pauses)
# Since JMeter 2.10, this is false by default
#jmeter.save.saveservice.autoflush=false
So basically I am facing issues in two places:
How do I get the latency value?
When I set the ramp-up to 1, the script with a thread group of 50 takes around 16 seconds to complete the upload, whereas if I set the ramp-up to something other than 1, such as 10, the script ends after exactly 10 seconds, regardless of whether the file has finished uploading, and produces vague results in the HTML report as well.
Any idea how to solve this, or do I need to do anything else in the script?
You cannot, as the plugin you're using doesn't call the SampleResult.setLatency() function anywhere; theoretically it should be possible to request the functionality from the plugin developers.
Setting a 10-second ramp-up period for 50 virtual users means that JMeter starts with 1 virtual user and gradually increases the load to 50 within the 10-second duration. Make sure to have enough loops defined in the Thread Group, as you may run into the situation where the 1st user has already finished uploading the file and was terminated while the 2nd user hasn't yet started, so you have a maximum concurrency of 1 user (this can be checked using the Active Threads Over Time listener). See JMeter Test Results: Why the Actual Users Number is Lower than Expected for a more detailed explanation if needed.

monit alert based on previous log line in check file

In the following auth.log
Mon DD HH:MM:SS SFTPHOST internal-sftp[21583]: realpath "/path/to/*.txt"
Mon DD HH:MM:SS SFTPHOST internal-sftp[21583]: sent status No such file
I only want an alert on "sent status No such file" IFF the previous line does NOT contain *. As a stretch goal it would be nice to check that that line has the same PID (number in the square brackets).
Any way to do that? Or am I using the wrong tool?
You can do that with a CHECK PROGRAM combined with a custom script that will do all the hard work (something similar to https://stackoverflow.com/a/17228241/374236 if I understand you correctly).
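A minimal sketch of that approach (the script path, service name and log location are assumptions, not from the original post): a small shell script exits non-zero when a "sent status No such file" line is preceded by a line from the same PID that does not contain a *, and monit alerts on the exit status.
#!/bin/sh
# check_sftp_missing.sh -- exit 1 when the alert condition is found
LOG=/var/log/auth.log    # assumed log location
awk '
  /internal-sftp\[[0-9]+\]:/ {
      pid = $0
      sub(/.*internal-sftp\[/, "", pid); sub(/\].*/, "", pid)   # PID between the brackets
      if ($0 ~ /sent status No such file/ && (pid in prev) && prev[pid] !~ /\*/)
          found = 1
      prev[pid] = $0       # remember the last line seen for this PID
  }
  END { exit found }
' "$LOG"
Then in monitrc:
check program sftp_no_such_file with path "/usr/local/bin/check_sftp_missing.sh"
    if status != 0 then alert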

making scripts for specific output from the mysql slow log

I want to match a word in the file and copy the following lines up to the next #.
That is, I have a MySQL slow query log like the one below, and from it I want to select the entry for the current date and time, up to the next #.
Please guide me on this.
# Time: **161205 10:27:39**
# localhost []
# Query_time: 5.517501 Lock_time: 0.034388 Rows_sent: 50 Rows_examined: 27061434
SET timestamp=1480913859;
SELECT ,NULL,NULL,(SELECT
GROUP_CONCAT(project_master_name)
FROM
project_inquiry_detail pid,project_master pm
WHERE
order by T.InquiryDate desc , TL.rowid desc limit 0,50;
**# Time: 161205 14:53:50**
Is that what you are looking for?
sed -n -r -e '/^SELECT /p; /^(SELECT |#)/!{p};' mysql.log
This one works for me
sed -n -r '/^SELECT/,/^#/p' slow.log
The logic is simply "don't print anything unless told to" (the -n switch), and print only the lines between (and including) a line starting with SELECT and a line starting with #.
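If you need the block for one specific timestamp rather than every query, a small awk sketch (the timestamp here is just the one from the sample log) prints from the matching # Time: line up to, but not including, the next # Time: line:
awk -v ts="161205 10:27:39" '
  /^# Time:/ { printing = (index($0, ts) > 0) }   # start at the wanted timestamp, stop at the next one
  printing
' slow.log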

file seek in wlst / Jython 2.2.1 fails for lines longer than 8091 characters

For a CSV file generated in WLST / Jython 2.2.1 I want to update the header, the first line of the output file, when new metrics have been detected. This works fine by using seek to go to the first line and overwriting it, but it fails when the first line exceeds 8091 characters.
I made a simplified script which reproduces the issue I am facing:
#!/usr/bin/python
#
import sys
global maxheaderlength
global initheader
maxheaderlength=8092
logFilename = "test.csv"
# Create (overwrite existing) file
logfileAppender = open(logFilename,"w",0)
logfileAppender.write("." * maxheaderlength)
logfileAppender.write("\n")
logfileAppender.close()
# Append some lines
logfileAppender = open(logFilename,"a",0)
logfileAppender.write("2nd line\n")
logfileAppender.write("3rd line\n")
logfileAppender.write("4th line\n")
logfileAppender.write("5th line\n")
logfileAppender.close()
# Seek back to beginning of file and add data
logfileAppender = open(logFilename,"r+",0)
logfileAppender.seek(0) ;
header = "New Header Line" + "." * maxheaderlength
header = header[:maxheaderlength]
logfileAppender.write(header)
logfileAppender.close()
When maxheaderlength is 8091 or lower I get the results as expected: test.csv starts with "New Header Line" followed by 8076 dots, and then by the lines
2nd line
3rd line
4th line
5th line
When maxheaderlength is 8092 or greater, test.csv ends up as a file starting with 8092 dots, followed by "New Header Line" and then another 8077 dots. The 2nd to 5th lines are not shown, probably overwritten by the dots.
Any idea how to work around or fix this?
I too was able to reproduce this extremely odd behaviour, and indeed it works correctly in Jython 2.5.3, so I think we can safely say this is a bug in 2.2.1 (which unfortunately you're stuck with for WLST).
My usual recourse in these circumstances is to fall back to using native Java methods. Changing the last block of code as follows seems to work as expected:
# Seek back to beginning of file and add data
from java.io import RandomAccessFile
logfileAppender = RandomAccessFile(logFilename, "rw")
logfileAppender.seek(0) ;
header = "New Header Line" + "." * maxheaderlength
header = header[:maxheaderlength]
logfileAppender.writeBytes(header)
logfileAppender.close()