How can I get and set BIOS settings over an iLO connection? I need to automate the BIOS configuration that I currently do manually. The programming/scripting language doesn't matter, and I have machines from a variety of vendors (IBM, HP, Dell), so let me know how to do this on any of them.
For HP, use conrep!
Description:
conrep is a command-line utility for capturing and applying BIOS settings using XML definition files.
Capture the current settings to an XML file:
/opt/hp/conrep -s -f <filename>
Apply settings from an XML input file:
/opt/hp/conrep -l -f <filename>
Help:
Usage: /opt/hp/conrep -s | -l [-f output filename] [-x xml configuration filename] [-?]
-s   Saves the current configuration to a file.
-l   Loads configuration settings from a file.
-f   Name of the output file. If not present, the output filename defaults to conrep.dat.
-x   Name of the XML definition file; provide the full path, e.g. /opt/hp/conrep.xml.
     If not present, the XML configuration defaults to conrep.xml.
Error Codes:
0 - Success
1 - Bad XML File
2 - Bad Data File
4 - Admin Password set
5 - No XML Tag
6 - Platform is not supported with current XML definition file.
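This isn't part of the conrep documentation, but as a rough sketch of how the capture/apply cycle could be automated across several servers (all hostnames and paths below are hypothetical, and conrep must already be installed on each machine):
#!/bin/sh
# Capture BIOS settings from one reference machine, then apply them
# to a list of targets over SSH. Hostnames and paths are placeholders.
set -e

reference='hp-reference'
targets='hp-node1 hp-node2 hp-node3'
settings='/tmp/bios-settings.xml'

# Save the reference machine's configuration and fetch it locally
ssh "root@$reference" "/opt/hp/conrep -s -f $settings"
scp "root@$reference:$settings" "$settings"

# Push the captured configuration to each target and apply it
for host in $targets; do
    scp "$settings" "root@$host:$settings"
    if ssh "root@$host" "/opt/hp/conrep -l -f $settings"; then
        echo "$host: settings applied (take effect after reboot)"
    else
        echo "$host: conrep failed, see error codes above" >&2
    fi
done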
Files are being written to a directory using the COPY query:
COPY (SELECT * FROM animals) TO '/var/lib/postgresql/data/backups/2020-01-01/animals.sql' WITH CSV DELIMITER ',';
However if the directory 2020-01-01 does not exist, we get the error
could not open file "/var/lib/postgresql/data/backups/2020-01-01/animals.sql" for writing: No such file or directory
The PostgreSQL server is running inside a Docker container with the volume mapping /mnt/backups:/var/lib/postgresql/data/backups
The Copy query is being sent from a Node.js app outside of the Docker container.
The mapped host directory /mnt/backups was created by Docker Compose and is owned by root, so the Node.js app sending the COPY query is unable to create the missing directories due to insufficient permissions.
The backup file is meant to be transferred out of the Docker container to the Docker host.
Question: Is it possible to use an SQL query to ask PostgreSQL 11.2 to create a directory if it does not exist? If not, how would you recommend the directory be created?
Using Node.js 12.14.1 on an Ubuntu 18.04 host, PostgreSQL 11.2 inside the container, and Docker 19.03.5.
An easy way to solve this is to create the file directly on the client machine. With COPY ... TO STDOUT, you can redirect the query output to the client's standard output, which you can then catch and save to a file. For instance, using psql on the client machine:
$ psql -U your_user -d your_db -c "COPY (SELECT * FROM animals) TO STDOUT WITH CSV DELIMITER ','" > file.csv
Creating an output directory in case it does not exist:
$ mkdir -p /mnt/backups/2020-01/ && psql -U your_user -d your_db -c "COPY (SELECT * FROM animals) TO STDOUT WITH CSV DELIMITER ','" > /mnt/backups/2020-01/file.csv
On a side note: try to avoid exporting files onto the database server. Although it is possible, I consider it bad practice. Doing so means either writing files into the postgres system directories or giving the postgres user permission to write somewhere else, and neither is something you should be comfortable with. Export data directly to the client, either using COPY as I mentioned or following the advice from @Schwern. Good luck!
Postgres has its own backup and restore utilities which are likely to be a better choice than rolling your own.
When used with one of the archive file formats and combined with pg_restore, pg_dump provides a flexible archival and transfer mechanism. pg_dump can be used to backup an entire database, then pg_restore can be used to examine the archive and/or select which parts of the database are to be restored. The most flexible output file formats are the “custom” format (-Fc) and the “directory” format (-Fd). They allow for selection and reordering of all archived items, support parallel restoration, and are compressed by default. The “directory” format is the only format that supports parallel dumps.
A simple backup rotation script might look like this:
#!/bin/sh
table='animals'
url='postgres://username@host:port/database_name'
date=$(date -Idate)
file="/path/to/your/backups/$date/$table.sql"

# Create the dated backup directory if it does not already exist
mkdir -p "$(dirname "$file")"

pg_dump "$url" -w -Fc --table="$table" -f "$file"
To avoid hard-coding the database password, -w tells pg_dump never to prompt for one and to rely on a password file instead. Alternatively, you can use any of Postgres's many other authentication options.
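For reference, the password file pg_dump consults is ~/.pgpass, with one hostname:port:database:username:password entry per line; the values below are placeholders matching the connection URL in the script above:
host:port:database_name:username:your_password
libpq ignores the file unless its permissions are restrictive, so also run chmod 600 ~/.pgpass.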
The Bitvise SSH Client version history states that v8.15 supports directory mirroring:
The graphical SSH Client and sftpc now support recursive directory mirroring. A directory and all of its subdirectories and files can be synchronized either in the upload or download direction.
I can find it in the GUI, but I can't find how to do it using sftpc.exe. There is no mention of mirroring in sftpc.exe -help.
How can I do directory mirroring from the command line?
You point out a tangential design issue in sftpc: getting help for SFTP commands requires you to use sftpc interactively and connect to the server. You can then get help from the interactive prompt.
This is inconvenient, so I opened a feature request for us to make the interactive help available from the command line, as well.
The help text you are looking for is as follows - for the put command:
sftp> help put
USAGE: put local-path [remote-path] [-bg | -fg] [-s] [-o] [-r]
[-f] [-noTime] [-m=mode] [-dm=mode] [-mirror [-erase]]
[-b | -lf | -std | -tlf | -t]
DESCRIPTION: Upload file.
PARAMETERS:
-bg Start (queue) upload in background.
-fg Start upload in foreground.
-s Include subdirectories (recursive).
-r Synchronize file content. If synchronization is not available,
resume existing incomplete files using a heuristic resume.
Heuristic resume MAY result in an inconsistent destination file
if the destination file content has been modified in the middle.
-o Synchronize file content. If synchronization is not available,
force existing file to be overwritten. If -r is also specified,
heuristic resume is tried first.
-del Remove local file after successful upload.
-f Assume remote-path is a file (not a directory)
-noTime Do not synchronize file modification times.
-m=mode Set the access mode for remote files to 'mode'.
-dm=mode Set the access mode for new remote directories to 'mode'.
If directory already exists, access mode will not be changed.
-mirror Mirror local-path to remote-path. Local files that do not exist
remotely will be uploaded. Remote files that are different than
their local versions will be overwritten.
-erase With -mirror, erase remote files that are not present locally.
FILE TRANSFER MODE - if present, overrides mode selected with "type":
-b Upload files as binary; no conversions.
-lf Auto-detect text files. In text files, replace CRLF with LF.
Binary files are unaffected.
-std Auto-detect text files. Upload text files using the SFTP v4+ text
file transfer mechanism. Binary files are unaffected. Not
available when SFTP version 3 or lower is in use.
-tlf Upload all files as textual. Replace all CRLF bytes with LF.
-t Upload all files using the SFTP v4+ text file transfer mechanism.
Not available when SFTP version 3 or lower is in use.
And for the get command:
sftp> help get
USAGE: get remote-path [local-path] [-bg | -fg] [-s] [-o] [-r]
[-f] [-noTime] [-lit] [-mirror [-erase]]
[-b | -lf | -std | -tlf | -t]
DESCRIPTION: Download file.
PARAMETERS:
-bg Start (queue) download in background.
-fg Start download in foreground.
-s Include subdirectories (recursive).
-r Synchronize file content. If synchronization is not available,
resume existing incomplete files using a heuristic resume.
Heuristic resume MAY result in an inconsistent destination file
if the destination file content has been modified in the middle.
-o Synchronize file content. If synchronization is not available,
force existing file to be overwritten. If -r is also specified,
heuristic resume is tried first.
-del Remove remote file after successful download.
-f Assume remote-path is a file (not a directory).
-noTime Do not synchronize file modification times.
-lit Treat remote-path literally (not a wildcard pattern).
-mirror Mirror remote-path to local-path. Remote files that do not exist
locally will be downloaded. Local files that are different than
their remote versions will be overwritten.
-erase With -mirror, erase local files that are not present remotely.
FILE TRANSFER MODE - if present, overrides mode selected with "type":
-b Download files as binary; no conversions.
-lf Auto-detect text files. In text files, replace LF with CRLF.
Binary files are unaffected.
-std Behaves same as -lf when downloading. Not available when SFTP
version 3 or lower is in use.
-tlf Download all files as textual. Replace all LF bytes with CRLF.
-t Download all files using the SFTP v4 text file transfer mechanism.
Not available when SFTP version 3 or lower is in use.
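Putting this together, a mirroring session from the command line could look like the following sketch. The host, account, and paths are placeholders, and you should check sftpc -help for the exact connection syntax your version supports; the put flags come straight from the help text above:
C:\> sftpc user@example.com
sftp> put C:\data\website /var/www/website -mirror -erase
sftp> quit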
I hope this helps!
I don't normally monitor Stack Overflow, so please feel free to call my attention by opening a support case with Bitvise if you need me to look at something else.
I recommend also using the latest Bitvise SSH Client version. Currently, this is 8.35. It's free of charge for use in any environment, and we try to ensure that each version is a strict upgrade that does not introduce new difficulties. We want there to be no reason to stay behind. :-)
I'm running Apache JMeter 3.3 on CentOS from the command line and generating a ".jtl" Summary Report file using the following command:
./jmeter -n -t requests.jmx -l log.jtl
Can I generate a file and view the Results Tree by importing the file into the Apache JMeter GUI? If yes, then how?
To do that, just add a View Results Tree listener to your test and fill in the filename field.
Ensure you check the fields you want by clicking on "Configure".
Note that the more data you save, the more you impact JMeter's performance.
You can run your test as:
./jmeter -Jjmeter.save.saveservice.output_format=xml \
  -Jjmeter.save.saveservice.response_data=true \
  -Jjmeter.save.saveservice.samplerData=true \
  -Jjmeter.save.saveservice.requestHeaders=true \
  -Jjmeter.save.saveservice.url=true \
  -Jjmeter.save.saveservice.responseHeaders=true \
  -n -t requests.jmx -l log.jtl
Alternatively, you can add the following lines to the user.properties file (which lives in the "bin" folder of your JMeter installation):
jmeter.save.saveservice.output_format=xml
jmeter.save.saveservice.response_data=true
jmeter.save.saveservice.samplerData=true
jmeter.save.saveservice.requestHeaders=true
jmeter.save.saveservice.url=true
jmeter.save.saveservice.responseHeaders=true
This way JMeter will store results in a form that can be examined in the View Results Tree listener.
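If you would rather not touch user.properties, the same six lines can live in a separate file that JMeter loads at launch with the -q (--addprop) option; the filename below is just an example:
./jmeter -q saveservice-overrides.properties -n -t requests.jmx -l log.jtl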
References:
Configuring JMeter
Results file configuration
Apache JMeter Properties Customization Guide
xmllint works fine with http://somesite.xml, but it doesn't work with https://somesite.xml:
xmllint https://somesite.xml
warning: failed to load external entity "https://somesite.xml"
As a workaround, you could use another utility, such as curl or wget, to download the file first and then pipe it to xmllint:
curl --silent "https://somesite.xml" | xmllint -
Notes:
Use - ("hyphen/minus") for xmllint's filename argument to get its XML input from the standard input stream instead of from a file or URL.
You might want to use --silent (-s) to suppress curl progress/error messages, to prevent those from being parsed by xmllint.
Quotes might be required around the URL if it contains special characters.
This should work for xmllint's XML input over HTTPS, but I'm not sure about a DTD or schema; you might need to download those to local files first, using a separate curl or wget command.
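For example, a sketch that fetches an XSD to a local file before validating (both URLs are hypothetical):
curl --silent --output schema.xsd "https://example.com/schema.xsd"
curl --silent "https://somesite.xml" | xmllint --noout --schema schema.xsd -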
I'm looking for pages defined by the ToolTwist Designer that use a specific widget. I've tried using grep to search for the widget name but nothing is found, even though I've checked it is in the file.
This is the command I'm using to check a specific page definition:
grep myproject.TestWidget widgets/test_pages/testPage/scratch_me/conf.xml
Any ideas what I'm doing wrong?
The widget definition files are stored as UTF-16 (but this may change in the future). Before running grep, convert the file like this:
iconv -f UTF-16 -t ISO-8859-15 widgets/test_pages/testPage/scratch_me/conf.xml \
| grep myproject.TestWidget
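To search every page definition under the widgets directory at once rather than one file at a time, a small bash loop along the same lines should work (it converts to UTF-8 here rather than ISO-8859-15; either is fine for grep's purposes):
find widgets -name conf.xml -print0 |
while IFS= read -r -d '' f; do
    # Convert each UTF-16 file on the fly and search the converted text
    if iconv -f UTF-16 -t UTF-8 "$f" | grep -q 'myproject.TestWidget'; then
        echo "$f"
    fi
done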