I am not able to find the objdump command on an IBM AIX 5.1 machine. I want to get the assembly instructions (disassemble) from a library built on AIX. Linux has the objdump command and Solaris has the dis command for this. What is the equivalent command on IBM AIX?
You can use the dis command to disassemble object files on AIX; it should come with xlc.
It may be easier to just install the GNU binutils suite to get objdump, though. It's available from the AIX Toolbox for Linux Applications.
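Once the Toolbox binutils package is installed, disassembly works much as it does on Linux; for example (the library name here is only a placeholder):
objdump -d libexample.a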
I have only part of an answer. Following up on @CoreyStup, I found the dis command in /opt/IBM/xlc/16.1.0/exe/dis (not the bin directory). But it was very recalcitrant and seemed unwilling to print to stdout or stderr. I did find that it writes the output to a filename created by replacing the .o on the command line with .s. So:
% /opt/IBM/xlc/16.1.0/exe/dis aix/ktraceback.o
% ls -l aix/ktraceback.s
-rw-r--r-- 1 ota staff 10432 Nov 19 14:01 aix/ktraceback.s
% /opt/IBM/xlc/16.1.0/exe/dis -o /tmp/foo.s aix/ktraceback.o
% ls -l /tmp/foo.s
-rw-r--r-- 1 ota staff 10432 Nov 19 14:06 /tmp/foo.s
Using strings -a -n2, I was able to extract a possible usage message, but it was unclear what most of the options do, with the exception of -o.
dis disassembler version 1.27.0.1 Nov 9 2018 08:18:36
%s [-D] [-G] [-g] [-h] [-i] [-k] [-L] [-l] [-M] [-m <architecture>]
[-o <file name>] [-p <level>] [-r] [-R] [-S] [-T] [-t] [ filename ]
-D  disassemble .data and .bss only
-G  do not print symbolic debugging information
-g  print symbolic debugging information (default)
-H  print BO branch hints
-h  print headers
-i  line input mode
-k  do not interpret traceback table
-L  print linker section
-l  print line number table
-M  print text maps
-e  print except entries
-m  force architecture selection:
    pwr|pwrx|pwr2|pwr2s|p2sc|com|403|601|602|603|603e|604|604e|620|
    ppc|ppcgr|ppc64|rs64a|rs64b|rs64c|pwr3|pwr4|pwr4x|pwr5|pwr5x|
    pwr6|pwr6e|pwr7|pwr8|pwr9|[ppc]970|440|440d|450|450d
-o  output to file
-p  print level
-R  print relative offsets (no added labels)
-r  print relocation table
-S  suppress printing symbolic definitions
-T  disassemble .text only
-t  print symbol table
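Based on that usage text, a reasonable invocation to disassemble only the .text section and send the output to a file of your choosing would be something like the following (untested; the option meanings are simply those listed above):
% /opt/IBM/xlc/16.1.0/exe/dis -T -o /tmp/ktraceback.s aix/ktraceback.o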
I'm trying to take continuous traces which are written to files that are limited by both duration (-G option) and size (-C option). The files are automatically named with the -w option, and finally the files are compressed with the -z gzip option. Altogether what I have is:
tcpdump -i eth0 -w /home/me/pcaps/MyTrace_%Y-%m-%d_%H%M%S.pcap -s 0 -C 100 -G 3600 -Z root -z gzip &
The problem is that with the -C option, the current file count is appended onto the name, so I wind up with files ending in: .pcap2.gz .pcap3.gz .pcap4.gz, etc. I would much prefer to have them end as: _2.pcap.gz _3.pcap.gz _4.pcap.gz, etc.
But if I remove .pcap from the -w option, I wind up with 2.gz 3.gz 4.gz
This could work if I could include options in the -z command, like -z "gzip -S .pcap.gz", so that gzip itself appends the .pcap.gz suffix, or if I could use an alias like pcap_gzip="gzip -S .pcap.gz" and then -z pcap_gzip, but neither option seems to work; the latter produces this error: compress_savefile:execlp(gzip -S pcap.gz, /home/me/pcaps/MyTrace_2018-08-07_105308_27): No such file or directory
I encountered the same problem today, on CentOS 6. I found your question, but the answer did not work for me.
In fact, it only needs a slight adjustment: write the absolute path of both the saved file name and the script to be executed, for example:
tcpdump -i em1 ... -s 0 -G 10 -w '/home/Svr01_std_%Y%m%d_%H%M%S.pcap' -Z root -z /home/pcapup2arcive.sh
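The contents of that script are not shown, but tcpdump's -z option simply runs the given command with the just-rotated capture file as its single argument, so a minimal placeholder (the gzip step is an assumption; the real pcapup2arcive.sh may do more) would be:
#!/bin/bash
# tcpdump -z invokes this with the finished capture file as $1
gzip "$1"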
I found out that although the alias doesn't work, I was able to put the same commands in a script and invoke the script via tcpdump -z.
pcap_gzip.sh:
#!/bin/bash
gzip -S .pcap.gz "$@"
Then:
tcpdump -i eth0 -w /home/me/pcaps/MyTrace_%Y-%m-%d_%H%M%S -s 0 -C 100 -G 3600 -Z root -z pcap_gzip.sh &
My problem is that I have a cluster server with Torque PBS and I want to use it to run a sequence comparison with the program rapsearch.
The normal RapSearch command is:
./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32
Now I want to run it with 2 nodes on the cluster server.
I've tried with: echo "./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32" | qsub -l nodes=2 but nothing happened.
Do you have any suggestions? Where am I wrong? Please help.
Standard output (and error output) files are placed in your home directory by default; take a look. You are looking for a file named STDIN.e[numbers]; it will contain the error message.
However, I see that you're using ./rapsearch but are not really being explicit about what directory you're in. Your problem is therefore probably a matter of changing directory into the directory that you submitted from. When your terminal is in the directory of the rapsearch executable, try echo "cd \$PBS_O_WORKDIR && ./rapsearch [arguments]" | qsub [arguments] to submit your job to the cluster.
Other tips:
You could add rapsearch to your path if you use it often. Then you can use it like a regular command anywhere. It's a matter of adding the line export PATH=/full/path/to/rapsearch/bin:$PATH to your .bashrc file.
Create a submission script for use with qsub. Here is a good example, and a minimal sketch follows below.
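For reference, such a submission script for this job might look like the following (the resource requests, walltime, and script name are placeholders to adapt to your cluster):
#!/bin/bash
#PBS -N rapsearch
#PBS -l nodes=2
#PBS -l walltime=24:00:00

# run from the directory the job was submitted from
cd "$PBS_O_WORKDIR"
./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32

Submit it with qsub rapsearch.pbs (or whatever you name the file).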
My task is to analyze a memory dump. I've found the location of a PDF file and I want to analyze it with VirusTotal. But I can't figure out how to "download" it from the memory dump.
I've already tried it with this command:
python vol.py -f img.vmem dumpfiles -r pdf$ -i --name -D dumpfiles/
But in my dumpfiles directory there is just a .vacb file, which is not a valid PDF.
I think you may have missed a command-line argument from your command:
python vol.py -f img.vmem dumpfiles -r pdf$ -i --name -D dumpfiles/
If you are not getting a .dat file in your output folder, you can add -u:
-u, --unsafe Relax safety constraints for more data
I can't test this without access to the dump, but you should be able to rename the .dat file created to .pdf.
So it should look something like this:
python vol.py -f img.vmem dumpfiles -r pdf$ -i --name -D dumpfiles/ -u
You can check out the documentation on the commands here
VACB is "virtual address control block". Your output type seems to be wrong.
Try something like:
$ python vol.py -f img.vmem dumpfiles --output=pdf --output-file=bla.pdf --profile=[your profile] -D dumpfiles/
or check out the cheat sheet: here
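If the regex route keeps producing only VACB-backed data, another common workflow (a sketch; the profile name and the physical offset are placeholders you would take from your own image) is to locate the file object with filescan and then dump it by offset:
python vol.py -f img.vmem --profile=Win7SP1x64 filescan | grep -i "\.pdf"
python vol.py -f img.vmem --profile=Win7SP1x64 dumpfiles -Q 0x000000007e410890 -D dumpfiles/ -u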
I am using CLion as my IDE. After building, the output is an executable file named example. What I would like to achieve is to make a .hex file from it and upload it to my AVR via avrdude. I read and tried some possible solutions here:
xxd -p example | tr -d '\n' > example.hex
and
avrdude -u -c usbasp-clone -p atmega8 -P /dev/bus/usb/001/006 -U flash:w:example.hex
but avrdude outputs
avrdude: input file example.hex auto detected as invalid format
avrdude: invalid input file format: -1
avrdude: read from file 'example.hex' failed
Any ideas here?
The tool for extracting sections from an executable and converting them into another format is objcopy.
avr-objcopy -j .text -j .data -O ihex example example.hex
Or if your avrdude is built with ELF support then you can use the executable directly.
avrdude -c usbasp-clone -p atmega8 -U flash:w:example
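If avrdude still misdetects the file, you can also name the format explicitly by appending :i (Intel hex) to the -U operand, for example:
avrdude -c usbasp-clone -p atmega8 -P /dev/bus/usb/001/006 -U flash:w:example.hex:i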
I would like to pass many macros into rpmbuild without having to type out each macro manually, or even have a long makefile with -D 'foo bar' -D 'foo bar' -D 'foo bar' repeated many times. I want to pass these macros into rpmbuild "all at once".
Let me describe my use case - I have a script called buildid that reports information about a build;
user@host: buildid -k tag
1.8.0-1444293343
user@host: buildid -k buildhost.platform
Linux-4.0.7-300.fc22.x86_64-x86_64-with-fedora-22-Twenty_Two
user@host: buildid -k version.formatted.gnu
1.8.0
I use these values in a RPM .spec file like this;
rpmbuild -ba foo.spec -D "tag `buildid -k tag`" -D "buildhost_platform `buildid -k buildhost.platform`" -D "version `buildid -k version.formatted.gnu`"
This is the sucky part - a long command line, with lots of typing. Even if I use a Makefile, it's still ugly.
My buildid script is pretty flexible, though: it can save these buildid values to a file (.buildid_rpmmacros) or whatever, or, better, just print them out in a nice format like this;
user@host: buildid -f rpmmacros
%buildhost.hostname myhost.example.com
%buildhost.platform Linux-4.0.7-300.fc22.x86_64-x86_64-with-fedora-22-Twenty_Two
%buildhost.release 4.0.7-300.fc22.x86_64
%buildhost.system Linux
%buildhost.version #1 SMP Mon Jun 29 22:15:06 UTC 2015
%git.branch master
%git.revision 48a30d610cf1ab57dcc6947b2366b6a5e9a1fcc6
%git.revision.short 48a30d6
%tag 1.8.0-1444293343
%timestamp 1444293343
%version.formatted.gnu 1.8.0
%version.formatted.short 1.8.0
%version.formatted.win 1.8.0.0
%version.major 1
%version.minor 8
%version.release
%version.revision 0
If I could do something like this, it would be ideal;
rpmbuild -ba foo.spec --macros-stdin < `buildid -f rpmmacros`
Finally, the macros are project/RPM specific, not global. This means storing them in ~/.rpmmacros would not be a viable solution. I can save them to a file easily (buildid -nF rpmmacros), but I'm already persisting them to a file in INI format, and just want to output them temporarily in RPM macro format (buildid -f rpmmacros).
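For illustration, output in that rpmmacros format could be expanded into -D options by a small wrapper along these lines (a sketch only; it assumes the exact "%name value" layout shown above and converts the dotted names to underscores, matching the -D names used earlier):
#!/bin/bash
# turn each "%name value" line from buildid into a -D "name value" option
args=()
while read -r name value; do
  name=${name#%}       # drop the leading %
  name=${name//./_}    # convert dots to underscores, as in the -D example above
  [ -n "$name" ] && args+=(-D "$name $value")
done < <(buildid -f rpmmacros)
rpmbuild -ba foo.spec "${args[@]}"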
Shameless plug - if you're interested in the buildid tool: https://github.com/jamesread/buildid
If you want reproducible builds, you should use mock rather than calling rpmbuild directly.
Mock can have configs, and you can put those macros into such a config. E.g.:
$ cp /etc/mock/fedora-22-x86_64.cfg ~/my-project-fedora-22-x86_64.cfg
$ vi ~/my-project-fedora-22-x86_64.cfg
Put this line in it:
config_opts['macros']['%Add_your_macro_name_here'] = "add macro value here"
And now you can build it with those macros defined:
$ mock -r ~/my-project-fedora-22-x86_64.cfg foo.src.rpm
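With the values from the question, those entries might look like this (macro names converted to valid identifiers, as in the rpmbuild -D example above):
config_opts['macros']['%tag'] = "1.8.0-1444293343"
config_opts['macros']['%buildhost_platform'] = "Linux-4.0.7-300.fc22.x86_64-x86_64-with-fedora-22-Twenty_Two"
config_opts['macros']['%version'] = "1.8.0"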