Removing S0 record from SREC file - vxworks

I'm working on an embedded system running WindRiver's VxWorks 653. After building, the binaries are converted to SREC with objcopy for burning into the target device. But these SREC files contain an S0 record holding the directory where the build ran, so building the same code in two different directories yields two different SREC files. Is it possible to stop this S0 record from being added to the output file, without manual post-processing?

You can simply post-process the output of objcopy with grep to remove any S0 record, e.g.:
objcopy ... -O srec input_file temp_file && \
grep -v ^S0 temp_file > output_file && \
rm temp_file
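
As a quick sanity check (a sketch; the file names are the placeholders from the command above), you can confirm that no S0 record survives and that two builds of the same source from different directories now produce identical images:
grep -c '^S0' output_file    # should print 0
cmp build_a/output_file build_b/output_file && echo "reproducible"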


Create a zip archive with CMake without parent directory?

When I do this:
add_custom_target(
    mystuff
    COMMAND ${CMAKE_COMMAND} -E tar "cvf" "${CMAKE_CURRENT_BINARY_DIR}/mystuff.zip" --format=zip -- ${CMAKE_CURRENT_SOURCE_DIR}/stuff/
)
against a directory organized like this:
stuff/
    file1.txt
    file2.txt
    file3.txt
The resulting zip file contains:
stuff/
    file1.txt
    file2.txt
    file3.txt
but I want: (no parent directory)
file1.txt
file2.txt
file3.txt
If I were doing this outside of CMake, I would use tar's -C argument (change directory).
How do I do this with CMake?
You could change the working directory of the tar command, something like this:
add_custom_target(
    mystuff
    WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}/stuff"
    COMMAND ${CMAKE_COMMAND} -E tar "cvf" "${CMAKE_CURRENT_BINARY_DIR}/mystuff.zip" --format=zip .
)
You could make the whole thing simpler by removing the nested call to CMake, and calling zip directly:
add_custom_target(
    mystuff
    WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}/stuff"
    COMMAND zip -r "${CMAKE_CURRENT_BINARY_DIR}/mystuff.zip" .
)
(Note: the original version of this answer didn't work; see the comment below.)
Tell tar to include only the contents of the directory (it will do so recursively, but will leave off the top level).
tar "cvf" "${CMAKE_CURRENT_BINARY_DIR}/mystuff.zip" --format=zip \
-- ${CMAKE_CURRENT_SOURCE_DIR}/stuff/*
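Whichever variant you use, listing the archive is a quick way to confirm the parent directory is gone (a sketch, run from the binary directory; the archive name comes from the snippets above):
cmake -E tar tf mystuff.zip
# expect file1.txt, file2.txt, file3.txt with no leading stuff/ component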

How to make CMake 'FILE' command dependent on TARGET or OUTPUT?

I've got some add_custom_command() stuff in a CMake build to do some things after building an elf file target: convert it to srec, fill various areas with 0xFF and create a binary image, generate a CRC and get the size of the image. add_custom_command() can have DEPENDS, making it run only when the elf file is regenerated, which is great.
What I also want to do is to create a new file using file() that contains the binary filename, CRC, and size (maybe in simple JSON format), but the documentation implies that I can't do this file activity after the things I mentioned above have happened.
# This command creates the FF-filled binary file. It uses objcopy to create the srec
# file to operate on.
add_custom_command(
    OUTPUT ThreadingApp.bin filled.srec
    MAIN_DEPENDENCY ThreadingApp.elf
    COMMAND ${CMAKE_OBJCOPY} ARGS -O srec ThreadingApp.elf ThreadingApp.srec
    COMMAND srec_cat.exe ThreadingApp.srec -offset - -minimum-addr ThreadingApp.srec
            -fill 0xFF -over ThreadingApp.srec -o filled.srec
    COMMAND srec_cat.exe filled.srec -o ThreadingApp.bin -binary
)
# This command creates the CRC
add_custom_command(
    OUTPUT crc.out size.out
    MAIN_DEPENDENCY filled.srec
    COMMAND srec_cat.exe filled.srec -crc32-b-e 0x08100000 -crop 0x08100000 0x08100004 -o - -hex-dump > crc.out
    COMMAND srec_info.exe filled.srec > size.out
)
file(
WRITE ThreadingApp.json
)
Looking at the synopsis of 'FILE', I don't see how I can make this happen only after my custom commands have already run. Any suggestions on how to achieve this within CMake? My alternative is to write a separate Python script to execute within an add_custom_command to create the json file.
Reading
file(READ <filename> <out-var> [...])
file(STRINGS <filename> <out-var> [...])
file(<HASH> <filename> <out-var>)
file(TIMESTAMP <filename> <out-var> [...])
Writing
file({WRITE | APPEND} <filename> <content>...)
file({TOUCH | TOUCH_NOCREATE} [<file>...])
file(GENERATE OUTPUT <output-file> [...])
Filesystem
file({GLOB | GLOB_RECURSE} <out-var> [...] [<globbing-expr>...])
file(RENAME <oldname> <newname>)
file({REMOVE | REMOVE_RECURSE} [<files>...])
file(MAKE_DIRECTORY [<dir>...])
file({COPY | INSTALL} <file>... DESTINATION <dir> [...])
file(SIZE <filename> <out-var>)
file(READ_SYMLINK <linkname> <out-var>)
file(CREATE_LINK <original> <linkname> [...])
Path Conversion
file(RELATIVE_PATH <out-var> <directory> <file>)
file({TO_CMAKE_PATH | TO_NATIVE_PATH} <path> <out-var>)
Transfer
file(DOWNLOAD <url> <file> [...])
file(UPLOAD <file> <url> [...])
Locking
file(LOCK <path> [...])
Unlike add_custom_command, whose COMMAND is executed at the build stage, file() is executed immediately, at the configuration stage, when CMakeLists.txt is processed.
You may, however, put the file() invocation into a separate CMake script and run that script with add_custom_command. With that approach the script containing file() will be executed at the build stage of your project, and you may use the OUTPUT or TARGET option:
# File: my_script.cmake
file(...)
# File: CMakeLists.txt
add_custom_command(OUTPUT | TARGET ...
    COMMAND ${CMAKE_COMMAND} -P "${CMAKE_CURRENT_SOURCE_DIR}/my_script.cmake"
)
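Applied to the question, the separate script could look something like this (a sketch: the script name write_json.cmake and the JSON layout are assumptions, while crc.out and size.out are the files produced by the custom commands shown above):
# File: write_json.cmake (hypothetical)
# Read the outputs of the earlier custom commands and strip trailing newlines.
file(READ "crc.out" CRC_CONTENT)
file(READ "size.out" SIZE_CONTENT)
string(STRIP "${CRC_CONTENT}" CRC_CONTENT)
string(STRIP "${SIZE_CONTENT}" SIZE_CONTENT)
file(WRITE "ThreadingApp.json"
    "{ \"binary\": \"ThreadingApp.bin\", \"crc\": \"${CRC_CONTENT}\", \"size\": \"${SIZE_CONTENT}\" }")

# File: CMakeLists.txt
add_custom_command(OUTPUT ThreadingApp.json
    DEPENDS crc.out size.out
    COMMAND ${CMAKE_COMMAND} -P "${CMAKE_CURRENT_SOURCE_DIR}/write_json.cmake"
    WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}"
)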
Aside from the demonstrated CMake scripting mode (cmake -P), there is also a command-line mode (cmake -E), which can perform basic operations without the need to write a script.
# File: CMakeLists.txt
add_custom_command(OUTPUT | TARGET ...
    COMMAND ${CMAKE_COMMAND} -E echo "<content-of-the-file>" > "<path/to/file>"
)
See also this question about redirecting output in add_custom_command: How to redirect the output of a CMake custom command to a file?

Make pattern match variables are not expanded

I'm trying to build some PDFs in a Makefile using Sphinx. The resulting PDF has broken references, so I want to fix those using pdftk.
Goal
So what I want to do for all PDFs I build is this:
# Creates the PDF files.
$(SPHINXBUILD) -b pdf $(ALLSPHINXOPTS) source/pdf/ $(BUILDDIR)/pdf_broken
# Go through all PDFs and fix them.
pdftk $(BUILDDIR)/pdf_broken/thepdf.pdf output $(BUILDDIR)/pdf/thepdf.pdf
Attempt with Make
So to do this with Make I have written this Makefile:
# Build PDF (results in broken references)
$(BUILDDIR)/pdf_broken/%.pdf:
$(SPHINXBUILD) -b pdf $(ALLSPHINXOPTS) source/pdf/ $(BUILDDIR)/pdf_broken
# This fixes the broken pdfs and produces the final result.
$(BUILDDIR)/pdf/%.pdf: $(BUILDDIR)/pdf_broken/%.pdf
mkdir -p $(BUILDDIR)/pdf/
pdftk $^ output $@
pdf: $(BUILDDIR)/pdf/%.pdf
Expected result
I'm using Pattern matching as I understand it from reading the manual:
http://www.tack.ch/gnu/make-3.82/make_91.html
where $<, as I understand it, should be the expanded prerequisite, so from my example above:
$(BUILDDIR)/pdf_broken/thepdf.pdf
and then $@ should be the target:
$(BUILDDIR)/pdf/thepdf.pdf
So my recipe pdftk $^ output $@ should run the command:
pdftk $(BUILDDIR)/pdf_broken/thepdf.pdf output $(BUILDDIR)/pdf/thepdf.pdf
Actual result
But this is not what is happening. Instead, this is run:
pdftk build/pdf_broken/%.pdf output build/pdf/%.pdf
Which obviously gives me an error:
Error: Unable to find file.
Error: Failed to open PDF file:
build/pdf_broken/%.pdf
Question
So my question is: what have I misunderstood about how the pattern matching works, and what is the correct way to solve this using Make?
You should likely look up pattern rules. In any case, it looks like you have a single command that generates all the files in the broken directory. That command should have its own rule, and should likely output a dummy file to indicate it is complete. Your rule to fix the PDF files should then depend on this dummy target being created.
It should be something like:
# get a list of expected output files:
PDF_SOURCES := $(wildcard source/pdf/*)
PDF_OUTS := $(patsubst source/pdf/%.pdf,$(BUILDDIR)/pdf/%.pdf,$(PDF_SOURCES))

# just for debugging:
$(info PDF_SOURCES = $(PDF_SOURCES))
$(info PDF_OUTS = $(PDF_OUTS))

# default rule
all: $(PDF_OUTS)
	@echo done

# rule to build BUILDDIR:
$(BUILDDIR)/pdf:
	mkdir -p $@

# rule to build all broken files in one go
# (note: generates a file .do_sphynx, which is used to keep track
# of when the rule was run last. This rule will be run if the
# timestamp of any of the sources is newer.)
.do_sphynx: $(PDF_SOURCES) | $(BUILDDIR)/pdf
	$(SPHINXBUILD) -b pdf $(ALLSPHINXOPTS) source/pdf/ $(BUILDDIR)/pdf_broken
	touch $@

# create a dependency of all output files on .do_sphynx
$(PDF_OUTS): .do_sphynx

# pattern rule to fix pdf files
$(BUILDDIR)/pdf/%.pdf: $(BUILDDIR)/pdf_broken/%.pdf
	pdftk $< output $@
I've not tested this, so it's possible it may have a syntax error in it.
---------------------- EDIT -------------
Ok, since $(PDF_OUTS) cannot be determined at makefile read time, perhaps you should do:
# get a list of expected output files:
PDF_SOURCES := $(wildcard source/pdf/*)

# bash is needed for the ${var/...} substitution and [[ ]] tests below
SHELL := /bin/bash

all: .do_fix
	@echo done

$(BUILDDIR)/pdf:
	mkdir -p $@

.do_sphynx: $(PDF_SOURCES) | $(BUILDDIR)/pdf
	$(SPHINXBUILD) -b pdf $(ALLSPHINXOPTS) source/pdf/ $(BUILDDIR)/pdf_broken
	touch $@

.do_fix: .do_sphynx
	@for src in $$(ls source/pdf/*.pdf); do \
	    trg=$${src/#"source/pdf"/"$(BUILDDIR)/pdf"}; \
	    [[ $$src -nt $$trg ]] && \
	        echo "$$src ==> $$trg" && pdftk $$src output $$trg; \
	done
	touch $@
One note -- the -nt comparator will return true if $trg does not exist, so it covers both the case where the target file is missing and the case where the target is older than the source. Again, not tested, but it should work.
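For reference, the root of the original problem is that % is only treated as a wildcard inside a pattern rule; in an ordinary rule such as pdf: $(BUILDDIR)/pdf/%.pdf it is a literal character, so make looks for a file literally named %.pdf. A minimal sketch of the enumerate-then-pattern-rule idiom (assuming BUILDDIR is set and the broken PDFs already exist under $(BUILDDIR)/pdf_broken):
BROKEN := $(wildcard $(BUILDDIR)/pdf_broken/*.pdf)
FIXED := $(patsubst $(BUILDDIR)/pdf_broken/%.pdf,$(BUILDDIR)/pdf/%.pdf,$(BROKEN))

# real file names as prerequisites, not a literal %
pdf: $(FIXED)

$(BUILDDIR)/pdf/%.pdf: $(BUILDDIR)/pdf_broken/%.pdf
	mkdir -p $(dir $@)
	pdftk $< output $@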

AWK to process compressed files and printing original (compressed) file names

I would like to process multiple .gz files with gawk.
I was thinking of decompressing them and passing the output to gawk on the fly,
but I have an additional requirement to also store/print the original file name in the output.
The thing is, there are hundreds of .gz files, each of rather large size, to process.
I'm looking for anomalies (~0.001% of rows) and want to print out the list of found inconsistencies ALONG with the file name and the row number that contained them.
If I could have all the files decompressed, I would simply use the FILENAME variable to get this.
Because of the large quantity and size of those files, I can't decompress them up front.
Any ideas how to pass the filename (in addition to the gzip stdout) to gawk to produce the required output?
Assuming you are looping over all the files and piping their decompressed output directly into awk, something like the following will work.
for file in *.gz; do
gunzip -c "$file" | awk -v origname="$file" '.... {print origname " whatever"}'
done
Edit: To use a list of filenames from some source other than a direct glob, something like the following can be used.
$ ls *.awk
a.awk e.awk
$ while IFS= read -d '' filename; do
echo "$filename";
done < <(find . -name \*.awk -printf '%P\0')
e.awk
a.awk
Using xargs instead of the above loop will, I believe, require the body of the command to be in a pre-written script file, which can then be called with xargs and the filename.
This uses a combination of xargs and sh (to be able to pipe between two commands, gzip and awk):
find *.gz -print0 | xargs -0 -I fname sh -c 'gzip -dc fname | gawk -v origfile="fname" -f printbadrowsonly.awk >> baddata.txt'
I'm wondering if there's any bad practice with the above approach…
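For completeness, a sketch of what printbadrowsonly.awk might contain (the file name comes from the command above; the anomaly test NF != 12 is a made-up placeholder, since the actual check isn't shown). Because the data arrives on stdin, FILENAME is useless here, which is exactly why origfile is passed in with -v; NR supplies the row number:
# printbadrowsonly.awk (hypothetical contents)
BEGIN { FS = "\t" }
NF != 12 { print origfile, "row " NR ":", $0 }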

script to run a certain program with input from a given directory

So I need to run a bunch of (Maven) tests, with test files being supplied as an argument to a Maven task.
Something like this:
mvn clean test -Dtest=<filename>
And the test files are usually organized into different directories. So I'm trying to write a script which would execute the above command and automatically feed the names of all files in a given directory to -Dtest.
So I started out with a shellscript called 'run_test':
#!/bin/sh
if test $# -lt 2; then
    echo "$0: insufficient arguments on the command line." >&2
    echo "usage: $0 run_test directory" >&2
    exit 1
fi
for file in allFiles <<<<<<< what should I put here? Can I somehow iterate through the list of all file names in the given directory and put each file name here?
do mvn clean test -Dtest= $file
exit $?
The part where I got stuck is how to get a list of filenames.
Thanks,
Assuming $1 contains the directory name (validation of the user input is a separate issue), then
for file in $1/*
do
[[ -f $file ]] && mvn clean test -Dtest=$file
done
will run the command on all files. If you want to recurse into subdirectories, then you need to use the find command:
for file in $(find $1 -type f)
do
etc...
done
#!/bin/bash
# Set IFS to newline to minimise problems with whitespace in file/directory
# names. If we also need to deal with newlines, we will need to use
# find -print0 | xargs -0 instead of a for loop.
IFS="
"
if ! [[ -d "${1}" ]]; then
    echo "Please supply a directory name" >&2
    exit 1
else
    # We use find rather than glob expansion in case there are nested directories.
    # We sort the filenames so that we execute the tests in a predictable order.
    for pathname in $(find "${1}" -type f | LC_ALL=C sort); do
        mvn clean test -Dtest="${pathname}" || break
    done
fi
# exit $? would be superfluous (it is the default)
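Usage would then be something like this (assuming the script is saved as run_test and marked executable; the directory name is a made-up example):
chmod +x run_test
./run_test src/test/resources/testdata
Each regular file found under the directory is passed to mvn clean test -Dtest=... in sorted order, and the loop stops at the first failing test.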