How to add page numbers to PostScript/PDF

If you've got a large document (500 pages+) in Postscript and want to add page numbers, does anyone know how to do this?

Based on rcs's proposed solution, I did the following:
Converted the document to example.pdf and ran pdflatex addpages, where addpages.tex reads:
\documentclass[8pt]{article}
\usepackage[final]{pdfpages}
\usepackage{fancyhdr}
\topmargin 70pt
\oddsidemargin 70pt
\pagestyle{fancy}
\rfoot{\Large\thepage}
\cfoot{}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}
\begin{document}
\includepdfset{pagecommand=\thispagestyle{fancy}}
\includepdf[fitpaper=true,scale=0.98,pages=-]{example.pdf}
% fitpaper & scale aren't always necessary - depends on the paper being submitted.
\end{document}
or alternatively, for two-sided pages (i.e. with the page number consistently on the outside):
\documentclass[8pt]{book}
\usepackage[final]{pdfpages}
\usepackage{fancyhdr}
\topmargin 70pt
\oddsidemargin 150pt
\evensidemargin -40pt
\pagestyle{fancy}
\fancyhead{}
\fancyfoot{}
\fancyfoot[LE,RO]{\Large\thepage}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}
\begin{document}
\includepdfset{pages=-,pagecommand=\thispagestyle{fancy}}
\includepdf{target.pdf}
\end{document}
Easy way to change header margins:
% set margins for headers, won't shrink included pdfs
% you can remove the topmargin/oddsidemargin/evensidemargin lines
\usepackage[margin=1in,includehead,includefoot]{geometry}
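For instance, a minimal sketch of the first example with geometry replacing the manual margin commands (untested; adjust the margin to taste):
\documentclass[8pt]{article}
\usepackage[final]{pdfpages}
\usepackage{fancyhdr}
% geometry replaces the \topmargin/\oddsidemargin settings above
\usepackage[margin=1in,includehead,includefoot]{geometry}
\pagestyle{fancy}
\rfoot{\Large\thepage}
\cfoot{}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}
\begin{document}
\includepdfset{pagecommand=\thispagestyle{fancy}}
\includepdf[fitpaper=true,scale=0.98,pages=-]{example.pdf}
\end{document}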

You can simply use pspdftool (http://sourceforge.net/projects/pspdftool) in this way:
pspdftool 'number(x=-1pt,y=-1pt,start=1,size=10)' input.pdf output.pdf
see these two examples (unnumbered and numbered pdf with pspdftool):
unnumbered pdf: http://ge.tt/7ctUFfj2
numbered pdf: http://ge.tt/7ctUFfj2
with this as the first command-line argument:
number(start=1, size=40, x=297.5 pt, y=10 pt)
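For example, assuming A4 paper (about 595 pt wide, so x=297.5 pt is roughly the horizontal centre), a bottom-centred page number could be added with:
pspdftool 'number(start=1, size=40, x=297.5 pt, y=10 pt)' input.pdf output.pdf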

I used to add page numbers to my pdf using LaTeX as in the accepted answer.
Now I found an easier way:
Use enscript to create empty pages with a header containing the page number, and then use pdftk with the multistamp option to put the header on your file.
This bash script expects the pdf file as its only parameter:
#!/bin/bash
input="$1"
output="${1%.pdf}-header.pdf"
pagenum=$(pdftk "$input" dump_data | grep "NumberOfPages" | cut -d":" -f2)
enscript -L1 --header='||Page $% of $=' --output - < <(for i in $(seq "$pagenum"); do echo; done) | ps2pdf - | pdftk "$input" multistamp - output $output
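A usage sketch, assuming the script is saved as addpagenums.sh and made executable:
./addpagenums.sh document.pdf   # writes document-header.pdf next to the input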

I was looking for a PostScript-only solution using Ghostscript. I needed this to merge multiple PDFs and put a counter on every page. The only solution I found was an old gs-devel posting, which I heavily simplified:
%!PS
% add page numbers at the document bottom right (20 units spacing, hardcoded below)
% Note: Page dimensions are expressed in units of the default user space (72nds of an inch).
% inspired by https://www.ghostscript.com/pipermail/gs-devel/2005-May/006956.html
globaldict /MyPageCount 1 put % initialize page counter
% executed at the end of each page. Before calling the procedure, the interpreter
% pushes two integers on the operand stack:
% 1. a count of previous showpage executions for this device
% 2. a reason code indicating the circumstances under which this call is being made:
% 0: During showpage or (LanguageLevel 3) copypage
% 1: During copypage (LanguageLevel 2 only)
% 2: At device deactivation
% The procedure must return a boolean value specifying whether to transmit the page image to the
% physical output device.
<< /EndPage {
exch pop % remove showpage counter (unused)
0 eq dup { % only run and return true for showpage
/Helvetica 12 selectfont % select font and size for following operations
MyPageCount =string cvs % get page counter as string
dup % need it twice (width determination and actual show)
stringwidth pop % get width of page counter string ...
currentpagedevice /PageSize get 0 get % get width from PageSize on stack
exch sub 20 sub % pagewidth - stringwidth - some extra space
20 moveto % move to calculated x and y=20 (0/0 is the bottom left corner)
show % finally show the page counter
globaldict /MyPageCount MyPageCount 1 add put % increment page counter
} if
} bind >> setpagedevice
If you save this to a file called pagecount.ps you can use it on command line like this:
gs \
-dBATCH -dNOPAUSE \
-sDEVICE=pdfwrite -dPDFSETTINGS=/prepress \
-sOutputFile=/path/to/merged.pdf \
-f pagecount.ps -f input1.pdf -f input2.pdf
Note that pagecount.ps must be given first (technically, right before the input file that the page counting should start with).
If you don't want to use an extra .ps file, you can also use a minimized form like this:
gs \
-dBATCH -dNOPAUSE \
-sDEVICE=pdfwrite -dPDFSETTINGS=/prepress \
-sOutputFile=/path/to/merged.pdf \
-c 'globaldict /MyPageCount 1 put << /EndPage {exch pop 0 eq dup {/Helvetica 12 selectfont MyPageCount =string cvs dup stringwidth pop currentpagedevice /PageSize get 0 get exch sub 20 sub 20 moveto show globaldict /MyPageCount MyPageCount 1 add put } if } bind >> setpagedevice' \
-f input1.pdf -f input2.pdf
Depending on your input, you may have to use gsave/grestore at the beginning/end of the if block.
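An untested sketch of that variant, bracketing the drawing operations with gsave/grestore so the page-number code cannot disturb the graphics state of the page content:
<< /EndPage {
exch pop % remove showpage counter (unused)
0 eq dup { % only run and return true for showpage
gsave % protect the page's graphics state
/Helvetica 12 selectfont
MyPageCount =string cvs dup stringwidth pop
currentpagedevice /PageSize get 0 get
exch sub 20 sub 20 moveto show
grestore % restore whatever state the page content left behind
globaldict /MyPageCount MyPageCount 1 add put
} if
} bind >> setpagedevice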

This might be a solution:
convert postscript to pdf using ps2pdf
create a LaTeX file and insert the pages using the pdfpages package (\includepdf)
use pagecommand={\thispagestyle{plain}} or something from the fancyhdr package in the arguments of \includepdf
if postscript output is required, convert the pdflatex output back to postscript via pdf2ps
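A minimal sketch of those steps as shell commands (file names are placeholders; addpages.tex is a wrapper like the one in the accepted answer):
ps2pdf input.ps example.pdf      # 1. convert PostScript to PDF
pdflatex addpages                # 2./3. wrap the pages with pdfpages + fancyhdr
pdf2ps addpages.pdf numbered.ps  # 4. back to PostScript, if required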

Further to captaincomic's solution, I've extended it to support starting the page numbering at any page.
Requires enscript, pdftk 1.43 or greater, and pdfjam (for the pdfjoin utility).
#!/bin/bash
input="$1"
count=$2
blank=$((count - 1))
output="${1%.pdf}-header.pdf"
pagenum=$(pdftk "$input" dump_data | grep "NumberOfPages" | cut -d":" -f2)
(for i in $(seq "$blank"); do echo; done) | enscript -L1 -B --output - | ps2pdf - > /tmp/pa$$.pdf
(for i in $(seq "$pagenum"); do echo; done) | enscript -a ${count}- -L1 -F Helvetica#10 --header='||Page $% of $=' --output - | ps2pdf - > /tmp/pb$$.pdf
pdfjoin --paper letter --outfile /tmp/join$$.pdf /tmp/pa$$.pdf /tmp/pb$$.pdf &>/dev/null
cat /tmp/join$$.pdf | pdftk "$input" multistamp - output "$output"
rm /tmp/pa$$.pdf
rm /tmp/pb$$.pdf
rm /tmp/join$$.pdf
For example, place this in /usr/local/bin/pagestamp.sh and execute it like:
pagestamp.sh doc.pdf 3
This will start the page numbering at page 3, which is useful when you have cover sheets, title pages, a table of contents, etc.
The unfortunate thing is that enscript's --footer option is broken, so you cannot get the page numbering at the bottom using this method.

I liked the idea of using pspdftool (man page), but what I was after was a "page x out of y" format, with the font style matching the rest of the page.
To find out about the font names used in the document:
$ strings input.pdf | grep Font
To get the number of pages:
$ pdfinfo input.pdf | grep "Pages:" | tr -s ' ' | cut -d" " -f2
Glue it together with a few pspdftool commands:
$ in=input.pdf; \
out=output.pdf; \
indent=30; \
pageNumberIndent=49; \
pageCountIndent=56; \
font=LiberationSerif-Italic; \
fontSize=9; \
bottomMargin=40; \
pageCount=`pdfinfo $in | grep "Pages:" | tr -s ' ' | cut -d" " -f2`; \
pspdftool "number(x=$pageNumberIndent pt, y=$bottomMargin pt, start=1, size=$fontSize, font=\"$font\")" $in tmp.pdf; \
pspdftool "text(x=$indent pt, y=$bottomMargin pt, size=$fontSize, font=\"$font\", text=\"page \")" tmp.pdf tmp.pdf; \
pspdftool "text(x=$pageCountIndent pt, y=$bottomMargin pt, size=$fontSize, font=\"$font\", text=\"out of $pageCount\")" tmp.pdf $out; \
rm tmp.pdf;

Oh, it's a long time since I used PostScript, but a quick dip into the blue book will tell you :) www-cdf.fnal.gov/offline/PostScript/BLUEBOOK.PDF
On the other hand, Adobe Acrobat and a bit of javascript would also do wonders ;)
Alternatively, I did find this: http://www.ghostscript.com/pipermail/gs-devel/2005-May/006956.html, which seems to fit the bill (I didn't try it)

You can use the free and open source pdftools to add page numbers to a PDF file with a single command line.
The command line you could use is (on GNU/Linux you have to escape the $ sign in the shell, on Windows it is not necessary):
pdftools.py --input-file ./input/wikipedia_algorithm.pdf --output ./output/addtext.pdf --text "\$page/\$pages" br 1 1 --overwrite
Regarding the --text option:
The first parameter is the text to add. Some placeholders are available: $page stands for the current page number, while $pages stands for the total number of pages in the PDF file. Thus the option as formulated above would add something like "1/10" on the first page of a 10-page PDF document, and so on for the following pages
The second parameter is the anchor point of the text box. "br" will position the bottom right corner of the text box
The third parameter is the horizontal position of the anchor point of the text box as a percentage of the page width. Must be a number between 0 and 1, with the dot . separating decimals
The fourth parameter is the vertical position of the anchor point of the text box as a percentage of the page height. Must be a number between 0 and 1, with the dot . separating decimals
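For example, to stamp "Page 1 of 10", "Page 2 of 10", and so on a little inside the bottom-right corner, a command along these lines should work (a sketch reusing only the options shown above; file names are placeholders):
pdftools.py --input-file ./input.pdf --output ./output.pdf --text "Page \$page of \$pages" br 0.95 0.05 --overwrite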
Disclaimer: I'm the author of pdftools

I am assuming you are looking for a PS-based solution. There is no page-level operator in PS that will allow you to do this. You need to add a footer-sort of thingy in the PageSetup section for each page. Any scripting language should be able to help you along.

I tried pspdftool (http://sourceforge.net/projects/pspdftool).
I eventually got it to work, but at first I got this error:
pspdftool: xreftable read error
The source file was created with pdfjoin from pdfjam, and contained a bunch of scans from my Epson Workforce as well as generated tag pages. I couldn't figure out a way to fix the xref table, so I converted to ps with pdf2ps and back to pdf with ps2pdf. Then I could use this to get nice page numbers on the bottom right corner:
pspdftool 'number(start=1, size=20, x=550 pt, y=10 pt)' input.pdf output.pdf
Unfortunately, it means that any text-searchable pages are no longer searchable because the text was rasterized in the ps conversion. Fortunately, in my case it doesn't matter.
Is there any way to fix or empty the xref table of a pdf file without losing what pages are searchable?
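One possibility, not from the original thread: qpdf rewrites the file structure and can often reconstruct a damaged xref table without rasterizing anything, so searchable text should survive. A sketch:
qpdf input.pdf repaired.pdf
pspdftool 'number(start=1, size=20, x=550 pt, y=10 pt)' repaired.pdf output.pdf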

I took captaincomic's solution and added support for filenames containing spaces, plus some more information about the progress:
#!/bin/bash
clear
echo
echo This script adds page numbers to a given .pdf file.
echo
echo This script needs the packages pdftk and enscript.
echo If they are not installed, the script will fail.
echo Use the command sudo apt-get install pdftk enscript
echo to install them.
echo
input="$1"
output="${1%.pdf}-header.pdf"
echo "input file is $input"
echo "output file will be $output"
echo
pagenum=$(pdftk "$input" dump_data | grep "NumberOfPages" | cut -d":" -f2)
enscript -L1 --header='||Page $% of $=' --output - < <(for i in $(seq "$pagenum"); do echo; done) | ps2pdf - | pdftk "$input" multistamp - output "$output"
echo done.

I wrote the following shell script to solve this for LaTeX beamer-style slides produced with Inkscape (I pdftk cat the slides together into the final presentation PDF & then add slide numbers using the script below):
#!/bin/sh
# create working directory
tmpdir=$(mktemp --directory)
# read un-numbered beamer slides PDF from STDIN & create temporary copy
cat > $tmpdir/input.pdf
# get total number of pages
pagenum=$(pdftk $tmpdir/input.pdf dump_data | awk '/NumberOfPages/{print $NF}')
# generate latex beamer document with the desired number of empty but numbered slides
printf '%s' '
\documentclass{beamer}
\usenavigationsymbolstemplate{}
\setbeamertemplate{footline}[frame number]
\usepackage{forloop}
\begin{document}
\newcounter{thepage}
\forloop{thepage}{0}{\value{thepage} < '$pagenum'}{
\begin{frame}
\end{frame}
}
\end{document}
' > $tmpdir/numbers.tex
# compile latex file into PDF (2nd run needed for total number of pages) & redirect output to STDERR
pdflatex -output-directory=$tmpdir numbers.tex >&2 && pdflatex -output-directory=$tmpdir numbers.tex >&2
# add empty numbered PDF slides as background to (transparent background) input slides (page by
# page) & write results to STDOUT
pdftk $tmpdir/input.pdf multibackground $tmpdir/numbers.pdf output -
# remove temporary working directory with all intermediate files
rm -r $tmpdir >&2
The script reads STDIN & writes STDOUT printing diagnostic pdflatex output to STDERR.
So just copy-paste the above code in a text file, say enumerate_slides.sh, make it executable (chmod +x enumerate_slides.sh) & call it like this:
./enumerate_slides.sh < input.pdf > output.pdf [2>/dev/null]
It should be easy to adjust this to any other kind of document by adjusting the LaTeX template to use the proper documentclass, paper size & style options.
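For instance, an untested sketch of the template adjusted for an A4 article-style document, using fancyhdr instead of beamer's footline (the counter name emptypage is arbitrary; '$pagenum' is spliced in by the surrounding script exactly as above):
\documentclass[a4paper]{article}
\usepackage{forloop}
\usepackage{fancyhdr}
\pagestyle{fancy}
\fancyhf{}
\cfoot{\thepage}
\renewcommand{\headrulewidth}{0pt}
\begin{document}
\newcounter{emptypage}
\forloop{emptypage}{0}{\value{emptypage} < '$pagenum'}{%
\null\newpage
}
\end{document}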
edit:
I replaced echo by $(which echo) since Ubuntu symlinks /bin/sh to dash, which replaces the echo command with a shell built-in that interprets escape sequences by default and does not provide the -E option to override this behaviour. Note that alternatively you could escape all \ in the LaTeX template as \\.
edit:
I replaced $(which echo) by printf '%s' since in zsh, which echo returns echo: shell built-in command instead of /bin/echo.
See this question for details why I decided to use printf in the end.

Maybe pstops (part of psutils) can be used for this?

I have used LibreOffice Calc for this. Adding a page number field is easy using Insert->Field->Page Number. You can then copy-and-paste this field to other pages; fortunately the position is not changed, and the copy-and-paste can be done quickly with the down arrow key and Ctrl+V. This worked for me for a 30-page article, but it is probably prone to errors for a 500+ page one!

Related

Is there a way discard previous pdfmark metadata?

I was trying to automate adding title, bookmarks and such to some PDFs I need. The way I came up with was to create a simple pdfmark script like this:
% pdfmark.ps
[ /Title (My document)
/Author(Me)
/DOCINFO pdfmark
[ /Title (First chapter)
/Page 1
/OUT pdfmark
Then generate a new PDF with ghostscript using:
gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=out.pdf in.pdf pdfmark.ps
If in.pdf doesn't have any pdfmark data, it works fine; however, if it does, things don't work out nicely: for example, title/author aren't modified and bookmarks are appended instead of replaced.
Since I don't want to mess around modifying the PDF's corresponding postscript, I was trying to find if there is some command to add to pdfmark.ps that can delete (or overwrite) previous metadata.
I'll leave PostScript to others and show how to remove a PDF outline using the qpdf package (for qpdf and fix-qdf) and GNU sed.
From the qpdf manual:
In QDF mode, qpdf creates PDF files in what we call QDF form.
A PDF file in QDF form, sometimes called a QDF file, is a completely
valid PDF file that has %QDF-1.0 as its third line (after the pdf
header and binary characters) and has certain other characteristics.
The purpose of QDF form is to make it possible to edit PDF files,
with some restrictions, in an ordinary text editor.
(For a non-GNU/Linux system adapt the commands below.)
qpdf --qdf --compress-streams=n --decode-level=generalized \
--object-streams=disable -- in.pdf - |
sed --binary \
-e '/^[ ][ ]*\/Outlines [0-9][0-9]* [0-9] R/ s/[1-9]/0/g' |
fix-qdf > tmp.qdf
qpdf --coalesce-contents --compression-level=9 \
--object-streams=generate -- tmp.qdf out.pdf
where:
1st qpdf command converts the PDF file to QDF form for editing
sed orphans outlines in the QDF file by rooting them at non-existing obj 0
fix-qdf repairs the QDF after editing
2nd qpdf converts and compresses QDF to PDF
qpdf input cannot be pipelined, it needs to seek
The sed command changes digits to zeros in the line containing
the indented text /Outlines.
Note that GNU sed is used for the non-standard --binary option
to avoid mishaps on an OS distinguishing between text and binary files.
Similarly, to strip annotations replace /Outlines with /Annots in
the -e above, or insert it in a second -e option to do both.
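For example, a sketch of the sed stage of the pipeline above, extended to strip both outlines and annotations in one pass (untested, following the same pattern):
sed --binary \
-e '/^[ ][ ]*\/Outlines [0-9][0-9]* [0-9] R/ s/[1-9]/0/g' \
-e '/^[ ][ ]*\/Annots [0-9][0-9]* [0-9] R/ s/[1-9]/0/g'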
A patch utility other than sed will also do; often just one byte has to be changed.
To quickly strip all non-page data (docinfo, outlines, and others, but not annotations), qpdf's --empty option may be useful:
qpdf --coalesce-contents --compression-level=9 \
--object-streams=generate \
--empty --pages in.pdf 1-z -- out.pdf

pdftk extract random pages from script variable

I have some pdf files of about 2000 pages. They are randomly generated.
I need to extract some pages that contain specific patterns, and the page numbers change for every pdf.
With some steps using pdftotext and AWK, I can get the page numbers, and I store some info into a csv file like this:
PatternA ; 1 3 5 7
PatternB ; 1 8 10 22
I have been trying to loop over each line of this csv and feed the pages into the cat option of the pdftk command, but it always returns an error:
$IFS=$(printf '\n\t')
for line in `cat job.csv`
do
pattern=`echo $line | cut -d ';' -f 1`
pages=`echo $line | cut -d ';' -f 2`
pdftk input.pdf cat $pages output $pattern
done
When echoing the pattern and pages variables, everything is OK. But the pdftk command returns an error if I try to take the pages from the $pages variable:
Error: Unexpected text in page range end, here:
1 3 5 7
Exiting.
Acceptable keywords, for example: "even" or "odd".
To rotate pages, use: "north" "south" "east"
"west" "left" "right" or "down"
Errors encountered. No output created.
Done. Input errors, so no output created.
What am I doing wrong?
Thanks!
[SOLVED]
I guess... I don't know if it is the best choice, but it works:
Instead of directly executing the pdftk command:
pdftk input.pdf cat $pages output $pattern
I stored the entire command in a new variable and then ran it with eval:
cmd="pdftk input.pdf cat $pages output $pattern"
eval $cmd
So, it worked like a charm...
If there is a more elegant solution, I'll appreciate it!
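For what it's worth, an untested sketch of a more elegant variant: let read split each csv line on the semicolon, which avoids both eval and the global IFS change:
while IFS=';' read -r pattern pages
do
    # trim the stray spaces around the " ; " separator
    pattern=$(echo $pattern)
    pages=$(echo $pages)
    # $pages stays unquoted on purpose: each page number
    # must reach pdftk as a separate argument
    pdftk input.pdf cat $pages output "$pattern"
done < job.csv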

splitting PDF files in 50-page intervals

I have a Ghostscript script to split PDF books into 50-page intervals. The problem is that GS removes the transparency (I think this is called the alpha channel in technical terms: http://www.peteryu.ca/tutorials/publishing/pdf_manipulation_tips) of the annotations.
Look at the following paragraph from a book. The highlight was fully readable before the splitting.
Now, it is blacked out.
So, I am looking for a way to do the splitting using other tools like PDFtk or any other tool which will not flatten my annotations.
Ultimately, I want to run the script on a folder of files using Hazel in Mac.
Here is the Ghostscript script if it helps ($1 is Hazel's way of importing the file, I think):
echo "Page count: "
ournum=`gs -q -dNODISPLAY -c "("$1") (r) file runpdfbegin pdfpagecount = quit" 2>/dev/null`
declare -i counter;
declare -i counterplus;
counter=1;
while [ $counter -le $ournum ] ; do
echo $counter
newname=`echo $1 | sed -e s/\.pdf//g`
reallynewname=$newname-$counter.pdf
counterplus=$counter+50;
yes | gs -dBATCH -sOutputFile=$reallynewname -dFirstPage=$counter -dLastPage=$counterplus -sDEVICE=pdfwrite "$1" >& /dev/null
counter=$counterplus
done;
Can you guys help me with this?
Thanks
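Since you mention PDFtk: a pdftk-based version of the same splitting loop might look like this (an untested sketch; pdftk copies pages and their annotations through instead of re-rendering them, so highlights should keep their transparency):
#!/bin/bash
input="$1"
pagecount=$(pdftk "$input" dump_data | awk '/NumberOfPages/{print $NF}')
first=1
while [ $first -le $pagecount ]; do
  last=$((first + 49))
  [ $last -gt $pagecount ] && last=$pagecount
  pdftk "$input" cat ${first}-${last} output "${input%.pdf}-$first.pdf"
  first=$((last + 1))
done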

How to extract table data from PDF as CSV from the command line?

I want to extract all rows from here while ignoring the column headers as well as all page headers, i.e. Supported Devices.
pdftotext -layout DAC06E7D1302B790429AF6E84696FCFAB20B.pdf - \
| sed '$d' \
| sed -r 's/ +/,/g; s/ //g' \
> output.csv
The resulting file should be in CSV spreadsheet format (comma separated value fields).
In other words, I want to improve the above command so that the output doesn't break at all. Any ideas?
I'll offer you another solution as well.
While in this case the pdftotext method works with reasonable effort, there may be cases where not each page has the same column widths (as your rather benign PDF shows).
Here the not-so-well-known but pretty cool free and open-source software Tabula-Extractor is the best choice.
I myself am using the direct GitHub checkout:
$ cd $HOME ; mkdir svn-stuff ; cd svn-stuff
$ git clone https://github.com/tabulapdf/tabula-extractor.git git.tabula-extractor
I wrote myself a pretty simple wrapper script like this:
$ cat ~/bin/tabulaextr
#!/bin/bash
cd ${HOME}/svn-stuff/git.tabula-extractor/bin
./tabula "$@"
Since ~/bin/ is in my $PATH, I just run
$ tabulaextr --pages all \
$(pwd)/DAC06E7D1302B790429AF6E84696FCFAB20B.pdf \
| tee my.csv
to extract all the tables from all pages and convert them to a single CSV file.
The first ten (out of a total of 8727) lines of the CSV look like this:
$ head DAC06E7D1302B790429AF6E84696FCFAB20B.csv
Retail Branding,Marketing Name,Device,Model
"","",AD681H,Smartfren Andromax AD681H
"","",FJL21,FJL21
"","",Luno,Luno
"","",T31,Panasonic T31
"","",hws7721g,MediaPad 7 Youth 2
3Q,OC1020A,OC1020A,OC1020A
7Eleven,IN265,IN265,IN265
A.O.I. ELECTRONICS FACTORY,A.O.I.,TR10CS1_11,TR10CS1
AG Mobile,Status,Status,Status
and they match the corresponding rows of the original PDF.
It even got these lines on the last page, 293, right:
nabi,"nabi Big Tab HD\xe2\x84\xa2 20""",DMTAB-NV20A,DMTAB-NV20A
nabi,"nabi Big Tab HD\xe2\x84\xa2 24""",DMTAB-NV24A,DMTAB-NV24A
again matching what is on the PDF page.
TabulaPDF and Tabula-Extractor are really, really cool for jobs like this!
Update
As Martin R commented, tabula-java is the new version of tabula-extractor and is actively developed. Version 1.0.0 was released on July 21st, 2017.
Download the jar file and run it with a recent Java:
java -jar ./tabula-1.0.0-jar-with-dependencies.jar \
--pages=all \
./DAC06E7D1302B790429AF6E84696FCFAB20B.pdf \
> support_devices.csv
What you want is rather easy, but you're having a different problem also (I'm not sure you are aware of it...).
First, you should add -nopgbrk ("No page breaks, please!") to your command, so that the pesky ^L (form feed) characters which otherwise appear in the output need not be filtered out later.
Adding a grep -vE '(Supported Devices|^$)' will then filter out all the lines you do not want, including empty lines, or lines with only spaces:
pdftotext -layout -nopgbrk \
DAC06E7D1302B790429AF6E84696FCFAB20B.pdf - \
| grep -vE '(Supported Devices|^$|Marketing Name)' \
| gsed '$d' \
| gsed -r 's# +#,#g' \
| gsed 's# ##g' \
> output2.csv
However, your other problem is this:
Some of the table fields are empty.
Empty fields appear with the -layout option as a series of space characters, sometimes even two in the same row.
However, the text columns are not spaced identically from page to page.
Therefore you will not know from line to line how many spaces you need to regard as an "empty CSV field" (where you'd need an extra , separator).
As a consequence, your current code will show only one, two or three (instead of four) fields for some lines, and these fields end up in the wrong columns!
There is a workaround for this:
Add the -x ... -y ... -W ... -H ... parameters to pdftotext to crop the PDF column-wise.
Then append the columns with a combination of utilities like paste and column.
The following command extracts the first columns:
pdftotext -layout -x 38 -y 77 -W 176 -H 500 \
DAC06E7D1302B790429AF6E84696FCFAB20B.pdf - > 1st-columns.txt
These are for second, third and fourth columns:
pdftotext -layout -x 214 -y 77 -W 176 -H 500 \
DAC06E7D1302B790429AF6E84696FCFAB20B.pdf - > 2nd-columns.txt
pdftotext -layout -x 390 -y 77 -W 176 -H 500 \
DAC06E7D1302B790429AF6E84696FCFAB20B.pdf - > 3rd-columns.txt
pdftotext -layout -x 567 -y 77 -W 176 -H 500 \
DAC06E7D1302B790429AF6E84696FCFAB20B.pdf - > 4th-columns.txt
BTW, I cheated a bit: in order to get a clue about what values to use for -x, -y, -W and -H I did first run this command in order to find the exact coordinates of the column header words:
pdftotext -f 1 -l 1 -layout -bbox \
DAC06E7D1302B790429AF6E84696FCFAB20B.pdf - | head -n 10
It's always good if you know how to read and make use of pdftotext -h. :-)
Anyway, how to append the four text files as columns side by side, with the proper CSV separator in between, you should find out yourself. Or ask a new question :-)
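For what it's worth, a sketch of that last step (assuming the four files came out with the same number of lines per page):
paste -d, 1st-columns.txt 2nd-columns.txt 3rd-columns.txt 4th-columns.txt > output.csv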
This can be done easily with an IntelliGet (http://akribiatech.com/intelliget) script as below:
userVariables = brand, name, device, model;
{ start = Not(Or(Or(IsSubstring("Supported Devices",Line(0)),
IsSubstring("Retail Branding",Line(0))),
IsEqual(Length(Trim(Line(0))),0)));
brand = Trim(Substring(Line(0),10,44));
name = Trim(Substring(Line(0),45,79));
device = Trim(Substring(Line(0),80,114));
model = Trim(Substring(Line(0),115,200));
output = Concat(brand, ",", name, ",", device, ",", model);
}
For the case where you want to extract tabular data from a PDF over which you have control at creation time (for example, timesheets or contracts your employees have to sign), the following solution will be cleaner:
Create a PDF form with field IDs.
Let people fill and save the PDF forms.
Use Apache PDFBox, an open source tool that can extract form data from a PDF. It includes a command-line example tool, PrintFields, that you would call as follows to print the desired field information:
org.apache.pdfbox.examples.interactive.form.PrintFields file.pdf
For other options, see this question.
As an alternative to the above workflow, maybe you could also use a digital signature web service that allows PDF form filling and export of the data to tables, such as SignRequest, which lets you create templates and later export the data of signed documents. (Not affiliated, just found this myself.)

How to get a few lines from a .gz compressed file without uncompressing

How can I get the first few lines from a gzipped file?
I tried zcat, but it's throwing an error:
zcat CONN.20111109.0057.gz|head
CONN.20111109.0057.gz.Z: A file or directory in the path name does not exist.
zcat(1) can be supplied by either compress(1) or by gzip(1). On your system, it appears to be compress(1) -- it is looking for a file with a .Z extension.
Switch to gzip -cd in place of zcat and your command should work fine:
gzip -cd CONN.20111109.0057.gz | head
Explanation
-c --stdout --to-stdout
Write output on standard output; keep original files unchanged. If there are several input files, the output consists of a sequence of independently compressed members. To obtain better compression, concatenate all input files before compressing them.
-d --decompress --uncompress
Decompress.
On some systems (e.g., Mac), you need to use gzcat.
On a Mac, you need to use the < with zcat:
zcat < CONN.20111109.0057.gz|head
If a continuous range of lines is needed, one option might be:
gunzip -c file.gz | sed -n '5,10p;11q' > subFile
where lines 5 through 10 (both inclusive) of file.gz are extracted into a new subFile. For sed options, refer to the manual.
If every, say, 5th line is required:
gunzip -c file.gz | sed -n '1~5p;6q' > subFile
which extracts the 1st line, jumps over 4 lines, picks the next (6th) line, and so on.
If you want to use zcat, this will show the first 10 rows:
zcat your_filename.gz | head
Let's say you want the first 16 rows:
zcat your_filename.gz | head -n 16
This awk snippet will let you show not only the first few lines but any range you specify. It will also add line numbers, which I needed for debugging an error message pointing to a certain line way down in a gzipped file.
gunzip -c file.gz | awk -v from=10 -v to=20 'NR>=from { print NR,$0; if (NR>=to) exit 1}'
Here is the awk snippet used in the one-liner above. In awk, NR is a built-in variable (number of records found so far) which is usually equivalent to a line number. The from and to variables are picked up from the command line via the -v options.
NR>=from {
print NR,$0;
if (NR>=to)
exit 1
}