How to extract table data from PDF as CSV from the command line?

I want to extract all rows from here while ignoring the column headers as well as all page headers, i.e. Supported Devices.
pdftotext -layout DAC06E7D1302B790429AF6E84696FCFAB20B.pdf - \
| sed '$d' \
| sed -r 's/ +/,/g; s/ //g' \
> output.csv
The resulting file should be in CSV spreadsheet format (comma separated value fields).
In other words, I want to improve the above command so that the output doesn't break at all. Any ideas?

I'll offer you another solution as well.
While in this case the pdftotext method works with reasonable effort, there may be cases where not every page has the same column widths (unlike your rather benign PDF).
Here the not-so-well-known, but pretty cool free and open source tool Tabula-Extractor is the best choice.
I myself am using the direct GitHub checkout:
$ cd $HOME ; mkdir svn-stuff ; cd svn-stuff
$ git clone https://github.com/tabulapdf/tabula-extractor.git git.tabula-extractor
I wrote myself a pretty simple wrapper script like this:
$ cat ~/bin/tabulaextr
#!/bin/bash
cd ${HOME}/svn-stuff/git.tabula-extractor/bin
./tabula "$@"
Since ~/bin/ is in my $PATH, I just run
$ tabulaextr --pages all \
$(pwd)/DAC06E7D1302B790429AF6E84696FCFAB20B.pdf \
| tee my.csv
to extract all the tables from all pages and convert them to a single CSV file.
The first ten (out of a total of 8727) lines of the CSV look like this:
$ head DAC06E7D1302B790429AF6E84696FCFAB20B.csv
Retail Branding,Marketing Name,Device,Model
"","",AD681H,Smartfren Andromax AD681H
"","",FJL21,FJL21
"","",Luno,Luno
"","",T31,Panasonic T31
"","",hws7721g,MediaPad 7 Youth 2
3Q,OC1020A,OC1020A,OC1020A
7Eleven,IN265,IN265,IN265
A.O.I. ELECTRONICS FACTORY,A.O.I.,TR10CS1_11,TR10CS1
AG Mobile,Status,Status,Status
which in the original PDF look like this:
It even got these lines on the last page, 293, right:
nabi,"nabi Big Tab HD\xe2\x84\xa2 20""",DMTAB-NV20A,DMTAB-NV20A
nabi,"nabi Big Tab HD\xe2\x84\xa2 24""",DMTAB-NV24A,DMTAB-NV24A
which look on the PDF page like this:
TabulaPDF and Tabula-Extractor are really, really cool for jobs like this!
Update
Here is an asciinema screencast (which you can also download and re-play locally in your Linux/MacOSX/Unix terminal with the help of the asciinema command line tool), starring tabula-extractor:

As Martin R commented, tabula-java is the new version of tabula-extractor and is actively developed. Version 1.0.0 was released on July 21st, 2017.
Download the jar file and, with a recent Java, run:
java -jar ./tabula-1.0.0-jar-with-dependencies.jar \
--pages=all \
./DAC06E7D1302B790429AF6E84696FCFAB20B.pdf \
> support_devices.csv
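Since the original goal was to ignore the column headers, you can strip the header row(s) from the resulting CSV afterwards. A minimal sketch, assuming the header text is exactly as in the head output shown above:
grep -v '^Retail Branding,Marketing Name,Device,Model$' support_devices.csv > rows_only.csv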

What you want is rather easy, but you also have a different problem (I'm not sure you are aware of it...).
First, you should add -nopgbrk ("no page breaks, please!") to your command, so that the pesky ^L (form feed) characters which otherwise appear in the output do not need to be filtered out later.
Adding a grep -vE '(Supported Devices|^$|Marketing Name)' will then filter out the lines you do not want: the page headers, the column headers and the empty lines:
pdftotext -layout -nopgbrk \
DAC06E7D1302B790429AF6E84696FCFAB20B.pdf - \
| grep -vE '(Supported Devices|^$|Marketing Name)' \
| gsed '$d' \
| gsed -r 's# +#,#g' \
| gsed 's# ##g' \
> output2.csv
However, your other problem is this:
Some of the table fields are empty.
Empty fields appear with the -layout option as a series of space characters, sometimes even two in the same row.
However, the text columns are not spaced identically from page to page.
Therefore you will not know from line to line how many spaces you need to regard as an "empty CSV field" (where you'd need an extra , separator).
As a consequence, your current code will show only one, two or three (instead of four) fields for some lines, and these fields end up in the wrong columns!
There is a workaround for this:
Add the -x ... -y ... -W ... -H ... parameters to pdftotext to crop the PDF column-wise.
Then append the columns with a combination of utilities like paste and column.
The following command extracts the first columns:
pdftotext -layout -x 38 -y 77 -W 176 -H 500 \
DAC06E7D1302B790429AF6E84696FCFAB20B.pdf - > 1st-columns.txt
These are for second, third and fourth columns:
pdftotext -layout -x 214 -y 77 -W 176 -H 500 \
DAC06E7D1302B790429AF6E84696FCFAB20B.pdf - > 2nd-columns.txt
pdftotext -layout -x 390 -y 77 -W 176 -H 500 \
DAC06E7D1302B790429AF6E84696FCFAB20B.pdf - > 3rd-columns.txt
pdftotext -layout -x 567 -y 77 -W 176 -H 500 \
DAC06E7D1302B790429AF6E84696FCFAB20B.pdf - > 4th-columns.txt
BTW, I cheated a bit: to get a clue about what values to use for -x, -y, -W and -H, I first ran this command to find the exact coordinates of the column header words:
pdftotext -f 1 -l 1 -layout -bbox \
DAC06E7D1302B790429AF6E84696FCFAB20B.pdf - | head -n 10
It's always good if you know how to read and make use of pdftotext -h. :-)
Anyway, how to append the four text files as columns side by side, with the proper CSV separator in between, is something you can work out yourself; a sketch follows below. :-)
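A minimal sketch of that last step with paste, assuming the four column files produced above; the sed merely trims the padding spaces around each field. Be aware this only lines up if every page yields the same number of lines in each of the four crops, otherwise the columns drift out of alignment:
paste -d, 1st-columns.txt 2nd-columns.txt 3rd-columns.txt 4th-columns.txt \
| sed -E 's/[[:space:]]+,/,/g; s/,[[:space:]]+/,/g; s/^[[:space:]]+//; s/[[:space:]]+$//' \
> output3.csv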

This can be done easily with an IntelliGet (http://akribiatech.com/intelliget) script as below:
userVariables = brand, name, device, model;
{ start = Not(Or(Or(IsSubstring("Supported Devices",Line(0)),
IsSubstring("Retail Branding",Line(0))),
IsEqual(Length(Trim(Line(0))),0)));
brand = Trim(Substring(Line(0),10,44));
name = Trim(Substring(Line(0),45,79));
device = Trim(Substring(Line(0),80,114));
model = Trim(Substring(Line(0),115,200));
output = Concat(brand, ",", name, ",", device, ",", model);
}
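If you would rather stay with standard tools, the same fixed-column idea can be sketched in awk on the pdftotext -layout output. The column boundaries below are assumptions loosely taken from the IntelliGet script; check them against your own -layout output, and remember they only hold if the columns line up identically on every page:
pdftotext -layout -nopgbrk DAC06E7D1302B790429AF6E84696FCFAB20B.pdf - \
| awk '
    /Supported Devices/ || /Retail Branding/ || NF == 0 { next }   # skip headers and blank lines
    {
        brand  = substr($0,   1, 44)   # assumed column boundaries
        name   = substr($0,  45, 35)
        device = substr($0,  80, 35)
        model  = substr($0, 115)
        gsub(/^ +| +$/, "", brand); gsub(/^ +| +$/, "", name)
        gsub(/^ +| +$/, "", device); gsub(/^ +| +$/, "", model)
        print brand "," name "," device "," model
    }' > output.csv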

For the case where you want to extract tabular data from PDFs over which you have control at creation time (e.g. timesheet contracts your employees have to sign), the following solution will be cleaner:
Create a PDF form with field IDs.
Let people fill and save the PDF forms.
Use Apache PDFBox, an open source tool that allows you to extract form data from a PDF. It includes a command-line example tool, PrintFields, that you would call as follows to print the desired field information:
org.apache.pdfbox.examples.interactive.form.PrintFields file.pdf
For other options, see this question.
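A sketch of the actual invocation; the jar names are assumptions (use whichever versions of the PDFBox examples jar and its dependencies you have downloaded):
java -cp pdfbox-examples.jar:pdfbox.jar:fontbox.jar:commons-logging.jar \
  org.apache.pdfbox.examples.interactive.form.PrintFields filled-form.pdf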
As an alternative to the above workflow, maybe you could also use a digital signature web service that allows PDF form filling and export of the data to tables, such as SignRequest, which lets you create templates and later export the data of signed documents. (Not affiliated, just found this myself.)

Related

Process part of a line through the shell pipe

I would like to process part of each line of command output, leaving the rest untouched.
Problem
Let's say I have some du output:
❯ du -xhd 0 /usr/lib/gr*
3.2M /usr/lib/GraphicsMagick-1.3.40
584K /usr/lib/grantlee
12K /usr/lib/graphene-1.0
4.2M /usr/lib/graphviz
4.0K /usr/lib/grcrt1.o
224K /usr/lib/groff
Now I want to process each path with another command, for example running pacman -Qo on it, leaving the remainder of the line untouched.
Approach
I know I can use awk '{print $2}' to get only the path, and could probably use it in a convoluted for loop to weld it back together, but maybe there is an elegant way, ideally easy to type on the fly, producing this in the end:
3.2M /usr/lib/GraphicsMagick-1.3.40/ is owned by graphicsmagick 1.3.40-2
584K /usr/lib/grantlee/ is owned by grantlee 5.3.1-1
12K /usr/lib/graphene-1.0/ is owned by graphene 1.10.8-1
4.2M /usr/lib/graphviz/ is owned by graphviz 7.1.0-1
4.0K /usr/lib/grcrt1.o is owned by glibc 2.36-7
224K /usr/lib/groff/ is owned by groff 1.22.4-7
Workaround
This is the convoluted contraption I am living with for now:
❯ du -xhd 0 /usr/lib/gr* | while read line; do echo "$line $(pacman -Qqo $(echo $line | awk '{print $2}') | paste -s -d',')"; done | column -t
3.2M /usr/lib/GraphicsMagick-1.3.40 graphicsmagick
584K /usr/lib/grantlee grantlee,grantleetheme
12K /usr/lib/graphene-1.0 graphene
4.2M /usr/lib/graphviz graphviz
4.0K /usr/lib/grcrt1.o glibc
224K /usr/lib/groff groff
But multiple parts of it are pacman-specific.
du -xhd 0 /usr/lib/gr* | while read line; do echo "$line" | awk -n '{ORS=" "; print $1}'; pacman --color=always -Qo $(echo $line | awk '{print $2}') | head -1; done | column -t
3.2M /usr/lib/GraphicsMagick-1.3.40/ is owned by graphicsmagick 1.3.40-2
584K /usr/lib/grantlee/ is owned by grantlee 5.3.1-1
12K /usr/lib/graphene-1.0/ is owned by graphene 1.10.8-1
4.2M /usr/lib/graphviz/ is owned by graphviz 7.1.0-1
4.0K /usr/lib/grcrt1.o is owned by glibc 2.36-7
224K /usr/lib/groff/ is owned by groff 1.22.4-7
This is a more generic solution, but what if there are three columns of output and I want to process only the middle one?
It grows in complexity, and I thought there must be a simpler way avoiding duplication.
Use a bash loop
(
    IFS=$'\t'
    while read -r -a fields; do
        fields[1]=$(pacman -Qo "${fields[1]}")
        printf '%s\n' "${fields[*]}"
    done
)
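To use it, pipe the du output into the subshell; du separates the size and the path with a tab, which is why IFS is set to a tab:
du -xhd 0 /usr/lib/gr* |
( IFS=$'\t'; while read -r -a fields; do fields[1]=$(pacman -Qo "${fields[1]}"); printf '%s\n' "${fields[*]}"; done )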
Use a simple shell loop.
du -xhd 0 /usr/lib/gr* |
while read -r size package; do
    pacman --color=always -Qo "$package" |
        awk -v sz="$size" '{ printf "%s %s\n", sz, $0 }'
done
If you want to split out parts of the output from pacman, Awk makes that easy to do; for example, the package name is probably in Awk's $1 and the version in $2.
(Sorry, I don't have pacman here; perhaps edit your question to show its output if you need more details. Going forward, please take care to ask the actual question you need help with, so you don't have to move the goalposts by editing after you have received replies - this is problematic for many reasons, not least because the answers you already received will seem wrong or unintelligible if they no longer answer the question as it stands after your edit.)
These days, many tools have options to let you specify which fields exactly you want to output, and a formatting option to produce them in machine-readable format. The pacman man page mentions a --machinereadable option, though it does not seem to be of particular use here. Many modern tools will produce JSON, which can be unwieldy to handle in shell scripts, but easy if you have a tool like jq which understands JSON format (less convenient if the only available output format is XML; some tools will let you get the result as CSV, which is mildly clumsy but relatively easy to parse). Maybe also look for an option like --format for specifying how exactly to arrange the output. (In curl it's called -w/--write-out.)
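Purely as an illustration (the tool name and field names below are made up), a JSON-emitting command combined with jq lets you rebuild the output around exactly the fields you care about while ignoring the rest:
# hypothetical tool and field names, for illustration only
some-tool --format=json |
    jq -r '"\(.size)\t\(.path) is owned by \(.owner)"'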

Get full list of Groups and Projects in Gitlab Cloud

I'm trying to get a full list of projects and groups in our GitLab cloud account.
I'm currently using their documentation as reference (bear in mind I'm no developer) and the Linux command line to do so. Here's the documentation I'm trying to use:
https://docs.gitlab.com/ee/api/projects.html
https://docs.gitlab.com/ee/api/groups.html#list-a-groups-projects
I'm using the following command to get the data and parse in a readable format that I will export to csv or spreadsheet afterwards:
curl --header "PRIVATE-TOKEN: $TOKEN" "https://gitlab.com/api/v4/projects/?owned=yes&per_page=1000&page=1" | python -m json.tool | grep -E "http_url_to_repo|visibility" | awk '!(NR%2){print$0p}{p=$0}' | awk '{print $4,$2}' | sed -E 's/\"|\,//g' > gitlab.txt
My problem is that the code only returns about 100 of the 280 repositories we have. It doesn't seem to get them recursively from all the groups and subgroups.
Any ideas on how I can improve this search to get everything ?
Thank you
It seems it can get only 100 results per page, so you will have to run it multiple times - first with page=1 and next with page=2. For the second page you will need >> to append to the existing file gitlab.txt:
curl --header "..." "https://...&per_page=100&page=1" | ... > gitlab.txt
curl --header "..." "https://...&per_page=100&page=2" | ... >> gitlab.txt
Or you will have to write a script which first gets all pages and later sends them through the pipe. You may also use a for loop in bash, as sketched below.
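A minimal sketch reusing the pipeline from the question; the page range here is an assumption - extend it until a page comes back empty (the API caps per_page at 100):
for page in 1 2 3; do
    curl --header "PRIVATE-TOKEN: $TOKEN" \
         "https://gitlab.com/api/v4/projects/?owned=yes&per_page=100&page=$page" |
        python -m json.tool |
        grep -E "http_url_to_repo|visibility" |
        awk '!(NR%2){print$0p}{p=$0}' |
        awk '{print $4,$2}' |
        sed -E 's/\"|\,//g'
done > gitlab.txt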

How to get few lines from a .gz compressed file without uncompressing

How do I get the first few lines from a gzipped file?
I tried zcat, but it's throwing an error:
zcat CONN.20111109.0057.gz|head
CONN.20111109.0057.gz.Z: A file or directory in the path name does not exist.
zcat(1) can be supplied by either compress(1) or by gzip(1). On your system, it appears to be compress(1) -- it is looking for a file with a .Z extension.
Switch to gzip -cd in place of zcat and your command should work fine:
gzip -cd CONN.20111109.0057.gz | head
Explanation
-c --stdout --to-stdout
Write output on standard output; keep original files unchanged. If there are several input files, the output consists of a sequence of independently compressed members. To obtain better compression, concatenate all input files before compressing them.
-d --decompress --uncompress
Decompress.
On some systems (e.g., Mac), you need to use gzcat.
On a Mac you need to use < with zcat:
zcat < CONN.20111109.0057.gz|head
If a continuous range of lines is needed, one option might be:
gunzip -c file.gz | sed -n '5,10p;11q' > subFile
where the lines between 5th and 10th lines (both inclusive) of file.gz are extracted into a new subFile. For sed options, refer to the manual.
If every, say, 5th line is required:
gunzip -c file.gz | sed -n '1~5p' > subFile
which extracts the 1st line, jumps over 4 lines, picks the 6th line, and so on (this uses GNU sed's first~step addressing).
If you want to use zcat, this will show the first 10 rows
zcat your_filename.gz | head
Let's say you want the first 16 rows:
zcat your_filename.gz | head -n 16
This awk snippet will let you show not only the first few lines, but any range you specify. It will also add line numbers, which I needed for debugging an error message pointing to a certain line way down in a gzipped file.
gunzip -c file.gz | awk -v from=10 -v to=20 'NR>=from { print NR,$0; if (NR>=to) exit 1}'
Here is the awk snippet used in the one-liner above. In awk, NR is a built-in variable (the number of records read so far) which usually is equivalent to a line number. The from and to variables are picked up from the command line via the -v options.
NR>=from {
    print NR, $0;
    if (NR>=to)
        exit 1
}
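If you prefer to keep the snippet in a file (the name range.awk is just an example), you can pass it to awk with -f and supply the range on the command line as before:
gunzip -c file.gz | awk -v from=10 -v to=20 -f range.awk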

Show filename and line number in grep output

I am trying to search my rails directory using grep. I am looking for a specific word and I want to grep to print out the file name and line number.
Is there a grep flag that will do this for me? I have been trying to use a combination of -n and -l but these are either printing out the file names with no numbers or just dumping out a lot of text to the terminal which can't be easily read.
ex:
grep -ln "search" *
Do I need to pipe it to awk?
I think -l is too restrictive as it suppresses the output of -n. I would suggest -H (--with-filename): Print the filename for each match.
grep -Hn "search" *
If that gives too much output, try -o to only print the part that matches.
grep -nHo "search" *
grep -rin searchstring * | cut -d: -f1-2
This would say, search recursively (for the string searchstring in this example), ignoring case, and display line numbers. The output from that grep will look something like:
/path/to/result/file.name:100: Line in file where 'searchstring' is found.
Next we pipe that result to the cut command using colon : as our field delimiter and displaying fields 1 through 2.
When I don't need the line numbers I often use -f1 (just the filename and path), and then pipe the output to uniq, so that I only see each filename once:
grep -ir searchstring * | cut -d: -f1 | uniq
I like using:
grep -niro 'searchstring' <path>
But that's just because I always forget the other ways and I can't forget Robert de grep - niro for some reason :)
The comment from @ToreAurstad can be spelled grep -Horn 'search' ./, which is easier to remember.
grep -HEroine 'search' ./ could also work ;)
For the curious:
$ grep --help | grep -Ee '-[HEroine],'
-E, --extended-regexp PATTERNS are extended regular expressions
-e, --regexp=PATTERNS use PATTERNS for matching
-i, --ignore-case ignore case distinctions
-n, --line-number print line number with output lines
-H, --with-filename print file name with output lines
-o, --only-matching show only nonempty parts of lines that match
-r, --recursive like --directories=recurse
Here's how I used the upvoted answer to search a tree for the Fortran files containing a string:
find . -name "*.f" -exec grep -nHo the_string {} \;
Without the nHo, you learn only that some file, somewhere, matches the string.

How to add page numbers to Postscript/PDF

If you've got a large document (500 pages+) in Postscript and want to add page numbers, does anyone know how to do this?
Based on rcs's proposed solution, I did the following:
Converted the document to example.pdf and ran pdflatex addpages, where addpages.tex reads:
\documentclass[8pt]{article}
\usepackage[final]{pdfpages}
\usepackage{fancyhdr}
\topmargin 70pt
\oddsidemargin 70pt
\pagestyle{fancy}
\rfoot{\Large\thepage}
\cfoot{}
\renewcommand {\headrulewidth}{0pt}
\renewcommand {\footrulewidth}{0pt}
\begin{document}
\includepdfset{pagecommand=\thispagestyle{fancy}}
\includepdf[fitpaper=true,scale=0.98,pages=-]{example.pdf}
% fitpaper & scale aren't always necessary - depends on the paper being submitted.
\end{document}
or alternatively, for two-sided pages (i.e. with the page number consistently on the outside):
\documentclass[8pt]{book}
\usepackage[final]{pdfpages}
\usepackage{fancyhdr}
\topmargin 70pt
\oddsidemargin 150pt
\evensidemargin -40pt
\pagestyle{fancy}
\fancyhead{}
\fancyfoot{}
\fancyfoot[LE,RO]{\Large\thepage}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}
\begin{document}
\includepdfset{pages=-,pagecommand=\thispagestyle{fancy}}
\includepdf{target.pdf}
\end{document}
Easy way to change header margins:
% set margins for headers, won't shrink included pdfs
% you can remove the topmargin/oddsidemargin/evensidemargin lines
\usepackage[margin=1in,includehead,includefoot]{geometry}
You can simply use pspdftool (http://sourceforge.net/projects/pspdftool) in this way:
pspdftool 'number(x=-1pt,y=-1pt,start=1,size=10)' input.pdf output.pdf
see these two examples (unnumbered and numbered pdf with pspdftool)
unnumbered pdf
http://ge.tt/7ctUFfj2
numbered pdf
http://ge.tt/7ctUFfj2
with this as the first command-line argument:
number(start=1, size=40, x=297.5 pt, y=10 pt)
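Putting that together with the same invocation form as above:
pspdftool 'number(start=1, size=40, x=297.5 pt, y=10 pt)' input.pdf output.pdf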
I used to add page numbers to my pdf using latex like in the accepted answer.
Now I found an easier way:
Use enscript to create empty pages with a header containing the page number, and then use pdftk with the multistamp option to put the header on your file.
This bash script expects the pdf file as its only parameter:
#!/bin/bash
input="$1"
output="${1%.pdf}-header.pdf"
pagenum=$(pdftk "$input" dump_data | grep "NumberOfPages" | cut -d":" -f2)
enscript -L1 --header='||Page $% of $=' --output - < <(for i in $(seq "$pagenum"); do echo; done) | ps2pdf - | pdftk "$input" multistamp - output $output
I was looking for a postscript-only solution, using ghostscript. I needed this to merge multiple PDFs and put a counter on every page. The only solution I found was an old gs-devel posting, which I heavily simplified:
%!PS
% add page numbers at the document's bottom right (20 units spacing, hardcoded below)
% Note: Page dimensions are expressed in units of the default user space (72nds of an inch).
% inspired by https://www.ghostscript.com/pipermail/gs-devel/2005-May/006956.html
globaldict /MyPageCount 1 put % initialize page counter
% executed at the end of each page. Before calling the procedure, the interpreter
% pushes two integers on the operand stack:
% 1. a count of previous showpage executions for this device
% 2. a reason code indicating the circumstances under which this call is being made:
% 0: During showpage or (LanguageLevel 3) copypage
% 1: During copypage (LanguageLevel 2 only)
% 2: At device deactivation
% The procedure must return a boolean value specifying whether to transmit the page image to the
% physical output device.
<< /EndPage {
exch pop % remove showpage counter (unused)
0 eq dup { % only run and return true for showpage
/Helvetica 12 selectfont % select font and size for following operations
MyPageCount =string cvs % get page counter as string
dup % need it twice (width determination and actual show)
stringwidth pop % get width of page counter string ...
currentpagedevice /PageSize get 0 get % get width from PageSize on stack
exch sub 20 sub % pagewidth - stringwidth - some extra space
20 moveto % move to calculated x and y=20 (0/0 is the bottom left corner)
show % finally show the page counter
globaldict /MyPageCount MyPageCount 1 add put % increment page counter
} if
} bind >> setpagedevice
If you save this to a file called pagecount.ps you can use it on command line like this:
gs \
-dBATCH -dNOPAUSE \
-sDEVICE=pdfwrite -dPDFSETTINGS=/prepress \
-sOutputFile=/path/to/merged.pdf \
-f pagecount.ps -f input1.pdf -f input2.pdf
Note that pagecount.ps must be given first (technically, right before the input file which the page counting should start with).
If you don't want to use an extra .ps file, you can also use a minimized form like this:
gs \
-dBATCH -dNOPAUSE \
-sDEVICE=pdfwrite -dPDFSETTINGS=/prepress \
-sOutputFile=/path/to/merged.pdf \
-c 'globaldict /MyPageCount 1 put << /EndPage {exch pop 0 eq dup {/Helvetica 12 selectfont MyPageCount =string cvs dup stringwidth pop currentpagedevice /PageSize get 0 get exch sub 20 sub 20 moveto show globaldict /MyPageCount MyPageCount 1 add put } if } bind >> setpagedevice' \
-f input1.pdf -f input2.pdf
Depending on your input, you may have to use gsave/grestore at the beginning/end of the if block.
This might be a solution:
convert postscript to pdf using ps2pdf
create a LaTeX file and insert the pages using the pdfpages package (\includepdf)
use pagecommand={\thispagestyle{plain}} or something from the fancyhdr package in the arguments of \includepdf
if postscript output is required, convert the pdflatex output back to postscript via pdf2ps
Further to captaincomic's solution, I've extended it to support the starting of page numbering at any page.
Requires enscript, pdftk 1.43 or greater and pdfjam (for pdfjoin utility)
#!/bin/bash
input="$1"
count=$2
blank=$((count - 1))
output="${1%.pdf}-header.pdf"
pagenum=$(pdftk "$input" dump_data | grep "NumberOfPages" | cut -d":" -f2)
(for i in $(seq "$blank"); do echo; done) | enscript -L1 -B --output - | ps2pdf - > /tmp/pa$$.pdf
(for i in $(seq "$pagenum"); do echo; done) | enscript -a ${count}- -L1 -F Helvetica#10 --header='||Page $% of $=' --output - | ps2pdf - > /tmp/pb$$.pdf
pdfjoin --paper letter --outfile /tmp/join$$.pdf /tmp/pa$$.pdf /tmp/pb$$.pdf &>/dev/null
cat /tmp/join$$.pdf | pdftk "$input" multistamp - output "$output"
rm /tmp/pa$$.pdf
rm /tmp/pb$$.pdf
rm /tmp/join$$.pdf
For example, place this in /usr/local/bin/pagestamp.sh and execute it like:
pagestamp.sh doc.pdf 3
This will start the page numbering at page 3 - useful when you have coversheets, title pages, a table of contents, etc.
The unfortunate thing is that enscript's --footer option is broken, so you cannot get the page numbering at the bottom using this method.
I liked the idea of using pspdftool (man page), but what I was after was a "page x out of y" format, with the font style matching the rest of the page.
To find out about the font names used in the document:
$ strings input.pdf | grep Font
To get the number of pages:
$ pdfinfo input.pdf | grep "Pages:" | tr -s ' ' | cut -d" " -f2
Glue it together with a few pspdftool commands:
$ in=input.pdf; \
out=output.pdf; \
indent=30; \
pageNumberIndent=49; \
pageCountIndent=56; \
font=LiberationSerif-Italic; \
fontSize=9; \
bottomMargin=40; \
pageCount=`pdfinfo $in | grep "Pages:" | tr -s ' ' | cut -d" " -f2`; \
pspdftool "number(x=$pageNumberIndent pt, y=$bottomMargin pt, start=1, size=$fontSize, font=\"$font\")" $in tmp.pdf; \
pspdftool "text(x=$indent pt, y=$bottomMargin pt, size=$fontSize, font=\"$font\", text=\"page \")" tmp.pdf tmp.pdf; \
pspdftool "text(x=$pageCountIndent pt, y=$bottomMargin pt, size=$fontSize, font=\"$font\", text=\"out of $pageCount\")" tmp.pdf $out; \
rm tmp.pdf;
Here is the result:
Oh, it's been a long time since I used PostScript, but a quick dip into the blue book will tell you :) www-cdf.fnal.gov/offline/PostScript/BLUEBOOK.PDF
On the other hand, Adobe Acrobat and a bit of javascript would also do wonders ;)
Alternatively, I did find this: http://www.ghostscript.com/pipermail/gs-devel/2005-May/006956.html, which seems to fit the bill (I didn't try it)
You can use the free and open source pdftools to add page numbers to a PDF file with a single command line.
The command line you could use is (on GNU/Linux you have to escape the $ sign in the shell, on Windows it is not necessary):
pdftools.py --input-file ./input/wikipedia_algorithm.pdf --output ./output/addtext.pdf --text "\$page/\$pages" br 1 1 --overwrite
Regarding the --text option:
The first parameter is the text to add. Some placeholders are available. $page stands for the current page number, while $pages stands for the total number of pages in the PDF file. Thus the option so formulated would add something like "1/10" for the first page of a 10-page PDF document, and so on for the following pages
The second parameter is the anchor point of the text box. "br" will position the bottom right corner of the text box
The third parameter is the horizontal position of the anchor point of the text box as a percentage of the page width. Must be a number between 0 and 1, with the dot . separating decimals
The fourth parameter option is the vertical position of the anchor point on the text box as a percentage of the page height. Must be a number between 0 and 1, with the dot . separating decimals
Disclaimer: I'm the author of pdftools
I am assuming you are looking for a PS-based solution. There is no page-level operator in PS that will allow you to do this. You need to add a footer-sort of thingy in the PageSetup section for each page. Any scripting language should be able to help you along.
I tried pspdftool (http://sourceforge.net/projects/pspdftool).
I eventually got it to work, but at first I got this error:
pspdftool: xreftable read error
The source file was created with pdfjoin from pdfjam, and contained a bunch of scans from my Epson Workforce as well as generated tag pages. I couldn't figure out a way to fix the xref table, so I converted to ps with pdf2ps and back to pdf with ps2pdf. Then I could use this to get nice page numbers on the bottom right corner:
pspdftool 'number(start=1, size=20, x=550 pt, y=10 pt)' input.pdf output.pdf
Unfortunately, it means that any text-searchable pages are no longer searchable because the text was rasterized in the ps conversion. Fortunately, in my case it doesn't matter.
Is there any way to fix or empty the xref table of a pdf file without losing what pages are searchable?
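One thing that may be worth trying (an assumption on my part, not verified against this particular broken file): letting qpdf or pdftk rewrite the document rebuilds the cross-reference table without rasterizing anything, so searchable text should survive:
qpdf damaged.pdf repaired.pdf
# or
pdftk damaged.pdf output repaired.pdf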
I took captaincomic's solution and added support for filenames containing spaces, plus some more information about the progress:
#!/bin/bash
clear
echo
echo This script adds page numbers to a given .pdf file.
echo
echo This script needs the packages pdftk and enscript.
echo If they are not installed, the script will fail.
echo Use the command sudo apt-get install pdftk enscript
echo to install them.
echo
input="$1"
output="${1%.pdf}-header.pdf"
echo input file is $input
echo output file will be $output
echo
pagenum=$(pdftk "$input" dump_data | grep "NumberOfPages" | cut -d":" -f2)
enscript -L1 --header='||Page $% of $=' --output - < <(for i in $(seq "$pagenum"); do echo; done) | ps2pdf - | pdftk "$input" multistamp - output "$output"
echo done.
I wrote the following shell script to solve this for LaTeX beamer style slides produced with inkscape (I pdftk cat the slides together into the final presentation PDF & then add slide numbers using the script below):
#!/bin/sh
# create working directory
tmpdir=$(mktemp --directory)
# read un-numbered beamer slides PDF from STDIN & create temporary copy
cat > $tmpdir/input.pdf
# get total number of pages
pagenum=$(pdftk $tmpdir/input.pdf dump_data | awk '/NumberOfPages/{print $NF}')
# generate latex beamer document with the desired number of empty but numbered slides
printf '%s' '
\documentclass{beamer}
\usenavigationsymbolstemplate{}
\setbeamertemplate{footline}[frame number]
\usepackage{forloop}
\begin{document}
\newcounter{thepage}
\forloop{thepage}{0}{\value{thepage} < '$pagenum'}{
\begin{frame}
\end{frame}
}
\end{document}
' > $tmpdir/numbers.tex
# compile latex file into PDF (2nd run needed for total number of pages) & redirect output to STDERR
pdflatex -output-directory=$tmpdir numbers.tex >&2 && pdflatex -output-directory=$tmpdir numbers.tex >&2
# add empty numbered PDF slides as background to (transparent background) input slides (page by
# page) & write results to STDOUT
pdftk $tmpdir/input.pdf multibackground $tmpdir/numbers.pdf output -
# remove temporary working directory with all intermediate files
rm -r $tmpdir >&2
The script reads STDIN & writes STDOUT printing diagnostic pdflatex output to STDERR.
So just copy-paste the above code in a text file, say enumerate_slides.sh, make it executable (chmod +x enumerate_slides.sh) & call it like this:
./enumerate_slides.sh < input.pdf > output.pdf [2>/dev/null]
It should be easy to adjust this to any other kind of document by adjusting the LaTeX template to use the proper documentclass, paper size & style options.
edit:
I replaced echo by $(which echo) since Ubuntu symlinks /bin/sh to dash, whose built-in echo interprets escape sequences by default and does not provide the -E option to override this behaviour. Note that alternatively you could escape all \ in the LaTeX template as \\.
edit:
I replaced $(which echo) by printf '%s' since in zsh, which echo returns echo: shell built-in command instead of /bin/echo.
See this question for details why I decided to use printf in the end.
Maybe pstops (part of psutils) can be used for this?
I have used LibreOffice Calc for this. Adding a page number field is easy using Insert->Field->Page Number, and you can then copy and paste this field to other pages; fortunately the position is not changed, and the copy-and-paste can be done quickly with the down arrow key and Ctrl+V. It worked for me for a 30-page article, but it is probably prone to errors for a 500+ page one!