I've been using pandas.to_csv on a Mac to save csv.zip files. When I try to zless or zcat them on my Mac, I see gibberish. I can read_csv them just fine. locale gives me
LANG="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_CTYPE="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_ALL=
in case that helps. Does anyone know what I need to do to be able to open them from the command line?
It seems you have to unzip the file: to_csv with a .zip name writes a zip archive rather than a gzip stream, and zcat/zless only understand gzip. You can use unzip -c to print the contents to stdout and pipe them to something else. You'll see some metadata at the beginning that you can trim with tail.
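For example, a minimal sketch (the file name is just an example; adjust the tail offset to however many header lines unzip -c prints before the data):
unzip -c data.csv.zip | tail -n +3 | less
Alternatively, unzip -p writes only the member contents to stdout, with no header to trim.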
I'm trying to extract the strings for localization. There are many files where some of the strings are wrapped in NSLocalizedString and some are not.
I'm able to grab the NSLocalizedStrings using ibtool and genstrings, but I'm unable to extract the plain strings without NSLocalizedString.
I'm not good at regex, but I came up with this: "[^(]@\""
and with the help of grep:
grep -i -r -I "[^(]@\"" * > out.txt
It worked, and all the strings were actually grabbed into a txt file, but the problem is that if my code has a line like:
..... initWithTitle:@"New Sketch".....
I only expect the grep to grab the @"New Sketch" part, but it grabs the whole line.
So in the out.txt file, I see initWithTitle:@"New Sketch", along with some unwanted lines.
How can I write the regex to grab only the strings in double quotes?
I tried the grep command with the regex mentioned here, but it gave me a syntax error.
For example, I tried:
grep -i -r -I (["'])(?:(?=(\\?))\2.)*?\1 * > out.txt
and it gave me
-bash: syntax error near unexpected token `('
In Xcode, open your project and go to Editor -> Export For Localization... It will create a folder of files. Everything that was marked for localization will be extracted there, so there is no need to parse it yourself. It will be in XML format.
If you want to go the hard way, you can then parse those files the way you're trying to do now. It will also include the Storyboard strings, by the way.
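If you do want to stick with grep, here is a minimal sketch that prints only the quoted literals rather than whole lines (assuming GNU or BSD grep; -o prints just the matching part, and the pattern does not handle escaped quotes inside a string):
grep -rIoh '@"[^"]*"' . > out.txt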
The program a2ps does not support UTF-8. At least my version only supports the Latin-X encodings:
a2ps --list=encoding
Version:
GNU a2ps 4.14
How can I convert a simple UTF-8 text file to PostScript or PDF?
If what you actually want is to use a2ps or enscript (which is a similar tool), and your only need is to use them with some UTF-8 document, you just have to convert your document to ISO-8859-1 or some other supported encoding. Various tools allow this. For instance, here is a workflow for enscript (but you can surely do the same with a2ps):
cat document.txt | iconv -c -f utf-8 -t ISO-8859-1 | enscript -o document.ps
But you may lose some characters during the conversion because such encodings have a smaller range than UTF-8.
On the other hand, if UTF-8 is a requirement, you may instead have to look for a more recent tool that can convert UTF-8 to PDF. I wrote a Python program called txt2pdf myself; you may find it here. Also have a look at tools like pandoc, gimli, rst2pdf or wkhtmltopdf.
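For instance, with pandoc (a sketch; producing PDF directly typically assumes a LaTeX engine such as pdflatex is installed):
pandoc document.txt -o document.pdf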
You can use Vim. Open the file and execute the command :hardcopy > output.ps in normal mode. You can also do this directly from the shell. Executing
$ vim -c ":hardcopy > output.ps" -c ":quit" input.txt
in your shell will open Vim, generate the output.ps, and then close Vim.
Use paps! For instance, I use it as follows:
paps --font="Monospace 10" input.txt > output.ps
and I have no problems with UTF-8 encoding.
If you need a PDF file, then run
ps2pdf output.ps
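The two steps can also be combined into one pipeline (a sketch; this assumes ps2pdf from Ghostscript, which accepts - to read PostScript from stdin):
paps --font="Monospace 10" input.txt | ps2pdf - output.pdf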
I've gotten acceptable results (for printing code listings) from https://github.com/arsv/u2ps
https://gitlab.com/gnomify/u2ps is the replacement of gnome-u2ps.
If the text file is small, paps can convert the text to PS, which can then be fed to ps2pdf. The problem is that the PS file from paps causes ps2pdf to create a very large PDF file. If that is acceptable, this approach works. Currently, I am getting a very large PDF from paps.
There's a utility based on the GNOME libraries named gnome-u2ps. It has less functionality than a2ps, and it seems it is no longer maintained.
I am writing a package using the GNU build system, so the documentation is in Texinfo format. As a result, executing make converts the Texinfo file into Info format, and executing make pdf automatically produces a PDF file.
In the texinfo file, I have something like this:
@verbatim
awk '{...}' data.txt
@end verbatim
However, in the PDF, the "basic" single quotes (U+0027) in the awk command above are transformed into "curly" single quotes (U+2019), so if one copy-pastes the command from the PDF into a terminal, bash complains ("syntax error"). This forces the user to edit the command they just copy-pasted. The same problem occurs if I replace @verbatim with @example. I searched the Texinfo manual but couldn't find a way to keep plain apostrophes. I am using Texinfo version 5.2.
Karl Berry (via the bug-texinfo mailing list) told me to add 2 lines to my texi file (more info):
@codequoteundirected on
@codequotebacktick on
as well as add the latest version of texinfo.tex to my package.
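Put together, the relevant part of the .texi file would look something like this (a sketch; the two settings typically go near the top of the file, before the example):
@codequoteundirected on
@codequotebacktick on
@verbatim
awk '{...}' data.txt
@end verbatim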
I would like to overwrite the input file with the output file of the same name because of the limited disk space on my system. Is it possible? I know this is not recommended, but I already have the input files backed up. I will use a shell loop to run the cut command.
#!/bin/bash
for i in {1..1000}
do
cut --delimiter=' ' --fields=1,3-7 input$i.txt > input$i.txt
done
You could always redirect to a temporary file and then, when you're sure everything went fine, rename it over the original file.
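Applied to the loop above, that could look like this (a sketch; the && makes sure the original is only replaced if cut succeeded):
#!/bin/bash
for i in {1..1000}
do
    cut --delimiter=' ' --fields=1,3-7 "input$i.txt" > "input$i.tmp" && mv "input$i.tmp" "input$i.txt"
done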
Some GNU utilities have a -i option (such as sed) that allows you to change a file in place. Most of the filtering and editing that cut does can also be done with sed.
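For instance, a sketch of the same field extraction done in place with GNU sed (this assumes every line has at least 7 space-separated fields; on BSD/macOS sed the in-place flag is written -i '' instead):
sed -E -i 's/^([^ ]+) [^ ]+ (([^ ]+ ){4}[^ ]+).*/\1 \2/' input1.txt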
The shell will parse the command and handle the redirections first. When it sees "> afile" it will truncate "afile" and open it for writing. Your data is now destroyed. Then the shell hands the filename to cut which now has nothing to read.
This is how I learned:
some | pipeline < my_file > my_file.tmp
ln my_file my_file.bak # this is a hard link
mv my_file.tmp my_file
That keeps the original data in place for as long as possible.
If you're having disk space issues, you will have to read the input file into memory entirely.
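A sketch of that idea in plain bash, holding the result in a shell variable before writing it back (command substitution strips trailing blank lines, and this only suits text files that fit comfortably in memory):
data=$(cut --delimiter=' ' --fields=1,3-7 input1.txt)
printf '%s\n' "$data" > input1.txt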
In case of very limited disk space (disk quota), you could try placing a compressed copy of the source file in RAM (/dev/shm) and using that as the source, uncompressing it to stdout and piping that into your command.
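For example (a sketch; /dev/shm is a Linux tmpfs, so this assumes the compressed data fits in RAM):
gzip -c input1.txt > /dev/shm/input1.txt.gz
zcat /dev/shm/input1.txt.gz | cut --delimiter=' ' --fields=1,3-7 > input1.txt
rm /dev/shm/input1.txt.gz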
Is it possible to search multiple PDF files using the grep command? It doesn't seem to work. How do people search content across multiple PDF files?
Well, PDF is a binary format, and grep can search binary files as if they were text
grep -a
or you can just use pdftotext (which comes with xpdf) like this:
pdftotext whee.pdf - | grep pattern
You don't mention which OS you're using, but under Mac OS X you can use mdfind from the command line:
mdfind -onlyin search/directory/path "kind:pdf search text"
Use something like Solr or CLucene; I think they can do what you want.
PDF is a binary format, which is why searching it with grep is not that helpful. You can search the strings in a PDF with grep like this:
ls dir_with_pdfs/*.pdf|xargs strings|grep "keyword"
Or you can use the pdftotext command on the PDFs and then search the result with grep.
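For example, a minimal sketch that lists which PDFs contain a keyword (assuming poppler's pdftotext, where - sends the extracted text to stdout):
for f in dir_with_pdfs/*.pdf; do
    pdftotext "$f" - | grep -q "keyword" && echo "$f"
done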
The tool pdfgrep will do the job. It has a syntax similar to grep. To search several files, just use a simple shell command. For example:
$> ls Documents/*.pdf | xargs pdfgrep -n -H "system"
Documents/2005-DoddGutierrezRO-MAN1.pdf:1: designed episodic memory system
Documents/2005-DoddGutierrezRO-MAN1.pdf:1: how ISAC's episodic memory system is
Documents/2005-DoddGutierrezRO-MAN1.pdf:1: cognitive system employs a combination
....
PDF is a binary dump of objects used to display the pages. There may be some metadata you can grep, but the actual page text is in a PostScript stream and may be encoded in a variety of ways. It's also not guaranteed to be in any particular order. You need to think of PDF more like a vector image file than a text file.
There is a short article explaining text in PDFs in more detail at http://pdf.jpedal.org/java-pdf-blog/bid/27187/Understanding-the-PDF-file-format-text-streams
If you have pdftotext installed via the poppler package, then try this Perl script:
#!/usr/bin/perl
# First argument is the search pattern; the remaining arguments are PDF files.
my $p = shift;
foreach my $fn (@ARGV) {
    open(F, "pdftotext $fn - |");              # convert each PDF to text on stdout
    while (<F>) { print "$fn:$_" if /$p/; }    # print matching lines prefixed with the file name
    close(F);
}
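Usage would look something like this (the script name pdfsearch.pl is just an example):
perl pdfsearch.pl "keyword" Documents/*.pdf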