Something is corrupting my .LESS files. They look fine in a text editor (VS2013 or Sublime), but when I try to compile them they contain extra strange characters. I get the same error whether I compile with Grunt or Web Essentials.
Why is this what the LESS compiler is reading?
��/ / C o r e v a r i a b l e s a n d m i x i n s
What is happening here? I'm guessing it has something to do with file encoding.
Toggle show whitespace characters (Edit > Advanced > View White Space).
Remove the first two characters (the byte-order mark).
Save the file.
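If many files are affected, removing the BOM by hand gets tedious. Here is a minimal Python sketch (the file path is whatever you pass in; this is an illustration, not part of the original fix) that rewrites a file with a UTF-16 or UTF-8 BOM as plain UTF-8:

```python
import codecs

def strip_bom(path):
    """Rewrite a file as UTF-8 without a byte-order mark."""
    with open(path, "rb") as f:
        raw = f.read()
    if raw.startswith(codecs.BOM_UTF16_LE) or raw.startswith(codecs.BOM_UTF16_BE):
        text = raw.decode("utf-16")      # decode() consumes the BOM
    elif raw.startswith(codecs.BOM_UTF8):
        text = raw.decode("utf-8-sig")   # "utf-8-sig" strips the BOM
    else:
        text = raw.decode("utf-8")       # already clean, rewrite unchanged
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)
```

Run it over each .LESS file that the compiler chokes on; the LESS compiler should then see `// Core variables and mixins` instead of the garbled bytes.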
I am using the tabulizer library in R to capture data from a table located inside a PDF on a public website
(https://www.waterboards.ca.gov/sandiego/water_issues/programs/basin_plan/docs/update082812/Chpt_2_2012.pdf).
The example table that I am interested in is on page 23 of the PDF (p. 2-21; the document has a couple of blank pages at the beginning). The table has a non-standard format and different symbols (non-alphanumeric characters in the cells).
I want to extract most if not all tables from this document.
I want to end up with a table that replaces the symbols with codes (e.g., black circles with 999, white circles with 777, plus signs with -99, etc.).
Tabulizer does a good job, for the most part, of converting the dark circles into consistent alphanumeric codes and keeping the plus signs, but it runs into problems on the REC1 column with white circles, which is odd since it does seem to recognize exotic characters in other columns.
Could anyone please help fix this? I also tried selecting the table area, but the output was worse. Below is the R code I am using.
I know I can complete this process by hand for all the tables in the document using the PDF viewer's built-in select and export tools, but I would like to automate the process.
library("tabulizer")

f2 <- "https://www.waterboards.ca.gov/sandiego/water_issues/programs/basin_plan/docs/update082812/Chpt_2_2012.pdf"

# Extract the table on PDF page 23 using the lattice algorithm
tab <- extract_tables(f2, pages = 23, method = 'lattice')
head(tab[[1]])

# extract_tables() returns a list of matrices; convert the first one
df <- as.data.frame(tab[[1]], stringsAsFactors = FALSE)
write.csv(df, file = "test.csv", row.names = FALSE)
I am new to R Markdown and I am trying to generate a PDF report for my class. I have a small figure that I do not want to take up so many lines or be surrounded by white space, so I'd like to wrap text around it. It can be aligned left or right; it doesn't matter.
I have tried a solution I found here, but
```{r plot, out.width="4in", out.extra='style="float:right; padding:10px"', echo=F}
knitr::include_graphics("myplot.png")
```
throws the following error:
"! Package keyval Error: style undefined."
and the document won't compile. I have searched high and low and can't find a solution anywhere.
I have an R Markdown file that I want to convert to PDF using knitr (or Sweave).
For example:
---
output: pdf_document
---
```{python}
for x in range(3):
    for y in range(3):
        print(x+y)
```
If I knit to PDF and copy-paste the for loop back into a text editor, the tabs are gone. This is certainly expected behaviour, given how little Markdown and PDF care about preserving whitespace, but can I still somehow preserve the actual whitespace characters when knitting from R Markdown to PDF?
It is not possible; there's nothing that can be done other than making sure the document looks right, i.e. the whitespace is there visually.
It comes down to what the PDF format actually is; see the accepted answer to this question: https://superuser.com/questions/198392/how-to-copy-text-out-of-a-pdf-without-losing-formatting
I'm having problems doing something I thought was straightforward: read the value of an input field and then write that value to a text file. I got it to work, but only partially and inconsistently. What happens is that the text a) gets cut off (not all the data entered in the field is written to the file) and b) spaces get added between each character, so the line ends up looking like this: "T H I S   I S   W H A T   Y O U R   T E X T   V A L U E"
I'm GUESSING this is an issue with the text being 'chunked' but not all the chunks being written to the file, and I can't explain the spacing issue; encoding, maybe? Anyway, here's my code:
// (obviously there is an HTML input field with id "a1Agent" and an object called PI)
PI.Name = document.getElementById("a1Agent").value;

// fs.writeFile is asynchronous; pass a callback so errors are not swallowed
fs.writeFile("c:\\Users\\Me\\Desktop\\values.txt", PI.Name, function (err) {
  if (err) throw err;
});
The string you get from an input widget (after you edit it) is actually UTF-16LE; see more here:
https://github.com/rogerwang/node-webkit/issues/1669#issuecomment-42515857
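The spaced-out characters are the classic symptom of UTF-16LE bytes being read one byte at a time: every other byte is a NUL, which many editors render as a gap between letters. A quick Python illustration (just a demonstration of the byte layout, not part of the node-webkit fix):

```python
text = "THIS IS WHAT"

# UTF-16LE encodes each ASCII character as two bytes:
# the character itself followed by a NUL byte.
raw = text.encode("utf-16-le")

# Decoding those bytes one byte per character (Latin-1) exposes the
# interleaved NULs that show up as spaces in a text editor.
garbled = raw.decode("latin-1")
assert garbled == "T\x00H\x00I\x00S\x00 \x00I\x00S\x00 \x00W\x00H\x00A\x00T\x00"

# Decoding with the correct encoding recovers the original string.
assert raw.decode("utf-16-le") == text
```

This is the same mechanism behind the garbled `C o r e  v a r i a b l e s` output in the LESS question above: UTF-16 content interpreted as a single-byte encoding.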
How can I reformat file(s) in IntelliJ and join all lines that are split?
I know that I can do that individually by selecting lines and using "Join Lines" (Ctrl + Shift + J).
Since we changed our code formatting wrap policy recently I want to be able to join lines in all files based on the updated wrap setting. (Settings > Code Style > General > Right margin)
The only thing is that IntelliJ seems happy to split lines based on the wrap setting, but silently refuses to join lines based on that setting.
Unlike the question Force code formatter in IntelliJ to join lines, I am not satisfied by splitting lines or joining manually (as the accepted answer suggests). I want IntelliJ to join lines automatically.
Bonus question: Which other editors can do this?
Disable the following code style option: Project Settings > Code Style > Wrapping and Braces > Keep when reformatting > Line breaks
IntelliJ IDEA 15
File > Settings... > Editor > Code Style > Java > Wrapping and Braces > Keep when reformatting > uncheck Line breaks
(if you want to have the same setting for another type of file, choose it from Code Style)
Go to Code > Reformat Code (Ctrl + Alt + L).