VS2013 WinFX.targets mangled, can't open new/old project files - .net-4.0

I have a bit of a strange problem and no amount of googling has given me any answers or solutions. I'm not entirely sure if it belongs on Stack Overflow, but since it involves VS/.NET Framework, I thought I'd give it a try.
Whenever I try to create a new project or open an existing project in Visual Studio 2013, I get the following error:
"Unable to read the project file ".vcxproj".
C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\Microsoft.WinFx.targets(654,31): The project file could not be loaded. ';' is an unexpected token. The expected token is '='. Line 654, position 31."
I've discovered that the mentioned Microsoft.WinFx.targets file is damaged. Lines 654 through 665 are filled with random binary data, beginning at column 31 as in the error message.
I have no clue what to do now. I can't reinstall the .NET Framework 4.0 because it's already part of Windows 8, so the standalone/web installers refuse to run. I can't delete the file because it's protected by TrustedInstaller (and in any case I have no clue how or where to get an undamaged copy of the file).
Just a bit of background info: I recently installed Windows 8, then upgraded to 8.1. I initially had Visual Studio 2013 Ultimate RC installed which worked at first. I hadn't used it in a week or two when I tried to load the Quake 3 source and first encountered the error. I thought it was because I was using an RC, so I uninstalled VS and installed VS13 Professional. Of course, this didn't solve my problem.
Any tips on how to proceed or insight into what may have happened?
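Not part of the original thread, but one quick way to confirm exactly which lines of a .targets file are damaged is to scan it for bytes that cannot occur in a plain-text XML file. A minimal Python sketch (the helper name is made up; it assumes the file is ASCII-only, which holds for the stock MSBuild .targets files):

```python
def find_corrupt_lines(path):
    """Return (line_number, column) pairs where the first non-text byte appears."""
    bad = []
    with open(path, "rb") as f:
        for lineno, line in enumerate(f, start=1):
            for col, byte in enumerate(line, start=1):
                # Printable ASCII plus tab/CR/LF are fine in these XML files;
                # anything else here is almost certainly corruption.
                if byte not in (0x09, 0x0A, 0x0D) and not 0x20 <= byte <= 0x7E:
                    bad.append((lineno, col))
                    break  # one hit per line is enough
    return bad
```

Run against the file from the error message, this should report hits starting at line 654, column 31.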
The random data is the following (excerpt; the full dump fills lines 654-665 with several kilobytes more of the same):
;;Z<Ö<Û<æ<=F=Q=•=_>e>j>s>€>œ>«>°>¹>È>Ô>Ù>â>ñ>ö>ÿ>???+?0?9?H?M?V?i?n?w?Š??˜?«?°?¹?Ì?Ñ?Ú?í?ò?û? [...]

sqlQuery in R fails when called via source() [duplicate]

The following, when copied and pasted directly into R works fine:
> character_test <- function() print("R同时也被称为GNU S是一个强烈的功能性语言和环境,探索统计数据集,使许多从自定义数据图形显示...")
> character_test()
[1] "R同时也被称为GNU S是一个强烈的功能性语言和环境,探索统计数据集,使许多从自定义数据图形显示..."
However, if I make a file called character_test.R containing the EXACT SAME code, save it in UTF-8 encoding (so as to retain the special Chinese characters), then when I source() it in R, I get the following error:
> source(file="C:\\Users\\Tony\\Desktop\\character_test.R", encoding = "UTF-8")
Error in source(file = "C:\\Users\\Tony\\Desktop\\character_test.R", encoding = "utf-8") :
C:\Users\Tony\Desktop\character_test.R:3:0: unexpected end of input
1: character.test <- function() print("R
2:
^
In addition: Warning message:
In source(file = "C:\\Users\\Tony\\Desktop\\character_test.R", encoding = "UTF-8") :
invalid input found on input connection 'C:\Users\Tony\Desktop\character_test.R'
Any help you can offer in solving and helping me to understand what is going on here would be much appreciated.
> sessionInfo() # Windows 7 Pro x64
R version 2.12.1 (2010-12-16)
Platform: x86_64-pc-mingw32/x64 (64-bit)
locale:
[1] LC_COLLATE=English_United Kingdom.1252
[2] LC_CTYPE=English_United Kingdom.1252
[3] LC_MONETARY=English_United Kingdom.1252
[4] LC_NUMERIC=C
[5] LC_TIME=English_United Kingdom.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods
[7] base
loaded via a namespace (and not attached):
[1] tools_2.12.1
and
> l10n_info()
$MBCS
[1] FALSE
$`UTF-8`
[1] FALSE
$`Latin-1`
[1] TRUE
$codepage
[1] 1252
On R/Windows, source runs into problems with any UTF-8 characters that can't be represented in the current locale (or ANSI Code Page in Windows-speak). And unfortunately Windows doesn't have UTF-8 available as an ANSI code page--Windows has a technical limitation that ANSI code pages can only be one- or two-byte-per-character encodings, not variable-byte encodings like UTF-8.
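A quick illustration of that limitation (in Python for clarity, since the point is about Windows code pages rather than R): Windows-1252 is a single-byte code page with no slot for a CJK character, so any transcoding of the string to the ANSI code page must fail or lose data.

```python
text = "R同时也"  # the start of the string from the question

utf8_bytes = text.encode("utf-8")  # always representable in UTF-8
try:
    text.encode("cp1252")          # the Windows-1252 ANSI code page
except UnicodeEncodeError as e:
    # cp1252 has no encoding for the Chinese characters
    print("cp1252 cannot represent:", text[e.start:e.end])
```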
This doesn't seem to be a fundamental, unsolvable problem--there's just something wrong with the source function. You can get 90% of the way there by doing this instead:
eval(parse(filename, encoding="UTF-8"))
This'll work almost exactly like source() with default arguments, but won't let you do echo=T, eval.print=T, etc.
We talked about this a lot in the comments to my previous post, but I don't want it to get lost on page 3 of the comments: you have to set the locale. It works both with input from the R console (see the screenshot in the comments) and with input from a file; see this screenshot:
The file "myfile.r" contains:
russian <- function() print ("Американские с...");
The console contains:
source("myfile.r", encoding="utf-8")
> Error in source(".....
Sys.setlocale("LC_CTYPE","ru")
> [1] "Russian_Russia.1251"
russian()
[1] "Американские с..."
Note that the file-in fails, and it points to the same character as the original poster's error (the one after the "R). I cannot test this with Chinese because I would have to install "Microsoft Pinyin IME 3.0", but the process is the same: you just replace the locale with "chinese" (the naming is a bit inconsistent; consult the documentation).
I think the problem lies with R. I can happily source UTF-8 files, or UCS-2LE files with many non-ASCII characters in. But some characters cause it to fail. For example the following
danish <- function() print("Skønt H. C. Andersens barndomsomgivelser var meget fattige, blev de i hans rige fantasi solbeskinnede.")
croatian <- function() print("Dodigović. Kako se Vi zovete?")
new_testament <- function() print("Ne provizu al vi trezorojn sur la tero, kie tineo kaj rusto konsumas, kaj kie ŝtelistoj trafosas kaj ŝtelas; sed provizu al vi trezoron en la ĉielo")
russian <- function() print ("Американские суда находятся в международных водах. Япония выразила серьезное беспокойство советскими действиями.")
is fine in both UTF-8 and UCS-2LE without the Russian line. But if that is included then it fails. I'm pointing the finger at R. Your Chinese text also appears to be too hard for R on Windows.
Locale seems irrelevant here. It's just a file; you tell it what encoding the file is in, so why should your locale matter?
For me (on Windows) the following works fine:
source.utf8 <- function(f) {
  l <- readLines(f, encoding = "UTF-8")
  eval(parse(text = l), envir = .GlobalEnv)
}
Building on crow's answer, this solution makes RStudio's Source button work.
When hitting that Source button, RStudio executes source('myfile.r', encoding = 'UTF-8'), so overriding source makes the errors disappear and runs the code as expected:
source <- function(f, encoding = 'UTF-8') {
  l <- readLines(f, encoding = encoding)
  eval(parse(text = l), envir = .GlobalEnv)
}
You can then add that script to an .Rprofile file, so it will execute on startup.
I encountered this problem when I tried to source an .R file containing some Chinese characters. In my case, I found that merely setting "LC_CTYPE" to "chinese" was not enough, but setting "LC_ALL" to "chinese" worked well.
Note that it's not enough to get the encoding right when you read or write a plain-text file with non-ASCII characters in RStudio (or R?); the locale setting counts too.
PS: the command is Sys.setlocale(category = "LC_CTYPE", locale = "chinese"). Replace the locale value accordingly.
On Windows, when you copy-paste a Unicode or UTF-8 encoded string into a text control that is set to single-byte input (ASCII or similar, depending on locale), the unknown bytes are replaced by question marks. If I take the first few characters of your string and copy-paste them into e.g. Notepad and then save the file, it becomes, in hex:
52 3F 3F 3F 3F
What you have to do is find an editor which you can set to UTF-8 before copy-pasting the text into it; then the saved file (for those same characters) becomes:
52 E5 90 8C E6 97 B6 E4 B9 9F E8 A2 AB
This will then be recognized as valid UTF-8 by [R].
I used "Notepad2" to try this, but I am sure there are many more.
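The two hex dumps above can be reproduced (here in Python, for illustration): encoding the start of the question's string as UTF-8 versus forcing it through a single-byte code page with '?' substitution, which is effectively what a non-Unicode-aware editor does on paste.

```python
s = "R同时也被"  # "R" plus the first Chinese characters from the question

# Valid UTF-8 -- matches the second dump above
print(s.encode("utf-8").hex(" "))
# 52 e5 90 8c e6 97 b6 e4 b9 9f e8 a2 ab

# Single-byte code page with '?' substitution -- matches the first dump
print(s.encode("cp1252", errors="replace").hex(" "))
# 52 3f 3f 3f 3f
```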

How to set an Icon in NSIS install (CMake)

The documentation for CPACK_PACKAGE_ICON on the CMake wiki page is very limited.
The following is not working for me (as per that documentation):
set(CPACK_PACKAGE_ICON "${CMAKE_CURRENT_SOURCE_DIR}/images/MyIcon.bmp")
include(CPack)
It leads to:
File: "C:/proj/my_library/images/MyIcon.bmp" -> no files found.
Usage: File [/nonfatal] [/a] ([/r] [/x filespec [...]] filespec [...] |
/oname=outfile one_file_only)
Error in macro MUI_HEADERIMAGE_INIT on macroline 24
Error in macro MUI_GUIINIT on macroline 3
Error in macro MUI_FUNCTION_GUIINIT on macroline 4
Error in macro MUI_INSERT on macroline 11
Error in macro MUI_LANGUAGE on macroline 7
Error in script "C:/proj/bin-win/_CPack_Packages/win32/NSIS/project.nsi" on line 574 -- aborting creation process
So how does one actually set a working icon during the install process of an NSIS installer? Also, what format is actually needed for the icon?
After some trial and error I finally found the two tricks required:
The syntax is actually:
set(CPACK_PACKAGE_ICON "${CMAKE_CURRENT_SOURCE_DIR}/images\\\\MyIcon.bmp")
And the BMP file is restricted to an older format, which is not the default for ImageMagick. E.g.:
$ file MyIcon.bmp
MyIcon.bmp: PC bitmap, Windows 98/2000 and newer format, 128 x 128 x 24
What is needed is this:
$ convert MyIcon.bmp BMP3:MyIcon2.bmp
$ file MyIcon2.bmp
MyIcon2.bmp: PC bitmap, Windows 3.x format, 128 x 128 x 24
The first representation (Windows 98/2000 and newer format) did not work for me.
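The two variants the `file` command reports can be told apart by the size of the DIB header at byte offset 14 of the BMP: 40 bytes is the old BITMAPINFOHEADER ("Windows 3.x format") that NSIS accepts, while 108/124 are the newer V4/V5 headers. A small Python sketch (the helper name is made up) for checking an icon before handing it to CPack:

```python
import struct

def bmp_header_kind(path):
    """Report which BMP header variant a file uses, based on the DIB header size."""
    with open(path, "rb") as f:
        if f.read(2) != b"BM":
            return "not a BMP"
        f.seek(14)  # DIB header starts right after the 14-byte file header
        (dib_size,) = struct.unpack("<I", f.read(4))
    return {40: "Windows 3.x (BITMAPINFOHEADER) -- OK for NSIS",
            108: "V4 header (newer format)",
            124: "V5 header (newer format)"}.get(dib_size, f"unknown ({dib_size})")
```

A BMP reported here as "Windows 3.x" corresponds to the BMP3: output of the convert command above.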
For me, this command in CMakeLists.txt works fine:
set(CPACK_NSIS_MUI_ICON "${CMAKE_CURRENT_SOURCE_DIR}/images\\\\icon.ico")
I found it here https://cmake.org/cmake/help/v3.0/module/CPackNSIS.html

How can I convert a PDF from Google Docs to images? [or: GoogleDocs' PDF export is horrible!]

I exported a document from Google Docs as PDF (just simple pages and one of the pre-defined themes) and, as I usually do, I used ImageMagick's convert to turn the pages into images, but it failed (even with the latest version) and showed no errors.
Ghostscript also failed.
Other tools such as pdfinfo, mutool or qpdf don't report any error, yet the conversion still fails even after applying their rebuild or clean commands.
Only pdfimages complains, giving me Syntax Error: Missing or invalid Coords in shading dictionary.
Ok, I tried to reproduce some bugs, using Google Slides.
However, my bugs are different from yours. Read on for some details...
Google Docs does indeed create a horrible PDF syntax today. I say 'today' because I gave up on Google Docs years ago. The reason: it was always very unstable for me in the past. GoogleDocs' developers seem to change the code they activate for users all the time, and debugging the created PDFs was always a moving target for me.
When I exported the slideshow I created to PDF and then ran the tools you mentioned on it, I got 4 different results within 20 minutes!
In one case, Mac OS X's Preview.app was unable to render anything else but 3 white pages, while Adobe's Acrobat Pro rendered it (without error message) somehow garbled and different from the GoogleDocs web preview.
In another case, Acrobat Pro showed 3 white pages, while Preview.app rendered it in a garbled way!
Unfortunately, I didn't save the different versions for closer inspection. The latest PDF I analysed, however, gave the following details.
Ghostscript:
pdfkungfoo#mbp:> gs -o PDFExportBug-%03d.jpg -sDEVICE=jpeg PDFExportBug.pdf
GPL Ghostscript 9.10 (2013-08-30)
Copyright (C) 2013 Artifex Software, Inc. All rights reserved.
This software comes with NO WARRANTY: see the file PUBLIC for details.
Processing pages 1 through 3.
Page 1
**** Error reading a content stream. The page may be incomplete.
**** File did not complete the page properly and may be damaged.
Page 2
**** Error reading a content stream. The page may be incomplete.
**** File did not complete the page properly and may be damaged.
Page 3
**** Error reading a content stream. The page may be incomplete.
**** File did not complete the page properly and may be damaged.
**** This file had errors that were repaired or ignored.
**** Please notify the author of the software that produced this
**** file that it does not conform to Adobe's published PDF
**** specification.
ImageMagick:
convert creates white-only images from the PDF pages.
(That's no wonder, because it does not process PDFs directly, but employs Ghostscript as its delegate to convert the PDF to a raster format first, which is then familiar ground for ImageMagick to continue processing... You can see details of this process by adding -verbose to your ImageMagick command line.)
qpdf
Using qpdf --check yields this result:
pdfkungfoo#mbp:> qpdf --check PDFExportBug.pdf
checking GoogleSlidesPDFExportBug.pdf
PDF Version: 1.4
File is not encrypted
File is not linearized
PDFExportBug.pdf (file position 9269):
unknown token while reading object (0.0000-11728996)
pdfimages:
Unlike what you discovered, my error message was this:
pdfkungfoo#mbp:> pdfimages -list PDFExportBug.pdf
page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
Syntax Warning (9276): Badly formatted number
Syntax Warning (9292): Badly formatted number
Syntax Warning (9592): Badly formatted number
Syntax Warning (9608): Badly formatted number
Syntax Warning (4907): Badly formatted number
Syntax Warning (4907): Badly formatted number
Syntax Warning (9908): Badly formatted number
Syntax Warning (9924): Badly formatted number
Syntax Warning (8212): Badly formatted number
Syntax Warning (8212): Badly formatted number
When I check the file offsets 9276, 9292, ..., 8212 in a text editor, I indeed find the following lines in the PDF code:
Line 412: 0.0000-11728996
Line 413: 0.0000-11728996
Line 466: 0.0000-11728996
Line 467: 0.0000-11728996
Line 522: 0.0000-11728996
Line 523: 0.0000-11728996
PDF code in text editor:
Looking at the context of these lines, one sees the following:
32
0
obj
<<
/ShadingType
2
/ColorSpace
/DeviceRGB
/Function
<<
/FunctionType
2
/Domain
[
0
1
]
/Range
[
0
1
0
1
0
1
]
/C0
[
0.5882353
0.05882353
0.05882353
]
/C1
[
0.78431374
0.1254902
0.03529412
]
/N
1
>>
/Coords
[
0.000000000000053689468
0.0000
-11728996
0.0000
-11728996
26.832815
]
/Extend
[
true
true
]
>>
endobj
That's right: GoogleDocs gave me a PDF with a newline after every single token!
PDF code, if Google had formatted it less horribly:
These lines are part of a code snippet that should probably be formatted like this, if the Google PDF export weren't as horrible as it in fact is:
32 0 obj
<<
/ShadingType 2
/ColorSpace /DeviceRGB
/Function << /FunctionType 2
/Domain [ 0 1 ]
/Range [ 0 1 0 1 0 1 ]
/C0 [ 0.5882353 0.05882353 0.05882353 ]
/C1 [ 0.78431374 0.1254902 0.03529412 ]
/N 1
>>
/Coords [ 0.000000000000053689468 0.0000 -11728996 0.0000 -11728996 26.832815 ]
/Extend [ true true ]
>>
endobj
PDF code compared to the PDF specification:
So GoogleDocs' PDF uses /ShadingType 2 (axial shading). This shading type requires a 'shading dictionary' with an entry for the /Coords key whose value is an array of 4 numbers [x0 y0 x1 y1]. These numbers specify the starting and ending coordinates of the axis (expressed in the shading's target coordinate space).
However, instead of a /Coords array of 4 numbers it uses one of 6 numbers: [0.000000000000053689468 0.0000 -11728996 0.0000 -11728996 26.832815].
But Coords arrays with 6 numbers are to be used by /ShadingType 3 (radial shading).
The 6 numbers [x0 y0 r0 x1 y1 r1] then represent, according to ISO 32000:
"[...] the centres and radii of the starting and ending circles, expressed in the shading’s target coordinate space. The radii r0 and r1 shall both be greater than or equal to 0. If one radius is 0, the corresponding circle shall be treated as a point; if both are 0, nothing shall be painted."
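The consistency rule the PDF violates can be stated in a few lines of code. A sketch (Python, not part of the original answer; the function name is made up) of the check a validator would apply:

```python
# Required /Coords length per /ShadingType, per ISO 32000:
EXPECTED_COORDS = {2: 4,   # axial:  [x0 y0 x1 y1]
                   3: 6}   # radial: [x0 y0 r0 x1 y1 r1]

def coords_consistent(shading_type, coords):
    """True if the /Coords array has the entry count its /ShadingType requires."""
    return len(coords) == EXPECTED_COORDS.get(shading_type)

# The broken object 32: /ShadingType 2 paired with 6 numbers.
print(coords_consistent(2, [0.000000000000053689468, 0.0, -11728996,
                            0.0, -11728996, 26.832815]))
# False -- exactly the mismatch pdfimages complains about
```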
15 minutes later, I exported the PDF again, but now I got these lines:
/Coords
[
0.000000000000053689468
0.0000-11728996
0.0000-11728996
26.832815
]
As you'll notice, now indeed the /Coords array has 4 entries -- but 0.0000-11728996 isn't a valid number!
In any case, the particular numbers in my objects 32, 33 and 34 do look funny somehow:
Either they are meant to be 6 numbers:
[0.000000000000053689468 0.0000 -11728996 0.0000 -11728996 26.832815]
Then they can only be meant for a /ShadingType 3 (radial shading)
But they are noted in the context of /ShadingType 2 (axial shading)
Or they are meant to be 4 numbers:
[0.000000000000053689468 0.0000-11728996 0.0000-11728996 26.832815]
Then 0.0000-11728996 is not a valid number.
Fix
So the fix could be to...
...either change /ShadingType 2 to /ShadingType 3 and keep the array of 6 numbers,
...or keep /ShadingType 2 and throw away 2 of the 6 numbers, keeping only 4 (but which ones?).
I decided (arbitrarily) to try ShadingType 2 first and delete these two numbers: -11728996 0.0000.
I was lucky: the PDF now lets convert process the PDF pages into JPEGs (which means the Ghostscript command called by convert was also working correctly).
Good luck with your continued use of GoogleDocs for creating PDFs...
...but don't count me in!
Update
Here is a link to a GoogleDoc currently exhibiting one of the bug variants explained above:
To see the bug, save it as a PDF. Then open it in a text editor.
Should the doc behind this link stop exporting buggy PDFs and stop exhibiting the details I've described above, then Google has applied a fix... (until they break it again?!?)

Moose throwing an error when trying to build a distribution with Dist::Zilla

I'm having a bit of trouble building a Dist::Zilla distribution. Every time I try to build it, or really do anything else (test, smoke, listdeps, whatever it is), I get this error message:
Attribute name must be provided before calling reader at /home/mxe/perl5/perlbrew/perls/perl-5.14.0/lib/site_perl/5.14.0/MooseX/LazyRequire/Meta/Attribute/Trait/LazyRequire.pm line 40
MooseX::LazyRequire::Meta::Attribute::Trait::LazyRequire::__ANON__('Dist::Zilla::Dist::Builder=HASH(0x53d06e0)') called at /home/mxe/perl5/perlbrew/perls/perl-5.14.0/lib/site_perl/5.14.0/x86_64-linux/Class/MOP/Mixin/AttributeCore.pm line 45
Class::MOP::Mixin::AttributeCore::default('Moose::Meta::Class::__ANON__::SERIAL::5=HASH(0x50c9c30)', 'Dist::Zilla::Dist::Builder=HASH(0x53d06e0)') called at reader Dist::Zilla::name (defined at /home/mxe/perl5/perlbrew/perls/perl-5.14.0/lib/site_perl/5.14.0/Dist/Zilla.pm line 41) line 6
Dist::Zilla::name('Dist::Zilla::Dist::Builder=HASH(0x53d06e0)') called at /home/mxe/perl5/perlbrew/perls/perl-5.14.0/lib/site_perl/5.14.0/Dist/Zilla/Dist/Builder.pm line 264
Dist::Zilla::Dist::Builder::build_in('Dist::Zilla::Dist::Builder=HASH(0x53d06e0)', undef) called at /home/mxe/perl5/perlbrew/perls/perl-5.14.0/lib/site_perl/5.14.0/Dist/Zilla/Dist/Builder.pm line 315
Dist::Zilla::Dist::Builder::ensure_built_in('Dist::Zilla::Dist::Builder=HASH(0x53d06e0)') called at /home/mxe/perl5/perlbrew/perls/perl-5.14.0/lib/site_perl/5.14.0/Dist/Zilla/Dist/Builder.pm line 304
Dist::Zilla::Dist::Builder::ensure_built('Dist::Zilla::Dist::Builder=HASH(0x53d06e0)') called at /home/mxe/perl5/perlbrew/perls/perl-5.14.0/lib/site_perl/5.14.0/Dist/Zilla/Dist/Builder.pm line 322
Dist::Zilla::Dist::Builder::build_archive('Dist::Zilla::Dist::Builder=HASH(0x53d06e0)') called at /home/mxe/perl5/perlbrew/perls/perl-5.14.0/lib/site_perl/5.14.0/Dist/Zilla/App/Command/build.pm line 30
Dist::Zilla::App::Command::build::execute('Dist::Zilla::App::Command::build=HASH(0x4aeb7b0)', 'Getopt::Long::Descriptive::Opts::__OPT__::2=HASH(0x4bdb150)', 'ARRAY(0x3873fc8)') called at /home/mxe/perl5/perlbrew/perls/perl-5.14.0/lib/site_perl/5.14.0/App/Cmd.pm line 231
App::Cmd::execute_command('Dist::Zilla::App=HASH(0x3c6f418)', 'Dist::Zilla::App::Command::build=HASH(0x4aeb7b0)', 'Getopt::Long::Descriptive::Opts::__OPT__::2=HASH(0x4bdb150)') called at /home/mxe/perl5/perlbrew/perls/perl-5.14.0/lib/site_perl/5.14.0/App/Cmd.pm line 170
App::Cmd::run('Dist::Zilla::App') called at /home/mxe/perl5/perlbrew/perls/perl-5.14.0/bin/dzil line 15
Or something along those lines. The contents of my dist.ini are as follows:
author = Gary Warman <email@host.com>
license = None ; this is an all rights reserved license
copyright_holder = Gary Warman
copyright_year = 2011
[ReadmeFromPod]
[@Filter]
-bundle = @Basic
-remove = Readme
[AutoPrereqs]
[OurPkgVersion] ; use this instead of [PkgVersion]
[PodWeaver]
[MetaNoIndex]
file = perlcritic.rc
[MetaJSON]
[NextRelease]
format = %-9v %{yyyy-MM-dd}d ; make Changes Spec happy
[@TestingMania]
disable = NoTabsTests ; TestSynopsis optional if synopsis is not perl or if it's a largely generated codebase
critic_config = perlcritic.rc
[ExtraTests]
[PodSpellingTests]
wordlist = Pod::Wordlist::hanekomu ;optional
spell_cmd = aspell list
[PruneFiles]
filenames = dist.ini
filenames = weaver.ini
[@Git]
[Git::NextVersion]
first_version = 0.1.0 ; use semantic versioning; if you don't know what this means, read http://semver.org/ -- may switch to the semantic versioning plugin at some point.
[CheckChangesHasContent]
[Clean] ; optional, this cleans up directories upon running dzil release.
So, anyone here have any idea what's going on so I can resolve this?
Your dist.ini file must contain an assignment for name, which is what that horrid error is reporting. Insert, at the very top of your file:
name = My-Awesome-Dist
...and it will work.

Why does Rebol's copy-file fail with really big files, whereas Windows Explorer doesn't?

I tried Carl's copy-file function:
http://www.rebol.com/article/0281.html
With a 155 MB file it works.
Then I tested with a 7 GB file, and it fails without stating any limit.
Why is there a limit? I can't see anything in the code that imposes one.
There's no error message:
>> copy-file to-rebol-file "D:\#mirror_ftp\cpmove.tar" to-rebol-file "D:\#mirror_ftp\testcopy.tar"
0:00
== none
>>
REBOL uses 32-bit signed integers, so it can't read files bigger than 2147483647 bytes (2^31 - 1), which is roughly 2 GB. REBOL3 uses 64-bit integers, so it won't have this limitation.
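The usual workaround for such a ceiling is to never hold the whole file (or a whole-file length) at once: copy in fixed-size chunks so no value ever has to fit in 32 bits, which is essentially what Explorer does. A sketch in Python rather than Rebol (the function name is made up):

```python
def copy_file_chunked(src, dst, chunk_size=8 * 1024 * 1024):
    """Copy src to dst in fixed-size chunks; works past the 2 GB mark."""
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            chunk = fin.read(chunk_size)  # at most 8 MiB in memory at a time
            if not chunk:
                break
            fout.write(chunk)

# The limit from the answer: largest 32-bit signed integer.
print(2**31 - 1)  # 2147483647
```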