I am using asset_sync 1.1.0 and fog 1.25.0. When asset_sync uploads my files to S3 and I request them back, the response contains an empty Content-Encoding: header. However, when I upload the same files manually using S3Fox, a Firefox extension, this header is not present.
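For example, inspecting the response headers with curl (the bucket and file names here are placeholders) shows the empty header:

curl -I https://mybucket.s3.amazonaws.com/assets/application.css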
My question is: how can I stop asset_sync from adding this header?
Notes: there are multiple open issues about this on the asset_sync repo on GitHub with no reply so far, and an unsolved issue was closed as well.
The problem was with the YAML file: asset_sync falls back to its default configuration if the YAML file contains any errors.
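For anyone hitting the same thing, here is a minimal sketch of a config/asset_sync.yml that parses cleanly; the bucket name is a placeholder, and gzip_compression is asset_sync's documented option for controlling gzipped uploads:

defaults: &defaults
  fog_provider: "AWS"
  fog_directory: "my-bucket"  # placeholder
  aws_access_key_id: "<%= ENV['AWS_ACCESS_KEY_ID'] %>"
  aws_secret_access_key: "<%= ENV['AWS_SECRET_ACCESS_KEY'] %>"
  gzip_compression: false  # leave false so no Content-Encoding: gzip is set on upload

production:
  <<: *defaults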
The only related code I could see comes from either setting gzip in the asset_sync config or having .gz extensions on files (see: https://github.com/rumblelabs/asset_sync/blob/3c966cd7f0cc4471459638a8772b0fda65603391/lib/asset_sync/storage.rb#L164). Is either of those the case for you? Do you have a self-contained reproduction case that I could look at? Thanks!
Related
I have an Express container that serves static files, and it works perfectly when built and deployed locally. However, when I build and deploy it to Cloud Run, the dynamic HTML is returned but static assets like CSS files return 404. Are there any known limitations with Cloud Run that may be contributing to this difficult-to-diagnose issue?
Searching the problem, I found this issue and realized the correct answer is easy to miss because it's in the last comment on SO.
Google Cloud uses a file called .gcloudignore to decide which files to skip while uploading.
If that file does not exist, .git and .gitignore (plus everything listed in your .gitignore) are excluded by default, and typically some built assets will be in your .gitignore.
https://cloud.google.com/sdk/gcloud/reference/topic/gcloudignore
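A minimal .gcloudignore sketch that keeps static assets in the upload (the directory names are illustrative):

# .gcloudignore: gcloud skips only what is listed here,
# instead of falling back to .gitignore
.git
.gitignore
node_modules/
# the built assets (e.g. public/) are intentionally NOT listed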
I would like to download the PhantomJS binary within a Gradle build script.
Where are they hosted?
There is an official download repository for PhantomJS: https://bitbucket.org/ariya/phantomjs/downloads/
No.
They are only hosted on Bitbucket: https://bitbucket.org/ariya/phantomjs/downloads/
To use them with Gradle, the only option I have found to work so far is the gradle-download-task plugin. To implement caching, you can copy-paste code along the lines of the sketch below.
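For illustration, a sketch using gradle-download-task; the plugin version, PhantomJS version, and paths are placeholders, and overwrite false acts as a crude cache by skipping the download when the file is already present:

import de.undercouch.gradle.tasks.download.Download

plugins {
    id 'de.undercouch.download' version '4.1.2'
}

task downloadPhantomJs(type: Download) {
    src 'https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.1.1-linux-x86_64.tar.bz2'
    dest new File(buildDir, 'phantomjs-2.1.1-linux-x86_64.tar.bz2')
    overwrite false  // skip the download if the file already exists
}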
Otherwise, another potential option is to declare a custom Ivy dependency, but so far I haven't found a way to get it to work. The issue is that Bitbucket redirects to an Amazon S3 location, and the HEAD request that Gradle first issues to get the file size fails, probably because of a request-signature problem.
I use dita-ot to render to PDF.
Recently, I upgraded from dita-ot 1.8.M2 to 2.5.1.
Updating my PDF plugins was quite a bit of work, but the only thing that I can't get to work properly is hyphenation.
I did it all as described on the Apache FOP website.
The relevant instruction in detail:
"Download the precompiled JAR from OFFO and place it either in the
{fop-dir}/lib directory, or in a directory of your choice (and append
the full path to the JAR to the environment variable
FOP_HYPHENATION_PATH)."
That is how it worked with dita-ot 1.8.M2, where the {fop-dir} was located in the "org.dita.pdf2" plugin.
Now, {fop-dir} is in the "org.dita.pdf2.fop" plugin. Maybe this is the reason why "fop-hyph.jar" is obviously not found by the process? But what about the environment variable?
Does anybody have a solution?
I found the solution myself: I just added the attribute <xsl:attribute name="hyphenate">true</xsl:attribute> to the attribute set common.block inside the attribute file commons-attr.xsl.
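In the customized commons-attr.xsl, the attribute set then looks roughly like this (only the hyphenate attribute is new; any existing attributes stay as they were):

<xsl:attribute-set name="common.block">
  <!-- existing attributes unchanged -->
  <xsl:attribute name="hyphenate">true</xsl:attribute>
</xsl:attribute-set>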
I found out that neither FOP nor the jar file was the cause when I compared a FO file generated with the old dita-ot (with hyphenation) to a FO file from the new dita-ot. What was missing was the hyphenate="true" attribute on each block.
Thanks for your patience!
I have to download all site content and then parse the downloaded folder for "*.pdf" files. I am downloading the site using wget -r --no-parent http://www.example.com/ but the problem is that sometimes a link looks like this:
http://www.foodmanufuture.eu/dpubs?f=K20
and the PDF is downloaded with the name "dpubs?f=K20", with no file extension; it does not look like "dpubs?f=K20.pdf". Is there a way to check how many PDF files I have in this folder?
Have you tried the --content-disposition flag? From the man page:
If this is set to on, experimental (not fully-functional) support for "Content-Disposition" headers is enabled. This can currently result in extra round-trips to the server for a "HEAD" request, and is known to suffer from a few bugs, which is why it is not currently enabled by default. This option is useful for some file-downloading CGI programs that use "Content-Disposition" headers to describe what the name of a downloaded file should be.
So it tries to ask the server for a filename. I tried it for the URL you gave and it seemed to work.
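So the full command would be along the lines of:

wget -r --no-parent --content-disposition http://www.example.com/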
You could use the command
file filename
Like this:
file pdfurl-guide
pdfurl-guide: PDF document, version 1.5
You could use:
file *
to know exactly which files in your folder are PDF files.
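And to answer the counting part of the question, you can combine it with grep; "PDF document" is the string that file prints for PDFs:

file * | grep -c 'PDF document'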
It looks like middleman s3_sync doesn't upload my robots.txt. Is there a way to make it always upload a specific file?
It depends on the version of Middleman S3_Sync that you are using.
Versions 3.0.x build the list of files based on the content of the build directory. In that case, copying the file into the build directory will include it in the sync.
Versions 3.3.x moved to the Middleman sitemap in preparation for MM 4. It currently only syncs the files that Middleman is aware of. Copying a file into the build directory doesn't make S3_Sync aware of it.
In the second case, there are two options available.
The first one is to move robots.txt to the source directory. This will include it in the sitemap and it will be synced.
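For example, assuming the default source layout:

mv robots.txt source/robots.txt
middleman build  # robots.txt is now part of the sitemap and ends up in build/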
The second is to open an issue (or, even better, a pull request) asking for the ability to include files that originate from outside of the source directory.
It would help to get the version of Middleman and s3_sync that you are using.