How to Enable Gzip Compression for SVG Images on Google Cloud Storage?

Checking my website with Google Pagespeed Insights, I get the following warning:
Enable compression
Compressing resources with gzip or deflate can reduce the number of bytes sent over the network. Enable compression for the following resources to reduce their transfer size by 2.9KiB (56% reduction).
Compressing https://MY_BUCKET.storage.googleapis.com/logo.svg could save 561B (49% reduction).
It seems Google Cloud Storage does not have gzip enabled for SVGs.
How can I enable gzip compression for the SVG file type as well?

It turns out you have to compress the SVG manually:
gzip -9 -S 'z' *.svg
and then upload it with the Content-Encoding header set:
gsutil -h "Content-Encoding:gzip" -h "Content-Type:image/svg+xml" cp logo.svgz gs://MY_BUCKET/logo.svgz
Source: https://cloud.google.com/storage/docs/gsutil/addlhelp/WorkingWithObjectMetadata#content-encoding
UPDATE:
This combines the two commands above, as mentioned by @jterrace:
gsutil -h "Content-Encoding:gzip" -h "Content-Type:image/svg+xml" cp -Z logo.svg gs://MY_BUCKET/logo.svg
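To confirm the object really is stored and served with the expected metadata, you can inspect it with gsutil stat and request it with curl. This is just a quick sanity check, assuming the same bucket and object name as above:
# Inspect the stored metadata; Content-Type and Content-Encoding should both be set
gsutil stat gs://MY_BUCKET/logo.svg
# Fetch only the response headers the way a gzip-capable browser would;
# the response should include "Content-Encoding: gzip"
curl -sI -H "Accept-Encoding: gzip" \
  https://MY_BUCKET.storage.googleapis.com/logo.svg | grep -iE 'content-(type|encoding)'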

Related

Can I download a file from a private bucket using wget or curl? I only have access and secret key credentials

I have a private bucket that contains a single zip file. I need to download it, but I can't use the AWS CLI. Can I do it using wget or curl?
Yes. In theory you can make an HTTP request using the Amazon S3 REST API, but for that to work you need to authenticate correctly, which you can do, for example, with an Authorization header that you then pass to curl or wget.
The problem is that you need to write code to create the valid signatures that go into that Authorization header, so it can be a bit of a hassle.
I managed to do this with the script below. If you call it download.sh, have a bucket called my-bucket containing a file called file.zip, and have your AWS key environment variables set, you should be able to download the file by calling:
./download.sh file.zip my-bucket
I adapted this from a similar script for uploading a file that I found here:
#!/usr/bin/env bash
# Download a file from s3 without any 3rd party tools
# thanks https://www.gyanblog.com/aws/how-upload-aws-s3-curl/
file_path=$1
bucket=$2
set -euo pipefail
# about the file
filepath="/${bucket}/${file_path}"
# metadata
contentType="application/octet-stream"
dateValue=`date -R`
signature_string="GET\n\n${contentType}\n${dateValue}\n${filepath}"
#s3 keys
s3_access_key=$AWS_ACCESS_KEY_ID
s3_secret_key=$AWS_SECRET_ACCESS_KEY
#prepare signature hash to be sent in Authorization header
signature_hash=`echo -en ${signature_string} | openssl sha1 -hmac ${s3_secret_key} -binary | base64`
# actual curl command to do the GET operation on S3
curl -sSo ${file_path} \
-H "Host: ${bucket}.s3.amazonaws.com" \
-H "Date: ${dateValue}" \
-H "Content-Type: ${contentType}" \
-H "Authorization: AWS ${s3_access_key}:${signature_hash}" \
https://${bucket}.s3.amazonaws.com/${file_path}
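For completeness, a minimal invocation sketch; the credential values below are placeholders, and the script reads them from the environment as mentioned above.
# Example invocation (credential values are placeholders)
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
./download.sh file.zip my-bucket
Note that this signs the request with the older Signature Version 2 scheme, which newer S3 regions may not accept.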

How do I make fuseki serve gzipped content?

I started a Fuseki server with the command:
./fuseki-server --gzip=yes --update --loc=DB /dataset
Then, after posting some data, I tried to download the gzipped content with the command:
curl -X GET \
-H "Accept: application/x-gzip" \
-H "Accept-Encoding: gzip" \
http://localhost:3030/dataset
But the content was not gzipped. Do I need additional headers/configuration to make gzipping work?
Fuseki 2.4.0 does not support gzip encoding in the standalone server; the feature seems to have been lost at some point.
Recorded as:
https://issues.apache.org/jira/browse/JENA-1200
You can enable it if you use the Fuseki WAR file, by configuring Apache Tomcat, Eclipse Jetty, or another webapp container.
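Once the WAR is deployed in a container with compression turned on (in Tomcat, for instance, compression is normally enabled on the HTTP connector in server.xml), you can check whether responses are actually gzipped with curl. The endpoint path below is only an assumption about where the dataset ends up under the container:
# Request the dataset with gzip accepted and dump only the response headers;
# a compressed response will include "Content-Encoding: gzip"
# (http://localhost:8080/fuseki/dataset is an assumed deployment path)
curl -s -D - -o /dev/null \
  -H "Accept-Encoding: gzip" \
  http://localhost:8080/fuseki/dataset | grep -i content-encoding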

Set content-encoding to specific files via aws command

I deploy a static application with the AWS CLI.
I copy all the files from my folder to the S3 bucket with this command:
aws s3 sync C:\app s3://myBucket
I want to set Content-Encoding to gzip only for .js, .jpg, and .html files.
I managed to do it for the whole folder with the option --content-encoding gzip.
How can I do it only for those specific file types?
This is old, but I needed to find a way to do this: I'm not using CloudFront, and the proxy we are using doesn't handle gzip... so:
Exclude all files
Include individual file types as needed
Set the appropriate encoding/options
Below I'm also adding access control and cache-control, and deleting any files in S3 that are not present in the local directory.
I've separated the JS/CSS from the images and HTML, but that is probably not necessary.
I did, however, have a lot of trouble when I didn't explicitly set the content-encoding/cache options for each individual --include, so I've set them as below to make it clearer.
The AWS docs that I could find don't mention any of this.
aws s3 sync ./dist/ s3://{bucket} \
--exclude "*" \
--include "static/*.ico" --acl "public-read" --cache-control no-cache \
--include "static/*.png" --acl "public-read" --cache-control no-cache \
--include "*.html" --acl "public-read" --cache-control no-cache \
--include "static/img/*.svg" --acl "public-read" --cache-control no-cache \
--delete
aws s3 sync ./dist/ s3://{bucket} \
--exclude "*" \
--include "static/js/*.js" --acl "public-read" --cache-control no-cache --content-encoding gzip \
--include "static/css/*.css" --acl "public-read" --cache-control no-cache --content-encoding gzip \
--delete
A pretty neat little speed improvement for serving straight from S3.
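One thing worth spelling out: --content-encoding only sets metadata on the uploaded objects; aws s3 sync does not compress anything itself, so the JS/CSS files have to be gzipped before the sync runs. A minimal pre-compression sketch, assuming the same ./dist layout as above:
# Gzip JS/CSS in place, keeping the original file names so the URLs don't change
for f in ./dist/static/js/*.js ./dist/static/css/*.css; do
  gzip -9 "$f" && mv "$f.gz" "$f"
done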

How to transfer files from a remote server to my Amazon S3 instance?

I have about 15 GB of data in 5 files that I need to transfer to an Amazon S3 bucket. They are currently hosted on a remote server that I have no scripting or shell access to; I can only download them via an HTTP link.
How can I transfer these files to my Amazon S3 bucket without first having to download them to my local machine then re-upload them to S3?
If you want to automate the process, use an AWS SDK.
For example, with the AWS PHP SDK:
use Aws\Common\Aws;

$aws = Aws::factory('/path/to/your/config.php');
$s3  = $aws->get('S3');

$s3->putObject(array(
    'Bucket'     => 'your-bucket-name',
    'Key'        => 'your-object-key',
    'SourceFile' => '/path/to/your/file.ext'
));
More details:
http://blogs.aws.amazon.com/php/post/Tx9BDFNDYYU4VF/Transferring-Files-To-and-From-Amazon-S3
http://docs.aws.amazon.com/aws-sdk-php/guide/latest/service-s3.html
Given that you only have 5 files, use the S3 console's file uploader (http://console.aws.amazon.com/s3/home?region=us-east-1, then Actions, Upload) after you have downloaded the files to some intermediate machine. An EC2 instance running Windows might be the best solution, as the upload to S3 would be very fast. You can download Chrome onto your EC2 instance from chrome.google.com, or use the existing web browser (IE) to do the job.
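If the intermediate machine has the AWS CLI configured, another option is to stream each file straight from its HTTP link into S3 without writing it to disk first. The URL and bucket name below are placeholders:
# Stream the download directly into the bucket; the file is never stored locally
curl -sL https://remote.example.com/data/file1.dat | aws s3 cp - s3://your-bucket-name/file1.dat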
[1] SSH with keys
ssh-keygen -f ~/.ssh/id_rsa -q -P ""
cat ~/.ssh/id_rsa.pub
Place this SSH key into your ~/.ssh/authorized_keys file
mkdir ~/.ssh
chmod 0700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 0644 ~/.ssh/authorized_keys
[2] Snapshot ZFS, minimize transfer with LZMA, send with RSYNC
zfs snapshot zroot@150404-SNAPSHOT-ZROOT
zfs list -t snapshot
Compress to file with lzma (more effective than bzip2)
zfs send zroot@150404-SNAPSHOT-ZROOT | lzma -9 > /tmp/snapshots/zroot@150404-SNAPSHOT-ZROOT.lzma
rsync -avz -e "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" --progress --partial /tmp/snapshots/zroot@150404-SNAPSHOT-ZROOT.lzma <username>@<ip-address>:/
[3] Speedup transfer with MBUFFER, send with ZFS Send/Receive
Start the receiver first. This listens on port 9090, has a 1GB buffer, and uses 128kb chunks (same as zfs):
mbuffer -s 128k -m 1G -I 9090 | zfs receive zremote
Now we send the data, also sending it through mbuffer:
zfs send -i zroot@150404-SNAPSHOT-ZROOT zremote@150404-SNAPSHOT-ZROOT | mbuffer -s 128k -m 1G -O <ip-address>:9090
[4] Speedup transfer by only sending diff
zfs snapshot zroot@150404-SNAPSHOT-ZROOT
zfs snapshot zroot@150405-SNAPSHOT-ZROOT   # e.g. one day later
zfs send -i zroot@150404-SNAPSHOT-ZROOT zroot@150405-SNAPSHOT-ZROOT | zfs receive zremote/data
See also my notes

Livestream with crtmpserver - I can't find the live file

I use crtmpserver as my RTMP server. I publish the live stream to it with Adobe Flash Media Live Encoder 3.2, and that works. I receive the live stream with a web Flash player, and that also works.
Now I want to find the live stream's file on the server, but I can't find it. Which folder is the live stream stored in?
If you want to stream a .flv file:
Every crtmpserver application has a property called mediaFolder, which by default refers to the media folder:
mediaFolder="./media",
The streaming URL of the file is then:
rtmp://<server IP address>/<application name>/<file name>
If you want to stream a live stream:
When you define a stream acceptor in the acceptors section, you can specify the name of the stream with localStreamName, for example:
{
    ip="0.0.0.0",
    port=9005,
    protocol="inboundTcpTs",
    localStreamName="tcpchan5"
},
Then the URL of this stream is:
rtmp://<server IP address>/<application name>/tcpchan5
To receive the input stream and feed the RTMP server, you may use FFmpeg:
ffmpeg -i <input_stream> -vcodec libx264 -s 320x240 -vb 512k -async 1 -acodec libvo_aacenc -ab 32k -ac 1 -f mpegts tcp://<server IP address>:<server feed port>
For example:
ffmpeg -i udp://224.11.11.11:2000 -vcodec libx264 -s 320x240 -vb 512k -async 1 -acodec libvo_aacenc -ab 32k -ac 1 -f mpegts tcp://127.0.0.1:9000
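To quickly check that the stream is actually being served, you can also play it back with ffplay (which ships with FFmpeg); the placeholders follow the same URL format as above.
# Play the live stream directly to verify that the RTMP URL works
ffplay rtmp://<server IP address>/<application name>/tcpchan5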