Uploading large files to Amazon S3

I've managed to get the following script to work with smaller files, but when I try to upload files around 10 MB or more, it reports that the upload completed, yet the file never shows up in my S3 bucket.
Any ideas why it uploads smaller files but not files of 10 MB or greater?
<?php
//System path for our website folder
define('DOCROOT', realpath(dirname(__FILE__)).DIRECTORY_SEPARATOR);
//URL for our website
define('WEBROOT', htmlentities(
substr($_SERVER['REQUEST_URI'], 0, strcspn($_SERVER['REQUEST_URI'], "\n\r")),
ENT_QUOTES
));
//Which bucket are we placing our files into
$bucket = 'bucket.mysite.com';
// This will place uploads into the '20100920-234138' folder in the $bucket bucket
$folder = date('Ymd-His').'/'; //Include trailing /
//Include required S3 functions
require_once DOCROOT."includes/s3.php";
//Generate policy and signature
list($policy, $signature) = S3::get_policy_and_signature(array(
'bucket' => $bucket,
'folder' => $folder,
));
?>
<script type="text/javascript">
$(document).ready(function() {
$("#file_upload").uploadify({
'uploader' : '<?= WEBROOT ?>files/uploadify/uploadify.swf',
'buttonText' : 'Browse',
'cancelImg' : '<?= WEBROOT ?>files/uploadify/cancel.png',
'script' : 'http://s3.amazonaws.com/<?= $bucket ?>',
'scriptAccess' : 'always',
'method' : 'post',
'scriptData' : {
"AWSAccessKeyId" : "<?= S3::$AWS_ACCESS_KEY ?>",
"key" : "${filename}",
"acl" : "authenticated-read",
"policy" : "<?= $policy ?>",
"signature" : "<?= $signature ?>",
"success_action_status" : "201",
"key" : encodeURIComponent(encodeURIComponent("<?= $folder ?>${filename}")),
"fileext" : encodeURIComponent(encodeURIComponent("")),
"Filename" : encodeURIComponent(encodeURIComponent(""))
},
'fileExt' : '*.*',
'fileDataName' : 'file',
'simUploadLimit' : 2,
'multi' : true,
'auto' : true,
'onError' : function(errorObj, q, f, err) { console.log(err); },
'onComplete' : function(event, ID, file, response, data) { console.log(file); }
});
});
</script>
<?php
class S3 {
public static $AWS_ACCESS_KEY = '< Your access key >';
public static $AWS_SECRET_ACCESS_KEY = '< Your secret key >';
/*
* Purpose:
* Actionscript encodes '+' characters in the signature incorrectly - it makes
* them a space instead of %2B the way PHP does. This causes uploadify to error
* out on upload. This function recursively generates a new policy and signature
* until a signature without a + character is created.
* Accepts: array $data
* Returns: policy and signature
*/
public static function get_policy_and_signature( array $data )
{
$policy = self::get_policy_doc( $data );
$signature = self::get_signature( $policy );
if ( strpos($signature, '+') !== FALSE )
{
$data['timestamp'] = intval(@$data['timestamp']) + 1;
return self::get_policy_and_signature( $data );
}
return array($policy, $signature);
}
public static function get_policy_doc(array $data)
{
return base64_encode(
'{'.
'"expiration": "'.gmdate('Y-m-d\TH:i:s\Z', time()+60*60*24+intval(#$data['timestamp'])).'",'.
'"conditions": '.
'['.
'{"bucket": "'.$data['bucket'].'"},'.
'["starts-with", "$key", ""],'.
'{"acl": "authenticated-read"},'.
//'{"success_action_redirect": "'.$SWFSuccess_Redirect.'"},'.
'{"success_action_status": "201"},'.
'["starts-with","$key","'.str_replace('/', '\/', $data['folder'] ).'"],'.
'["starts-with","$Filename",""],'.
'["starts-with","$folder",""],'.
'["starts-with","$fileext",""],'.
'["content-length-range",0,5242880]'.
']'.
'}'
);
}
public static function get_signature( $policy_doc ) {
return base64_encode(hash_hmac(
'sha1', $policy_doc, self::$AWS_SECRET_ACCESS_KEY, true
));
}
}

Problem solved. The issue was with this line:
'["content-length-range",0,5242880]'
5242880 bytes is 5 MB, so the policy was rejecting anything larger. I commented it out and it now works as it should, with no limit on size.
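If you still want an upper bound rather than none at all, a safer fix is to raise the limit instead of removing the condition. A minimal sketch of the changed line in get_policy_doc() (the 100 MB figure is an arbitrary choice, not from the original post):
// allow uploads up to 100 MB (100 * 1024 * 1024 = 104857600 bytes)
'["content-length-range",0,104857600]'.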

Related

pion/laravel-chunk-upload Laravel not working with large files

I am using resumable.js and Laravel Chunk Upload for uploading large files. It works for small files but not for files larger than 500 MB: the file is split into chunks, but the chunks are never reassembled into the given directory.
Namespaces
use Illuminate\Http\Request;
use Illuminate\Http\UploadedFile;
use Pion\Laravel\ChunkUpload\Exceptions\UploadMissingFileException;
use Pion\Laravel\ChunkUpload\Handler\AbstractHandler;
use Pion\Laravel\ChunkUpload\Handler\HandlerFactory;
use Pion\Laravel\ChunkUpload\Receiver\FileReceiver;
use Illuminate\Support\Facades\Storage;
Controller
$receiver = new FileReceiver('file', $request, HandlerFactory::classFromRequest($request));
if (!$receiver->isUploaded()) {
// file not uploaded
}
$fileReceived = $receiver->receive(); // receive file
if ($fileReceived->isFinished()) { // file uploading is complete / all chunks are uploaded
$file = $fileReceived->getFile(); // get file
$extension = $file->getClientOriginalExtension();
$fileName = str_replace('.'.$extension, '', $file->getClientOriginalName()); // file name without extension
$fileName .= '_' . md5(time()) . '.' . $extension; // a unique file name
$disk = Storage::disk('new');
$path = $disk->putFileAs('resources/techpacks', $file, $fileName);
// delete chunked file
unlink($file->getPathname());
// return [
// 'path' => asset('storage/' . $path),
// 'filename' => $fileName
// ];
}
Resumable
let browseFile = $('#browseFile');
let resumable = new Resumable({
target: "{{ url('admin/techpack/insert') }}",
query: {_token: '{{ csrf_token() }}'}, // CSRF token
fileType: ['zip'],
chunkSize: 10*1024*1024, // default is 1*1024*1024, this should be less than your maximum limit in php.ini
headers: {
'Accept' : 'application/json'
},
testChunks: false,
throttleProgressCallbacks: 1,
});
resumable.assignBrowse(browseFile[0]);
resumable.on('fileAdded', function (file) { // trigger when file picked
showProgress();
resumable.upload() // to actually start uploading.
});
resumable.on('fileProgress', function (file) { // trigger when file progress update
updateProgress(Math.floor(file.progress() * 100));
});
resumable.on('fileSuccess', function (file, response) { // trigger when file upload complete
// response = JSON.parse(response)
console.log(response)
});
resumable.on('fileError', function (file, response) { // trigger when there is any error
alert('file uploading error.')
});
let progress = $('.progress');
function showProgress() {
progress.find('.progress-bar').css('width', '0%');
progress.find('.progress-bar').html('0%');
progress.find('.progress-bar').removeClass('bg-success');
progress.show();
}
function updateProgress(value) {
progress.find('.progress-bar').css('width', `${value}%`)
progress.find('.progress-bar').html(`${value}%`)
}
function hideProgress() {
progress.hide();
}
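Note that, as the chunkSize comment above says, each 10 MB chunk still arrives as a normal PHP upload, so it must fit within PHP's own limits. A quick server-side sanity check, as a sketch:
// upload_max_filesize and post_max_size must both be at least the chunk size (plus form overhead)
var_dump(ini_get('upload_max_filesize'), ini_get('post_max_size'));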
(Screenshots in the original post: the chunk folder, the result after re-building, the file added to the target folder, a Windows error, and a Resumable error.)
But I can't open the final file. Please guide me.

Laravel 5.8 rest client how to save api token in the .env

I'd like to store the token returned by the API in my .env file, so that afterwards I can pass it in a request header.
use GuzzleHttp\Client;

class GuzzleController extends Controller
{
public function getToken()
{
$client = new Client();
$request = $client->request('POST', 'http://192.168.53.27:1996/api/login/',
[
'form_params' => [
'user_name' => 'userName',
'password' => 'Passs',
]
]);
return json_decode((string)$request->getBody(), true);
}
}
The same question has been answered here; this method should save a new value to your .env file:
private function setEnvironmentValue($envKey, $envValue)
{
    $envFile = app()->environmentFilePath();
    $str = file_get_contents($envFile);
    $str .= "\n"; // in case the searched variable is on the last line without a trailing \n
    // Note: this assumes the key already exists in .env; strpos() returns false otherwise
    $keyPosition = strpos($str, "{$envKey}=");
    $endOfLinePosition = strpos($str, PHP_EOL, $keyPosition);
    $oldLine = substr($str, $keyPosition, $endOfLinePosition - $keyPosition);
    $str = str_replace($oldLine, "{$envKey}={$envValue}", $str);
    $str = substr($str, 0, -1); // drop the newline added above
    $fp = fopen($envFile, 'w');
    fwrite($fp, $str);
    fclose($fp);
}
Usage:
$this->setEnvironmentValue('DEPLOY_SERVER', 'forge@122.11.244.10');
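To then pass the stored token in a request header, something like this should work (a sketch; API_TOKEN is an assumed key name, and the endpoint is the one from the question):
$client = new Client();
$response = $client->request('GET', 'http://192.168.53.27:1996/api/resource', [
    // send the token from .env as a bearer token; adjust the scheme to whatever your API expects
    'headers' => ['Authorization' => 'Bearer ' . env('API_TOKEN')],
]);
Keep in mind that env() reads from the cached configuration when config caching is enabled, so run php artisan config:clear after rewriting .env.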

My file is not in the uploads directory after a successful upload

I'm trying to upload a file using Yii2's file upload. The file path is successfully saved to the database, but the file itself is not saved to the directory I specify. Below is my code.
<?php
namespace backend\models;
use yii\base\Model;
use yii\web\UploadedFile;
use yii\validators\FileValidator;
use Yii;
class UploadForm extends Model
{
/**
 * @var UploadedFile
 */
public $image;
public $randomCharacter;
public function rules(){
return[
[['image'], 'file', 'skipOnEmpty' => false, 'extensions' => 'png, jpg, jpeg'],
];
}
public function upload(){
$path = \Yii::getAlias("#backend/web/uploads/");
$randomString = "";
$length = 10;
$character = "QWERTYUIOPLKJHGFDSAZXCVBNMlkjhgfdsaqwertpoiuyzxcvbnm1098723456";
$randomString = substr(str_shuffle($character),0,$length);
$this->randomCharacter = $randomString;
if ($this->validate()){
$this->image->saveAs($path .$this->randomCharacter .'.'.$this->image->extension);
//$this->image->saveAs(\Yii::getAlias("#backend/web/uploads/{$randomString}.{$this->image->extension}"));
return true;
}else{
return false;
}
}
}
The controller to create the fileupload
namespace backend\controllers;
use Yii;
use backend\models\Product;
use backend\models\ProductSearch;
use yii\web\Controller;
use yii\web\NotFoundHttpException;
use yii\filters\VerbFilter;
use backend\models\UploadForm;
use yii\web\UploadedFile;
public function actionCreate()
{
$addd_at = time();
$model = new Product();
$upload = new UploadForm();
if($model->load(Yii::$app->request->post())){
//get instance of the uploaded file
$model->image = UploadedFile::getInstance($model, 'image');
$upload->upload();
$model->added_at = $addd_at;
$model->image = 'uploads/' .$upload->randomCharacter .'.'.$model->image->extension;
$model->save();
return $this->redirect(['view', 'product_id' => $model->product_id]);
} else{
return $this->render('create', [
'model' => $model,
]);
}
}
Does it throw any errors?
This is probably a permission issue. Try changing the "uploads" directory permissions to 777 (for testing only).
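A quick way to check, as a sketch (the alias matches the one used in UploadForm::upload()):
$path = \Yii::getAlias('@backend/web/uploads/');
// if either of these is false, the directory is missing or not writable by the web server user
var_dump(is_dir($path), is_writable($path));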
You load your Product ($model) with form data:
if($model->load(Yii::$app->request->post()))
But UploadForm ($upload) never gets filled in your script. Consequently, $upload->image will be empty.
Since you declare 'skipOnEmpty' => false in the file validator of the UploadForm rules, validation on $upload will fail.
That is why the if statement mentioned in the comments above (if($upload->upload())) doesn't save the $model data.
I don't see why you would need another model for this purpose. It only complicates things, so I assume it's because you copied it from a tutorial. To fix this and keep things simple, do the following:
Add property to Product model
public $image;
Add image rule to Product model
[['image'], 'file', 'skipOnEmpty' => false, 'extensions' => 'png, jpg, jpeg'],
Adjust controller create action
public function actionCreate()
{
$model = new Product();
if($model->load(Yii::$app->request->post()) && $model->validate()) {
// load image
$image = UploadedFile::getInstance($model, 'image');
// generate random filename
$rand = Yii::$app->security->generateRandomString(10);
// define upload path
$path = 'uploads/' . $rand . '.' . $image->extension;
// store image to server
$image->saveAs('@webroot/' . $path);
$model->added_at = time();
$model->image = $path;
if($model->save()) {
return $this->redirect(['view', 'product_id' => $model->product_id]);
}
} else {
return $this->render('create', [
'model' => $model,
]);
}
}
Something like this should do the trick.
Your UploadForm class already lives in the backend, so in the upload() function of the UploadForm class, change this line:
$path = \Yii::getAlias("@backend/web/uploads/");
to this:
$path = \Yii::getAlias("uploads")."/";

How to let the user choose the upload directory?

I have a form used to upload images in my blog engine. The files are uploaded to web/uploads, but I'd like to add a "choice" widget to let the users pick from a list of folders, for instance 'photos', 'cliparts', 'logos'.
Here's my form:
class ImageForm extends BaseForm
{
public function configure()
{
$this->widgetSchema->setNameFormat('image[%s]');
$this->setWidget('file', new sfWidgetFormInputFileEditable(
array(
'edit_mode'=>false,
'with_delete' => false,
'file_src' => '',
)
));
$this->setValidator('file', new mySfValidatorFile(
array(
'max_size' => 500000,
'mime_types' => 'web_images',
'path' => 'uploads',
'required' => true
)
));
$this->setWidget('folder', new sfWidgetFormChoice(array(
'expanded' => false,
'multiple' => false,
'choices' => array('photos', 'cliparts', 'logos')
)
));
$this->setValidator('folder', new sfValidatorChoice(array(
'choices' => array(0,1,2)
)));
}
}
and here is my action:
public function executeAjout(sfWebRequest $request)
{
$this->form = new ImageForm();
if ($request->isMethod('post'))
{
$this->form->bind(
$request->getParameter($this->form->getName()),
$request->getFiles($this->form->getName())
);
if ($this->form->isValid())
{
$this->form->getValue('file')->save();
$this->image = $this->form->getValue('file');
}
}
I'm using a custom file validator:
class mySfValidatorFile extends sfValidatorFile
{
protected function configure($options = array(), $messages = array())
{
parent::configure();
$this->addOption('validated_file_class', 'sfValidatedFileFab');
}
}
class sfValidatedFileFab extends sfValidatedFile
{
public function generateFilename()
{
return $this->getOriginalName();
}
}
So how do I tell the file upload widget to save the image in a different folder ?
You can concatenate the directory names you mentioned ('photos', 'cliparts', 'logos') to sf_upload_dir as the code below shows; you will need to create those directories, of course.
$this->validatorSchema['file'] = new sfValidatorFile(array(
    'path' => sfConfig::get('sf_upload_dir') . '/' . $path
));
Also, you can define those directories in the app.yml configuration file and fetch them by calling sfConfig::get().
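For instance, a minimal sketch (the upload_dirs key is an assumption, not something from the original answer):
// app.yml (illustrative):
//   all:
//     upload_dirs: [photos, cliparts, logos]
// symfony 1.x exposes app.yml values under the "app_" prefix:
$dirs = sfConfig::get('app_upload_dirs'); // array('photos', 'cliparts', 'logos')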
I got it to work with the following code:
public function executeAdd(sfWebRequest $request)
{
$this->form = new ImageForm();
if ($request->isMethod('post'))
{
$this->form->bind(
$request->getParameter($this->form->getName()),
$request->getFiles($this->form->getName())
);
if ($this->form->isValid())
{
// which folder is it?
switch($this->form->getValue('folder'))
{
case 0:
$this->folder = '/images/clipart/';
break;
case 1:
$this->folder = '/images/test/';
break;
case 2:
$this->folder = '/images/program/';
break;
case 3:
$this->folder = '/images/smilies/';
break;
}
$filename = $this->form->getValue('file')->getOriginalName();
$this->form->getValue('file')->save(sfConfig::get('sf_web_dir').$this->folder.$filename);
//path :
$this->image = $this->folder.$filename;
}
}
}

Is it possible to change headers on an S3 object without downloading the entire object?

I've uploaded a bunch of images to Amazon S3, and now want to add a Cache-Control header to them.
Can the header be updated without downloading the entire image? If so, how?
It's beta functionality, but you can specify new metadata when you copy an object. Specify the same source and destination for the copy, and this has the effect of just updating the metadata on your object.
PUT /myObject HTTP/1.1
Host: mybucket.s3.amazonaws.com
x-amz-copy-source: /mybucket/myObject
x-amz-metadata-directive: REPLACE
x-amz-meta-myKey: newValue
This is now out of beta and is available by doing a PUT command that copies the object, as documented here. It is also available in their SDKs. For example, with C#:
var s3Client = new AmazonS3Client("publicKey", "privateKey");
var copyRequest = new CopyObjectRequest()
.WithDirective(S3MetadataDirective.REPLACE)
.WithSourceBucket("bucketName")
.WithSourceKey("fileName")
.WithDestinationBucket("bucketName")
.WithDestinationKey("fileName)
.WithMetaData(new NameValueCollection { { "x-amz-meta-yourKey", "your-value }, { "x-amz-your-otherKey", "your-value" } });
var copyResponse = s3Client.CopyObject(copyRequest);
This is how you do it with AWS SDK for PHP 2:
<?php
require 'vendor/autoload.php';
use Aws\Common\Aws;
use Aws\S3\Enum\CannedAcl;
use Aws\S3\Exception\S3Exception;
const MONTH = 2592000;
// Instantiate an S3 client
$s3 = Aws::factory('config.php')->get('s3');
// Settings
$bucketName = 'example.com';
$objectKey = 'image.jpg';
$maxAge = MONTH;
$contentType = 'image/jpeg';
try {
$o = $s3->copyObject(array(
'Bucket' => $bucketName,
'Key' => $objectKey,
'CopySource' => $bucketName . '/'. $objectKey,
'MetadataDirective' => 'REPLACE',
'ACL' => CannedAcl::PUBLIC_READ,
'command.headers' => array(
'Cache-Control' => 'public,max-age=' . $maxAge,
'Content-Type' => $contentType
)
));
// print_r($o->ETag);
} catch (Exception $e) {
echo $objectKey . ': ' . $e->getMessage() . PHP_EOL;
}
?>
With the Amazon aws-sdk, doing a copy_object with extra headers seems to do the trick for setting cache-control headers on an existing S3 object:
<?php
error_reporting(-1);
require_once 'sdk.class.php';
// UPLOAD FILES TO S3
// Instantiate the AmazonS3 class
$options = array("key" => "aws-key" , "secret" => "aws-secret") ;
$s3 = new AmazonS3($options);
$bucket = "bucket.3mik.com" ;
$exists = $s3->if_bucket_exists($bucket);
if(!$exists) {
trigger_error("S3 bucket does not exists \n" , E_USER_ERROR);
}
$name = "cows-and-aliens.jpg" ;
echo " change headers for $name \n" ;
$source = array("bucket" => $bucket, "filename" => $name);
$dest = array("bucket" => $bucket, "filename" => $name);
//caching headers
$offset = 3600*24*365;
$expiresOn = gmdate('D, d M Y H:i:s \G\M\T', time() + $offset);
$headers = array('Expires' => $expiresOn, 'Cache-Control' => 'public, max-age=31536000');
$meta = array('acl' => AmazonS3::ACL_PUBLIC, 'headers' => $headers);
$response = $s3->copy_object($source,$dest,$meta);
if($response->isOk()){
printf("copy object done \n" );
}else {
printf("Error in copy object \n" );
}
?>
In Java, try this:
S3Object s3Object = amazonS3Client.getObject(bucketName, fileKey);
ObjectMetadata metadata = s3Object.getObjectMetadata();
Map<String, String> customMetaData = new HashMap<>();
customMetaData.put("yourKey", "updateValue");
customMetaData.put("otherKey", "newValue");
metadata.setUserMetadata(customMetaData);
amazonS3Client.putObject(new PutObjectRequest(bucketName, fileId, s3Object.getObjectContent(), metadata));
You can also use copyObject. Note that the existing metadata is not carried over when you replace it, so you have to read the original object's metadata and set it on the copy request. This approach is recommended for inserting or updating the metadata of an Amazon S3 object:
ObjectMetadata metadata = amazonS3Client.getObjectMetadata(bucketName, fileKey);
ObjectMetadata metadataCopy = new ObjectMetadata();
metadataCopy.addUserMetadata("yourKey", "updateValue");
metadataCopy.addUserMetadata("otherKey", "newValue");
metadataCopy.addUserMetadata("existingKey", metadata.getUserMetaDataOf("existingValue"));
CopyObjectRequest request = new CopyObjectRequest(bucketName, fileKey, bucketName, fileKey)
.withSourceBucketName(bucketName)
.withSourceKey(fileKey)
.withNewObjectMetadata(metadataCopy);
amazonS3Client.copyObject(request);
Here is some helper code in Python:
from boto.s3.connection import S3Connection

one_year = 3600*24*365
cckey = 'cache-control'
s3_connection = S3Connection()
bucket_name = 'my_bucket'
bucket = s3_connection.get_bucket(bucket_name, validate=False)
for key in bucket:
key_name = key.key
if key.size == 0: # continue on directories
continue
# Get key object
key = bucket.get_key(key_name)
if key.cache_control is not None:
print("Exists")
continue
cache_time = one_year
# set metadata
key.set_metadata(name=cckey, value = ('max-age=%d, public' % (cache_time)))
key.set_metadata(name='content-type', value = key.content_type)
# Copy the same key
key2 = key.copy(key.bucket.name, key.name, key.metadata, preserve_acl=True)
continue
Explanation: the code adds new metadata to the existing key and then copies the file onto itself, which makes S3 apply the new metadata.