I need to retrieve an image from S3 and post it to Twitter using Laravel. The image is uploaded with Vapor (Vue.js) and the URL is stored in the database. Example URL: https://rippleflare-s3-dev.s3.amazonaws.com/7f87e5a4-f157-483e-8dcb-59a735194747. The image is accessible via the browser, but when I try to get the image like this
Storage::disk('s3')->get($post->media_url)
where the media URL is https://rippleflare-s3-dev.s3.amazonaws.com/7f87e5a4-f157-483e-8dcb-59a735194747, I get the error
Illuminate \ Contracts \ Filesystem \ FileNotFoundException
https://rippleflare-s3-dev.s3.amazonaws.com/7f87e5a4-f157-483e-8dcb-59a735194747
My question: how do I retrieve a file from S3 in Laravel using the file URL (path)?
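For context, Storage::disk('s3')->get() expects an object key relative to the bucket root, not a full URL, which is why the call above throws FileNotFoundException. A minimal sketch of extracting the key from a stored URL (assuming, as in the example above, that the key is simply the URL path):

use Illuminate\Support\Facades\Storage;

// The stored value is a full URL; get() needs the key relative to the bucket root.
$url = $post->media_url; // e.g. https://rippleflare-s3-dev.s3.amazonaws.com/7f87e5a4-...
$key = ltrim(parse_url($url, PHP_URL_PATH), '/'); // "7f87e5a4-f157-483e-8dcb-59a735194747"
$contents = Storage::disk('s3')->get($key);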
You can also define an accessor on the model that builds the full URL from the stored object key, like this:
public function getMediaUrlAttribute()
{
    // Use the raw attribute here; $this->media_url would call this accessor recursively.
    return Storage::disk('s3')->url($this->attributes['media_url']);
}
This works when media_url stores only the object key; the accessor then gives you the full media URL.
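Usage is then plain attribute access, e.g. in a Blade view (the $post variable is a hypothetical example):

<img src="{{ $post->media_url }}" alt="Post media">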
I am using Strapi V4 as my CMS and using ECS S3 as the media storage.
I am using https://www.npmjs.com/package/@strapi/provider-upload-aws-s3 as the upload provider plugin.
I am able to upload media assets to the bucket but get error 403 forbidden when I try to GET the assets from the bucket.
I have done the necessary additions to plugins.js and middlewares.js files.
I am now trying to add a parameter "x-emc-namespace": "my-bucket-key" to the request header in the HTTPS call that Strapi API makes to the bucket.
I have tried the Strapi webhooks approach mentioned here but that didn't help in adding a parameter to the request header.
So, my question is how to add a parameter in the Strapi v4 request header.
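One possibility, sketched under two assumptions: that this version of the provider hands its options to an aws-sdk v2 S3 client, and that a global request listener is acceptable. aws-sdk v2 exposes AWS.events, whose 'build'-phase listeners run on every request before it is signed, so a header added there travels with the request. The env variable names below are illustrative, not mandated by Strapi:

// config/plugins.js — sketch assuming @strapi/provider-upload-aws-s3 wraps aws-sdk v2
const AWS = require('aws-sdk');

// Runs during the "build" phase of every aws-sdk request, before signing,
// so the extra header is included in the outgoing HTTPS call.
AWS.events.on('build', (request) => {
  request.httpRequest.headers['x-emc-namespace'] = 'my-bucket-key';
});

module.exports = ({ env }) => ({
  upload: {
    config: {
      provider: 'aws-s3',
      providerOptions: {
        accessKeyId: env('AWS_ACCESS_KEY_ID'),
        secretAccessKey: env('AWS_ACCESS_SECRET'),
        endpoint: env('AWS_ENDPOINT'), // the ECS S3 endpoint
        params: { Bucket: env('AWS_BUCKET') },
      },
    },
  },
});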
I'm trying to make a photo gallery app that gets its photos from an S3 bucket. The images are fetched through presigned urls. The problem is that when I use the Image component from Next, I get the following error: "url" parameter is valid but upstream response is invalid. I can't seem to find the problem. I already configured the domain in the next.config.js file.
I found the solution: Upgrade Next.js
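For reference, the remote host still has to be whitelisted for next/image to proxy it; a minimal next.config.js (the bucket host below is an assumed example) looks like:

// next.config.js — sketch; replace the host with your own bucket's
module.exports = {
  images: {
    domains: ['my-gallery-bucket.s3.amazonaws.com'],
  },
};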
I am trying to use the GitHub API documented at https://developer.github.com/v3/repos/contents to read the content of a file. According to the documentation, the endpoint is "GET /repos/:owner/:repo/contents/:path". So I am trying to figure out how to actually use this API.
I am using Postman to test it. For example, the owner name is "babybee" and the repo name is "test_repo"; what's the complete URL for this? I tried "/repos/babybee/test_repo/contents/release/release_info.json" and obviously it doesn't work. Let's say the file path is "release/release_info.json" under the root path: what would the actual, complete, working URL be?
For the owner babybee, repo test_repo, and file release/release_info.json, the complete URL is:
https://api.github.com/repos/babybee/test_repo/contents/release/release_info.json
Your attempt failed because the /repos/... path is only relative; the request has to go to the https://api.github.com host.
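Note that the response is JSON with the file body base64-encoded in its content field. A short sketch of calling and decoding it (Node.js 18+, where fetch is global):

// Fetch a file through the contents API and decode its base64 "content" field
async function getFileContents(owner, repo, path) {
  const res = await fetch(
    `https://api.github.com/repos/${owner}/${repo}/contents/${path}`,
    { headers: { Accept: 'application/vnd.github.v3+json' } }
  );
  const data = await res.json();
  return Buffer.from(data.content, 'base64').toString('utf8');
}

getFileContents('babybee', 'test_repo', 'release/release_info.json').then(console.log);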
I want to use the vue-pdf library, an implementation of pdf.js for Vue.js 2.x, in order to do the following:
Download a PDF from an oauth2 protected endpoint using axios
Render the PDF (octet-stream) using vue-pdf library
and the tricky parts are
Access protected resources
Render PDF that comes as octet stream
Currently there are no examples in the repo to showcase these.
After fiddling around with the library, I managed to render a PDF from a protected endpoint using the following approach (sketched in code after this list):
Make an axios request for the protected resource with the necessary auth header and responseType: 'blob'
Create an object URL from the downloaded blob with URL.createObjectURL
Set the blob URL in a data variable that is then used by the <pdf> component
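A minimal sketch of the component's script block (the endpoint URL and the token prop are assumptions, not part of vue-pdf):

// Vue 2 component; template would be: <pdf v-if="pdfUrl" :src="pdfUrl" />
import axios from 'axios';
import pdf from 'vue-pdf';

export default {
  components: { pdf },
  props: ['token'], // OAuth2 access token, supplied by the parent (assumption)
  data() {
    return { pdfUrl: null };
  },
  async created() {
    // Ask for a blob so axios hands back binary data instead of a string
    const response = await axios.get('/api/protected/report.pdf', {
      headers: { Authorization: `Bearer ${this.token}` },
      responseType: 'blob',
    });
    // The "magic" step: wrap the blob in an object URL that <pdf> can load
    this.pdfUrl = URL.createObjectURL(response.data);
  },
};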
I have created a Pull Request to the vue-pdf repository with a working example. In the PR, replace the URL of the axios request with a REST endpoint that returns an octet-stream and you should be all good.
I am using PhantomJS 1.9.7 to scrape a web page. I need to send the returned page content to S3. I am currently using the filesystem module included with PhantomJS to save to the local file system and using a php script to scan the directory and ship the files off to S3. I would like to completely bypass the local filesystem and send the files directly from PhantomJS to S3. I could not find a direct way to do this within PhantomJS.
I toyed with the idea of using the child_process module and passing in the content as an argument, like so:
var execFile = require("child_process").execFile;
var page = require('webpage').create();
var content = page.content;

// Arguments go in an array, not a single string
execFile('php', ['path/to/script.php', content], null, function (err, stdout, stderr) {
    console.log("execFileSTDOUT:", JSON.stringify(stdout));
    console.log("execFileSTDERR:", JSON.stringify(stderr));
});
which would call a php script directly to accomplish the upload. This will require using an additional process to call a CLI command. I am not comfortable with having another asynchronous process running. What I am looking for is a way to send the content directly to S3 from the PhantomJS script similar to what the filesystem module does with the local filesystem.
Any ideas as to how to accomplish this would be appreciated. Thanks!
You could just create and open another page and point it at your S3 service. Amazon S3 has a REST API and a SOAP API, and REST seems easier.
For SOAP you would have to build the request manually. The only problem might be a wrong content-type: it looks as if setting it is implemented, but I cannot find a reference in the documentation.
You could also create a form in the page context and send the file that way.
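A sketch of that in-page idea, using a synchronous XMLHttpRequest instead of a form (a form POST to S3 would additionally need a signed policy document). The presigned URL is an assumption here; it would have to be generated elsewhere, e.g. by the existing PHP side, and PhantomJS needs --web-security=no for the cross-domain request:

// Sketch: PhantomJS script that uploads page content straight to S3
var page = require('webpage').create();

page.open('http://example.com/target-page', function (status) {
    var content = page.content;
    page.evaluate(function (body, presignedUrl) {
        var xhr = new XMLHttpRequest();
        // Synchronous request so evaluate() does not return before the upload finishes
        xhr.open('PUT', presignedUrl, false);
        // The Content-Type must match the one used when the URL was signed
        xhr.setRequestHeader('Content-Type', 'text/html');
        xhr.send(body);
    }, content, 'https://your-bucket.s3.amazonaws.com/page.html?...signature...');
    phantom.exit();
});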