I implemented a solution in Blazor WebAssembly that allows the user to upload an image, store the file on the server, and store the file path in the database. When I load the image on the client, I send the image from the server to the client as bytes, transform the bytes into a base64 string, and display the image. This works well, but from what I read it is not good practice, because the base64 transformation increases the file size by about 30% and the images cannot be cached on the client. I would like to let the browser download the file from the server path and display the image. My questions are the following:
How do I set the server 'root' location so that the browser knows where to get the files? I'm using IWebHostEnvironment on the server to get the local path.
Is there any way to secure the files so that only authorised users can download them from the server?
Should I store the images in the same folder as the solution, or should I create a separate folder outside of the solution?
I'm currently using IIS with ASP.NET Core 5 in a Blazor WebAssembly solution.
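For reference, the client-side display currently works roughly like the sketch below (a simplified sketch of the base64 approach described above, not my exact code; the component wiring and names are illustrative):

@inject HttpClient Http

<img style="width:300px;height:300px" src="@imageSrc" />

@code {
    private string imageSrc;

    protected override async Task OnInitializedAsync()
    {
        // Fetch the stored file's bytes from the server and build a base64 data URI.
        var bytes = await Http.GetByteArrayAsync("GetFile?encoding=image/jpeg&name=3020_1_21.JPEG");
        imageSrc = $"data:image/jpeg;base64,{Convert.ToBase64String(bytes)}";
    }
}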
EDIT 1:
Controller method:
[HttpGet(API.GetFile)]
public async Task<FileContentResult> GetFile([FromQuery] string fileType, string encoding, string name)
{
    var path = Path.Combine(_webHost.ContentRootPath, "Media", "Images", name);
    byte[] file = await System.IO.File.ReadAllBytesAsync(path);
    return File(file, encoding);
}
Link used to access the file:
https://localhost:44339/GetFile?encoding=image/jpeg&name=3020_1_21.JPEG
Generated tag in the browser:
<img style="width:300px;height:300px" src="https://localhost:44339/GetFile?encoding=image/jpeg&name=3020_1_21.JPEG">
As I mentioned in the comments, the link works in Postman and retrieves the image, but it does not work in the browser; no file is retrieved.
I have created a Test folder on my Google Drive. I want to upload files to this folder using the hard-coded Google Drive URL. Anyone can access this folder, and anyone can add and delete files, because I have given full access to it. But the system throws the error "the remote server returned an error (400) bad request" on the UploadFile request. Below is my code. Please help to resolve the issue. Thanks
private void Upload(string fileName)
{
    var client = new WebClient();
    var uri = new Uri("https://drive.google.com/drive/folders/1yPkWZf03yhihjkejQYrhuS_SMxh9j8AP?usp=sharing");
    try
    {
        client.Encoding = Encoding.UTF8;
        client.UploadFile(uri, fileName);
        //client.Headers.Add("fileName", Path.GetFileName(fileName));
        //var data = File.ReadAllBytes(fileName);
        //client.UploadDataAsync(uri, data);
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}
This will never work. WebClient.UploadFile (like UploadDataAsync) performs a standard HTTP POST to the specified Uri, and your Uri isn't the kind that accepts an HTTP POST. Instead it serves Google Drive's web app and lets a user manage files in the folder, whether that is adding or removing files.
To add files to a Google Drive folder programmatically, use the Google Drive API.
Refer to: https://developers.google.com/drive/api/guides/folder
The guide shows how to create a folder and how to upload files to a specified folder.
This guide: https://developers.google.com/workspace/guides/get-started shows how to get started if you have never used any of Google's APIs.
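For completeness, here is a minimal sketch of such an upload, assuming the Google.Apis.Drive.v3 client library and an already-authorized DriveService (method and variable names are illustrative). Note that the target is identified by the folder's ID, not by its sharing URL:

// Minimal sketch: upload a local file into an existing Drive folder by folder ID.
// Assumes an already-authorized DriveService from the Google.Apis.Drive.v3 package.
using System.Collections.Generic;
using System.IO;
using Google.Apis.Drive.v3;

public static string UploadToFolder(DriveService service, string folderId, string filePath)
{
    var metadata = new Google.Apis.Drive.v3.Data.File
    {
        Name = Path.GetFileName(filePath),
        Parents = new List<string> { folderId } // e.g. "1yPkWZf03yhihjkejQYrhuS_SMxh9j8AP"
    };

    using (var stream = new FileStream(filePath, FileMode.Open, FileAccess.Read))
    {
        var request = service.Files.Create(metadata, stream, "application/octet-stream");
        request.Fields = "id";
        request.Upload(); // blocking; UploadAsync is also available
        return request.ResponseBody?.Id;
    }
}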
I've created a new ASP.NET Core web app with dotnet new webapp.
I've added the d3.js library to it; that part is fine.
When I try to run the calendar example, it tries to load a CSV file from the server. I've added this file to the wwwroot directory, but when the response comes back from the dev server, it is base64 encoded.
This causes the JSON parsing in d3 to break.
This is the GitHub repo of the project if you want to run and see the error.
Any ideas why Kestrel is encoding the file this way?
I'm not sure what d3.csv() does, but it could be that you just need to set the Content-Type properly. By default, Content-Type: application/octet-stream is used for file types that are unknown.
In your Configure method in Startup.cs, set a mapping for the .csv file type.
var provider = new FileExtensionContentTypeProvider();
// Add csv mapping
provider.Mappings[".csv"] = "text/csv"; // Try text/plain if text/csv doesn't work
app.UseStaticFiles(new StaticFileOptions
{
    ContentTypeProvider = provider
});
I have a SPA with a Redux client and an Express web API. One of the use cases is to upload a single file from the browser to the Express server. Express is using the multer middleware to decode the file upload and place it into an array on the req object. Everything works as expected when running on localhost.
However, when the app is deployed to AWS, it does not function as expected. Deployment pushes the Express API to an AWS Lambda function, and the Redux client's static assets are served by the CloudFront CDN. In that environment, the uploaded file does make it to the Express server, is handled by multer, and does end up as the first (and only) item in the req.files array where it is expected to be.
The problem is that the file contains the wrong bytes. For example when I upload a sample image that is 2795 bytes in length, the file that ends up being decoded by multer is 4903 bytes in length. Other images I have tried always end up becoming larger by approximately the same factor by the time multer decodes and puts them into the req.files array. As a result, the files are corrupted and are not displaying as images.
The file is uploaded like so:
<input type="file" name="files" onChange={this.onUploadFileSelected} />
...
onUploadFileSelected = (e) => {
const file = e.target.files[0]
var formData = new FormData()
formData.append("files", file)
axios.post('to the url', formData, { withCredentials: true })
.then(handleSuccessResponse).catch(handleFailResponse)
}
I have tried setting up multer with both MemoryStorage and DiskStorage. Both work, both on localhost and in the aws lambda, however both exhibit the same behavior -- the file is a larger size and corrupted in the store.
I have also tried setting up multer both as a global middleware (via app.use) and as a route-specific middleware on the upload route (via routes.post('the url', multerMiddleware, controller.uploadAction)). Again, both exhibit the same behavior. The multer middleware is configured like so:
const multerMiddleware = multer({/* optionally set dest: '/tmp' */})
.array('files')
One difference is that on localhost both the client and Express are served over http, whereas in AWS both are served over https. I don't believe this makes a difference, but I have not yet been able to test it, either by running localhost over https or by running AWS over http.
Another peculiar thing I noticed is that when the multer middleware is present, other middlewares do not seem to function as expected. Rather than next() moving flow down to the controller action, the other middlewares completely exit before the controller action is invoked, and when the controller invocation exits, control does not flow back into the middleware after the next() call. When the multer middleware is removed, the other middlewares function as expected. However, this observation is on localhost, where the entire end-to-end use case does function as expected.
What could be messing up the uploaded image file payload when deployed to the cloud, but not on localhost? Could it really be https making the difference?
Update 1
When I upload this file (11228 bytes), here is the HAR Chrome is giving me for the local (expected) file upload:
"postData": {
"mimeType": "multipart/form-data; boundary=----WebKitFormBoundaryC4EJZBZQum3qcnTL",
"text": "------WebKitFormBoundaryC4EJZBZQum3qcnTL\r\nContent-Disposition: form-data; name=\"files\"; filename=\"danludwig.png\"\r\nContent-Type: image/png\r\n\r\n\r\n------WebKitFormBoundaryC4EJZBZQum3qcnTL--\r\n"
}
Here is the HAR Chrome is giving me for the AWS (corrupted) file upload:
"postData": {
"mimeType": "multipart/form-data; boundary=----WebKitFormBoundaryoTlutFBxvC57UR10",
"text": "------WebKitFormBoundaryoTlutFBxvC57UR10\r\nContent-Disposition: form-data; name=\"files\"; filename=\"danludwig.png\"\r\nContent-Type: image/png\r\n\r\n\r\n------WebKitFormBoundaryoTlutFBxvC57UR10--\r\n"
}
The corrupted image file that is saved is 19369 bytes in length.
Update 2
I created a text file containing the text hello world, 11 bytes long, and uploaded it. It does NOT become corrupted in AWS. This is the case whether I upload it with a .txt or a .png suffix; it ends up as 11 bytes in length when persisted.
Update 3
I tried uploading a much larger text file (12132 bytes long) and had the same result as in update 2 -- the file is persisted intact, not corrupted.
Potential answers:
Found this https://forums.aws.amazon.com/thread.jspa?threadID=252327
API Gateway does not natively support multipart form data. It is possible to configure binary passthrough to then handle this multipart data in your integration (your backend integration or Lambda function).
It seems that you may need another approach if you are using API Gateway events in AWS to trigger the lambda that hosts your express server.
Or, you could configure API Gateway to work with binary payloads per https://stackoverflow.com/a/41770688/304832
Or, upload directly from your client to a signed S3 URL (or a public one) and use that to trigger another Lambda event.
Until we get a chance to try out different API Gateway settings, we found a temporary workaround: using FileReader to convert the file to a base64 text string and submitting that. The upload does not seem to have any issues as long as the payload is text.
I'm using node.js (express) on Heroku, where the slug size is limited to 300MB.
In order to keep my slug small, I'd like to use git-lfs to track my express' public folder.
That way all my assets (images, videos...) are uploaded to an LFS store (say AWS S3) and git-lfs leaves a pointer file (with, presumably, the S3 URL in it?).
I'd like express to redirect to the remote S3 file when serving files from the public folder.
My problem is I don't know how to retrieve the URL from the pointer file's content...
app.use('/public/:pointerfile', function (req, res, next) {
    var file = req.params.pointerfile;
    fs.readFile('public/' + file, function (er, data) {
        if (er) return next(er);
        var url = retrieveUrl(data); // <-- HELP ME HERE with the retrieveUrl function
        res.redirect(url);
    });
});
Also, won't it be too expensive to have express read and parse potentially every public/* file? Maybe I could cache the URL once it is parsed?
Actually the pointer file doesn't contain any URL information (as can be seen in the link you provided, or here); it just keeps the oid (object ID) of the blob, which is simply its sha256 hash.
You can, however, achieve what you're looking for using the oid and the LFS API, which allows you to download specific oids using the batch request.
You can tell which endpoint is used to store your blobs from .git/config, which can contain non-default lfsurl entries such as:
[remote "origin"]
url = https://...
fetch = +refs/heads/*:refs/remotes/origin/*
lfsurl = "https://..."
or a separate
[lfs]
    url = "https://..."
If there's no lfsurl tag then you're using GitHub's endpoint (which may in turn redirect to S3):
Git remote: https://git-server.com/user/repo.git
Git LFS endpoint: https://git-server.com/user/repo.git/info/lfs
Git remote: git@git-server.com:user/repo.git
Git LFS endpoint: https://git-server.com/user/repo.git/info/lfs
But you should work against it and not S3 directly, as GitHub's redirect response will probably contain some authentication information as well.
Check the batch response doc to see the response structure - you will basically need to parse the relevant parts and make your own call to retrieve the blobs (which is what git lfs would've done in your stead during checkout).
A typical response (taken from the doc I referenced) would look something like:
{
    "_links": {
        "download": {
            "href": "https://storage-server.com/OID",
            "header": {
                "Authorization": "Basic ..."
            }
        }
    }
}
So you would GET https://storage-server.com/OID with whatever headers were returned from the batch response. The last step is to rename the blob that was returned (its name will typically be just the oid, as git lfs uses checksum-based storage); the pointer file has the original resource's name, so just rename the blob to that.
I've finally made a middleware for this: express-lfs with a demo here: https://expresslfs.herokuapp.com
There you can download a 400 MB file as proof.
See usage here: https://github.com/goodenough/express-lfs#usage
PS: Thanks to @fundeldman for the good advice in his answer ;)
My requirement: I am working on a Windows 8 app, and I need to send a registration XML file to an MVC 4 application that acts as a service. The Windows 8 app acts as the client and the MVC 4 application acts as the server.
On the server there is already a registration XML file; I need to check in that file whether a username already exists. If it exists, I need to send back the message "username already exists, please choose another username." If not, I have to add an extra node to registration.xml on the server.
So I need code showing how to send an XML file as an object from the Windows 8 app, and how to receive/accept the XML file in the MVC 4 application.
Why do you want to send and receive xml?
Just send the user registration info to the server using standard HTTP POST. Server will check the local xml file, and insert the new user info or return a validation error to the client.
No need to send XML back and forth.
Also, IMHO an XML file is a poor choice for a server-side data store: the file has to be locked frequently to avoid concurrency problems, which will cause performance issues.
My suggestion is to get a free database engine, or even better, Windows Azure Mobile Services.
If you insist, you can pass your XML as a regular string to an MVC action:
public ActionResult Validate(string xmlContent)
{
    // Requires: using System.Xml.Linq;
    XDocument doc1 = XDocument.Parse(xmlContent);
    // Do your manipulation here, then return an appropriate result
    return new EmptyResult();
}
Here is a link on how to manipulate XML in .NET
As for sending the xml to the server from Windows Phone, I think this answer helps.
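For the client side, here is a minimal sketch of posting that string from the app with HttpClient; the URL and method name are placeholders I made up, not part of the original answer:

// Minimal sketch: POST the XML content as a form field named "xmlContent"
// so it binds to the MVC action's string parameter. The URL is a placeholder.
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static async Task<string> SendRegistrationXmlAsync(string xml)
{
    using (var client = new HttpClient())
    {
        var content = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            { "xmlContent", xml } // must match the action's parameter name
        });

        var response = await client.PostAsync("https://your-server/Home/Validate", content);
        return await response.Content.ReadAsStringAsync();
    }
}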