When I receive a file in C# as an IFormFile, how do I know the file path of that file?
And if there is no path, how do I set one?
public async Task<JObject> files(IFormFile files)
{
    string filePath = "";
    var fileStream = System.IO.File.OpenRead(filePath);
}
This document gives some guidance on IFormFile:
Files uploaded using the IFormFile technique are buffered in memory or on disk on the server before processing. Inside the action method, the IFormFile contents are accessible as a Stream.
So for an IFormFile, you need to save it locally first, and then you can use that local path.
For example:
// Uses Path.GetTempFileName to return a full path for a file, including the file name.
var filePath = Path.GetTempFileName();

using (var stream = System.IO.File.Create(filePath))
{
    // formFile is the method parameter whose type is IFormFile.
    // Saves the file to the local file system using a file name generated by the app.
    await formFile.CopyToAsync(stream);
}
After this, you can get the local file path, namely filePath.
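Putting the pieces together, a minimal sketch of your action method could look like this (the "files" parameter name and the JObject return type are kept only because the original signature used them; everything else is an assumption):
// A minimal sketch, assuming an ASP.NET Core controller action.
public async Task<JObject> Files(IFormFile files)
{
    // The uploaded content has no path of its own, so generate a temporary one.
    var filePath = Path.GetTempFileName();

    // Copy the buffered upload to the local file system.
    using (var stream = System.IO.File.Create(filePath))
    {
        await files.CopyToAsync(stream);
    }

    // Now the file can be reopened by path, as in the original code.
    using (var fileStream = System.IO.File.OpenRead(filePath))
    {
        // ... process the stream here ...
    }

    return new JObject { ["path"] = filePath };
}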
I'm attempting to cache XML files using .NET Core 5.0.
I'm adapting the example from this page:
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/change-tokens?view=aspnetcore-5.0
I have the following code, which successfully caches the file (it doesn't load from disk every time), but when the file changes, the file content is not re-cached.
public XmlDocument loadXML(string strFileName) {
    XmlDocument xml;

    // Try to obtain the file contents from the cache.
    if (_cache.TryGetValue(strFileName, out xml)) {
        return xml;
    }

    xml = this.createNewDocument();
    xml.Load(strFileName);

    if (xml != null) {
        var changeToken = _fileProvider.Watch(strFileName);
        var cacheEntryOptions = new MemoryCacheEntryOptions()
            .SetSlidingExpiration(TimeSpan.FromMinutes(5))
            .AddExpirationToken(changeToken);

        // Put the file contents into the cache.
        _cache.Set(strFileName, xml, cacheEntryOptions);
    }

    return xml;
}
I figured it out, hope this helps someone in the future.
I pass in a fully qualified file name like "d:\website\file.xml" as "strFileName". This is needed to properly load the XML.
However, the _fileProvider.Watch() method requires a path relative to the file provider's root, not a fully qualified file name, so in this case it needs to be "/file.xml".
So I convert "d:\website\file.xml" to "/file.xml" and call _fileProvider.Watch("/file.xml");
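For reference, a minimal sketch of that conversion could look like this (the _contentRoot field is illustrative, assumed to hold the same root the IFileProvider was created with, e.g. "d:\website"; it is not from the original code):
private IChangeToken WatchFile(string strFileName)
{
    // "d:\website\file.xml" -> "file.xml" -> "/file.xml"
    var relativePath = "/" + Path.GetRelativePath(_contentRoot, strFileName)
        .Replace(Path.DirectorySeparatorChar, '/');

    return _fileProvider.Watch(relativePath);
}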
I have a .NET Core API that I'd like to extend to save uploaded images asynchronously.
Using ImageSharp, I should be able to check uploads and resize them if predefined size limits are exceeded. However, I can't get a simple async save working.
A simple (non-async) save to file works without problem:
My controller extracts the IFormFile from the upload and calls the following method without any problem:
public static void Save(IFormFile image, string imagesFolder)
{
    var fileName = Path.Combine(imagesFolder, image.FileName);

    using (var stream = image.OpenReadStream())
    using (var imgIS = Image.Load(stream, out IImageFormat format))
    {
        imgIS.Save(fileName);
    }
}
ImageSharp is currently lacking async methods, so a workaround is necessary.
The updated code below saves the uploaded file, but the format is incorrect: when viewing the file I get the message "It appears we don't support this file format".
The format is extracted from the ImageSharp Load method and used when saving to the MemoryStream.
The MemoryStream's CopyToAsync method is used to copy to a FileStream to make the upload asynchronous.
public static async void Save(IFormFile image, string imagesFolder)
{
    var fileName = Path.Combine(imagesFolder, image.FileName);

    using (var stream = image.OpenReadStream())
    using (var imgIS = Image.Load(stream, out IImageFormat format))
    using (var memoryStream = new MemoryStream())
    using (var fileStream = new FileStream(fileName, FileMode.OpenOrCreate))
    {
        imgIS.Save(memoryStream, format);
        await memoryStream.CopyToAsync(fileStream).ConfigureAwait(false);
        fileStream.Flush();
        memoryStream.Close();
        fileStream.Close();
    }
}
I can't work out whether the issue is with ImageSharp Save to MemoryStream, or the MemoryStream.CopyToAsync.
I'm currently getting a 404 on the SixLabors docs; hopefully that's not an indication that the project has folded.
How can I make the upload async and save to file in the correct format?
CopyToAsync copies a stream starting at its current position. You must change the current position of memoryStream back to start before copying:
// ...
memoryStream.Seek(0, SeekOrigin.Begin);
await memoryStream.CopyToAsync(fileStream).ConfigureAwait(false);
// ...
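Applied to the method above, a sketch of the corrected save could look like this (the switch from async void to async Task is an extra assumption on my part so the caller can await it; it is not part of the fix itself):
public static async Task SaveAsync(IFormFile image, string imagesFolder)
{
    var fileName = Path.Combine(imagesFolder, image.FileName);

    using (var stream = image.OpenReadStream())
    using (var imgIS = Image.Load(stream, out IImageFormat format))
    using (var memoryStream = new MemoryStream())
    using (var fileStream = new FileStream(fileName, FileMode.OpenOrCreate))
    {
        imgIS.Save(memoryStream, format);

        // Rewind before copying, otherwise nothing (or a truncated file) is written.
        memoryStream.Seek(0, SeekOrigin.Begin);
        await memoryStream.CopyToAsync(fileStream).ConfigureAwait(false);
    }
}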
I have an MVC 4.5 application where I show a grid. The first column of the grid is a document name. The document name is a hyperlink to the actual document, which is hosted on our site and available via a URL. The documents can be PDF, DOC, or PPT. I can access these documents only via URL; I do not have access to the actual physical documents on our server.
I am providing users an option to select one or more of these documents from the grid and then download them. What I am trying to achieve is to read each of the selected documents via its URL, write it to a zip file, and make the zip file downloadable, so users download one file instead of many.
I have tried to stream the documents via URL into memory and then add them to the zip file using Microsoft's ZipArchive library. This is not working for me.
I was able to add documents that were on disk to a zip file using ZipArchive and it works great, but I do not have access to the physical documents since I can only reach them through a URL. My next option is to download each of these documents to a temp location on the server and then add them to the zip file using ZipArchive, but I am trying to avoid downloading files to a temp location.
Please suggest how I can read the documents via URL into memory, add each document to a zip file, and make the zip file downloadable.
Any help will be appreciated.
Thank you Cbroe for commenting. I figured out the answer. The problem was that I was reading the PDF from the URL, converting it to a memory stream, and then trying to add the memory stream to the ZipArchive, which was not working. Instead, I extracted the byte array out of the memory stream, added that to the zip archive, and it worked.
Here is the code snippet that might be useful for someone. My first contribution to Stack Overflow.
public FileResult DownloadZip()
{
    MemoryStream memoryStream = new MemoryStream();

    using (var archive = new ZipArchive(memoryStream, ZipArchiveMode.Create, true))
    {
        var demoFile = archive.CreateEntry("Pdf123.pdf");
        var convertedStream = ConvertTobyte("http://www.example.com/Pdf123.pdf");
        using (var entryStream = demoFile.Open())
        {
            entryStream.Write(convertedStream, 0, convertedStream.Length);
        }

        demoFile = archive.CreateEntry("Pdf456.pdf");
        convertedStream = ConvertTobyte("http://www.example.com/Pdf456.pdf");
        using (var entryStream = demoFile.Open())
        {
            entryStream.Write(convertedStream, 0, convertedStream.Length);
        }
    }

    // This option writes the zip to your local disk.
    using (var fileStream = new FileStream(@"C:\Temp\test.zip", FileMode.Create))
    {
        memoryStream.Seek(0, SeekOrigin.Begin);
        memoryStream.CopyTo(fileStream);
    }

    // This option downloads the zip via the browser.
    memoryStream.Seek(0, SeekOrigin.Begin);
    return new FileStreamResult(memoryStream, "application/zip")
    {
        FileDownloadName = "Archive.zip"
    };
}

private static byte[] ConvertTobyte(string fileUrl)
{
    byte[] imageData = null;

    using (var wc = new System.Net.WebClient())
        imageData = wc.DownloadData(fileUrl);

    return imageData;
}
Hi, I'm trying to upload a file to SharePoint 2010 using the client API, with metadata, and also check in the file after I'm done. Below is my code:
public void UploadDocument(SharePointFolder folder, String filename, Boolean overwrite)
{
    var fileInfo = new FileInfo(filename);
    var targetLocation = String.Format("{0}{1}{2}", folder.ServerRelativeUrl,
        Path.AltDirectorySeparatorChar, fileInfo.Name);

    using (var fs = new FileStream(filename, FileMode.Open))
    {
        SPFile.SaveBinaryDirect(mClientContext, targetLocation, fs, overwrite);
    }

    // doesn't work
    SPFile newFile = mRootWeb.GetFileByServerRelativeUrl(targetLocation);
    mClientContext.Load(newFile);
    mClientContext.ExecuteQuery();

    // check out to make sure not to create multiple versions
    newFile.CheckOut();

    // use OverwriteCheckIn type to make sure not to create multiple versions
    newFile.CheckIn("test", CheckinType.OverwriteCheckIn);
    mClientContext.Load(newFile);
    mClientContext.ExecuteQuery();

    //SPFile uploadFile = mRootWeb.GetFileByServerRelativeUrl(targetLocation);
    //uploadFile.CheckOut();
    //uploadFile.CheckIn("SOME VERSION COMMENT I'D LIKE TO ADD", CheckinType.OverwriteCheckIn);
    //mClientContext.ExecuteQuery();
}
I'm able to upload the file, but I can't add any metadata and the file stays checked out. I want to add some metadata and check in the file after I'm done.
My SharePointFolder class has the server-relative URL of the folder to upload to. Any help greatly appreciated.
You need to set credentials before calling ExecuteQuery() and SaveBinaryDirect().
For example:
mClientContext.Credentials = new NetworkCredential("LoginID","LoginPW", "LoginDomain");
SPFile newFile = mRootWeb.GetFileByServerRelativeUrl(targetLocation);
mClientContext.Load(newFile);
mClientContext.ExecuteQuery();
I am building a REST application. I want to upload a file and save it, for example, in /WEB-INF/resource/uploads.
How can I get the path to this directory? My controller looks like this:
@RequestMapping(value = "/admin/house/update", method = RequestMethod.POST)
public String updateHouse(House house, @RequestParam("file") MultipartFile file, Model model) {
    try {
        String fileName = null;
        InputStream inputStream = null;
        OutputStream outputStream = null;

        if (file.getSize() > 0) {
            inputStream = file.getInputStream();
            fileName = "D:/" + file.getOriginalFilename();
            outputStream = new FileOutputStream(fileName);

            int readBytes = 0;
            byte[] buffer = new byte[10000];
            while ((readBytes = inputStream.read(buffer, 0, 10000)) != -1) {
                outputStream.write(buffer, 0, readBytes);
            }

            outputStream.close();
            inputStream.close();
        }
    } catch (Exception ex) {
        ex.printStackTrace();
    }

    model.addAttribute("step", 3);
    this.houseDao.update(house);
    return "houseAdmin";
}
Second question: what is the best place to store uploaded user files?
/WEB-INF is a bad place to try to store file uploads. There's no guarantee that this is an actual directory on the disk, and even if it is, the appserver may forbid write access to it.
Where you should store your files depends on what you want to do with them and what operating system you're running on. My advice is to just pick somewhere outside of the webapp itself, perhaps a dedicated directory.
Also, the process of transferring the MultipartFile to another location is much simpler than you're making it out to be:
@RequestMapping(value = "/admin/house/update", method = RequestMethod.POST)
public String updateHouse(House house, @RequestParam("file") MultipartFile srcFile, Model model) throws IOException {
    File destFile = new File("/path/to/the/target/file");
    srcFile.transferTo(destFile); // easy!

    model.addAttribute("step", 3);
    this.houseDao.update(house);
    return "houseAdmin";
}
You shouldn't store files in /WEB-INF/resource/uploads. This directory is either inside your WAR (if packaged) or exploded somewhere inside the servlet container. The first destination is read-only, and the latter should not be used for user files.
There are usually two places considered when storing uploaded files:
A dedicated folder. Make sure users cannot access this directory directly (e.g. via an anonymous FTP folder). Note that once your application runs on more than one machine, each instance will only see its own local folder, so consider some form of network synchronization or a shared network drive.
A database. This is controversial since binary files tend to occupy a lot of space, but this approach is a bit simpler when distributing your application.