I have files uploaded to a SharePoint document library. I'm trying to use DotNetZip to get those files from the document library, zip them, and render the zip file.
Response.Clear();
Response.ContentType = "application/zip";
Response.AddHeader("content-disposition", "filename=" + "MyFiles.zip");

using (ZipFile zip = new ZipFile())
{
    // Query the SharePoint document library and get the SPFolder (userFolder in this case)
    foreach (SPFolder folder in userFolder.SubFolders)
    {
        foreach (SPFile file in folder.Files)
        {
            zip.AddFile(file.Url); // Is this possible?
        }
    }
    zip.Save(Response.OutputStream);
}
Can we pass a file URL to the AddFile method? If not, is there another way to do this?
The DotNetZip AddFile method does not accept URLs. It needs a relative or fully qualified file system path. See the documentation.
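Since the files live in the document library rather than on disk, one workaround is to read each file's content through the SharePoint object model and add it to the archive as an in-memory entry instead. A rough sketch, assuming the same userFolder as in the question (note that SPFile.OpenBinary() loads each file fully into memory, so this suits smaller files):

Response.Clear();
Response.ContentType = "application/zip";
Response.AddHeader("content-disposition", "attachment; filename=MyFiles.zip");

using (ZipFile zip = new ZipFile())
{
    foreach (SPFolder folder in userFolder.SubFolders)
    {
        foreach (SPFile file in folder.Files)
        {
            // OpenBinary() returns the file content as a byte[];
            // AddEntry stores it in the zip under the given entry name.
            zip.AddEntry(folder.Name + "/" + file.Name, file.OpenBinary());
        }
    }
    zip.Save(Response.OutputStream);
}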
Try the vZIP add-on; it works great on our SharePoint 2010.
I have a Blazor application (Blazor Server) with a side menu. When you click one of these menu items, a PDF file is opened (via an href) based on a specific privilege.
My question: what if someone changes the URL manually and replaces it with the file URL? How can I catch this, or prevent an unauthorized user from downloading the file?
It's better to create a controller to download the file, so you can control the download before it starts.
Something like:
<a href="/Files/Download">My File</a>
In this case the FilesController will have a Download method, and inside this method you can run the authorization checks.
public FileResult DownloadFile()
{
    // logic to allow/disallow users from downloading
    byte[] fileBytes = System.IO.File.ReadAllBytes(@"pathtofile"); // or any other source
    string fileName = "nameWithExtension";
    return File(fileBytes, System.Net.Mime.MediaTypeNames.Application.Octet, fileName);
}
A link to this controller method can then be presented to end users.
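As a rough sketch of what that authorization gate could look like (the role name, file path, and file name below are placeholders, not part of the original question):

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

public class FilesController : Controller
{
    [Authorize] // require an authenticated user at all
    public IActionResult Download()
    {
        // Hypothetical privilege check; replace with whatever rule guards the PDF.
        if (!User.IsInRole("ReportReaders"))
        {
            return Forbid(); // authenticated but not allowed to download
        }

        byte[] fileBytes = System.IO.File.ReadAllBytes(@"pathtofile"); // or any other source
        string fileName = "nameWithExtension";
        return File(fileBytes, System.Net.Mime.MediaTypeNames.Application.Octet, fileName);
    }
}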
I want my Uploads folder to reside on a fileshare.
Reason why I want this: Redundant frontend.
So instead of saving to:
C:\inetpub\wwwroot\wwwroot\Uploads
Uploads should be saved to:
\\fileshare01\MyWebsite\Uploads
I am aware that virtual directories exist. This works for reading from the fileshare, but writing to the Uploads directory still writes to the local drive.
So with a virtual directory I can access http://localhost/Uploads/myfile.png, which is actually on the fileshare, BUT new files are not written there!
Here is (simplified) how I save files:
IFormFileCollection files = Request.Form.Files;
var file = files.First();

using (var stream = new FileStream(@"Uploads\myfile.png", FileMode.Create))
{
    await file.CopyToAsync(stream);
}
When I try to save to the new network path as an absolute path, it seems I require higher permissions and end up with a 500.30 error. I guess the application pool user has too few permissions, which I think is a good thing.
My Question:
What is the good-practice way to solve this? Shouldn't everything, including writing, work automagically when configuring a virtual directory?
Solved it. I only got the 500.30 error because of an error in my appconfig.json: I didn't escape the backslashes in my base path (in JSON, \\fileshare01\MyWebsite\Uploads has to be written as \\\\fileshare01\\MyWebsite\\Uploads).
I found this blog post saying
There is no need to add a „virtual directory“ in IIS, this stuff is
deprecated
and explaining that this is the way it's done, via the Startup.cs Configure() method:
app.UseFileServer(new FileServerOptions
{
    FileProvider = new PhysicalFileProvider(@"\\server\path"),
    RequestPath = new PathString("/MyPath"),
    EnableDirectoryBrowsing = false
});
Another configuration mystery of ASP.NET Core solved :)
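Note that UseFileServer only covers serving the files back; the upload code still has to write to the UNC path itself (and the application pool identity needs write permission on the share). A minimal sketch of the writing side, assuming the base path comes from configuration under a made-up key and IConfiguration is injected as _configuration:

// "Uploads:BasePath" is a hypothetical key; in the JSON config its value would need
// escaped backslashes, e.g. "\\\\fileshare01\\MyWebsite\\Uploads".
string basePath = _configuration["Uploads:BasePath"];

IFormFileCollection files = Request.Form.Files;
var file = files.First();

// Strip any client-supplied directory part before combining with the share path.
string targetPath = Path.Combine(basePath, Path.GetFileName(file.FileName));
using (var stream = new FileStream(targetPath, FileMode.Create))
{
    await file.CopyToAsync(stream);
}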
I'm trying to make a program that downloads an .exe file and runs it, to help with my job.
But I don't know how to do this; I'm new to VB.
I am using this code, as shown in the Visual Basic documentation:
My.Computer.Network.DownloadFile _
    ("http://www.cohowinery.com/downloads/WineList.txt", _
     "C:\Documents and Settings\All Users\Documents\WineList.txt")
But when I try to download an .exe file, the download doesn't complete and the file is only 1 KB afterwards.
WebClient should be the way to go; a comment above highlights that too.
This is an example from another question:
Either use the synchronous method:
public void DownloadFile()
{
    using (var client = new WebClient())
    {
        client.DownloadFile(new Uri("http://www.FileServerFullOfFiles.net/download/test.exe"), "test.exe");
    }
}
Or use the newer async/await approach:
public async Task DownloadFileAsync()
{
    using (var client = new WebClient())
    {
        await client.DownloadFileTaskAsync(new Uri("http://www.FileServerFullOfFiles.net/download/test.exe"), "test.exe");
    }
}
Then call this method like this:
await DownloadFileAsync();
Open the .exe file you are trying to download in a text editor like Notepad. Odds are that what is being downloaded is an HTML page showing some kind of error message, like a 404 Not Found.
Another possibility is that antivirus software is moving the original EXE into quarantine and replacing it with a quarantine metadata file.
If the file does actually contain binary content, your connection could be getting interrupted, but if that happened an exception would most likely be thrown.
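One way to rule the error-page case in or out programmatically is to check the status code and content type before writing anything to disk. A rough sketch using HttpClient (the URL is just reused from the example above, not part of the original answer):

public async Task DownloadExeAsync()
{
    using (var client = new HttpClient())
    using (var response = await client.GetAsync("http://www.FileServerFullOfFiles.net/download/test.exe"))
    {
        // Throws on 404/500 etc. instead of silently saving an error page.
        response.EnsureSuccessStatusCode();

        // A text/html content type here would mean an error page, not the .exe.
        Console.WriteLine(response.Content.Headers.ContentType);

        using (var fileStream = File.Create("test.exe"))
        {
            await response.Content.CopyToAsync(fileStream);
        }
    }
}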
I am a newbie working on Eclipse RCP, trying to build an address book with data saved in XML files. When I run the project it can read and write the XML file, but when I export it as an RCP product it only reads the file and is not able to write to it.
I tried searching Google but couldn't find the relevant answers so I turned to SO.
Any suggestions??
Edit: This is the method where I read the file and try to write it back to the XML file:
public void writedata() {
    try {
        DocumentBuilderFactory builderFactory = DocumentBuilderFactory.newInstance();
        DocumentBuilder builder = builderFactory.newDocumentBuilder();

        Bundle bundle = Platform.getBundle(Activator.PLUGIN_ID);
        URL fileURL = bundle.getEntry("/xmlfiles/person.xml");
        InputStream inputStream = fileURL.openStream();
        Document xmlDocument = builder.parse(inputStream);
        ..................................................
        ..................................................
        ..................................................
        TransformerFactory transformerFactory = TransformerFactory.newInstance();
        Transformer transformer = transformerFactory.newTransformer();
        DOMSource source = new DOMSource(xmlDocument);

        Bundle bundle1 = Platform.getBundle(Activator.PLUGIN_ID);
        URL fileURL1 = bundle1.getEntry("/xmlfiles/person.xml");
        StreamResult result = new StreamResult(new File(FileLocator.resolve(fileURL1).getPath()));
        transformer.transform(source, result);
When you run your application from Eclipse it uses the expanded project folder for your plugin. Your XML file is writeable in this location.
When you export as an RCP application your plugin gets packaged up as a plugin jar file with the XML file inside it. You won't be able to write to this file.
For the file to be writeable it needs to be outside your plugin project, either in the RCP application workspace or in an external folder.
My application (MVC) needs to download, zip, and return one or more files from Amazon S3. I am using the .NET SDK and GetObject to retrieve the files, and I want to use DotNetZip to zip them up and return the generated zip file as a file stream result for the user to download.
Can anyone suggest the most efficient way of doing this? I am seeing OutOfMemory exceptions when downloading large files from S3; they could be up to 1 GB in size, for example.
My code so far:
using (
    var client = AWSClientFactory.CreateAmazonS3Client(
        "apikey",
        "apisecret",
        new AmazonS3Config { RegionEndpoint = RegionEndpoint.EUWest1 })
)
{
    foreach (var file in files)
    {
        var request = new GetObjectRequest { BucketName = "bucketname", Key = file };
        using (var response = client.GetObject(request))
        {
        }
    }
}
If I copy the response into a MemoryStream and add that to the zip, all works OK (on small files), but with large files I assume I cannot store the entire thing in memory?
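One approach that can avoid buffering whole objects is to register each entry with a write delegate, so DotNetZip pulls each S3 object only while it is writing that entry into the output stream. A rough sketch of that idea, assuming DotNetZip's AddEntry(entryName, WriteDelegate) overload and the same bucket name and keys as above (the zip file name is a placeholder):

Response.Clear();
Response.BufferOutput = false; // stream the zip as it is built instead of buffering it
Response.ContentType = "application/zip";
Response.AddHeader("content-disposition", "attachment; filename=Files.zip");

using (var client = AWSClientFactory.CreateAmazonS3Client(
    "apikey",
    "apisecret",
    new AmazonS3Config { RegionEndpoint = RegionEndpoint.EUWest1 }))
using (var zip = new ZipFile())
{
    foreach (var file in files)
    {
        // The delegate runs when zip.Save writes this entry, so only one
        // object's data is in flight at a time and nothing large is held in memory.
        zip.AddEntry(file, (entryName, entryStream) =>
        {
            var request = new GetObjectRequest { BucketName = "bucketname", Key = entryName };
            using (var response = client.GetObject(request))
            {
                response.ResponseStream.CopyTo(entryStream);
            }
        });
    }

    zip.Save(Response.OutputStream);
}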