I have a WCF service that allows clients to download files. Although a new service instance is created for every client request, if two clients try to download the same file at the same time, the first request to arrive locks the file until it is finished with it, so the other client ends up waiting for the first one to finish, as if there were not multiple service instances at all. There must be a way to avoid this.
Does anyone know how I can avoid this without keeping multiple copies of the file on the server's hard disk? Or am I doing something totally wrong?
This is the server-side code:
public Stream DownloadFile(string path)
{
System.IO.FileInfo fileInfo = new System.IO.FileInfo(path);
// check if exists
if (!fileInfo.Exists) throw new FileNotFoundException();
// open stream
System.IO.FileStream stream = new System.IO.FileStream(path, System.IO.FileMode.Open, System.IO.FileAccess.Read);
// return result
return stream;
}
This is the client-side code:
public void Download(string serverPath, string path)
{
Stream stream;
try
{
if (System.IO.File.Exists(path)) System.IO.File.Delete(path);
serviceStreamed = new ServiceStreamedClient("NetTcpBinding_IServiceStreamed");
SimpleResult<long> res = serviceStreamed.ReturnFileSize(serverPath);
if (!res.Success)
{
throw new Exception("File not found: \n" + serverPath);
}
// get stream from server
stream = serviceStreamed.DownloadFile(serverPath);
// write server stream to disk
using (System.IO.FileStream writeStream = new System.IO.FileStream(path, System.IO.FileMode.CreateNew, System.IO.FileAccess.Write))
{
int chunkSize = 1 * 48 * 1024;
byte[] buffer = new byte[chunkSize];
OnTransferStart(new TransferStartArgs());
do
{
// read bytes from input stream
int bytesRead = stream.Read(buffer, 0, chunkSize);
if (bytesRead == 0) break;
// write bytes to output stream
writeStream.Write(buffer, 0, bytesRead);
// report progress from time to time
OnProgressChanged(new ProgressChangedArgs(writeStream.Position));
} while (true);
writeStream.Close();
stream.Dispose();
}
}
catch (Exception ex)
{
throw ex;
}
finally
{
if (serviceStreamed.State == System.ServiceModel.CommunicationState.Opened)
{
serviceStreamed.Close();
}
OnTransferFinished(new TransferFinishedArgs());
}
}
I agree with Mr. Kjörling; it's hard to help without seeing exactly what you're doing. Since you're just downloading files from your server, why are you opening the file in read/write mode (causing the lock)? If you open it as read-only, it won't lock. Please don't vote this down if my suggestion is lacking, as it is only my interpretation of the problem without a lot of information.
Try this; it should enable two threads to read the file concurrently and independently:
System.IO.FileStream stream = new System.IO.FileStream(path, System.IO.FileMode.Open, System.IO.FileAccess.Read, System.IO.FileShare.Read);
Here's the warning that I am getting:
S3AbortableInputStream:Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior. Request only the bytes you need via a ranged GET or drain the input stream after use.
I tried using try-with-resources, but the S3ObjectInputStream doesn't seem to close via this method.
try (S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent();
BufferedReader reader = new BufferedReader(new InputStreamReader(s3ObjectInputStream, StandardCharsets.UTF_8));
){
//some code here blah blah blah
}
I also tried the code below, explicitly closing the streams, but that doesn't work either:
S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent();
try (BufferedReader reader = new BufferedReader(new InputStreamReader(s3ObjectInputStream, StandardCharsets.UTF_8));
){
//some code here blah blah
s3ObjectInputStream.close();
s3object.close();
}
Any help would be appreciated.
PS: I am only reading two lines of the file from S3 and the file has more data.
Got the answer via another medium; sharing it here:
The warning indicates that you called close() without reading the whole file. This is problematic because S3 is still trying to send the data and you're leaving the connection in a sad state.
There are two options here:
1) Read the rest of the data from the input stream so the connection can be reused.
2) Call s3ObjectInputStream.abort() to close the connection without reading the data. The connection won't be reused, so you take some performance hit with the next request to re-create the connection. This may be worth it if it's going to take a long time to read the rest of the file (see the sketch below).
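For option #2, a minimal sketch of the abort approach (reusing the s3Client, bucket and key names from the question's code) could look like this:
S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
try (S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent()) {
    byte[] buffer = new byte[4096];
    int bytesRead = s3ObjectInputStream.read(buffer); // read only the part you actually need
    // ... use the bytes ...
    // Abort instead of draining; the HTTP connection is discarded rather than reused (option #2).
    s3ObjectInputStream.abort();
}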
Following option #1 of Chirag Sejpal's answer, I used the statement below to drain the S3AbortableInputStream and ensure the connection can be reused:
com.amazonaws.util.IOUtils.drainInputStream(s3ObjectInputStream);
I ran into the same problem, and the following class helped me:
@Data
@AllArgsConstructor
public class S3ObjectClosable implements Closeable {
private final S3Object s3Object;
@Override
public void close() throws IOException {
s3Object.getObjectContent().abort();
s3Object.close();
}
}
and now you can use it without the warning:
try (final var s3ObjectClosable = new S3ObjectClosable(s3Client.getObject(bucket, key))) {
//same code
}
To add an example to Chirag Sejpal's answer (elaborating on option #1), the following can be used to read the rest of the data from the input stream before closing it:
S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
try (S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent()) {
try {
// Read from stream as necessary
} catch (Exception e) {
// Handle exceptions as necessary
} finally {
while (s3ObjectInputStream != null && s3ObjectInputStream.read() != -1) {
// Read the rest of the stream
}
}
// The stream will be closed automatically by the try-with-resources statement
}
I ran into the same error.
As others have pointed out, the /tmp space in Lambda is limited to 512 MB, and if the Lambda context is re-used for a new invocation, the /tmp space is already half-full.
So, when reading the S3 objects and writing all the files to the /tmp directory (as I was doing), I ran out of disk space somewhere in between.
The Lambda exited with an error, but NOT all bytes from the S3ObjectInputStream were read.
So, there are two things to keep in mind:
1) If the first execution causes the problem, be stingy with your /tmp space; we only have 512 MB.
2) If the second execution causes the problem, this can be resolved by attacking the root problem. It's not possible to delete the /tmp folder itself, so delete all the files in the /tmp folder after the execution is finished.
In Java, here is what I did, which successfully resolved the problem:
public String handleRequest(Map < String, String > keyValuePairs, Context lambdaContext) {
try {
// All work here
return "Success"; // placeholder so the happy path returns a value
} catch (Exception e) {
logger.error("Error {}", e.toString());
return "Error";
} finally {
deleteAllFilesInTmpDir();
}
}
private void deleteAllFilesInTmpDir() {
Path path = java.nio.file.Paths.get(File.separator, "tmp", File.separator);
try {
if (Files.exists(path)) {
deleteDir(path.toFile());
logger.info("Successfully cleaned up the tmp directory");
}
} catch (Exception ex) {
logger.error("Unable to clean up the tmp directory");
}
}
public void deleteDir(File dir) {
File[] files = dir.listFiles();
if (files != null) {
for (final File file: files) {
deleteDir(file);
}
}
dir.delete();
}
This is my solution; I'm using Spring Boot 2.4.3.
Create an Amazon S3 client:
AmazonS3 amazonS3Client = AmazonS3ClientBuilder
.standard()
.withRegion("your-region")
.withCredentials(
new AWSStaticCredentialsProvider(
new BasicAWSCredentials("your-access-key", "your-secret-access-key")))
.build();
Create an Amazon TransferManager client:
TransferManager transferManagerClient = TransferManagerBuilder.standard()
.withS3Client(amazonS3Client)
.build();
Create a temporary file under the system temp directory (e.g. /tmp/{your-s3-key}) so that we have somewhere to put the file we download:
File file = new File(System.getProperty("java.io.tmpdir"), "your-s3-key");
file.getParentFile().mkdirs(); // Create the directory of the temporary file (in case the key contains path separators)
try {
file.createNewFile(); // Create temporary file
} catch (IOException e) {
e.printStackTrace();
}
Then we download the file from S3 using the TransferManager client:
// Note that this call downloads the S3 object into the temporary file that we created
Download download = transferManagerClient.download(
new GetObjectRequest("your-s3-bucket-name", "your-s3-key"), file);
// This line blocks the thread until the download is finished
download.waitForCompletion();
Now that the S3 file has been successfully transferred into the temporary file we created, we can get an InputStream for it:
InputStream input = new DataInputStream(new FileInputStream(file));
Once we are done with the stream, the temporary file is not needed anymore, so we just delete it:
file.delete();
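One caveat: on some platforms (Windows, for example) deleting a file that still has an open stream will fail, so it is safer to delete the temporary file only after the stream has been closed. A minimal sketch, reusing the file variable from above:
try (InputStream input = new DataInputStream(new FileInputStream(file))) {
    // ... consume the stream here ...
} finally {
    file.delete(); // remove the temporary file once the stream has been closed
}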
I have a URL to an MP4 audio file that I need to send to a speech-to-text API. The API accepts only a WAV stream. I am using NAudio 1.7.3 and the following code to download the file and get the appropriate stream to send to the API:
string filePath = @"C:\Windows\Temp\file.wav";
using (MediaFoundationReader reader = new MediaFoundationReader(audioFileURL))
{
WaveFileWriter.CreateWaveFile(filePath, reader);
}
System.IO.FileStream fs = new FileStream(filePath, FileMode.Open);
Then I send the fs stream to the API and everything works correctly, although very slowly because of the I/O to and from disk.
I decided to rewrite this code to do everything in memory. For this purpose I wrote the following code (which does not give me a correct stream):
using (MediaFoundationReader reader = new MediaFoundationReader(audioLocation)){
MemoryStream ms = new MemoryStream();
IgnoreDisposeStream ids = new IgnoreDisposeStream(ms);
WaveFileWriter writer = new WaveFileWriter(ids, reader.WaveFormat);
//Doing one of the following (both provide the same outcome):
//1. reader.CopyTo(ids);
//or
//2. this code from NAudio source:
var buffer = new byte[reader.WaveFormat.AverageBytesPerSecond * 4];
while (true)
{
int bytesRead = reader.Read(buffer, 0, buffer.Length);
if (bytesRead == 0)
{
// end of source provider
break;
}
// Write will throw exception if WAV file becomes too large
writer.Write(buffer, 0, bytesRead);
}
writer.Dispose();
Stream streamToSendToAPI = ids.SourceStream;
//Send streamToSendToAPI to Speech-To-Text API
}
My expectation was that the second code example, where I create a stream with a WAV header and then add the data to it, would give me a valid WAV stream. However, when I send it to the speech-to-text API, the API returns an error indicating that the stream cannot be processed (meaning the stream is invalid).
Please advise how to fix the in-memory code example so that it creates a valid WAV stream.
You need to rewind the memory stream back to the beginning before handing it to the API:
ms.Position = 0;
I use the Apache Tika bundle dependency in a project to find out MIME types for files. Due to some issues we have to detect them from an InputStream, and the detectors are actually guaranteed to mark/reset the given InputStream. The Tika bundle includes the core and parser APIs and uses POIFSContainerDetector, ZipContainerDetector, OggDetector, MimeTypes and Magic for detection. I have been debugging for 3 hours, and all of the detectors do mark and reset the stream after detection. I did it in the following way:
TikaInputStream tis = null;
try {
TikaConfig config = new TikaConfig();
tikaDetector = config.getDetector();
tis = TikaInputStream.get(in);
MediaType mediaType = tikaDetector.detect(tis, new Metadata());
if (mediaType != null) {
String[] types = mediaType.toString().split(",");
for (int i = 0; i < types.length; i++) {
mimeTypes.add(new MimeType(types[i]));
}
}
} catch (Exception e) {
logger.error("Mime Type for given Stream could not be resolved: ", e);
}
But the stream is consumed. Does anyone know how to find out MIME types without consuming the stream?
This problem bugged me for a while too before I finally solved it. The problem is that, while Detector.detect() methods are required to mark and reset the stream, this resetting will have no effect on your original stream (the in variable) if marking is not supported in that stream.
In order to get this to work, I had to first convert my stream to a BufferedInputStream before doing anything else. I would then pass that buffered stream to the detect algorithm, and I would use that same buffered stream later for parsing, reading, or whatever I needed to do.
BufferedInputStream buffStream = new BufferedInputStream(in);
TikaInputStream tis = null;
try {
TikaConfig config = new TikaConfig();
tikaDetector = config.getDetector();
tis = TikaInputStream.get(buffStream);
MediaType mediaType = tikaDetector.detect(tis, new Metadata());
if (mediaType != null) {
String[] types = mediaType.toString().split(",");
for (int i = 0; i < types.length; i++) {
mimeTypes.add(new MimeType(types[i]));
}
}
} catch (Exception e) {
logger.error("Mime Type for given Stream could not be resolved: ", e);
}
// further along in my code...
doSomething(buffStream); // rather than doSomething(in)
I'm a bit new to the WinRT development platform, and it's already driving me crazy (I'm a long-time .NET developer, and all those removed APIs are quite annoying).
I'm experiencing a problem while zipping all the files present in Windows.Storage.ApplicationData.Current.TemporaryFolder.
Here is my current code (VB.NET, based on MSDN code; "file" is the zip file I'll put all my files into):
Using zipMemoryStream As New MemoryStream()
Using zipArchive As New Compression.ZipArchive(zipMemoryStream, Compression.ZipArchiveMode.Create)
For Each fileToCompress As Windows.Storage.StorageFile In (Await Windows.Storage.ApplicationData.Current.TemporaryFolder.GetFilesAsync())
Dim buffer As Byte() = WindowsRuntimeBufferExtensions.ToArray(Await Windows.Storage.FileIO.ReadBufferAsync(fileToCompress))
Dim entry As ZipArchiveEntry = zipArchive.CreateEntry(fileToCompress.Name)
Using entryStream As Stream = entry.Open()
Await entryStream.WriteAsync(buffer, 0, buffer.Length)
End Using
Next
End Using
Using zipStream As Windows.Storage.Streams.IRandomAccessStream = Await file.OpenAsync(Windows.Storage.FileAccessMode.ReadWrite)
Using outstream As Stream = zipStream.AsStreamForWrite()
Dim buffer As Byte() = zipMemoryStream.ToArray()
outstream.Write(buffer, 0, buffer.Length)
outstream.Flush()
End Using
End Using
End Using
It builds well, but when I run the code I get this exception:
UnauthorizedAccessException : Access denied. (Exception de HRESULT : 0x80070005 (E_ACCESSDENIED))
On line : WindowsRuntimeBufferExtensions.ToArray(blahblah...
I'm wondering what is wrong. Any ideas?
Thanks in advance!
I rewrote your method in C# to try it out:
var file = await ApplicationData.Current.LocalFolder.CreateFileAsync("test.zip");
using (var zipMemoryStream = new MemoryStream())
{
using (var zipArchive = new System.IO.Compression.ZipArchive(zipMemoryStream, System.IO.Compression.ZipArchiveMode.Create))
{
foreach (var fileToCompress in (await ApplicationData.Current.TemporaryFolder.GetFilesAsync()))
{
var buffer = WindowsRuntimeBufferExtensions.ToArray(await FileIO.ReadBufferAsync(fileToCompress));
var entry = zipArchive.CreateEntry(fileToCompress.Name);
using (var entryStream = entry.Open())
{
await entryStream.WriteAsync(buffer, 0, buffer.Length);
}
}
}
using ( var zipStream = await file.OpenAsync(Windows.Storage.FileAccessMode.ReadWrite))
{
using (var outstream = zipStream.AsStreamForWrite())
{
var buffer = zipMemoryStream.ToArray();
outstream.Write(buffer, 0, buffer.Length);
outstream.Flush();
}
}
}
It works flawlessly - it creates the zip file in the local folder as expected. Since you get the exception in the ToArray call, the reason could be that the file you're trying to open is already locked from somewhere else. If you are creating these files yourself, or even only accessing them, make sure you're closing the streams.
To test this method, you could manually create a folder inside the temp folder, put a couple of files in it, and then run the method on that folder (the files are in C:\Users\<Username>\AppData\Local\Packages\<PackageName>\TempState), just to exclude any other reason for the error.
I have an image of 5 KB. When I transform it into a Base64 string and upload it to my remote database, the remote INSERT query takes only a few seconds.
But with an image of 100 KB, when I transform it into a Base64 string and upload it, the remote INSERT query takes many seconds to execute.
Why?
Is it because the Base64 string needs about 100 KB of space, like the non-encoded image?
Is there a way to avoid these waiting times?
MORE INFO: I'm using PHP + JSON to connect to the remote MySQL database.
Oded suggested not using Base64, and using a BLOB rather than LONGTEXT. But how do I use a BLOB with JSON + PHP? As far as I know, JSON + PHP needs to send and receive strings, and a BLOB is not a string.
Thanks.
EDIT 2:
This is the code that spends a lot of time waiting (it waits on the line while ((line = reader.readLine()) != null) {, i.e. on reader.readLine()).
This code gets one user from the remote database; it takes a very long time to show the user in my app.
public Friend RetrieveOneUser(String email)
{
Friend friend=null;
String result = "";
//the parameter data to send
ArrayList<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>();
nameValuePairs.add(new BasicNameValuePair("email",email));
//http post
InputStream is=null;
try{
HttpClient httpclient = new DefaultHttpClient();
HttpPost httppost = new HttpPost(this.BaseURL + this.GetOneUser_URL);
httppost.setEntity(new UrlEncodedFormEntity(nameValuePairs));
HttpResponse response = httpclient.execute(httppost);
HttpEntity entity = response.getEntity();
is = entity.getContent();
}catch(Exception e){
Log.e("log_tag", "Error in http connection "+e.toString());
}
//convert response to string
try{
BufferedReader reader = new BufferedReader(new InputStreamReader(is,"iso-8859-1"),8);
StringBuilder sb = new StringBuilder();
String line = null;
while ((line = reader.readLine()) != null) {
sb.append(line + "\n");
}
is.close();
result=sb.toString();
}catch(Exception e){
Log.e("log_tag", "Error converting result "+e.toString());
}
//parse json data
try{
JSONArray jArray = new JSONArray(result);
for(int i=0;i<jArray.length();i++)
{
JSONObject json_data = jArray.getJSONObject(i);
friend=new Friend(json_data.getString("email"),json_data.getString("password"), json_data.getString("fullName"), json_data.getString("mobilePhone"), json_data.getString("mobileOperatingSystem"),"",json_data.getString("photo"));
}
}
catch(JSONException e){
Log.e("log_tag", "Error parsing data "+e.toString());
}
return friend;
}
Why not store the image directly as a BLOB?
All the conversion accomplishes is delays and extra CPU time.
Update:
Now that we know why base64 is required (since JSON can't transfer binary data), I amend my answer.
You need to check why this is taking a long time. Is it network transfer? Is it the database? Once you know the answer, we can start looking at a solution.
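If it helps, a rough way to see where the time goes is to time each phase of RetrieveOneUser separately. A minimal sketch (EntityUtils comes from the same Apache HttpClient library the question already uses; the variable names match the question's code):
long t0 = System.currentTimeMillis();
HttpResponse response = httpclient.execute(httppost);          // network: request + time to first byte
long t1 = System.currentTimeMillis();
String result = EntityUtils.toString(response.getEntity(), "iso-8859-1"); // network: reading the body
long t2 = System.currentTimeMillis();
JSONArray jArray = new JSONArray(result);                       // local: JSON parsing
long t3 = System.currentTimeMillis();
Log.d("timing", "execute=" + (t1 - t0) + " ms, read=" + (t2 - t1) + " ms, parse=" + (t3 - t2) + " ms");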
Base64 is a 6-bit encoding: it takes 4 characters (4 bytes) to transmit 3 bytes of the image, so storing a 100 KB image as Base64 takes up about 133 KB of space.
You haven't said which database you're using, but not all databases perform well if you store more than 8 KB per row.
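To make the overhead concrete, here is a tiny sketch (using java.util.Base64, available since Java 8) that encodes a 100 KB buffer and prints the resulting size:
byte[] image = new byte[100 * 1024]; // stand-in for a 100 KB image
String encoded = java.util.Base64.getEncoder().encodeToString(image);
System.out.println(image.length + " bytes -> " + encoded.length() + " characters");
// prints: 102400 bytes -> 136536 characters (about 133 KB)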