In my WP7 application I have downloaded 200 images from the web and saved them in isolated storage. While debugging, all the images are loaded into the panorama view by a queue method, and I can view them while the phone is connected to the PC. After disconnecting it from the PC, when I open the application and navigate through the images, it shows some of the images and then terminates.
// Runs inside a loop over the image index i.
if (i < 150)
{
    WebClient m_webClient = new WebClient();
    Uri m_uri = new Uri("http://d1mu9ule1cy7bp.cloudfront.net/2012//pages/p_" + i + "/mobile_high.jpg");
    m_webClient.OpenReadCompleted += new OpenReadCompletedEventHandler(webClient_OpenReadCompleted);
    m_webClient.OpenReadAsync(m_uri);
}
void webClient_OpenReadCompleted(object sender, OpenReadCompletedEventArgs e)
{
    int count;
    try
    {
        Stream stream = e.Result;
        byte[] buffer = new byte[1024];
        using (IsolatedStorageFile isf = IsolatedStorageFile.GetUserStoreForApplication())
        {
            //isf.Remove();
            // Copy the downloaded stream into isolated storage in 1 KB chunks.
            using (System.IO.IsolatedStorage.IsolatedStorageFileStream isfs = new IsolatedStorageFileStream("IMAGES" + loop2(k) + ".jpg", FileMode.Create, isf))
            {
                count = 0;
                while (0 < (count = stream.Read(buffer, 0, buffer.Length)))
                {
                    isfs.Write(buffer, 0, count);
                }
                stream.Close();
                isfs.Close();
            }
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.ToString());
    }
}
I think your problem is that when you load too many images at once in a loop, garbage collection of the bitmap images only happens once you leave the loop and give control back to the UI thread.
This article explains it a bit better and provides a solution.
I also had this problem and came up with my own solution. I had a dictionary of image URLs that needed to be loaded, but you can easily adapt this to your scenario.
This SO question is also about this problem (loading multiple images and crashing with an Exception). It also has Microsoft's response to it; I based my solution on their response.
In my solution I use the Dispatcher to return to the UI thread, thus making sure the garbage from the image and bitmap objects is cleaned up.
private void LoadImages(List<string> sources)
{
    List<string>.Enumerator iterator = sources.GetEnumerator();
    this.Dispatcher.BeginInvoke(() => { LoadImage(iterator); });
}

private void LoadImage(List<string>.Enumerator iterator)
{
    if (iterator.MoveNext())
    {
        //TODO: Load the image from iterator.Current

        //Now load the next image
        this.Dispatcher.BeginInvoke(() => { LoadImage(iterator); });
    }
    else
    {
        //Done loading images
    }
}
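In your scenario, iterator.Current could be the isolated-storage file name instead of a URL. As a rough illustration of what the TODO could look like in that case, here is a minimal sketch; the helper name, the file-name parameter, and the target Image control are assumptions, not part of the original code:

// Hypothetical helper, assuming the images were saved as "IMAGES<n>.jpg" as in the question.
// Requires System.IO, System.IO.IsolatedStorage, System.Windows.Controls and System.Windows.Media.Imaging.
private void LoadImageFromIsolatedStorage(string fileName, Image targetImage)
{
    using (IsolatedStorageFile isf = IsolatedStorageFile.GetUserStoreForApplication())
    using (IsolatedStorageFileStream isfs = isf.OpenFile(fileName, FileMode.Open, FileAccess.Read))
    {
        BitmapImage bitmap = new BitmapImage();
        bitmap.SetSource(isfs);        // decode the JPEG from the isolated storage stream
        targetImage.Source = bitmap;   // hand the decoded bitmap to the visual tree
    }
}

Because each call runs on the UI thread via the Dispatcher, the intermediate objects from the previous image get a chance to be collected before the next one is decoded.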
After talking on Skype I reviewed his code and found out his problem was with the Isolated Storage Explorer. It couldn't connect to his PC, so it gave an error. It had nothing to do with the image loading.
I'd be very wary of the memory implications of loading 200 images at once. Have you been profiling the memory usage? Using too much memory could cause your application to be terminated.
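If it helps with the profiling, a rough way to keep an eye on memory at runtime is the DeviceStatus class. This assumes Windows Phone 7.1 (Mango) and is just a logging sketch, not part of the original code:

// Sketch: log current, peak and allowed memory usage (Microsoft.Phone.Info namespace, WP7.1 APIs).
long current = Microsoft.Phone.Info.DeviceStatus.ApplicationCurrentMemoryUsage;
long peak = Microsoft.Phone.Info.DeviceStatus.ApplicationPeakMemoryUsage;
long limit = Microsoft.Phone.Info.DeviceStatus.ApplicationMemoryUsageLimit;
System.Diagnostics.Debug.WriteLine(
    string.Format("Memory: {0:N0} current, {1:N0} peak, {2:N0} limit", current, peak, limit));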
Related
I want to cover UI test cases with automation, and it was suggested that I compare screenshots.
So the algorithm is the following:
1. I take a screenshot of the page using the Selenium TakesScreenshot method and store it in the expected-results folder.
2. Then I run the test case, which takes a screenshot of the same page and compares it with the expected screenshot from step 1.
I was using the following method:
try {
    // take buffer data from both image files //
    BufferedImage biA = ImageIO.read(fileA);
    DataBuffer dbA = biA.getData().getDataBuffer();
    int sizeA = dbA.getSize();
    BufferedImage biB = ImageIO.read(fileB);
    DataBuffer dbB = biB.getData().getDataBuffer();
    int sizeB = dbB.getSize();
    // compare data-buffer objects //
    if (sizeA == sizeB) {
        for (int i = 0; i < sizeA; i++) {
            if (dbA.getElem(i) != dbB.getElem(i)) {
                return false;
            }
        }
        return true;
    } else {
        return false;
    }
} catch (Exception e) {
    LOGGER.error("Failed to compare image files");
    return false;
}
Last week it was working well, but today I ran the same test and it failed. I opened the image properties and saw that the screenshot made today is larger by 0.1 KB than the expected one.
I can't understand the reason.
Could it be related to Chrome's constant background updates? For example, the browser had some update during the weekend, so now the screenshot is slightly different and this way of comparing is wrong?
And if so, how can we do this? I tried the popular library aShot and it also tells me that the screenshots are different.
I have an older implementation using NAudio 1.6 to play a ring tone signalling an incoming call in an application. As soon as the user accepts the call, I stop the playback.
Basically the following is done:
1. As soon as I get an event that a call must be signalled, a timer is started.
2. Inside this timer, Play() is called on the player.
3. When the timer fires again, a check is performed to see whether the file has finished playing by comparing the CurrentTime property against the TotalTime property of the WaveStream.
4. When the user accepts the call, Stop() is called on the player and the timer is stopped as well.
The problem is that we sometimes run into cases where the playback keeps repeating although the timer has been stopped and Stop() was called on the player.
In the following link I read that the classes BufferedWaveProvider and WaveChannel32, which are used in the code, always pad the buffer with zeros.
http://mark-dot-net.blogspot.com/2011/05/naudio-and-playbackstopped-problem.html
Is it possible that the non-stopping playback is due to the usage of the classes BufferedWaveProvider and WaveChannel32?
In NAudio 1.7 there is the AudioFileReader class. Does this class also pad with zeros? I did not find a property like PadWithZeroes in this class. Does it make sense to use AudioFileReader in this case of looped playback?
Below is the code of the current implementation of TimerElapsed:
void TimerElapsed(object sender, ElapsedEventArgs e)
{
    try
    {
        WaveStream stream = _audioStream as WaveStream;
        if (stream != null && stream.CurrentTime >= stream.TotalTime)
        {
            StartPlayback();
        }
    }
    catch (Exception ex)
    {
        //do some actions here
    }
}
The following code creates the input stream:
private WaveStream CreateWavInputStream(string path)
{
    WaveStream readerStream = new WaveFileReader(path);
    if (readerStream.WaveFormat.Encoding != WaveFormatEncoding.Pcm)
    {
        readerStream = WaveFormatConversionStream.CreatePcmStream(readerStream);
        readerStream = new BlockAlignReductionStream(readerStream);
    }
    if (readerStream.WaveFormat.BitsPerSample != 16)
    {
        var format = new WaveFormat(readerStream.WaveFormat.SampleRate, 16, readerStream.WaveFormat.Channels);
        readerStream = new WaveFormatConversionStream(format, readerStream);
    }
    WaveChannel32 inputStream = new WaveChannel32(readerStream);
    return inputStream;
}
I'm using the UsbManager class to manage USB host mode on my Android 4.1.1 device.
Everything seems to work quite well for a few hundred transactions until (after ~900 transactions) opening the device fails, returning null without an exception.
Using a profiler, it doesn't seem to be a matter of a memory leak.
This is how I initialize the communication from my main activity (doing this once):
public class MainTestActivity extends Activity {

    private BroadcastReceiver m_UsbReceiver = null;
    private PendingIntent mPermissionIntent = null;
    UsbManager m_manager = null;
    DeviceFactory m_factory = null;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        mPermissionIntent = PendingIntent.getBroadcast(this, 0, new Intent(ACTION_USB_PERMISSION), 0);
        IntentFilter filter = new IntentFilter(ACTION_USB_PERMISSION);
        filter.addAction(UsbManager.ACTION_USB_DEVICE_DETACHED);

        m_UsbReceiver = new BroadcastReceiver() {
            public void onReceive(Context context, Intent intent) {
                String action = intent.getAction();
                if (UsbManager.ACTION_USB_DEVICE_DETACHED.equals(action)) {
                    UsbDevice device = (UsbDevice) intent.getParcelableExtra(UsbManager.EXTRA_DEVICE);
                    if (device != null) {
                        // call your method that cleans up and closes communication with the device
                        Log.v("BroadcastReceiver", "Device Detached");
                    }
                }
            }
        };
        registerReceiver(m_UsbReceiver, filter);

        m_manager = (UsbManager) getSystemService(Context.USB_SERVICE);
        m_factory = new DeviceFactory(this, mPermissionIntent);
    }
and this is the code of my test:
ArrayList<DeviceInterface> devList = m_factory.getDevicesList();
if (devList.size() > 0) {
    DeviceInterface devIf = devList.get(0);
    UsbDeviceConnection connection;
    try
    {
        connection = m_manager.openDevice(m_device);
    }
    catch (Exception e)
    {
        return null;
    }
}
The test works fine for 900 to 1,000 calls, and after that the following call returns null (without an exception):
UsbDeviceConnection connection;
try
{
    connection = m_manager.openDevice(m_device);
}
You might just be running out of file handles; a typical limit would be 1024 open files per process.
Try calling close() on the UsbDeviceConnection, see the documentation.
The UsbDeviceConnection object has allocated system resources - e.g. a file descriptor - which will be released only on garbage collection in your code. But in this case you run out of resources before you run out of memory, which means the garbage collector has not been invoked yet.
I had openDevice fail on repeated runs on Android 4.0 even though I open the device only once in my code. I had some exit paths that did not close the resources, and I had assumed the OS would free them on process termination.
However, there seems to be some issue with the release of resources on process termination - I used to have issues even when I terminated and launched a fresh process.
I finally ensured the release of resources on exit, and that made the problem go away.
I am confused by a problem with uploading blobs asynchronously; I hope to find an answer here.
Please take a look at my code snippet first.
public void UploadMultipleBlobs(List<string> filelocations, string containerName, AsyncCallback callback = null, string path = null)
{
    try
    {
        Parallel.ForEach(filelocations, fileLocation =>
        {
            //File to Stream
            MemoryStream str = new MemoryStream();
            byte[] file = File.ReadAllBytes(fileLocation);
            str.Write(file, 0, file.Length);
            str.Seek(0, SeekOrigin.Begin);

            //Operations
            if (callback == null)
                callback = new AsyncCallback(OnUploadCompleted);
            BlobRequestOptions blobRequestOptions = new BlobRequestOptions();
            blobRequestOptions.Timeout = new TimeSpan(1, 0, 0);
            blobRequestOptions.RetryPolicy = retry;
            CloudBlob currentBlob = container.GetBlobReference(blobName);
            var result = currentBlob.BeginUploadFromStream(str, blobRequestOptions, callback, new Object[] { currentBlob, str });
            currentBlob.EndUploadFromStream(result);
        });
    }
    catch
    {
        throw;
    }
}
private void OnUploadCompleted(IAsyncResult result)
{
    try
    {
        // Get array passed to callback
        Object[] states = (Object[])result.AsyncState;
        var blob = (CloudBlob)states[0];
        var stream = (MemoryStream)states[1];

        // End the operation
        //blob.EndUploadFromStream(result);

        // Close the stream
        stream.Close();
    }
    catch
    {
        throw;
    }
}
I need to upload multiple files to Azure blob storage; the number of files may be 10 to 50,000, and each file is about 10 KB-50 KB. The code snippet above currently works fine for me. However, if I call EndUploadFromStream in the callback, it always throws an exception when uploading over 2,000 files. That is, if I remove EndUploadFromStream from the upload method and call EndUploadFromStream in the callback (the OnUploadCompleted method) instead, the exception happens. The exception message is below:
Unable to read data from the transport connection: The connection was closed., StackTrace: at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.get_Result()
at Microsoft.WindowsAzure.StorageClient.CloudBlob.EndUploadFromStream(IAsyncResult asyncResult)
I don't know why this happens... I hope to get an answer from you guys.
Thanks.
The Begin/End code looks OK. I notice that you're not doing anything to wait for the asynchronous operations to complete, so the problem may be related to that. E.g., if you're running this from a console application, the application may exit before all the uploads have completed and then give you those errors. That would not be a problem if the EndUploadFromStream() call is inside the Parallel.ForEach(), because it will cause the Parallel.ForEach() call to block until all the uploads have completed.
So try adding code to wait for all the uploads to complete and see if that fixes it. A simple way would be a counter that is initialized to the total number of uploads and then decremented (using Interlocked.Decrement() for thread safety) inside the callback. Another option would be to use Task.Factory.FromAsync() to get an array of Task objects and then use Task.WaitAll() to wait for them to complete.
As an aside, using both Parallel.ForEach() and Begin/End methods at the same time is usually redundant - Begin/End is asynchronous already, so there's usually no point using multiple threads to invoke it. Since you have such a big list of items it might make some difference in this case, but probably not much. You're probably better off using a simple foreach loop instead of Parallel.ForEach() unless you've actually measured a significant difference.
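To illustrate that suggestion, here is a rough sketch of a plain foreach loop that wraps each Begin/End pair with Task.Factory.FromAsync() and then blocks until everything has finished. It assumes the same container object as in your code, derives the blob name from the file name (which may not match your real naming scheme), and skips the BlobRequestOptions and error handling for brevity.

// Sketch only (requires .NET 4: System.Threading.Tasks, System.Collections.Generic, System.IO).
var tasks = new List<Task>();
foreach (string fileLocation in filelocations)
{
    MemoryStream stream = new MemoryStream(File.ReadAllBytes(fileLocation));
    CloudBlob blob = container.GetBlobReference(Path.GetFileName(fileLocation)); // assumed naming scheme

    // Wrap the Begin/End pair in a Task; EndUploadFromStream is called for us when the upload finishes.
    Task upload = Task.Factory.FromAsync(
        (cb, state) => blob.BeginUploadFromStream(stream, cb, state),
        blob.EndUploadFromStream,
        null);
    tasks.Add(upload);

    // Dispose the stream once this particular upload has completed (whether it succeeded or faulted).
    upload.ContinueWith(t => stream.Dispose());
}

// Block until every upload has finished; faults surface here as an AggregateException.
Task.WaitAll(tasks.ToArray());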
Recently I have been working on an Eclipse plugin project with Eclipse RCP, and I encountered some issues with the Eclipse UI when I wanted to print a large number of messages to the plugin's console.
The messages come from a complex process, which can be thought of as a factory producing messages all the time and never stopping (until the client stops the process, of course).
When I printed messages before (and the output was short), I just needed to call org.eclipse.ui.console.MessageConsoleStream.println().
But this time, when I tried the same approach, the runtime Eclipse application (launched in debug mode) stopped responding and then reported an out-of-memory error.
It seems that Eclipse reads all the messages into memory and then prints them to the console in one go, so when the number of messages is large it runs out of memory.
My question is: what can I do if I want to print the messages line by line in the console?
My description may not be accurate. Below is the Java code:
public void print(Process p) {
    BufferedReader in = new BufferedReader(
            new InputStreamReader(p.getInputStream()), 1024);
    String line = "";
    try {
        while ((line = in.readLine()) != null) {
            // printing to the main console works fine
            System.out.println(line);
            // printing to the plugin console runs out of memory;
            // this is the function
            // org.eclipse.ui.console.MessageConsoleStream.println()
            println(line);
        }
        in.close();
        this.flush();
        this.close();
        p.destroy();
    }
    catch (IOException e) {
        e.printStackTrace();
    }
}
Then I tried writing to a file first and letting the MessageConsoleStream read from the file every 1,000 messages, but it behaves the same way.
public void print(Process p) {
    BufferedReader in = new BufferedReader(
            new InputStreamReader(p.getInputStream()), 1024);
    String line = "";
    char[] tem = new char[1024];
    int i = 0;
    try {
        File temp = File.createTempFile("temp", ".tep", new File("E:/"));
        FileWriter out = new FileWriter(temp);
        MessageConsoleStream mcs = null;
        while ((line = in.readLine()) != null) {
            if (i <= 1000) {
                System.out.println(line);
                out.write(line + "\n", 0, line.length() + 1);
                i++;
            }
            else {
                i = 0;
                out.flush();
                out.close();
                FileReader fr = new FileReader(temp);
                mcs = CConsole.getMessageStream("consoleName", "file name");
                while (fr.read(tem, 0, 1024) != -1) {
                    mcs.print(String.valueOf(tem));
                }
                mcs.flush();
                mcs.close();
                fr.close();
                out = new FileWriter(temp, false);
            }
        }
        if (i != 0) {
            mcs = CConsole.getMessageStream("consoleName", "file name");
            out.flush();
            out.close();
            FileReader fr = new FileReader(temp);
            while (fr.read(tem, 0, 1024) != -1) {
                mcs.print(String.valueOf(tem));
            }
            mcs.flush();
            mcs.close();
        }
        in.close();
        p.destroy();
    }
    catch (IOException e) {
        e.printStackTrace();
    }
}
All of the approaches above make Eclipse run out of memory when the number of messages exceeds 600,000 (at that point I stop the process, otherwise it would run out of memory).
It looks like Eclipse wants to print all of them at once rather than line by line, so it reads and reads until it runs out of memory.
By the way, I found a note in org.eclipse.ui.console.MessageConsoleStream:
Clients should avoid writing large amounts of output to this stream
in the UI thread. The console needs to process the output in the UI
thread and if the client hogs the UI thread writing output to the
console, the console will not be able to process the output.
That is not the real reason, is it?
I also notice that both CDT and JDT are fine when printing a large number of messages. How do they do it?
Thanks!
You have to use the flush() method every so often to write the MessageConsoleStream out to the console.
The flush() method is part of the IOConsoleOutputStream class, in the org.eclipse.ui.console package. The flush() method is not well documented, so I can see how you might have missed it.