NAudio WaveOut.Init takes a very long time (sometimes)

I've been using NAudio recently and for the most part I'm very happy with the library. However, I've been experiencing a very annoying intermittent issue that causes the Init method to take a very long time to execute (over 30 seconds).
Here's the code I'm using:
var waveFormat = WaveFormat.CreateIeeeFloatWaveFormat(44100, 2);
_wavePlayer = new WaveOutEvent();
_mixingSampleProvider = new MixingSampleProvider(waveFormat)
{
ReadFully = true
};
_wavePlayer.Init(_mixingSampleProvider); // program halts here
_wavePlayer.Play();
I've also tried using WaveOut instead of WaveOutEvent and I get the same problem.
I can reproduce this issue about one time in three, so it's not completely easy to reproduce, but it happens often enough to be very, very annoying.

EPPlus AutoFit() different column width on different machines

I am using EPPlus Version 4.1.0
I know this issue seems extremely weird but I have already wasted 2 days on this and any input is very very welcome!
I run the following code:
using (var package = new ExcelPackage())
{
ExcelWorksheet ws = package.Workbook.Worksheets.Add("Sheet1");
...
for (int i = ws.Dimension.Start.Column; i <= ws.Dimension.End.Column; i++)
{
ws.Column(i).AutoFit(0, 100);
ws.Column(i).Style.WrapText = ws.Column(i).Width > 60;
}
...
I run this code on several machines and AutoFit() always produces the same column width.
But on one machine (unfortunately my new laptop) the width is completely off (e.g. 33 instead of the expected 11).
Any clues how my machine setup could possibly affect this?
I hope somebody else can benefit from this, but as stated in the comments, it was actually the DPI settings of my new machine that caused this.
I have yet to find out whether this actually affects the reports themselves.
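For anyone checking whether the same thing is happening on their machine, here is a diagnostic sketch only (not a fix): as far as I can tell, EPPlus 4.x measures text with System.Drawing, so a machine with display scaling enabled reports a DPI other than 96, which changes the widths AutoFit() calculates:
using System;
using System.Drawing;

class DpiCheck
{
    static void Main()
    {
        // Values other than 96 mean display scaling (DPI) is in effect,
        // which is the machine-setup difference described above.
        using (var g = Graphics.FromHwnd(IntPtr.Zero))
        {
            Console.WriteLine("DpiX: " + g.DpiX + ", DpiY: " + g.DpiY);
        }
    }
}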

Getting "Can not convert the given object to query." with ColdFusion ORM

This is happening intermittently (usually at startup). I get the above error message when executing the following code.
var arr = ORMExecuteQuery( "FROM priority WHERE active = 1 ORDER BY sortOrder" );
var qry = entityToQuery( arr );
The first line executes fine, but the second line blows up. The solution is to run ormreload();
The problem keeps coming back in an unpredictable way, though, even when no changes have been made to the beans or gateways that use ORM. It's completely unpredictable and impossible to replicate on purpose. Is there something else that can mess with the Hibernate mappings and cause this type of problem?
Other info that may be pertinent:
This is a MURA plugin based on a recent version of FW/1.
ormreload() is a persistent fix (until it fails again)
My current solution is to put ormreload() in the setupApplication() method of application.cfc
I just want to understand better what could be causing this problem.
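For reference, the workaround currently in place looks roughly like this; it's only a sketch, and the component the application extends depends on the FW/1 version in use:
// Application.cfc (FW/1)
component extends="framework.one" {
    function setupApplication() {
        // rebuild the Hibernate mappings every time the application starts
        ormReload();
    }
}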

Pushing a chart to the client using Wt

I am using server push in Wt and I am trying to push a new chart with the following code:
Wt::WApplication::UpdateLock uiLock(app);
if (uiLock){
chart_ste = new ScatterPlotExample(this,10*asf.get_outputSamplingRate());
app->triggerUpdate();
}
but it waits for the program to end and only then renders the chart, whereas the following code in the same program pushes the word "Demokritus" every 0.5 seconds, as it should:
for (int i=0; i<10; i++)
{
boost::this_thread::sleep(boost::posix_time::milliseconds(500));
Wt::WApplication::UpdateLock uiLock(app);
if (uiLock) {
showFileName = new WText(this);
showFileName->setText(boost::lexical_cast<std::string>("Demokritus"));
app->triggerUpdate();
}
}
What might be my mistake?
The documentation for triggerUpdate mentions that "The update is not immediate, and thus changes that happen after this call will equally be pushed to the client." If the changes are not immediate, it could be that the first piece of code continuously tries to push updates as fast as your CPU will allow, so the update never actually goes out because a new update overwrites the last one and it begins waiting again. Try adding boost::this_thread::sleep(boost::posix_time::milliseconds(500)); to the first piece of code to see if that helps.
I've done a project once where I needed to update a chart every second with new data and had a very similar setup to yours. I put in the sleep from the start because I did not want my boost thread to use too much CPU.
Also, it is unclear whether the first piece of code is inside a bigger loop; if it is, you probably shouldn't create a new chart every time, but create it beforehand and then update it with data. I hope some of this helps.
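To make that suggestion concrete, the first snippet with the sleep added would look roughly like this (a sketch only, assuming it runs in the same worker thread as the "Demokritus" loop):
boost::this_thread::sleep(boost::posix_time::milliseconds(500));
Wt::WApplication::UpdateLock uiLock(app);
if (uiLock) {
    chart_ste = new ScatterPlotExample(this, 10 * asf.get_outputSamplingRate());
    app->triggerUpdate(); // push the new chart to the browser
}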

Error returned by Nuance DragonMobile text-to-speech when maximum number of transactions is reached

I'm about to release my iOS app that uses the Nuance Dragon Mobile SDK. I'm signed up for the "Silver" plan, which allows me 20 transactions per day.
My question is, does anyone know what error is returned by Nuance, when the limit is exceeded? I'm concerned, because I am filtering out:
error.code == 5 // Because this fires whenever I interrupt running speech
error.code == 1 // Because after interrupting speech, the first time I restart, it cuts off
// before finished, so I automatically start again, so as not to trouble the user to do so
I figure if Nuance returns an error different from these, I'll allow it to pass through, and be able to alert the user that they've reached their daily limit.
I think the following gives the possible errors:
extern NSString * const SKSpeechErrorDomain;
enum {
SKServerConnectionError = 1,
SKServerRetryError = 2,
SKRecognizerError = 3,
SKVocalizerError = 4,
SKCancelledError = 5,
};
It seems likely to me that it's the SKServerConnectionError that would be fired. In that case, I need to come up with a different strategy. If I could figure out what's going on with the restart issue I wouldn't have to filter out that error. Plus, when I automatically restart these false starts, I'm probably racking up my transaction count, which is unfortunate.
Anybody have experience with this aspect of the Nuance SDK for iOS?
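For what it's worth, the filtering described in the question, pulled into one place, looks roughly like this; it's a sketch only, handleSpeechError: would be called from whichever SpeechKit delegate callback reports the NSError, and alertDailyLimitReached: is a hypothetical helper:
- (void)handleSpeechError:(NSError *)error
{
    if (error.code == SKCancelledError) {
        return; // fires whenever running speech is interrupted
    }
    if (error.code == SKServerConnectionError) {
        return; // filtered for the restart workaround, but this may also mask the daily-limit error
    }
    // anything else passes through so the user can be alerted
    [self alertDailyLimitReached:error];
}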

Magento - fetching products and looping through is getting slower and slower

I'm fetching around 6k articles from the Magento database. Traversing them is very fast at the beginning (0 seconds, just a few milliseconds per iteration) but gets slower and slower. The loop takes about 8 hours to run, and by the end each iteration of the foreach takes about 16-20 seconds! It seems like MySQL is getting slower and slower towards the end, but I cannot explain why.
$product = Mage::getModel('catalog/product');
$data = $product->getCollection()->addAttributeToSelect('*')->addAttributeToFilter('type_id', 'simple');
$num_products = $product->getCollection()->count();
echo 'exporting '.$num_products."\n";
print "starting export\n";
$start_time = time();
foreach ($data as $tProduct) {
// doing some stuff, no sql !
}
Does anyone know why it is so slow? Would it be faster to fetch just the IDs and then load each product one by one (roughly as sketched below)?
The script running this code shows a constant memory usage of:
VIRT RES SHR S CPU% MEM%
680M 504M 8832 R 90.0 6.3
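The one-product-at-a-time variant I have in mind would look roughly like this (just a sketch, with the actual export logic omitted):
// fetch only the IDs first, then load each product individually
$ids = Mage::getModel('catalog/product')->getCollection()
    ->addAttributeToFilter('type_id', 'simple')
    ->getAllIds();

foreach ($ids as $id) {
    $tProduct = Mage::getModel('catalog/product')->load($id);
    // doing some stuff, no sql !
}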
Regards, Alex
Oh well, shot-in-the-dark time. If you are running Magento 1.4.x.x prior to 1.4.2.0, you have a memory leak that displays exactly this symptom: it eats up more and more memory, eventually leading to memory exhaustion. Profile exports that took 3-8 minutes under 1.3.x.x will now take 2-5 hours, if they don't throw an error before completion. Another symptom is exports that fail without finishing and without giving any indication of why; the window freezes or gives some sort of funky completion message with no output.
The Array Of Death(tm) has been noted and here's the official repair in the new version. Maybe Data Will Flow again!
Excerpt from 1.4.2.0rc1 /lib/Varien/Db/Select.php that has been patched for memory leak
public function __construct(Zend_Db_Adapter_Abstract $adapter)
{
parent::__construct($adapter);
if (!in_array(self::STRAIGHT_JOIN_ON, self::$_joinTypes)) {
self::$_joinTypes[] = self::STRAIGHT_JOIN_ON;
self::$_partsInit = array(self::STRAIGHT_JOIN => false) + self::$_partsInit;
}
}
Excerpt from 1.4.1.1 /lib/Varien/Db/Select.php with memory leak
public function __construct(Zend_Db_Adapter_Abstract $adapter)
{
parent::__construct($adapter);
self::$_joinTypes[] = self::STRAIGHT_JOIN_ON;
self::$_partsInit = array(self::STRAIGHT_JOIN => false) + self::$_partsInit;
}