I recently upgraded RavenDB from build 573 to 960. There is a noticeable slow-down when saving documents. The only change I made when upgrading was to add this line to Raven.Server.exe.config:
<add key="Raven/Authorization/Windows/RequiredUsers" value="d1\PrestoDatabaseUser;d2\userName"/>
Well, I also changed AnonymousAccess from All to Get.
<add key="Raven/AnonymousAccess" value="Get"/>
Is there a slowness issue with build 960?
Is there anything new to be done when upgrading to build 960, other than replacing the binaries?
Does authorization (as shown above) cause RavenDB to run more slowly?
Any other ideas?
Edit - This Worked
I just tried this (only the third line is new):
documentStore.ConnectionStringName = "RavenDb";
documentStore.Initialize();
documentStore.JsonRequestFactory.ConfigureRequest += (sender, e) => ((HttpWebRequest)e.Request).PreAuthenticate = true;
This shouldn't matter, no.
What has likely happened is that you are now actually authenticating when saving: with AnonymousAccess set to Get, reads stay anonymous, but every write has to authenticate, and by default each authenticated request pays for an extra 401 challenge/response round trip.
Use:
docStore.JsonRequestFactory.ConfigureRequest += (sender, e) => ((HttpWebRequest)e.Request).PreAuthenticate = true;
and see if that helps.
Related
I have added the simplest of indexes, like this:
public class MyDocType_ById : AbstractIndexCreationTask<MyDocType>
{
    public MyDocType_ById()
    {
        Map = myDocs => from mine in myDocs
                        select new
                        {
                            Id = mine.Id
                        };

        Index(x => x.Id, FieldIndexing.Analyzed);
    }
}
After compiling and hitting my site, I get the following exception:
[IndexCompilationException: Could not find file 'C:\dev\db\ravendb\CompiledIndexCache\-1836739954.u6DpcVRxqMXarC7cwg26Jg%3d%3d.nodebug.dll'.]
I am creating indexes at app startup using
IndexCreation.CreateIndexes(typeof(MyDocType_ById).Assembly, IoC.Container.GetInstance<IDocumentStore>());
In that folder a file named -1836739954.u6DpcVRxqMXarC7cwg26Jg%3d%3d.nodebug.dll.cs is created, but the .dll itself is not there.
Steps tried:
ensuring correct folder permissions on C:\dev\db\ravendb\CompiledIndexCache\
iisreset plus a restart of RavenDB (running as a Windows service)
renaming the index
saying "for [CURSE] sake" out loud in the office
Any ideas how to fix this?
Thorough googling led me to just one possible solution, found here (where Ayende also suggested checking folder permissions and ensuring no indexing service, antivirus, or backup tool is touching the folder): https://groups.google.com/forum/#!topic/ravendb/awh7eV00QNE
The OP there ended up fixing it by rebooting his machine.
That solved it for me too (although I am certainly curious how rebooting differs from restarting all the services...).
I tried to recreate the bug by adding an identical document type and index, and this time around it worked correctly right away. All that's left is hoping it never recurs (especially not during, say, a production release...).
Update: this seems to occur occasionally on my dev machines, and only after waking the system from hibernation (Windows 10), so another solution might be to not hibernate your dev machine...
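If it does recur, a heavy-handed workaround might be to clear the compiled-index cache before creating indexes at startup. This is an untested sketch and rests on the assumption that RavenDB simply recompiles any index whose cached .dll is missing; the path is the one from the error message above:

using System.IO;

// Untested sketch: clear RavenDB's compiled-index cache on startup, assuming
// a stale or half-written cache entry is what breaks index compilation.
const string cachePath = @"C:\dev\db\ravendb\CompiledIndexCache";
if (Directory.Exists(cachePath))
{
    foreach (var file in Directory.GetFiles(cachePath))
    {
        try { File.Delete(file); }
        catch (IOException) { /* the file may still be locked; skip it */ }
    }
}
IndexCreation.CreateIndexes(typeof(MyDocType_ById).Assembly, IoC.Container.GetInstance<IDocumentStore>());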
I am encountering this issue in CE 1.9.1.
When a user registers (it doesn't matter whether it's during checkout or from the Create an Account link), they keep getting the password-mismatch error even though the password is re-entered correctly.
The form validation does not indicate a mismatch, but once the user clicks Register it returns the mismatch error.
There are no errors in the Chrome console...
I found this: https://magento.stackexchange.com/questions/37381/please-make-sure-your-passwords-match-password-error-in-checkout-with-new-re
But I don't believe it is the same error.
I need to fix it soon, any help is greatly appreciated!
We also had this issue with one of our webshops, but we were using a checkout extension, so I'm not sure whether this applies to the regular standard checkout.
The question should be: are you using a checkout extension?
If so, the value inside that extension's model file is set with:
$customer->setConfirmation($password);
but should be:
$customer->setPasswordConfirmation($password);
This worked for me without changing anything in the core. The extensions really should get a small update, but you can make the change manually like I did: just find that line in the model folder of your extension.
As a workaround you can use the following code:
$confirmation = $this->getConfirmation();
$passwordconfirmation = $this->getPasswordConfirmation();
//if ($password != $confirmation) {
if (!(($password == $confirmation) ||
      ($password == $passwordconfirmation))) {
    $errors[] = Mage::helper('customer')->__('Please make sure your passwords match.');
}
Changing app/code/core/Mage/Customer/Model/Customer.php as proposed by Pedro breaks the functionality of the "forgot password" and "edit customer account" pages. Instead, make the following changes to
app/code/core/Mage/Checkout/Model/Type/Onepage.php
by editing the lines starting at line 369:
if ($quote->getCheckoutMethod() == self::METHOD_REGISTER) {
    // set customer password
    $customer->setPassword($customerRequest->getParam('customer_password'));
    $customer->setConfirmation($customerRequest->getParam('confirm_password'));
} else {
    // emulate customer password for guest
    $password = $customer->generatePassword();
    $customer->setPassword($password);
    $customer->setConfirmation($password);
}
and set the PasswordConfirmation property rather than the Confirmation property on the customer object:
if ($quote->getCheckoutMethod() == self::METHOD_REGISTER) {
    // set customer password
    $customer->setPassword($customerRequest->getParam('customer_password'));
    $customer->setPasswordConfirmation($customerRequest->getParam('confirm_password'));
} else {
    // emulate customer password for guest
    $password = $customer->generatePassword();
    $customer->setPassword($password);
    $customer->setPasswordConfirmation($password);
}
I encountered the same problem and fixed it. Snel's answer is the closest to the right one. The problem may lie in external/local modules, so the place to check is not
app/code/core/Mage/Checkout/Model/Type/Onepage.php
(and of course do NOT modify it in any case!).
Instead, find the _validateCustomerData() method that is actually used in your case; Mage::log() or debug_backtrace() will help you locate it. It may look something like this (but not exactly, because this part could have been modified for some reason):
if ($quote->getCheckoutMethod() == self::METHOD_REGISTER) {
    // set customer password
    $customer->setPassword($customerRequest->getParam('customer_password'));
    $customer->setConfirmation($customerRequest->getParam('confirm_password'));
} else {
    // emulate customer password for guest
    $password = $customer->generatePassword();
    $customer->setPassword($password);
    $customer->setConfirmation($password);
}
Those modules extend an old version of the core file, so if your module hasn't been updated, you should change it yourself, replacing
setConfirmation()
with its current equivalent:
setPasswordConfirmation()
I also had this same problem. I'm not comfortable with code, so I wanted to avoid all the fiddling above. To fix it, all I did was update my extensions, disable one-page checkout, clear the cache, and then re-enable one-page checkout.
This fixed the problem without my needing to modify code.
Hope it helps you too.
If anybody still can't figure out why this is happening: the Conlabz Useroptin extension (http://www.magentocommerce.com/magento-connect/newsletter-double-opt-in-for-customers.html) can cause this behavior as well.
Unless this truly is a core bug, I wouldn't recommend changing core files. But I solved it this way: open app\code\core\Mage\Customer\Model\Customer.php and edit the code as below.
$confirmation = $this->getConfirmation();
$passwordconfirmation = $this->getPasswordConfirmation();
//if ($password != $confirmation) {
if (!(($password == $confirmation) ||
      ($password == $passwordconfirmation))) {
    $errors[] = Mage::helper('customer')->__('Please make sure your passwords match.');
}
I had the same issue after updating to 1.9.2.1 and was unable to resolve it using the code changes suggested here and elsewhere; I was also very reluctant to change core code, for obvious reasons.
My solution was a configuration update. The settings were:
Enable OnePage Checkout = Yes
Allow Guest Checkout = Yes
Require Customer to be Logged in = No
I updated these to Yes/No/Yes and cleared the cache. This resolved the issue by inserting the standard customer registration form (rather than appending the registration to the end of the billing info) and passing that info to the billing form on successful registration.
It seems there is a code issue here along the lines of the other responses, but this was an excellent workaround for me. Hope it helps.
In app\code\core\Mage\Customer\Model\Customer.php, change this code:
$confirmation = $this->getPasswordConfirmation();
to this:
$confirmation = $this->getConfirmation();
I'm running a C# MVC 4 app, using Rotativa to convert Razor views to PDFs.
Rotativa is basically a wrapper around wkhtmltopdf.
I upgraded to Rotativa 1.6.1 to fix a page-break issue in wkhtmltopdf, and now my images are "ghosting". I rolled back to 1.5.0 and the problem went away (but page breaks are broken again).
It looks just like this wkhtmltopdf bug:
http://code.google.com/p/wkhtmltopdf/issues/detail?id=788
They claim it's fixed in the tip. (I tried manually updating to the latest stable release and it still occurred.)
Oddly, the issue only occurs on our QA server, not on our DEV server or the integration server that the IT group claims is "identical" to QA...
Any ideas what might be causing this issue? Is anyone else getting it?
This issue:
https://github.com/webgio/Rotativa/issues/51
and this one
https://github.com/webgio/Rotativa/issues/26
imply that there are permission issues that can cause Rotativa problems.
Can anyone point me to more information on what kind of permissions might be at fault, so I can compare them on the two boxes?
Thanks,
Eric-
OK, we figured out a workaround for this... the ghosting only affects JPEG images...
So I just converted them from JPEG to PNG (one of the two known-good image formats)...
Since they were already stored in the DB as JPEGs, I did the conversion on the fly in the Razor view.
There is some loss of fidelity, but other than that it works great...
@try
{
    // ImageObj holds the raw JPEG bytes loaded from the database.
    byte[] byteArrayIn = (byte[])Model.ETA640StudentProfileVM[currentRecord].ImageObj;
    byte[] byteArrayOut = null;

    using (var ms = new System.IO.MemoryStream(byteArrayIn))
    using (var returnImage = System.Drawing.Image.FromStream(ms, true))
    using (var output = new System.IO.MemoryStream())
    {
        // Re-encode as PNG, which wkhtmltopdf renders without ghosting.
        returnImage.Save(output, System.Drawing.Imaging.ImageFormat.Png);
        byteArrayOut = output.ToArray();
    }

    @:<img src="data:image/png;base64,@(Html.Raw(Convert.ToBase64String(byteArrayOut)))" alt="Image Not Available" height="155" />
}
catch
{
    @:<img src="" alt="Error Generating Image" height="155" />
}
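If you'd rather keep this logic out of the view, the same conversion can live in a small helper. This is just a sketch; the class and method names are mine, not part of Rotativa:

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

public static class ImageHelper
{
    // Hypothetical helper: re-encode JPEG bytes as a base64 PNG data URI,
    // so the view only has to emit <img src="@ImageHelper.ToPngDataUri(bytes)" />.
    public static string ToPngDataUri(byte[] jpegBytes)
    {
        using (var input = new MemoryStream(jpegBytes))
        using (var image = Image.FromStream(input, true))
        using (var output = new MemoryStream())
        {
            image.Save(output, ImageFormat.Png);
            return "data:image/png;base64," + Convert.ToBase64String(output.ToArray());
        }
    }
}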
I'm using SQL Azure in a Windows Azure app running as a cloud service. Most of the time my database actions work completely fine (that is, after handling all sorts of timeouts and whatnot); however, I'm running into a problem that seems to occur only intermittently.
using (var connection = new SqlConnection(m_connectionString))
{
    m_ConnectionRetryPolicy.ExecuteAction(() => connection.Open());

    using (var command = connection.CreateCommand())
    {
        command.CommandText = "SELECT * FROM X WHERE Y = Z";
        var reader = m_CommandRetryPolicy.ExecuteAction(() => command.ExecuteReader());
        return LoadData(reader).FirstOrDefault();
    }
}
The line that fails is the command.ExecuteReader() call, with:
ExecuteReader requires an open and available Connection. The connection's current state is closed
Things that I have already considered:
I'm not "reusing" an old connection or saving a connection in a member variable.
There should be no concurrency issues: the repository class these methods belong to is created each time it is needed.
Has anyone else experienced this? I could of course just add this exception to the list that triggers a retry, but I'm not very comfortable with that.
I had a bunch of these errors a few days ago (West Europe) on my production deployment, but they went away by themselves. At the same time I was seeing timeouts, throttling and other errors from SQL Azure. I assume that there was a temporary problem with the platform (or at least the server that I am running on).
You probably aren't doing anything wrong in your code, but are suffering from degraded performance on SQL Azure. Try and handle the errors, perform retries, exponential back-off, queues (to reduce concurrency), splitting your load across databases — that sort of thing.
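For example, a minimal exponential back-off around the whole operation might look like this. This is a sketch, not the retry policy a library would give you; the attempt count and delays are arbitrary assumptions:

using System;
using System.Data.SqlClient;
using System.Threading;

// Sketch: retry a query with exponential back-off when SQL Azure
// throws a (possibly transient) SqlException.
static T ExecuteWithBackoff<T>(string connectionString, Func<SqlConnection, T> query)
{
    const int maxAttempts = 5;
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                return query(connection);
            }
        }
        catch (SqlException)
        {
            if (attempt == maxAttempts)
                throw;

            // Back off 1s, 2s, 4s, 8s between attempts to reduce load
            // on the (possibly throttled) database.
            Thread.Sleep(TimeSpan.FromSeconds(Math.Pow(2, attempt - 1)));
        }
    }
}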
Write everything within try/catch/finally blocks, as follows:

try
{
    m_ConnectionRetryPolicy.ExecuteAction(() => connection.Open());

    using (var command = connection.CreateCommand())
    {
        command.CommandText = "SELECT * FROM X WHERE Y = Z";
        var reader = m_CommandRetryPolicy.ExecuteAction(() => command.ExecuteReader());
        return LoadData(reader).FirstOrDefault();
    }
}
catch (Exception)
{
    // log the failure here, then rethrow rather than swallowing it
    throw;
}
finally
{
    connection.Close();
}

Remember to close the connection in the finally block; it runs even when the return statement above is hit.
There is an Enterprise Library that Microsoft has produced specifically for SQL Azure; here are some examples from their Patterns & Practices group.
It's similar to what you are doing, but it does more for reliability (and these examples show how to get a reliable connection):
http://msdn.microsoft.com/en-us/library/hh680899(v=pandp.50).aspx
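For instance, with the Transient Fault Handling Application Block the retry policy and the SQL Azure error-detection strategy come ready-made. A sketch (namespaces vary a little between versions of the block, so treat the using line as an assumption):

using System;
using System.Data.SqlClient;
using Microsoft.Practices.EnterpriseLibrary.TransientFaultHandling;

// Retry up to 5 times, waiting 1s, 3s, 5s, ... between attempts.
var retryStrategy = new Incremental(5, TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(2));
var retryPolicy = new RetryPolicy<SqlDatabaseTransientErrorDetectionStrategy>(retryStrategy);

// Only errors the strategy classifies as transient trigger a retry.
retryPolicy.ExecuteAction(() =>
{
    using (var connection = new SqlConnection(m_connectionString))
    {
        connection.Open();
        using (var command = connection.CreateCommand())
        {
            command.CommandText = "SELECT * FROM X WHERE Y = Z";
            using (var reader = command.ExecuteReader())
            {
                // Consume the reader here, inside the retried scope,
                // so a broken connection retries the whole unit of work.
            }
        }
    }
});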
Are you sure it's the reader that's failing and not the opening of the connection? I'm encountering an exception when I wrap connection.Open() in m_ConnectionRetryPolicy.ExecuteAction().
However, it works just fine for me if I skip the ExecuteAction wrapper and open the connection using connection.OpenWithRetry(m_ConnectionRetryPolicy).
I'm also using command.ExecuteReaderWithRetry(m_ConnectionRetryPolicy), which is working for me.
I have no idea, though, why it doesn't work when wrapped in ExecuteAction.
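Put together, that approach looks like this. A sketch based on the extension methods mentioned above (they ship with the Transient Fault Handling Application Block), reusing the field names from the question:

using (var connection = new SqlConnection(m_connectionString))
{
    // Open with the retry policy applied via the extension method
    // instead of wrapping Open() in ExecuteAction().
    connection.OpenWithRetry(m_ConnectionRetryPolicy);

    using (var command = connection.CreateCommand())
    {
        command.CommandText = "SELECT * FROM X WHERE Y = Z";

        // Create the reader through the retrying extension as well.
        using (var reader = command.ExecuteReaderWithRetry(m_ConnectionRetryPolicy))
        {
            return LoadData(reader).FirstOrDefault();
        }
    }
}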
I believe this means that Azure has closed the connection behind the scenes, without telling the connection pooler. This is by design. The pooler therefore hands you what it thinks is an available, open connection, but when you try to use it, it turns out not to be open after all.
This seems very clunky to me, but it's the way Azure is at the moment.
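If that is the cause, one defensive pattern is to catch this specific failure once, flush the pool, and retry on a fresh connection. This is an assumption on my part rather than a documented fix, and RunQuery() is a hypothetical stand-in for the query code from the question:

using System;
using System.Data.SqlClient;

try
{
    return RunQuery();
}
catch (InvalidOperationException)
{
    // Drop the (possibly stale) pooled connections for this connection
    // string, then try once more on a brand-new physical connection.
    using (var probe = new SqlConnection(m_connectionString))
    {
        SqlConnection.ClearPool(probe);
    }
    return RunQuery();
}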
I have a completely empty RavenHQ database that's linked to my Appharbor application. The amount of space the database is currently using is 1.1 MB out of the 25 MB available on my bronze account. The database previously had records in it, but I deleted them using "delete collection" in the management studio.
The very first time I call session.Store(myobject), and BEFORE I call .SaveChanges(), I get the following error.
System.InvalidOperationException: Url: "/docs/Raven/Hilo/AccItems"
Raven.Database.Exceptions.OperationVetoedException: PUT vetoed by Raven.Bundles.Quotas.Triggers.DatabaseSizeQoutaForDocumetsPutTrigger because: Database size is 45,347 KB, which is over the allowed quota of 25,600 KB. No more documents are allowed in.
Now, the document is definitely not that big, so I don't know what this error can mean, especially as I don't think I've even hit the database at that point, since I haven't yet called SaveChanges() on the session. Any ideas? Here's the code itself.
XDocument doc = XDocument.Parse(rawXml);
var accItems = ExtractItemsFromFeed(doc);

using (IDocumentSession session = _store.OpenSession())
{
    var dbItems = session.Query<AccItem>().ToList();

    foreach (var item in accItems)
    {
        var existingRecord = dbItems.SingleOrDefault(x => x.SourceId == item.SourceId);
        if (existingRecord == null)
        {
            session.Store(item);
            _logger.Info("Saved new item {0}.", item.ShortName);
        }
        else
        {
            existingRecord.ShortName = item.ShortName;
            _logger.Info("Updated item {0}.", item.ShortName);
        }
        session.SaveChanges();
    }
}
Any other comments about the style of this code would be most welcome, as I was unsure of the best way to approach the "update existing item or create if it isn't there" scenario.
The answer here was as follows.
RavenHQ support found that the database was indeed oversized, but the size reported in the Appharbor-branded RavenHQ control panel was incorrect. I had filled the database way over the limit with a previous, faulty version of the code posted above, so the error message I received was actually correct. It also explains why the error arrived before SaveChanges(): session.Store() asks the server for a new HiLo ID range, and it is that PUT to /docs/Raven/Hilo/AccItems that the quota trigger vetoed.
Fixing this problem without paying to upgrade the database wasn't straightforward, as it's not possible to shrink the database. Since I also wasn't able to delete my single Appharbor/RavenHQ database or create another one, that left me with the choice of creating an entirely new Appharbor application or registering directly with RavenHQ for a new account. I chose the latter. The RavenHQ-branded control panel is slightly different from the Appharbor one, in that it has the ability to create and delete databases.
So to summarize: there doesn't seem to be any benefit to using RavenHQ as an add-on to Appharbor - you might as well go and get a proper free RavenHQ account.
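As for the style question above, one common adjustment (a sketch of my own, not something RavenDB prescribes) is to key the existing documents by SourceId up front and call SaveChanges() once, so the whole feed goes to the server in a single batch:

using (IDocumentSession session = _store.OpenSession())
{
    // Index existing docs by SourceId for constant-time lookups instead
    // of scanning the list once per feed item.
    var dbItems = session.Query<AccItem>()
                         .ToList()
                         .ToDictionary(x => x.SourceId);

    foreach (var item in accItems)
    {
        AccItem existingRecord;
        if (dbItems.TryGetValue(item.SourceId, out existingRecord))
        {
            existingRecord.ShortName = item.ShortName;
        }
        else
        {
            session.Store(item);
        }
    }

    // One round trip (and one quota check) for the whole batch.
    session.SaveChanges();
}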