RavenDB studio "no databases and no file systems" in embedded mode

I could not find any information about this online after much searching, any help would be appreciated.
I create my EmbeddableDocumentStore and everything seems to be working fine; my application is using the database. However, when I access the management studio on port 5050, it states "No databases and no filesystems are available".
The RavenDB.Client and RavenDB.Database NuGet package versions are 3.0.3800.
var store = new EmbeddableDocumentStore
{
    DataDirectory = "Data",
    UseEmbeddedHttpServer = true
};
store.Configuration.Port = 5050;
store.Initialize();

While writing the question I had an idea and tried it. It resolved the issue, but I thought I would post it here for reference in case someone runs into a similar problem.
I could not see this in the docs (http://ravendb.net/docs/article-page/3.0/csharp/server/installation/embedded), but in order to access the resource I had to give it a name.
var store = new EmbeddableDocumentStore
{
    DataDirectory = "Data",
    UseEmbeddedHttpServer = true,
    DefaultDatabase = "Default"
};
Now it shows up in the RavenDB studio.

Not really an issue, but by default you'll be connected to the system database. You can still get to it in the Admin Studio: go to "Manage My Server", then "To system database", and accept the warning message.
Obviously the system database shouldn't be used for application data, but this is useful if you need to recover data that was accidentally saved there.
Setting the "DefaultDatabase" property as you've done is the correct solution.

Related

Core Data + CloudKit Migration: Cannot create or modify field [...] in record [...] in production schema

I use NSPersistentCloudKitContainer to sync Core Data with CloudKit. To prepare for a new migration, I created a new model version of the xcdatamodel and marked it as "current". I created a new entity and added a relationship from another entity. Nothing spectacular, and suitable for a lightweight migration, I thought.
Let's name this new entity: EntityNew
This is my code to initialize the NSPersistentCloudKitContainer:
lazy var persistentContainer: NSPersistentContainer = {
    let container = NSPersistentCloudKitContainer(name: "MyContainerName")
    container.loadPersistentStores(completionHandler: { _, error in
        guard let error = error as NSError? else { return }
        fatalError("###\(#function): Failed to load persistent stores:\(error)")
    })
    container.viewContext.automaticallyMergesChangesFromParent = true
    return container
}()
shouldMigrateStoreAutomatically and shouldInferMappingModelAutomatically are set to true by default.
Everything worked fine locally. No errors occurred during the migration.
The problems started when I created a new instance of EntityNew:
let newItem = EntityNew(context: context)
newItem.someAttribute = "..." // placeholder: set whatever attributes EntityNew defines
saveContext()
newItem was created locally without any problems, but the iCloud Sync stopped working from this moment. The following error appeared in the console:
"<CKRecordID: 0x283fb1460; recordName=2E2209A1-F9F6-4DF2-960D-2C31F764ED05, zoneID=com.apple.coredata.cloudkit.zone:__defaultOwner__>" = "<CKError 0x2830a5950: \"Batch Request Failed\" (22/2024); server message = \"Atomic failure\"; uuid = ADA626F4-160E-49FE-A0BD-2198E5FBD09A; container ID = \"iCloud.[MyContainerID]\">"
"<CKRecordID: 0x283fb1a00; recordName=3145C837-D80D-47E0-B944-DBC6576A9B0A, zoneID=com.apple.coredata.cloudkit.zone:__defaultOwner__>" = "<CKError 0x2830a4000: \"Invalid Arguments\" (12/2006); server message = \"Cannot create or modify field 'CD_[Fieldname in EntityNew]' in record 'CD_[OtherEntityName]' in production schema\"; uuid = ADA626F4-160E-49FE-A0BD-2198E5FBD09A; container ID = \"iCloud.[ContainerID]\">";
"Cannot create or modify field 'CD_[Fieldname in EntityNew]' in record 'CD_[OtherEntityName]' in production schema"
CloudKit tries to modify the field CD_[Fieldname in EntityNew] (which is correct) on the record CD_[OtherEntityName], which is not the entity I created above! So Core Data tries to modify the wrong entity! This behavior does not happen for all fields (approx. 5 out of 10). I checked the local SQLite file on my iPhone, but the local tables seem correct. The phenomenon can be observed in both the Development and the Production iCloud container environment. If I start with an empty database (which already contains the new entity, so no migration is necessary), the synchronization works.
What did I miss? Any ideas?
Thank you!

RavenDb UniqueConstraint doesn't work

I'm using RavenDB Server and Client 3.5.0 and I have tried to get UniqueConstraint to work, without success.
The simple case:
using Raven.Client.UniqueConstraints;

public class User {
    public string Id { get; set; }

    [UniqueConstraint]
    public string Email { get; set; }
}
The documentation says:
Drop the Raven.Bundles.UniqueConstraints assembly in the Plugins directory.
I did it by NuGet: Install-Package RavenDB.Bundles.UniqueConstraints -Version 3.5.0
and then pasted the binary Raven.Bundles.UniqueConstraints.dll into a Plugins folder that I created myself in Raven's root directory.
After saving a User document I get this in the metadata:
"Ensure-Unique-Constraints": [
{
"Name": "Email",
"CaseInsensitive": false
}
]
Everything seems to work, but I can still save documents with the same email.
UniqueConstraintCheckResult<User> checkResult = session.CheckForUniqueConstraints(user);

// returns whether its constraints are available
if (checkResult.ConstraintsAreFree())
{
    session.Store(user);
    session.SaveChanges();
}
I checked this link, RavenDB UniqueConstraint doesn't seem to work, and this one, https://groups.google.com/forum/#!searchin/ravendb/unique|sort:relevance/ravendb/KzO-eIf9vV0/NJyJ4DNniFUJ, and many others where people have the same problem without a solution. In some cases they said they manually check whether the property already exists in the database, as a workaround.
The documentation also says:
To activate unique constraints server-wide, simply add Unique
Constraints to Raven/ActiveBundles configuration in the global
configuration file, or setup a new database with the unique
constraints bundle turned on using API or the Studio
but with no clue how to do that. I did some searching and found a possible way:
In the Studio, select the database, go to Settings -> Database settings, and I found this config:
{
    "Id": "TestRaven",
    "Settings": {
        "Raven/DataDir": "~\\TestRaven"
    },
    "SecuredSettings": {},
    "Disabled": false
}
and I tried adding this config:
"Settings": {
"Raven/DataDir": "~\\TestRaven",
"Raven/ActiveBundles": "UniqueConstraints"
}
Then I got an error when trying to save it. The message said something like "the database is already created and bundles can't be modified or added", and it suggested adding the line "Raven-Temp-Allow-Bundles-Change": true, after which I was able to save the settings with the UniqueConstraints bundle configuration.
So far I think I have met every requirement the documentation describes. The last one is:
Any bundle which is not added to ActiveBundles list, will not be
active, even if the relevant assembly is in the Plugins directory.
The only place I found a bundle list is when creating a new database in the Studio, but that list is not editable; it is just information about what is already enabled.
The documentation lists a lot of requirements but just doesn't tell you how to fulfill them, so you have to guess. I got this far, but guess what? It still isn't working!
My question is: does UniqueConstraints really work in RavenDB? Has anyone gotten this working?
If yes, could you please tell me how?
Thank you in advance!
[Edited]
I forgot to mention that I added the following line:
store.Listeners.RegisterListener(new UniqueConstraintsStoreListener());
I also tried with version 3.5.1.
The issue is that the specified name of the bundle is incorrect, so it won't be active on the server side. Please use "Unique Constraints" instead of "UniqueConstraints" in the "Raven/ActiveBundles" settings option.
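For reference, here is a rough sketch of the "set up a new database with the bundle turned on using the API" route mentioned in the documentation, with the corrected bundle name (the database name and data directory are copied from the settings above purely as an illustration). For the existing database, simply changing the value already added to Raven/ActiveBundles from "UniqueConstraints" to "Unique Constraints" should be enough.

// Create a database with the Unique Constraints bundle active from the start.
// Note the bundle name contains a space: "Unique Constraints", not "UniqueConstraints".
store.DatabaseCommands.GlobalAdmin.CreateDatabase(new DatabaseDocument
{
    Id = "TestRaven",
    Settings =
    {
        { "Raven/DataDir", "~\\TestRaven" },
        { "Raven/ActiveBundles", "Unique Constraints" }
    }
});

// Client side, the store listener from the question is still required:
store.Listeners.RegisterListener(new UniqueConstraintsStoreListener());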

The name 'NodaTimeField' does not exist in the current context error during installation of index on RavenDB

I am using NodaTime's LocalDate in RavenDB index.
Here is an example of the index:
public class TaskIndex : AbstractIndexCreationTask<ScheduleTask>
{
    public TaskIndex()
    {
        Map = tasks => from task in tasks
                       select new
                       {
                           task.Name,
                           PlannedStartDate = task.PlannedStartDate.AsLocalDate().Resolve(),
                           PlannedDueDate = task.PlannedDueDate.AsLocalDate().Resolve()
                       };

        Index(x => x.Name, FieldIndexing.Analyzed);
        Store(x => x.Name, FieldStorage.Yes);
        TermVector(x => x.Name, FieldTermVector.WithPositionsAndOffsets);
    }
}
I installed RavenDB-NodaTime bundle as described here.
Here is the piece of code I use to install the index:
var assembly = AppDomain.CurrentDomain.Load(new AssemblyName
{
    Name = "cs.Scheduling"
});
var catalog = new AssemblyCatalog(assembly);
var provider = new CompositionContainer(catalog);
var commands = documentStore.DatabaseCommands.ForDatabase(dbName);
IndexCreation.CreateIndexes(provider, commands, documentStore.Conventions);
documentStore is configured with a default database, but then I use it to install the index into a different (tenant) database, whose name comes in dbName.
During the installation of the index I got an exception: The name 'NodaTimeField' does not exist in the current context.
I have one default database which is completely different from the database I am trying to install the index for. So basically the case is similar to the one described here, but I am using the standalone version of RavenDB server.
I tried to find out how I could do what is suggested there, but was not able to:
embeddableDocumentStore.ServerIfEmbedded.Options.DatabaseLandlord.SetupTenantConfiguration += configuration =>
{
    configuration.Catalog.Catalogs.Add(new TypeCatalog(typeof(DeleteOnConflict)));
    configuration.Catalog.Catalogs.Add(new TypeCatalog(typeof(PutOnConflict)));
};
The version of RavenDB I am using is 2.5.2956, and RavenDB.Client.NodaTime is 2.5.10.
Hope you can help. Thanks.
In my case it was a very silly mistake. When I installed the RavenDB server some time ago, I installed it into a non-default destination. Later, some RavenDB updates were installed into the default destination (i.e. \Program Files (x86)\RavenDB). And when I was installing the RavenDB-NodaTime bundle, I put it into the incorrect destination (\Program Files (x86)\RavenDB).
After detecting this issue and properly configuring the RavenDB server in my correct destination, the error described in the heading went away.
Hope this answer can help somebody else.
P.S. Later there was a deserialization error when reading data from the db (RavenDB did not know how to deserialize a date from a string in "yyyy-MM-dd" format into a LocalDate object), which I fixed by calling store.ConfigureForNodaTime(DateTimeZoneProviders.Tzdb); after the store.Initialize(); call, as Steven suggested in his answer.
I believe the answer is that your tenant database does not have the bundle "activated". Your database document (under Settings in Raven 3) should have something like:
"Raven/ActiveBundles": "Encryption;Compression;NodaTime"
Also you must call
store.ConfigureForNodaTime(DateTimeZoneProviders.Tzdb);
I call this after store.Initialize(). Once you do both of these things, you may have to fix existing data by re-saving your documents (not sure if there is another way). New data will be properly stored in a format like '2016-2-3', which should make your index return data.
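Put together, a minimal sketch of the client-side part (the URL and database name below are placeholders; the tenant database document itself still needs NodaTime listed in "Raven/ActiveBundles" as shown above):

using NodaTime;
using Raven.Client.Document;
using Raven.Client.NodaTime;

var store = new DocumentStore
{
    Url = "http://localhost:8080",   // placeholder server URL
    DefaultDatabase = "TenantDb"     // placeholder tenant database name
};
store.Initialize();

// After Initialize(): teaches the client to (de)serialize NodaTime types such as
// LocalDate to the string formats the server-side bundle expects.
store.ConfigureForNodaTime(DateTimeZoneProviders.Tzdb);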

Updating Data Source Login Credentials for SSRS Report Server Tables

I have added a lot of reports with an invalid data source login to an SSRS report server, and I wanted to update the User Name and Password with a script so I don't have to update each report individually.
However, from what I can tell the fields are stored as Images and are encrypted. I can't find anything about how they are encrypted or how to update them. It appears that the User Name and Password are stored in the dbo.DataSource table. Any ideas? I want the script to run in SQL.
I would be very, very, VERY leery of hacking the Reporting Services tables. It may be that someone out there can offer a reliable way to do what you suggest, but it strikes me as a good way to clobber your entire installation.
My suggestion would be that you make use of the Reporting Services APIs and write a tiny app to do this for you. The APIs are very full-featured -- pretty much anything you can do from the Report Manager website, you can do with the APIs -- and fairly simple to use.
The following code does NOT do exactly what you want -- it points the reports to a shared data source -- but it should show you the basics of what you'd need to do.
public void ReassignDataSources()
{
    using (ReportingService2005 client = new ReportingService2005())
    {
        var reports = client.ListChildren(FolderName, true).Where(ci => ci.Type == ItemTypeEnum.Report);
        foreach (var report in reports)
        {
            SetServerDataSource(client, report.Path);
        }
    }
}

private void SetServerDataSource(ReportingService2005 client, string reportPath)
{
    var itemSources = client.GetItemDataSources(reportPath);
    if (itemSources.Any())
    {
        client.SetItemDataSources(
            reportPath,
            new DataSource[] {
                new DataSource() {
                    Item = CreateServerDataSourceReference(),
                    Name = itemSources.First().Name
                }
            });
    }
}

private DataSourceDefinitionOrReference CreateServerDataSourceReference()
{
    return new DataSourceReference() { Reference = _DataSourcePath };
}
I doubt this answers your question directly, but I hope it can offer some assistance.
MSDN Specifying Credentials
MSDN also suggests using shared data sources for this very reason: See MSDN on shared data sources
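If the reports point at shared data sources, the same API can also update the stored credentials directly, which is closer to what was asked. A rough sketch (the method and parameter names here are only illustrative; GetDataSourceContents and SetDataSourceContents are the relevant ReportingService2005 calls):

private void UpdateSharedDataSourceCredentials(ReportingService2005 client, string dataSourcePath, string userName, string password)
{
    // Read the existing definition, swap the stored credentials, and write it back.
    DataSourceDefinition definition = client.GetDataSourceContents(dataSourcePath);
    definition.CredentialRetrieval = CredentialRetrievalEnum.Store; // credentials stored securely on the report server
    definition.UserName = userName;
    definition.Password = password;
    client.SetDataSourceContents(dataSourcePath, definition);
}

For credentials embedded in individual reports, GetItemDataSources/SetItemDataSources can be used in the same way, supplying a DataSourceDefinition instead of a DataSourceReference as the Item.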

RavenDB, RavenHQ and Appharbor - document size error with very first document

I have a completely empty RavenHQ database that's linked to my Appharbor application. The amount of space the database is currently using is 1.1 MB out of an available 25 MB for my bronze account. The database previously had records in it, but I have deleted them using "delete collection" in the management studio.
The very first time I call session.Store(myobject), and BEFORE I call .SaveChanges(), I get the following error.
System.InvalidOperationException: Url: "/docs/Raven/Hilo/AccItems"
Raven.Database.Exceptions.OperationVetoedException: PUT vetoed by Raven.Bundles.Quotas.Triggers.DatabaseSizeQoutaForDocumetsPutTrigger because: Database size is 45,347 KB, which is over the allowed quota of 25,600 KB. No more documents are allowed in.
Now, the document is definitely not that big, so I don't know what this error can mean, especially as I don't think I've even hit the database at that point since I haven't closed the session by calling SaveChanges(). Any ideas? Here's the code itself.
XDocument doc = XDocument.Parse(rawXml);
var accItems = ExtractItemsFromFeed(doc);

using (IDocumentSession session = _store.OpenSession())
{
    var dbItems = session.Query<AccItem>().ToList();
    foreach (var item in accItems)
    {
        var existingRecord = dbItems.SingleOrDefault(x => x.Source == item.Source && x.SourceId == item.SourceId);
        if (existingRecord == null)
        {
            session.Store(item);
            _logger.Info("Saved new item {0}.", item.ShortName);
        }
        else
        {
            existingRecord.ShortName = item.ShortName;
            _logger.Info("Updated item {0}.", item.ShortName);
        }
        session.SaveChanges();
    }
}
Any other comments about the style of this code would be most welcome, as I was unsure of the best way to approach the "update existing item or create if it isn't there" scenario.
The answer here was as follows.
RavenHQ support found that the database was indeed oversized, but it seemed that the size reported in the Appharbor-branded RavenHQ control panel was incorrect. I had filled up the database way over the limit with a previous faulty version of the code posted above, so the error message I received was actually correct.
Fixing this problem without paying to upgrade the database wasn't straightforward, as it's not possible to shrink the database. Since I also wasn't able to delete my single Appharbor/RavenHQ database or create another one, that left me with the choice of creating an entirely new Appharbor application or registering directly with RavenHQ for a new account. I chose the latter. The RavenHQ-branded control panel is slightly different from the Appharbor one, in that it has the ability to create and delete databases.
So to summarize: there doesn't seem to be any benefit to using RavenHQ as an add-on to Appharbor - you might as well go and get a proper free RavenHQ account.
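A footnote on the style question: the error appearing before SaveChanges() is expected, because session.Store() contacts the server to reserve an id range from the Raven/Hilo/AccItems document, and it was that PUT the quota trigger vetoed. For the "update or create" pattern itself, one common RavenDB approach is to give documents deterministic ids so that storing the same source item twice simply overwrites the existing document. A rough sketch, assuming SourceId uniquely identifies a feed item:

using (IDocumentSession session = _store.OpenSession())
{
    foreach (var item in accItems)
    {
        // With an explicit, deterministic id there is no need to query first:
        // storing the same "AccItems/..." key again replaces the previous version.
        session.Store(item, "AccItems/" + item.SourceId);
    }

    session.SaveChanges(); // one round trip for the whole batch instead of one per item
}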