I have a requirement: I have two RavenDB databases hosted on different ports.
For now, let them be on ports 8080 and 8081.
Whenever there is any change in the RavenDB instance on port 8080, it should get reflected in the database on port 8081.
Currently, I am able to dig into the Raven sample application folder
Raven\Samples\Raven.Sample.Replication
and execute (after going through some blog posts and a previous StackOverflow question, Raven DB Replication Setup Issue):
Start Raven.ps1
var documentStore1 = new DocumentStore { Url = "http://localhost:8080" }.Initialize();
var documentStore2 = new DocumentStore { Url = "http://localhost:8081" }.Initialize();
After initializing the document stores, I am trying to save data:
using (var session1 = documentStore1.OpenSession())
{
    session1.Store(new User { Id = "users/ayende", Name = "Ayende" });
    session1.SaveChanges();
}
using (var session2 = documentStore2.OpenSession())
{
    session2.Store(new User { Id = "users/ayende", Name = "Oren" });
    session2.SaveChanges();
}
As per my understanding, the document should get reflected in both databases. Please correct me if I am wrong.
But this is not happening!
If I execute only the first insert:
using (var session1 = documentStore1.OpenSession())
{
    session1.Store(new User { Id = "users/ayende", Name = "Ayende" });
    session1.SaveChanges();
}
it only saves to the database on port 8080, not to the one on 8081.
Please let me know how I can achieve the desired replication.
Thanks
You didn't set up the replication bundle; the bundle is what performs the replication.
You can do that by following the instructions at: http://old.ravendb.net/documentation/replication
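For reference, once the replication bundle DLL is in the Plugins folder of both servers, you tell the source server where to replicate by storing a replication-destinations document in it. Here is a rough sketch against the 8080 store; the ReplicationDocument/ReplicationDestination class names come from the bundle assembly and their namespace may differ between RavenDB versions:
// Sketch only: point the server on 8080 at the server on 8081.
// Requires Raven.Bundles.Replication.dll on both servers and a reference to its types
// (plus System.Collections.Generic for the List<>).
using (var session1 = documentStore1.OpenSession())
{
    session1.Store(new ReplicationDocument
    {
        Id = "Raven/Replication/Destinations",
        Destinations = new List<ReplicationDestination>
        {
            new ReplicationDestination { Url = "http://localhost:8081" }
        }
    });
    session1.SaveChanges();
}
After this document is saved on the 8080 server, writes made there should start showing up on 8081; the second server only needs its own destinations document if you also want replication in the other direction.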
I'm using Apache Ignite on Azure Kubernetes as a distributed cache.
I also have a web API on Azure based on .NET 6.
The Ignite service runs stably and works very well on AKS.
But on the first request, the API has to connect to Ignite, which takes around 3 seconds. After that, Ignite responses take around 100 ms, which is great. Here are my Web API performance outputs for the GetProduct function.
At first, I tried registering the Ignite service as a singleton, but it sometimes failed with 'connection closed'. How can I keep the Ignite connection open at all times? Or does anyone have a better idea?
Here is my latest GetProduct code:
[HttpGet("getProduct")]
public IActionResult GetProduct(string barcode)
{
    Stopwatch _stopWatch = new Stopwatch();
    _stopWatch.Start();
    Product product;
    // A new CacheManager (and therefore a new Ignite connection) is created on every request.
    CacheManager cacheManager = new CacheManager();
    cacheManager.ProductCache.TryGet(barcode, out product);
    if (product == null)
    {
        return NotFound(new ApiResponse<Product>(product));
    }
    cacheManager.DisposeIgnite();
    _logger.LogWarning("Loaded in " + _stopWatch.ElapsedMilliseconds + " ms...");
    return Ok(new ApiResponse<Product>(product));
}
I also add the CacheManager class here:
public CacheManager()
{
    ConnectIgnite();
    InitializeCaches();
}

public void ConnectIgnite()
{
    _ignite = Ignition.StartClient(GetIgniteConfiguration());
}

public IgniteClientConfiguration GetIgniteConfiguration()
{
    var appSettingsJson = AppSettingsJson.GetAppSettings();
    var igniteEndpoints = appSettingsJson["AppSettings:IgniteEndpoint"];
    var igniteUser = appSettingsJson["AppSettings:IgniteUser"];
    var ignitePassword = appSettingsJson["AppSettings:IgnitePassword"];
    var nodeList = igniteEndpoints.Split(",");
    var config = new IgniteClientConfiguration
    {
        Endpoints = nodeList,
        UserName = igniteUser,
        Password = ignitePassword,
        EnablePartitionAwareness = true,
        SocketTimeout = TimeSpan.FromMilliseconds(System.Threading.Timeout.Infinite)
    };
    return config;
}
Make it a singleton. An Ignite node, even in client mode, is supposed to run for the entire lifetime of your application. All Ignite APIs are thread-safe. If you get a connection error, please provide more details (the exception stack trace, how you create the singleton, etc.).
You can also try the Ignite thin client, which consumes fewer resources and connects instantly: https://ignite.apache.org/docs/latest/thin-clients/dotnet-thin-client.
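For example, here is a minimal sketch of what that singleton registration could look like in a .NET 6 Program.cs, reusing the same appsettings keys as your GetIgniteConfiguration method (an illustration to adapt to your own wiring, not your exact code):
using Apache.Ignite.Core;
using Apache.Ignite.Core.Client;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();

// Build the Ignite client configuration once, from the same settings keys as above.
var igniteConfig = new IgniteClientConfiguration
{
    Endpoints = builder.Configuration["AppSettings:IgniteEndpoint"].Split(','),
    UserName = builder.Configuration["AppSettings:IgniteUser"],
    Password = builder.Configuration["AppSettings:IgnitePassword"],
    EnablePartitionAwareness = true
};

// One client for the entire application lifetime; its APIs are thread-safe,
// so controllers can share it instead of constructing a CacheManager per request.
builder.Services.AddSingleton<IIgniteClient>(_ => Ignition.StartClient(igniteConfig));

var app = builder.Build();
app.MapControllers();
app.Run();
The controller then takes IIgniteClient (or a cache wrapper built on top of it) through its constructor instead of newing up and disposing a CacheManager inside GetProduct.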
I need to limit the concurrent sessions allowed per user in an Apache SshServer. I found two references to this functionality, but they seem to be obsolete.
Here's the original patch back in 2010:
https://issues.apache.org/jira/browse/SSHD-95
I also found this reference to its usage:
http://apache-mina.10907.n7.nabble.com/How-to-set-max-count-connections-in-sshd-service-td44764.html
which refers to an SshServer.setProperty() method.
I'm using sshd-core 2.4.0, and this method is absent from SshServer. I can't see any obvious replacement, and I can't find any documentation on what happened to it or how I'm supposed to do this now.
I still see the MAX_CONCURRENT_SESSIONS key in ServerFactoryManager, so I assume the functionality is still in there somewhere, but I can't find where I need to set it.
Here's what the setup of the server looks like (it's for an SFTP server, but that shouldn't matter for the problem at hand, I think):
private val server = SshServer.setUpDefaultServer().apply {
    val sftpSubsystemFactory = SftpSubsystemFactory().apply {
        addSftpEventListener(sftpEventListener)
    }
    port = sftpPort
    host = "localhost"
    keyPairProvider = when {
        sftpKeyname.isEmpty() -> throw IllegalStateException("No key name for SFTP, aborting!")
        sftpKeyname == "NO_RSA" -> {
            log.warn("Explicitly using NO_RSA, sftp encryption is insecure!")
            SimpleGeneratorHostKeyProvider(File("host.ser").toPath())
        }
        else -> KeyPairProvider.wrap(loadKeyPair(sftpKeyname))
    }
    setPasswordAuthenticator { username, password, _ ->
        // current evil hack to prevent users from opening more than one session
        if (activeSessions.any { it.username == username }) {
            log.warn("User attempted multiple concurrent sessions!")
            throw IllegalUserStateException("User already has a session!")
        } else {
            log.debug("new session for user $username")
            // throws AuthenticationException
            authenticationService.checkCredentials(username, password)
            true
        }
    }
    subsystemFactories = listOf(sftpSubsystemFactory)
    fileSystemFactory = YellowSftpFilesystemFactory(ftpHome)
    start()
    log.info("SFTP server started on port $port")
}
(From my comment) You can set the property directly:
server.apply {
    properties[ServerFactoryManager.MAX_CONCURRENT_SESSIONS] = 50L
}
I am working on setting up a multi-tenant, separate-database application and have made some good progress from reading this StackOverflow post:
Multitenancy with Fluent nHibernate and Ninject. One Database per Tenant
I see two sessions being set up. One is the 'master' session, which is used to get the tenant information; the other is the tenant session, which is specific to the subdomain. I have the app switching nicely to the specified database based on the domain, and I have questions on how to set up the 'master' database session and how to use it.
I tried registering a new session specifically for the master session but get an error about having already registered an ISession.
I'm new to nHibernate and not sure of the best route to take on this.
NinjectWebCommon.cs
kernel.Bind<WebApplication1.ISessionSource>().To<NHibernateTenantSessionSource>().InSingletonScope();
kernel.Bind<ISession>().ToMethod(c => c.Kernel.Get<WebApplication1.ISessionSource>().CreateSession());
kernel.Bind<ITenantAccessor>().To<DefaultTenantAccessor>();
ITenantAccessor.cs
public Tenant GetCurrentTenant()
{
    var host = HttpContext.Current.Request.Url != null ? HttpContext.Current.Request.Url.Host : string.Empty;
    var pattern = ConfigurationManager.AppSettings["UrlRegex"];
    var regex = new Regex(pattern);
    var match = regex.Match(host);
    var subdomain = match.Success ? match.Groups[1].Value.ToLowerInvariant() : string.Empty;
    Tenant tenant = null;
    if (!string.IsNullOrEmpty(subdomain))
    {
        // Get tenant info from the master DB.
        // Lookup needs to be cached.
        DomainModel.Master.Tenants tenantInfo;
        using (ISession session = new NHibernateMasterSessionSource().CreateSession())
        {
            tenantInfo = session.CreateCriteria<DomainModel.Master.Tenants>()
                .Add(Restrictions.Eq("SubDomain", subdomain))
                .UniqueResult<WebApplication1.DomainModel.Master.Tenants>();
        }
        var connectionString = string.Format(ConfigurationManager.AppSettings["TenanatsDataConnectionStringFormat"],
            tenantInfo.DbName, tenantInfo.DbUsername, tenantInfo.DbPassword);
        tenant = new Tenant();
        tenant.Name = subdomain;
        tenant.ConnectionString = connectionString;
    }
    return tenant;
}
Thanks for your time on this.
Add another session binding with a condition, e.g.:
kernel
    .Bind<ISession>()
    .ToMethod(c => c.Kernel.Get<NHibernateMasterSessionSource>().CreateSession())
    .WhenInjectedInto<TenantEvaluationService>();
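TenantEvaluationService there is just the name used in the binding; roughly, the idea is that the tenant lookup moves into a dedicated service and only that service gets the master session injected. A sketch of what it could look like, adapted from your GetCurrentTenant code (illustrative only, not a drop-in implementation):
using System.Configuration;
using NHibernate;
using NHibernate.Criterion;

public class TenantEvaluationService
{
    // Resolved through the WhenInjectedInto binding above, so this is the master session.
    private readonly ISession _masterSession;

    public TenantEvaluationService(ISession masterSession)
    {
        _masterSession = masterSession;
    }

    public Tenant GetTenantBySubdomain(string subdomain)
    {
        // Same lookup as in GetCurrentTenant, but against the injected master session.
        var tenantInfo = _masterSession.CreateCriteria<DomainModel.Master.Tenants>()
            .Add(Restrictions.Eq("SubDomain", subdomain))
            .UniqueResult<DomainModel.Master.Tenants>();

        if (tenantInfo == null)
            return null;

        var connectionString = string.Format(
            ConfigurationManager.AppSettings["TenanatsDataConnectionStringFormat"],
            tenantInfo.DbName, tenantInfo.DbUsername, tenantInfo.DbPassword);

        return new Tenant { Name = subdomain, ConnectionString = connectionString };
    }
}
DefaultTenantAccessor can then keep the subdomain parsing and delegate the master-database lookup to this service, so it no longer needs to create NHibernateMasterSessionSource inline.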
I'm using Autofac and the interfaces are correctly resolved, but this code fails with "No connection could be made because the target machine actively refused it 127.0.0.1:8081":
using (var store = GetService<IDocumentStore>())
{
    using (var session = store.OpenSession())
    {
        session.Store(new Entry { Author = "bob", Comment = "My son says this", EntryId = Guid.NewGuid(), EntryTime = DateTime.Now, Quote = "I hate you dad." });
        session.SaveChanges();
    }
}
Here is the registration:
builder.Register<IDocumentStore>(c =>
{
    var store = new DocumentStore { Url = "http://localhost:8081" };
    store.Initialize();
    return store;
}).SingleInstance();
When I navigate to http://localhost:8081 I do get the Silverlight management UI. (Although I'm running a Windows VM, and VMware and Silverlight 5 don't play together; that's another issue entirely.) Anyway, does anyone see what I'm doing wrong here, or what I should be doing differently? Thanks for any code, tips, or tricks.
On a side note, can I enter some dummy records from a command-line interface? Any docs or examples of how I can do that?
Thanks, all.
Just curious: are you switching RavenDB to listen on 8081? The default is 8080. If you're getting the management studio to come up, I suspect you are.
I'm not too familiar with Autofac, but it looks like you're wrapping your singleton DocumentStore in a using statement, which disposes it.
Try:
using (var session = GetService<IDocumentStore>().OpenSession())
{
    // do the Store/SaveChanges here; only the session is disposed,
    // while the singleton IDocumentStore stays alive for the lifetime of the app
}
As far as dummy records go, the management studio will ask you if you want to generate some dummy data if your DB is empty. If you can't get Silverlight to work in the VM, I'm not sure if there's another automated way to do it.
Perhaps use Smuggler:
http://ravendb.net/docs/server/administration/export-import
But you'd have to find something to import.
I'm reading through Rob Ashton's excellent blog post on RavenDB:
http://codeofrob.com/archive/2010/05/09/ravendb-an-introduction.aspx
and I'm working through the code as I read. But when I try to add an index, I get a 401 error. Here's the code:
class Program
{
    static void Main(string[] args)
    {
        using (var documentStore = new DocumentStore() { Url = "http://localhost:8080" })
        {
            documentStore.Initialise();
            documentStore.DatabaseCommands.PutIndex(
                "BasicEntityBySomeData",
                new IndexDefinition<BasicEntity, BasicEntity>()
                {
                    Map = docs => from doc in docs
                                  where doc.SomeData != null
                                  select new
                                  {
                                      SomeData = doc.SomeData
                                  },
                });
            string entityId;
            using (var documentSession = documentStore.OpenSession())
            {
                var entity = new BasicEntity()
                {
                    SomeData = "Hello, World!",
                    SomeOtherData = "This is just another property",
                };
                documentSession.Store(entity);
                documentSession.SaveChanges();
                entityId = entity.Id;
                var loadedEntity = documentSession.Load<BasicEntity>(entityId);
                Console.WriteLine(loadedEntity.SomeData);
                var docs = documentSession.Query<BasicEntity>("BasicEntityBySomeData")
                    .Where("SomeData:Hello~")
                    .WaitForNonStaleResults()
                    .ToArray();
                docs.ToList().ForEach(doc => Console.WriteLine(doc.SomeData));
                Console.Read();
            }
        }
    }
}
It throws the 401 error on the line that makes the PutIndex() call. Any ideas what permissions I need to apply, and where I need to apply them?
What do you mean by Server mode? Do you mean simply executing Raven.Server?
I've not had to do anything special client-side to get that to work, although I have had to run Raven.Server with elevated privileges, because I'm not sure the code that asks for the relevant permissions is quite working as intended. (Actually, I'll raise a query about that on the mailing list.)
You shouldn't be getting a 401 error unless you've changed the configuration of Raven.Server.
If you're running the server, you can browse to it directly using the URL specified in the configuration (localhost:8080 by default). Make sure it's actually running and working as intended before continuing to troubleshoot.
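One more thing worth checking: if anonymous access on the server has been restricted (the anonymous-access setting in the Raven.Server configuration), writes such as PutIndex can come back as 401 even when plain reads work. A hedged, client-side sketch, assuming the client version you're using exposes a Credentials property on DocumentStore (older builds may differ):
// Sketch only: explicitly send the current Windows credentials with every request.
using System.Net;
using Raven.Client.Document;

var documentStore = new DocumentStore
{
    Url = "http://localhost:8080",
    Credentials = CredentialCache.DefaultNetworkCredentials
};
documentStore.Initialise(); // spelled this way in the client version used in the question
If that makes the 401 go away, the server-side anonymous-access configuration is the thing to revisit.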