I have a .NET client node and several Java server nodes. There are no .NET server nodes. I have one cache, and in that cache there are many different types; I think of a cache instance as a schema, not a single table. From .NET I want to subscribe to grid events. However, depending on what the client is doing, we may only be interested in a subset of types or object instances. Right now I have to subscribe to all events and then do my filtering on object type (and instance fields) on the .NET client side. What I really want to do is filter on the remote Java side. A ContinuousQuery with a RemoteFilter seems perfect, although according to https://apacheignite-net.readme.io/docs/platform-interoperability it is not supported.
Is there any way I can achieve some server-side filtering, at least on the type itself? It doesn't seem right to create one cache per type just to achieve this.
Thanks!
Gordon.
A remote filter is supported for continuous queries. However, if you're querying from a .NET client and the filter is implemented in .NET, the server nodes must be running the .NET runtime as well. Refer to this page for information about how to start a standalone .NET node: https://apacheignite-net.readme.io/docs/getting-started-2
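For example, a remote filter that only lets events for one type through could look roughly like this. It is a sketch against the Ignite.NET continuous query API; the Person type, the key type and the cache name are placeholders for your own.

    using System;
    using System.Collections.Generic;
    using Apache.Ignite.Core.Cache.Event;
    using Apache.Ignite.Core.Cache.Query.Continuous;

    // Placeholder for one of the many types stored in the cache.
    [Serializable]
    public class Person { }

    // Remote filter: runs on the server nodes, so only matching events
    // ever reach the client. The class must be deployable to the servers,
    // which is why they need the .NET runtime.
    [Serializable]
    public class PersonOnlyFilter : ICacheEntryEventFilter<int, object>
    {
        public bool Evaluate(ICacheEntryEvent<int, object> evt)
        {
            return evt.Value is Person;
        }
    }

    // Local listener: only sees events that passed the remote filter.
    public class LocalListener : ICacheEntryEventListener<int, object>
    {
        public void OnEvent(IEnumerable<ICacheEntryEvent<int, object>> evts)
        {
            foreach (var evt in evts)
                Console.WriteLine("{0} -> {1}", evt.Key, evt.Value);
        }
    }

    // Usage (cache name is illustrative):
    // var cache = ignite.GetCache<int, object>("myCache");
    // var qry = new ContinuousQuery<int, object>(new LocalListener())
    // {
    //     Filter = new PersonOnlyFilter()
    // };
    // using (cache.QueryContinuous(qry))
    // {
    //     // events arrive in LocalListener while the handle is alive
    // }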
Related
Currently, we have a .NET Framework 4.7-based Windows service that we install through an MSI built using WiX. During install, we register multiple Windows services for the same exe, the only difference being the arguments passed to each service: Myapp.exe -instance 1, Myapp.exe -instance 2, and so on. Each instance uses a different configuration based on the instance number and polls a different IBM MQ queue to process messages. We install around 14 such instances.
Now that we are looking to migrate to .NET Core, we are wondering whether it's worth changing this deployment model and instead moving to multiple instances of hosted services. With this, we would simply register the hosted service multiple times, each with a different constructor parameter (a rough registration sketch follows below). So I am trying to understand what the potential downsides of this approach could be. So far, I can think of a couple:
Since these run as independent processes today, we currently have the ability to stop/start a specific instance of the Windows service. We would potentially lose that ability.
Since these run as independent processes, we can easily identify a memory spike in a specific instance of the Windows service, so for troubleshooting we can just focus on that instance. With a single executable, we lose this ability as well.
Apart from these, what other potential pitfalls might I come across with this approach?
Also, for the above 2 points, is there any workaround when using multiple hosted services?
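For reference, the registration we have in mind looks roughly like this (MyAppWorker, the hard-coded instance count and the polling body are placeholders; UseWindowsService comes from the Microsoft.Extensions.Hosting.WindowsServices package):

    using System;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;

    // Placeholder worker; the real one would poll its own IBM MQ queue
    // based on the instance number it was constructed with.
    public class MyAppWorker : BackgroundService
    {
        private readonly int _instance;
        public MyAppWorker(int instance) => _instance = instance;

        protected override async Task ExecuteAsync(CancellationToken stoppingToken)
        {
            while (!stoppingToken.IsCancellationRequested)
            {
                Console.WriteLine($"Instance {_instance} polling...");
                await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
            }
        }
    }

    public static class Program
    {
        public static void Main(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .UseWindowsService() // still one Windows service overall
                .ConfigureServices(services =>
                {
                    // One hosted service per former "-instance N" process.
                    for (var i = 1; i <= 14; i++)
                    {
                        var instance = i; // capture for the closure
                        services.AddSingleton<IHostedService>(
                            _ => new MyAppWorker(instance));
                    }
                })
                .Build()
                .Run();
    }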
I'm not sure specifically about Windows services, but I had the same question for microservices. I think in general there isn't much in it either way, but some things to consider:
All services go down if you need to deploy a new one (though if they are all the same, you are likely to update all of them at the same time anyway).
Coordinating between them (if necessary) might be easier (locks, transactions, etc.) if they are together, but that might likewise tempt you to do things that break encapsulation, simply because you can.
They would all start and stop at the same time in a single service; if you want to control them separately, you will either need an external enable/disable mechanism or separate Windows services.
If you ever need to separate them, e.g. onto separate machines, you will have to do the risky work of separating them later.
It sounds like they are largely identical, just targeting different data, so I can't think of anything that would be a particular problem.
I found this answer:
1. A long answer to why Quartz requires two data sources; if you want an even deeper answer, I believe I'll need to dig into the source code or do more research:
a. JobStoreCMT relies upon transactions being managed by the application which is using Quartz. A JTA transaction must be in progress before attempting to schedule (or unschedule) jobs/triggers. This allows the "work" of scheduling to be part of the application's "larger" transaction. JobStoreCMT actually requires the use of two datasources: one whose connections have their transactions managed by the application server (via JTA) and one whose connections do not participate in global (JTA) transactions. JobStoreCMT is appropriate when applications are using JTA transactions (such as via EJB session beans) to perform their work. (Ref: http://quartz-scheduler.org/documentation/quartz-1.x/configuration/ConfigJobStoreCMT)
However, there is a suspected conflict with a non-transactional driver in our particular application. Does anyone know if Quartz (JobStoreCMT) can work with just a transactional data source?
Does anyone know if Quartz (JobStoreCMT) can work with just a transactional data source?
No, you must have a datasource of each type. Invocations on the API by the client application use the XA-capable connections, so that the work joins the application's transaction. Work done by the scheduler's internal threads uses the non-XA connections.
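To make that concrete, the configuration ends up looking roughly like this quartz.properties sketch; the datasource names and JNDI paths are placeholders, and the exact delegate/table settings depend on your database:

    # Sketch only: datasource names and JNDI paths are placeholders.
    org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreCMT
    org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
    org.quartz.jobStore.tablePrefix = QRTZ_

    # Container-managed (XA) datasource: used by API calls made from your
    # application threads, so scheduling joins the caller's JTA transaction.
    org.quartz.jobStore.dataSource = managedDS
    org.quartz.dataSource.managedDS.jndiURL = java:comp/env/jdbc/myManagedDS

    # Non-managed datasource: used by the scheduler's own internal threads,
    # outside any global transaction.
    org.quartz.jobStore.nonManagedTXDataSource = nonManagedDS
    org.quartz.dataSource.nonManagedDS.jndiURL = java:comp/env/jdbc/myNonManagedDS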
I'm currently working on a Silverlight / MS SQL project where the Entity Framework has not been implemented, and I would like to know the best practice for dealing with calculated fields in this particular situation.
Considering that some external system might also consume my data directly in the DB or through a web service, here are the 3 options I can see right now:
1) Force any external system to consume data through a web service and create all the calculated fields in the objects only.
2) Create the calculated fields in a DB view and resync your object with the server each time a value needs to be calculated.
3) Replicate the calculation rules in the object and the database view.
Any other suggestions would also be welcomed.
I would recommend following two principles: data decoupling and minimal duplication of functionality. Both suggest putting your calculations in one place only and serving them already calculated. So I would implement the calculations in the DB and serve them via a web service.
However, you have to consider your particular case. For example, if the calculations are VERY heavy, you could delegate them to the client to spare server resources. This could even be the reason you are using Silverlight. I am in a similar situation on a project, and I found that the best compromise is to push raw data to the client and have it do the heavy computations.
Having a single best practice or approach for this kind of problem is difficult: as circumstances change, what was formerly a good approach might start to seem less useful. That said, where possible I would do anything data-related at the DB level, including calculated fields. That way, no matter where you are looking at the data from, you will see the same results: your web service, SQL reporting, and anything else that needs to look at or receive data will all see the same values.
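To make that concrete, serving a pre-calculated field over the web service could look something like this; the type and property names are made up for illustration:

    using System.Runtime.Serialization;

    // Illustrative DTO: the calculated value is computed once on the server
    // (for example in a DB view) and shipped to every consumer as plain data,
    // so the rule is never duplicated in Silverlight or anywhere else.
    [DataContract]
    public class OrderLineDto
    {
        [DataMember] public decimal UnitPrice { get; set; }
        [DataMember] public int Quantity { get; set; }

        // Populated from the view's computed column; consumers treat it as data.
        [DataMember] public decimal LineTotal { get; set; }
    }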
I am very new to WCF/RIA services. I am looking to build an application using PRISM/MEF where I can offer new plug-ins for the application from time to time. Now, my database structure is pretty much static. It will not see many changes during its life (but there still might be a few). The new plug-ins will use the entity classes exposed by the database.
My question is this: when I create new plug-in controls, these controls might need some special server-side methods to be run, which would mean updating my WCF/RIA service to account for the new methods. I really want to avoid that, and was wondering if it is possible to create a WCF service that has just 4 CRUD methods. I could pass any entity to these methods and, depending on its type, the entity gets saved, updated or deleted. It would also let me pass any kind of LINQ query to the Get method and return the appropriate results. The goal is to avoid making changes to the WCF service unless the underlying DB structure changes.
Whatever special methods I add to my plug-ins, they could simply mean passing complex LINQ queries to the generic Get method and getting the results on the client side. Most of the entity management happens on the client. WCF becomes a simple (yet powerful) layer over my database that lets me access any entity and process any complex query based on client-side LINQ queries.
Thanks,
M
Have these 4 CRUD operations in a separate Domain Service.
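As a rough sketch of what such a separate service could look like (all names here are illustrative): WCF cannot expose open generic operations, so "any entity" usually ends up as a shared base type plus known types, and "any LINQ query" as a serializable filter that the service translates into a real query on the server.

    using System.Collections.Generic;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    // Shared base type standing in for "any entity"; a real project would
    // use its generated entity base class or specific known types.
    [DataContract] public abstract class EntityBase { }
    [DataContract] public class Customer : EntityBase { }   // hypothetical
    [DataContract] public class Order : EntityBase { }      // hypothetical

    [ServiceContract]
    [ServiceKnownType(typeof(Customer))]
    [ServiceKnownType(typeof(Order))]
    public interface IGenericCrudService
    {
        // "filterExpression" is a serializable stand-in for the client's
        // LINQ query; the service turns it into a server-side query.
        [OperationContract]
        IList<EntityBase> Get(string entitySetName, string filterExpression);

        [OperationContract] void Insert(EntityBase entity);
        [OperationContract] void Update(EntityBase entity);
        [OperationContract] void Delete(EntityBase entity);
    }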
I am in the process of implementing an enhancement to an existing web application (A). The new solution will provide features (charts/images/data) to application A. The enhancement will be a new project and will generate new assemblies. I am trying to identify the most elegant way to read this information:
1) Add a binary reference and read the data directly. The new assemblies live with your application and the two are married together.
2) Write a WCF call and get the data. This helps decouple the applications.
The new application will require me to buy some expensive licenses, so if I go with the 2nd option I can limit the license fee to a single server, or at most 2-3. My current application runs on a web farm of 8 servers.
Please share the pros/cons of both approaches.
Thanks.
If you decouple the two pieces sufficiently, you will also permit the use of clients running something other than .NET. With the first option, you could only support .NET clients. This may turn out to be important even if today you are absolutely certain that only .NET will ever be used: tomorrow, your company may be purchased by another which is a Java or PHP shop.
Even if you never need to support a non-.NET client, coupling to the assemblies will require you to maintain version compatibility between the client and server. If that coupling is not necessary, then use option #2.
The benefit of using WCF (decoupled approach) is that you get a deployment option to take it outside of the machine if it impacts the machine too much in terms of processing or storage.
The downside is that you'll likely pay some performance hit compared to linking directly.
I'm sure you can do some dynamic linking so you don't have to deploy to all 8 servers.
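For what it's worth, the decoupled option might look roughly like this on the service side; the contract and operations are made up, the point being that only the box hosting this service needs the licensed components:

    using System.ServiceModel;

    // Illustrative contract for option 2: the licensed charting components
    // are installed only on the box hosting this service, and the 8-server
    // web farm just calls it over WCF.
    [ServiceContract]
    public interface IChartService
    {
        [OperationContract]
        byte[] RenderChart(string chartId);    // e.g. a rendered PNG

        [OperationContract]
        string GetChartData(string chartId);   // e.g. the underlying data as XML/JSON
    }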