I've worked through the documentation on Akka.NET Persistence Query here, but I'm struggling to figure out how I would hook up any of those queries inside an ASP.NET 6 Blazor Server startup pipeline using the new Akka.NET Hosting model.
What I have in mind is to Sink such a query out to a SignalR hub that will cause views to refresh their data based on the output of a ReadJournalFor stream.
Has anyone done this, and if so, could you please give me some guidance?
I have not done this before, much less am I an expert at this, but I can try to point you in the right direction! :)
If you want to run a local actor, you can spawn the ProjectionBehavior like any other Behavior (the snippet below is Java, from the Akka Projections docs, but it illustrates the idea). This can be useful for testing or when running a local ActorSystem without Akka Cluster.
SourceProvider<Offset, EventEnvelope<ShoppingCart.Event>> sourceProvider(String tag) {
  return EventSourcedProvider.eventsByTag(system, CassandraReadJournal.Identifier(), tag);
}

Projection<EventEnvelope<ShoppingCart.Event>> projection(String tag) {
  return CassandraProjection.atLeastOnce(
      ProjectionId.of("shopping-carts", tag), sourceProvider(tag), ShoppingCartHandler::new);
}

Projection<EventEnvelope<ShoppingCart.Event>> projection1 = projection("carts-1");

ActorRef<ProjectionBehavior.Command> projection1Ref =
    context.spawn(ProjectionBehavior.create(projection1), projection1.projectionId().id());
You can combine this with your predefined query, e.g.:
var queries = PersistenceQuery.Get(actorSystem)
    .ReadJournalFor<SqlReadJournal>("akka.persistence.query.my-read-journal");
var mat = ActorMaterializer.Create(actorSystem);
Source<string, NotUsed> src = queries.AllPersistenceIds();
So I was thinking that maybe your queries could be linked to your ProjectionBehavior so that the Akka.Hosting model can host it; a rough sketch of how the query-to-SignalR wiring might look under Akka.Hosting is below.
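Purely as a rough, untested sketch of the general idea (not something I have running): the snippet below materializes the AllPersistenceIds query once the ActorSystem has been started by Akka.Hosting and pushes every element out to a SignalR hub. The system name, the EventsHub class, the "PersistenceIdAdded" client method, the /events route and the read-journal path are all placeholder names of my own; swap in whatever your project uses.

// Program.cs for ASP.NET 6 minimal hosting (implicit usings assumed for the ASP.NET types).
using Akka;
using Akka.Hosting;
using Akka.Persistence.Query;
using Akka.Persistence.Query.Sql;
using Akka.Streams;
using Akka.Streams.Dsl;
using Microsoft.AspNetCore.SignalR;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddRazorPages();
builder.Services.AddServerSideBlazor();
builder.Services.AddSignalR();

builder.Services.AddAkka("BlazorSystem", (akka, provider) =>
{
    // WithActors runs after the ActorSystem has started, which is a convenient
    // place to materialize a long-running persistence-query stream.
    akka.WithActors((system, _) =>
    {
        var readJournal = PersistenceQuery.Get(system)
            .ReadJournalFor<SqlReadJournal>("akka.persistence.query.my-read-journal");

        // Resolve the SignalR hub context from the ASP.NET DI container.
        var hub = provider.GetRequiredService<IHubContext<EventsHub>>();

        // Push every element of the query out to the connected Blazor clients.
        readJournal.AllPersistenceIds()
            .SelectAsync(1, async persistenceId =>
            {
                await hub.Clients.All.SendAsync("PersistenceIdAdded", persistenceId);
                return Done.Instance;
            })
            .RunWith(Sink.Ignore<Done>(), system.Materializer());
    });
});

var app = builder.Build();
app.MapBlazorHub();
app.MapHub<EventsHub>("/events");   // hypothetical hub endpoint
app.Run();

// Hypothetical hub; Blazor components (or a scoped service they inject) would
// listen for "PersistenceIdAdded" and re-query their data when it arrives.
public class EventsHub : Hub { }

The tag- or persistence-id-based queries would plug in the same way; only the Source you start from and the payload you send to the hub change.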
Related sources:
Getakka.net Persistence Query
Akka Projections docs
Akka.Hosting
In a Django application I need to call an external RabbitMQ instance, running on a Windows server, and use an application there, while the Django app itself runs on a Linux server.
I'm currently able to add a task to the queue by using the Celery send_task:
app.send_task('tasks', kwargs=self.get_input(), queue=Queue('queue_async', durable=False))
My settings look like this:
CELERY_BROKER_URL = CELERY_CONFIG['broker_url']
BROKER_TRANSPORT_OPTIONS = {"max_retries": 3, "interval_start": 0, "interval_step": 0.2, "interval_max": 0.5}
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_DEFAULT_QUEUE = 'celery'
CELERY_TASK_RESULT_EXPIRES = 3600
CELERY_RESULT_BACKEND = 'rpc://'
CELERY_CREATE_MISSING_QUEUES = True
What I'm not sure about is how I can get and parse the response, since send_task only returns a key.
If you want to store the results of your task, you can use the result_backend parameter (or CELERY_RESULT_BACKEND, depending on the version of Celery you're using).
The complete list of configuration options can be found here (search for result_backend on that page): https://docs.celeryproject.org/en/stable/userguide/configuration.html
Many options are available for storing results: SQL databases, NoSQL databases, Elasticsearch, Memcached, Redis, etc. Choose whichever suits your project stack.
Thanks, that helped my understanding. Since I want to further process the answer, I use rpc as already defined in the config in my example.
What I found useful was the Celery-Java example describing the interaction with a Java app, because most Python Celery examples assume that the consumer is the same application; it gives a good example of how to make a request from the Python side.
Therefore my implementation is now:
result = app.signature('tasks', kwargs=self.get_input(), queue=Queue('queue_async', durable=False)).delay().get()
which waits for and parses the result.
My aim is to add only slave URIs, because the master is not available in my case. But the Lettuce library returns:
io.lettuce.core.RedisException: Master is currently unknown: [RedisMasterSlaveNode [redisURI=RedisURI [host='127.0.0.1', port=6382], role=SLAVE], RedisMasterSlaveNode [redisURI=RedisURI [host='127.0.0.1', port=6381], role=SLAVE]]
So the question is: is it possible to avoid this exception somehow? Maybe through configuration? Thank you in advance.
UPDATE: I forgot to say that after borrowing an object from the pool I set connection.readFrom(ReadFrom.SLAVE) before running commands.
GenericObjectPoolConfig config = fromRedisConfig(properties);
List<RedisURI> nodes = new ArrayList<>(properties.getUrl().length);
for (String url : properties.getUrl()) {
    nodes.add(RedisURI.create(url));
}
return ConnectionPoolSupport.createGenericObjectPool(
        () -> MasterSlave.connect(redisClient, new ByteArrayCodec(), nodes), config);
The problem was that I tried to set data, which is possible only on the master node. So there is no problem with MasterSlave; getting data works perfectly.
In my WPF application I'm trying to use offline map functionality. Right now my feature service is configured for data sync, and I'm able to create a data replica on the server and download a local copy of the geodatabase.
_gdbSyncTask = await GeodatabaseSyncTask.CreateAsync(_featureServiceUri);
Envelope extent = new Envelope(xmin, ymin, xmax, ymax, new SpatialReference(wkidStart));
GenerateGeodatabaseParameters generateParams = await _gdbSyncTask.CreateDefaultGenerateGeodatabaseParametersAsync(extent);
_generateGdbJob = _gdbSyncTask.GenerateGeodatabase(generateParams, _gdbPath);
_generateGdbJob.JobChanged += GenerateGdbJobChanged;
_generateGdbJob.ProgressChanged += ((object sender, EventArgs e) =>
{
    UpdateProgressBar();
});
_generateGdbJob.Start();
After the initial synchronization, I'm able to successfully work with the map in offline mode. This includes operations like adding new geometries or editing existing polygons inside the local DB.
However, when I try to synchronize changes back to the server, I get no results.
To synchronize the local database, I'm using the following code:
SyncGeodatabaseParameters parameters = new SyncGeodatabaseParameters()
{
    GeodatabaseSyncDirection = SyncDirection.Bidirectional,
    RollbackOnFailure = false
};

Geodatabase gdb = await Geodatabase.OpenAsync(this.GetGdbPath());
foreach (GeodatabaseFeatureTable table in gdb.GeodatabaseFeatureTables)
{
    long id = table.ServiceLayerId;
    SyncLayerOption option = new SyncLayerOption(id);
    option.SyncDirection = SyncDirection.Bidirectional;
    parameters.LayerOptions.Add(option);
}

_gdbSyncTask = await GeodatabaseSyncTask.CreateAsync(_featureServiceUri);
SyncGeodatabaseJob job = _gdbSyncTask.SyncGeodatabase(parameters, gdb);
job.JobChanged += SyncJob_JobChanged;
job.ProgressChanged += SyncJob_ProgressChanged;
job.Start();
Everything goes well. The synchronization ends with status “Succeeded”, and the messages logged by the SyncGeodatabaseJob look like the ones in the screenshot below:
However, when I open the edited feature layer from the server inside the web map client, I cannot find any of my local changes. In the server database I can also see that no new records were created during synchronization.
The interesting thing is that when I open the “Replica” data in the web client I can see the following information:
Replica Server Gen: 2
Creation Date: 2018/02/07 10:49:54 UTC
Last Sync Date: 2018/02/07 10:49:54 UTC
The “Last Sync Date” is equal to the replica “Creation Date”. However, in the replica log in ArcMap I can see the following information:
Can anyone tell me how I should interpret the situation described above? Am I missing some steps in my code? Or is some configuration missing on the server? It looks like data modifications are successfully pushed back to the replica on the server, but after that the replica is not synchronized with the server database (should that work automatically?).
I'm new to ArcGIS development, so any help will be appreciated.
Thanks for all the answers. It turned out that versioning is enabled on the server database, and the offline, versioned changes were not reconciled to the server.
After running the reconcile/post script (http://desktop.arcgis.com/en/arcmap/10.3/manage-data/geodatabases/automate-reconcile-post-after-sync.htm), the offline changes became visible to other system users.
The code looks OK at a quick glance, so I would assume that there is something going on in the setup.
What do you get back from the sync operation after it has completed? Note that you can simply await syncJob.GetResultsAsync to start the job and wait for the results; a rough sketch of that is below.
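For example, something along these lines (untested, and the member names such as GetResultAsync, SyncLayerResult, FeatureEditResult and Job.Messages are my assumptions based on the Runtime SDK's job pattern, so they may differ between versions) would show what the service actually reported back for each layer:

// Sketch only: run the sync job via GetResultAsync and dump what it reports.
// (System.Diagnostics.Debug is used here for logging.)
SyncGeodatabaseJob job = _gdbSyncTask.SyncGeodatabase(parameters, gdb);

// GetResultAsync starts the job and completes when the job has finished.
IReadOnlyList<SyncLayerResult> results = await job.GetResultAsync();

Debug.WriteLine($"Sync finished with status: {job.Status}");

// The job messages usually say what the service did with the uploaded edits.
foreach (JobMessage message in job.Messages)
{
    Debug.WriteLine(message.Message);
}

// A non-empty result list typically means some per-layer edits were rejected.
foreach (SyncLayerResult layerResult in results)
{
    foreach (FeatureEditResult editResult in layerResult.EditResults)
    {
        if (editResult.CompletedWithErrors)
        {
            Debug.WriteLine($"Edit on ObjectID {editResult.ObjectId} failed: {editResult.Error?.Message}");
        }
    }
}

If the messages show the edits being applied but nothing appears in the service, that usually points at the server-side setup rather than the client code.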
How is the Feature Service set up on the server? Please refer to https://enterprise.arcgis.com/en/server/latest/publish-services/linux/prepare-data-for-offline-use.htm for the different ways to set these things up.
I am using Pivotal GemFire 9.0.0 with 1 Locator and 1 Server. The Server has a Region called "submissions", like below -
<gfe:replicated-region id="submissionsRegion" name="submissions"
statistics="true" template="replicateRegionTemplate">
...
</gfe:replicated-region>
I am getting Region as null when executing the following code -
Region<K, V> region = clientCache.getRegion("submissions");
Surprisingly, the same ClientCache returns all the records when I query using OQL and QueryService as shown below -
String queryString = "SELECT * FROM /submissions";
QueryService queryService = clientCache.getQueryService();
Query query = queryService.newQuery(queryString);
SelectResults results = (SelectResults) query.execute();
I am initializing my ClientCache like this -
ClientCache clientCache = new ClientCacheFactory()
        .addPoolLocator("localhost", 10479)
        .set("name", "MyClientCache")
        .set("log-level", "error")
        .create();
I am really baffled by this. Any pointer or help would be great.
You need to configure your ClientCache (either through a cache.xml or pure GemFire API) with the regions as well. Using your example:
ClientRegionFactory regionFactory = clientCache.createClientRegionFactory(ClientRegionShortcut.PROXY);
Region region = regionFactory.create("submissions");
ClientRegionShortcut.PROXY is used here just for the sake of simplicity; you should use the shortcut that best meets your needs.
The OQL works as expected because you are obtaining the QueryService through the ClientCache.getQueryService() method (instead of ClientCache.getLocalQueryService()), so the query is actually executed on the server side.
You can get more information about how to configure the Client/Server topology in
Client/Server Configuration.
Hope this helps.
Cheers.
Yes, you need to "define" the corresponding client-side Region, matching the server-side REPLICATE Region by name (i.e. "submissions"). Actually this is a requirement independent of the server Regions' DataPolicy type (e.g. REPLICATE or PARTITION).
This is necessary since not every client wants to know about, or even needs to have, data/events from every possible server Region. Of course, this is also configurable through subscription and "Interest Registration" (with Client/Server Event Messaging, or alternatively, CQs).
Anyway, you can completely avoid using the GemFire API directly, or even GemFire's native cache.xml (which I highly recommend avoiding), by using either SDG's XML namespace...
<gfe:client-cache properties-ref="gemfireProperties" ... />
<gfe:client-region id="submissions" shortcut="PROXY"/>
Or by using Spring JavaConfig with SDG's API...
@Configuration
class GemFireConfiguration {

  Properties gemfireProperties() {
    Properties gemfireProperties = new Properties();
    gemfireProperties.setProperty("log-level", "config");
    ...
    return gemfireProperties;
  }

  @Bean
  ClientCacheFactoryBean gemfireCache() {
    ClientCacheFactoryBean gemfireCache = new ClientCacheFactoryBean();
    gemfireCache.setClose(true);
    gemfireCache.setProperties(gemfireProperties());
    ...
    return gemfireCache;
  }

  @Bean(name = "submissions")
  ClientRegionFactoryBean submissionsRegion(GemFireCache gemfireCache) {
    ClientRegionFactoryBean submissions = new ClientRegionFactoryBean();
    submissions.setCache(gemfireCache);
    submissions.setClose(false);
    submissions.setShortcut(ClientRegionShortcut.PROXY);
    ...
    return submissions;
  }

  ...
}
The "submissions" Region can be wrapped with SDG's GemfireTemplate, which will handle getting the "correct" QueryService on your behalf when running queries using the find(..) method.
Of course, you may be interested in making your client "submissions" Region a CACHING_PROXY too. You will then need to register "interest" in the keys or data you care about. CQs are the best way to do this, as they use query criteria to define the data of "interest".
CACHING_PROXY is exactly what it sounds like: it caches data locally in the client based on the interest policies. This also gives you the ability to use the "local" QueryService to query data locally, avoiding the network hop.
Anyway, many options here.
Cheers,
John
A few months back I was working on an OData WCF project and I had some problems with parsing custom headers for token auth (apiKey).
At that time, being quite a noob (still am!), I posted this SO question: JayData oData request with custom headers
Today I am working on a new project with the JayData OData server and client library, and this:
application.context.prepareRequest = function (r) {
    r[0].headers['apikey'] = '123456';
};
was working fine until I had to do a MERGE request. I found out that somehow the MERGE request was overriding my headers, so I investigated further.
At first it appears that in oDataProvider.js (~line 617), in the _saveRest method, the headers are not inherited:
request = {
    requestUri: this.providerConfiguration.oDataServiceHost + '/',
    headers: {
        MaxDataServiceVersion: this.providerConfiguration.maxDataServiceVersion
    }
};
but a few lines later we get:
this.context.prepareRequest.call(this, requestData);
which "should" call my own prepareRequest, but doesn't... Instead it still points to:
//Line 11302 jaydata.js
prepareRequest: function () { },
which of course does... nothing! Funnily enough, when you execute a simple GET, the same code, supposedly on the same context instance, works and points to my prepareRequest override.
I can assert with reasonable confidence that somehow the context between GET and MERGE is not the same instance. I can't see, however, any place where the context instance is reassigned.
Has anyone got a clue?
PS: this is NOT a CORS issue. My OPTIONS request passes fine, and manually feeding the headers in oDataProvider works.
More
I followed the lead on different context instances and found something interesting: calling EntitySet.save() ends up calling the EntityContext constructor. See the trace:
$data.Class.define.constructor (jaydata.js:10015)
EntityContext (VM110762:7)
Service (VM110840:8)
storeToken.factory (jaydata.js:14166)
$data.Class.define._getContextPromise (jaydata.js:13725)
$data.Class.define._getStoreContext (jaydata.js:13700)
$data.Class.define._getStoreEntitySet (jaydata.js:13756)
$data.Class.define.EntityInstanceSave (jaydata.js:13837)
$data.Entity.$data.Class.define.save (jaydata.js:9774)
(anonymous function) (app.js:162) // My save()
That explains why I get two different instances...
Hack
Replacing the prepareRequest function directly in the class definition works, but it's ugly!
For now I can cope with this:
$data.EntityContext.prototype.prepareRequest = function (r) {
    r[0].headers['apikey'] = '12345';
};
This works fine as long as you only need to talk to a single endpoint.
Final word based on my experience
As much as I like JayData, it is obvious that they created a monster and it's getting out of their hands (poor forum, no community, half-documented, ...).
I chose JD because I was lazy and wanted to keep working with my old WCF DataService. Switching to Web API seemed wrong or too much work for me.
Also, as a .NET dev, I liked the strong typing of my entities and the ability to work with a concrete model generated from the JD tools. However, in the end, it was adding confusion. Every time my server-side model changed I had to fetch the new metadata and scaffold a new entityModel.
I ended up switching to Web API and migrated my data service layer to Breeze. And seriously, it's a breeze to work with!
The documentation is absolutely brilliant, and here on SO you can always count on Ward or Jay Traband to reply with a very high level of professionalism.
In the end, I realize this should probably be more of a wiki entry than a question...