I tried to use a continuous query to monitor a cache, setting an initial query, a LocalListener, and a RemoteFilter as the example does.
The issue I ran into is that when the client reconnects to the Ignite cluster, the initial query fetches data from the cache that the client may have already received.
I tried setting a fixed consistent ID and instance name:
cfg.setConsistentId("de01");
cfg.setIgniteInstanceName("test1");
but that does not work.
Is there any way to solve this issue?
Many thanks,
During the disconnect phase of a reconnect, the server closes the query listener and loses track of which updates were already sent to the client. The only way not to miss updates in that situation is to run the initial query again.
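What you can do is deduplicate on the client side: keep a high-water mark and filter the initial query with it, so re-running it after a reconnect skips entries you already processed. A minimal sketch (the Value class, its version field, and the lastSeen bookkeeping are illustrative assumptions, not Ignite API; your values need some monotonic field for this to work):

import java.util.concurrent.atomic.AtomicLong;
import javax.cache.Cache;
import javax.cache.event.CacheEntryEvent;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class ResumableListener {
    // Hypothetical value type with a monotonically growing version.
    public static class Value {
        final long version;
        Value(long version) { this.version = version; }
    }

    // Highest version this client has processed so far.
    private final AtomicLong lastSeen = new AtomicLong();

    public void subscribe(IgniteCache<Integer, Value> cache) {
        long from = lastSeen.get();

        ContinuousQuery<Integer, Value> qry = new ContinuousQuery<>();

        // The initial query skips entries this client already processed.
        // The filter is shipped to the server nodes (IgniteBiPredicate is
        // Serializable), so keep it free of heavy captured state.
        qry.setInitialQuery(new ScanQuery<Integer, Value>((k, v) -> v.version > from));

        qry.setLocalListener(events -> {
            for (CacheEntryEvent<? extends Integer, ? extends Value> e : events)
                handle(e.getKey(), e.getValue());
        });

        QueryCursor<Cache.Entry<Integer, Value>> cur = cache.query(qry);

        // Drain the initial query results.
        for (Cache.Entry<Integer, Value> e : cur)
            handle(e.getKey(), e.getValue());

        // Keep 'cur' open to continue receiving live updates. After a
        // reconnect, simply call subscribe(cache) again: the filter above
        // drops everything already seen.
    }

    private void handle(Integer key, Value val) {
        lastSeen.accumulateAndGet(val.version, Math::max);
        // ... application logic ...
    }
}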
I have an API exposing a WebSocket connection, and to keep the connection alive my ReactJS frontend sends an echo over the WebSocket every second. Whenever the server receives the message, it runs a database query (a SELECT). So I'm effectively querying the database every second. Will this kill the system over time? Is it poor practice to query a database that frequently? Any explanation would help me improve the code. My system will go to production soon, and I'd like to avoid any silly problems.
As you describe it, a query is executed every second, and doing this will eventually strain your server's resources.
In my opinion, there are two possible solutions:
1. Reduce the number of database requests by caching the query result, so repeated pings are served from memory (see the sketch below).
2. Restructure your WebSocket flow so that the server reads from the database and pushes data to the user only when an event occurs or the data actually changes.
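A minimal sketch of option 1, assuming a Java backend (the Supplier stands in for your SELECT, and the TTL is a made-up number): cache the query result and only hit the database when the cached copy is older than the TTL.

import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Cache-aside with a TTL: at most one database round-trip per TTL window,
// no matter how many keep-alive pings arrive in between.
public class CachedQuery<T> {
    private static final class Entry<T> {
        final T value;
        final long loadedAt;
        Entry(T value, long loadedAt) { this.value = value; this.loadedAt = loadedAt; }
    }

    private final long ttlMillis;
    private final Supplier<T> loader; // runs the actual SELECT
    private final AtomicReference<Entry<T>> current = new AtomicReference<>();

    public CachedQuery(long ttlMillis, Supplier<T> loader) {
        this.ttlMillis = ttlMillis;
        this.loader = loader;
    }

    public T get() {
        Entry<T> e = current.get();
        long now = System.currentTimeMillis();
        if (e == null || now - e.loadedAt > ttlMillis) {
            // Note: no stampede protection; concurrent callers may both load.
            e = new Entry<>(loader.get(), now);
            current.set(e);
        }
        return e.value;
    }
}

With, say, a 5-second TTL, the one-per-second pings are served from memory and the database sees at most one query every 5 seconds.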
I want to get a CacheEvent when an update to the Ignite client cache is made.
I have two servers connected and replicating data between them.
The first application connects using the thin client (Ignition.startClient()) and publishes updates to the Ignite instance.
I would like the second instance to connect to the remote server and be notified when the cache is updated.
Currently this works only for updates received by a full client node started with:
Ignition.start()
Continuous queries could help you: https://apacheignite.readme.io/docs/continuous-queries. You need to run the query from the second client instance and subscribe to the events you are interested in.
To get cache events, you might try a remote event listener: https://apacheignite.readme.io/docs/events#section-remote-events
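Roughly like this (the cache name myCache is an assumption, and two caveats: the events API needs a thick client node started with Ignition.start(), not the thin client from Ignition.startClient(), and EVT_CACHE_OBJECT_PUT must also be listed in setIncludeEventTypes on the server nodes, because events are disabled by default):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteEvents;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.events.CacheEvent;
import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT;

public class CacheEventSubscriber {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(true);                        // thick client node
        cfg.setIncludeEventTypes(EVT_CACHE_OBJECT_PUT); // must match server config

        Ignite ignite = Ignition.start(cfg);

        IgniteEvents events = ignite.events(ignite.cluster().forCacheNodes("myCache"));

        // The second lambda filters on the server nodes; the first one runs
        // locally on this client for every event that passes the filter.
        events.remoteListen(
            (nodeId, evt) -> {
                System.out.println("Updated key: " + ((CacheEvent) evt).key());
                return true; // keep listening
            },
            evt -> "myCache".equals(((CacheEvent) evt).cacheName()),
            EVT_CACHE_OBJECT_PUT);

        // ... keep the node running while listening ...
    }
}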
I'm trying to get SQL notifications to work with BizTalk, but I'm stuck at one point.
The binding of the receive location is the following:
The SQL Server supports notifications, and the connection string is correct.
When I start the receive location, it works correctly exactly once, but when I disable it and start it again, I get the following error in the event log:
The Messaging Engine failed to add a receive location
"RL.MDM.SQL" with URL
"mssql://.//Database?InboundId=GetNewMDMChanges" to the adapter
"WCF-SQL". Reason:
"Microsoft.ServiceModel.Channels.Common.TargetSystemException: The
notification callback returned an error. Info=Invalid.
Source=Statement. Type=Subscribe.
I can't start the receive location again until I execute the following command on the database to enable the broker:
alter database MDMDEV set enable_broker with rollback immediate;
The strange thing is that when I check whether the broker is still enabled before executing the command above, I see that it is indeed still enabled.
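For reference, the check I run is something like this, using the standard flag in sys.databases:

select is_broker_enabled from sys.databases where name = 'MDMDEV';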
So the command to enable the broker fixes my problem for exactly one more notification, and then I have to do it all again.
Has anybody had this problem, or can anyone tell me what I'm doing wrong?
Thanks in advance.
Regarding the Notifications feature in general, my recommendation is to not use it.
With both SQL Server and Oracle, the notifications feature is quite fragile and will stop receiving events with no warning or error. When this happens, the only way to recover is to disable and re-enable the receive location.
Basically, I have found it not reliable enough to use in production apps.
If you or your organization own the database, polling (plus triggers if needed) is 100% reliable.
This article describes several different polling scenarios: BizTalk Server: SQL Patterns for Polling and Batch Retrieve.
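The simplest polling shape looks roughly like this (table and column names are made up for illustration): claim rows inside a single atomic statement so a poll never returns the same row twice, even if polls overlap.

-- runs on every polling interval; OUTPUT returns the claimed rows
update top (100) dbo.MdmChanges
set Processed = 1
output inserted.Id, inserted.Payload
where Processed = 0;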
Not long ago I was faced with the problem of continuously transferring data from one database server to another.
My only reason for doing this is so that when the first database server goes down, the system can use the second server.
For this purpose I used transaction log shipping.
As far as I can see, it is working fine now, copying logs from one server to the other every 15 minutes.
My question is: when the critical moment comes and the first server goes down, how can I use the database on the second server?
Right now the database says "Restoring..." and I can do nothing with it.
I understand that this is because it is staying in sync with the first server.
But when I need that database, how can I switch it into normal mode, where I can query and modify it?
Thanks a lot!
It should not have to be in the restoring state; as far as I remember from setting this up, you can configure the secondary in standby mode so that it sits in 'read-only' mode between restores. So you basically strip the read-only mode and use it as if it were your primary database.
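Either way, the step that brings the secondary fully online is a restore with recovery (the database name here is a placeholder); after this it accepts reads and writes, but it also stops accepting shipped logs:

-- apply any remaining log backups with norecovery first, then:
restore database YourDatabase with recovery;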
We are facing a SQL timeout issue, and I found that the error event ID is either 5586 or 3355 (unable to connect / network issue). I can also see a few other database-related error event IDs (3351 and 3760, permission issues) reported at different times.
What could be the reason? Any help would be appreciated.
Can you elaborate a little? When is this happening? Can you reproduce the behavior or is it sporadic?
It appears SharePoint is involved. Is it possible there is high demand for a large file?
You should check for blocking/locking that might be preventing your query from completing. Also, if you have lots of computed/calculated columns (or just lots of data), your query may take a long time to compute.
Finally, if you can't find something blocking your result or optimize your query, it's possible to increase the timeout duration (set it to "0" for no timeout). Do this in Enterprise Manager under the server or database settings.
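A quick way to look for blocking is something like this (it uses sys.dm_exec_requests, which assumes SQL Server 2005 or later, newer than the Enterprise Manager era):

-- sessions that are waiting, and the session blocking them
select session_id, blocking_session_id, wait_type, wait_time
from sys.dm_exec_requests
where blocking_session_id <> 0;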
Troubleshooting Kerberos Errors. It never fails.
Are some of your web apps running under either the Local Service or Network Service account? If so, and your databases are not on the same machine (i.e., SharePoint is on machine A and SQL on machine B), authentication will fail for some tasks (e.g., timer-job related actions) but not all. For instance, content databases seem to remain accessible (weird, I know, but I've seen it happen).