Boost websocket_server_async.cpp example - boost-asio

I was reading the boost websocket example here.
In the listener class, it initializes the socket_ object. After the listener accepts a connection, socket_ is std::move'd into the session object. I didn't see any logic to recreate the socket_ object. When the listener accepts another connection, would that be an issue?

The only question you need to ask is: is the state of socket_ invalid after moving it into a session? The answer is no.
After being moved from, an asio socket is reset to its original state, as if newly constructed, so it can safely be reused as the target of the next accept.
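For reference, here is the relevant part of the example, condensed (error handling trimmed; types as in websocket_server_async.cpp):

void listener::do_accept()
{
    // The accept operation fills socket_ with the next incoming connection.
    acceptor_.async_accept(
        socket_,
        std::bind(
            &listener::on_accept,
            shared_from_this(),
            std::placeholders::_1));
}

void listener::on_accept(boost::system::error_code ec)
{
    if(!ec)
    {
        // Hand the connected socket to a new session. After this move,
        // socket_ is back in a freshly-constructed (closed) state.
        std::make_shared<session>(std::move(socket_))->run();
    }

    // Accept another connection into the now-reset socket_.
    do_accept();
}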

Circular (COM) reference when using IDispEventSimpleImpl

My question is about sinking COM events from a sub object properly, without creating a circular reference that would lead to memory leak(s).
There is an ActiveX control called CMyControl. This control creates an instance of an embedded web browser (IWebBrowser2) internally to display some HTML-content.
The web browser exposes an event source called DWebBrowserEvents2 that can deliver some interesting progress updates to CMyControl. Such as DocumentComplete when the HTML document has been fully loaded or when an error occurs etc.
And CMyControl will handle these events with the help of IDispEventSimpleImpl.
The issue I'm facing is that instances of CMyControl do not get destroyed when Release is called.
The direct reason for this is that the reference counter always ends up at 1 instead of 0.
It turns out that IDispEventSimpleImpl is indirectly responsible for this. That makes sense to me: the web browser needs the control's interface to sink the events, so it keeps a reference until you call IDispEventSimpleImpl::DispEventUnadvise, at which point the interface is released.
But when Release gets called on IMyControl, the event source won't get disconnected.
I understand why: there is no reason it would do that; Release doesn't even know about the event connection.
Stumbled upon this post where they advise (pun intended) to create a custom "sink" object:
https://microsoft.public.vc.atl.narkive.com/4MgGRavd/dispeventadvise-dispeventunadvise-problem
The idea is that the sink object would see the events fired by the web browser first, before passing them on to CMyControl.
For this, an instance of this sink object would be stored inside CMyControl.
The sink object then connects to (and gets referenced by) the browser instead of the CMyControl instance itself. This breaks the circular reference.
Furthermore, the sink object gets passed a raw pointer to the "mothership" (the CMyControl instance) so it can perform a callback whenever an event occurs.
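As I understand it, the sink would look roughly like this (a sketch; the _ATL_FUNC_INFO, the handler and the callback name are mine, not from the post):

extern _ATL_FUNC_INFO DocumentCompleteInfo;
// e.g. { CC_STDCALL, VT_EMPTY, 2, { VT_DISPATCH, VT_VARIANT | VT_BYREF } }

class CBrowserSink :
    public IDispEventSimpleImpl<1, CBrowserSink, &DIID_DWebBrowserEvents2>
{
public:
    // Raw back-pointer to the control: no AddRef, hence no cycle.
    CMyControl* m_pOwner = nullptr;

    BEGIN_SINK_MAP(CBrowserSink)
        SINK_ENTRY_INFO(1, DIID_DWebBrowserEvents2, DISPID_DOCUMENTCOMPLETE,
                        OnDocumentComplete, &DocumentCompleteInfo)
    END_SINK_MAP()

    void __stdcall OnDocumentComplete(IDispatch* pDisp, VARIANT* pvUrl)
    {
        if (m_pOwner)
            m_pOwner->HandleDocumentComplete(pDisp, pvUrl); // forward to the control
    }
};

The browser then references only the sink (via DispEventAdvise on the sink), while the sink holds just a raw pointer back to CMyControl, so the control's reference count can reach zero; its destructor can call DispEventUnadvise and destroy the sink.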
My question is: is this really how it should be done? Isn't there a better/proper way to connect the events?

uvm_reg_predictor predict not working

In my environment, I have connected the predictor's bus_in port to the monitor's output analysis port. I have also implemented the reg_adapter's bus2reg function and connected the adapter to the predictor.
I'm using passive prediction (https://verificationacademy.com/cookbook/registers/integrating). The mirrored value of a uvm_reg should be updated automatically whenever a transaction is sent by the monitor. However, I did not see that happen. When I checked the source code for uvm_reg_predictor, it seems to fail in the get_reg_by_offset() function, so it never gets the uvm_reg object. Has anyone had a similar issue? If so, what was your solution? Thanks.
This issue can be resolved by configuring the offsets in the register map.
Make sure that the address carried by the transactions the adapter receives and the address in the register map are the same. Also add set_auto_predict(0) to turn off implicit prediction. In my case, get_reg_by_offset() failed because of an address mismatch: the mapping never happened, and therefore the predict method for that register was never called implicitly.
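For reference, the usual passive-prediction hookup looks roughly like this (a sketch; instance names and the map's base address are placeholders):

// in the env's connect_phase
m_predictor.map     = m_reg_model.default_map;
m_predictor.adapter = m_adapter;
m_monitor.ap.connect(m_predictor.bus_in);

// let only the predictor update the mirror values
m_reg_model.default_map.set_auto_predict(0);

// the map's base address and register offsets must match the addresses
// the monitor actually reports, e.g. when the model is built:
// default_map = create_map("default_map", 'h0, 4, UVM_LITTLE_ENDIAN);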

Apache ignite listening to state change of objects in local nodes

I am investigating a use case where Ignite has to listen for changes to a property of an object in the data grid and do some operations on that object. For performance, I want the processing to be done on the same node where the data is.
How can I get an event when the property of an object has changed to a specific value (e.g. object 'X' has a property 'state' which is set from 'created' to 'scheduled'), and make sure that events are only taken from the node where the object lives?
How can I make sure that once I receive the event and start processing it, nobody else changes the object (only reads are allowed) until processing is finished (in other words, a transaction starts as soon as the event is picked up)?
How can I make sure that the processing code is deployed to all nodes (processing is stateless) and that it only operates on local data (without a hard link between data object and code; in other words, if the processing code is updated in the future, the objects stay untouched)?
What I got from the docs is the following:
// Local listener that listens to local events.
IgnitePredicate<CacheEvent> locLsnr = evt -> {
    // CODE
    return result;
};

// Subscribe to the specified cache events occurring on the local node.
ignite.events().localListen(locLsnr, EventType.EVT_CACHE_OBJECT_PUT);
In the CODE block, I have to check for a state change on 'evt.newValue()'; can't that be done earlier, i.e. as a parameter to localListen somehow?
In the CODE block, is the object locked until I return the result? In other words, is this where I can be sure nobody else changes the object and where I can safely change it myself? IMO it is a strange place to do that, in a 'Predicate' definition rather than in a handler class.
Sven
Sven,
Your code looks correct and should work as you expect. Answering your questions:
The event listener is called right after the value is updated, so I think it's OK to check the field you're interested in inside the listener. If the field has not changed, just return right away.
The object is locked, because the listener is called inside the synchronization block for the entry. You can modify the same object, but I would not recommend executing any synchronous operations like cache updates inside the listener, because that is error-prone and can hurt performance. Do such work asynchronously, so that the lock is released as soon as possible.
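For example, something along these lines (a sketch; MyObject, getState() and process() are placeholders, and the executor is my addition):

ExecutorService exec = Executors.newSingleThreadExecutor();

IgnitePredicate<CacheEvent> locLsnr = evt -> {
    Object newVal = evt.newValue();

    // Filter inside the listener: only react to the state we care about.
    if (newVal instanceof MyObject && "scheduled".equals(((MyObject) newVal).getState())) {
        // The entry is locked while this listener runs, so hand the heavy
        // work to another thread and return quickly to release the lock.
        exec.submit(() -> process((MyObject) newVal));
    }

    return true; // keep listening
};

ignite.events().localListen(locLsnr, EventType.EVT_CACHE_OBJECT_PUT);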

OpenSSL: How to supply a custom pointer to the certificate verification callback

I want to use X509_STORE_set_verify_cb_func to receive certificate validation errors. I then want to store these errors in a list and process them later, after SSL_connect has returned.
However, my application is multithreaded and I want to avoid any mutex locking for this callback. Is there a way to pass a "void pointer", or to store one somewhere in the X509_STORE_CTX, so I can record each error in the "right" location and don't have to use a global error list and lock it while doing the SSL_connect?
Thanks
AFAIK you are indeed stuck with that - just stuff it as an entry in there under your own id. The other option is to deal with the SSL callbacks a bit more generically - see for example ssl_hook in ssl_engine_kernel.c of Apache's SSL module. While a bit more work, it gives you complete control over the entire process - and entirely in your own 'process space'.
Thanks,
Dw.
If you are using C11 or later, you can define a global thread_local variable
thread_local void * openssl_verify_context;
Then
Set openssl_verify_context before setting the callback (i.e. before X509_STORE_set_verify_cb_func).
Use openssl_verify_context in the callback.
If needed, read and unset openssl_verify_context after validating the certificate (i.e. after PKCS7_dataVerify).
The advantage of this solution is you do not need to know the details of the struct behind X509_STORE_CTX (it is hidden in recent versions of OpenSSL).
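A minimal sketch of that approach (C11; the error_list type and its helpers are placeholders, not OpenSSL API):

#include <openssl/ssl.h>
#include <openssl/x509_vfy.h>

thread_local void *openssl_verify_context; /* _Thread_local in plain C11 */

static int verify_cb(int preverify_ok, X509_STORE_CTX *ctx)
{
    /* Recover this thread's error list without any global lock. */
    struct error_list *errs = openssl_verify_context;

    if (!preverify_ok)
        error_list_add(errs,
                       X509_STORE_CTX_get_error(ctx),
                       X509_STORE_CTX_get_error_depth(ctx));

    return 1; /* continue the handshake; errors are processed after SSL_connect */
}

/* per connection, on the connecting thread: */
struct error_list errs = {0};
openssl_verify_context = &errs;
X509_STORE_set_verify_cb(SSL_CTX_get_cert_store(ssl_ctx), verify_cb);
/* ... SSL_connect(ssl), then inspect errs ... */
openssl_verify_context = NULL;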

Question regarding org.apache.commons.dbcp.BasicDataSource

I fixed a bug related to the way we were using BasicDataSource, and though I understand part of it, I still have some questions unanswered :)
Problem:
The application was not able to auto-connect to the database after a db failure.
The application uses the org.apache.commons.dbcp.BasicDataSource class as a connection pool for JDBC connections to an Oracle db.
Fix:
After some research I discovered that on the BasicDataSource, testOnBorrow and testOnReturn were not set. I provided the validation query used to test connections. This fixed the problem.
The max number of connections in the pool was set to 1.
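The relevant configuration ended up looking roughly like this (a sketch; the URL is a placeholder, and "SELECT 1 FROM DUAL" is the usual Oracle validation query):

BasicDataSource ds = new BasicDataSource();
ds.setUrl("jdbc:oracle:thin:@//dbhost:1521/SERVICE"); // placeholder
ds.setMaxActive(1);                          // pool capped at one connection
ds.setTestOnBorrow(true);                    // validate before handing out
ds.setTestOnReturn(true);                    // validate when handed back
ds.setValidationQuery("SELECT 1 FROM DUAL"); // Oracle validation query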
My Understanding:
The connection pool would hand over a connection to the application.
What I think was happening is that the application MAGICALLY returned the bad connection to the pool when the db crashed. Since the pool does not know it is a bad connection, it would hand the same connection to the application the next time one was needed, causing the application to not auto-reconnect to the db.
Now, after the fix, whenever a bad connection is returned to the connection pool it is discarded and won't be used again.
I know that BasicDataSource wraps the connection before giving it to the application, so that whenever the application calls con.close, BasicDataSource knows the connection is no longer in use and takes care of either returning it to the pool or discarding it.
Unanswered Question:
However, what I do not understand is what makes the application MAGICALLY return the connection to the connection pool when it is broken. [Note that the con.close method is not called when the connection exits ungracefully.] There is no way for BasicDataSource to know that the connection closed - or is there? Can someone point me to the code for that?
Is my overall understanding of why the fix worked correct?
Now, I know that this is kind of an old thread, but it's high on google search results, so I thought I might give it a quick answer. For more information on configuring the BasicDataSource, you should reference the DBCP Configuration page: http://commons.apache.org/proper/commons-dbcp/configuration.html
To answer the "unanswered" question of "How does BasicDataSource know when a connection is abandoned and needs to be returned to the connection pool?" (paraphrased)...
org.apache.commons.dbcp.BasicDataSource is able to monitor traffic and usage on the connections it offers by using a wrapper class for the Connection. Every time you call a method on the connection or any Statements created from the connection, you are actually calling a wrapper class that implements an interface or extends a base class with those same methods (Hurray for Polymorphism!). These custom methods allow the DataSource to know whether or not a Connection is active.
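In spirit, the wrapping looks something like this dynamic-proxy sketch (illustrative only; DBCP uses its own delegating classes, not java.lang.reflect.Proxy):

import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.util.concurrent.atomic.AtomicLong;

static Connection wrap(Connection raw, AtomicLong lastUsed) {
    return (Connection) Proxy.newProxyInstance(
        Connection.class.getClassLoader(),
        new Class<?>[] { Connection.class },
        (proxy, method, args) -> {
            lastUsed.set(System.currentTimeMillis()); // record activity on every call
            return method.invoke(raw, args);          // delegate to the real connection
        });
}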
On the BasicDataSource itself, there is a property called "removeAbandoned" and another called "removeAbandonedTimeout" that are used to configure this behavior of returning abandoned connections to the pool.
"removeAbandoned" is a boolean that indicates whether abandoned connections should be returned to the pool. It defaults to "false".
"removeAbandonedTimeout" is an int representing the number of seconds of inactivity that is allowed to pass before a connection is considered abandoned. The default value is 300 (5 minutes).
Looking at the test for abandoned connections, it appears that when a new connection is requested while all connections in the pool are "in use", the "in-use" connections are tested for abandonment (each maintains a timestamp of its last use).
See BasicDataSource#setRemoveAbandoned(boolean) and BasicDataSource#setRemoveAbandonedTimeout(int)
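For example (a sketch using the setters named above):

ds.setRemoveAbandoned(true);      // reclaim connections that look abandoned
ds.setRemoveAbandonedTimeout(60); // after 60 seconds of inactivity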
Regardless of how clever or not your connection pool is in closing abandoned connections, you should always ensure each connection is closed in a finally block, e.g.:
Connection conn = getConnection();
try {
    // ... perform work
} finally {
    conn.close();
}
Or use some other means such as Apache Commons DbUtils.