In BLPAPI is there a more efficient way to find the security for a subscription?

When you make a subscription, you pass in a correlationID and later use it to identify which security the returned fields belong to.
So you can keep a counter that increments on every subscription, use its value as the correlationID, and maintain a map from that counter value to the security object.
Is there a more efficient way to do this?

Instead of passing a number and keeping a map, you can use the security object itself as the correlationID.
Then, when you get a tick, instead of reading the counter and looking up the mapped security, you can just cast the correlationID's payload back to the security object's type and apply the changes directly to that object.
No lookups are needed, and there are no synchronization concerns around the map.
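A minimal sketch of this idea in the .NET flavor of the API (Bloomberglp.Blpapi), assuming a hypothetical MySecurity class holding the per-security state; the exact Subscription constructor overloads vary by API version:

using System.Collections.Generic;
using Bloomberglp.Blpapi;

class MySecurity  // hypothetical application type updated on each tick
{
    public string Ticker;
    public double LastPrice;
}

class TickHandler
{
    public List<Subscription> BuildSubscriptions(IEnumerable<MySecurity> securities)
    {
        var subs = new List<Subscription>();
        foreach (var sec in securities)
        {
            // Wrap the security object itself in the CorrelationID -- no counter, no map.
            subs.Add(new Subscription(sec.Ticker, "LAST_PRICE", new CorrelationID(sec)));
        }
        return subs;
    }

    public void OnMarketData(Message msg)
    {
        // Recover the security object directly from the CorrelationID payload.
        var sec = (MySecurity)msg.CorrelationID.Object;
        if (msg.HasElement("LAST_PRICE"))
            sec.LastPrice = msg.GetElementAsFloat64("LAST_PRICE");
    }
}

The C++ API supports the same idea: construct the CorrelationId from a void* and cast the result of asPointer() back to your type when the tick arrives.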

Patterns when designing REST POST endpoint when resource has a computed property

I have a resource, as an example a 'book'.
I want to create a REST POST endpoint to allow consumers to create a new book.
However, some of the properties are required and computed by the API, while others are taken exactly as provided.
Book
{
name,
color,
author # computed
}
Let's say the author is somehow calculated by the API based on the book name.
I can think of these solutions, each of which has drawbacks:
force the consumer to provide the author and just filter it out (do not take it into account as input) # bad because it is confusing when the provided author silently changes
allow the user to provide the author # same problem
do not allow the user to provide an author, and raise an exception if they do
The last solution seems to be the most obvious one. The main problem I can see is that it is inconsistent, and it may seem bizarre to consumers to see the author later on a GET request.
I want my POST endpoint to be as expressive as possible. So the POST and GET data transfer objects will look almost the same.
Are there any simple, expressive, and predictable patterns to consider?
Personally I'm a big fan of using the same format for a GET request as for a PUT.
This makes it possible for a client to do a GET request, add a property to the object it received, and immediately PUT it back. If your API and clients follow this pattern, it also means the server can easily add new properties to GET responses without breaking clients.
However, while this is a nice pattern, I don't really think the same expectation exists as much for 'creation'. There are usually many properties that make less sense to require when creating new items (think 'id', for example), so I usually:
Define a schema for PUT and GET.
Define a separate schema for POST that only contains the relevant properties for creation.
If users supply properties not in the schema, always error with a 422.
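As a sketch (C# DTOs with illustrative names, not from the question), the creation schema simply omits the computed and server-assigned fields:

public class BookCreateDto   // POST: only the client-supplied fields
{
    public string Name { get; set; }
    public string Color { get; set; }
}

public class BookDto         // GET / PUT: the full resource, including computed fields
{
    public string Name { get; set; }
    public string Color { get; set; }
    public string Author { get; set; }  // computed by the API from the name
}

Rejecting unknown members with a 422 can then be a serializer setting rather than per-property code (for example, System.Text.Json's JsonUnmappedMemberHandling.Disallow in .NET 8+).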
some of the properties are required and computed by API
Computed properties are neither required nor optional, by definition. No reason to ask consumers to pass such properties.
do not allow the user to provide an author and show an exception if the user provides it
Indeed, the DTO should not contain an author property. Consumers can send whatever they want over the network; it is the responsibility of the API provider to publish a contract (DTO) for consumers to use properly. The API provider controls which properties it considers, and no exception should be thrown, as the number of "bad" properties consumers could send is endless.
So the POST and GET data transfer objects will look almost the same
Making the DTOs of the same resource look the same is not a goal in itself. In many cases, the get-operation exposes far more properties than the post-operation for the same resource, especially when designing domain-driven APIs.
Are there any simple, expressive, and predictable patterns to consider?
If you want your API to express the fact that author is computed, you can have the following endpoints:
POST http://.../author-computed-books
GET http://.../books/1
Personally, I wouldn't implement it that way, since it does not look natural, but you get the idea.
I want my POST endpoint to be as expressive as possible. So the POST and GET data transfer objects will look almost the same.
Maybe just document the behavior instead of relying on implicit expectations like "it must be almost the same as the GET endpoint".
E.g. my POST endpoint is POST /number "1011" and my GET endpoint is GET /number -> 11. If I don't document that I expect binary but serve decimal, then nobody will know, and consumers will guess, for example, decimal for both. Beyond documentation, another way to be more explicit is changing the GET response to include the base, {"base": 10, "value": "11"}, or changing the GET endpoint to GET /number/decimal -> 11.
As for the computed author, I don't understand how you would compute it. Either the book is already registered and the consumer shouldn't register it again, or you don't know much about its author. In the latter case you can guess, e.g. based on Google results for the title, but it will be a guess, not necessarily the truth. The same goes for consumer-provided data, but at least that is what the consumers provided. There is no certainty. So for me it would be a complex property, not just a primitive one, if the source of the information matters: something like "author": {"name": "John Wayne", "source": "consumer/service"}. Normally it is complex anyway, because authors tend to have ids, names, other books, etc.
Another thought: if this behavior is surprising to consumers rather than expected, then I don't see why it is a feature at all. If author guessing is a service, then a possible solution is making the property mandatory and adding a guessing service, GET /author?by-book-name={book-name}, so consumers can use it if they want to. Or do the same with a completely optional property. This way you hand control back to the consumers over whether they want to use this service or not.

Is there a way to get raw message from MassTransit?

I have a consumer with the generic argument IEvent. This type is a base interface for all messages, and child interfaces of IEvent add further properties. I'd like to have access to the raw message with all the properties of the derived types, not just those in the IEvent scope. These properties are visible in the RabbitMQ admin dashboard, so I think there should be a way to get them out.
You could use context.TryGetMessage<T>() to request the specific type, which essentially attempts to deserialize the message into the specified type (as long as that type is in the list of messageTypes serialized into the header).
Otherwise, you can use context.TryGetMessage<JToken>() to get the JToken from JSON.NET, which can be used to navigate the raw message body.
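A minimal sketch of both approaches, assuming the Newtonsoft-based serializer (newer MassTransit versions default to System.Text.Json, where the JToken fallback does not apply) and a hypothetical OrderCreated message type:

using System;
using System.Threading.Tasks;
using MassTransit;
using Newtonsoft.Json.Linq;

public interface IEvent { }
public interface OrderCreated : IEvent   // hypothetical derived message type
{
    Guid OrderId { get; }
}

public class EventConsumer : IConsumer<IEvent>
{
    public Task Consume(ConsumeContext<IEvent> context)
    {
        // Try the concrete type first; this succeeds when OrderCreated is listed
        // in the envelope's messageTypes.
        if (context.TryGetMessage<OrderCreated>(out var order))
        {
            Console.WriteLine(order.Message.OrderId);
        }
        // Otherwise fall back to navigating the raw JSON body.
        else if (context.TryGetMessage<JToken>(out var raw))
        {
            Console.WriteLine(raw.Message.ToString());
        }
        return Task.CompletedTask;
    }
}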
Honestly, this isn't the best approach to properly handling events, etc., so I'd refer to the documentation to see how to properly consume the various message types (and let MassTransit do the hard work).

Custom NHibernate session implementation

I'm working on a system that performs bulk processing using NHibernate. I know that NHibernate was not designed for bulk processing, but nonetheless the system is working perfectly thanks to a number of optimizations.
The object at the lowest level of granularity (i.e. the root of my aggregates) has a number of string properties that cannot sensibly be modeled as many-to-one associations (e.g. "Comment"). In reality, the DB fields behind these properties take only so many distinct values (for example because most - but not all - comments are machine-generated), so when hydrating tons of objects, a lot of memory is wasted on thousands and thousands of string instances with the same values.
I was thinking of optimizing this scenario transparently by creating my own NHibernate custom type that enhances NHibernate's StringType by overriding NullSafeGet() and doing a dictionary lookup to return the same instance of each string occurrence over and over. In other words, I would perform a kind of string interning myself. The use of a custom type allows me to select which properties of which objects should be "interned" by just specifying this type in the mapping files.
Ideally, I would like to "stick" this dictionary into the session, so that the lifetime of this string pool is tied to the lifetime of the first-level cache. After all, from our system's point of view, it makes sense to initialize this string pool when a session and its first-level cache are initialized, and to nuke the string pool when a session is closed. It is also a desirable property that concurrent sessions are completely isolated from each other, each having its own private dictionary.
Problem is, I can't find a way to "inject" a custom implementation of NHibernate's session into NHibernate itself so that an IType can access it at NullSafeGet() time, short of creating my own personal NHibernate code branch.
Is there a way to provide NHibernate with a custom session implementation?
I see three different approaches to solve this:
1. Use an interceptor
In the IInterceptor, you get:
void AfterTransactionBegin(ITransaction tx);
void BeforeTransactionCompletion(ITransaction tx);
2. Wrap opening and closing the session:
Opening and closing the session are explicit calls. It should be easy to wrap them in a method.
public ISession OpenSession()
{
    var session = sessionFactory.OpenSession();
    StringType.Initialize();  // reset the per-session string pool
    return session;
}
You could make it much nicer. I wrote a transaction service, which has events. Then you could handle begin transaction and end transaction events.
3. Don't attach the string cache to the session
It doesn't need to be tied to the session. Strings are immutable objects, so it does no harm to share them between sessions. To keep the cache from growing without bound, you could write your own or use an existing "most recently used" cache, which evicts the oldest items once it reaches a certain size.
This would probably take some time to implement, but it would be very nice and easy to use.
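A minimal sketch of approach 3, showing only the pooling logic; a real implementation would call this from a custom IUserType's NullSafeGet (whose exact signature varies across NHibernate versions) and would swap the unbounded dictionary for an MRU cache:

using System.Collections.Concurrent;

public static class StringPool
{
    // One canonical instance per distinct value, shared across sessions.
    private static readonly ConcurrentDictionary<string, string> Pool =
        new ConcurrentDictionary<string, string>();

    public static string Intern(string value)
    {
        if (value == null) return null;
        // Return the pooled instance, adding this one on first sight.
        return Pool.GetOrAdd(value, value);
    }
}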

Validating a Self Tracking Entity (EF) through WCF

I'm having trouble defining what my OperationContract should be when adding / updating an entity. I want to send an entity (or list of entities) to the ObjectContext via the WCF Service (which will instantiate a Business Manager for me to do the actual validation).
If the entity passes all of the validation rules (which could very well require querying the database to determine pass/fail for the more complex business rules), it'll be saved to the database, and I'll need to pass back its ID (identity-column primary key) and the value of the concurrency token (timestamp column); but if it fails, obviously we want a message or messages saying what was wrong. In the case of an update, all we would need is the new value of the concurrency token, but again we'd want the validation message(s).
To make it trickier, an entity could have multiple child/grandchild entities as well. For instance, a Trip will have Stops, which could potentially have Orders.
I'm just wondering how people handle this in the real world. The simplest examples just show the WCF service's operations like:
[OperationContract]
bool AddEntity(Entity e);
[OperationContract]
bool UpdateEntity(Entity e);
Does anyone have any great ideas for handling this? I guess I'm really just looking for practical advice here.
Should we be trying to save a collection of objects in one service call?
Should we be conveying the validation messages through a fault contract?
Any advice/input would be helpful, thanks!
Should we be trying to save a collection of objects in one service call?
If you mean saving a whole object graph in one call, then the answer is definitely yes. If you mean saving multiple independent object graphs (a collection) in one call, then the answer is probably yes. It is a good idea to reduce the number of roundtrips between client and service to a minimum, but at the same time doing so can introduce complications. You must decide whether the whole collection must be saved as an atomic operation, or whether you are happy with saving only part of the collection and returning errors for the rest. This will influence the rest of your architecture.
Should we be conveying the validation messages through a fault contract?
Yes, but only if the save operation is atomic, because a fault contract is an exception, and an exception should abort the current operation and return only the validation errors. A single fault contract carrying all validation errors should be enough. Don't fire an exception for each individual validation error, because that would make your application pretty annoying and useless.
If you want to save only the part of the collection that passes validation and return errors for the rest, you should not use fault contracts. Instead, have a container data contract for the response which carries both ids and timestamps for the saved data and ids and errors for the unsaved data.
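A sketch of such a container contract (type and member names are illustrative, not from the question):

using System.Collections.Generic;
using System.Runtime.Serialization;

[DataContract]
public class EntitySaveResult
{
    [DataMember] public int EntityId { get; set; }             // identity PK, set on insert
    [DataMember] public byte[] ConcurrencyToken { get; set; }  // new timestamp value
    [DataMember] public List<string> ValidationErrors { get; set; }  // empty if saved
}

[DataContract]
public class SaveEntitiesResponse
{
    [DataMember] public List<EntitySaveResult> Results { get; set; }
}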
One little note on STEs: passing back just the ids and timestamps can be tricky. I'm not sure whether you have to turn change tracking off while setting them and turn it back on afterwards.

Web service data type (contract)

I have a general design question.
We have a fairly big data model that represents a clinical object; the object itself has 200+ child attributes in its hierarchy.
We have a SetObject operation and a GetObject operation. My question is: best-practice wise, would it make sense to use that single data model in both operations, or a different data model for each? The Get operation will return much more detail than is needed for Set.
An example of what I mean: the data model has, say, ProviderId and ProviderName attributes. The Get operation needs to return both ProviderId and ProviderName. In the Set operation, however, only ProviderId is needed, and ProviderName is ignored by the service since the system already has that information. In this case, if Get and Set use the same data model, ProviderName is exposed even for the Set operation; does that confuse the consuming developer?
I would say: it depends :-)
No, seriously. How do you edit / work on the object? I assume your software calls the WCF service to retrieve an object, using an ID or a search term or something.
So you get back the object with 200+ attributes. How do you work on it, how much of it do you typically change?
If you typically only change a handful of attributes, then maybe a generic SetProperty method on the service, taking the object ID, a property name, and a new value, might make sense. But think about how this is going to work:
the server side code will get the ID for the object
it will load the object from the database
it will then set a single property to a new value
it will save the object back to the database
What if you update four properties? You'd go through four of those cycles. Or you could extend the SetProperty method to take a dictionary of (property name, value) pairs, as sketched below.
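A sketch of that batched variant (service and parameter names are illustrative; a real contract would pick a value type that serializes cleanly):

using System.Collections.Generic;
using System.ServiceModel;

[ServiceContract]
public interface IClinicalObjectService
{
    // Apply several property changes in one roundtrip:
    // load the object once, set each property, save once.
    [OperationContract]
    void SetProperties(int objectId, Dictionary<string, string> changes);
}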
So I guess it depends on how many of those 200 properties you change at any given time. If you change 10% or 20% of them, wouldn't it be easier to just pass back the whole, modified object?
This looks like a good candidate for using your clinical object as a canonical model and providing a RESTful-style service interface. You can then provide different views, or representations, of your data object with only the fields required by each usage model. Your verbs (get, set) become the standard HTTP GET and PUT.
There are a number of open source REST frameworks that you can use to make this easier to get started. Restlet is one that I have used successfully.