I have been using Aerospike since 3.4, with Python client 1.0.31, and have now upgraded to Aerospike 3.6.3 and Python client 1.0.50.
Since the Python client doesn't have an async writes feature, I am planning to switch to Go. I have also read that Go fits well with Aerospike (http://www.aerospike.com/blog/go-aerospike-a-perfect-match/).
I would like to know what consequences I will face in changing the client, and how to handle them.
One issue I foresee is serialization. Since I have been using the Python client since Aerospike 3.4, how do I handle older serialized data such as float values? I need not worry about new data, as recent releases support floats natively.
Thanks in advance.
Well, "Python client doesn't have Async" needs to come with a big yet. The C client 4.0.0 provides async operations. The current work being done in the Python client is compatibility with Python >= 3.4. Async is something that is planned.
The main thing to consider when moving from one language client to another, or when combining different SDKs, is how to handle 'unsupported' types. You'll have to review your data for places where it contains serialized data in as_bytes, encoded as AS_BYTES_PYTHON. See the 'Serialization' section of the Python API docs. You'll want to come up with a common custom serialization scheme so that your Go client can read that data.
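For example, here is a minimal sketch of that read path in Go. It assumes you have already rewritten the old values with a language-neutral encoding (JSON here, as one possible common scheme); bytes stored as AS_BYTES_PYTHON are cPickle output, which Go cannot read directly, so those need a one-off migration from Python first. The namespace, set, key and bin names are placeholders.

package main

import (
	"encoding/json"
	"fmt"
	"log"

	as "github.com/aerospike/aerospike-client-go"
)

func main() {
	client, err := as.NewClient("127.0.0.1", 3000)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	key, _ := as.NewKey("test", "demo", "user1") // placeholder names
	rec, err := client.Get(nil, key)
	if err != nil {
		log.Fatal(err)
	}

	switch v := rec.Bins["price"].(type) {
	case float64:
		// Written natively by a recent server/client: nothing to do.
		fmt.Println("native float:", v)
	case []byte:
		// Opaque bytes: written through a client-side serializer.
		// Decode with whatever common scheme you standardized on.
		var f float64
		if err := json.Unmarshal(v, &f); err != nil {
			log.Fatal(err)
		}
		fmt.Println("decoded float:", f)
	}
}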
It seems that deleting Operations is not implemented in the Java SDK, although the REST API supports it. So I'm wondering whether I'm missing something or whether this is indeed the case.
Is there any workaround, other than using a REST client, to delete Operation(s) in a Java application?
No, currently not (but feel free to send a pull request with an added method).
As background, operations should usually not be deleted by clients, but instead cycled through their process (pending -> executing -> successful/failed). If you delete an operation, it is no longer available and you cannot reproduce what happened on a device at a particular point in time. Deletion is usually taken care of by data retention management.
The easiest way to use an API that is not implemented in the client is to call the rest() method on your platform object.
This returns the underlying RestConnector for the whole API (fully initialised with credentials), and you can execute the calls with it, more or less manually.
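A minimal sketch of the idea (the tenant URL, credentials and operation id are placeholders, and the exact RestConnector delete method signature is an assumption worth verifying against your SDK version):

import com.cumulocity.model.authentication.CumulocityCredentials;
import com.cumulocity.sdk.client.Platform;
import com.cumulocity.sdk.client.PlatformImpl;
import com.cumulocity.sdk.client.RestConnector;

public class DeleteOperation {
    public static void main(String[] args) {
        // Hypothetical tenant and credentials.
        Platform platform = new PlatformImpl("https://tenant.cumulocity.com",
                new CumulocityCredentials("user", "password"));

        // rest() hands back the fully initialised RestConnector.
        RestConnector rest = platform.rest();

        // Issue the DELETE the typed API does not expose; "12345" is
        // a placeholder operation id (assumed delete(String) method).
        rest.delete("https://tenant.cumulocity.com/devicecontrol/operations/12345");
    }
}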
There is a ton of documentation on academic theory and best practices for managing versioning of RESTful web services; however, I have not seen much discussion of how multiple REST API versions interact with the same data.
I'd like to see various architectural strategies or documentation on how to handle hosting multiple versions of your app that rely on the same data pool.
For instance, suppose you make a destructive change to a database table that forces you to increment your major API version to v2.
Now, at any given time, users could be interacting with the v1 and v2 web services simultaneously, creating data that is visible and editable by both. How should this be handled?
Most changes introduced to an API affect the content of the response. As long as those changes are incremental, this is not a very big problem (note: you should never expose the exact DB model directly to clients).
When you make a destructive/significant change to the DB model and a new version of the API is introduced, there are two options:
1. Turn the previous version off and answer all requests to it with a 301 and the new location (see the sketch after this list).
2. If 1. is impossible, maintain both the previous and the current version of the API. Since this can be time- and money-consuming, do it only for a limited period and eventually turn the previous version off.
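A minimal sketch of option 1, using Go's net/http and a hypothetical /v1 -> /v2 URL scheme:

package main

import "net/http"

func main() {
	// Answer every /v1/... request with a permanent redirect (301)
	// to the same path under /v2.
	http.HandleFunc("/v1/", func(w http.ResponseWriter, r *http.Request) {
		target := "/v2/" + r.URL.Path[len("/v1/"):]
		http.Redirect(w, r, target, http.StatusMovedPermanently)
	})
	http.ListenAndServe(":8080", nil)
}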
What about the DB model? While two versions of the API are active at the same time, I'd try to keep the DB model as consistent as possible, bearing in mind that running two versions side by side is only temporary. And as I wrote earlier, the DB model should never be exposed directly to clients; that alone will help you avoid a lot of problems.
I have given this a little thought...
One solution may be this:
Just because the v1 API should not change, it doesn't mean the underlying implementation cannot change. You can modify the v1 implementation to set a default value, omit saving a field, throw an unchecked exception, or apply some computational logic that keeps the v1 API compatible with the shared datasource. Then implement a better, cleaner, more idealistic implementation in v2.
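As a sketch of the default-value case (hypothetical Go types; the point is only that both version handlers write to the same store):

package main

import "fmt"

// User is the shared storage model used by both API versions;
// Timezone is a hypothetical column added for v2.
type User struct {
	Name     string
	Timezone string
}

// saveUser stands in for the shared persistence layer.
func saveUser(u User) { fmt.Println("saved:", u) }

// v1 clients never send a timezone, so the v1 handler supplies a
// default before writing to the shared table.
func createUserV1(name string) {
	saveUser(User{Name: name, Timezone: "UTC"})
}

// v2 clients must supply the new field themselves.
func createUserV2(name, timezone string) {
	saveUser(User{Name: name, Timezone: timezone})
}

func main() {
	createUserV1("alice")
	createUserV2("bob", "Europe/Berlin")
}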
Whenever you change anything in your API structure in a way that changes the response, you must increase your API version.
For example, say you have this request and response:
request POST: a, b, c, d
response: {a, b, c+d}
Now suppose you are going to add 'e', fetched from the database, to your response.
If no current client version depends on 'e', you can add it within the current API version.
But if your new change alters the existing responses, for example:
response: {a+e, b, c+d}
you must increase the API version number to avoid crashing clients.
Changes to the request inputs work the same way.
While using a Shared Access Signature to fetch Table data using Storage Client Library 2.0, I am consistently getting the error "The supplied credentials '{0}' cannot be used to sign request". From what I can understand from GitHub, the error is due to sasCredentials.CanSignRequest returning false... but as per the code on GitHub, there is no scenario where it is supposed to return true... is it a bug, or am I doing something wrong here?
StorageCredentials sasCredentials = new StorageCredentialsSharedAccessSignature(sharedAccessSignature);
CloudTableClient ctc = new CloudTableClient(tableEndpoint, sasCredentials);
The StorageCredentialsSharedAccessSignature type does not exist in Azure Storage Client Library 2.0, so I am assuming you are still using an older version, most probably 1.7. As described in the blog post Introducing Table SAS (Shared Access Signature), Queue SAS and update to Blob SAS, support for Table SAS was added in newer releases of the Azure Storage Client Library.
I strongly recommend upgrading to 2.0, which also has many other improvements in addition to the functionality you are looking for. For more details, please refer to Introducing Windows Azure Storage Client Library 2.0 for .NET and Windows Runtime.
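After the upgrade, your snippet would look roughly like this in 2.0, where StorageCredentials accepts the SAS token directly (the endpoint, token and table name below are placeholders):

using System;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Table;

class Program
{
    static void Main()
    {
        string tableEndpoint = "https://myaccount.table.core.windows.net";
        string sharedAccessSignature = "?sv=...&sig=..."; // placeholder token

        StorageCredentials sasCredentials = new StorageCredentials(sharedAccessSignature);
        CloudTableClient ctc = new CloudTableClient(new Uri(tableEndpoint), sasCredentials);
        CloudTable table = ctc.GetTableReference("mytable");
        // ... query the table within whatever permissions the SAS grants.
    }
}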
I want to access a Cassandra instance from an iPhone application, and I need an Objective-C client for that. I couldn't find one. Thrift is supposed to support Objective-C, but I couldn't figure out how to do that. Any knowledge on the subject is very much appreciated.
Apache Thrift has a generator for ObjC. (Complete list).
If you are going to distribute the application, I would consider the alternative of creating a server with a simple interface (e.g. HTTP) that in turn accesses the Cassandra database.
But if you are the only user, direct database access could work.
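A minimal sketch of that proxy idea, using Go and the gocql driver purely as an example stack (the cluster address, keyspace, table and route are all hypothetical):

package main

import (
	"encoding/json"
	"log"
	"net/http"

	"github.com/gocql/gocql"
)

func main() {
	cluster := gocql.NewCluster("127.0.0.1") // placeholder address
	cluster.Keyspace = "demo"                // placeholder keyspace
	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// GET /user?id=... returns one row as JSON for the iPhone app.
	http.HandleFunc("/user", func(w http.ResponseWriter, r *http.Request) {
		id := r.URL.Query().Get("id")
		var name string
		if err := session.Query(`SELECT name FROM users WHERE id = ?`, id).Scan(&name); err != nil {
			http.Error(w, err.Error(), http.StatusNotFound)
			return
		}
		json.NewEncoder(w).Encode(map[string]string{"id": id, "name": name})
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}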
If you're not sure about how to get Thrift to generate the bindings then go with what Schildmeijer posted. Use a simple web server running php + phpcassa or any language of your choice that comes with a high level client library -- list here: High level clients.
You can use some open-source libraries to expose resources from Cassandra as JSON or XML, then use NSURLRequest to do the work. If you go with XML, Google's GDataXML is an excellent choice of parser; if you go with JSON, the json-library on Google Code is another great choice.
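On the iPhone side, the fetch-and-parse step might look like this (a sketch against a hypothetical endpoint; it uses the built-in NSJSONSerialization rather than the Google Code library mentioned above, and a synchronous request for brevity):

#import <Foundation/Foundation.h>

int main(int argc, char *argv[]) {
    @autoreleasepool {
        // Hypothetical endpoint exposing Cassandra data as JSON.
        NSURL *url = [NSURL URLWithString:@"http://example.com/user?id=42"];
        NSURLRequest *request = [NSURLRequest requestWithURL:url];

        NSURLResponse *response = nil;
        NSError *error = nil;
        NSData *data = [NSURLConnection sendSynchronousRequest:request
                                             returningResponse:&response
                                                         error:&error];
        if (data) {
            NSDictionary *user = [NSJSONSerialization JSONObjectWithData:data
                                                                 options:0
                                                                   error:&error];
            NSLog(@"user: %@", user);
        }
    }
    return 0;
}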
Have fun!
I'm interested in grabbing the EPG data from DVB-T streams. Does anyone know of any C libraries or an alternative means of getting the data?
tv_grab_dvb can do this. See the subversion repository for sources.
tv_grab_dvb is made to work with the stream grabbed from the DVB-T card using dvbtools on Linux, but it may be portable to other platforms - I think it just works with the raw data from the stream.
...a new answer to an old question:
I wrote a utility called dvbtee that can be used as a C++ library, a cross-platform command line utility, or a Node.js module.
(Despite it being a C++ library, one can still link to it from C code.)
The command line utility will parse your streams and output the EPG; depending on the arguments you specify, it can generate plain text or a JSON block of data.
dvbtee: a digital television streamer / parser / service information aggregator supporting various interfaces including telnet CLI & http control
The Node.js module will emit events containing the PSIP table data (along with EPG info).
node-dvbtee: MPEG2 transport stream parser for Node.js with support for television broadcast PSIP tables