I'm using Safenet Sentinel LDK to protect our commercial product.
Is it possible to enforce time-based features with the software-only keys? I'm reading the documentation and it seems this is only available with the hardware dongles.
Yes. Sentinel SL (software) keys can also enforce time-based licenses. To do that, log in to Sentinel EMS and define a Product, assigning an expiration date or time period to the Features you'd like.
Once you've created an Entitlement that includes that Product, the time-based license terms will be enforced.
If you have any further questions, contact the SafeNet support team.
When people describe Paxos, they always assume that there are already some proposers in the cluster. But where do the proposers come from, and what decides which processes act as proposers?
How the cluster is initially configured and how it is changed is down to the administrator who is trying to optimise the system.
You can run the different roles on different hosts and have different numbers of them. We could run three proposers, five acceptors and seven learners; whatever you choose. Clients that need to write a value only need to connect to proposers. With multi-Paxos for state replication, clients only need to connect to proposers, as that is sufficient; the clients don't need to exchange messages with any other role type. Yet there is nothing to prevent clients from also being learners by seeing messages from acceptors.
As long as you follow the Paxos algorithm it all comes down to minimising network hops (latency and bandwidth), costs of hardware, and complexity of the software for your particular workload.
From a practical perspective your clients need to be able to find proposers in the face of failures. A cluster administrator will be configuring which nodes are to be proposers and making sure that they are discovered by clients.
It is hard to visualize from descriptions of the abstract algorithm how things might work, as many messaging topologies are possible. When applying the algorithm to a practical application, it's far more obvious what setup minimises latency, bandwidth, hardware and complexity. An example might be a three-node MySQL cluster running Paxos. You want all three servers to have all the data, so they are all learners. All must be acceptors, as you need three at a minimum to have one node fail and still maintain progress. They may as well all be proposers to give the best availability and simplicity of software and configuration. Note that one will become the distinguished leader. The database administrator doesn't think about the Paxos roles; they just set up a three-node database cluster.
The roles in the cluster may need to change. For example, you might want to expand the capacity of a database cluster. Or a server might die so you need to change the cluster membership to swap the dead one for a fresh one. For the Paxos algorithm to work every process must have a strongly consistent view of which processes are in which roles. How do you get consensus? You use Paxos to fix a new value of the cluster membership.
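To make the mechanics behind those roles concrete, here is a minimal single-decree Paxos sketch in Python. All class and method names are hypothetical, "messages" are plain method calls, and a real deployment would add networking, durable state, and failure handling:

```python
# Minimal single-decree Paxos sketch: proposers and acceptors as plain
# objects; message passing is simulated with direct method calls.

class Acceptor:
    def __init__(self):
        self.promised = -1          # highest ballot promised so far
        self.accepted = None        # (ballot, value) last accepted, or None

    def prepare(self, ballot):
        """Phase 1b: promise to ignore any lower-numbered ballots."""
        if ballot > self.promised:
            self.promised = ballot
            return ("promise", self.accepted)
        return ("nack", None)

    def accept(self, ballot, value):
        """Phase 2b: accept unless a higher promise has been made."""
        if ballot >= self.promised:
            self.promised = ballot
            self.accepted = (ballot, value)
            return "accepted"
        return "nack"

class Proposer:
    def __init__(self, ballot, acceptors):
        self.ballot = ballot
        self.acceptors = acceptors

    def propose(self, value):
        quorum = len(self.acceptors) // 2 + 1
        # Phase 1: gather promises from a majority.
        replies = [a.prepare(self.ballot) for a in self.acceptors]
        granted = [r for r in replies if r[0] == "promise"]
        if len(granted) < quorum:
            return None
        # If any acceptor already accepted a value, we must re-propose
        # the one with the highest ballot instead of our own value.
        prior = [r[1] for r in granted if r[1] is not None]
        if prior:
            value = max(prior)[1]
        # Phase 2: ask the acceptors to accept the value.
        acks = [a.accept(self.ballot, value) for a in self.acceptors]
        if acks.count("accepted") >= quorum:
            return value            # learners would be notified here
        return None

acceptors = [Acceptor() for _ in range(5)]
chosen = Proposer(ballot=1, acceptors=acceptors).propose("v1")
# A later proposer with a higher ballot learns the already-chosen value
# instead of overwriting it -- the safety property of Paxos.
again = Proposer(ballot=2, acceptors=acceptors).propose("v2")
```

Note how the second proposer's own value "v2" is discarded: Phase 1 forces it to adopt the value already accepted by a majority, which is exactly why a consistent membership view matters.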
I'm looking to build a subscription system using Redis. I was thinking of storing a key whose expire time is the subscription's end date, then using Redis keyspace notifications to remove or renew the subscription when the key expires. But then I read about Pub/Sub reliability and found that it offers no delivery guarantees, so I don't know if it's the best choice for a subscription system.
Is there a better approach to accomplish this using redis?
Redis has specific commands to implement a subscription system:
https://redis.io/topics/pubsub when you don't need persistence
https://redis.io/topics/streams-intro when you need persistence (since 5.0)
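The practical difference is that Pub/Sub is fire-and-forget (a message published while a consumer is offline is lost), while a stream keeps its entries so a consumer can replay from the last ID it saw. A toy in-memory sketch of that distinction (not the real Redis API; with a real server you would use PUBLISH/SUBSCRIBE versus XADD/XREAD):

```python
class PubSub:
    """Fire-and-forget: a message is lost if nobody is listening."""
    def __init__(self):
        self.subscribers = []       # list of subscriber inboxes

    def publish(self, msg):
        for inbox in self.subscribers:
            inbox.append(msg)

class Stream:
    """Log-like: entries persist and can be replayed from any ID."""
    def __init__(self):
        self.entries = []           # list index stands in for the entry ID

    def add(self, msg):
        self.entries.append(msg)

    def read(self, last_seen=-1):
        return self.entries[last_seen + 1:]

# Pub/Sub: the subscriber connects *after* the publish, so it sees nothing.
ps = PubSub()
ps.publish("subscription-expired:42")
inbox = []
ps.subscribers.append(inbox)

# Stream: a consumer that was offline at publish time can still catch up.
s = Stream()
s.add("subscription-expired:42")
missed = s.read(last_seen=-1)
```

This is why expiry events delivered via keyspace notifications (which ride on Pub/Sub) can be silently missed, whereas a stream lets your worker resume from where it left off.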
I understand Bitcoin uses blockchain technology to maintain a decentralised ledger of all transactions. I have also read many posts alluding to future applications of blockchain technology, none of which have been very clear to me.
Is blockchain technology simply a decentralised database with consensus validation of the data? If this was the case surely the db would grow to be too large to be effectively decentralised?
To help me understand, can anyone point me to a clear example of a non-bitcoin blockchain application?
Yes, it's true that the blockchain database grows over time, which is what is called "blockchain bloat". Currently the blockchain growth of Bitcoin is roughly less than 100 MB a day. Today (2016) the Bitcoin blockchain takes up about 60-100 GB of space, which took about 6 years to accumulate. It is indeed growing faster, but growth is also limited by the block size cap of 1 MB per block (every 10 minutes). Some proposed solutions have been:
SPV (Simplified Payment Verification) nodes: This is how your phone avoids downloading the entire blockchain; an SPV client retrieves just the data it needs from full nodes that hold the entire blockchain.
Lightning network: This is how Bitcoin can overcome the 1 MB block size cap.
Those are just some of the solutions for Bitcoin that I know of. As for altcoin-related solutions: NXT/Ardor has implemented pruned data. Because NXT/Ardor gives the ability to upload arbitrary data and messages onto its blockchain, the bloat is much more apparent in this scenario. The NXT/Ardor blockchain can delete that data after two weeks and keep only the hash of the data on the blockchain, which takes just a few KB. Nodes can also retain all of the blockchain data with pruning turned off, which marks a node as an Archival Node; other nodes can replicate such a node and become Archival Nodes themselves.
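The pruning idea described above can be sketched as storing only the hash of bulky data on-chain and checking archival copies against it later. This is a simplified illustration of the technique, not NXT/Ardor's actual code:

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 hex digest used as the compact on-chain fingerprint."""
    return hashlib.sha256(data).hexdigest()

# A block initially carries both the uploaded data and its hash.
uploaded = b"some large arbitrary message stored on-chain"
block = {"height": 1000, "data_hash": digest(uploaded), "data": uploaded}

def prune(block):
    """Drop the bulky payload; only the small hash stays on-chain."""
    pruned = dict(block)
    pruned["data"] = None
    return pruned

def verify_archival_copy(pruned_block, archival_data):
    """An archival node's copy can be validated against the on-chain hash."""
    return digest(archival_data) == pruned_block["data_hash"]

pruned = prune(block)
ok = verify_archival_copy(pruned, uploaded)          # genuine copy
tampered = verify_archival_copy(pruned, b"forged")   # forged copy fails
```

The chain stays small because every pruned block keeps only the fixed-size digest, yet any archival copy remains independently verifiable.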
From my understanding NXT/Ardor has been one of the few blockchains that has production ready decentralized data storage system, marketplace, stock exchange, and messaging system built into its blockchain.
Blockchain is not just a decentralised database; it is much more than that. While the original Bitcoin blockchain allowed only value to be transferred, along with limited data with every transaction, several new blockchains developed in the past 2-3 years have much more advanced native scripting and programming capabilities.
Apart from the Bitcoin blockchain, I would say that there are a few other major blockchains like Ethereum, Ripple, R3's Corda, and Hyperledger. Although Ethereum has a crypto-currency called Ether, the platform itself is a Turing-complete EVM (Ethereum Virtual Machine). Using Ethereum, you can create Smart Contracts that themselves run in a decentralised manner. As a developer, it opens up completely new avenues for you and changes your perspective on writing programs. While Ripple is mainly geared towards payments, Corda and Hyperledger are built with a view to being private/permissioned blockchains, to solve issues such as scalability, privacy, and identity. The target markets for Hyperledger and Corda are mostly banks and other financial institutions.
As for the non-bitcoin application of blockchain, you can certainly look at some companies like Consensys (multiple different use cases on blockchain), Digix Global (gold tokens on the blockchain), Everledger (tracking of diamonds on the blockchain), Otonomos (Company registration on the blockchain), OT Docs (Trade Finance and document versioning on the blockchain) amongst others.
Blockchain is:
Name for a data structure,
Name for an algorithm,
Name for a suite of Technologies,
An umbrella term for purely distributed peer-to-peer systems with a common application area,
A peer-to-peer-based operating system with its own unique rule set that utilizes hashing to provide unique data transactions with a distributed ledger
Blockchain is much more than a "database". Yes, the blocks on the chain store data, but it is more like a service. There are many applications of blockchain. Read about them: here. If you want to see the code of a blockchain application, try this one: here.
Blockchain is a combination of a P2P network, a decentralised database, and asymmetric cryptography.
A P2P network means you can transfer data between two different network nodes without any middleman; a decentralised database means every node of the network holds a replica of the database; and asymmetric cryptography means you can use digital signatures to validate the authenticity and integrity of a message.
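As a concrete illustration of the data-structure part, here is a toy hash-linked chain in Python. It is only a sketch of the linking-and-verification idea; real blockchains add digital signatures, proof-of-work or proof-of-stake, and peer-to-peer replication on top:

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def build_chain(batches):
    """Each block records the hash of its predecessor."""
    chain, prev = [], "0" * 64               # genesis predecessor
    for txs in batches:
        block = {"prev_hash": prev, "transactions": txs}
        chain.append(block)
        prev = block_hash(block)
    return chain

def valid(chain):
    """Recompute the hash links; any edit to history breaks them."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = build_chain([["alice->bob:5"], ["bob->carol:2"]])
ok = valid(chain)
chain[0]["transactions"][0] = "alice->mallory:500"   # tamper with history
tampered_ok = valid(chain)
```

Changing one old transaction changes that block's hash, which no longer matches the `prev_hash` stored in the next block, so the whole network can detect the tampering.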
I've been trying to find out ways to improve our nservicebus code performance. I searched and stumbled on these profiles that you can set upon running/installing the nservicebus host.
Currently we're running the nservicebus host as-is, and I read that by default we are using the "Lite" version of the available profiles. I've also learnt from this link:
http://docs.particular.net/nservicebus/hosting/nservicebus-host/profiles
that there are Integrated and Production profiles. The documentation does not say much - has anyone tried the Production profiles and noticed an improvement in nservicebus performance? Specifically affecting the speed in consuming messages from the queues?
One major difference between the NSB profiles is how they handle storage of subscriptions.
The lite, integration, and production profiles determine how reliably NSB is configured. For example, the lite profile uses in-memory subscription storage for all pub/sub registrations. This is a concern because, in order to register a subscriber in the lite profile, the publisher has to already be running (so the publisher can store the subscriber list in memory). What this means is that if the publisher crashes for any reason (or is taken offline), all the subscription information is lost (until each subscriber is restarted).
So the lite profile is good if you are running on a developer machine and want to quickly test how your services interact. However, it is just not suitable for other environments.
The integration profile stores subscription information on a local queue. This can be good for simple environments (like QA etc.). However, in a highly distributed environment holding the subscription information in a database is best, hence the production profile.
So, to answer your question, I don't think that by changing profiles you will see a performance gain. If anything, changing from the lite profile to one of the other profiles is likely to decrease performance (because you incur the cost of accessing queue or database storage).
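The trade-off above can be sketched in a few lines of Python (hypothetical class names, not the NServiceBus API): an in-memory subscription store loses its registrations when the publisher restarts, while a durable store survives the restart.

```python
class InMemorySubscriptionStore:
    """Lite-profile style: registrations die with the publisher process."""
    def __init__(self):
        self.subscribers = set()

class DurableSubscriptionStore:
    """Production-profile style: registrations live in shared storage."""
    backing = set()                        # stands in for a queue/database
    def __init__(self):
        self.subscribers = self.backing

class Publisher:
    def __init__(self, store):
        self.store = store
    def subscribe(self, endpoint):
        self.store.subscribers.add(endpoint)
    def publish(self, event):
        # Deliver the event to every registered subscriber.
        return {s: event for s in self.store.subscribers}

# In-memory: a subscriber registers, then the publisher "restarts" --
# the fresh store is empty and the event reaches nobody.
p = Publisher(InMemorySubscriptionStore())
p.subscribe("billing")
p = Publisher(InMemorySubscriptionStore())   # restart rebuilds the store
lost = p.publish("OrderPlaced")

# Durable: the registration survives the restart.
p = Publisher(DurableSubscriptionStore())
p.subscribe("billing")
p = Publisher(DurableSubscriptionStore())    # restart: storage persists
kept = p.publish("OrderPlaced")
```

The durable store pays for that resilience with an extra storage round-trip on every operation, which is why moving off the lite profile tends to cost rather than gain raw performance.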
Unless you have tuned the logging yourself, we've seen large improvements based on reduced logging. The performance of reading off the queues is the same all around. Since the queues are local, you won't gain much from the transport. I would take a look at tuning your handlers and the underlying infrastructure. You may want to look at tuning MSMQ and at the disks you are using, etc. Another spot to examine is how distributed transactions are behaving, assuming you are using a remote database that requires them.
Another option to increase throughput is to increase the number of threads consuming the queue. This will require a license. If a license is not an option, you can run multiple instances of a single-threaded endpoint. This requires you to shard your work based on message type or something else.
Continuing up the scale you can then get into using the Distributor to load balance work. Again this will require a license, but you'll be able to add more nodes as necessary. All of the opportunities above also apply to this topology.
I was watching this screencast at RailsLab where the presenter claims that it's possible to have a master DB for write operations and a slave DB for read operations. For certain types of Web sites (e.g. blogs, social networks, Web 2.0 sites, etc.) it is acceptable for the master and slave DBs not to be 100% synchronized for short periods of time, but AFAIK this is not acceptable in domains such as banking and insurance.
My question is whether such usages of master-slave replication are reliable enough for banking, insurance, and similar applications where there is no room for violating the integrity of the system. In other words, is it acceptable for the master and slave DBs to be out of sync for short periods of time?
If not, what horizontal (not vertical) solutions are available for scaling a database system in environments where there is absolutely no room for system integrity to be compromised?
Clustering