I'm confused trying to figure out whether RedisTimeSeries is free or not. I saw that it is open source, but I also saw something about a rate limit, so I am very confused. Could I set up RedisTimeSeries on my own cluster without paying or not?
RedisTimeSeries is source-available and licensed under the Redis Source Available License (RSAL). This license is permissive as long as your product is not a "database product"; the definition of "database product" is within the license. There is no such thing as a "rate limit" for RedisTimeSeries, at least not with regards to the license.
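Since the practical part of the question is whether you can run it yourself: yes, you just load the module into your own Redis server and use it, with no payment or usage cap involved. Below is a minimal sketch, assuming a self-hosted Redis with the RedisTimeSeries module loaded (e.g. the redislabs/redistimeseries Docker image) and redis-py 4.x or later; the key name and sample values are made up for illustration:

    # Minimal sketch: a self-hosted Redis instance with the RedisTimeSeries
    # module loaded, accessed via redis-py (4.x bundles the TS.* commands).
    # The key name "sensor:temp" and the values are made up for illustration.
    import redis

    r = redis.Redis(host="localhost", port=6379)

    r.ts().create("sensor:temp")          # TS.CREATE
    r.ts().add("sensor:temp", "*", 21.5)  # "*" = server-assigned timestamp
    r.ts().add("sensor:temp", "*", 22.0)

    print(r.ts().range("sensor:temp", "-", "+"))  # full range of samples

Whether you may then ship this as part of a product is purely a question of the license terms above, not of any technical limit.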
Disclaimer: at the time of writing, I am a Product Manager at Redis Inc.
The defective laptop HDD was replaced with an SSD. For many years up to this point, backups were stored on an external HDD (WD My Passport Ultra, 4 TB) using the "WD Backup" software. After numerous failed attempts, the most recent backup could finally be restored successfully with "WD Backup" by following these instructions:
https://support-en.wd.com/app/answers/detailweb/a_id/5207/kw/restore
Now, however, I need to restore additional backups of files that were backed up PRIOR to the last backup. As with my first failed attempts, before I knew about the instructions, I get the message that there is no backup plan, i.e. I can't restore anything.
In plain language, this means I would have to uninstall and reinstall the "WD Backup" software again, and repeat this in the future every time I need access to an older backup.
But that can't really be the case, can it? What did I misunderstand?
Does anyone know a reasonable way to do this?
In addition, since the disk replacement I no longer dare to save new backups to the external HDD: I fear that I would then no longer be able to access the previous backups at all.
Does anyone have advice?
I would be grateful for any tips!
PS: I am aware that the Western Digital software "WD Backup" has been "out of support" since 2019. However, it is still used in many places. The same problem exists with the replacement software "Acronis".
I've been needing a new VM host for some time now, and from working with/on AWS at work, "The Cloud" seems to be a good idea.
I've done some math, and no matter how I count, it's going to be cheaper to do it myself than colo or something else. Plus, I really like lots of blinking lights :D
A year or so ago, I heard about OpenStack and have been looking at it cursorily since then. It seems big and complex (and scary!), and some friends who have been trying to deploy it at work for a year and still haven't quite finished/succeeded indicate that it is what it seems :)
However, I like tormenting myself, so I've decided I'm going to give it a try. It does provide all the functionality, and then some, that I need. Theoretically, I could go with Vagrant, but that's not quite half-way to what I want/need.
So, I've been looking at https://en.wikipedia.org/wiki/OpenStack#Components and from that came to the following conclusion:
Required: (Nova, Glance, Horizon, Cinder)
These seem to be the "core" services. I need all of them.
Nova: Compute fabric controller
Glance: Image service (for templates)
Horizon: Dashboard
Cinder: Block storage devices (can work with ZoL w/ 3rd party driver)
Less important: (Barbican, Trove, Designate)
I really don't need any of this; it's more of a "could be nice to have at some point".
Barbican: REST API designed for the secure storage, provisioning and management of secrets
Trove: Database-as-a-Service provisioning relational and non-relational database engines
Designate: DNS as a Service
Possibly not needed: (Neutron, Keystone)
I don't know whether I need these. I have DHCP, VLAN, VPN, DNS, LDAP and Kerberos services on the network that work just fine, and I'm not replacing them!
Neutron (previously Quantum): Network management (DHCP, VLAN)
Keystone: Identity service (can work with existing LDAP servers)
Not needed: (Swift, Ceilometer, Ironic, Zaqar, Searchlight, Sahara, Heat, Manila)
Meh! I'm doing this for me, for my basement and for my own development and enjoyment, so I don't need any of that. It would be nice to go with fully object-based storage, but that's not feasible for me at this time.
Swift: Object storage system
Ceilometer: Telemetry service (billing)
Ironic: Bare-metal provisioning instead of virtual machines
Zaqar: Multi-tenant cloud messaging service for web developers (~ SQS)
Searchlight: Advanced and consistent search capabilities across various OpenStack cloud services
Sahara: Easily and rapidly provision Hadoop clusters (for storing and managing vast amounts of data cheaply and efficiently)
Heat: Orchestration layer (store the requirements of a cloud application in a file that defines what resources are necessary for that application)
Manila: Shared File System Service (manage shares in a vendor-agnostic framework)
If we don't count storage (I already have my own block storage, which I can use with Cinder and some 3rd party plugins/modules) and compute nodes (everything that's left over will become compute nodes), can I run all this on one machine? With a hot standby/failover?
Everything is going to be connected to the same power jack, same rack and same [outgoing] network cable, so more redundancy than that is overkill. I don't even need that, but "why not" :)
The basic recommendation I've heard is four to six machines. After a lot of pestering of the people who said that, it turns out that means "two storage, two controller, two compute". Which is what I was thinking as well: running this on two machines should be enough. They're basically only going to run Glance, Horizon and Cinder, and possibly Neutron and Keystone.
None of those seem to be very resource-heavy.
Is there something I'm missing?
Oh, and none of this is going to face the 'Net! It's all just for me.
Though it is theoretically possible to bring up OpenStack without Keystone, in practice it is almost impossible and makes the system pretty inconvenient to use.
You can definitely run full OpenStack on a single machine (or even in a VM). Check out DevStack (http://docs.openstack.org/developer/devstack/) -- you just run a shell script to bring up a full working OpenStack setup.
As long as you are not worried about availability and your workload is minimal, a single-node deployment is a pretty good way to get your feet wet.
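If it helps to gauge what a minimal single-node cloud gives you from the client side, here is a rough sketch using the openstacksdk Python client against such a deployment; the cloud name "devstack" and the image/flavor/network names are placeholders for whatever your install ends up with:

    # Rough sketch: exercising a single-node OpenStack (e.g. a devstack install)
    # with openstacksdk. The cloud name "devstack" and the image/flavor/network
    # names are placeholders -- substitute whatever your deployment provides.
    import openstack

    conn = openstack.connect(cloud="devstack")      # credentials from clouds.yaml

    image = conn.image.find_image("cirros")         # Glance
    flavor = conn.compute.find_flavor("m1.tiny")    # Nova
    network = conn.network.find_network("private")  # Neutron

    server = conn.compute.create_server(
        name="test-vm",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)
    print(server.name, server.status)

If booting something like that works end to end, you have exercised Keystone, Glance, Nova and Neutron in one go, which covers most of the "core" list in the question.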
I am new to Google Cloud Bigtable and have a very basic question: does the cloud offering protect my data against user error or application corruption? I see a lot of mention on the Google website that the data is safe and protected, but it's not clear whether the scenario above is covered, because I did not see any references to how I would go about restoring data from a previous point-in-time copy. I am sure someone on this forum knows!
Updated 7/24/2020: Bigtable now supports both backups and replication.
Currently we create backups to protect against catastrophic events and provide for disaster recovery.
As of February 2017, Cloud Bigtable does not provide backups to protect against user errors or application bugs. We hope to make this feature available in a future release, but there is no planned delivery date at this time. In the meantime you may make your own snapshots using HBase or a similar process.
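As a rough illustration of "a similar process": a do-it-yourself point-in-time copy can be as simple as scanning the table with the regular data client and writing the rows somewhere durable, such as GCS. This is only a sketch; the project, instance, table and bucket names are placeholders, and a real backup would need batching, retries and a serialization format you can actually restore from (the Egnyte answer below uses Hadoop sequence files for that reason):

    # DIY snapshot sketch: scan a Bigtable table with the regular data client
    # and dump the rows to a GCS object. Project/instance/table/bucket names
    # are placeholders; a production backup would need batching, retries and
    # a restorable serialization format.
    import json
    from google.cloud import bigtable, storage

    bt_client = bigtable.Client(project="my-project")
    table = bt_client.instance("my-instance").table("my-table")

    rows_out = []
    for row in table.read_rows():
        cells = {
            family: {
                qualifier.decode(): [c.value.decode(errors="replace") for c in column]
                for qualifier, column in columns.items()
            }
            for family, columns in row.cells.items()
        }
        rows_out.append({"key": row.row_key.decode(), "cells": cells})

    storage.Client(project="my-project") \
        .bucket("my-backup-bucket") \
        .blob("bigtable/my-table/snapshot.json") \
        .upload_from_string(json.dumps(rows_out))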
In addition to Google's disaster protection that @Greg Dubicki mentioned, at Egnyte we back up our mission-critical Bigtable data into GCS, as Hadoop sequence files, using a couple of Python wrappers around the Bigtable HBase shaded jar.
This provides for a quick recovery, fully under our control (i.e. no need to wait for Google support to recover data on demand), in case our BT cluster fails or an error on our software/admin side corrupts the data. A useful side effect is access to historical BT data for debugging.
Last week I wrote about that on Egnyte's engineering blog: https://medium.com/egnyte-engineering/bigtable-backup-for-disaster-recovery-9eeb5ea8e0fb. And we are thinking about open-sourcing this. We'll see how it goes.
UPDATE: On Thu Feb 20 I have published the scripts on Egnyte’s GitHub, under MIT license - https://github.com/egnyte/bigtable-backup-and-restore.
As of February 2020, Cloud Bigtable does provide backups, but they are only vaguely described as:
(...) we [do] create backups of your data to protect against catastrophic events and provide for disaster recovery.
Source
I am currently working on a file system application in C# that requires users to login to a Perforce server.
During our analysis, we figured that having unique P4 login accounts per user is not really beneficial and would require us to purchase more licenses.
Considering that these users are contractual and will only use the system for a predefined amount of time, it's hard to justify purchasing licenses for each new contractual user.
With that said, are there any disadvantages to having a "group" of users share one common login account to a Perforce server? For example, we'd have X groups who share X logins.
From a client-spec point of view, will Perforce be able to detect that even though someone synced to head, a newly logged-in user (who's on another machine) also needs to sync to head? Or are all files flagged as synced to head since someone else synced already?
Thanks
The client specs are per machine, and so will work in the scenario you give.
However, Perforce licenses are strictly per person, so you would be breaking the license agreement and using the software illegally. I really would not advocate that.
In addition to the 'real' people you need licenses for, you can ask for a couple of free 'robot' accounts to support things like automatic build services, admin etc.
Perforce have had arrangements in the past for licensing of temporary users such as interns, and so what I would recommend is you contact them and ask what they can do for you in your situation.
Greg has an excellent answer and you should follow his directions first. But I would like to make a point on the technical side of sharing clients across multiple machines: it is generally a bad idea. Perforce keeps track of the contents of each client by client name only. So if you sync a client on one machine and then try to sync the same client on another machine, the second machine will only get the "recently" changed files and none of the changes that were synced on the first machine.
The result is that you have to do a lot of force syncing, or keep track of the changelists you sync to and do some flushing and then syncing.
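To make that concrete, here is a small sketch using the P4Python API of what each machine sharing the same client ends up having to do before it can trust its local files; the server address, user, client name and depot path are made up for illustration:

    # Sketch of the workaround when one client spec is shared across machines,
    # using P4Python. Server address, user, client name and depot path are
    # made up for illustration.
    from P4 import P4, P4Exception

    p4 = P4()
    p4.port = "perforce.example.com:1666"
    p4.user = "shared_user"
    p4.client = "shared_client"

    try:
        p4.connect()
        # The server only tracks "have" state per client name, so a machine
        # that did not perform the last sync cannot trust a plain sync --
        # it has to force-sync to make its local files match what the server
        # thinks this client already has.
        p4.run_sync("-f", "//depot/project/...")
    except P4Exception:
        for err in p4.errors:
            print(err)
    finally:
        if p4.connected():
            p4.disconnect()

A cleaner setup is one client spec per machine, so each machine gets its own have-list and plain syncs behave as expected.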
I am looking for a laundry list of reasons why a large company with 24,000 employees would NOT want to put their primary SharePoint system for their internet presence into the cloud.
What are the limitations and challenges compared to operating your own farm servers?
Thank you for your thoughts.
Depending on the provider, they might limit your choice of webparts, addons, solutions, etc.
Check for:
Addons that contain unmanaged code (BPOS does not allow this, for example)
Addons that need to elevate privileges
Anything that needs to run in a full-trust environment
Ask about any other possible limitation. At 24k users, you probably are only looking at high-end providers, but ask, just in case.
You mean apart from the fact that hosting your entire body of sensitive company data, trade secrets and potentially HR data at a third party that may or may not do a good job securing it from other customers on the same cloud may be just a tiny little risk?
Or that an outage at the provider (like the Amazon S3 blackout yesterday) leaves you somewhat powerless and at the provider's mercy?