Redis server requirements for BlueDomino hosting

I have a Python / Redis service running on my desktop that I want to move to my Blue-Domino-hosted site. I've got Python available on the server, but not Redis. They don't give me root access to my Debian VM, so I can't git clone, extract, and install it myself from a Unix prompt.
Their tech support might do the install for me, but they need me to point them to server requirements, which I don't see on the Redis download page.
I could probably FTP binaries to the site if they were available, but that's dicey.
Has anyone dealt with this?

Installing Redis from source is actually quite easy. It doesn't have any dependencies, so just download the tarball, unpack it, and follow the install instructions. I'm always afraid of doing that sort of stuff, but with Redis it really was a breeze. If you don't dare to do it yourself, their tech support should be able to.
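Since you mention Python is already available on the server, even the fetch-and-build step can be scripted without shell access. A minimal sketch, assuming gcc and make exist on the host and that the redis-stable tarball URL below is still current (both are assumptions worth verifying):

    import os
    import subprocess
    import tarfile
    import urllib.request

    # Download and unpack the stable tarball entirely inside $HOME -- no root needed.
    url = "http://download.redis.io/redis-stable.tar.gz"  # assumed still current; verify
    home = os.path.expanduser("~")
    tarball = os.path.join(home, "redis-stable.tar.gz")

    urllib.request.urlretrieve(url, tarball)
    with tarfile.open(tarball) as tf:
        tf.extractall(home)

    # Redis has no build dependencies beyond a C toolchain, so a plain `make` suffices.
    subprocess.run(["make"], cwd=os.path.join(home, "redis-stable"), check=True)
    # Binaries end up in ~/redis-stable/src/ (redis-server, redis-cli).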

If it is an Intel/AMD server, you can compile Redis somewhere else (a 32-bit build, for example) and upload the binary. Then start it with Python. I did this myself a couple of weeks ago.
For the port, you will need something above 1024, since non-root processes cannot bind privileged ports; I don't recommend using the default port anyway. Remember to change the loglevel too. Daemonize works fine as non-root as well.
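For example, something like this can write a minimal config and launch the uploaded binary as a non-root user (the paths and the port number here are made up; adjust them to whatever your host allows):

    import subprocess
    from pathlib import Path

    home = Path.home()
    conf = home / "redis.conf"
    conf.write_text(
        "port 10123\n"          # hypothetical unprivileged port, not the default 6379
        "daemonize yes\n"       # forks into the background; works fine without root
        "loglevel warning\n"    # keep the log quiet on a shared host
        f"logfile {home / 'redis.log'}\n"
        f"dir {home}\n"         # where snapshot files (dump.rdb) are written
    )

    # Because of `daemonize yes`, redis-server forks and this call returns immediately.
    subprocess.run([str(home / "redis-server"), str(conf)], check=True)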
Some hosts block all external ports, so you will not be able to connect to Redis from outside, but that is only a problem if you connect from a different machine. Connecting from the same machine should be fine, since it is "internal".
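In that case the client just targets the loopback address, e.g. with the redis-py client (redis-py and the port are assumptions matching the sketch above; use whatever client your code already has):

    import redis

    r = redis.Redis(host="127.0.0.1", port=10123)  # loopback only; no external port needed
    r.set("hello", "world")
    print(r.get("hello"))  # b'world'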
However, I am unsure how the hosting administrator will react when he sees the process running :) I personally would kill it immediately.
There is another option as well: check out a service like Redis4you.com. Their free account is small, though, so you will probably need to spend some money for more RAM.

Is your hosting provider looking for a minimum set of system requirements for running Redis? This is indeed not listed on the Redis website, probably because there aren't many exotic requirements; it also depends a lot on your use case. Basically, what you need to run Redis is:
Operating system: Unix-like; Linux is recommended (one reason to favor Linux I've heard of is the performance of its TCP/IP stack).
Tools: GCC, make, and optionally git.
Memory: lots (no, seriously: this depends on your use case, but because Redis keeps everything in memory, you need at least more RAM than the size of your dataset; see the sketch after this list for measuring that).
Disk: disk access for making snapshots.
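To turn "lots" into an actual number, you can load a representative dataset and ask Redis itself how much it is using; a small sketch, assuming the redis-py client and a server on the default port:

    import redis

    # Assumes redis-py is installed and a server is listening on the default port.
    r = redis.Redis(host="127.0.0.1", port=6379)
    mem = r.info("memory")  # the memory section of the INFO command
    print("currently used:", mem["used_memory_human"])
    print("peak usage:    ", mem["used_memory_peak_human"])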

The problem seems to be dealing with something non-traditional with my BlueDomino hosting. Since this project is a new venture, I think the best course for me is to rent a small Linux VM from Rackspace and forget about the BD hosting.

Related

Liferay Cloud IDE, multiple developers working on the same Liferay server

We want to start working with Liferay, but the server is too heavy and the developers' computers don't have enough RAM. We want to centralize the server instance.
In other words, we want to build a development server where all developers can connect, develop directly in their web browsers, compile, view the result, and push the code to a git repository.
I found some good cloud IDEs like Eclipse Che, and a good Maven archetype for Liferay projects, so I can build the project with Maven. But now I want to know whether it is possible to configure Liferay so that every developer can work without troubling the others. And if possible, how?
The developers can share the same database and can use different ports. Maybe the server can generate temporary URLs, like some online cloud editors do.
I found this post, Liferay With Multiple Server Instances, but I don't think that is the best way, because it creates one server per project. I think that is too heavy.
If necessary, we have Kubernetes in our IS.
Liferay's Tomcat bundle, by default, is configured to take a maximum of 2.5G for the process, but it can run with far less - the default was only recently bumped up, because many people never change it and then wonder why production systems run out of memory. For one concurrent user (the sole developer) on a machine, I would guess that the previous default of 1G heap space is enough. Are you saying that that's too much for your developers' machines?
Having many developers on a shared server poses one problem: yes, you may deploy different code from different machines, but what about setting a breakpoint? Can you connect with multiple debuggers? If something fails, how do you know whose recent deployment caused the failure?
Sharing a server is an integration technique, not a development technique. If your developers don't have enough memory available for running their own Liferay server next to their IDE, it's a lot cheaper to upgrade their machines than to slow them down when everybody is accessing the same server and they can't properly debug. You pay for the memory once, but you pay your waiting developers by the hour.
Is it possible to share one server? Sure it is.
Is it possible to share one server without troubling each other? I doubt it.
When you say you think it's too heavy: what are you basing that assumption on? What does the actual developer machine look like, and what keeps you from investing in the extra memory?
It's trivial to share some infrastructure - i.e. have all of them connect to the same database server (and give everyone their own schema). But just the extra effort and setup might require you to pay the developers by the hour as much as you'd otherwise pay for a couple of memory chips.
And yet another option is: Run Liferay on a remote server, but keep 1 instance per developer. This way you don't need the local memory, but can have the memory in the cloud. Calculate if you pay more for remote cloud machines than for local memory - that decision is up to you.

Redis localhost limitations and costs

I downloaded the Redis server and CLI to my local machine and it is working well.
I just want to know whether I can also use it on a production server:
Are there any critical limitations? For example, can I use 100 GB for free? (It will be on my own computer.)
I know that Redis Labs costs money per month, but if I download Redis to my machine and don't use Redis Labs, would it be free? (So the only cost would be the storage of the machine I'm using.)
Redis is an open source software, licensed under BSD. That basically means you can do anything you want with it, without owing anyone anything.
Redis Labs, the home of open source Redis and a provider of commercial products that build on it, offers a wide spectrum of solutions: hosted, as-a-service, downloadable, remotely managed, and so forth. You can (and sometimes should) use them, but that's definitely not a requirement.
Disclaimer: I work at Redis Labs and with the open source project.

Sandbox a Java applet

I've heard that the Minecraft server is very leaky and can consume a lot of resources very quickly. People say to use a virtual machine, which is all well and good. I'm making an application to automate server setup, and I'd like my whole application (including Minecraft) to run in an ultra-basic, auto-setup VM (or something similar). I've heard of MineOS, but I'm not sure if that can be set up very quickly. The VM will be so basic it won't even have a UI. I'm using a Mac, and I'm not planning to distribute the server WITH the application, but to have it downloaded from the Minecraft site, unmodified.
I want it to be a one-click-done solution for the end user: they won't have to worry about the Minecraft server gobbling up resources, because it'll be in a controllable virtual machine.
Distributing the Minecraft server (Notch's property) could be an issue, but if anyone knows about that I'd be happy to hear.
If you intend for the server to be fully configured so that your user only has to download and 'open' it, what you're seeking is known as an 'appliance'. VirtualBox supports an open standard for such appliances, allowing a single file to be distributed that contains all the virtualized hardware info as well as the OS/filesystem. A number of other formats exist as well, such as Turnkey.
In all likelihood, MineOS CRUX would be perfectly suited to this sort of one-click-done setup, since the OS was designed for pretty much exactly what you're trying to do... only without the configure-the-hardware-for-the-user part (it uses an ISO and an installer, a process you would automate for the end user).
That said, this distribution has never at any point packaged Minecraft files, as clearly stated: "this Linux distro does not contain ANY Minecraft files. The scripts are, however, designed to download/update files directly from the source: http://minecraft.net"
Hope this answers all the concerns, despite being an old thread.

How to Test a Network Application with only a Single Computer?

I want to kick myself into learning network programming, starting with implementing existing network protocols. I've finished the (rudimentary) design and will start coding soon. The problem I haven't been able to figure out a solution to is testing: I only have one Windows laptop, running Windows 7 Pro, with only a recovery disc (no installation disc) that obviously cannot be used in a VM.
Hard-coding input/output data clearly isn't a good way to test any sort of program. So, what solutions can I look into?
Thanks for your time.
P.S.: In case this matters, I'll do the coding in C++.
You can run a client and a server on the same machine. When accessing the network layer, just use the local loopback interface (127.0.0.1 for IPv4 or ::1 for IPv6) to connect to your server when you run the client.
If you mention which APIs you will be using (WinInet, APR, Boost, etc.), a more detailed answer would be possible.
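Whatever API you end up with, the structure is the same; here is the idea as a minimal Python sketch (the equivalent C++ would make the same socket calls through Winsock, APR, or Boost.Asio):

    import socket
    import threading

    HOST = "127.0.0.1"  # loopback: packets never leave the machine

    def serve(listener):
        conn, _ = listener.accept()      # wait for the client to connect
        with conn:
            data = conn.recv(1024)       # read one request
            conn.sendall(data.upper())   # toy "protocol": echo it back uppercased

    # Bind to port 0 so the OS picks a free ephemeral port.
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind((HOST, 0))
    listener.listen(1)
    port = listener.getsockname()[1]
    threading.Thread(target=serve, args=(listener,), daemon=True).start()

    # The "remote" client is just another socket on the same host.
    with socket.create_connection((HOST, port)) as client:
        client.sendall(b"hello")
        print(client.recv(1024))  # b'HELLO'
    listener.close()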
What about a VM with Ubuntu or some other distro of Linux?

Setting up a development environment INSIDE a virtual machine

Here's the problem: I use around three different machines for development. My partner is using two. We have to go through the same freaking setup procedure on all five machines to get to work.
We're working with a PHP project here, so:
Install and configure PDT, a PHP debugger, and some version of XAMPP.
Then possibly install an SVN client, and any other tools.
Again, on each of the five machines.
What if, instead, we did all of this once, in a virtual machine that is set up with the same stack and same versions as the production server? Then each of us could grab a copy of the VM image, run that image on each of the five machines, and do all of our development in that VM. Put Eclipse, Apache, MySQL, the works, all in that VM.
The only negative of this approach (and please correct me on the "only" part) is performance. Is it really that big of an issue, though? The slowest machine of the five is a Samsung NC10 powered by a 1.6 GHz Intel Atom processor.
Do you think this is possible and practically usable? Or am I crazy?
I use a VM for development (running on my laptop) and have never had performance problems. Another approach that you could take would be to image the drive in the state that you want. Use Acronis or Ghost to re-image each machine when you need to. Only takes about 5-10 minutes to restore an image on any modern PC.
I use a VM for all my "work" as it keeps it away from my "play". This setup allows me to use the office VPN without exposing my whole machine to the office environment (which I trust about as much as the internets ;-) ). Also, I don't have to worry about messing up my development environment by trying games or other software. My work VM currently runs inside VirtualBox, but I have used VMware in the past. I have only noticed performance issues when using graphics-intensive programs like WebEx or the Terminal Server Client.
It can certainly be done. What turns me off is the size of the VM image, which would normally be several GBs. Having it on a network share means it can take longer to transfer than your current setup process takes. I guess an external hard drive would be the easiest way to move it around.
Performance wouldn't be an issue with any web development.
I have to ask why your current machines need to be "re-imaged" each time you sit down for work?
If you're using Windows you'll probably want to use SYSPREP on the master image so that the 'mini-setup' runs when you boot up the virtual machines for the first time.
Otherwise, from Windows' point of view, the machines have exactly the same SID, hostname, and other identifiers; running multiple machines with the same SID on the same network can cause tons of headaches, even more so if you want them to communicate with each other.
I've run WebSphere for zSeries on a VMware virtual machine with no problem, and WebSphere is more resource-intensive than any PHP stack. I find that having a multi-core machine, or at least hyper-threading, makes it run a lot faster.
With VMware, disk operations are slower. For PHP development I doubt it would be a problem, but you'd definitely notice it if you were compiling a large C++ project. There is also Sun's VirtualBox, which is free, and the latest version is rather nice (but I haven't looked at how slow its disk operations are yet).
I am using that idea in practice. Virtual machines are generally great for development.
They let you run multiple operating systems and multiple separate development environments.
They preserve older development environments for later support.
They can be easily backed up; when a hard drive crashes there is no need to start from the beginning.
They can be copied from one developer to another, so not everyone has to go through tedious installations and configurations.
Downsides are:
Virtual machines are slower, so you need more powerful computers than you would otherwise. I would recommend at least 4 GB of RAM, but preferably more like 16, plus fast multi-core processors and fast hard drives.
When copying Windows VMs, each copy of the virtual machine needs its own product key; when you make a copy, it has to be activated with a new key.
Did you think about a software configuration manager like Ansible, Chef, or Puppet? With such software, automating tasks like this is very easy! It can even create a fresh VM and then configure it.