Disable the execute permission over the SMB protocol on a Synology NAS

I run a little NAS with access for a lot of people who are not quite good with the basics.
To avoid saturating the NAS, I need to make a rule that prevents users who are not administrators from executing files on the shared folder over the SMB protocol. It may seem a little restrictive, but with ten people opening heavy files and working on them directly on the NAS, it gets really slow and often shuts down.
I run a Synology DS214se on DSM 7.1.1, and if anybody can help I'll be glad.
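In case it helps anyone answering: on a stock Samba setup, the usual way to do this is to strip the execute bits with create masks in the share definition. This is only a sketch - the share name and path are made up, and DSM regenerates its own smb.conf, so manual edits there may not survive; the equivalent settings would have to go through whatever mechanism DSM supports:

    [shared]
        path = /volume1/shared
        # strip the execute bits from files created or stored over SMB
        create mask = 0644
        force create mode = 0644
        # directories keep the traverse bit so they can still be opened
        directory mask = 0755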

Related

Liferay Cloud IDE, multiple developers working on the same Liferay server

We want to start working with Liferay, but the server is too heavy and the developers' computers don't have enough RAM, so we want to centralize the server instance.
In other words, we want to build a development server where all developers can connect and develop directly in their web browser, compile, view the result, and push the code to a git repository.
I found some good cloud IDEs like Eclipse Che and a good Maven archetype for Liferay projects, so I can build the project with Maven. But now I want to know whether it is possible to configure Liferay so that every developer can work without troubling the others. And if it is possible, how?
The developers can share the same database and can use different ports. Maybe the server can generate temporary URLs like some online cloud editors do.
I found this post, Liferay With Multiple Server Instances, but I don't think it is the best way because it creates one server per project. I think that is too heavy.
If necessary, we have Kubernetes in our IS.
Liferay's Tomcat bundle, by default, is configured to take a maximum of 2.5G for the process, but it can run with far less - the default was only recently bumped up, because many people never change it and then wonder why production systems run out of memory. For one concurrent user (the sole developer) on a machine, I'd guess that the previous default of 1G heap space is enough. Are you saying that that's too much for your developers' machines?
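For reference, the heap cap in a Liferay Tomcat bundle is set through CATALINA_OPTS in tomcat/bin/setenv.sh; a minimal sketch with illustrative values (check the actual flags your bundle ships with):

    # tomcat-*/bin/setenv.sh - values are illustrative, not Liferay's defaults
    CATALINA_OPTS="$CATALINA_OPTS -Xms512m -Xmx1024m"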
Having many developers on a shared server poses one problem: yes, you may deploy different code from different machines, but: how about setting a breakpoint? Can you connect with multiple debuggers? If something fails, how do you know whose recent deployment caused the failure?
Sharing a server is an integration technique, not a development technique. If your developers don't have enough memory available for running their own Liferay server next to their IDE, it's a lot cheaper to upgrade their machines than to slow them down when everybody is accessing the same server and nobody can properly debug. You pay for the memory once, but you pay your waiting developers by the hour.
Is it possible to share one server? Sure it is.
Is it possible to share one server without troubling each other? I doubt it.
When you say: You think it's too heavy: What are you basing that assumption on? What does the actual developer machine look like and what keeps you from investing in the extra memory?
It's trivial to share some infrastructure - i.e. have all of them connect to the same database server (and give everyone their own schema), as sketched below. But just the extra effort and setup might require you to pay the developers by the hour as much as you'd otherwise pay for a couple of memory chips.
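A minimal sketch of the shared-database part, assuming MySQL; the schema, user, and password names are made up:

    # one isolated schema per developer on the shared database server
    mysql -u root -p -e "CREATE SCHEMA lportal_alice;
      CREATE USER 'alice'@'%' IDENTIFIED BY 'changeme';
      GRANT ALL PRIVILEGES ON lportal_alice.* TO 'alice'@'%';"

Each developer then points their local portal-ext.properties at their own schema.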
And yet another option is: run Liferay on a remote server, but keep one instance per developer. This way you don't need the local memory but can have the memory in the cloud. Calculate whether you pay more for remote cloud machines than for local memory - that decision is up to you.
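Since Kubernetes was mentioned: the one-instance-per-developer option could look roughly like this. Purely illustrative - the image tag, names, and memory limit are assumptions:

    # one Liferay deployment per developer (repeat per person)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: liferay-alice
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: liferay-alice
      template:
        metadata:
          labels:
            app: liferay-alice
        spec:
          containers:
          - name: liferay
            image: liferay/portal:7.1.3-ga4   # tag is illustrative
            resources:
              limits:
                memory: "2560Mi"              # matches the 2.5G default mentioned above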

Apache restricting bandwidth

I've been looking around the web without much success.
I am running a local XAMPP (1.7.0) installation and a web app that I have developed that backs up my files and sends them to an FTP server. The problem is that Apache seems to be using only a limited amount of my bandwidth and I am unsure why this is happening.
It usually doesn't get above 64KB/s, but I know that my current broadband will allow over 1MB/s, which is a massive difference. Also, if I use my FTP program to log in to the server it will let me download in excess of 500KB/s. Does anyone know how I can get around this, because my backups are very big files and take hours to copy at 64KB/s?
Thanks, Mic
You seem to be confusing download and upload speeds: "if i use my FTP program to login to the server it will let me download in excess of 500KB/s."
Sending backups to an FTP server is an upload, so it runs at your connection's (much lower) upstream rate, while your FTP download test only exercises the downstream rate.
Are you perhaps on a 1Mb/64Kb ADSL or cable connection?

REDIS server requirements for BlueDomino hosting

I have a Python / Redis service running on my desk that I want to move to my Blue-Domino-hosted site. I've got Python available on the server, but not Redis. They don't give me root access to my Debian VM, so I can't fetch the source, extract it, and install it myself from a Unix prompt.
Their tech support might do the install for me, but they need me to point them to the server requirements, which I don't see on the Redis download page.
I could probably FTP binaries to the site if they were available, but that's dicey.
Has anyone dealt with this?
Installing Redis from source is actually quite easy. It doesn't have any dependencies, so just download the tarball, unzip it, and follow the install instructions. I'm always afraid of doing that sort of stuff, but with Redis it really was a breeze. If you don't dare to do it yourself, their tech support should be able to.
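For the record, the whole build is just a few commands; the version number below is illustrative, grab whatever is current from the download page:

    wget http://download.redis.io/releases/redis-2.8.19.tar.gz
    tar xzf redis-2.8.19.tar.gz
    cd redis-2.8.19
    make
    # the binaries end up in src/
    src/redis-server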
If it is an Intel/AMD server, you can compile Redis somewhere else (a 32-bit build, for example) and upload the binary. Then start it from Python. I did this myself a couple of weeks ago.
For the port, you will need to use something above 1000; I don't recommend using the default port. Remember to change the log level too. Daemonizing works well as non-root too.
Some servers block all external ports, so you will not be able to connect to Redis from outside, but this is only a problem if you connect from a different machine. From the same machine it should be OK, since that traffic is "internal". A config sketch along these lines follows below.
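Something like this in redis.conf, with made-up paths and port:

    # redis.conf - illustrative settings for a shared host
    port 16379                          # something above 1000, not the default 6379
    loglevel notice
    logfile /home/user/redis/redis.log
    daemonize yes                       # works fine as a non-root user
    bind 127.0.0.1                      # local connections only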
However, I am unsure how the hosting administrator will react when he sees the process running :) I personally would kill it immediately.
There is another option as well - check out a service like Redis4you.com. But their free account is small, and you will probably need to spend some money for more RAM.
Is your hosting provider looking for a minimum set of system requirements for running Redis? This is indeed not listed on the Redis website, probably because there aren't many exotic requirements, and it also depends a lot on your use case. Basically what you need to run Redis is:
Operating system: Unix-like; Linux is recommended (one reason to favor Linux I've heard of is the performance of its TCP/IP stack)
Tools: GCC, make, (git).
Memory: lots (no, seriously, this depends on your use case, but because Redis keeps everything in memory you need at least more RAM than the size of your dataset).
Disk: disk access for making snapshots.
The problem seems to be that I'm dealing with something non-traditional in my BlueDomino hosting. Since this project is a new venture, I think the best course for me is to rent a small Linux VM from Rackspace and forget about the BD hosting.

Does it make sense to put all development work in the cloud?

Is it possible to have virtual machines in the cloud, install Visual Studio there, and have developers use the 'cloud' to do their day-to-day programming work? Is the cost going to be too high? Is the speed going to be too slow?
Where can I find statistics or numbers to convince people?
I like using remote virtual machines to run development servers, but I don't like using my IDE on a remote server. The latency is noticeable. If you're without an internet connection you can't work. My happy compromise is to have a dev server available (EC2) and sync it with my laptop via git.
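The sync itself can be as simple as a bare repository on the dev server; the hostnames and paths here are made up:

    # on the EC2 dev server: create a bare repo to push to
    ssh user@ec2-host 'git init --bare ~/project.git'
    # on the laptop: add it as a remote and sync when online
    git remote add devserver user@ec2-host:project.git
    git push devserver master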
It is completely possible to do this. Using a service like Rackspace, you can set up a fairly powerful Windows server for as little as $60 a month:
http://www.rackspacecloud.com/cloud_hosting_products/servers/pricing
In my experience, using Remote Desktop to log into a Rackspace Windows Cloud Server has been snappy and quick (of course, a lot of that depends on the strength of your internet connection). The process of standing up the server is lightning fast, backing it up is even easier, and it can easily be resized down the line if you need more storage/bandwidth.
These days I don't understand why a small to mid-sized organization would actually waste capital on server hardware.
Evan

Opening a project in an IDE / editor over Samba = SLOW

I'm not sure if this is the correct forum to ask this question, so I'll probably be told about it, but anyway - I'm connected to the Samba share on my company's development server from my home (where I work now), and when viewing the files through Explorer (Windows 7) the browsing is relatively quick. However, when I open a directory on the Samba drive as a project in an IDE - whether it be Aptana or eTextEditor - browsing the directories in the project panel is unbearably slow.
Any ideas?
Thanks in advance.
We did extensive trials with the source stored on an SMB (CIFS) mounted disk at the enterprise software company where I work, and the conclusion was that it is not possible to tweak this to any acceptable performance, since CIFS performance when handling large numbers of small files is so poor.
In our scenario the terminal server running the IDE was in the same data center as the app servers serving the CIFS shares, so the network performed comparably to local disks.
We also invested some time trying this out with NFS on Windows, but the performance was only slightly better there. For comparison we set up the same scenario with NFS and Linux, and that turned out to work rather okay.
The difference between Explorer and an IDE is that Explorer only bothers about one directory or file at a time, while your IDE will access all of your files a lot - indexing them, building the project tree, and so on.
The way to go is probably a VCS with a local install of the IDE at home, or a remote desktop solution.