I have a server with 10+ virtual domains (most running MediaWiki). I'd like to be able to watch their traffic remotely with something nicer than tail -f. I could cobble something together, but I was wondering whether something super-deluxe already exists that involves a minimum of hacking and support. This is mostly to understand what's going on, not so much for security (though it could serve that role too). It must:
Be able to deal with vhost log files
Be able to handle updates every 10 seconds or so
Be free/open source
The nice-to-haves are:
Browser-based display (served by a web app/daemon on the server)
Support filters (bots, etc.)
Features like per-page counters, with click to view history
Show a nice graphical display of a geographic map, timeline, etc.
Identify individual browsers
Show link relationships (coming from a remote site, to a page, to another page)
Be able to identify log-file patterns (editing or creating a page)
I run Debian on the server.
Thanks!
Take a look at Splunk.
I'm not sure whether it supports real-time (~10 second) updates, but there are a ton of features and it's pretty easy to set up.
The free version has some limitations but there is also an enterprise version.
Logstash is the current answer. (=
Depending on your volume, Papertrail could be free for you. It is the closest thing to a tail -f that is searchable and archivable, and it also sends alerts based on custom criteria.
Should developers have administrator permissions on their PC or is giving them power user access sufficient?
Some comments:
If they want to try out some new application that would need installing, then they could try it on a virtual machine and later get the network administrator to install it for them. Do you think that would work?
Is there anything that a developer needs to do on their PC that would require administrator permissions?
We are a team of 5 developers building web applications.
The answer is 'Yes'. Developers will need to frig with system configurations to test things, install software (if nothing else, to test the installation process of whatever they happen to be developing), poke about in the registry and run software that will not work properly without admin privileges, just to list a few items. There are a host of other tasks integral to development work that require administrative privileges.
Bearing in mind that development staff do not necessarily have root access to production systems, admin rights on a local PC do not significantly compromise the security of production systems. There is almost no legitimate operational reason for restricting admin access to local PCs for staff who need it to do their job.
However, the most important reason to provide administrative access is that setting up a compromised or second-rate development environment sends a message to your development staff:
'We value your work so little that we are prepared to significantly compromise your ability to do your job for no good reason. In fact, we are quite happy to do this to cover our own arse, pander to the whims of petty bureaucracy or because we simply can't be bothered. That's just the best case. The worst case is that we're really the type of control freaks who view it as our prerogative to tell you how to do your job and what you do or don't need to do it. Make do with what you're given and be grateful that you've got a job at all.'
Generally, providing a second-rate (let alone fundamentally flawed) work environment for development staff is a recipe for the natural consequences of pissing off your staff: inability to retain competent people, high staff turnover, poor morale and poor-quality delivery. Going out of your way to do so, particularly if there's an overtone of pandering to bureaucratic whim, is just irresponsible.
Bear in mind that staff turnover doesn't just incur the cost of replacing people. The most serious cost is that most of those who stick around will be the deadwood who can't get a better job. Over time this degrades the capabilities of the departments affected. If your industry is sufficiently close-knit, you can also find yourself getting a reputation.
One point to note is that administrative privileges are far less of an issue for development on unix-oid or mainframe systems than they are on Windows. On those platforms a user can do far more in their own domain without needing system-wide permissions. You will probably still want root or sudo access for developers, but not having it will get in the way much less often. This flexibility is a significant but lesser-known reason for the continuing popularity of unix-derived operating systems in computer science schools.
Developers should have full and total control of the machine they are using. Most debugging tools require admin permissions in order to hook into the runtime of the application they are building.
Further, devs frequently download and try new things. Adding extra steps, such as needing a network admin to come by and install something for them, simply frustrates the dev and will quickly make life hell for the network ops person.
That said, they should be an admin on THEIR box, not the network.
Yes and no.
Yes, it saves lots of time bothering system support.
No, your users don't have it, so don't count on it.
We develop with admin permissions and test without, which works out right.
Local admin yes, for all of the reasons stated above. Network admin no, because they will inevitably be drawn into network administration tasks because "they can". Devs should be developing. Network administration is an entirely different job.
Developers normally need to do things that the average person wouldn't, and so should normally have administrator accounts. Making them hop through awkward hoops wastes their time and demoralizes them. There may be exceptions in high-security situations, but if you can't trust somebody with an admin account you sure can't trust their code.
They should also have an available account with the same permissions as their users (more than one account if the pool of users has different permission levels). Otherwise, they may develop something cool, deploy it, and then find it won't work for the users.
There are also too many ways to screw up computers with admin accounts (yes, I've done it). The IT department needs a policy that they will re-image a developer's computer if they can't fix it quickly. At one place I contracted, I had to sign a copy of that policy to get my admin account.
This is a pretty Windows-specific answer. On Linux and other Unix-y systems, developers can more often get by with user accounts only, and often don't need a separate account for testing (if they've got an account they can sudo with, they know when they're using sudo, though they may need one with the same group permissions as their users). They can still do incredible amounts of damage to the OS very easily, so the same IT policy is necessary.
Having endured the pain of developing without admin rights on my machine, my answer can only be yes, it's essential.
Yes. Half-Life 1 (and all the related mods: Counter-Strike, Day of Defeat, etc.) needs administrator rights (at least for the first run, I think) to work properly on Windows NT, 2000, XP, etc.
And what kind of developer doesn't play Counter-Strike at lunchtime? (A crappy one, for sure.)
Absolutely! How else would I install the download manager to download movies at night?
Sometimes developers really need to install things or change something in the system to test out an idea. That's impossible if you have to call the admin every time you need to change something.
I have also personally observed that some admins tend to lock down everything they possibly can, so that even little things depend on them on a daily basis. Thus... what, securing their job? Pissing off the other users? I have no answer, but common sense is nowhere to be seen here.
The last time there was a problem with my PC, I took an active part in restoring the system, making suggestions and working as a team with the admin, or so I thought. The admin turned out to be very angry and accused me of trying to teach him his job or redefine the rules. I suppose it was just his ego, as he wasn't seen as that cool among the other colleagues in our room.
The answer is: developers should have two machines!
One development machine with admin privileges and sufficient power, memory, screen size and portability, with corporate antivirus software loaded but configurable by the developer when required, under an auto-reset policy.
One corporate machine with the corporate load, policies, non-admin user privileges, etc. The developer can use this one for unit testing release-mode applications, since some developers have the nasty habit of doing all unit testing with administrator privileges.
If you invert the question, I think it becomes easier to answer: should we remove administrator permissions from developers? What is the gain?
But actually, I think the answer depends on your context and environment. A small startup will have a different answer from an ISO-certified government agency.
Yes, but they need to be aware of the limitations that their users will face when running software in a more limited environment. Developers should have easy access to "typical" environments with limited resources and permissions. In the past I have incorporated deploying builds to one of these "typical" systems (often a VM on my own workstation) as part of the build process, so that I could always get a quick feel for how the software worked on an end-user's machine.
Programmers also have a responsibility to know the hard-and-fast rules of writing software for non-admin users. They should know exactly which system resources they are always allowed (or forbidden) to access. They should know the APIs that are used to acquire these resources.
"It works on my machine" is never an excuse!
As a systems admin, I'm all for developers having local admin rights on their workstations. When possible, it's not a bad idea to do most things with a standard 'user'-level account and then use another 'admin' account to make changes, install apps, etc. Often you can sudo or runas to accomplish what you want without even logging out. It also helps remind us of the security hurdles the end users will have to jump through when we release to production.
On a side note, it's also advisable to have a clean system or VM(s) so that you can test things properly and not get into the "it looks/works fine on my system" scenario caused by system tweaking.
No Power User
First of all, a Power User is basically an administrator, so "limiting" a user to Power User provides no increase in security: you might as well be an administrator.
Log on interactively as a normal user
Second, of course a developer needs administrative access to their development machine (and servers and second boxes and so on), but no one should log on interactively as administrator during normal development or testing. Use a normal user account for this and for most applications.
You seriously do not want to run [insert any browser, plugin, IM, E-mail client and so on] as an administrator.
You don't normally log onto your Linux box as root either, even if you likely have root access when you need it.
Use a separate personal administrator account
Provide the developer with a separate personal administrator account for his/her machine (preferably a domain account) that is also a valid administrator on the other dev/test servers and boxes that person needs administrative access to.
Use "run as" (and, on Vista and later, UAC prompts) to enter the administrative credentials only for the tasks and processes that need them. PKI with smartcards or similar can greatly reduce the strain of entering credentials often.
Everyone is happy (right? ;)
Then audit access. This way there's traceability, and an easy way to find out who is using the Terminal Services sessions on the particular dev/test server you need to access right now...
Granted, there's definitely development work that will never require local administrator privileges, like most web development, where deployment is tested against a separate server or virtual machine, and where Cassini or whatever is used for local debugging actually runs fine as a normal user.
[Apologies, English is not my mother tongue; doing my best :)]
Well, personal experience (I'm a C++/SQL dev):
I used to be an admin of my Windows machine in my previous job. I also had dbo (not dba) rights on databases, including production environment databases. In two and a half years with 8 people having these crazy-high rights, we never had any trouble. Actually, we solved a lot of problems by updating the DB manually. We could do many things really fast for hotfixes and dev work.
Now I have changed jobs. I managed (crying a lot) to become admin of my Windows machine. But the dev server is a Red Hat server that we connect to using ssh. Trying to install Qt was torture: quota limits, space limits, execution and write rights. We finally gave up and asked the admin to do it for us. Two weeks later, still nothing is installed. I'm getting really fast at newspaper reading and alt+tab hitting.
I asked for admin rights, as only the devs of my software use this machine.
--> Answer: "If there are processes, it's precisely so that you don't do whatever you want. It has to run fine once in prod."
--> Trying to explain to a non-technical manager: "I should have no admin rights whatsoever in production or UAT environments, but my dev machine is different. If I built chairs instead of software, would you tell me that I can't put whatever tools I want in my workshop because my workshop needs to look like the place where the chair will be used? I deliver an executable package to UAT. The libs and tools I used to build it are invisible to the end user or to the person installing the package."
I'm still waiting today. In the meantime I found a solution: open a dev environment, go to your favorite online judge, and challenge yourself. When somebody looks at your screen, they'll see you programming. ;)
I work primarily in the *nix world and the standard model there is for developers to work in a normal, non-privileged user account with the ability (via sudo or su) to escalate to admin privileges as/when necessary.
I'm not sure what the equivalent Windows arrangement would be, but this is, in my experience, the ideal setup:
On the one hand, having admin rights available on demand gives the developer full power over his workstation when needed.
On the other, Windows software has a long, long history of assuming that all users have admin rights, to the point that many programs won't run for a non-admin user. Many of Windows' security issues stem directly from this implicit requirement that, in order to use the computer reliably, all users must be admins. This must change, and the most effective way to ensure that your software will run for non-admin users is for your developers to run it themselves as non-admin users.
You can answer this in two ways: yes and no, or it depends. (Can I be more vague?)
It depends on whether it is required for them to do their job. If it is, then grant them administrative powers over their computer. If not, then don't. Not all software development requires an engineer to have admin rights.
Yes and no, depending on your view. Some engineers view their computer as their domain, and they are the rulers of that domain. Others don't want the responsibility.
I have worked at one company where I did not have admin rights, and whenever I needed to do something that required them I had to call the help desk, which granted me temporary admin rights until I rebooted. This was a pain at times, but that was the way it was, so I lived with it. I have also worked at places where I had full admin rights to my computer. That was great, except for the time I installed some software that hosed the OS and I had to take my computer to the help desk and have them re-image the hard drive...
I personally feel that an engineer should have admin rights to their computer, but with the understanding that if they screw it up, a new baseline image gets reloaded and they lose anything done since the original baseline. I don't believe that everyone in a company should have admin rights to their computer, however. Accounting, administrative assistants and other departments don't really need those rights, so they should not be granted them.
https://msdn.microsoft.com/en-us/library/aa302367.aspx
In my experience, a compromise between us (coders) and them (security) is always needed. I admit (though I hate to) that there is merit in the Microsoft article above. As I have been a programmer for years, I have experienced the pain of needing to install just a different debugger, only to get annoyed that I can't. It forced me to think creatively about how to get my job done. After years of battling our security team (and several discussions), I understand their job of having to secure all areas, including my desktop. They showed me the daily vulnerabilities that come out, even in something as simple as the QuickTime app. I can see their frustration every time I want to install a quick utility or tweak my local IIS, because I could cause a serious security problem. I didn't fully understand this until I saw another developer get canned. He was trying to debug and ended up shutting off Symantec, only to get (and then GIVE) some virus to hundreds of people. It was a mess. In talking to one of the "secheads" (security guys) about what happened, I could see he wanted to just say, "told you so...".
I have learned that our secheads (well, at least mine) just want to protect our company. The good news is that we did find a compromise: I can get my job done, and the secheads are happy with our secure network!
Creed
Yes, if you want pentesters or skilled malicious users to get a foothold for compromising your domain.
That is: compromise a low-level account -> find where an admin is logged on -> Mimikatz -> elevate permissions -> domain admin.
So no, normal users should not be admins.
Also, Microsoft has said UAC is not a security boundary, so don't use it as one. There are various real-world bypasses of UAC available.
If they need admin rights as part of their job role, then give out separate domain accounts used for installing software only (with admin permissions on their own machine only), never for general usage or internet access. These accounts should have a more stringent password policy (e.g. a 15-character minimum length). Runas functionality should be used for this.
Any environment where normal user accounts are admin is a recipe for a security disaster.
Wow, this question is certainly going to open up some interesting answers. In reply I quote the oft-used "It depends". :)
In small companies this might simply be a matter of being pragmatic. The developers are also likely to be the most technically adept, so it makes sense for them to administer their own machines.
Personally, I'm a fan of the "admin account" that can be used when necessary, i.e. "Run As..." (I later noticed this approach is very similar in principle to UAC).
If you are developing desktop software, it's not a bad idea for developers to work within the confines that their end users will experience, i.e. limited or restricted rights. If you build the software under limited rights, there's a good chance you'll hit the same problems your target users would face given the same set of permissions.
Having said that, if you have a good testing lab and/or a decent QA team this might be a moot point - especially if you have a half decent ALM practice.
So finally - I develop without UAC, mainly because I trust myself and my skills. In a team environment, I'd put it to a vote. In larger organizations you might not have this freedom.. The Enterprise Admins often have the final say :)
At my company, developers, engineers, and my boss (owner of the company) have local admin privilege. My boss also has network admin privilege, just in case I get hit by that wayward bus (or quit). Everyone else gets locked down.
As sysadmin, this setup has caused me a little grief from time to time, especially when unapproved software gets installed. However, coming from a developer background, I understand the need for power users to have more control over their environment and as such, am willing to put up with the occasional quirk or problem that may surface. I do perform routine backups of their workstations -- just in case.
By the way, I've had more problems with the boss tinkering around with things than with anyone else. Kind of like the old question, "Where does an elephant sit? Anywhere he wants!"
But in a small firm where he is essentially the "backup" sysadmin, there isn't much choice.
It depends on the developer's skills and on whether s/he is a consultant or not.
I think it's reasonable for a seasoned and trustworthy developer to have the right to do whatever s/he wants with her/his PC, as long as it doesn't harm her/his productivity.
No one on Windows XP should be using an administrator account for day-to-day use, and on Vista, if you must be an administrator, at least have UAC enabled. This goes especially for web developers and other developers who browse the web with Internet Explorer.
What you can do is have developers use their regular user account but give them a second account that is an administrator on their PC, so they can use it as needed (Run As). I know they said web development, but for Windows development your software should be tested using a regular user account, not as an administrator.
Lots of people have things that their systems do for them or for their teams. Source control post-commit hooks are a standard example: having an automated build system that checks out the latest source, compiles, tests, and packages it is a back-office hack that most of us probably use.
What other cool things have you done?
We had one developer on our team who wasn't familiar with the concept of a Subversion conflict. He deduced that if he simply deleted all that weird stuff in his code and clicked resolve, everything would be OK (i.e. knocking out all the other changes in the file...).
Needless to say, after the 5th time this occurred, and the 5th time I had to explain why the defect I had just closed was recurring, I wrote a script.
It would diff the changes to a file to see whether a consecutive check-in deleted all the previous changes and whether it was done by the nameless developer.
It would then send an email to the boss with a description of what happened and how much work was lost in the check-in.
There was no 7th occurrence.
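A minimal sketch of that sort of check, written as a Subversion post-commit hook in Python. The author name, threshold and addresses are placeholders, and the "mostly deletions" heuristic is a guess at what the original script looked for:
#!/usr/bin/env python3
"""Post-commit hook sketch: flag commits that mostly delete others' work."""
import smtplib
import subprocess
import sys
from email.message import EmailMessage

SUSPECT = "nameless_developer"   # placeholder author
THRESHOLD = 50                   # removed lines that trigger an alert

def svnlook(subcmd, repo, rev):
    return subprocess.run(["svnlook", subcmd, "-r", rev, repo],
                          capture_output=True, text=True, check=True).stdout

def main(repo, rev):
    if svnlook("author", repo, rev).strip() != SUSPECT:
        return
    # Count removed lines in the commit's diff.
    diff = svnlook("diff", repo, rev)
    removed = sum(1 for line in diff.splitlines()
                  if line.startswith("-") and not line.startswith("---"))
    if removed < THRESHOLD:
        return
    msg = EmailMessage()
    msg["Subject"] = f"r{rev}: {SUSPECT} deleted {removed} lines"
    msg["From"], msg["To"] = "svn@example.com", "boss@example.com"
    msg.set_content(f"Commit r{rev} removed {removed} lines; "
                    "possible botched conflict resolution.")
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])   # Subversion passes REPO and REV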
We have a traffic-light that shows whether our daily build succeeds, has failed tests or simply doesn't build.
Also, we have a light bar that lights up for a few seconds whenever we receive an upload from a customer.
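The software side of a gadget like that can be tiny. A sketch, assuming a CI status endpoint at a made-up URL and a light that takes single-byte color commands over serial (via pyserial); both ends are hypothetical:
#!/usr/bin/env python3
"""Poll the build server and drive a desktop traffic light (sketch)."""
import time
import urllib.request
import serial  # pyserial

STATUS_URL = "http://buildserver.local/status"  # hypothetical endpoint
COLORS = {"ok": b"G", "tests_failed": b"Y", "broken": b"R"}

def current_status():
    with urllib.request.urlopen(STATUS_URL, timeout=10) as resp:
        return resp.read().decode().strip()  # e.g. "ok"

light = serial.Serial("/dev/ttyUSB0", 9600)  # hypothetical device protocol
while True:
    # Unknown statuses default to red, which errs on the safe side.
    light.write(COLORS.get(current_status(), b"R"))
    time.sleep(60)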
We aren't staffed 24x7 but we have critical processes that run throughout the night. We created an in-house alerts system to notify us of serious system issues, failed mission-critical processes, etc. It uses text-to-speech to create a descriptive message and then connects to our automated dialer to call the appropriate people with the message.
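The text-to-speech half of such a system is straightforward today. A sketch using the pyttsx3 library to synthesize the message to a WAV file; the dialer hand-off is a placeholder, since that part is entirely site-specific:
#!/usr/bin/env python3
"""Night-alert sketch: render a spoken message and hand it to a dialer."""
import pyttsx3

def render_alert(text, wav_path="alert.wav"):
    engine = pyttsx3.init()
    engine.save_to_file(text, wav_path)  # synthesize speech to a WAV file
    engine.runAndWait()
    return wav_path

def submit_to_dialer(wav_path, numbers):
    # Placeholder: queue the audio with whatever dialer API you have.
    for number in numbers:
        print(f"queueing call to {number} with {wav_path}")

wav = render_alert("Nightly billing run failed at step 4. Please respond.")
submit_to_dialer(wav, ["+1-555-0100"])   # example number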
Working at a web design company, I configured our dev server so we could see a working copy of a project in real time via a subdomain name. So if your name was joe and you were working on project jetfuel, you would go to joe.jetfuel.test-example.com and see your changes instantly, without committing.
This was a simple hack that used subdomain names as a partial directory structure. Our htdocs path looked like htdocs/tag/project. We had a script (a PHP app that you would access at setup.test-example.com) that would create a new tag name for you, check out whatever version you wanted and call the deploy script for that project. If it succeeded, it would forward you to the new subdomain. You could then work on this new copy via a Samba share.
This worked really well for us, since we always deployed to the same Linux build and our projects had simple database requirements.
Our original reason for doing this was that our developers worked on all kinds of different platforms. Besides fixing the platform problem, this was awesome for viewing changes and testing. We had all kinds of tags, ranging from people's names, trunk versions and test tags all the way to prototypes like jquery-menu-hack.jetfuel.test-example.com.
Now that I look back I wonder how much easier it would have been to run virtual machines.
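The setup app amounts to a checkout plus a deploy call. A rough sketch of the same idea (in Python rather than the original PHP), assuming wildcard DNS and a web-server rule that maps tag.project subdomains onto htdocs/tag/project; the paths, repository URL and deploy script are placeholders:
#!/usr/bin/env python3
"""Sketch of the 'setup' app: check out a tag and expose it as a subdomain."""
import subprocess
from pathlib import Path

HTDOCS = Path("/var/www/htdocs")   # hypothetical web root
REPO = "svn://svnserver/repos"     # hypothetical repository URL

def create_workspace(tag, project, rev="HEAD"):
    target = HTDOCS / tag / project
    target.parent.mkdir(parents=True, exist_ok=True)
    subprocess.run(["svn", "checkout", "-r", rev,
                    f"{REPO}/{project}/trunk", str(target)], check=True)
    # Project-specific deploy step: config files, database setup, etc.
    subprocess.run([str(target / "deploy.sh")], cwd=target, check=True)
    return f"http://{tag}.{project}.test-example.com/"

print("ready at", create_workspace("joe", "jetfuel"))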
We had a dev working on a classic ASP site who didn't believe in source control. The code went from his machine straight to the production box. This led to issues with lost changes or the inability to revert to a stable version. Since CruiseControl.Net has the ability to monitor a directory, I added a project that actually checked files in whenever they were copied to production. Completely backward from CC.Net's original intent, but we didn't lose any more code.
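The watch-and-commit idea could be approximated with a scheduled script like the sketch below, assuming the production web root is itself a Subversion working copy; the path is hypothetical, and the original used CruiseControl.Net's directory monitoring rather than a standalone script:
#!/usr/bin/env python3
"""Sketch of the 'backward CI' trick: snapshot a production folder into svn."""
import subprocess

PROD_ROOT = r"C:\inetpub\wwwroot\legacy-asp-site"  # hypothetical path

def snapshot():
    # Stage any brand-new files, then commit whatever changed since last run.
    status = subprocess.run(["svn", "status", PROD_ROOT],
                            capture_output=True, text=True, check=True).stdout
    for line in status.splitlines():
        if line.startswith("?"):
            subprocess.run(["svn", "add", line[1:].strip()], check=True)
    subprocess.run(["svn", "commit", PROD_ROOT, "-m",
                    "Automatic snapshot of files copied to production"],
                   check=True)

snapshot()  # run this from a scheduled task every few minutes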
Put in a pre-commit hook that checks that the bug comment refers to an open bug assigned to the user doing the check-in. (SCMBug can do this; a sketch of the idea follows below.)
Then, to make life REALLY interesting, spell-check the comments!!
Both the commit comment and the ones in the code. (spell is my buddy.)
Run the code through a code formatter set to the company standard and diff the result against the original: if it's not in the company's official format, reject the commit.
Do a coverage test with the unit-test build.
Email all resulting mistakes/errors to the development team.
I left OUT the name of the developer. They know they did it.
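A sketch of the open-bug check as a Subversion pre-commit hook. svnlook and the transaction arguments are real; the bug-tracker query is a placeholder for whatever SCMBug or your own integration actually does:
#!/usr/bin/env python3
"""Pre-commit hook sketch: require an open bug ID, assigned to the committer."""
import re
import subprocess
import sys

BUG_RE = re.compile(r"\bbug[ #:]*(\d+)", re.IGNORECASE)

def svnlook(subcmd, repo, txn):
    return subprocess.run(["svnlook", subcmd, "-t", txn, repo],
                          capture_output=True, text=True, check=True).stdout

def is_open_and_assigned_to(bug_id, user):
    # Placeholder: ask your bug tracker whether bug_id is open and
    # assigned to `user`.
    return True

def main(repo, txn):
    log = svnlook("log", repo, txn)
    author = svnlook("author", repo, txn).strip()
    match = BUG_RE.search(log)
    if not match:
        sys.exit("Commit rejected: log message must reference a bug number.")
    if not is_open_and_assigned_to(match.group(1), author):
        sys.exit(f"Commit rejected: bug {match.group(1)} is not an open bug "
                 f"assigned to {author}.")

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])   # Subversion passes REPO and TXN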
Not exactly hacks, but a couple of must-haves for IT dev work:
If you're using Subversion, you've got to use CommitMonitor (http://tools.tortoisesvn.net/CommitMonitor). It lets you monitor svn repositories for new commits and then review them. Great if you want to stay on top of what your team is doing, particularly if you have a couple of juniors who need to be watched. ;)
Rsnapshot (http://www.rsnapshot.org/) is also invaluable - we have complete backup snapshots of our entire filesystem every four hours going back 2 years, and every day beyond that. It's like a data cube for your filesystem! The peace of mind this gives is pure bliss. :)
Hardly a hack, but back in the day, on our speedy VAX 11/730, our overnight process would print the file "BLAMMO.TXT" on the printer if something went amiss. Every morning, the first stop was the printer when coming in.
Back in the dot-com days, about 9 years ago, I had to hack together a failover system between two different locations. We had a funky setup with a PowerBuilder front-end website and a PowerBuilder management tool. Data was stored in MSSQL 7.0. The web servers used IPX to communicate with the SQL Servers (don't ask). Anyway, I was responsible for coming up with a failover plan.
I ended up hacking together some Linux boxes and had them run our external DNS, one at each location. We had a remote site with a web server and SQL Server. I got SQL transaction replication working over a 128k ISDN IPX connection (of all things). Then I built a monitoring tool at our production site to send packets out to various upstream network handoffs. If we experienced more than 20% outage at the primary site, the monitoring tool ran a Perl script on the Debian box to change DNS and point to our secondary. Our secondary had a heartbeat with our primary DNS and monitoring station. It would duplicate records unless it lost both connections, in which case it would roll over to pointing DNS at the backup location.
The primary site would shut down the SQL Server at the primary location to break replication. Automated site-to-site failover using a 128k ISDN IPX connection :)
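The monitoring logic reduces to a few lines. A sketch of the 20%-loss check with example IPs; the DNS-switch command stands in for the Perl script on the Debian box:
#!/usr/bin/env python3
"""Sketch of the failover monitor: ping upstream handoffs, flip DNS on loss."""
import subprocess

UPSTREAMS = ["198.51.100.1", "198.51.100.2", "203.0.113.1"]  # example IPs
LOSS_THRESHOLD = 0.20

def reachable(host):
    # One ping, two-second timeout; exit code 0 means a reply came back.
    return subprocess.run(["ping", "-c", "1", "-W", "2", host],
                          stdout=subprocess.DEVNULL).returncode == 0

down = sum(1 for host in UPSTREAMS if not reachable(host))
if down / len(UPSTREAMS) > LOSS_THRESHOLD:
    # Repoint the zone at the secondary site (placeholder script).
    subprocess.run(["/usr/local/bin/switch-dns-to-secondary"], check=True)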
Back at my previous job, we had to audit many tables for data changes (inserts, updates and deletes). Our support crew had to be able to search through this data to find changes that users made.
The temporary solution that had become semi-permanent was to store each non-SELECT query. However, this was a large system, and the table would grow by about 1.5 GB a day.
The solution I came up with was a script that, for every table in an external list, created the appropriate triggers to audit each table, row and column (before and after, when and by whom) and store it all in our new audit table. This table grew to about 10% of the size of the older version and stored much more usable data. It enabled us to create a UI to search and view every change made to our data, without requiring any SQL knowledge from our support team or business users.
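A sketch of the generator: read the table list from a file and emit one audit trigger per table. The DDL template is illustrative pseudo-SQL only; real trigger syntax and the before/after image columns depend on your DBMS (the original targeted MSSQL):
#!/usr/bin/env python3
"""Sketch of the audit-trigger generator (pseudo-SQL template)."""
TEMPLATE = """\
CREATE TRIGGER audit_{table}
ON {table} AFTER INSERT, UPDATE, DELETE
AS
  INSERT INTO audit_log (table_name, changed_at, changed_by,
                         before_img, after_img)
  SELECT '{table}', CURRENT_TIMESTAMP, CURRENT_USER, d.row_data, i.row_data
  FROM deleted d FULL OUTER JOIN inserted i ON d.id = i.id;
"""

def generate(table_list_path="audited_tables.txt"):  # hypothetical file
    with open(table_list_path) as fh:
        for table in (line.strip() for line in fh if line.strip()):
            print(TEMPLATE.format(table=table))

generate()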
This is at a lesser level, but I am fairly proud of a makefile I wrote for compiling the code for my research. It only needs to be given your source and header file names and can take care of the rest all by itself (though it does make the one assumption that you will not be compiling any header files into objects; only source files get compiled). The other downsides are that it relies on GNU make's secondary expansion feature, so I don't know whether it works with other make programs, and that the compiler needs to support something similar to gcc's -MM option. Here's hoping that no one laughs at it.
# Pull in the auto-generated prerequisite lists (ignored if absent).
-include prereqs.mk
HEADERS=$(SRC_DIR)/gs_lib.h $(SRC_DIR)/gs_structs.h
SOURCES=$(SRC_DIR)/main.cpp $(SRC_DIR)/gs_lib.cpp
OBJECTS=$(patsubst $(SRC_DIR)/%.cpp,$(OBJ_DIR)/%.o,$(SOURCES))
release: FLAGS=$(GEN_FLAGS)$(OPT_FLAGS)
release: $(OBJECTS) prereqs.mk
	$(CXX) $(FLAGS) $(LINKER_FLAGS) $(OUTPUT_FLAG) $(EXECUTABLE) $(OBJECTS)
# Regenerate prereqs.mk whenever a source or header changes: run the
# compiler's dependency generator (gcc -MM or similar) and rewrite each
# "foo.o: ..." rule as a variable definition "foo = ...".
prereqs.mk: $(SOURCES) $(HEADERS)
	$(CXX) $(DIR_FLAGS) $(MAKE_FLAG) $(SOURCES) | sed 's,\([abcdefghijklmnopqrstuvwxyz_]*\).o:,\1= \\\n,' > $@
# Secondary expansion lets each object depend on the variable named after
# it, i.e. $(OBJ_DIR)/main.o depends on $(main) from prereqs.mk.
.SECONDEXPANSION:
$(OBJECTS): $$($$(patsubst $(OBJ_DIR)/%.o,%,$$@))
	$(CXX) $(FLAGS) $(NO_LINK_FLAG) $(OUTPUT_FLAG) $@ $(patsubst $(OBJ_DIR)/%.o,$(SRC_DIR)/%.cpp,$@)
Obviously I dropped the definitions of a number of variables, but I think it gets the idea across.
Since my coding tools and style are compatible with the requirements of this script, I like to use it. All I need to do to add a new piece of source code is add its name to the appropriate variable, and the rest is taken care of.
We have Twitter accounts for many projects, which tweet things like commit messages, notices from builds, failed unit tests, deployments, bug-tracking activity - any kind of event associated with the project. Running a client like Gwibber (which displays a pop-up for each new status) is a great way to stay in touch with the activity on the projects you are interested in. Using Twitter is good because you can take advantage of all the third-party apps, such as the iPhone clients.
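The glue for a feed like that is usually one small function per event source. A sketch that posts project events to a made-up status endpoint; the URL, token and payload shape are all hypothetical, and a real version would call your status service's actual API:
#!/usr/bin/env python3
"""Sketch of a project-activity bot: post events to a status feed."""
import json
import urllib.request

FEED_URL = "https://example.com/api/statuses"  # placeholder endpoint
TOKEN = "secret-token"                         # placeholder credential

def post_event(project, text):
    body = json.dumps({"project": project, "status": text}).encode()
    req = urllib.request.Request(
        FEED_URL, data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {TOKEN}"})
    urllib.request.urlopen(req)

post_event("jetfuel", "build #142 green; 312 tests passed")  # example event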
Add a commit-hook check for VRML/3D-model files with absolute paths to textures/images. f:/maya/my-textures/newproject/xxxx.png just doesn't belong on the server.
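A sketch of such a hook, scanning committed .wrl files for drive-letter paths; the svnlook plumbing follows the same pattern as the other hooks above, and the extension filter is an assumption:
#!/usr/bin/env python3
"""Pre-commit hook sketch: reject model files referencing absolute paths."""
import re
import subprocess
import sys

ABS_PATH_RE = re.compile(r"\b[a-zA-Z]:[/\\]\S+")   # e.g. f:/maya/...

def main(repo, txn):
    changed = subprocess.run(["svnlook", "changed", "-t", txn, repo],
                             capture_output=True, text=True, check=True).stdout
    for line in changed.splitlines():
        if line.startswith("D"):        # ignore deletions
            continue
        path = line[4:].strip()
        if not path.endswith(".wrl"):   # assumed model-file extension
            continue
        content = subprocess.run(["svnlook", "cat", "-t", txn, repo, path],
                                 capture_output=True, text=True).stdout
        hit = ABS_PATH_RE.search(content)
        if hit:
            sys.exit(f"Commit rejected: {path} references an absolute "
                     f"texture path ({hit.group(0)}).")

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])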
Back in 1993, when source control systems were really expensive and unwieldy, the company I worked for had an in-house source control system built as 4DOS scripts. It wasn't as sophisticated as most current source control systems (for example, it didn't have branching or integrations), but it did the basic job of supporting revision history, checkout/checkin and rudimentary conflict resolution.