I want to examine exactly how my code operates when it uses other libraries for which I do not have the source. While I can do this online (e.g. with FileMon, RegMon and TCPView from SysInternals), I was wondering whether there is a good offline method that would let me run my code in a virtual machine, shut the virtual machine down, and diff the entire VM image.
Since persistent modifications to the system end up either in the filesystem or in the registry, you could have a little program that lists all the files on the hard drive and also dumps the registry.
Run it once before and once after your program operates, then do a simple file diff.
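A minimal sketch of the filesystem half of that idea, assuming a Python interpreter is available in the guest (the registry would still need a separate dump, e.g. with "reg export"); it records the size and mtime of every file before and after the run and prints what changed:

    import os, json, sys

    def snapshot(root):
        """Walk the tree under root, recording size and mtime for every file."""
        state = {}
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    st = os.stat(path)
                    state[path] = (st.st_size, int(st.st_mtime))
                except OSError:
                    pass  # vanished or unreadable; skip it
        return state

    def diff(before, after):
        added = sorted(set(after) - set(before))
        removed = sorted(set(before) - set(after))
        changed = sorted(p for p in set(before) & set(after) if before[p] != after[p])
        return added, removed, changed

    if __name__ == "__main__":
        # usage:  snapshot.py save <root> <out.json>    (run once before, once after)
        #         snapshot.py diff <before.json> <after.json>
        if sys.argv[1] == "save":
            json.dump(snapshot(sys.argv[2]), open(sys.argv[3], "w"))
        else:
            before = json.load(open(sys.argv[2]))
            after = json.load(open(sys.argv[3]))
            for label, paths in zip(("added", "removed", "changed"), diff(before, after)):
                for p in paths:
                    print(label, p)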
If you are using VirtualBox, I think you can mount the disk image offline (i.e. with the virtual machine not running). Dumping the registry from the offline hive files may be harder, though.
Search for "mount vdi" on Google.
All integration testing will surely use code for which you don't have the source: your framework libraries, database drivers, databases, comms libraries. Some of those may not even be on the same machine as your code. I'm not clear exactly what you hope to achieve. You make some calls to a queueing system, it does all manner of secret-squirrel stuff, and you diff before and after; now what can you say? Do you know what data formats ought to represent your request?
I see tests as being defined in terms of the published behaviours of the libraries and systems I'm working with. Example for a database: I execute some business actions which are supposed to create Orders. I know the orders I defined; do they appear in the database? In defining my tests I can specify explicit expected outcomes in terms of records in a database. I can then even automate the tests: compare an extract from the database with the expected results.
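For instance, an automated check of that kind can be as small as pulling the rows back and comparing them with what the test expected. A minimal sketch, using sqlite3 purely for illustration; the database name, table and columns are all made up:

    import sqlite3

    # Expected outcome, defined as part of the test itself.
    expected = {
        ("ORD-1001", "alice", 2),
        ("ORD-1002", "bob", 1),
    }

    conn = sqlite3.connect("shop.db")  # hypothetical test database
    rows = conn.execute(
        "SELECT order_id, customer, quantity FROM orders WHERE batch = ?",
        ("test-run-42",),
    )
    actual = {tuple(row) for row in rows}

    missing = expected - actual      # orders the business action should have created but didn't
    unexpected = actual - expected   # orders that appeared but were never asked for

    assert not missing and not unexpected, (missing, unexpected)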
So at work we check computers' specifications and need to print these in a standard format. I already know how to set up a PXE server, but I was wondering whether it would be easy to get a program (or write a script) that checks a computer's hardware (processor, memory, hard drive) and prints the results over the network.
My thought is that I can boot a very simple Linux OS over PXE and run a script to do the dirty work. However, I'm not sure how to set it up to use a network printer, or which script to use for that matter.
All the computers have the same architecture (x86), so a single implementation should work for all of them.
I would be inclined to avoid using a printer directly here and instead use something like scp or netcat to send back the information you discover.
Edit:
There are a number of tools that might help with collecting the data itself, depending on what exactly you want to collect. I've found dmidecode to be very useful. Potentially it can tell you the BIOS version, memory stick size/speed/locations and quite a lot of very detailed information. It is buggy on some older hardware with broken DMI tables, though. lshal, lshw, lspci and lsusb are all fairly common on Linux installations and rather useful for these things.
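Tying that together with the scp/netcat suggestion above, the PXE image only needs a small collector that runs a few of those tools and ships the combined output to a listening host (e.g. "nc -l 9000 > specs.txt" on the collection side). A rough Python sketch; the server address, port and exact command list are placeholders:

    import socket
    import subprocess

    COMMANDS = {
        "dmidecode": ["dmidecode"],
        "lshw": ["lshw", "-short"],
        "lspci": ["lspci"],
    }
    SERVER = ("192.168.0.10", 9000)  # hypothetical collection host

    report = ["=== host: %s ===" % socket.gethostname()]
    for title, cmd in COMMANDS.items():
        report.append("--- %s ---" % title)
        try:
            result = subprocess.run(cmd, capture_output=True, text=True, check=False)
            report.append(result.stdout)
        except FileNotFoundError:
            report.append("(%s not installed)" % title)

    # Stream the whole report to the collection server, netcat-style.
    with socket.create_connection(SERVER) as sock:
        sock.sendall("\n".join(report).encode("utf-8", "replace"))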
Have a look at GLPI. It's good open-source software used to manage IT tickets, but it also integrates an IT infrastructure management module that could turn out to be useful in your case.
There is a small piece of software to be installed on each remote client (this can be done remotely and silently), and then you can collect a lot of information and match it by IP address.
We use pdsh to manage our global network. We have a host naming convention that makes the host expression easy to write. So, to continue the ls* suggestion, to collect the info on a collection of hardware we would write a command like this:
[root@admin-console ~]# pdsh -R exec -w china-[1-1024] ssh %h lshal > china-lshal-cabinet-01.log
pdsh prefixes the host name to the output lines, and since it runs the hosts concurrently, lines from different hosts interleave. A simple sorting script keyed on the, say, "china-NNN:" prefix is needed to get them organized. You could also make pdsh run sequentially by limiting its concurrency, but if you are running large configurations you will want the concurrency.
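The sorting script itself can be tiny; something along these lines reads the pdsh log on stdin and groups the interleaved lines back together by their host prefix (the "china-" naming is just the example above):

    import re
    import sys
    from collections import defaultdict

    # pdsh output lines look like "china-37: <output from that host>"
    PREFIX = re.compile(r"^(china-\d+):\s?(.*)$")

    by_host = defaultdict(list)
    for line in sys.stdin:
        m = PREFIX.match(line.rstrip("\n"))
        if m:
            by_host[m.group(1)].append(m.group(2))

    # Print each host's output as one contiguous block, hosts in numeric order.
    for host in sorted(by_host, key=lambda h: int(h.split("-")[1])):
        print("==== %s ====" % host)
        for out in by_host[host]:
            print(out)

Run it as, say, "python sort_pdsh.py < china-lshal-cabinet-01.log".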
I know there are other applications as well, but consider yum/apt-get/aptitude/pacman, which are the core package managers for Linux distributions.
Today I saw this on my Fedora 13 box:
(7/7): yum-3.2.28-4.fc13_3.2.28-5.fc13.noarch.drpm | 42 kB 00:00
And I started to wonder how does such a package update itself? What design is needed to ensure a program can update itself?
Perhaps this question is too general, but I felt SO was more appropriate than Programmers.SE since the question is more technical in nature. If there is a more appropriate place for it, feel free to let me know and I can close it, or a moderator can move it.
Thanks.
I've no idea how those particular systems work, but...
Modern Unix systems will generally tolerate replacing a running executable (unlinking it and dropping a new file in its place) without a hiccup, so in theory you could just do it.
You could stage the update in a chroot jail and then move it into place, or something similar, to reduce the time during which the system is vulnerable. Add a journalling filesystem and this is a little safer still.
It occurs to me that the package manager needs to hold the package database in memory as well, to guard against a race condition there. Again, the chroot-jail-and-copy option is available as a lower-risk alternative.
And I started to wonder how does such a package update itself? What design is needed to ensure a program can update itself?
It's like a lot of things: you don't need to "design" specifically to solve this problem ... but you do need to be aware of certain "gotchas".
For instance, Unix helps by reference-counting inodes, so "you" can delete a file you are still using and it's fine. However, this implies a few things you have to do; for instance, if you have plugins then you need to load them all before you start a transaction ... even if a plugin would only run at the end of the transaction (because you might have a different version by then).
There are also some things that you need to do to make sure that anything you are updating keeps working, like: put new files down before removing old files, and don't truncate old files, just unlink them. But those also help you :).
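That rule is essentially the classic write-to-temp-then-rename pattern: a process that already has the old file open keeps its inode, and anyone opening the path after the rename sees the complete new version. A minimal sketch of the pattern in Python (not how yum or apt actually implement it, just the general idea):

    import os
    import tempfile

    def install_file(path, data):
        """Atomically replace `path` with `data`, never truncating the old file."""
        dirname = os.path.dirname(path) or "."
        # Write the new content to a temp file in the same directory,
        # so the final rename stays on one filesystem and is atomic.
        fd, tmp = tempfile.mkstemp(dir=dirname, prefix=".update-")
        try:
            with os.fdopen(fd, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())   # make sure the bytes hit the disk first
            os.replace(tmp, path)      # old inode is unlinked, not truncated
        except BaseException:
            os.unlink(tmp)
            raise

    install_file("hello.txt", b"new contents\n")  # demo on a harmless local file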
Using external programs, which you communicate with, can be tricky (because you can't exec a new copy of the old version after it's been updated). But this isn't often done, and when it is, it's for things like downloading ... which can fairly easily be made to happen before any updates.
There are also things which aren't a concern in the command-line clients like yum/apt; for instance, if you have a program which is going to run two or more "updates", you can have problems if the first update was to the package manager itself. Downgrades make this even more fun :).
Also, daemon-like processes should basically never "load" the package manager, but as with the other gotchas ... you tend to want to follow this anyway, for other reasons.
I'd like to have a small (not doing too damn much) daemon running on a little server, watching a directory for new files being added to it (and any directories in the main one), and calling another Clojure program to deal with that new file.
Ideally, each file would be added to a queue (a list represented by a ref in Clojure?) and the main process would take care of those files in the queue on a FIFO basis.
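To make the flow concrete, this is roughly the shape I have in mind, sketched in Python only because it's compact (the real thing would be Clojure on the JVM; the directory name, poll interval and handler are placeholders):

    import os
    import queue
    import threading
    import time

    WATCH_DIR = "incoming"   # hypothetical directory to watch
    work = queue.Queue()     # FIFO queue of newly seen files

    def watcher():
        """Poll the directory tree and enqueue paths not seen before."""
        seen = set()
        while True:
            for dirpath, _dirs, files in os.walk(WATCH_DIR):
                for name in files:
                    path = os.path.join(dirpath, name)
                    if path not in seen:
                        seen.add(path)
                        work.put(path)
            time.sleep(2)

    def process(path):
        print("handling", path)   # stand-in for the program that deals with the file

    threading.Thread(target=watcher, daemon=True).start()
    while True:                    # consume files one at a time, oldest first
        process(work.get())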
My question is: is having a JVM up running this little program all the time too much a resource hog? And do you have any suggestions as to how go about doing this?
Thank you very much!
EDIT: Another question I should ask: should I run this as its own instance (using less memory) and have it launch a new JVM when a file is seen, or run it on the same JVM as the Clojure code that will process the file?
As long as it is running fine now and it has no memory leaks it should be fine.
From the daemon terminology I gather it is on a Unix clone, and in that case it's best to start it from an init script, or from the rc.local script. Unfortunately the details differ from OS to OS, so it's hard to be more specific.
Limit the memory using -Xmx64m or something, to make sure it fails before taking down the rest of the services. Play a bit with the number to find the lowest reliable size.
Also, since Clojure's claim to fame is its ability to deal with concurrency, it makes a lot of sense to run only one JVM, with all the functionality running in it in multiple threads. The overhead of spawning new processes is already very big, and if it is a JVM which needs to JIT and warm up its memory management, doubly so. On a resource-constrained machine that could pose a problem, and on a resource-rich machine it is a waste.
I've always found that the JVM is not made to quickly run something script-like and exit again. It is really not made for that use case, in my opinion.
Cloud App has this neat feature wherein it automatically uploads new screenshots as they are added to the Desktop. Any ideas how this is done?
You can do similar things yourself without much in the way of programming. In OSX, you can configure "Folder Actions" to run a script, for example, when a new item appears in a folder, including the Desktop. You can then use the script to do whatever you want with the new files.
This article at TUAW includes an example of uploading files to a web server when they hit a particular folder.
So, basically, the answer is "Folder Actions", or "something's keeping an eye on the folder and sending notifications", at some level. Whether Cloud App uses Folder Actions or watches the folder itself at a lower level, using FSEvents/NSWorkspace, or the kqueue mechanisms (for which there's a nice wrapper class called UKKQueue, if I remember correctly -- don't know how current my knowledge is on that one though!) is another matter...
You could implement this at several different levels, depending on the outcome you want, how you want to design whatever it is you're actually doing, and even what kind of filesystem you're targeting. Fundamentally, in Cocoa/Objective C, I think you probably want to start looking at FSEvents.
Once you've got notifications of the file changes, I'd probably use something like ConnectionKit to do the uploading -- any library at all, really, that means you don't have to bother with the sockets level yourself -- but again, there's a lot of different ways.
It depends, really, what level you're looking to solve the problem at, and whether you want to build something for other people or just get something working for yourself. If I just wanted to bash something together for myself, I could probably cobble something together using Panic's Transmit app, Folder Actions or maybe Hazel, and a minimal bit of AppleScript, in half an hour at most, that would do the job well enough for me...
I am not sure what you are asking for exactly. If you are asking for a way to take a screenshot programmatically in Mac OS X, I suggest you have a look at the screencapture command (in the Terminal, type "man screencapture" for the documentation).
If you want to do it the "hard" way, you should look at this.
About two months ago I took over the build process at my current company. Even though I don't have much knowledge of it, I was the only one with enough time, so I didn't have much choice.
The situation is not that good, and I would like to do the following:
Labeling files in SourceSafe with version (example ProjectName PV 1.2)
GetFiles from SourceSafe to specific directory
Build vb6/c++/c# projects(yes, there are all kinds of them)
Build InstallShield setups
For now this is partly done using batch scripts (one for labeling and getting, one for building, etc.), so when a build starts I pretty much have to babysit it.
A good part of this code could be reused.
Any recommendations on how to do this better? One big problem is the whole bunch of dependencies between projects. Also, labeling has to increment the version and, if necessary, change PV to EV.
I would like to minimize user interaction as much as possible. One click on one build script (Spolsky is god) and all is done: no need to increment the version, set where to get the files from, and similar stuff.
Is batch scripting the best way to go? Should I do some of this with MSBuild? Are there any other options?
Specific code is not needed; for now I just need a way to improve the process, though it wouldn't hurt.
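Just to make the goal concrete, this is roughly the kind of one-click driver I am picturing; every command, path and script name below is a placeholder for our real SourceSafe/InstallShield steps:

    import subprocess
    import sys

    VERSION = "1.3"   # would be read and incremented automatically, e.g. from a version file
    LABEL = "ProjectName PV %s" % VERSION

    # Steps listed in dependency order, so nothing builds before what it needs.
    STEPS = [
        ["ss", "Label", "$/ProjectName", "-L" + LABEL],        # label in SourceSafe (placeholder args)
        ["ss", "Get", "$/ProjectName", "-R", "-GLC:\\build\\src"],
        ["build_vb6.bat"],                                      # existing batch scripts, reused as-is
        ["build_cpp.bat"],
        ["build_cs.bat"],
        ["build_installshield.bat"],
    ]

    for step in STEPS:
        print(">>>", " ".join(step))
        if subprocess.run(step).returncode != 0:
            sys.exit("Step failed, stopping the build: %s" % step)

    print("Build %s finished." % VERSION)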
Tnx,
Marko
Since you already have a build system (even though some of it is currently "manual"), whatever you do, don't start over from scratch.
(1) Make sure you have a test machine (or Virtual Machine) on which to work. Thus you can make changes and improvements without having to worry about breaking anything.
(2) Put all of your build scripts and tools in version control, not just the source code. Then as you make changes, see if they work. If they do, then save them to version control. If they don't, then roll them back.
(3) Choose one area to work on at a time. Don't try to do everything at once. Going from a lot of manual work to "one-click" will take time no matter what build system you're working with.
Sounds like you want a continuous integration solution, like CC.Net. It has configuration options to do all the things you want and a great community to answer questions.
Also, batch scripting is probably not a good option. Sophisticated build and integration tools will let you feed parameters into the build and create different builds for different environments (test, production, etc.). Batch scripting will involve a lot of hand-coding and glue.