Is there a way to check a computer's specifications, and print the results with PXE booting? - scripting

So at work, we check computers' specifications and need to print them in a standard format. I already know how to set up a PXE server, but I was wondering whether it would be easy to get a program (or write a script) that checks the computer's hardware (processor, memory, hard drive) and prints the results over the network.
My thought is that I can boot a very simple Linux OS over PXE and run a script to do the dirty work. However, I'm not sure how to set it up to use a network printer, or which script to use for that matter.
All the computers have the same architecture (x86), so a single implementation should work for all of them.

I would be inclined to avoid using a printer directly here and use something like scp or netcat to send back the information you discover.
Edit:
There are a number of tools that might help with collecting the data itself, depending on what exactly you want to collect. I've found dmidecode to be very useful. Potentially it can tell you the BIOS version, memory stick size/speed/locations, and quite a lot of other very detailed information, though it is buggy on some older hardware with broken DMI tables. lshal, lshw, lspci and lsusb are all fairly common on Linux installations and rather useful for these things.
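To make that concrete, here is a minimal sketch of a boot-time script combining the two suggestions, assuming a collection box at 192.168.0.10 (a made-up address) listening on port 9100:

    #!/bin/sh
    # Gather the basics and ship them over the network instead of printing.
    {
      echo "=== $(hostname) ==="
      dmidecode -t processor -t memory   # CPU and memory details (needs root)
      lspci                              # PCI devices
      fdisk -l 2>/dev/null               # disk sizes
    } | nc 192.168.0.10 9100             # collector runs e.g.: nc -l -p 9100 >> specs.txt

From there, printing the collected file in your standard format is an ordinary job on the collection box rather than something the PXE image has to solve.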

Have a look at GLPI. It's good open source software used to manage IT tickets, but it also integrates IT infrastructure management, which could turn out to be useful in your case.
There is a small piece of software to be installed on each remote client (this can be done remotely and silently), and then you can collect a lot of information and match it by IP address.

We use pdsh to manage our global network. We have a host naming convention that makes the host expression easy to write. So, to continue the ls* suggestion for collecting the info on a collection of hardware, we would write a command like this:
[root@admin-console ~]# pdsh -R exec -w 'china-[1-1024]' ssh %h lshal > china-lshal-cabinet-01.log
pdsh prefixes the host name to each output line, and since it runs operations concurrently, lines from different hosts will interleave. A simple sorting script keyed on that prefix (say, "china-42:") is needed to get them organized. You could also make pdsh run sequentially by limiting its concurrency, but if you are running large configurations you will want the concurrency.
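The sorting step can be a one-liner; a sketch, assuming GNU sort (the V modifier gives version order, so china-2 sorts before china-10):

    sort -t: -k1,1V -s china-lshal-cabinet-01.log > china-lshal-sorted.log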

Linux process activities

Is there a possibility to show what's going on under a specified process in Linux?
For example, I run an SQL query -> select evil_function();
and notice that the process on Linux uses all the CPU.
So is there something with which I can see what's going on under this process?
What I want is to see what queries are running under this process.
Thanks!
strace will tell you what system calls the process is making.
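For example (PID 1234 is a stand-in):

    strace -f -p 1234 -o trace.log   # follow threads/children, log every call
    strace -c -p 1234                # Ctrl-C prints a per-syscall count/time summary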
To see which called routines are taking the most CPU, you need to run a profiling tool and make sure the executable of the process is compiled correctly (sometimes it needs to be instrumented during compilation for profiling; sometimes it just needs to be compiled with debug symbols, or not stripped of them after compilation).
For starters on free tools, you might want to look at oprofile, valgrind and gprof (a gprof sketch follows the links below); there are also commercial products available.
Here are a few links:
http://www.pixelbeat.org/programming/profiling/
http://en.wikipedia.org/wiki/List_of_performance_analysis_tools
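To make the gprof route concrete: if you can rebuild the binary, the classic workflow looks roughly like this (myprog is a stand-in name):

    gcc -pg -g -o myprog myprog.c   # instrument at compile and link time
    ./myprog                        # run normally; gmon.out is written on exit
    gprof ./myprog gmon.out | less  # flat profile plus call graph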
You are mixing a whole bunch of things.
If you are talking about MySQL do:
show processlist;
For info specifically about Linux processes, you can strace the process to get a list of the system functions it calls. Unless you are experienced with Linux, this will be useless to you.
If the process is paused, you can find out what function it is stopped in, but that's probably not what you want, since you say the process is running.
There are also various tools that can give you info on what parts of the disk the process is reading, and how much memory it's allocating.
And finally, you can use gdb to break into the process and single-step your way through it to see exactly what it's doing. This will also likely be useless to you, since an SQL server does a LOT of things - far too many to understand by this method.
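For completeness, breaking in looks roughly like this (the PID is made up):

    gdb -p 1234
    (gdb) bt        # where is it right now?
    (gdb) next      # single-step at source level (needs debug symbols)
    (gdb) detach    # let the process keep running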

Design principles as to how linux repository managers update themselves?

I know there are other applications as well, but yum/apt-get/aptitude/pacman are the core package managers for Linux distributions.
Today I saw this on my Fedora 13 box:
(7/7): yum-3.2.28-4.fc13_3.2.28-5.fc13.noarch.drpm | 42 kB 00:00
And I started to wonder: how does such a package update itself? What design is needed to ensure a program can update itself?
Perhaps this question is too general, but I felt SO was more appropriate than Programmers.SE for such a question, it being more technical in nature. If there is a more appropriate place for this question, feel free to let me know and I can close it, or a moderator can move it.
Thanks.
I've no idea how those particular systems work, but...
Modern unix systems will generally tolerate replacing a running executable (the old inode lives on until the last process using it exits), so in theory you could just do it.
You could perform the update in a chroot jail and then move the results into place, or something similar, to reduce the time during which the system is vulnerable. Add a journalling filesystem and this is a little safer still.
It occurs to me that the package manager also needs to hold the package database in memory to guard against a race condition there. Again, the chroot jail and copy option is available as a lower-risk alternative.
And I started to wonder how does such a package update itself? What design is needed to ensure a program can update itself?
It's like a lot of things: you don't need to "design" specifically to solve this problem ... but you do need to be aware of certain "gotchas".
For instance, Unix helps by reference-counting inodes, so "you" can delete a file you are still using and it's fine. However, this implies a few things you have to do: for instance, if you have plugins then you need to load them all before you start a transaction ... even if a plugin would only run at the end of the transaction (because you might have a different version by the end).
There are also some things you need to do to make sure that anything you are updating keeps working, like: put new files down before removing old files, and don't truncate old files, just unlink them. But those also help you :).
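A sketch of that replacement pattern in plain shell (the paths are illustrative):

    # Write the new version next to the old one (same filesystem), then
    # rename over it: the rename is atomic, and a process still running
    # the old binary keeps its unlinked inode alive until it exits.
    install -m 755 tool.new /usr/bin/.tool.tmp
    mv -f /usr/bin/.tool.tmp /usr/bin/tool
    # Do NOT do 'cp tool.new /usr/bin/tool' instead: cp truncates and
    # rewrites the very inode a running copy may be executing from.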
Using external programs which you communicate with can be tricky (because you can't exec a new copy of the old version after it's been updated). But this isn't often done, and when it is, it's for things like downloading ... which can fairly easily be made to happen before any updates.
There are also some things which aren't a concern in command line clients like yum/apt: for instance, if you have a program which is going to run two or more "updates", then you can have problems if the first update was to the package manager itself. Downgrades make this even more fun :).
Also, daemon-like processes should basically never "load" the package manager, but as with the other gotchas ... you tend to want to follow this rule anyway, for other reasons.

Daemon with Clojure/JVM

I'd like to have a small (not doing too damn much) daemon running on a little server, watching a directory for new files being added to it (and any directories in the main one), and calling another Clojure program to deal with that new file.
Ideally, each file would be added to a queue (a list represented by a ref in Clojure?) and the main process would take care of those files in the queue on a FIFO basis.
My question is: is having a JVM up and running this little program all the time too much of a resource hog? And do you have any suggestions as to how to go about doing this?
Thank you very much!
EDIT: Another question I should ask: should I run this as its own instance (using less memory) and have it launch a new JVM when a file is seen, or should it run on the same JVM as the Clojure code that will process the file?
As long as it is running fine now and it has no memory leaks it should be fine.
From the "daemon" terminology I gather it is on a unix clone, and in that case it is best to start it from an init script or from the rc.local script. Unfortunately the details differ from OS to OS, so it's hard to be more specific.
Limit the memory using -Xmx64m or something similar to make sure it fails before taking down the rest of the services. Play a bit with the number to find the lowest reliable size.
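For instance, an rc.local entry might look like this (watcher.jar and the paths are made-up names):

    java -Xmx64m -jar /opt/watcher/watcher.jar /var/incoming \
        >> /var/log/watcher.log 2>&1 &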
Also, since Clojure's claim to fame is its ability to deal with concurrency, it makes a lot of sense to run only one JVM, with all the functionality running on it in multiple threads. The overhead of spawning new processes is already very big, and for a JVM, which needs to JIT-compile and warm up its memory management, doubly so. On a resource-constrained machine this could pose a problem, and on a resource-rich machine it is a waste.
I have always found that the JVM is not made to quickly run something script-like and exit again. It is really not made for that use case, in my opinion.

Clean-rooming when software testing

I want to examine exactly how my code operates when using other libraries to which I do not have the code. Whilst I can do this online (i.e. with FileMon, RegMon and TCPView from SysInternals), I was wondering if there is a good offline method that would allow me to run my code in a virtual machine, shut down the virtual machine, and diff the entire VM image.
Since persistent modifications to the system live either in the filesystem or in the registry, you could have a little program that lists all the files on the hard drive and also dumps the registry.
Then you run it again after the program has done its work and do a simple file diff.
If you are using VirtualBox, I think you can mount the disk image offline (i.e. with the virtual machine not running). However, dumping the registry from the offline files may be harder.
See "Mount vdi" on Google.
All integration testing will surely use code for which you don't have the source: your framework libraries, database drivers, databases, comms libraries. Some of these may not even be on the same machine your code is on. I'm not clear exactly what you would hope to achieve. You make some calls to a queueing system and it does all manner of secret-squirrel stuff. You diff before and after; now what can you say? Do you know what data formats ought to represent your request?
I see tests as being defined in terms of the published behaviours of the libraries and systems I'm working with. An example for a database: I execute some business actions which are supposed to create Orders. I know the orders I defined; do they appear in the database? In defining my tests I can specify explicit expected outcomes in terms of records in a database. I can then even automate the tests: compare an extract from the database with the expected results.
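That comparison can be a couple of lines of shell; a sketch with made-up table and column names, assuming the mysql client in batch mode:

    mysql -N -e "SELECT order_id, total FROM orders ORDER BY order_id" shop > actual.txt
    diff expected.txt actual.txt && echo "orders match expectations"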

expect replacement

I want to work with a modem interfaced on a serial port on an embedded platform.
Here are some solutions I have rejected so far :
Expect plus a terminal program :
My (cross-)build system does not have any package rules for Expect, and according to the installation instructions in the Expect sources, the configure script needs to be interactive because it does some tests with the terminal it is invoked in. This does not look like something you want to do when cross-compiling.
Python plus pyserial :
I would love to use this, but the size of the whole thing won't fit in my limited flash space.
Chat (from the pppd package):
Well, I may give it a try, but it is very, very limited.
So I am looking for some sort of lightweight, embeddable Expect replacement. I have no knowledge of Lua. Would it be a good candidate for Expect-like scripting?
Well, Expect is just Tcl plus extensions to drive other programs via pseudo-terminals and do pattern-matching on the results. If you just want to drive a serial port, you can drop the external terminal program and have Tcl drive the serial port directly (a rough sketch follows). See also the Tcl Wiki page on cross-compiling.
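A sketch of that idea, wrapped in a small shell script for the embedded box (the device path, baud rate and AT exchange are all assumptions):

    #!/bin/sh
    tclsh <<'EOF'
    set f [open /dev/ttyS0 r+]
    fconfigure $f -mode 115200,n,8,1 -blocking 0 -buffering none -translation binary
    puts -nonewline $f "AT\r"
    flush $f
    after 1000                      ;# crude wait for the modem's reply
    puts "modem said: [read $f]"
    close $f
    EOF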