How to know which process (stat: T) is attached by gdb?

When a process is attached by gdb, the stat of the process is "T", like:
root 6507 0.0 0.0 67896 952 ? Ss 12:01 0:00 /mytest
root 6508 0.0 0.0 156472 7120 ? Sl 12:01 0:00 /mytest
root 26994 0.0 0.0 67896 956 ? Ss 19:59 0:00 /mytest
root 26995 0.0 0.0 156460 7116 ? Tl 19:59 0:00 /mytest
root 27833 0.0 0.0 97972 24564 pts/2 S+ 20:00 0:00 gdb /mytest
From the above, 26995 may be being debugged. How can I know whether 26995 is being debugged or not? Or can I find out which process is attached by gdb (27833)?
pstree -p 27833 shows only gdb(27833).
Another question: how to know which gdb (PID) a process (stat: T) is attached to?
In most situations, I am not the person who is debugging the process.

The T in ps output stands for "being ptrace()d". So that process (26995) is being traced by something. That something is most often either GDB, or strace.
So yes, if you know that you are only running GDB and not strace, and if you see a single process in T state, then you know that you are debugging that process.
You could also ask GDB which process(es) it is debugging:
(gdb) info proc
(gdb) info inferiors
Update
As Matthew Slattery correctly noted, T just means the process is stopped, and not that it is being ptrace()d.
So a better solution is to do this:
grep '^TracerPid:' /proc/*/status | grep -v ':.0'
/proc/7657/status:TracerPid: 31069
From the above output you can tell that process 7657 is being traced by process 31069. This answers both "which process is being debugged" and "which debugger is debugging what".
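As a convenience, the same check can be wrapped in a small shell loop that prints, for every traced process, its PID and command together with the tracer's PID and command. This is only a sketch based on the /proc layout shown above; nothing in it is specific to GDB.

#!/bin/sh
# Sketch: list every process whose TracerPid is non-zero, together with
# the command names of both the traced process and its tracer.
for status in /proc/[0-9]*/status; do
    tracer=$(awk '/^TracerPid:/ {print $2}' "$status" 2>/dev/null)
    [ -n "$tracer" ] && [ "$tracer" != "0" ] || continue
    pid=${status#/proc/}; pid=${pid%/status}
    printf '%s (%s) is traced by %s (%s)\n' \
        "$pid"    "$(cat /proc/$pid/comm 2>/dev/null)" \
        "$tracer" "$(cat /proc/$tracer/comm 2>/dev/null)"
done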

The /proc file system is an elegant design in Linux. A lot of real-time information about a process can be found under /proc/{PID}/.
Another question: how to know which gdb (PID) a process (stat: T) is
attached to? In most situations, I am not the person who is debugging the
process.
For this question, we can check the /proc/{PID}/status file to get the answer.
root 14616 0.0 0.0 36152 908 ? Ss Jun28 0:00 /mytest
root 14617 0.5 0.0 106192 7648 ? Sl Jun28 112:45 /mytest
tachyon 2683 0.0 0.0 36132 1008 ? Ss 11:22 0:00 /mytest
tachyon 4276 0.0 0.0 76152 20728 pts/42 S+ 11:22 0:00 gdb /mytest
tachyon 2684 0.0 0.0 106136 7140 ? Tl 11:22 0:00 /mytest
host1-8>cat /proc/2684/status
Name: mytest
State: T (tracing stop)
SleepAVG: 88%
Tgid: 2684
Pid: 2684
PPid: 2683
TracerPid: 4276
.......
Thus we know 2684 is being debugged by process 4276.

You can find out this info from ps axf output.
1357 ? Ss 0:00 /usr/sbin/sshd
1935 ? Ss 0:00 \_ sshd: root@pts/0
1994 pts/0 Ss 0:00 \_ -bash
2237 pts/0 T 0:00 \_ gdb /bin/ls
2242 pts/0 T 0:00 | \_ /bin/ls
2243 pts/0 R+ 0:00 \_ ps axf
Here process 2242 is being debugged by gdb process 2237.
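If the tree is large, you can also narrow ps down to just the stopped/traced processes. A small sketch using standard procps options, nothing gdb-specific:

# List PID, parent PID, state and command for processes whose state starts
# with "T" (stopped or being traced), keeping the ps header line.
ps -e -o pid,ppid,stat,comm | awk 'NR == 1 || $3 ~ /^T/'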

Filebeat not reading all logs from directory

I am configuring filebeat to send logs located in /var/log/myapp/batch_* to Elasticsearch.
Here is my filebeat configuration:
# Version
filebeat version 7.11.0 (amd64), libbeat 7.11.0 [84c4d4c4034fcb49c1a318ccdc7311d70adee15b built 2021-02-08 22:42:11 +0000 UTC]
# Filebeat config
logging.metrics.period: 1h
logging.to_files: true
logging.files:
  rotateeverybytes: 16777216
  keepfiles: 7
  permissions: 0600
filebeat.inputs:
- type: log
  enabled: true
  scan_frequency: 5m
  paths:
    - "/var/log/myapp/batch_*"
output.elasticsearch:
  hosts: ["server:9200"]
  index: "log_test_app-%{+yyyy.MM.dd}"
setup.ilm.enabled: false
setup.template.name: "log_test_app"
setup.template.pattern: "log_test_app-*"
setup.template.overwrite: false
setup.template.settings:
  index.number_of_shards: 3
  index.number_of_replicas: 1
I can see that only two logs are being sent, although the directory contains a total of eight log files:
2022-05-24T19:39:55.904Z INFO log/input.go:157 Configured paths: [/var/log/myapp/batch_*]
2022-05-24T19:39:55.904Z INFO [crawler] beater/crawler.go:141 Starting input (ID: 3328309751929357009)
2022-05-24T19:39:55.904Z INFO [crawler] beater/crawler.go:108 Loading and starting Inputs completed. Enabled inputs: 1
2022-05-24T19:44:55.905Z INFO log/harvester.go:302 Harvester started for file: /var/log/myapp/batch_emails.log
2022-05-24T19:44:55.905Z INFO log/harvester.go:302 Harvester started for file: /var/log/loyalty/batch_import.log
I show you a list of directory files:
ls -l /var/log/myapp/batch_*
-rw-r--r-- 1 batch batch 154112 May 24 03:20 /var/log/myapp/batch_gifts.log
-rw-r--r-- 1 batch batch 112319 May 24 02:30 /var/log/myapp/batch_http.log
-rw-r--r-- 1 batch batch 7575342 May 24 02:30 /var/log/myapp/batch_vouchers.log
-rw-r--r-- 1 batch batch 4847849 May 24 19:30 /var/log/myapp/batch_ftp.log
-rw-r--r-- 1 batch batch 99413 May 24 03:40 /var/log/myapp/batch_category.log
-rw-r--r-- 1 root root 367207 May 24 19:50 /var/log/myapp/batch_emails.log
-rw-r--r-- 1 batch batch 479 Jan 1 23:00 /var/log/myapp/batch_history.log
-rw-r--r-- 1 batch batch 2420916 Jan 1 23:00 /var/log/myapp/batch_lists.php
-rw-r--r-- 1 batch batch 25779499 May 24 19:50 /var/log/myapp/batch_import.log
Is there something wrong with my setup? I tried using the ignore_older parameter (36h), but still only two log files are processed.
Thanks for the help.
Welcome to Stack Overflow, Emanuel :)
I believe you only want to read the log files (*.log), so you can make the pattern explicit:
filebeat.inputs:
- type: log
  enabled: true
  scan_frequency: 5m
  paths:
    - "/var/log/myapp/batch_*.log"
Keep Posted!!! Thanks!!!
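To double-check the change, you can first list what the new glob actually matches and then let Filebeat validate the configuration and output. A sketch; the config path /etc/filebeat/filebeat.yml is an assumption, adjust it to your installation:

# Which files does the new pattern match?
ls -l /var/log/myapp/batch_*.log
# Does the configuration parse, and is the Elasticsearch output reachable?
filebeat test config -c /etc/filebeat/filebeat.yml
filebeat test output -c /etc/filebeat/filebeat.yml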

How to correct permissions for CPackDeb directories?

Given a CMakeLists.txt like:
PROJECT(asdf NONE)
CMAKE_MINIMUM_REQUIRED(VERSION 3.0)
INSTALL(FILES CMakeLists.txt DESTINATION share/doc/asdf/whatever)
SET(CPACK_GENERATOR "DEB")
SET(CPACK_PACKAGE_CONTACT "asdf@example.com")
INCLUDE(CPack)
The package generated by make package has the following contents:
$ dpkg-deb --contents asdf-0.1.1-Linux.deb
drwx------ root/root 0 2017-12-20 10:50 ./usr/
drwx------ root/root 0 2017-12-20 10:50 ./usr/share/
drwx------ root/root 0 2017-12-20 10:50 ./usr/share/doc/
drwx------ root/root 0 2017-12-20 10:50 ./usr/share/doc/asdf/
drwx------ root/root 0 2017-12-20 10:50 ./usr/share/doc/asdf/whatever/
-rw-r--r-- root/root 235 2017-12-20 10:50 ./usr/share/doc/asdf/whatever/CMakeLists.txt
with the parent directories having permission bits only for the owner. How do I correct these so that the world can read the files I install, e.g. so that the directories are drwxr-xr-x instead?
In a discussion with CMake developer Nils Gladitz we were able to track this issue down to the umask of the environment. If the umask in the environment is set to 0022 instead of 0077, then make package generates the package with different permissions:
$ dpkg-deb --contents asdf-0.1.1-Linux.deb
drwxr-xr-x root/root 0 2017-12-20 11:17 ./usr/
drwxr-xr-x root/root 0 2017-12-20 11:17 ./usr/share/
drwxr-xr-x root/root 0 2017-12-20 11:17 ./usr/share/doc/
drwxr-xr-x root/root 0 2017-12-20 11:17 ./usr/share/doc/asdf/
drwxr-xr-x root/root 0 2017-12-20 11:17 ./usr/share/doc/asdf/whatever/
-rw-r--r-- root/root 235 2017-12-20 10:50 ./usr/share/doc/asdf/whatever/CMakeLists.txt
Nils noted that this is apparently an old unfixed issue[1][2].
Thank you, Nils! =)
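In shell terms, the workaround is simply to run the packaging step with a permissive umask. A minimal sketch based on the discussion above:

# Build the package with a permissive umask so the staged directories end up
# world-readable; the subshell keeps the umask change local to this command.
( umask 0022 && make package )
dpkg-deb --contents asdf-0.1.1-Linux.deb   # directories should now be drwxr-xr-x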

Why does xrandr give me errors if I try and use commands on my computer, but not if I ssh into it?

When using xrandr on my device to select a resolution I kept getting an error stating "configure crtc 0 failed".
(Shortened) xrandr output after selecting the display and running $ xrandr:
Screen 0: minimum 8 x 8, current 1920 x 1080, maximum 32767 x 32767
DP1 disconnected (normal left inverted right x axis y axis)
DP2 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 1439mm x 809mm
1920x1080 60.00*+ 50.00 59.94 30.00 24.00 29.97 23.98
4096x2160 24.00 23.98
3840x2160 30.00 25.00 24.00 29.97 23.98
1920x1080i 60.00 50.00 59.94
1680x1050 59.88
1280x720 60.00 50.00 30.00 59.94 29.97 24.00 23.98
1024x768 60.00
720x480 60.00 59.94
640x480 60.00 59.94
HDMI1 disconnected (normal left inverted right x axis y axis)
VIRTUAL1 disconnected (normal left inverted right x axis y axis)
Code I used to select a new resolution
$ xrandr --output DP2 --mode 3840x2160
When that gave me the error I also added the frame rate by trying both
$ xrandr --output DP2 --mode 3840x2160 30
AND
$ xrandr --output DP2 --mode 3840x2160_30
(because I wasn't sure of the proper format to add it). Both gave me the error "configure crtc 0 failed".
This was done on the device itself. For ergonomic reasons I went back to my desk and used SSH to access the device.
I then used a custom resolution (that was the same as above) and tried to use that instead.
Steps I used for the custom resolution (minus long outputs):
$ cvt 3840 2160
$ xrandr --newmode "3840x2160_30.00" 338.75 3840 4080 4488 5136 2160 2163 2168 2200 -hsync +vsync
$ xrandr --addmode DP2 3840x2160_30.00
$ xrandr --output DP2 --mode 3840x2160_30.00
That seemed to work on my device. When my device restarts I need to repeat the process again, though (it reverts to 1080p when I need 4K). I put $ xrandr --output DP2 --mode 3840x2160_30.00 into a .sh file, and now if I run it from my laptop (over SSH) it changes my screen's resolution, BUT if I try to run the .sh file from the device itself I get the "configure crtc 0 failed" error.
You can reconfigure Xorg. I did this by creating a file in my /usr/share/X11/xorg.conf.d directory.
I made it using vim:
sudo vim /usr/share/X11/xorg.conf.d/5-monitor.conf
Here is an example of my file
Section "Monitor"
Identifier "Monitor0"
Modeline "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
Modeline "3840x2160_30.0" 297.00 3840 4016 4104 4400 2160 2168 2178 2250 +hsync +vsync
Modeline "4096x2160_24.0" 297.00 4096 5116 5204 5500 2160 2168 2178 2250 +hsync +vsync
EndSection
Section "Device"
Identifier "Device0"
Driver "intel"
EndSection
Section "Screen"
Identifier "Screen0"
Device "Device0"
Monitor "Monitor0"
DefaultDepth 24
SubSection "Display"
Depth 24
Modes "3840x2160" "1920x1080"
EndSubSection
EndSection
For directions on how to do this you can follow this tutorial: https://wiki.gentoo.org/wiki/Xorg/Multiple_monitors
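After restarting X, you can check whether the server picked up the new file and its modelines. A sketch; the log path /var/log/Xorg.0.log is an assumption and varies by distribution and session type:

# Did the server read the custom config file and parse the modelines?
grep -iE '5-monitor\.conf|modeline' /var/log/Xorg.0.log
# Are the new modes now listed for the output?
xrandr | grep -A 3 '^DP2'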
I ran into this issue with Ubuntu 16.04.
You can try using cvt -r 3840 2160 (reduced blanking) in place of cvt 3840 2160.
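Building on that, the whole sequence from the question can be scripted so the mode is regenerated after every reboot and nothing has to be typed by hand. This is only a sketch: the output name DP2 is taken from the question, and it assumes it runs in the local X session (i.e. with DISPLAY set), which is exactly where the question says the manual commands failed less often over SSH.

#!/bin/sh
# Sketch: recreate a reduced-blanking 4K mode and switch DP2 to it.
set -e
MODELINE=$(cvt -r 3840 2160 | sed -n 's/^Modeline //p')   # e.g. "3840x2160R" 533.00 ...
NAME=$(printf '%s\n' "$MODELINE" | awk '{print $1}' | tr -d '"')
eval xrandr --newmode $MODELINE || true                   # ignore "mode already exists"
xrandr --addmode DP2 "$NAME"
xrandr --output DP2 --mode "$NAME"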

Unable to upload a file into OpenStack Swift 2.14.0 on a Raspberry Pi 3 because of "[Errno 13] Permission denied"

Creating and erasing buckets (containers) in my OpenStack Swift version 2.14.0 installation works well. It is a Swift only installation. No further OpenStack services like Keystone have been deployed.
$ swift stat
Account: AUTH_test
Containers: 2
Objects: 0
Bytes: 0
Containers in policy "policy-0": 2
Objects in policy "policy-0": 0
Bytes in policy "policy-0": 0
Connection: keep-alive
...
$ swift post s3perf
$ swift list -A http://10.0.0.253:8080/auth/v1.0 -U test:tester -K testing
bucket
s3perf
These are the (positive) messages regarding bucket creation inside the file storage1.error.
$ tail -f /var/log/swift/storage1.error
...
May 9 13:58:50 raspberrypi container-server: STDERR: (1114) accepted
('127.0.0.1', 38118)
May 9 13:58:50 raspberrypi container-server: STDERR: 127.0.0.1 - -
[09/May/2017 11:58:50] "POST /d1/122/AUTH_test/s3perf HTTP/1.1" 204 142
0.021630 (txn: tx982eb25d83624b37bd290-005911aefa)
But any attempt to upload a file causes just an error message [Errno 13] Permission denied.
$ swift upload s3perf s3perf-testfile1.txt
Object PUT failed: http://10.0.0.253:8080/v1/AUTH_test/s3perf/s3perf-testfile1.txt
503 Service Unavailable [first 60 chars of response] <html><h1>Service
Unavailable</h1><p>The server is currently
$ tail -f /var/log/swift/storage1.error
...
May 18 20:55:44 raspberrypi object-server: STDERR: (927) accepted
('127.0.0.1', 45684)
May 18 20:55:44 raspberrypi object-server: ERROR __call__ error with PUT
/d1/40/AUTH_test/s3perf/testfile : #012Traceback (most recent call
last):#012 File "/home/pi/swift/swift/obj/server.py", line 1105, in
__call__#012 res = getattr(self, req.method)(req)#012 File
"/home/pi/swift/swift/common/utils.py", line 1626, in _timing_stats#012
resp = func(ctrl, *args, **kwargs)#012 File
"/home/pi/swift/swift/obj/server.py", line 814, in PUT#012
writer.put(metadata)#012 File "/home/pi/swift/swift/obj/diskfile.py",
line 2561, in put#012 super(DiskFileWriter, self)._put(metadata,
True)#012 File "/home/pi/swift/swift/obj/diskfile.py", line 1566, in
_put#012 tpool_reraise(self._finalize_put, metadata, target_path,
cleanup)#012 File "/home/pi/swift/swift/common/utils.py", line 3536, in
tpool_reraise#012 raise resp#012IOError: [Errno 13] Permission denied
(txn: txfbf08bffde6d4657a72a5-00591dee30)
May 18 20:55:44 raspberrypi object-server: STDERR: 127.0.0.1 - -
[18/May/2017 18:55:44] "PUT /d1/40/AUTH_test/s3perf/testfile HTTP/1.1"
500 875 0.015646 (txn: txfbf08bffde6d4657a72a5-00591dee30)
Also the proxy.error file contains an error message ERROR 500 Expect: 100-continue From Object Server.
May 18 20:55:44 raspberrypi proxy-server: ERROR 500 Expect: 100-continue
From Object Server 127.0.0.1:6010/d1 (txn: txfbf08bffde6d4657a72a5-
00591dee30) (client_ip: 10.0.0.220)
I have started Swift as user pi and assigned these folders to this user:
$ sudo chown pi:pi /etc/swift
$ sudo chown -R pi:pi /mnt/sdb1/*
$ sudo chown -R pi:pi /var/cache/swift
$ sudo chown -R pi:pi /var/run/swift
sdb1 is a loopback device with XFS file system.
$ mount | grep sdb1
/srv/swift-disk on /mnt/sdb1 type xfs (rw,noatime,nodiratime,attr2,nobarrier,inode64,logbufs=8,noquota)
$ ls -ld /mnt/sdb1/1/
drwxr-xr-x 3 pi pi 17 May 12 13:14 /mnt/sdb1/1/
I deployed Swift this way.
I wonder why creating buckets (containers) works but the upload of a file fails because of Permission denied.
Update 1:
$ sudo swift-ring-builder /etc/swift/account.builder
/etc/swift/account.builder, build version 2
256 partitions, 1.000000 replicas, 1 regions, 1 zones, 1 devices, 0.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 0 (0:00:00 remaining)
The overload factor is 0.00% (0.000000)
Ring file /etc/swift/account.ring.gz is up-to-date
Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta
0 1 1 127.0.0.1:6012 127.0.0.1:6012 d1 1.00 256 0.00
$ sudo swift-ring-builder /etc/swift/container.builder
/etc/swift/container.builder, build version 2
256 partitions, 1.000000 replicas, 1 regions, 1 zones, 1 devices, 0.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 0 (0:00:00 remaining)
The overload factor is 0.00% (0.000000)
Ring file /etc/swift/container.ring.gz is up-to-date
Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta
0 1 1 127.0.0.1:6011 127.0.0.1:6011 d1 1.00 256 0.00
$ sudo swift-ring-builder /etc/swift/object.builder
/etc/swift/object.builder, build version 2
256 partitions, 1.000000 replicas, 1 regions, 1 zones, 1 devices, 0.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 0 (0:00:00 remaining)
The overload factor is 0.00% (0.000000)
Ring file /etc/swift/object.ring.gz is up-to-date
Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta
0 1 1 127.0.0.1:6010 127.0.0.1:6010 d1 1.00 256 0.00
Update 2
The required ports are open.
$ nmap localhost -p 6010,6011,6012,8080,22
...
PORT STATE SERVICE
22/tcp open ssh
6010/tcp open x11
6011/tcp open unknown
6012/tcp open unknown
8080/tcp open http-proxy
Update 3
I can write as user pi inside the folder where Swift should store the objects.
$ whoami
pi
$ touch /srv/1/node/d1/objects/test
$ ls -l /srv/1/node/d1/objects/test
-rw-r--r-- 1 pi pi 0 May 13 22:59 /srv/1/node/d1/objects/test
Update 4
All swift processes belong to user pi.
$ ps aux | grep swift
pi 944 3.2 2.0 24644 20100 ? Ss May12 65:14 /usr/bin/python /usr/local/bin/swift-proxy-server /etc/swift/proxy-server.conf
pi 945 3.1 2.0 25372 20228 ? Ss May12 64:30 /usr/bin/python /usr/local/bin/swift-container-server /etc/swift/container-server.conf
pi 946 3.1 1.9 24512 19416 ? Ss May12 64:03 /usr/bin/python /usr/local/bin/swift-account-server /etc/swift/account-server.conf
pi 947 3.1 1.9 23688 19320 ? Ss May12 64:04 /usr/bin/python /usr/local/bin/swift-object-server /etc/swift/object-server.conf
pi 1000 0.0 1.7 195656 17844 ? Sl May12 0:01 /usr/bin/python /usr/local/bin/swift-object-server /etc/swift/object-server.conf
pi 1001 0.0 1.8 195656 18056 ? Sl May12 0:01 /usr/bin/python /usr/local/bin/swift-object-server /etc/swift/object-server.conf
pi 1002 0.0 1.6 23880 16772 ? S May12 0:01 /usr/bin/python /usr/local/bin/swift-object-server /etc/swift/object-server.conf
pi 1003 0.0 1.7 195656 17848 ? Sl May12 0:01 /usr/bin/python /usr/local/bin/swift-object-server /etc/swift/object-server.conf
pi 1004 0.0 1.7 24924 17504 ? S May12 0:01 /usr/bin/python /usr/local/bin/swift-account-server /etc/swift/account-server.conf
pi 1005 0.0 1.6 24924 16912 ? S May12 0:01 /usr/bin/python /usr/local/bin/swift-account-server /etc/swift/account-server.conf
pi 1006 0.0 1.8 24924 18368 ? S May12 0:01 /usr/bin/python /usr/local/bin/swift-account-server /etc/swift/account-server.conf
pi 1007 0.0 1.8 24924 18208 ? S May12 0:01 /usr/bin/python /usr/local/bin/swift-account-server /etc/swift/account-server.conf
pi 1008 0.0 1.8 25864 18824 ? S May12 0:01 /usr/bin/python /usr/local/bin/swift-container-server /etc/swift/container-server.conf
pi 1009 0.0 1.8 25864 18652 ? S May12 0:01 /usr/bin/python /usr/local/bin/swift-container-server /etc/swift/container-server.conf
pi 1010 0.0 1.7 25864 17340 ? S May12 0:01 /usr/bin/python /usr/local/bin/swift-container-server /etc/swift/container-server.conf
pi 1011 0.0 1.8 25864 18772 ? S May12 0:01 /usr/bin/python /usr/local/bin/swift-container-server /etc/swift/container-server.conf
pi 1012 0.0 1.8 24644 18276 ? S May12 0:03 /usr/bin/python /usr/local/bin/swift-proxy-server /etc/swift/proxy-server.conf
pi 1013 0.0 1.8 24900 18588 ? S May12 0:03 /usr/bin/python /usr/local/bin/swift-proxy-server /etc/swift/proxy-server.conf
pi 1014 0.0 1.8 24900 18588 ? S May12 0:03 /usr/bin/python /usr/local/bin/swift-proxy-server /etc/swift/proxy-server.conf
pi 1015 0.0 1.8 24900 18568 ? S May12 0:03 /usr/bin/python /usr/local/bin/swift-proxy-server /etc/swift/proxy-server.conf
Update 5
When I create a bucket, the Swift service creates a folder like this one:
/mnt/sdb1/1/node/d1/containers/122/9d5/7a23d9409f11da3062432c6faa75f9d5/
and this folder contains a db-file like 7a23d9409f11da3062432c6faa75f9d5.db. I think this is the correct behavior.
But when I try to upload a file inside a bucket, Swift creates just an empty folder like this one.
/mnt/sdb1/1/node/d1/objects/139/eca/8b17958f984943fc97b6b937061d2eca
I can create files inside these empty folders via touch or echo as user pi but for an unknown reason, Swift does not store files inside these folders.
Update 6
In order to investigate this issue further, I installed Swift according to the SAIO - Swift All In One instructions, one time inside a VMware ESXi virtual machine with Ubuntu 14.04 LTS and another time inside Raspbian on a Raspberry Pi 3. The result is that inside the Ubuntu 14.04 VM Swift works perfectly, but when running on top of the Raspberry Pi, uploading files does not work.
Object PUT failed: http://10.0.0.253:8080/v1/AUTH_test/s3perf-testbucket/testfiles/s3perf-testfile1.txt
503 Service Unavailable [first 60 chars of response]
<html><h1>Service Unavailable</h1><p>The server is currently
The storage1.error log file still says:
May 24 13:15:15 raspberrypi object-server: ERROR __call__ error with PUT
/sdb1/484/AUTH_test/s3perf-testbucket/testfiles/s3perf-testfile1.txt :
#012Traceback (most recent call last):#012 File
"/home/pi/swift/swift/obj/server.py", line 1105, in __call__#012 res =
getattr(self, req.method)(req)#012 File
"/home/pi/swift/swift/common/utils.py", line 1626, in _timing_stats#012
resp = func(ctrl, *args, **kwargs)#012 File
"/home/pi/swift/swift/obj/server.py", line 814, in PUT#012
writer.put(metadata)#012 File "/home/pi/swift/swift/obj/diskfile.py",
line 2561, in put#012 super(DiskFileWriter, self)._put(metadata,
True)#012 File "/home/pi/swift/swift/obj/diskfile.py", line 1566, in
_put#012 tpool_reraise(self._finalize_put, metadata, target_path,
cleanup)#012 File "/home/pi/swift/swift/common/utils.py", line 3536, in
tpool_reraise#012 raise resp#012IOError: [Errno 13] Permission denied
(txn: txdfe3c7f704be4af8817b3-0059256b43)
Update 7
The issue is still not fixed, but I now have a working Swift service on the Raspberry Pi. I installed the (quite outdated) Swift revision 2.2.0, which is shipped with Raspbian, and it works well. The steps I took are explained here.
Based on the information that you provided, the errors occur while writing metadata.
The operations for writing metadata fall into two categories: manipulating the inode and manipulating extended attributes. So there are two possible sources for your errors.
First, it could be an inode-related error. This may occur due to setting the inode64 parameter while mounting the device. According to the XFS man page:
If applications are in use which do not handle inode numbers bigger than 32 bits, the inode32 option should be specified.
Second, it could be an error related to extended attributes. You can use the Python xattr package to write extended attributes and check whether an exception happens.
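For that second category, the same check can also be done from the shell with setfattr/getfattr from the attr package instead of the Python xattr module, assuming those tools are installed. A sketch using the object path from the question; Swift stores object metadata in extended attributes, so this write must succeed for uploads to work:

# Can user pi write and read back a user extended attribute on the object disk?
TESTFILE=/srv/1/node/d1/objects/xattr_probe
touch "$TESTFILE"
setfattr -n user.swift.test -v hello "$TESTFILE"
getfattr -n user.swift.test "$TESTFILE"
rm -f "$TESTFILE"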

How to determine what makes Drupal 6 hog all the memory and crash?

We have a site running Drupal 6 and a pretty standard set of modules such as Views, CCK and so on. The production site is running fine but after I created an SQL dump of the production server and imported the data to our local sandbox it stopped working.
To be more precise, after making a single request to the sandbox's Drupal instance such as loading the front page, 10-20 httpd processes suddenly start eating up all the CPU and memory on the machine. In a few seconds all the mysql handles have been used up and the site goes offline. The processes will keep doing whatever it is they're doing until I shut down the whole Apache httpd.
Since I can't get any output from the server, I can't think of a way to debug. Can there be some junk in the database that is causing infinite loops or something similar?
Here's a snippet of the output of top. These are all the result of one single page load.
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
7690 apache 16 0 337m 52m 13m S 27.4 1.4 0:04.42 httpd
7715 apache 15 0 337m 52m 13m S 24.1 1.5 0:08.69 httpd
7777 apache 15 0 337m 52m 13m R 20.8 1.4 0:09.94 httpd
7883 apache 16 0 337m 52m 13m S 19.5 1.5 0:12.39 httpd
7574 apache 16 0 337m 52m 13m R 17.2 1.4 0:06.30 httpd
7678 apache 15 0 337m 52m 13m S 16.2 1.4 0:02.26 httpd
7695 apache 15 0 337m 52m 13m S 15.5 1.4 0:10.29 httpd
7774 apache 15 0 337m 52m 13m S 15.5 1.4 0:04.62 httpd
748 mysql 15 0 364m 67m 5408 S 15.2 1.9 15:37.77 mysqld
7847 apache 15 0 337m 52m 13m S 14.9 1.4 0:07.10 httpd
7839 apache 16 0 337m 52m 13m S 14.2 1.4 0:02.85 httpd
7879 apache 15 0 337m 52m 13m S 13.9 1.5 0:12.65 httpd
7851 apache 16 0 337m 52m 13m R 12.5 1.4 0:06.77 httpd
7724 apache 16 0 337m 52m 13m S 12.2 1.4 0:06.62 httpd
7882 apache 16 0 337m 52m 13m S 11.6 1.5 0:09.04 httpd
8273 apache 16 0 337m 52m 13m S 9.2 1.4 0:07.30 httpd
7712 apache 15 0 337m 52m 13m R 8.9 1.4 0:08.13 httpd
7742 apache 16 0 337m 52m 13m S 8.9 1.4 0:06.74 httpd
7754 apache 15 0 337m 52m 13m S 8.6 1.4 0:04.16 httpd
7739 apache 16 0 337m 52m 13m S 8.3 1.4 0:04.51 httpd
7787 apache 15 0 337m 52m 13m S 8.3 1.4 0:07.44 httpd
7819 apache 16 0 337m 52m 13m S 7.6 1.4 0:02.03 httpd
7755 apache 16 0 337m 52m 13m S 7.3 1.4 0:05.89 httpd
7766 apache 16 0 337m 52m 13m R 7.3 1.4 0:01.12 httpd
7894 apache 16 0 337m 52m 13m S 7.3 1.4 0:09.49 httpd
7814 apache 15 0 337m 52m 13m S 5.9 1.4 0:03.88 httpd
7576 apache 15 0 337m 52m 13m S 5.6 1.4 0:03.63 httpd
7829 apache 15 0 337m 52m 13m S 5.3 1.4 0:04.17 httpd
7579 apache 15 0 337m 52m 13m S 5.0 1.4 0:04.43 httpd
7817 apache 15 0 337m 52m 13m S 4.0 1.4 0:04.60 httpd
7789 apache 15 0 337m 52m 13m S 2.0 1.4 0:04.41 httpd
7820 apache 15 0 337m 52m 13m S 1.0 1.4 0:01.57 httpd
First, you should empty all cache tables if that hasn't been done yet; a quick way to do that is sketched below.
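For a stock Drupal 6 schema that can be done directly in MySQL. A sketch: the database name and credentials are assumptions, and contributed modules such as Views add their own cache_* tables that you would truncate as well:

# Truncate the core Drupal 6 cache tables (add cache_views etc. if present).
mysql -u root -p drupal6_sandbox <<'SQL'
TRUNCATE cache;
TRUNCATE cache_block;
TRUNCATE cache_filter;
TRUNCATE cache_form;
TRUNCATE cache_menu;
TRUNCATE cache_page;
TRUNCATE cache_update;
SQL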
Then try to visit the website with JavaScript disabled (this rules out Ajax calls).
You could even try to access it with lynx (the text browser).
If the Apache process creation does not come from JavaScript but from the internals... well, that would mean one PHP script is spawning Apache processes, which would be very bad behaviour for a PHP script, so I hope it's not that.
You could try a profiling module on Drupal, like this one. After the crash you may at least be able to query the report pages; all profiling data is saved in the database and could give you interesting data (see screenshots). If you cannot access the module pages, you can try to check the MySQL tables containing the analysed data directly.
Otherwise, you could try Xdebug and export a KCachegrind file for your request, but this can be quite hard to read with Drupal requests.
EDIT
Also try checking with Firebug that the single page load is not triggering many further requests back to the same page (for example because of empty image src attributes, if it's not JavaScript). And check the Apache log and the MySQL log, where you can activate request logging.
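On the MySQL side, with MySQL 5.1 or newer the general query log can be switched on at runtime to see exactly which queries that single page load fires. A sketch; the log path is an assumption:

# Enable the general query log without restarting MySQL (5.1+ only),
# then watch what a single page load actually executes.
mysql -u root -p <<'SQL'
SET GLOBAL general_log_file = '/var/log/mysql/drupal-general.log';
SET GLOBAL general_log = ON;
SQL
tail -f /var/log/mysql/drupal-general.log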