Number of Read and Write Requests for Hello World - gem5

I ran a basic hello world program in gem5 and below is the command I ran:
./build/ARM/gem5.opt ./configs/example/se.py --cpu-type=TimingSimpleCPU --cmd=tests/test-progs/hello/bin/arm/linux/hello
I expected the number of read/write requests to be within a few hundred, on the order of the program's instruction count, but when I looked into stats.txt in the m5out folder I saw:
system.mem_ctrls.readReqs 6088 # Number of read requests accepted
system.mem_ctrls.writeReqs 936 # Number of write requests accepted
Also, I statically compiled the binary. Could someone help me understand why there are so many requests going to main memory?
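No answer is recorded here, but as a starting point, the request counters can be compared against how many instructions the simulation actually committed, using the same stats file. This is only a sketch: the instruction-count stat is called sim_insts in older gem5 versions and may be named differently in yours.
# Pull the committed-instruction count and the memory-controller request
# counters out of the same stats.txt for comparison.
grep -E 'sim_insts|readReqs|writeReqs' m5out/stats.txt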


Combine output of two SSH commands, each running wget download

I found a site that offers downloads no other site provides.
I'm deliberately not saying what it is, to avoid answers that simply provide an alternative download link; nothing unlawful, friends ;-)
But said site has limits for downloads:
Maximum 2 connections per IP.
Maximum 50 KB/s download speed per connection.
At least it supports resuming, and with two connections my download speed is 50 KB/s * 2 == 100 KB/s.
Said limits are far below my internet bandwidth, but luckily I have a remote host (so one extra IP besides my personal IP), which means I can download an extra file at the same time, using a command like:
ssh root@123.45.67.89 'wget -O - https://my-site.com/my-direct-link.zip' >> my-local-download.zip
The above command even shows the size already downloaded, progress in percent, download speed and time remaining, like:
289050K .......... .......... .......... .......... .......... 15% 50K 10h30m42s
which allows me to see how long it takes to complete.
The problem is that said command uses only one of the two possible connections.
I know wget can be made to use more connections per server, but I cannot use that, because then I would have to wait until the download finishes on the remote host (perhaps blindly, without seeing progress) before I could fetch it locally.
So instead I want to run two wget commands in parallel, but:
How can I configure the first wget command to download only the first 50% of the original file?
How can I configure the second wget command to download the part after that 50%?
How can I combine both half-files back into one complete file?
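No answer is recorded here, but a minimal sketch of one approach follows, assuming the server honours HTTP Range requests and reports Content-Length. curl's -r/--range option is used because wget has no clean way to stop at a given byte; the URL, remote IP and filenames are the placeholders from the question.
URL='https://my-site.com/my-direct-link.zip'

# Total size in bytes from the Content-Length header (assumes the server
# reports it and honours Range requests).
SIZE=$(curl -sI "$URL" | tr -d '\r' | awk 'tolower($1) == "content-length:" {print $2}')
HALF=$((SIZE / 2))

# First half locally, second half via the remote host, both in parallel.
curl -r 0-$((HALF - 1)) -o part1.zip "$URL" &
ssh root@123.45.67.89 "curl -s -r ${HALF}- '$URL'" > part2.zip &
wait

# Stitch the halves back into one file.
cat part1.zip part2.zip > my-local-download.zip
Byte ranges in curl -r are inclusive, so 0 to HALF-1 and HALF onwards meet exactly; verifying the result against a published checksum, if the site offers one, is still a good idea.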

Which file is consuming most of the bandwidth?

My website is consuming much more bandwidth than it is supposed to. From Webalizer or AWStats in WHM/cPanel I can monitor the bandwidth usage and see which types of files (jpg, png, php, css, etc.) are consuming the bandwidth, but I couldn't get any specific file name. My assumption is that the bandwidth usage is caused by referral spamming, but from the "Visitors" page of cPanel I can see only the last 1000 hits. Is there any way to see which image or CSS file is consuming the bandwidth?
If you suspect a particular file is consuming the most bandwidth, you can use the apachetop tool.
yum install apachetop
then run
apachetop -f /var/log/apache2/domlogs/website_name-ssl.log
replacing website_name with the domain you want to inspect.
It basically picks up the entries from the domlogs (which record the requests being served for each website; you may read more about domlogs here).
This will show, in real time, which file is being requested the most, and it might give you an idea of whether a particular image, PHP file, etc. has the maximum number of requests.
Domlogs are a way to find out which file is being requested, and by which bot or client. Your initial investigation can start from this point.
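Note that apachetop ranks by request count rather than bytes. If you want to see which file accounts for the most transferred bytes, a rough sketch over the same domlog could look like the following, assuming Apache's common/combined log format, where field 7 is the requested URL and field 10 is the response size in bytes:
# Sum response bytes per URL and list the top 20 consumers.
# Adjust the log path to your domain, as above.
awk '{ bytes[$7] += $10 }
     END { for (url in bytes) printf "%12d %s\n", bytes[url], url }' \
    /var/log/apache2/domlogs/website_name-ssl.log | sort -rn | head -20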

How can I generate a web page that checks the status of a process or service?

I have a dedicated server that runs a few lightweight game servers, and it is already running Apache. However, I am cheap, the server hardware is not exactly robust, and not all the game servers run concurrently. I want to be able to generate a web page, say /stats, that has some info like:
Game 1: Online <uptime>
Game 2: Offline
...etc
I'm certain I could run a cron job that logs ps + grep output to a file and then parse that file for information on each server, but I'm looking for a more dynamic option that checks as the page is generated.
You have at least a few options (other people may have additional suggestions beyond what is listed here):
Cron a shell script to generate a stats.html or stats.txt
PHP's shell_exec (could run ps |grep... for example) or exec
PHP's variety of posix functions may help (http://php.net/manual/en/ref.posix.php)
If you have Perl available there may be a few options there as well
My suggestion is to evaluate shell_exec or exec before any of the others.
If you need additional assistance please post what you have tried and the results.
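As a concrete illustration of the shell route, here is a minimal sketch; the process names game1d and game2d are placeholders. It can be cronned to write a stats file, or dropped into cgi-bin (keeping the Content-Type header) so the check happens as the page is requested:
#!/bin/sh
# Emit one status line per game server. Adjust the process names.
echo "Content-Type: text/plain"
echo ""
for proc in game1d game2d; do
    pid=$(pgrep -o -x "$proc")
    if [ -n "$pid" ]; then
        # etime = elapsed time since the process started
        up=$(ps -o etime= -p "$pid")
        echo "$proc: Online, up $up"
    else
        echo "$proc: Offline"
    fi
done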

Tcl shell through Apache

I have a tool which supports interactive queries through a Tcl shell. I want to create a web application through which users can send different queries to the tool. I have done some basic programming with the Apache web server and CGI scripts, but I am unable to think of a way to keep the shell alive and send queries to it.
Some more information:
Let me describe it further. The tool builds a graph data structure; after building it, users can query for information through the Tcl shell, for example to get all child nodes of a particular node. I cannot rebuild the data structure for every query because building it takes a lot of time. I want to build the data structure once and somehow keep the shell alive. The Apache server should send all queries to that shell and return the responses back to the user.
You might want to create a daemon process, perhaps using Expect, that spawns your interactive program. The daemon could listen for queries over TCP using Tcl's socket command. Your CGI program would then create a client socket to talk to the daemon.
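The Tcl daemon itself isn't shown here, but for illustration, the CGI end of that arrangement can stay very small. This is only a sketch: the port 9900, the one-query-per-line protocol, and the q query-string parameter are all assumptions, and netcat stands in for the client socket.
#!/bin/sh
# Hypothetical CGI client for a long-lived Tcl daemon on localhost:9900.
echo "Content-Type: text/plain"
echo ""

# Crude extraction of q=... from the query string (no URL decoding).
query=$(printf '%s\n' "$QUERY_STRING" | sed -n 's/.*q=\([^&]*\).*/\1/p')

# Send the query and relay whatever the daemon prints back to the browser.
printf '%s\n' "$query" | nc -w 5 localhost 9900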
I'd embed the graph-managing program into an interpreter that's also running a small webserver (e.g., tclhttpd, though that's not the only option) and have the rest of the world interact with the graph through RESTful web accesses. This could then be integrated behind Apache in any way you like: a CGI thunk would work, you could do request forwarding, or you could write some server-side code for it (there are many options there), or you could even just let clients connect directly. Many options would work.
The question appears to be incomplete, as you did not specify what exactly "interactive" means with regard to your tool.
How does it support interactive queries? Does it call gets in a kind of endless loop and process each line as it's read? If so, the solution to your problem is simple: the Tcl shell is not really concerned with whether its standard input is connected to an interactive terminal or not. So just spawn your tool in your CGI request-handling code, write the user's query to that process's stdin stream, flush it, and then read all the text the process writes to its stdout and stderr streams. Then send that back to the browser. How exactly to spawn the process and communicate with it via its standard streams depends heavily on your CGI code.
If you don't get the idea, try writing your query to a file and then do something like
$ tclsh /path/to/your/tool/script.tcl </path/to/the/query.file
and the tool should respond in the usual way.
If the interaction is carried out in some other way in your tool, then you probably have to split it into "core" and "front-end" parts, so that the core just reads queries and outputs results while the front-end carries out the interaction. Then hook that core up to your CGI processing code in the way outlined above.
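For the first case (the tool reads queries from stdin in a loop), the spawn-per-request idea above can be sketched in shell like this; the script path is the one from the example, while the q parameter and the lack of URL decoding are simplifying assumptions, and note that this does rebuild the graph on every request:
#!/bin/sh
# Spawn-per-request sketch: write the incoming query to a temporary file,
# feed it to the tool's stdin, and relay the output to the browser.
echo "Content-Type: text/plain"
echo ""

tmp=$(mktemp) || exit 1
trap 'rm -f "$tmp"' EXIT

# Extract q=... from the query string (no URL decoding, for brevity).
printf '%s\n' "$QUERY_STRING" | sed -n 's/.*q=\([^&]*\).*/\1/p' > "$tmp"

tclsh /path/to/your/tool/script.tcl < "$tmp" 2>&1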

Counting the number of hits on a Yaws web application

If I have a web application running on a Yaws web server, how would I count the number of hits from users to my site?
I have tried rudimentary methods, such as counting the number of lines in my site's .access file from the Yaws logs, like this:
$ cat PATH_TO_YAWS_LOGS/www.my_site.com.access | wc -l
Please point me to a better way of finding out how many hits I have received so far on my site running on top of Yaws.
You're in luck: Yaws uses the "Common Log Format", so any log-analysis software that supports Apache logs should do (for example "Webalizer", as mentioned there).
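If you just want quick numbers before setting up a log analyzer, a couple of rough one-liners over the same Common Log Format file can already break that raw line count down; the log path below is the placeholder from the question.
LOG=PATH_TO_YAWS_LOGS/www.my_site.com.access

# Total hits (same as the wc -l above, without the extra cat).
wc -l < "$LOG"

# Hits per day: field 4 looks like "[day/month/year:time", so keep only the date part.
awk '{ split($4, d, ":"); day[substr(d[1], 2)]++ }
     END { for (k in day) print day[k], k }' "$LOG"

# Hits per URL (field 7 in the Common Log Format), top 10.
awk '{ url[$7]++ } END { for (k in url) print url[k], k }' "$LOG" | sort -rn | head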