Problem getting a process's read/write bytes and CPU usage in Python 3.8

Recently, for my school finals project, I needed to write a function that returns a process's read and write bytes as well as its CPU usage. I have tried using psutil, tried using WMI (I couldn't find a way to retrieve the read and write bytes through it), and even tried combining the two. In the end, though, psutil gives me an access denied error every time on almost all processes. I would love some help solving this.
Edit: I am using Windows 10.
This is the code I currently have:
import wmi
import psutil
from elevate import elevate

elevate()

c = wmi.WMI()
process_watcher = c.Win32_Process.watch_for("creation")

while True:
    new_process = process_watcher()
    pid = new_process.Processid
    name = new_process.Caption
    try:
        curr_process = psutil.Process(pid=pid)
        try:
            # get the number of CPU cores that can execute this process
            cores = len(curr_process.cpu_affinity())
        except psutil.AccessDenied:
            cores = 0
        # get the CPU usage percentage
        cpu_usage = curr_process.cpu_percent()
        try:
            # get the memory usage in bytes
            memory_usage = curr_process.memory_full_info().uss
        except psutil.AccessDenied:
            memory_usage = 0
        # total process read and written bytes
        io_counters = curr_process.io_counters()
        read_bytes = io_counters.read_bytes
        write_bytes = io_counters.write_bytes
        print(f"{pid}: {name}, Write bytes: {write_bytes}, cpu usage: {cpu_usage}")
    except:
        print("Access Denied")

Related

Google colab unable to work with hdf5 files

I have 4 hdf5 files in my Drive. While using Colab, db = h5py.File(path_to_file, "r") works sometimes and fails the rest of the time. When writing the hdf5 files, I made sure to close them after writing. Say File1 works in notebook_#1; when I try to use it in notebook_#2 it works sometimes and fails other times, and when I run it again in notebook_#1 it may or may not work.
Size is probably not the issue, because some of my files are 32 GB and others 4 GB, and the problem is mostly with the 4 GB files.
The hdf5 files were generated in Colab itself. The error I get is:
OSError: Unable to open file (file read failed: time = Tue May 19 12:58:36 2020
, filename = '/content/drive/My Drive/Hrushi/dl4cv/hdf5_files/train.hdf5', file descriptor = 61, errno = 5, error message = 'Input/output error', buf = 0x7ffc437c4c20, total read size = 8, bytes this sub-read = 8, bytes actually read = 18446744073709551615, offset = 0
or
/usr/local/lib/python3.6/dist-packages/h5py/_hl/files.py in make_fid(name, mode, userblock_size, fapl, fcpl, swmr)
171 if swmr and swmr_support:
172 flags |= h5f.ACC_SWMR_READ
--> 173 fid = h5f.open(name, flags, fapl=fapl)
174 elif mode == 'r+':
175 fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)
h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
h5py/h5f.pyx in h5py.h5f.open()
OSError: Unable to open file (bad object header version number)
Would be grateful for any help, thanks in advance.
Reading directly from Google Drive can cause problems.
Try copying the file to a local directory, e.g. /content/, first.
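A minimal sketch of that workaround (the source path is taken from the traceback above; adjust it to your own file):

import shutil
import h5py

drive_path = "/content/drive/My Drive/Hrushi/dl4cv/hdf5_files/train.hdf5"
local_path = "/content/train.hdf5"

# copy once from the mounted Drive to the Colab VM's local disk
shutil.copyfile(drive_path, local_path)

# open the local copy; reads no longer go through the Drive FUSE mount
with h5py.File(local_path, "r") as db:
    print(list(db.keys()))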

yowsup2 receives incomplete messages and strange characters

I am trying Yowsup2 with the demo EchoClient:
yowsup-cli demos -c config.example -e
I receive messages, but they are incomplete and contain strange characters at the end of each text.
For example, I send "What is your name?" from my mobile phone to the Yowsup2 number, and Yowsup2 receives (and prints in the terminal):
Echoing hat is your name?������������������������������������������������������������������������������������������������������������������������������������������������������������������������
Any idea?
I was working on a project using this library and got stuck on this issue. I was waiting for someone to fix it, and since no one has, I tried it myself and got mine to work. This is what I did:
Clone the repository from GitHub (tested to work on Python 3.5).
In yowsup/layers/axolotl/layer.py, replace line 192, which is
191 padded.extend(self.encodeInt7bit(len(plaintext)))
192 padded.extend(plaintext) # this is the line; replace it
193 padded.append(ord("\x01"))
with this:
padded.extend(plaintext.encode() if isinstance(plaintext, str) else plaintext)
Then add #jlguardi's fix from this thread, which I had to modify a bit to work for me:
def decodeInt7bit(self, string):
    idx = 0
    while string[idx] >= 128:
        idx += 1
    consumedBytes = idx + 1
    value = 0
    while idx >= 0:
        value <<= 7
        value += string[idx] % 128
        idx -= 1
    return value, consumedBytes

def unpadV2Plaintext(self, v2plaintext):
    print(v2plaintext)
    v2plaintext = bytearray(v2plaintext, 'utf8') if isinstance(v2plaintext, str) else v2plaintext
    end = -(v2plaintext[-1])  # length of the left padding
    length, consumed = self.decodeInt7bit(v2plaintext[1:])
    return v2plaintext[1 + consumed:end]
The output looks clean on the client side, although the garbled text still appears on the server.
Then install the modified copy with setup.py install.
Hope it works for you
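As an aside, the reason line 192 presumably breaks on Python 3 is that a bytearray cannot be extended with a str. A quick illustration of why the patch encodes the plaintext first (the byte values here are made up):

padded = bytearray()
padded.extend(b"\x0a\x05")  # header byte plus 7-bit-encoded length, as in the layer
plaintext = "hello"
# padded.extend(plaintext)  # on Python 3 this raises TypeError
padded.extend(plaintext.encode() if isinstance(plaintext, str) else plaintext)
padded.append(ord("\x01"))
print(bytes(padded))  # b'\n\x05hello\x01'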

PsychoPy website does not give a fix for output file doubling data

Whenever I get an output file, the data in it is doubled. Here is some of the code:
# For each record in keyPress, a line is created in the file
keyPress = []
keyPress.append(event.waitKeys(keyList=['s', 'd'], timeStamped=clock))
for key in keyPress:
    for l, t in key:
        f.write(str(images[index]) + "\t iteration \t" + str(k + 1) + "\t" + l + "\t" + str(t) + "\n")
f.close()
There are a few things that are unclear here, and I haven't managed to reproduce the issue, but I'll give an answer a shot anyway. First, event.waitKeys returns just one response, so it is really not necessary to loop over it. I'd just do
l, t = event.waitKeys(keyList=['s', 'd'], timeStamped=clock)[0]
... which is much nicer. A full reproducible solution would be this:
# Set things up
from psychopy import visual, event, core
win = visual.Window()
clock = core.Clock()
f = open('log.tsv', 'a')

# Record responses for a few trials and save
for trial in range(5):
    # [0] extracts the first (and only) element, i.e. the (key, rt) tuple,
    # which is then unpacked into l and t
    l, t = event.waitKeys(keyList=['s', 'd'], timeStamped=clock)[0]
    f.write('trial' + str(trial) + '\tkey' + l + '\tRT' + str(t) + '\n')
f.close()
Instead of creating your log files manually like this, consider using the csv module or PsychoPy's own data.TrialHandler. Usually it is nice to represent trials using a dict and save responses together with the properties of each trial; the csv module has a DictWriter class for exactly that, as sketched below.
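A minimal sketch of that idea (the trial fields and file name are made up for illustration):

import csv
from psychopy import visual, event, core

win = visual.Window()
clock = core.Clock()

with open('log.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=['trial', 'key', 'rt'])
    writer.writeheader()
    for trial in range(5):
        key, rt = event.waitKeys(keyList=['s', 'd'], timeStamped=clock)[0]
        # one row per trial: trial properties plus the response
        writer.writerow({'trial': trial, 'key': key, 'rt': rt})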

How to process various tasks like video acquisition in parallel in MATLAB?

I want to acquire image data from a stereo camera simultaneously, or in parallel, save it somewhere, and read the data when needed.
Currently I am doing
for i = 1:100
    start([vid1 vid2]);
    imageData1 = getdata(vid1, 1);
    imageData2 = getdata(vid2, 1);
    % do several calculations
    ....
end
With this, the cameras work serially and it is very slow. How can I make the two cameras work at the same time?
Please help.
P.S.: I also tried parfor, but it does not help.
Regards
No Parallel Computing Toolbox required!
The following solution can generally solve problems like yours.
First, the videos: I just use some vectors as "data" and save them to .mat files; these would be your two video files:
% Creating some "videos"
fakevideo1 = [1 ; 1 ; 1];
save('fakevideo1','fakevideo1');
fakevideo2 = [2 ; 2 ; 2];
save('fakevideo2','fakevideo2');
The basic trick is to create a function which launches another instance of MATLAB:
function [ ] = parallelinstance( fakevideo_number )
    % create command
    % -sd (set directory), pwd (current directory), -r (run function) ...
    % finally "&" to indicate background computation
    command = strcat('matlab -sd',{' '},pwd,{' '},'-r "processvideo(',num2str(fakevideo_number),')" -nodesktop -nosplash &');
    % call command
    system( command{1} );
end
Most important is the use of & at the end of the terminal command!
Within this function another function is called where the actual video processing is done:
function [] = processvideo( fakevideo_number )
    % create file and variable name
    filename = strcat('fakevideo',num2str(fakevideo_number),'.mat');
    varname = strcat('fakevideo',num2str(fakevideo_number));
    % load video to workspace or whatever
    load(filename);
    A = eval(varname);
    % do what has to be done
    results = A*2;
    % save results to workspace, file, grandmothers mailbox, etc.
    save([varname 'processed'],'results');
    % just to show that both processes run parallel
    pause(5)
    exit
end
Finally call the two processes in your main script:
% function call with number of video: parallelinstance(fakevideo_number)
parallelinstance(1);
parallelinstance(2);
My code is completely executable, so just play around a bit; I tried to keep it simple.
Afterwards you will find two .mat files with the processed video "data" in your working directory.
Remember to adjust the string fakevideo to the name root of your own video files.

Connection reset by peer error in MongoDb on bulk insert

I am trying to insert 500 documents by doing a bulk insert in pymongo and I get this error:
File "/usr/lib64/python2.6/site-packages/pymongo/collection.py", line 306, in insert
continue_on_error, self.__uuid_subtype), safe)
File "/usr/lib64/python2.6/site-packages/pymongo/connection.py", line 748, in _send_message
raise AutoReconnect(str(e))
pymongo.errors.AutoReconnect: [Errno 104] Connection reset by peer
I looked around and found that this happens because the total size of the inserted documents exceeds 16 MB, which would mean the size of my 500 documents is over 16 MB. So I checked the size of the 500 documents (Python dictionaries) like this:
size = 0
for dict in dicts:
    size += dict.__sizeof__()
print size
This gives me 502920, which is about 500 KB, way less than 16 MB. So why do I get this error?
I know I am calculating the size of Python dictionaries, not BSON documents, and MongoDB takes in BSON documents, but that can't turn 500 KB into more than 16 MB. Moreover, I don't know how to convert a Python dict into a BSON document.
My MongoDB version is 2.0.6 and my pymongo version is 2.2.1.
EDIT
I can do a bulk insert of 150 documents and that is fine, but above 150 documents this error appears.
This Bulk Inserts bug has been resolved, but you may need to update your pymongo version:
pip install --upgrade pymongo
The error occurs because the bulk-inserted documents have an overall size greater than 16 MB.
My method of calculating the size of the dictionaries was wrong.
When I manually inspected each key of a dictionary, I found that one key had a value of about 300 KB. That did make the overall size of the documents in the bulk insert more than 16 MB (500 * 300+ KB > 16 MB). But I still don't know how to calculate the size of a dictionary without inspecting it manually. Can someone please suggest a way?
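One way to measure what actually goes over the wire is to encode each dictionary to BSON and take the length of the resulting bytes. A minimal sketch using the bson package that ships with pymongo (the docs list here is a placeholder standing in for your dictionaries):

from bson import BSON

docs = [{"key": "value" * 100}, {"key": "x" * 300000}]  # placeholder documents

total = 0
for doc in docs:
    encoded = BSON.encode(doc)  # the bytes exactly as pymongo would send them
    total += len(encoded)
    print(len(encoded))

print("total BSON size in bytes: %d" % total)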
Just had the same error and got around it by creating my own small bulks like this:
region_list = []
region_counter = 0
write_buffer = 1000

# loop through regions
for region in source_db.region.find({}, region_column):
    region_counter += 1  # up _counter
    region_list.append(region)
    # save bulk if we're at the write buffer
    if region_counter == write_buffer:
        result = user_db.region.insert(region_list)
        region_list = []
        region_counter = 0

# if there is a rest, also save that
if region_counter > 0:
    result = user_db.region.insert(region_list)
Hope this helps
NB: a small update: from pymongo 2.6 onwards, PyMongo will auto-split lists based on the maximum transfer size: "The insert() method automatically splits large batches of documents into multiple insert messages based on max_message_size".
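With a newer pymongo (3.0 or later), the same idea is usually expressed with insert_many, which batches the documents automatically. A minimal sketch under that assumption (the connection URI and database/collection names are placeholders matching the answer above):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
source_db = client["source"]
user_db = client["user"]

# read all regions and let insert_many split them into
# appropriately sized batches under the hood
regions = list(source_db.region.find({}))
if regions:
    result = user_db.region.insert_many(regions)
    print("inserted %d documents" % len(result.inserted_ids))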