Python 3.10.4 exe file gets SSLCertVerificationError on another computer - ssl-certificate

python: 3.10.4
urllib3
Windows
In short, I created a Python file that takes some information from a site and then puts it in an Excel file.
From main.py I created an .exe file as follows:
pyinstaller --onefile main.py
Now if I run the main.py or main.exe from my computer everything works fine.
When I try to run either the main.py or main.exe from another computer I get the following error:
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:997)
Yesterday, when I tried to run the same .exe from the other computer, everything worked fine, but today when I try to run the .exe file from the other computer I get the error.
Even when I create a new .exe and try to run it on the other computer, I again get the same error.
I don't know if this helps, but part of the code is as follows:
from urllib.request import urlopen
from bs4 import BeautifulSoup as BS
raw_info = BS(urlopen(url),'html.parser')
#Do stuff with raw_info
If the entire error is needed, I will crop out some information from it and paste it here.
url = https://dashboard.elering.ee/api/nps/price?start=2022-06-29T22%3A00%3A00.000Z&end=2022-06-30T23%3A59%3A59.999Z
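For what it's worth, a "certificate has expired" failure that happens on only one machine often points at that machine's clock or its root certificate store rather than at the site itself. A minimal sketch of a workaround, assuming the certifi package is installed, is to hand urlopen an SSL context built from certifi's bundled roots instead of the other computer's store:
import ssl
import certifi
from urllib.request import urlopen
from bs4 import BeautifulSoup as BS
# Build an SSL context from certifi's bundled root certificates instead of
# relying on the (possibly outdated) certificate store of the other computer.
context = ssl.create_default_context(cafile=certifi.where())
raw_info = BS(urlopen(url, context=context), 'html.parser')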

Related

Running remote Pycharm interpreter with tensorflow and cuda (with module load)

I am using a remote computer in order to run my program on its GPU. My program contains some code with tensorflow functions, and for easier debugging with PyCharm I would like to connect via SSH with a remote interpreter to the computer with the GPU. This part is easy since PyCharm has this option, so I can connect there. However, tensorflow is not loaded automatically, so I get an import error.
Note that in our institution we run module load cuda/10.0 and module load tensorflow/1.14.0 each time the computer is loaded. Now this is the tricky part: opening a remote terminal creates another session which is not related to the remote interpreter session, so it does not affect the remote interpreter's modules.
I know that module load generally configures the environment, but I am not sure how I can export those environment variables into the environment variables PyCharm configures before a run.
Any help would be appreciated. Thanks in advance.
The workaround turned out to be relatively simple: first, I installed the EnvFile plugin, as explained here: https://stackoverflow.com/a/42708476/13236698
Then I created an .env file with a quick Python script: I extracted all environment variables and their values from os.environ and wrote them to a file in the format <env_variable>=<variable_value>, saved with an .env extension. Then I loaded it into PyCharm, and voila - all tensorflow modules were loaded fine.
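A minimal sketch of such a dump script, run in a shell where the module load commands have already been issued (the output file name is just an example):
import os
# Write every variable of the current environment (after `module load ...`)
# in the <env_variable>=<variable_value> format that the EnvFile plugin reads.
with open('remote_modules.env', 'w') as f:
    for name, value in os.environ.items():
        f.write('{}={}\n'.format(name, value))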

Pandas Import Error when converting Python script to .exe with Pyinstaller

I am currently trying to convert my Python script that automates an Excel task for my boss to an executable so that I can send it to him and he can use it without having Python installed. One of the libraries I used in the script is Pandas, and when I run the .exe after successfully building it, I get "failed to execute script 'main' due to unhandled exception: No module named pandas."
This is what I used to build the .exe:
pyinstaller --onefile -w main.py
Obviously I have Pandas installed on my machine, and the script works perfectly fine when I run it normally, so I don't really know why I am getting this error, especially since I thought the whole point of converting the script to an executable was to remove the need for other packages to be installed.
I have read other articles on here about a similar error relating to numpy, but none of the other solutions have helped me. I have tried doing --hidden-import pandas while building the executable and downgrading pandas to an older version, both without success.
Any suggestions would help!
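One sketch of a next step, assuming a reasonably recent PyInstaller (its --collect-all option pulls in a package's submodules and data files that static analysis can miss), is to drive the build from Python and force pandas in explicitly:
import PyInstaller.__main__
# Rebuild the one-file executable while forcing pandas in as a hidden import
# and collecting its submodules and data files.
PyInstaller.__main__.run([
    'main.py',
    '--onefile',
    '-w',
    '--hidden-import', 'pandas',
    '--collect-all', 'pandas',
])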

How can I fix the module import error when I run it on the command line after making an SSH connection?

I am now working on a LEGO Mindstorms project using the EV3 brick. I successfully connected my PC and the EV3 brick using PyCharm and then transferred the code from my laptop to the EV3 brick.
But whenever I try to run the Python file in the SSH shell from the terminal, I get a module import error: no module named 'libs'. Since I transferred all the files and even set the path using export PYTHONPATH="${PYTHONPATH}:/home/robot/csse120/libs", it should be able to import all the modules. It seems that when I run a file without using the terminal, it correctly imports other modules and packages, but whenever I run it in the shell, it cannot find files in other packages.
I even tried inserting the following code:
import os
os.environ['PATH'] += ':/home/robot/csse120/libs'
but it didn't work. Also, in another file, it says it cannot import paho-mqtt, though it worked well when I didn't run it on the shell.
Can you help me solve this problem? I would appreciate your help.
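In case the distinction helps: PATH only controls where the shell looks for executables, while Python resolves imports through sys.path. A minimal sketch of the sys.path variant of the snippet above:
import sys
# Python searches for importable modules on sys.path, not on PATH,
# so add the libs directory there before importing anything from it.
sys.path.append('/home/robot/csse120/libs')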

Pycharm giving error on a script which is working from terminal (Module: Tensorflow)

I was working with the tensorflow (GPU version) module in PyCharm. If I run a script from the terminal, it works as expected. However, when I run the script from PyCharm, it says:
ImportError: libcudart.so.7.5: cannot open shared object file: No such file or directory
How do I resolve this?
The PyCharm interpreter shows tensorflow as a package.
When I check the version of tensorflow in the terminal, it is the same as in PyCharm (0.10.0rc0).
It looks like your CUDA_HOME or LD_LIBRARY_PATH is configured correctly in the console, but not in PyCharm. You can check and compare their values; in the console, run
echo $CUDA_HOME
echo $LD_LIBRARY_PATH
In PyCharm (say, in your main script):
import os
print(os.environ.get('CUDA_HOME'))
print(os.environ.get('LD_LIBRARY_PATH'))
You can configure them for the given Run Configuration in the Environment Variables section.
A better approach would be to configure those environment variables globally, so every process in the system has access to them. To do that, edit the /etc/environment file and add the original values that you got from the console.
Here are very similar problems: one, two, three.

Command line to run a program with both xlwt and abaqusConstants modules

Windows Machine, Python 2.4.
I have a program that imports both xlwt/xlrd and the abaqusConstants module.
When I run my program with the command line abaqus python abc.py, I get "ImportError: No module named xlwt/xlrd".
When I run my program with the command line c:\python24\python.exe abc.py, I get "ImportError: No module named abaqusConstants".
The program ran perfectly when I ran it on my system, where xlrd/xlwt was present in c:\python24\lib and Abaqus was installed on the C drive. When I tried to access xlrd/xlwt from my organisation's common share, the above problem appeared.
Is it because Abaqus is not present in the common share? How do I rectify this issue? Please tell me what command line to use.
The module abaqusConstants is only available in Abaqus kernel executions of Python, so you need to run the script with abaqus python. Make sure that your PYTHONPATH variable is set properly to include the directory where xlwt/xlrd exists. See Using matplotlib (for python 2.6) with Abaqus 6.12 for a similar issue.
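A minimal sketch of what that can look like inside abc.py, run with abaqus python abc.py (the share path below is hypothetical; point it at the directory that actually holds xlwt/xlrd):
# abc.py -- run with: abaqus python abc.py
import sys
# Hypothetical location of the common share holding xlwt/xlrd;
# adding it to sys.path has the same effect as extending PYTHONPATH.
sys.path.append(r'\\commonshare\python-libs')
import xlwt
import xlrd
from abaqusConstants import *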