Why can't I use Solver qpsolver anymore? - gurobi

I wrote a quadratic program and it worked very well, but after a few days it stopped working entirely.
Does anyone have any idea what the problem is?
My code is:
import time
import numpy as np
from numpy import array, dot
from qpsolvers import solve_qp
Matrix10 = np.load(r'C:\Users\skqkr\Desktop\Semesterarbeit/Chiwan_Q7.npz')
start = time.time()
P = Matrix10['Q'] # quick way to build a symmetric matrix
q = Matrix10['p']
G = Matrix10['G']
h = Matrix10['h']
x = solve_qp(P, q, G, h)
print("QP solution: x = {}".format(x))
print("time :", time.time() - start)
And the result is:
ImportError: cannot import name 'solve_qp' from 'qpsolvers' (C:\Users\skqkr\qpsolvers.py)
I don't understand why it suddenly stopped working.

I do not think the code you shared is the one you are really using, so it is not easy to tell what is going on. However, there are a few reasons for your problem to happen.
Python raises ImportError: cannot import name when the imported name is inaccessible or the modules involved are in a circular dependency. The import keyword loads a module; the from keyword selects the module from which specific names (classes or functions) are imported. If, for any reason, the name is not available on the Python path, the "ImportError: cannot import name" error is thrown.
The following are the common reasons for ImportError: cannot import name:
The imported class is not available or has not been created.
The imported class name is misspelled.
The imported class name or module name is misplaced.
The imported class is not available on the Python class path.
The imported class is not available in the Python library.
The imported class is in a circular dependency.
A Python module is just a Python file with the .py extension. The from keyword names the module to load, and the import keyword names the class or function to import from it. If the imported name is not in the referenced Python file, the Python interpreter throws ImportError: cannot import name.
If two Python files refer to each other and each attempts to load the other, that creates a circular import dependency. When the Python interpreter detects the circular dependency, it throws ImportError: cannot import name.
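In this particular case, the path in the traceback (C:\Users\skqkr\qpsolvers.py) suggests that a file of your own named qpsolvers.py is shadowing the installed qpsolvers package. A quick way to check which file Python actually resolves the name to:
import qpsolvers

# If this prints your own qpsolvers.py instead of a path inside site-packages,
# rename or delete that file so the installed package can be found again.
print(qpsolvers.__file__)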

Related

pandas-read-xml has error on 'json-normalize'

I saw there is a way to read XML files directly with pandas, so I followed the instructions and used this package. However, I keep getting errors.
https://pypi.org/project/pandas-read-xml/
import pandas as pd
import pandas_read_xml as pdx
from pandas.io.json import json_normalize
The error is generated by the last line:
ImportError: cannot import name 'json_normalize'
I am using a Python 3 kernel; can anyone tell me what is wrong with it?
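One likely cause, assuming a recent pandas version: json_normalize was deprecated in pandas.io.json as of pandas 1.0 and later removed, so it has to be imported from the top-level namespace instead:
import pandas as pd
import pandas_read_xml as pdx

# In pandas >= 1.0 the function lives at the top level:
from pandas import json_normalize  # or simply call pd.json_normalize(...)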

Relative imports from within a notebook that is not at the 'main.py' level

I have a structure like the following one:
/src
    __init__.py
    module1.py
    module2.py
/tests
    __init__.py
    test_module1.py
    test_module2.py
/notebooks
    __init__.py
    exploring.ipynb
main.py
I'd like to use the notebook 'exploring' to do some data exploration, and to do so I'd need to perform relative imports of module1 and module2. But if I try to run from ..src.module1 import funct1, I receive an ImportError: attempted relative import with no known parent package, which I understand is expected because I'm running the notebook as if it was a script and not as a module.
So as a workaround I have been mainly pulling the notebook outside its folder to the main.py level every time I need to use it, and then from src.module1 import funct1 works.
I know there are tons of threads already on relative imports, but I couldn't find a simpler solution so far of making this work without having to move the notebook every time. Is there any way to perform this relative import, given that the notebook when called is running "as a script"?
Scripts cannot do relative imports. Have you considered something like:
import os
import sys

if __name__ == "__main__":
    sys.path.insert(0, os.path.abspath(os.path.join(os.getcwd(), '..')))
    from src.module1 import funct1
else:
    from ..src.module1 import funct1
Or using exceptions:
try:
    from ..src.module1 import funct1
except ImportError:
    import os
    import sys
    sys.path.insert(0, os.path.abspath(os.path.join(os.getcwd(), '..')))
    from src.module1 import funct1
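Note that a notebook cell also runs with __name__ set to "__main__", so in exploring.ipynb it is the sys.path branch that applies. A minimal first cell, assuming the notebook stays inside /notebooks one level below the project root:
import os
import sys

# Make the project root importable, then use absolute imports from there:
sys.path.insert(0, os.path.abspath(os.path.join(os.getcwd(), '..')))

from src.module1 import funct1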

from pyramid.arima import auto_arima not working

I am doing some time series forecasting, and while at it I am trying to import auto_arima from pyramid, but it throws a ModuleNotFoundError: "No module named 'pyramid.arima'".
from pyramid.arima import auto_arima
I also tried importing auto_arima from pmdarima :
from pmdarima.arima import auto_arima
but this throws the error:
"type object 'pmdarima.arima._arima.array' has no attribute 'reduce_cython'"
What am I doing wrong?
I'm using the pmdarima package without any issues, but your error is most probably related to your numpy version. I would recommend upgrading it (in case you use pip):
pip install --upgrade numpy
You can also try importing the numpy package before importing auto_arima (some people experience strange behavior otherwise).
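A minimal sketch of that import order, assuming both packages are installed:
import numpy as np  # importing numpy before pmdarima works around the error for some users

from pmdarima.arima import auto_arima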
You can follow the discussion in the GitHub issue: https://github.com/tgsmith61591/pmdarima/issues/91. You're definitely not the first one with that issue.
If it doesn't help, please paste your pmdarima and numpy versions.

Google Colab issue importing and using different class files

I am trying to use Google Colab for my project, for which I have to upload a few Python files because I need those class files. But while executing the main function, it constantly throws the error 'module' object has no attribute. Is there some memory issue with Colab, or what? Help would be much appreciated.
import numpy as np
import time
import tensorflow as tf
import NN
import Option
import Log
import getData
import Quantize
AttributeError: 'module' object has no attribute 'NN'
I uploaded all files using following code :
from google.colab import files
src = list(files.upload().values())[0]
open('Option.py','wb').write(src)
import Option
But it always gives me an error on one or another of the files which I am importing.
The updated version of Colab (for a few weeks now) saves the uploaded files for you, without you having to call open(fname, 'wb').write(src).
So, you only have to upload your 5 files: NN.py, Option.py, Log.py, getData.py, and Quantize.py (and probably other dependencies + data), then try importing each one, e.g. import NN, to see if there is any error.
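A minimal sketch of the updated flow, assuming you select all five files in the upload dialog:
from google.colab import files

# The upload dialog now writes each selected file into the current working directory.
files.upload()

# Import the modules one by one to pinpoint which file still raises an error:
import NN
import Option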

How to Reload a Python3 C extension module?

I wrote a C extension (mycext.c) for Python 3.2. The extension relies on constant data stored in a C header (myconst.h). The header file is generated by a Python script. In the same script, I make use of the freshly compiled module. The workflow in the Python 3 script (not shown completely) is as follows:
configure_C_header_constants()
write_constants_to_C_header() # write myconst.h
os.system('python3 setup.py install --user') # compile mycext
import mycext
mycext.do_stuff()
This works perfectly fine the first time in a Python session. If I repeat the procedure in the same session (for example, in two different test cases of a unittest), the first compiled version of mycext is always (re)loaded.
How do I effectively reload an extension module with the latest compiled version?
You can reload modules in Python 3.x by using the imp.reload() function (since Python 3.4, importlib.reload()). (This function used to be a built-in in Python 2.x. Be sure to read the documentation -- there are a few caveats!)
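A minimal sketch, with the caveat (explained in the next answer) that a C extension's shared library is not actually re-initialized:
import importlib
import mycext

# Re-executes the module and rebinds the name; for pure-Python modules this
# picks up new code, but a C extension's shared library stays loaded.
mycext = importlib.reload(mycext)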
Python's import mechanism will never dlclose() a shared library. Once loaded, the library will stay until the process terminates.
Your options (sorted by decreasing usefulness):
Move the module import to a subprocess, and call the subprocess again after recompiling (see the sketch after this list), i.e. you have a Python script do_stuff.py that simply does
import mycext
mycext.do_stuff()
and you call this script using
subprocess.call([sys.executable, "do_stuff.py"])
Turn the compile-time constants in your header into variables that can be changed from Python, eliminating the need to reload the module.
Manually dlclose() the library after deleting all references to the module (a bit fragile since you don't hold all the references yourself).
Roll your own import mechanism.
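A minimal, self-contained sketch of the first option, assuming setup.py and do_stuff.py sit next to the calling script:
import os
import subprocess
import sys

# Rebuild the extension, as in the question ...
os.system('python3 setup.py install --user')

# ... then run do_stuff.py in a fresh interpreter, which imports the new build:
subprocess.call([sys.executable, "do_stuff.py"])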
Here is an example of how the manual dlclose() route can be done. I wrote a minimal Python C extension mini.so, only exporting an integer called version.
>>> import ctypes
>>> libdl = ctypes.CDLL("libdl.so")
>>> libdl.dlclose.argtypes = [ctypes.c_void_p]
>>> so = ctypes.PyDLL("./mini.so")
>>> so.PyInit_mini.argtypes = []
>>> so.PyInit_mini.restype = ctypes.py_object
>>> mini = so.PyInit_mini()
>>> mini.version
1
>>> del mini
>>> libdl.dlclose(so._handle)
0
>>> del so
At this point, I incremented the version number in mini.c and recompiled.
>>> so = ctypes.PyDLL("./mini.so")
>>> so.PyInit_mini.argtypes = []
>>> so.PyInit_mini.restype = ctypes.py_object
>>> mini = so.PyInit_mini()
>>> mini.version
2
You can see that the new version of the module is used.
For reference and experimenting, here's mini.c:
#include <Python.h>

static struct PyModuleDef minimodule = {
    PyModuleDef_HEAD_INIT, "mini", NULL, -1, NULL
};

PyMODINIT_FUNC
PyInit_mini()
{
    PyObject *m = PyModule_Create(&minimodule);
    PyModule_AddObject(m, "version", PyLong_FromLong(1));
    return m;
}
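On Linux this file can be compiled with something like cc -shared -fPIC $(python3-config --includes) mini.c -o mini.so (the exact flags are an assumption and depend on your toolchain).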
There is another way: build the extension under a new module name each time, import that, and change your references to point to it.
Update: I have now created a Python library around this approach:
https://github.com/bergkvist/creload
https://pypi.org/project/creload/
Rather than using the subprocess module in Python, you can use multiprocessing. This allows the child process to inherit all of the memory from the parent (on UNIX systems).
For this reason, you also need to be careful not to import the C extension module into the parent.
If you return a value that depends on the C extension, it might also force the C extension to be imported in the parent as it receives the return value of the function.
import multiprocessing as mp
import sys

def subprocess_call(fn, *args, **kwargs):
    """Executes a function in a forked subprocess"""
    ctx = mp.get_context('fork')
    q = ctx.Queue(1)
    is_error = ctx.Value('b', False)

    def target():
        try:
            q.put(fn(*args, **kwargs))
        except BaseException as e:
            is_error.value = True
            q.put(e)

    ctx.Process(target=target).start()
    result = q.get()
    if is_error.value:
        raise result
    return result

def my_c_extension_add(x, y):
    assert 'my_c_extension' not in sys.modules.keys()
    # ^ Sanity check, to make sure you didn't import it in the parent process
    import my_c_extension
    return my_c_extension.add(x, y)

print(subprocess_call(my_c_extension_add, 3, 4))
If you want to extract this into a decorator for a more natural feel, you can do:
class subprocess:
    """Decorate a function to hint that it should be run in a forked subprocess"""
    def __init__(self, fn):
        self.fn = fn

    def __call__(self, *args, **kwargs):
        return subprocess_call(self.fn, *args, **kwargs)

@subprocess
def my_c_extension_add(x, y):
    assert 'my_c_extension' not in sys.modules.keys()
    # ^ Sanity check, to make sure you didn't import it in the parent process
    import my_c_extension
    return my_c_extension.add(x, y)

print(my_c_extension_add(3, 4))
This can be useful if you are working in a Jupyter notebook, and you want to rerun some function without rerunning all your existing cells.
Notes
This answer might only be relevant on Linux/macOS where you have a fork() system call:
Python multiprocessing linux windows difference
https://rhodesmill.org/brandon/2010/python-multiprocessing-linux-windows/