NumPy is already installed, but "ImportError: No module named 'numpy'" is still raised

I tried to create a "pypy3.7" environment with Anaconda, but I ran into the following problem:
the installer says "numpy is already in the list",
yet the "ImportError" still happens.
I have tried reinstalling several times, but nothing changes.
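A diagnostic sketch (an assumption about where to look, not a confirmed fix): check which interpreter and search path the failing script is actually using, and compare them against the environment where numpy is reported as installed:
import sys

# Which interpreter is running this script, and which environment it belongs to
print(sys.executable)
print(sys.prefix)
# Directories that will be searched when importing numpy
for p in sys.path:
    print(p)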


Pylint: same pylint and pandas version on 2 machines, 1 fails

I have 2 places running the same linting job:
Machine 1: Ubuntu over SSH
pandas==1.2.3
pylint==2.7.4
Python 3.8.10
Machine 2: Gitlab CI Docker image, python:3.8.12-buster
pandas==1.2.3
pylint==2.7.4
Python 3.8.12
The Ubuntu machine is able to lint all the code fine, and it has for many months. Same for the CI job, except it had been running Python 3.7.8. Now that I upgraded the Docker image to Python 3.8.12, it throws several no-member linting errors on some Pandas objects. I've tried clearing CI caches etc.
I wish I could provide something more reproducible. But, to check my understanding of what a linter is doing, is it theoretically possible that a small version difference in Python messes up pylint like this? For something like a no-member error on Pandas objects, I would think the dominant factor is the pandas version, but those versions are equal, so I'm confused!
Update:
I've looked at the Pandas code for pd.read_sql_query, which is what's causing the no-member error. It says:
def read_sql_query(
    sql,
    con,
    index_col=None,
    coerce_float=True,
    params=None,
    parse_dates=None,
    chunksize: Optional[int] = None,
) -> Union[DataFrame, Iterator[DataFrame]]:
In Docker, I get E1101: Generator 'generator' has no 'query' member (no-member) (because I'm running .query on the returned dataframe). So it seems Pylint thinks that this function returns a generator. But it does not make this assumption in my other setup. (I've also verified the SHA sum of pandas/io/sql.py matches). This seems similar to this issue, but I am still baffled by the discrepancy in environments.
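For illustration, the kind of call that triggers it looks roughly like this (sqlite3 and the table/column names are stand-ins I made up, not our actual code):
import sqlite3
import pandas as pd

con = sqlite3.connect(":memory:")                # any DB-API connection works for read_sql_query
con.execute("CREATE TABLE t (x INTEGER)")
con.execute("INSERT INTO t VALUES (1), (-2)")
df = pd.read_sql_query("SELECT * FROM t", con)   # annotated as Union[DataFrame, Iterator[DataFrame]]
filtered = df.query("x > 0")                     # pylint may infer the Iterator branch here -> E1101 no-member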
A fix that worked was to bump a limit like:
init-hook = "import astroid; astroid.context.InferenceContext.max_inferred = 500"
in my .pylintrc file, as explained here.
I'm unsure why/if this is connected to my change in Python version, but I'm happy to use this and move on for now. It's probably complex.
(Another hack was to write a function that returns the passed arg if it is a DataFrame, and returns a single DataFrame if the passed arg is an iterable of DataFrames. The ambiguous-type object could then be passed through this wrapper to clarify things for Pylint. While this was more intrusive on our codebase, we had dozens of calls to pd.read_csv and pd.read_sql_query, and only about 3 calls confused Pylint, so we almost used this solution.)
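A minimal sketch of that kind of wrapper (the name and the concatenation of chunks are my own illustrative choices, not the exact code we wrote):
from typing import Iterator, Union
import pandas as pd

def as_dataframe(obj: Union[pd.DataFrame, Iterator[pd.DataFrame]]) -> pd.DataFrame:
    # Narrow the ambiguous return type so pylint sees a plain DataFrame.
    if isinstance(obj, pd.DataFrame):
        return obj
    return pd.concat(obj)   # an iterator of chunks; combining them is one reasonable choice

# Usage: df = as_dataframe(pd.read_sql_query(sql, con)); df.query(...) no longer trips no-member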

Using Optaplanner for VRPPD

I am trying to run the example "optaplanner-mixedvrp-experiment" developed by Geoffrey De Smet, and when I run it, it throws the following error:
Caused by: java.lang.IllegalStateException: The entity (MY) has a variable (previousStandstill) with value (MUNO) which has a sourceVariableName variable (nextVisit) with a value (WERBOMONT) which is not null. Verify the consistency of your input problem for that sourceVariableName variable.
I have not made any changes; I only cloned the project and ran it. I import the data and solve, and it throws this error.
Do you know what could be happening?
I am applying it to the development of a VRP variant with multiple deliveries and collections, but it throws the same error. I have activated FULL_ASSERT mode, and nextVisit, previousStandstill and visitIndex are always null.
It's been a long time since I looked at that code, so it's using an old version of OptaPlanner. Our goal is still to clean it up and offer an out-of-the-box example for VRPPD (and probably remove some boilerplate along the way, using the upcoming #CollectionPlanningVariable etc.). That being said, we have multiple users & customers who used that optaplanner-mixedvrp-experiment to successfully build VRPPD implementations.
Which dataset did you try?
FWIW, that IllegalStateException says that when A.previous = B, then B.next is not A. So either the dataset importer didn't build a consistent model before solve() was called (especially likely if it fails before the first CH step in FULL_ASSERT), or one of the custom moves corrupted the model.
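To make the invariant concrete, here is an illustrative sketch in Python (not OptaPlanner's actual code) of the consistency rule the exception is checking:
class Visit:
    # Minimal stand-in for the planning entity in a chained VRP model
    def __init__(self, name):
        self.name = name
        self.previous_standstill = None   # genuine planning variable
        self.next_visit = None            # shadow variable that must point back

def check_chain_consistency(visits):
    # The exception means: some visit has previous_standstill = B, but B.next_visit is not that visit.
    for visit in visits:
        prev = visit.previous_standstill
        if prev is not None and prev.next_visit is not visit:
            raise ValueError(f"{visit.name}: previous is {prev.name}, "
                             f"but {prev.name}.next_visit does not point back to {visit.name}")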

Python: If error occurs anywhere, do specific line of code

I have a script I'm trying to write to process a large amount of data. There is, of course, potential for errors. In the script I need to connect to databases. If the script encounters an error, the code never reaches the point where the connection to the database is terminated. I'd like to have something in my Python code that recognizes when an error occurs, no matter where, and at the very least closes those databases. Does something like this exist? I know I can use try/except, but that would only work if I knew exactly where the error could occur. I'm basically looking for a catch-all to close my databases in the event an error occurs in a location I didn't anticipate.
To run certain cleanup code even if there is an error, use the finally block:
try:
    pass   # do stuff that may raise an exception
except:
    pass   # runs only if an exception was raised
finally:
    pass   # always runs, whether or not an exception was raised
Reference: https://docs.python.org/3/tutorial/errors.html#defining-clean-up-actions
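Applied to the question, a minimal sketch that guarantees the database connection is closed no matter where the failure happens (sqlite3 is just a stand-in for whatever database driver you actually use):
import sqlite3

con = sqlite3.connect("example.db")   # stand-in for your real database connection
try:
    con.execute("SELECT 1")           # ... process the data; any exception still reaches finally
finally:
    con.close()                       # runs whether or not an error occurred
contextlib.closing(con) (or a with statement, where the driver supports it) gives the same guarantee with less code.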

BigQuery: An internal error occurred and the request could not be completed. Error: 7367027

I'm trying to run a simple delete statement, and the command simply times out and throws the error in the title.
If I change the WHERE clause to operate on a subset of the data, it works, and after that I was able to run the WHERE true version successfully. I can't wrap my head around why I'm getting this error.
I've tried running this from the Python library as well as from the console, with both returning the same error. I've also tried deleting the table entirely and then recreating it, which did nothing.
delete from `<table>`
where true
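For reference, the Python-library attempt would look roughly like this (the table reference stays a placeholder; this only reproduces the request, it does not fix the error):
from google.cloud import bigquery

client = bigquery.Client()
job = client.query("DELETE FROM `<table>` WHERE true")   # `<table>` is the placeholder from above
job.result()   # waits for completion; this is where the internal error / timeout surfaces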

Why are old nodes still visible even after deleting event files? [TensorFlow]

I have just started learning TensorFlow and wrote the following piece of code in a Jupyter notebook:
import tensorflow as tf

# Placeholder X, constant Y, and their matrix product
a = tf.placeholder(tf.float32, shape=[3, 3], name='X')
b = tf.constant([[5, 5, 5], [2, 3, 4], [4, 5, 6]], tf.float32, name='Y')
c = tf.matmul(a, b)
with tf.Session() as sess:
    writer = tf.summary.FileWriter('./graphs', sess.graph)
    print(sess.run(c, feed_dict={a: [[2, 3, 4], [4, 5, 6], [6, 7, 8]]}))
    writer.close()
Running TensorBoard the first time shows a single X, Y and mult node.
However, when I run my code again (Ctrl+Enter), TensorBoard now shows a duplicate of the original graph.
I tried to resolve this (i.e. remove the older, dead nodes) by:
1. Deleting the event files.
2. Deleting the whole directory containing multiple event files of the same code.
3. Running fuser 6006/tcp -k before the tensorboard command line call.
But even after that, when I ran TensorBoard, it would still show the duplicate copies.
The only solution that worked was to reset the graph using tf.reset_default_graph() at the beginning of the code or to shut the notebook down and restart it.
My questions are:
1. Why is it that, even after deleting the event files, the older, dead nodes keep showing up in TensorBoard? (And yes, I restarted TensorBoard after each try, but the duplicates were still there.)
2. What are the ways, if any, besides the two I listed above, to get rid of the dead nodes?
The nodes you described are not dead. They still exist and can be used.
When you ran your code the first time, the nodes were created and added to the graph. When you execute the same cell a second time, they are added one more time with different names.
You would get the same effect by copying the code twice in a .py file.
Your solution with tf.reset_default_graph() is the right one. Restarting the notebook works because everything in memory is cleared, the same as rerunning a .py file.
Removing the event files does not help because, even though the files are removed, the nodes that were added to the in-memory graph are still there.
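For completeness, a minimal sketch of that reset at the top of the cell (TF 1.x API, matching the code in the question):
import tensorflow as tf

tf.reset_default_graph()   # drop any nodes left over from previous runs of this cell
a = tf.placeholder(tf.float32, shape=[3, 3], name='X')
b = tf.constant([[5, 5, 5], [2, 3, 4], [4, 5, 6]], tf.float32, name='Y')
c = tf.matmul(a, b)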
I was also having the same problem of TensorBoard displaying duplicate graphs. I tried several measures, like deleting the event files and deleting the log directory which contained them, but TensorBoard would still remember the nodes and the graph from the previous run and show them as duplicate copies, one after another.
I noticed that each time I ran the code, TensorFlow would create the same nodes but with different names, which means it was still keeping the old nodes in memory. In the Python execution window it looked something like the sketch below: the original node names after the first run, and renamed copies after running the code a 2nd and 3rd time.
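As an illustration of that renaming (this is the typical TF 1.x behaviour when a name is reused, not the actual screenshots):
import tensorflow as tf

a1 = tf.placeholder(tf.float32, shape=[3, 3], name='X')
a2 = tf.placeholder(tf.float32, shape=[3, 3], name='X')
print(a1.name)   # X:0
print(a2.name)   # X_1:0 -- the first node is still in the graph, so the new one gets a suffix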
What solved the problem for me:
1) Restarting the kernel before each run.
2) As mentioned in the posts above, adding tf.reset_default_graph() at the beginning of the code.