Problem with Google Colab, requests, and JSONDecodeError: extracting only the result of a code cell from another Colab - pandas

I am trying to extract only the result of a code cell from one Google Colab notebook and show it in another Colab notebook, so I can work with just that result; I need to combine the results of two Colabs in a third one.
This is what I tried:
import pandas as pd
import requests
enlace_compartido = "https://colab.research.google.com/drive/...."
r = requests.get(enlace_compartido)
diccionario_resultado = r.json('utf-8')
diccionario_resultado
The error is:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-30-acf2d9c7ebc8> in <module>
3 enlace_compartido = "https://colab.research.google.com/drive/..."
4 r = requests.get(enlace_compartido)
----> 5 diccionario_resultado = r.json('utf-8')
6 diccionario_resultado
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
An example of the result (code output) that I want to extract from the Colab:
Tarifa       Precio  Cantidad  Importe $
Vecina       155     87        13485
Misma Zona   130     72        9360
Alejada      229     17        3893
Grande       250     1         250
The total to pay is $ 32925.36
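requests.get on a Colab share link returns the HTML of the notebook page, not the JSON output of a cell, which is why r.json() fails with a JSONDecodeError. One way to pass only a result between notebooks is to write it to a file on Google Drive in the first notebook and read it back in the second. A minimal sketch, assuming both notebooks can mount the same Drive; the file name resultado.json and the stored value are hypothetical:
# Notebook 1: save only the result to Google Drive
from google.colab import drive
import json
drive.mount('/content/drive')
resultado = {'total_a_pagar': 32925.36}  # the value you want to share
with open('/content/drive/MyDrive/resultado.json', 'w') as f:
    json.dump(resultado, f)

# Notebook 2 (or the third one): read the result back and work with it
import json
with open('/content/drive/MyDrive/resultado.json') as f:
    diccionario_resultado = json.load(f)
print(diccionario_resultado['total_a_pagar'])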

Related

Pandasql returns error with a basic example

The following code, when run,
import pandas as pd
from pandasql import sqldf
df = pd.DataFrame({'col1': [1, 2, 3, 4], 'col2': [10, 20, 30, 40]})
query = "SELECT * FROM df WHERE col1 > 2"
result = sqldf(query, globals())
print(result)
gives the following error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
File ~/.virtualenvs/r-reticulate/lib64/python3.11/site-packages/sqlalchemy/engine/base.py:1410, in Connection.execute(self, statement, parameters, execution_options)
1409 try:
-> 1410 meth = statement._execute_on_connection
1411 except AttributeError as err:
AttributeError: 'str' object has no attribute '_execute_on_connection'
The above exception was the direct cause of the following exception:
ObjectNotExecutableError Traceback (most recent call last)
Cell In[1], line 11
8 query = "SELECT * FROM df WHERE col1 > 2"
10 # Execute the query using pandasql
---> 11 result = sqldf(query, globals())
13 print(result)
File ~/.virtualenvs/r-reticulate/lib64/python3.11/site-packages/pandasql/sqldf.py:156, in sqldf(query, env, db_uri)
124 def sqldf(query, env=None, db_uri='sqlite:///:memory:'):
125 """
126 Query pandas data frames using sql syntax
127 This function is meant for backward compatibility only. New users are encouraged to use the PandaSQL class.
(...)
154 >>> sqldf("select avg(x) from df;", locals())
...
1416 distilled_parameters,
1417 execution_options or NO_OPTIONS,
1418 )
ObjectNotExecutableError: Not an executable object: 'SELECT * FROM df WHERE col1 > 2'
Could someone please help me?
The problem could be fixed by downgrading SQLAlchemy:
pip install SQLAlchemy==1.4.46
See the bug report for more details.
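If downgrading SQLAlchemy is not an option, the same filter can also be written in plain pandas without pandasql; a minimal sketch of that alternative:
import pandas as pd

df = pd.DataFrame({'col1': [1, 2, 3, 4], 'col2': [10, 20, 30, 40]})
# Equivalent of "SELECT * FROM df WHERE col1 > 2"
result = df.query('col1 > 2')   # or: df[df['col1'] > 2]
print(result)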

Numpy: negative dimensions are not allowed

Using GeFolki for the coregistration of different satellite datasets, I receive the following ValueError while trying to manipulate the data.
Could you explain what I am doing wrong? Please help me.
import numpy as np
from skimage.transform import resize

nx = int(round(dimx / fdecimation))
ny = int(round(dimy / fdecimation))
Mg = resize(Master, (nx, ny), 1, 'constant')
nsx = int(round(dimxn / fdecimation))
nsy = int(round(dimyn / fdecimation))
Sg = resize(Slave, (nsx, nsy), 1, 'constant')

# Rank computation and criterion on images after decimation
from rank import rank_sup as rank_filter_sup
from rank import rank_inf as rank_filter_inf
Mg_rank = rank_filter_inf(Mg, rank)  # rank sup: high value pixels have low rank
Sg_rank = rank_filter_inf(Sg, rank)

R = np.zeros((nx - nsx - 1, ny - nsy - 1))
indices = np.nonzero(Sg_rank)
test2 = Sg_rank[indices]
for k in range(0, nx - nsx - 1):
    for p in range(0, ny - nsy - 1):
        test1 = Mg_rank[k:k + nsx, p:p + nsy]
        test1 = test1[indices]
        test = (test1 - test2) ** 2
        R[k, p] = test.mean()
ValueError Traceback (most recent call last)
<ipython-input-24-bebe7595123d> in <module>
17 Sg_rank = rank_filter_inf(Sg, rank)
18
---> 19 R=np.zeros((nx-nsx-1,ny-nsy-1));
20 indices=np.nonzero(Sg_rank);
21 test2=Sg_rank[indices];
ValueError: negative dimensions are not allowed
Apparently one of nx-nsx-1 and ny-nsy-1 is negative, and you cannot create an array of zeros with a negative number of rows or columns. I suggest printing out those values and seeing where they become negative so you can fix it.
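A minimal sketch of that diagnostic, assuming nx, nsx, ny and nsy are the decimated sizes computed above (the error message text is my own):
import numpy as np

rows, cols = nx - nsx - 1, ny - nsy - 1
print('R would have shape:', (rows, cols))
if rows <= 0 or cols <= 0:
    # The Master image must be strictly larger than the Slave image after
    # decimation, otherwise there is no search window to scan.
    raise ValueError('invalid window: nx=%d nsx=%d ny=%d nsy=%d' % (nx, nsx, ny, nsy))
R = np.zeros((rows, cols))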

Problems using pandas read sql with a connection using cx_Oracle 6.0b2

When using cx_Oracle 5.3 I did not have this issue, but for a particularly large query that I am trying to run using:
connection = cx_Oracle.connect('Username/Password@host/dbname')
pd.read_sql(Query,connection)
I get the following value error:
ValueError Traceback (most recent call last)
<ipython-input-22-916f315e0bf6> in <module>()
----> 1 OracleEx = pd.read_sql(x,connection)
2 OracleEx.head()
C:\Users\kevinb\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\io\sql.py in read_sql(sql, con, index_col, coerce_float, params, parse_dates, columns, chunksize)
497 sql, index_col=index_col, params=params,
498 coerce_float=coerce_float, parse_dates=parse_dates,
--> 499 chunksize=chunksize)
500
501 try:
C:\Users\kevinb\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\io\sql.py in read_query(self, sql, index_col, coerce_float, params, parse_dates, chunksize)
1606 parse_dates=parse_dates)
1607 else:
-> 1608 data = self._fetchall_as_list(cursor)
1609 cursor.close()
1610
C:\Users\kevinb\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\io\sql.py in _fetchall_as_list(self, cur)
1615
1616 def _fetchall_as_list(self, cur):
-> 1617 result = cur.fetchall()
1618 if not isinstance(result, list):
1619 result = list(result)
ValueError: invalid literal for int() with base 10: '8.9'
Setting up my own cursor and using cur.fetchall() I get a similar result:
ValueError Traceback (most recent call last)
<ipython-input-46-d32c0f219cdf> in <module>()
----> 1 y=x.fetchall()
2 pd.DataFrame(y)
ValueError: invalid literal for int() with base 10: '7.3'
The values '8.9' and '7.3' change with every run.
Any ideas on why I am getting these value errors?
pd.read_sql and cur.fetchall() have worked for some queries, but not for this particular one, which did work in previous versions of cx_Oracle.
Please try with the release candidate instead of beta 2. There was an issue when retrieving certain numeric expressions.
python -m pip install cx_Oracle --upgrade --pre
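A quick way to confirm that the upgraded driver is the one the notebook actually imports, and to re-run the query against it (the connection string and query below are placeholders, not from the original post):
import cx_Oracle
import pandas as pd

print(cx_Oracle.version)   # should report the 6.0 release candidate or later

connection = cx_Oracle.connect('Username/Password@host/dbname')  # your own credentials
df = pd.read_sql('SELECT * FROM dual', connection)               # substitute the real query
print(df.head())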

Building MultiGraph from pandas dataframe - "TypeError: unhashable type: 'dict'"

I am experiencing the same issue as described here:
Networkx Multigraph from_pandas_dataframe
Although I replaced line 211 in convert_matrix.py, the "TypeError: unhashable type: 'dict'" error still occurs. I want to build a MultiGraph using the following dataframe (links):
   1_id   f      v  v_id_1  v_id_2
0  3483  50  38000     739    2232
1  3482  50  38000     717    2196
2  3482  50  22000     717    2196
3  3480  50  22000    1058    2250
import pandas as pd
import networkx as nx

data = {'1_id': [3483, 3482, 3482, 3480], 'v_id_1': [739, 717, 717, 1058], 'v_id_2': [2232, 2196, 2196, 2250], 'v': [38000, 38000, 22000, 22000], 'f': [50, 50, 50, 50]}
links = pd.DataFrame(data)
G = nx.from_pandas_dataframe(links, 'v_id_1', 'v_id_2', edge_attr=['v', 'f'], create_using=nx.MultiGraph())
Trying to create the MultiGraph, I'm getting the error:
TypeError Traceback (most recent call last)
<ipython-input-49-d2c7b8312ea7> in <module>()
----> 1 MG= nx.from_pandas_dataframe(df, 'gene1', 'gene2', ['conf','type'], create_using=nx.MultiGraph())
/usr/lib/python2.7/site-packages/networkx-1.10-py2.7.egg/networkx/convert_matrix.pyc in from_pandas_dataframe(df, source, target, edge_attr, create_using)
209 # Iteration on values returns the rows as Numpy arrays
210 for row in df.values:
--> 211 g.add_edge(row[src_i], row[tar_i], {i:row[j] for i, j in edge_i})
212
213 # If no column names are given, then just return the edges.
/usr/lib/python2.7/site-packages/networkx-1.10-py2.7.egg/networkx/classes/multigraph.pyc in add_edge(self, u, v, key, attr_dict, **attr)
340 datadict.update(attr_dict)
341 keydict = self.edge_key_dict_factory()
--> 342 keydict[key] = datadict
343 self.adj[u][v] = keydict
344 self.adj[v][u] = keydict
TypeError: unhashable type: 'dict'
After posting this issue on GitHub (see the linked issue), I got a good answer which, at least in my case, seems to work. I had installed networkx 1.11 instead of version 2.0.dev_20161206165920. Try installing the development version of NetworkX from GitHub.
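For reference, in NetworkX 2.x from_pandas_dataframe was replaced by from_pandas_edgelist; a minimal sketch of the same construction with the newer API, using the dataframe from the question:
import pandas as pd
import networkx as nx

data = {'1_id': [3483, 3482, 3482, 3480], 'v_id_1': [739, 717, 717, 1058],
        'v_id_2': [2232, 2196, 2196, 2250], 'v': [38000, 38000, 22000, 22000],
        'f': [50, 50, 50, 50]}
links = pd.DataFrame(data)

# NetworkX >= 2.0: the parallel edge between 717 and 2196 is kept in the MultiGraph
G = nx.from_pandas_edgelist(links, 'v_id_1', 'v_id_2', edge_attr=['v', 'f'],
                            create_using=nx.MultiGraph())
print(G.number_of_edges())   # 4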

matplotlib - ImportError: No module named _tkinter

I have a simple notebook with the following code:
%matplotlib inline
However, when running it I get the following error:
ImportError: No module named _tkinter
I have another notebook in the same project, and that one is able to run the statement without issue.
Data Science Experience (DSX) is a managed service, so you don't have root access to install _tkinter.
Full stacktrace:
ImportErrorTraceback (most recent call last)
<ipython-input-43-5f9c00ae8c2d> in <module>()
----> 1 get_ipython().magic(u'matplotlib inline')
2
3 import matplotlib.pyplot as plt
4 #import numpy as np
5
/usr/local/src/bluemix_jupyter_bundle.v20/notebook/lib/python2.7/site-packages/IPython/core/interactiveshell.pyc in magic(self, arg_s)
2161 magic_name, _, magic_arg_s = arg_s.partition(' ')
2162 magic_name = magic_name.lstrip(prefilter.ESC_MAGIC)
-> 2163 return self.run_line_magic(magic_name, magic_arg_s)
2164
2165 #-------------------------------------------------------------------------
/usr/local/src/bluemix_jupyter_bundle.v20/notebook/lib/python2.7/site-packages/IPython/core/interactiveshell.pyc in run_line_magic(self, magic_name, line)
2082 kwargs['local_ns'] = sys._getframe(stack_depth).f_locals
2083 with self.builtin_trap:
-> 2084 result = fn(*args,**kwargs)
2085 return result
2086
<decorator-gen-106> in matplotlib(self, line)
/usr/local/src/bluemix_jupyter_bundle.v20/notebook/lib/python2.7/site-packages/IPython/core/magic.pyc in <lambda>(f, *a, **k)
191 # but it's overkill for just that one bit of state.
192 def magic_deco(arg):
--> 193 call = lambda f, *a, **k: f(*a, **k)
194
195 if callable(arg):
/usr/local/src/bluemix_jupyter_bundle.v20/notebook/lib/python2.7/site-packages/IPython/core/magics/pylab.pyc in matplotlib(self, line)
98 print("Available matplotlib backends: %s" % backends_list)
99 else:
--> 100 gui, backend = self.shell.enable_matplotlib(args.gui)
101 self._show_matplotlib_backend(args.gui, backend)
102
/usr/local/src/bluemix_jupyter_bundle.v20/notebook/lib/python2.7/site-packages/IPython/core/interactiveshell.pyc in enable_matplotlib(self, gui)
2949 gui, backend = pt.find_gui_and_backend(self.pylab_gui_select)
2950
-> 2951 pt.activate_matplotlib(backend)
2952 pt.configure_inline_support(self, backend)
2953
/usr/local/src/bluemix_jupyter_bundle.v20/notebook/lib/python2.7/site-packages/IPython/core/pylabtools.pyc in activate_matplotlib(backend)
293 matplotlib.rcParams['backend'] = backend
294
--> 295 import matplotlib.pyplot
296 matplotlib.pyplot.switch_backend(backend)
297
/gpfs/fs01/user/sdd1-7e9fd7607be53e-39ca506ba762/.local/lib/python2.7/site-packages/matplotlib/pyplot.py in <module>()
112
113 from matplotlib.backends import pylab_setup
--> 114 _backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()
115
116 _IP_REGISTERED = None
/gpfs/fs01/user/sdd1-7e9fd7607be53e-39ca506ba762/.local/lib/python2.7/site-packages/matplotlib/backends/__init__.pyc in pylab_setup()
30 # imports. 0 means only perform absolute imports.
31 backend_mod = __import__(backend_name,
---> 32 globals(),locals(),[backend_name],0)
33
34 # Things we pull in from all backends
/gpfs/fs01/user/sdd1-7e9fd7607be53e-39ca506ba762/.local/lib/python2.7/site-packages/matplotlib/backends/backend_tkagg.py in <module>()
4
5 from matplotlib.externals import six
----> 6 from matplotlib.externals.six.moves import tkinter as Tk
7 from matplotlib.externals.six.moves import tkinter_filedialog as FileDialog
8
/gpfs/fs01/user/sdd1-7e9fd7607be53e-39ca506ba762/.local/lib/python2.7/site-packages/matplotlib/externals/six.pyc in load_module(self, fullname)
197 mod = self.__get_module(fullname)
198 if isinstance(mod, MovedModule):
--> 199 mod = mod._resolve()
200 else:
201 mod.__loader__ = self
/gpfs/fs01/user/sdd1-7e9fd7607be53e-39ca506ba762/.local/lib/python2.7/site-packages/matplotlib/externals/six.pyc in _resolve(self)
111
112 def _resolve(self):
--> 113 return _import_module(self.mod)
114
115 def __getattr__(self, attr):
/gpfs/fs01/user/sdd1-7e9fd7607be53e-39ca506ba762/.local/lib/python2.7/site-packages/matplotlib/externals/six.pyc in _import_module(name)
78 def _import_module(name):
79 """Import module, returning the module after the last dot."""
---> 80 __import__(name)
81 return sys.modules[name]
82
/usr/local/src/bluemix_jupyter_bundle.v20/notebook/lib/python2.7/lib-tk/Tkinter.py in <module>()
37 # Attempt to configure Tcl/Tk without requiring PATH
38 import FixTk
---> 39 import _tkinter # If this fails your Python may not be configured for Tk
40 tkinter = _tkinter # b/w compat for export
41 TclError = _tkinter.TclError
ImportError: No module named _tkinter
So the fix was quite simple: I just had to restart the kernel using the Kernel menu item in the notebook.
I had experienced the same problem when running IPython locally on my laptop, and the solution there was to install tkinter, so I wasn't expecting the answer to be as simple as restarting the kernel.
Another time I received this error message and restarting the kernel did not work. I had to:
change the Spark backend
download the notebook to a file
delete the notebook in DSX
create a new notebook from the downloaded file
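Not part of the original answers, but if neither step is possible, a common workaround is to select a matplotlib backend that does not depend on Tk before pyplot is imported; a minimal sketch:
import matplotlib
matplotlib.use('Agg')            # non-interactive backend, no _tkinter required

import matplotlib.pyplot as plt

plt.plot([1, 2, 3], [4, 5, 6])
plt.savefig('figure.png')        # render to a file instead of a Tk window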