pandas.io.ga not working for me - pandas

So I have worked through the Hello Analytics tutorial to confirm that OAuth2 is working as expected for me, but I'm not having any luck with the pandas.io.ga module. In particular, I am stuck with this error:
In [1]: from pandas.io import ga
In [2]: df = ga.read_ga("pageviews", "pagePath", "2014-07-08")
/usr/local/lib/python2.7/dist-packages/pandas/core/index.py:1162: FutureWarning: using '-' to provide set differences
with Indexes is deprecated, use .difference()
"use .difference()",FutureWarning)
/usr/local/lib/python2.7/dist-packages/pandas/core/index.py:1147: FutureWarning: using '+' to provide set union with
Indexes is deprecated, use '|' or .union()
"use '|' or .union()",FutureWarning)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-2-b5343faf9ae6> in <module>()
----> 1 df = ga.read_ga("pageviews", "pagePath", "2014-07-08")
/usr/local/lib/python2.7/dist-packages/pandas/io/ga.pyc in read_ga(metrics, dimensions, start_date, **kwargs)
105 reader = GAnalytics(**reader_kwds)
106 return reader.get_data(metrics=metrics, start_date=start_date,
--> 107 dimensions=dimensions, **kwargs)
108
109
/usr/local/lib/python2.7/dist-packages/pandas/io/ga.pyc in get_data(self, metrics, start_date, end_date, dimensions,
segment, filters, start_index, max_results, index_col, parse_dates, keep_date_col, date_parser, na_values, converters,
sort, dayfirst, account_name, account_id, property_name, property_id, profile_name, profile_id, chunksize)
293
294 if chunksize is None:
--> 295 return _read(start_index, max_results)
296
297 def iterator():
/usr/local/lib/python2.7/dist-packages/pandas/io/ga.pyc in _read(start, result_size)
287 dayfirst=dayfirst,
288 na_values=na_values,
--> 289 converters=converters, sort=sort)
290 except HttpError as inst:
291 raise ValueError('Google API error %s: %s' % (inst.resp.status,
/usr/local/lib/python2.7/dist-packages/pandas/io/ga.pyc in _parse_data(self, rows, col_info, index_col, parse_dates,
keep_date_col, date_parser, dayfirst, na_values, converters, sort)
313 keep_date_col=keep_date_col,
314 converters=converters,
--> 315 header=None, names=col_names))
316
317 if isinstance(sort, bool) and sort:
/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.pyc in _read(filepath_or_buffer, kwds)
237
238 # Create the parser.
--> 239 parser = TextFileReader(filepath_or_buffer, **kwds)
240
241 if (nrows is not None) and (chunksize is not None):
/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.pyc in __init__(self, f, engine, **kwds)
551 self.options['has_index_names'] = kwds['has_index_names']
552
--> 553 self._make_engine(self.engine)
554
555 def _get_options_with_defaults(self, engine):
/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.pyc in _make_engine(self, engine)
694 elif engine == 'python-fwf':
695 klass = FixedWidthFieldParser
--> 696 self._engine = klass(self.f, **self.options)
697
698 def _failover_to_python(self):
/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.pyc in __init__(self, f, **kwds)
1412 if not self._has_complex_date_col:
1413 (index_names,
-> 1414 self.orig_names, self.columns) = self._get_index_name(self.columns)
1415 self._name_processed = True
1416 if self.index_names is None:
/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.pyc in _get_index_name(self, columns)
1886 # Case 2
1887 (index_name, columns_,
-> 1888 self.index_col) = _clean_index_names(columns, self.index_col)
1889
1890 return index_name, orig_names, columns
/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.pyc in _clean_index_names(columns, index_col)
2171 break
2172 else:
-> 2173 name = cp_cols[c]
2174 columns.remove(name)
2175 index_names.append(name)
TypeError: list indices must be integers, not Index
OAuth2 is working as expected, and I have only used these parameters as demo variables--the query itself is junk. Basically, I cannot figure out where the error is coming from and would appreciate any pointers.
Thanks!
SOLUTION (SORT OF)
Not sure if this has to do with the data I'm trying to access or what, but the offending Index type error arises because the index_col variable in pandas.io.ga.GDataReader.get_data() is of type pandas.core.index.Index. This is fed to pandas.io.parsers._read() in _parse_data(), which falls over. I don't understand this, but it is the breaking point for me.
As a fix--if anyone else is having this problem--I have edited line 270 of ga.py to:
index_col = _clean_index(list(dimensions), parse_dates).tolist()
and everything is now smooth as butter, but I suspect this may break things in other situations...
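To illustrate the mismatch outside of ga.py, here is a minimal standalone sketch (plain pandas; the column names are made up) of why a pandas Index cannot be used to index a Python list, while its .tolist() form gives plain labels the parser can work with:
import pandas as pd

cols = ['pagePath', 'pageviews']
index_col = pd.Index(['pagePath'])  # the type that get_data() ends up passing down

# cols[index_col]           # TypeError: list indices must be integers, not Index
print(index_col.tolist())   # ['pagePath'] -- a plain list of labels instead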

Unfortunately, this module isn't really documented and the errors aren't always meaningful. Include your account_name, property_name and profile_name (profile_name is the View in the online version). Then include some dimensions and metrics you are interested in. Also make sure that the client_secrets.json is in the pandas.io directory. An example:
ga.read_ga(account_name=account_name,
           property_name=property_name,
           profile_name=profile_name,
           dimensions=['date', 'hour', 'minute'],
           metrics=['pageviews'],
           start_date=start_date,
           end_date=end_date,
           index_col=0,
           parse_dates={'datetime': ['date', 'hour', 'minute']},
           date_parser=lambda x: datetime.strptime(x, '%Y%m%d %H %M'),
           max_results=max_results)
Also have a look at my recent step by step blog post about GA with pandas.

Related

Iterating Rows in DataFrame and Applying difflib.ratio()

Context of Problem
I am working on a project where I would like to compare two columns from a dataframe to determine what percent of the strings are similar to each other. Specifically, I'm comparing whether bullets scraped from retailer websites match the bullets that I expect to see on those sites for a given product.
I know that I can simply use boolean logic to determine whether column['X'] == column['Y']. But I'd like to take it a level further and determine what percentage of X matches Y. I did some research and found that difflib.ratio() can accomplish what I want.
Example of difflib.ratio()
from difflib import SequenceMatcher

a = 'preview'
b = 'previeu'
SequenceMatcher(a=a, b=b).ratio()
My Use Case
Where I'm having trouble is applying this logic to iterate through a DataFrame. This is what my DataFrame looks like:
[screenshot of the DataFrame omitted]
The DataFrame has 5 "Bullets" and 5 "SEO Bullets". So I tried using a for loop to apply a lambda function to my DataFrame called test.
for x in range(1,6):
    test[f'Bullet {x} Ratio'] = test.apply(lambda row: SequenceMatcher(a=row[f'SeoBullet_{x}'], b=row[f'Bullet {x}']).ratio())
But I received the following error:
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-409-39a6ba3c8879> in <module>
1 for x in range(1,6):
----> 2 test[f'Bullet {x} Ratio'] = test.apply(lambda row: SequenceMatcher(a=row[f'SeoBullet_{x}'], b=row[f'Bullet {x}']).ratio())
~\AppData\Local\Programs\PythonCodingPack\lib\site-packages\pandas\core\frame.py in apply(self, func, axis, raw, result_type, args, **kwds)
7539 kwds=kwds,
7540 )
-> 7541 return op.get_result()
7542
7543 def applymap(self, func) -> "DataFrame":
~\AppData\Local\Programs\PythonCodingPack\lib\site-packages\pandas\core\apply.py in get_result(self)
178 return self.apply_raw()
179
--> 180 return self.apply_standard()
181
182 def apply_empty_result(self):
~\AppData\Local\Programs\PythonCodingPack\lib\site-packages\pandas\core\apply.py in apply_standard(self)
253
254 def apply_standard(self):
--> 255 results, res_index = self.apply_series_generator()
256
257 # wrap results
~\AppData\Local\Programs\PythonCodingPack\lib\site-packages\pandas\core\apply.py in apply_series_generator(self)
282 for i, v in enumerate(series_gen):
283 # ignore SettingWithCopy here in case the user mutates
--> 284 results[i] = self.f(v)
285 if isinstance(results[i], ABCSeries):
286 # If we have a view on v, we need to make a copy because
<ipython-input-409-39a6ba3c8879> in <lambda>(row)
1 for x in range(1,6):
----> 2 test[f'Bullet {x} Ratio'] = test.apply(lambda row: SequenceMatcher(a=row[f'SeoBullet_{x}'], b=row[f'Bullet {x}']).ratio())
~\AppData\Local\Programs\PythonCodingPack\lib\site-packages\pandas\core\series.py in __getitem__(self, key)
880
881 elif key_is_scalar:
--> 882 return self._get_value(key)
883
884 if (
~\AppData\Local\Programs\PythonCodingPack\lib\site-packages\pandas\core\series.py in _get_value(self, label, takeable)
989
990 # Similar to Index.get_value, but we do not fall back to positional
--> 991 loc = self.index.get_loc(label)
992 return self.index._get_values_for_loc(self, loc, label)
993
~\AppData\Local\Programs\PythonCodingPack\lib\site-packages\pandas\core\indexes\range.py in get_loc(self, key, method, tolerance)
352 except ValueError as err:
353 raise KeyError(key) from err
--> 354 raise KeyError(key)
355 return super().get_loc(key, method=method, tolerance=tolerance)
356
KeyError: 'SeoBullet_1'
Desired Output
Ideally, the final output would be a dataframe that has 5 additional columns with the ratios for each Bullet comparison.
I'm still new-ish to Python, so I could just be naïve and missing something very obvious. I say this to note that if there is another route I could take to accomplish the same thing (or something very similar), I am open to those suggestions.
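For what it's worth, a KeyError like this usually comes from DataFrame.apply defaulting to axis=0, so the lambda receives whole columns rather than rows and row[f'SeoBullet_{x}'] has nothing to look up. A minimal row-wise sketch (with made-up data standing in for the real columns) would be:
from difflib import SequenceMatcher
import pandas as pd

# Hypothetical stand-in for the real DataFrame
test = pd.DataFrame({
    'SeoBullet_1': ['preview', 'soft cotton'],
    'Bullet 1': ['previeu', 'soft coton'],
})

for x in range(1, 2):
    test[f'Bullet {x} Ratio'] = test.apply(
        lambda row: SequenceMatcher(a=row[f'SeoBullet_{x}'], b=row[f'Bullet {x}']).ratio(),
        axis=1,  # pass rows, not columns, so the labels can be looked up per row
    )
print(test)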

DATAFRAME TO BIGQUERY - Error: FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmp1yeitxcu_job_4b7daa39.parquet'

I am uploading a dataframe to a bigquery table.
df.to_gbq('Deduplic.DailyReport', project_id=BQ_PROJECT_ID, credentials=credentials, if_exists='append')
And I get the following error:
OSError Traceback (most recent call last)
~/.local/lib/python3.8/site-packages/google/cloud/bigquery/client.py in load_table_from_dataframe(self, dataframe, destination, num_retries, job_id, job_id_prefix, location, project, job_config, parquet_compression, timeout)
2624
-> 2625 _pandas_helpers.dataframe_to_parquet(
2626 dataframe,
~/.local/lib/python3.8/site-packages/google/cloud/bigquery/_pandas_helpers.py in dataframe_to_parquet(dataframe, bq_schema, filepath, parquet_compression, parquet_use_compliant_nested_type)
672 arrow_table = dataframe_to_arrow(dataframe, bq_schema)
--> 673 pyarrow.parquet.write_table(
674 arrow_table,
~/.local/lib/python3.8/site-packages/pyarrow/parquet.py in write_table(table, where, row_group_size, version, use_dictionary, compression, write_statistics, use_deprecated_int96_timestamps, coerce_timestamps, allow_truncated_timestamps, data_page_size, flavor, filesystem, compression_level, use_byte_stream_split, column_encoding, data_page_version, use_compliant_nested_type, **kwargs)
2091 **kwargs) as writer:
-> 2092 writer.write_table(table, row_group_size=row_group_size)
2093 except Exception:
~/.local/lib/python3.8/site-packages/pyarrow/parquet.py in write_table(self, table, row_group_size)
753
--> 754 self.writer.write_table(table, row_group_size=row_group_size)
755
~/.local/lib/python3.8/site-packages/pyarrow/_parquet.pyx in pyarrow._parquet.ParquetWriter.write_table()
~/.local/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
OSError: [Errno 28] Error writing bytes to file. Detail: [errno 28] No space left on device
During handling of the above exception, another exception occurred:
FileNotFoundError Traceback (most recent call last)
<ipython-input-8-f7137c1f7ee8> in <module>
62 )
63
---> 64 df.to_gbq('Deduplic.DailyReport', project_id=BQ_PROJECT_ID, credentials=credentials, if_exists='append')
~/.local/lib/python3.8/site-packages/pandas/core/frame.py in to_gbq(self, destination_table, project_id, chunksize, reauth, if_exists, auth_local_webserver, table_schema, location, progress_bar, credentials)
2052 from pandas.io import gbq
2053
-> 2054 gbq.to_gbq(
2055 self,
2056 destination_table,
~/.local/lib/python3.8/site-packages/pandas/io/gbq.py in to_gbq(dataframe, destination_table, project_id, chunksize, reauth, if_exists, auth_local_webserver, table_schema, location, progress_bar, credentials)
210 ) -> None:
211 pandas_gbq = _try_import()
--> 212 pandas_gbq.to_gbq(
213 dataframe,
214 destination_table,
~/.local/lib/python3.8/site-packages/pandas_gbq/gbq.py in to_gbq(dataframe, destination_table, project_id, chunksize, reauth, if_exists, auth_local_webserver, table_schema, location, progress_bar, credentials, api_method, verbose, private_key)
1191 return
1192
-> 1193 connector.load_data(
1194 dataframe,
1195 destination_table_ref,
~/.local/lib/python3.8/site-packages/pandas_gbq/gbq.py in load_data(self, dataframe, destination_table_ref, chunksize, schema, progress_bar, api_method, billing_project)
584
585 try:
--> 586 chunks = load.load_chunks(
587 self.client,
588 dataframe,
~/.local/lib/python3.8/site-packages/pandas_gbq/load.py in load_chunks(client, dataframe, destination_table_ref, chunksize, schema, location, api_method, billing_project)
235 ):
236 if api_method == "load_parquet":
--> 237 load_parquet(
238 client,
239 dataframe,
~/.local/lib/python3.8/site-packages/pandas_gbq/load.py in load_parquet(client, dataframe, destination_table_ref, location, schema, billing_project)
127
128 try:
--> 129 client.load_table_from_dataframe(
130 dataframe,
131 destination_table_ref,
~/.local/lib/python3.8/site-packages/google/cloud/bigquery/client.py in load_table_from_dataframe(self, dataframe, destination, num_retries, job_id, job_id_prefix, location, project, job_config, parquet_compression, timeout)
2670
2671 finally:
-> 2672 os.remove(tmppath)
2673
2674 def load_table_from_json(
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmp1yeitxcu_job_4b7daa39.parquet'
A solution please
As Ricco D has mentioned, when writing the dataframe to the table, the BigQuery client creates a temporary file on the host system and then removes it once the dataframe is written. See the source code of the client for reference. The linked code chunk does the following operations.
Create a temporary file
Load the temporary file into the table
Delete the file after loading.
The error you are facing comes from the 1st step: there is not enough space for the BigQuery client to create the temporary file. So consider deleting unused files from the host system so that the client can create its temporary files.
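If you want to confirm that free space is indeed the problem before cleaning up, a quick standard-library check of the filesystem holding the temporary file (assumed here to be under /tmp, as in the traceback) is enough:
import shutil

# Free space on the filesystem holding /tmp, where the client
# writes its temporary parquet file before loading it into BigQuery
total, used, free = shutil.disk_usage("/tmp")
print(f"/tmp: {free / 1024**3:.2f} GiB free of {total / 1024**3:.2f} GiB")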

I can't retrieve footprints from place information

When I try to retrieve footprints from a place name using
import osmnx as ox
tags = {'building': True}
gdf = ox.geometries_from_place('Piedmont, California, USA', tags)
I get the following error message:
IllegalArgumentException: Argument must be Polygonal or LinearRing
PredicateError: Failed to evaluate <_FuncPtr object at 0x13a2ea120>
In the past, I successfully used the old method ox.footprints_from_place() to retrieve footprints. However, this does not work anymore, and neither does the new method. Has anybody had the same issues with the new version (1.0.1) of the osmnx package?
Due to stackoverflow restrictions I can't post the complete traceback message. It seems that osmnx does not create the required polygon. The first error entries are:
---------------------------------------------------------------------------
PredicateError Traceback (most recent call last)
<ipython-input-13-98877af3189c> in <module>
1 import osmnx as ox
2 tags = {'building': True}
----> 3 gdf = ox.geometries_from_place('Piedmont, California, USA', tags)
/opt/anaconda3/envs/gerdaenv/lib/python3.7/site-packages/osmnx/geometries.py in geometries_from_place(query, tags, which_result, buffer_dist)
214
215 # create GeoDataFrame using this polygon(s) geometry
--> 216 gdf = geometries_from_polygon(polygon, tags)
217
218 return gdf
/opt/anaconda3/envs/gerdaenv/lib/python3.7/site-packages/osmnx/geometries.py in geometries_from_polygon(polygon, tags)
264
265 # create GeoDataFrame from the downloaded data
--> 266 gdf = _create_gdf(response_jsons, polygon, tags)
267
268 return gdf
/opt/anaconda3/envs/gerdaenv/lib/python3.7/site-packages/osmnx/geometries.py in _create_gdf(response_jsons, polygon, tags)
428
429 # Apply .buffer(0) to any invalid geometries
--> 430 gdf = _buffer_invalid_geometries(gdf)
431
432 # Filter final gdf to requested tags and query polygon
/opt/anaconda3/envs/gerdaenv/lib/python3.7/site-packages/osmnx/geometries.py in _buffer_invalid_geometries(gdf)
891
892 # create a filter for rows with invalid geometries
--> 893 invalid_geometry_filter = ~gdf["geometry"].is_valid
894
895 # if there are invalid geometries
/opt/anaconda3/envs/gerdaenv/lib/python3.7/site-packages/geopandas/base.py in is_valid(self)
168 """Returns a ``Series`` of ``dtype('bool')`` with value ``True`` for
169 geometries that are valid."""
--> 170 return _delegate_property("is_valid", self)
171
172 #property
The last traceback messages are:
/opt/anaconda3/envs/gerdaenv/lib/python3.7/site-packages/shapely/predicates.py in __call__(self, this)
23 def __call__(self, this):
24 self._validate(this)
---> 25 return self.fn(this._geom)
/opt/anaconda3/envs/gerdaenv/lib/python3.7/site-packages/shapely/geos.py in errcheck_predicate(result, func, argtuple)
582 """Result is 2 on exception, 1 on True, 0 on False"""
583 if result == 2:
--> 584 raise PredicateError("Failed to evaluate %s" % repr(func))
585 return result
586
PredicateError: Failed to evaluate <_FuncPtr object at 0x13a2ea120>

From numpy array of sentences to array of embedding

I'm learning to use tensorflow and trying to classify text. I have a dataset where each text is associated with a label 0 or 1. My goal is to use some sentence embedding to do the classification. First I created an embedding of the whole text using the pretrained GNews Swivel embedding:
embedding = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1"
hub_layer = hub.KerasLayer(embedding, input_shape=[2], dtype=tf.string,
trainable=True, output_shape=[None, 20])
Now I'd like to try something else (similar to this method http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/) and I wanted to:
Separate each text into sentences.
Create an array of embeddings for each text, one per sentence.
Use that as input for my model.
I'm able to separate the texts into sentences. Each text is an array of sentences saved as:
[array(['AITA - Getting Hugged At The Bar .',
'This all happened less than an hour ago..',
'I was at a bar I frequent and talking to some people I know, suddenly I feel someone from behind me hugging and starting to grind against me.',
"I know a lot of people at the bar, and assume it's a friend of mine, but when I look down at the shoes I do not recognize them.",
'I look back and I see a dude I do not know, nor have I ever seen.',
"He looks back at me, with horror in his eyes, because I'm a dude too...",
'I feel an urge of rage inside me and shove him in the chest with my elbow so I can get away..',
'He goes to his table and I go back to mine.',
'I was with my roommate and his girlfriend.',
'They asked what happened and I told them, then I see the guy who hugged me looking around for me.',
'Him and two of his friends come up to us and he says: .',
'"I just wanted to apologize, I thought you were someone else.".',
'I respond, "I understand, just check before you hug people.',
'Now, please fuck off".',
'He repeats his last statement, so do I.',
'This happens one more time and at this point his friends have surrounded me, my roommate is on his feet and I have left my beer at the table.',
'His friend goes in my face and says.', '.',
'"He just wanted to apologize, you really shouldn\'t be yelling at us" and starts waiving his finger at me.. We are at a rock bar, it\'s loud, I was speaking louder just to be sure I am heard..',
'The manager knows me so he comes asking me what happened.',
'I explain the situation and he speaks with them then he tells me.',
'.', '"They want to say sorry, can you guys shake hand?', '".',
'"Yeah sure, I just want them to leave me alone."', '.',
"Honestly I didn't even want to touch the guy, but whatever.",
"We shake hands and they go away.. Me and my roommate look at their table and there's no one that looks anything like me.",
'So, reddit, did I overreact?', 'Am I The Asshole here?'],
dtype='<U190')
array(["AITA if i don't want to pay my friend 5 dollars for a slice of pizzaSo, my friend bought herself, our other friend and I a pizza to eat for lunch.",
'Me and other friend ate 1 slice of pizza from an extra large pizza.',
'Other friend has already paid my friend that bought the pizza 5 dollars..',
'I am trying to save money wherever i can, but she really wants me to pay her 5 dollars "so its fair".. AITA?'],
dtype='<U146')
Now when I try to create an embedding from one element of the array it works. Here is my embedding function:
def embedding_f(test):
    print("test shape:", test.shape)
    # a = tf.constant(test)
    embedding = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1"
    hub_layer = hub.KerasLayer(embedding, input_shape=[], dtype=tf.string,
                               trainable=True, output_shape=[None, 20])
    ret = hub_layer(test)
    # print(ret)
    return ret.numpy()
# Works
emb = cnn.embedding_f(train_data[0])
But if I try to input a batch of data (as will be done later in the pipeline), the program crashes:
# Crashes
emb = cnn.embedding_f(train_data[0:2])
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-8-76f4f9171cad> in <module>
----> 1 emb = cnn.embedding_f(train_data[0:2])
~/AITA/aita/cnn.py in embedding_f(test)
22 hub_layer = hub.KerasLayer(embedding, input_shape=[2], dtype=tf.string,
23 trainable=True, output_shape=[None, 20])
---> 24 ret = hub_layer(test)
25 # print(ret)
26 return ret.numpy()
/usr/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
817 return ops.convert_to_tensor_v2(x)
818 return x
--> 819 inputs = nest.map_structure(_convert_non_tensor, inputs)
820 input_list = nest.flatten(inputs)
821
/usr/lib/python3.8/site-packages/tensorflow/python/util/nest.py in map_structure(func, *structure, **kwargs)
615
616 return pack_sequence_as(
--> 617 structure[0], [func(*x) for x in entries],
618 expand_composites=expand_composites)
619
/usr/lib/python3.8/site-packages/tensorflow/python/util/nest.py in <listcomp>(.0)
615
616 return pack_sequence_as(
--> 617 structure[0], [func(*x) for x in entries],
618 expand_composites=expand_composites)
619
/usr/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py in _convert_non_tensor(x)
815 # `SparseTensors` can't be converted to `Tensor`.
816 if isinstance(x, (np.ndarray, float, int)):
--> 817 return ops.convert_to_tensor_v2(x)
818 return x
819 inputs = nest.map_structure(_convert_non_tensor, inputs)
/usr/lib/python3.8/site-packages/tensorflow/python/framework/ops.py in convert_to_tensor_v2(value, dtype, dtype_hint, name)
1276 ValueError: If the `value` is a tensor not of given `dtype` in graph mode.
1277 """
-> 1278 return convert_to_tensor(
1279 value=value,
1280 dtype=dtype,
/usr/lib/python3.8/site-packages/tensorflow/python/framework/ops.py in convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, dtype_hint, ctx, accepted_result_types)
1339
1340 if ret is None:
-> 1341 ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
1342
1343 if ret is NotImplemented:
/usr/lib/python3.8/site-packages/tensorflow/python/framework/tensor_conversion_registry.py in _default_conversion_function(***failed resolving arguments***)
50 def _default_conversion_function(value, dtype, name, as_ref):
51 del as_ref # Unused.
---> 52 return constant_op.constant(value, dtype, name=name)
53
54
/usr/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py in constant(value, dtype, shape, name)
259 ValueError: if called on a symbolic tensor.
260 """
--> 261 return _constant_impl(value, dtype, shape, name, verify_shape=False,
262 allow_broadcast=True)
263
/usr/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py in _constant_impl(value, dtype, shape, name, verify_shape, allow_broadcast)
268 ctx = context.context()
269 if ctx.executing_eagerly():
--> 270 t = convert_to_eager_tensor(value, ctx, dtype)
271 if shape is None:
272 return t
/usr/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py in convert_to_eager_tensor(value, ctx, dtype)
94 dtype = dtypes.as_dtype(dtype).as_datatype_enum
95 ctx.ensure_initialized()
---> 96 return ops.EagerTensor(value, ctx.device_name, dtype)
97
98
ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type numpy.ndarray).
The error states that it's not possible to convert a NumPy array to a tensor. I've tried changing the input_shape parameter of the KerasLayer to no avail. The only solution I see is to calculate the embedding for each text by looping through them one by one before feeding the result to the rest of the network, but that seems highly inefficient (and requires too much memory for my laptop). The word-embedding examples I see do it this way, however.
What is the correct way to go about getting a list of embeddings from multiple sentences?
I think your output_shape should be set to [20] (from https://www.tensorflow.org/hub/api_docs/python/hub/KerasLayer):
hub.KerasLayer("/tmp/text_embedding_model",
output_shape=[20], # Outputs a tensor with shape [batch_size, 20].
input_shape=[], # Expects a tensor of shape [batch_size] as input.
dtype=tf.string) # Expects a tf.string input tensor.
Using TF 2.4.1 and tensorflow_hub 0.11.0, this works for me:
data = np.array(['AITA - Getting Hugged At The Bar .', 'This all happened less than an hour ago..'])
model_url = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1"
embedding = hub.KerasLayer(model_url, input_shape=[], dtype=tf.string,
                           trainable=True, output_shape=[20])(data)
If you don't want to add layers on top of the KerasLayer, you can also just call
model = hub.load(model_url)
embedding = model(data)
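Either way, a quick sanity check (under the same TF and hub versions) is that the result has one 20-dimensional vector per input sentence:
print(embedding.shape)  # (2, 20): one 20-dim vector per sentence in `data`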

Facebook-Prophet: Overflow error when fitting

I wanted to practice with prophet so I decided to download the "Yearly mean total sunspot number [1700 - now]" data from this place
http://www.sidc.be/silso/datafiles#total.
This is my code so far
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from fbprophet import Prophet
from fbprophet.plot import plot_plotly
import plotly.offline as py
import datetime
py.init_notebook_mode()
plt.style.use('classic')
df = pd.read_csv('SN_y_tot_V2.0.csv', delimiter=';', names=['ds', 'y', 'C3', 'C4', 'C5'])
df = df.drop(columns=['C3', 'C4', 'C5'])
df.plot(x="ds", style='-', figsize=(10,5))
plt.xlabel('year', fontsize=15); plt.ylabel('mean number of sunspots', fontsize=15)
plt.xticks(np.arange(1701.5, 2018.5, 40))
plt.ylim(-2, 300); plt.xlim(1700, 2020)
plt.legend()
df['ds'] = pd.to_datetime(df.ds, format='%Y')
m = Prophet(yearly_seasonality=True)
Everything looks good so far and df['ds'] is in date time format.
However when I execute
m.fit(df)
I get the following error
---------------------------------------------------------------------------
OverflowError Traceback (most recent call last)
<ipython-input-57-a8e399fdfab2> in <module>()
----> 1 m.fit(df)
/anaconda2/envs/mde/lib/python3.7/site-packages/fbprophet/forecaster.py in fit(self, df, **kwargs)
1055 self.history_dates = pd.to_datetime(df['ds']).sort_values()
1056
-> 1057 history = self.setup_dataframe(history, initialize_scales=True)
1058 self.history = history
1059 self.set_auto_seasonalities()
/anaconda2/envs/mde/lib/python3.7/site-packages/fbprophet/forecaster.py in setup_dataframe(self, df, initialize_scales)
286 df['cap_scaled'] = (df['cap'] - df['floor']) / self.y_scale
287
--> 288 df['t'] = (df['ds'] - self.start) / self.t_scale
289 if 'y' in df:
290 df['y_scaled'] = (df['y'] - df['floor']) / self.y_scale
/anaconda2/envs/mde/lib/python3.7/site-packages/pandas/core/ops/__init__.py in wrapper(left, right)
990 # test_dt64_series_add_intlike, which the index dispatching handles
991 # specifically.
--> 992 result = dispatch_to_index_op(op, left, right, pd.DatetimeIndex)
993 return construct_result(
994 left, result, index=left.index, name=res_name, dtype=result.dtype
/anaconda2/envs/mde/lib/python3.7/site-packages/pandas/core/ops/__init__.py in dispatch_to_index_op(op, left, right, index_class)
628 left_idx = left_idx._shallow_copy(freq=None)
629 try:
--> 630 result = op(left_idx, right)
631 except NullFrequencyError:
632 # DatetimeIndex and TimedeltaIndex with freq == None raise ValueError
/anaconda2/envs/mde/lib/python3.7/site-packages/pandas/core/indexes/datetimelike.py in __sub__(self, other)
521 def __sub__(self, other):
522 # dispatch to ExtensionArray implementation
--> 523 result = self._data.__sub__(maybe_unwrap_index(other))
524 return wrap_arithmetic_op(self, other, result)
525
/anaconda2/envs/mde/lib/python3.7/site-packages/pandas/core/arrays/datetimelike.py in __sub__(self, other)
1278 result = self._add_offset(-other)
1279 elif isinstance(other, (datetime, np.datetime64)):
-> 1280 result = self._sub_datetimelike_scalar(other)
1281 elif lib.is_integer(other):
1282 # This check must come after the check for np.timedelta64
/anaconda2/envs/mde/lib/python3.7/site-packages/pandas/core/arrays/datetimes.py in _sub_datetimelike_scalar(self, other)
856
857 i8 = self.asi8
--> 858 result = checked_add_with_arr(i8, -other.value, arr_mask=self._isnan)
859 result = self._maybe_mask_results(result)
860 return result.view("timedelta64[ns]")
/anaconda2/envs/mde/lib/python3.7/site-packages/pandas/core/algorithms.py in checked_add_with_arr(arr, b, arr_mask, b_mask)
1006
1007 if to_raise:
-> 1008 raise OverflowError("Overflow in int64 addition")
1009 return arr + b
1010
OverflowError: Overflow in int64 addition
I understand that there's an issue with 'ds', but I am not sure whether there is something wrong with the column's format or whether this is an open issue.
Does anyone have any idea how to fix this? I have checked some issues in github, but they haven't been of much help in this case.
Thanks
This is not an answer to fix the issue, but how to avoid the error.
I got the same error, and managed to get rid of it when I reduced the amount of incoming data OR when I reduced the forecast horizon.
For example, I limited my training data to start in 1825 even though I have data from the 1700s. I also shortened my forecast from 10 years to only 1 year. Both got rid of the error.
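If you want to try the same workaround on the sunspot data from the question (reusing the df, pd and Prophet already defined there), trimming the training frame before fitting is enough; the 1825 cutoff is just the one I happened to use:
# Keep only the more recent part of the history before fitting
df_recent = df[df['ds'] >= pd.Timestamp('1825-01-01')]
m = Prophet(yearly_seasonality=True)
m.fit(df_recent)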
My guess is that this has something to do with how the ARIMA is implemented inside Prophet itself: in some cases the numbers are just too huge to be managed by int64 and they overflow.