With the Python API, I've created a document in the collection "spells" as follows:
>>> client.query(
... q.create(
... q.collection("spells"),
... {
... "data": {"name": "Mountainous Thunder", "element": "air", "cost": 15}
... }
... ))
{'ref': Ref(id=243802653698556416, collection=Ref(id=spells, collection=Ref(id=collections))), 'ts': 1568767179200000, 'data': {'name': 'Mountainous Thunder', 'element': 'air', 'cost': 15}}
Then I tried to get the document using its ts as follows:
>>> client.query(q.get(q.ref(q.collection("spells", "1568767179200000"))))
But the result is an error: "Ref expected, Object provided".
>>> client.query(q.get(q.ref(q.collection("spells", "1568767179200000"))))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/dist-packages/faunadb/client.py", line 175, in query
return self._execute("POST", "", _wrap(expression), with_txn_time=True)
File "/usr/local/lib/python3.6/dist-packages/faunadb/client.py", line 242, in _execute
FaunaError.raise_for_status_code(request_result)
File "/usr/local/lib/python3.6/dist-packages/faunadb/errors.py", line 28, in raise_for_status_code
raise BadRequest(request_result)
faunadb.errors.BadRequest: Ref expected, Object provided.
I have no idea what went wrong; any suggestions are welcome!
I've solved this myself. I had gotten the parameters to q.ref wrong: the document's id is passed as the second argument to q.ref, not to q.collection (and it is the document's id, not its ts).
The correct params are as follows:
>>> client.query(q.get(q.ref(q.collection("spells"),"243802585534824962")))
{'ref': Ref(id=243802585534824962, collection=Ref(id=spells, collection=Ref(id=collections))), 'ts': 1568767114140000, 'data': {'name': 'Mountainous Thunder', 'element': 'air', 'cost': 15}}
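For completeness, here is a minimal sketch of the round trip (assuming the faunadb Python driver and a placeholder secret): the create response already contains the document's Ref, so you can pass it straight back to get() instead of rebuilding it by hand, and the numeric id it carries is the ref id, not the ts.
from faunadb import query as q
from faunadb.client import FaunaClient

client = FaunaClient(secret="YOUR_FAUNA_SECRET")  # placeholder secret

# Create the document and keep the response, which includes its Ref.
created = client.query(q.create(
    q.collection("spells"),
    {"data": {"name": "Mountainous Thunder", "element": "air", "cost": 15}}
))

# Reuse the returned Ref directly instead of reconstructing it by hand.
doc = client.query(q.get(created["ref"]))
print(doc["data"])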
I'm trying to use the TIA module to pull EURUSD forward rates from the BBG API. How does one go about doing this?
I tried using BbgDataManager() to request a specific forward rate, but I don't seem to be having any success. The code I tried is below.
df = mgr['EURUSD Curncy','FWD_CURVE']
df
MultiSidAccessor(EURUSD Curncy,FWD_CURVE)
df.FWD_CURVE
Produces the following error message:
File "", line 1
df.EURUSD Curncy
^
SyntaxError: invalid syntax
df.FWD_CURVE
Traceback (most recent call last):
File "", line 1, in
File "C:\Users\anthony.yeh\PycharmProjects\blpapi\venv\lib\site-packages\tia\bbg\datamgr.py", line 85, in getattribute
return self.get_attributes(item, **self.overrides)
File "C:\Users\anthony.yeh\PycharmProjects\blpapi\venv\lib\site-packages\tia\bbg\datamgr.py", line 90, in get_attributes
frame = self.mgr.get_attributes(self.sids, flds, **overrides)
File "C:\Users\anthony.yeh\PycharmProjects\blpapi\venv\lib\site-packages\tia\bbg\datamgr.py", line 148, in get_attributes
return self.terminal.get_reference_data(sids, flds, **overrides).as_frame()
File "C:\Users\anthony.yeh\PycharmProjects\blpapi\venv\lib\site-packages\tia\bbg\v3api.py", line 745, in get_reference_data
return self.execute(req)
File "C:\Users\anthony.yeh\PycharmProjects\blpapi\venv\lib\site-packages\tia\bbg\v3api.py", line 729, in execute
request.has_exception and request.raise_exception()
File "C:\Users\anthony.yeh\PycharmProjects\blpapi\venv\lib\site-packages\tia\bbg\v3api.py", line 215, in raise_exception
raise Exception('SecurityError: %s' % ','.join(msgs))
Exception: SecurityError: (FWD_CURVE, BAD_SEC, Unknown/Invalid Security [nid:2972] )
Similarly, using the mgr with a query similar to the way you would pull this in Excel using BFxForward produces errors.
eurusd_sids = mgr["eurusd curncy","9/12/2019","midoutright"]
eurusd_sids.PX_LAST
produces this error message:
Traceback (most recent call last):
File "", line 1, in
File "C:\Users\anthony.yeh\PycharmProjects\blpapi\venv\lib\site-packages\tia\bbg\datamgr.py", line 85, in getattribute
return self.get_attributes(item, **self.overrides)
File "C:\Users\anthony.yeh\PycharmProjects\blpapi\venv\lib\site-packages\tia\bbg\datamgr.py", line 90, in get_attributes
frame = self.mgr.get_attributes(self.sids, flds, **overrides)
File "C:\Users\anthony.yeh\PycharmProjects\blpapi\venv\lib\site-packages\tia\bbg\datamgr.py", line 148, in get_attributes
return self.terminal.get_reference_data(sids, flds, **overrides).as_frame()
File "C:\Users\anthony.yeh\PycharmProjects\blpapi\venv\lib\site-packages\tia\bbg\v3api.py", line 745, in get_reference_data
return self.execute(req)
File "C:\Users\anthony.yeh\PycharmProjects\blpapi\venv\lib\site-packages\tia\bbg\v3api.py", line 729, in execute
request.has_exception and request.raise_exception()
File "C:\Users\anthony.yeh\PycharmProjects\blpapi\venv\lib\site-packages\tia\bbg\v3api.py", line 215, in raise_exception
raise Exception('SecurityError: %s' % ','.join(msgs))
Exception: SecurityError: (9/12/2019, BAD_SEC, Unknown/Invalid Security [nid:2972] ),(midoutright, BAD_SEC, Unknown/Invalid Security [nid:2972] )
You may try with "EURUSD BGN Curncy".
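A minimal sketch of that suggestion (untested without a Bloomberg terminal, and assuming the tia package with a working blpapi session). Everything passed inside mgr[...] is treated as a security, which is why FWD_CURVE, the date, and midoutright each came back as BAD_SEC; fields are requested as attributes on the accessor instead:
import tia.bbg.datamgr as dm

mgr = dm.BbgDataManager()

# Request only tickers inside the brackets; ask for fields as attributes.
eurusd = mgr['EURUSD BGN Curncy']
print(eurusd.PX_LAST)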
I found this error in the server log. I can't replicate the problem.
File "/home/futilestudio/.venvs/36venv/lib/python3.6/site-packages/django/db/models/query.py", line 250, in __len__
self._fetch_all()
File "/home/futilestudio/.venvs/36venv/lib/python3.6/site-packages/django/db/models/query.py", line 1186, in _fetch_all
self._result_cache = list(self._iterable_class(self))
File "/home/futilestudio/.venvs/36venv/lib/python3.6/site-packages/django/db/models/query.py", line 54, in __iter__
results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
File "/home/futilestudio/.venvs/36venv/lib/python3.6/site-packages/django/db/models/sql/compiler.py", line 1065, in execute_sql
cursor.execute(sql, params)
File "/home/futilestudio/.venvs/36venv/lib/python3.6/site-packages/django/db/backends/utils.py", line 100, in execute
return super().execute(sql, params)
File "/home/futilestudio/.venvs/36venv/lib/python3.6/site-packages/django/db/backends/utils.py", line 68, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/home/futilestudio/.venvs/36venv/lib/python3.6/site-packages/django/db/backends/utils.py", line 77, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/home/futilestudio/.venvs/36venv/lib/python3.6/site-packages/django/db/backends/utils.py", line 85, in _execute
return self.cursor.execute(sql, params)
File "/home/futilestudio/.venvs/36venv/lib/python3.6/site-packages/django/db/utils.py", line 89, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/home/futilestudio/.venvs/36venv/lib/python3.6/site-packages/django/db/backends/utils.py", line 85, in _execute
return self.cursor.execute(sql, params)
django.db.utils.NotSupportedError: FOR UPDATE cannot be applied to the nullable side of an outer join
I think it happens only on a PostgreSQL database; I tried SQLite before and it worked.
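From what I understand (this is my own reading of Django's queryset code, not something in the traceback), update_or_create wraps the row lookup in select_for_update() inside a transaction; SQLite silently ignores FOR UPDATE, while PostgreSQL refuses to lock the nullable side of an outer join, which would explain why only PostgreSQL raises this. Roughly simplified:
from django.db import transaction

def update_or_create_sketch(queryset, defaults, **kwargs):
    # Simplified view of what update_or_create does under the hood.
    with transaction.atomic(using=queryset.db):
        try:
            # This SELECT ... FOR UPDATE is what PostgreSQL rejects when the
            # queryset also carries an outer join (e.g. select_related on a
            # nullable ForeignKey, or ordering on a nullable relation).
            obj = queryset.select_for_update().get(**kwargs)
        except queryset.model.DoesNotExist:
            return queryset.create(**dict(defaults, **kwargs)), True
        for name, value in defaults.items():
            setattr(obj, name, value)
        obj.save(using=queryset.db)
        return obj, False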
The problem is here:
match, created = Match.objects.update_or_create(
    match_id=draft_match.match_id,
    defaults={
        field.name: getattr(temp_match, field.name, None)
        for field in Match._meta.fields
        if field.name not in ['id', 'pk']
    }
)
Is there a problem with the attributes, or should I not use update_or_create and do it another way?
EDIT
In [18]: match, created = Match.objects.update_or_create(match_id=draft_match.match_id, defaults={
...: field.name: getattr(temp_match, field.name, None) for field in Match._meta.fields if
...: not field.name in ['id', 'pk','created','modified','home_team','away_team']})
returns the same error, so I checked the defaults and NONE of them is a reverse ForeignKey. There are only two ForeignKeys, and even when I exclude them it raises the same error.
In [22]: {
    ...: field.name: (getattr(temp_match, field.name, None), field) for field in Match._meta.fields if
    ...: not field.name in ['id', 'pk', 'created', 'modified']}
Out[22]:
{'match_url': ('https://cestohlad.eu/sport-kansas-city-monterrey/',
<django.db.models.fields.URLField: match_url>),
'match_id': ('4ps3utZN', <django.db.models.fields.CharField: match_id>),
'datetime': (datetime.datetime(2019, 4, 12, 3, 0),
<django.db.models.fields.DateTimeField: datetime>),
'home_team': (<Team: Sporting Kansas City (USA) (CURGfJWt)>,
<django.db.models.fields.related.ForeignKey: home_team>),
'away_team': (<Team: Monterrey (Mex) (Ya23C2Zs)>,
<django.db.models.fields.related.ForeignKey: away_team>),
'home_score': (2,
<django.db.models.fields.PositiveSmallIntegerField: home_score>),
'away_score': (5,
<django.db.models.fields.PositiveSmallIntegerField: away_score>),
'home_odds': (Decimal('0.6215'),
<django.db.models.fields.DecimalField: home_odds>),
'away_odds': (Decimal('0.4850'),
<django.db.models.fields.DecimalField: away_odds>),
'under_odds': (Decimal('2.02'),
<django.db.models.fields.DecimalField: under_odds>),
'over_odds': (Decimal('1.84'),
<django.db.models.fields.DecimalField: over_odds>),
'total': (Decimal('2.75'), <django.db.models.fields.DecimalField: total>),
'total_real': (Decimal('7.00'),
<django.db.models.fields.DecimalField: total_real>),
'correct': (False, <django.db.models.fields.BooleanField: correct>),
'home_odds_raw': (Decimal('0.4464'),
<django.db.models.fields.DecimalField: home_odds_raw>),
'draw_odds_raw': (Decimal('0.2817'),
<django.db.models.fields.DecimalField: draw_odds_raw>),
'away_odds_raw': (Decimal('0.3484'),
<django.db.models.fields.DecimalField: away_odds_raw>)}
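One possible way to "do it another way", if update_or_create keeps hitting this: a hedged, untested sketch that does the lookup and update manually, giving up the row lock (and therefore the concurrency protection) that update_or_create takes. match_id is excluded from the defaults here because it is passed explicitly.
defaults = {
    field.name: getattr(temp_match, field.name, None)
    for field in Match._meta.fields
    if field.name not in ['id', 'pk', 'match_id']
}
try:
    match = Match.objects.get(match_id=draft_match.match_id)
    created = False
    for name, value in defaults.items():
        setattr(match, name, value)
    match.save()
except Match.DoesNotExist:
    match = Match.objects.create(match_id=draft_match.match_id, **defaults)
    created = True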
I have a dataframe whose columns contain lists. How can I query these columns?
>>> df1.shape
(1812871, 7)
>>> df1.dtypes
CHROM object
POS int32
ID object
REF object
ALT object
QUAL int8
FILTER object
dtype: object
>>> df1.head()
CHROM POS ID REF ALT QUAL FILTER
0 20 60343 rs527639301 G [A] 100 [PASS]
1 20 60419 rs538242240 A [G] 100 [PASS]
2 20 60479 rs149529999 C [T] 100 [PASS]
3 20 60522 rs150241001 T [TC] 100 [PASS]
4 20 60568 rs533509214 A [C] 100 [PASS]
>>> df2 = df1.head(30)
>>> df3 = df1.head(3000)
I found a previous question, but its solutions do not quite work for me. The accepted answer does not work:
>>> df2[df2.ALT.apply(lambda x: x == ['TC'])]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/user/miniconda3/lib/python3.6/site-packages/pandas/core/frame.py", line 2682, in __getitem__
return self._getitem_array(key)
File "/home/user/miniconda3/lib/python3.6/site-packages/pandas/core/frame.py", line 2726, in _getitem_array
indexer = self.loc._convert_to_indexer(key, axis=1)
File "/home/user/miniconda3/lib/python3.6/site-packages/pandas/core/indexing.py", line 1314, in _convert_to_indexer
indexer = check = labels.get_indexer(objarr)
File "/home/user/miniconda3/lib/python3.6/site-packages/pandas/core/indexes/base.py", line 3259, in get_indexer
indexer = self._engine.get_indexer(target._ndarray_values)
File "pandas/_libs/index.pyx", line 301, in pandas._libs.index.IndexEngine.get_indexer
File "pandas/_libs/hashtable_class_helper.pxi", line 1544, in pandas._libs.hashtable.PyObjectHashTable.lookup
TypeError: unhashable type: 'numpy.ndarray'
The reason being, the booleans get nested:
>>> df2.ALT.apply(lambda x: x == ['TC']).head()
0 [False]
1 [False]
2 [False]
3 [True]
4 [False]
Name: ALT, dtype: object
So I tried the second answer, which seemed to work:
>>> c = np.empty(1, object)
>>> c[0] = ['TC']
>>> df2[df2.ALT.values == c]
CHROM POS ID REF ALT QUAL FILTER
3 20 60522 rs150241001 T [TC] 100 [PASS]
But strangely, it doesn't work when I try it on the larger dataframe:
>>> df3[df3.ALT.values == c]
Traceback (most recent call last):
File "/home/user/miniconda3/lib/python3.6/site-packages/pandas/core/indexes/base.py", line 3078, in get_loc
return self._engine.get_loc(key)
File "pandas/_libs/index.pyx", line 140, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/index.pyx", line 162, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/hashtable_class_helper.pxi", line 1492, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas/_libs/hashtable_class_helper.pxi", line 1500, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: False
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/user/miniconda3/lib/python3.6/site-packages/pandas/core/frame.py", line 2688, in __getitem__
return self._getitem_column(key)
File "/home/user/miniconda3/lib/python3.6/site-packages/pandas/core/frame.py", line 2695, in _getitem_column
return self._get_item_cache(key)
File "/home/user/miniconda3/lib/python3.6/site-packages/pandas/core/generic.py", line 2489, in _get_item_cache
values = self._data.get(item)
File "/home/user/miniconda3/lib/python3.6/site-packages/pandas/core/internals.py", line 4115, in get
loc = self.items.get_loc(item)
File "/home/user/miniconda3/lib/python3.6/site-packages/pandas/core/indexes/base.py", line 3080, in get_loc
return self._engine.get_loc(self._maybe_cast_indexer(key))
File "pandas/_libs/index.pyx", line 140, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/index.pyx", line 162, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/hashtable_class_helper.pxi", line 1492, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas/_libs/hashtable_class_helper.pxi", line 1500, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: False
Which is probably because the result of the boolean comparison is different!
>>> df3.ALT.values == c
False
>>> df2.ALT.values == c
array([False, False, False, True, False, False, False, False, False,
False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, False,
False, False, False])
This is completely baffling to me.
I found that a hacky solution of casting the lists to tuples works for me:
import pandas as pd

df = pd.DataFrame({'CHROM': [20] * 5,
                   'POS': [60343, 60419, 60479, 60522, 60568],
                   'ID': ['rs527639301', 'rs538242240', 'rs149529999', 'rs150241001', 'rs533509214'],
                   'REF': ['G', 'A', 'C', 'T', 'A'],
                   'ALT': [['A'], ['G'], ['T'], ['TC'], ['C']],
                   'QUAL': [100] * 5,
                   'FILTER': [['PASS']] * 5})

# Tuples are hashable, so whole-cell comparison works after the cast
df['ALT'] = df['ALT'].apply(tuple)
df[df['ALT'] == ('C',)]
This method works because tuples are immutable and hashable, so pandas compares each cell as a whole element, instead of the elementwise (intra-list) comparison that produced the nested Boolean series, since lists are not hashable.
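Applied to the frames from the question (assuming the same df3), the same idea also works as a one-off mask without permanently converting the column:
# Convert each list cell to a tuple on the fly so the whole cell is compared
mask = df3['ALT'].apply(tuple) == ('TC',)
df3[mask]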
When I try to use the YouTube Live Streaming API, I get errors like this.
Is there any way to solve this issue?
======================================================================
07:23:49.643 Traceback (most recent call last):
07:23:49.643 File "/usr/lib/enigma2/python/Components/PluginComponent.py", line 53, in readPluginList
07:23:49.643 File "/usr/lib/enigma2/python/Tools/Import.py", line 2, in my_import
07:23:49.643 File "/usr/lib/enigma2/python/Plugins/Extensions/YouTubeLiveStreaming/plugin.py", line 30, in <module>
07:23:49.644 import apiclient.discovery
07:23:49.644 File "/usr/lib/python2.7/site-packages/apiclient/__init__.py", line 24, in <module>
07:23:49.644 File "/usr/lib/python2.7/site-packages/googleapiclient/sample_tools.py", line 32, in <module>
07:23:49.644 File "/usr/lib/python2.7/site-packages/oauth2client/tools.py", line 70, in <module>
07:23:49.644 argparser = _CreateArgumentParser()
07:23:49.644 File "/usr/lib/python2.7/site-packages/oauth2client/tools.py", line 55, in _CreateArgumentParser
07:23:49.644 parser = argparse.ArgumentParser(add_help=False)
07:23:49.644 File "/usr/lib/python2.7/argparse.py", line 1586, in __init__
07:23:49.644 AttributeError: 'module' object has no attribute 'argv'
======================================================================
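A possible workaround (an assumption on my part, inferred from the traceback rather than from the plugin's docs): argparse.ArgumentParser() reads sys.argv[0] to build its default program name, and embedded interpreters such as enigma2 can start Python without populating sys.argv at all. Defining it before the Google client imports should avoid the AttributeError:
import sys
if not hasattr(sys, 'argv'):
    sys.argv = ['']  # give argparse something to read for its prog name

import apiclient.discovery  # the import that previously failed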
For the past few days I have been having an issue with serializing data to the tfrecord format and then subsequently deserializing it using parse_single_sequence_example. I am attempting to retrieve data for use with a fairly standard RNN model; however, this is my first attempt at using the tfrecords format and the associated pipeline that goes with it.
Here is a toy example to reproduce the issue I am having:
import tensorflow as tf
import tempfile
from IPython import embed

sequences = [[1, 2, 3], [4, 5, 1], [1, 2]]
label_sequences = [[0, 1, 0], [1, 0, 0], [1, 1]]

def make_example(sequence, labels):
    ex = tf.train.SequenceExample()
    sequence_length = len(sequence)
    ex.context.feature["length"].int64_list.value.append(sequence_length)
    fl_tokens = ex.feature_lists.feature_list["tokens"]
    fl_labels = ex.feature_lists.feature_list["labels"]
    for token, label in zip(sequence, labels):
        fl_tokens.feature.add().int64_list.value.append(token)
        fl_labels.feature.add().int64_list.value.append(label)
    return ex

writer = tf.python_io.TFRecordWriter('./test.tfrecords')
for sequence, label_sequence in zip(sequences, label_sequences):
    ex = make_example(sequence, label_sequence)
    writer.write(ex.SerializeToString())
writer.close()

tf.reset_default_graph()

file_name_queue = tf.train.string_input_producer(['./test.tfrecords'], num_epochs=None)
reader = tf.TFRecordReader()

context_features = {
    "length": tf.FixedLenFeature([], dtype=tf.int64)
}
sequence_features = {
    "tokens": tf.FixedLenSequenceFeature([], dtype=tf.int64),
    "labels": tf.FixedLenSequenceFeature([], dtype=tf.int64)
}

ex = reader.read(file_name_queue)

# Parse the example (returns a dictionary of tensors)
context_parsed, sequence_parsed = tf.parse_single_sequence_example(
    serialized=ex,
    context_features=context_features,
    sequence_features=sequence_features
)

context = tf.contrib.learn.run_n(context_parsed, n=1, feed_dict=None)
print(context[0])
sequence = tf.contrib.learn.run_n(sequence_parsed, n=1, feed_dict=None)
print(sequence[0])
The associated stack trace is:
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/common_shapes.py", line 594, in call_cpp_shape_fn
status)
File "/usr/lib/python3.5/contextlib.py", line 66, in exit
next(self.gen)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/errors.py", line 463, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors.InvalidArgumentError: Shape must be rank 0 but is rank 1
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "my_test.py", line 51, in
sequence_features=sequence_features
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/parsing_ops.py", line 640, in parse_single_sequence_example
feature_list_dense_defaults, example_name, name)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/parsing_ops.py", line 837, in _parse_single_sequence_example_raw
name=name)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gen_parsing_ops.py", line 285, in _parse_single_sequence_example
name=name)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 749, in apply_op
op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2382, in create_op
set_shapes_for_outputs(ret)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1783, in set_shapes_for_outputs
shapes = shape_func(op)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/common_shapes.py", line 596, in call_cpp_shape_fn
raise ValueError(err.message)
ValueError: Shape must be rank 0 but is rank 1
I posted this as a potential issue over on GitHub, though it seems I may just be using it incorrectly: Tensorflow Github Issue
So with the background information out of the way, I'm just wondering if I am in fact making an error here. Any help in the right direction would be greatly appreciated; it's been a few days and my poking around hasn't panned out. Thanks all!
Got it, and it was a bad assumption on my part. tf.TFRecordReader.read(queue, name=None) returns a (key, value) tuple, whereas I assumed it returned just the value, and I was passing the whole tuple directly into the example parser.
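A minimal sketch of the corrected read, reusing the names from the toy example above: unpack the (key, value) pair and hand only the serialized value to the parser.
# reader.read() returns (key, serialized_example); parse only the value.
key, serialized_ex = reader.read(file_name_queue)

context_parsed, sequence_parsed = tf.parse_single_sequence_example(
    serialized=serialized_ex,
    context_features=context_features,
    sequence_features=sequence_features
)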