i18next-scanner using keys with variables - i18next

I have a list x = ['a', 'b', 'c'] and load translations using t(`vars.${x[i]}`).
When I run i18next-scanner, I get a record under vars with an empty key ("").
I have already defined the keys in comments, e.g. // t('vars.a');, so they are picked up by the scanner. How can I prevent i18next-scanner from also adding the empty key?

Related

Python append entry KeyError problem because of missing data from the API

So, I'm trying to collect data from an API to build a DataFrame. The problem is that when I get the response in JSON, some of the values are missing for some rows. That means one row has all 10 out of 10 values and some only have 8 out of 10.
For example, I have this code to fill in the data from the API and then form a DataFrame:
response = r.json()
cols = ['A', 'B', 'C', 'D']
l = []
for entry in response:
    l.append([
        entry['realizationreport_id'],
        entry['suppliercontract_code'],
        entry['rid'],
        entry['ppvz_inn'],
    ])
I get this error because in one of the rows the API didn't return a value:
KeyError: 'ppvz_inn'
So I'm trying to fix it so that the DataFrame cell is filled with 0 or NaN when the API doesn't have a value for that specific row:
l = []
for entry in response:
    l.append([
        entry['realizationreport_id'],
        entry['suppliercontract_code'],
        entry['rid'],
        entry['ppvz_inn'],
        try:
            entry['ppvz_supplier_name'],
        except KeyError:
            '0',
And now I get this error:
try:
^
SyntaxError: invalid syntax
How can I actually make this work and fill the cells that have no data?
You cannot have a try-except statement in the middle of your append statement.
You could either work with if statements or first try to fix your JSON data by filling in the empty values. You could also use setdefault; see here for some info about it.
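For example, a minimal sketch of the setdefault approach (the column names are taken from the question; 0 is just an assumed placeholder for missing values):
l = []
for entry in response:
    l.append([
        # setdefault returns the existing value, or inserts and returns 0 if the key is missing
        entry.setdefault('realizationreport_id', 0),
        entry.setdefault('suppliercontract_code', 0),
        entry.setdefault('rid', 0),
        entry.setdefault('ppvz_inn', 0),
        entry.setdefault('ppvz_supplier_name', 0),
    ])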
Use collections.defaultdict. It's a subclass of dict that does not raise KeyError; instead, accessing a missing key creates it with a default value.
You can convert your existing dict to a defaultdict using unpacking:
from collections import defaultdict

for entry in response:
    entry_defaultdict = defaultdict(list, **entry)
In this case, every lookup of a non-existing key will create an empty list as the value for that key.
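For example (a sketch using the column names from the question), missing keys then come back as empty lists instead of raising:
row_cols = ['realizationreport_id', 'suppliercontract_code', 'rid',
            'ppvz_inn', 'ppvz_supplier_name']
l = []
for entry in response:
    entry_defaultdict = defaultdict(list, **entry)
    # A missing key returns [] rather than raising KeyError.
    l.append([entry_defaultdict[c] for c in row_cols])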

Database query for events based on friends going

I'm trying to build a timeline with events for users.
To build this timeline, I would run a query that does the following:
returning all events where friends of the user are set to going.
What I have now:
Client-side: a list of all the IDs of the friends of a user, e.g. ['a', 'b', 'c', 'd', 'e']
Server-side: a Firestore collection with all the events:
events/eventID
This document has a field 'going', which is a list containing all the user IDs that are 'going', e.g. ['d', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm']
(As an example, the query should return this document, because 2 friends are going)
A simple solution would be:
Getting all events and comparing the two lists on the client side. Problem: fine for 100 documents, but impossible once the collection scales (because Firestore bills per document read).
Is there a better way to do this (With Firestore)?
Or is this not possible with Firestore and are there other technologies to do this?
Thanks in advance!
There's no code in the question and I don't know the platform so let me answer this at a high level.
Suppose we have a users collection:
users
  uid_0
    name: Larry
    friends:
      uid_1: true
      uid_2: true
    events_where_friends_are_going:
      event_0:
        uid_1: true
        uid_2: true
  uid_1
    name: Moe
    friends:
      uid_2: true
    events_where_friends_are_going:
      event_0:
        uid_2: true
  uid_2
    name: Curly
and let's say we have a series of events stored in a collection:
events
  event_0
    name: "some event"
    signups:
      uid_1: true
      uid_2: true  // Curly signed up
The process is that when a user signs up for an event, event_0 for example, they are added to that event's signups, and the users collection is then queried for all of the other users they are friends with via the friends sub-collection. They are then added to each of those users' events_where_friends_are_going, creating the event entry if it doesn't exist, or being appended to it if it does.
In the structure above, if Curly signs up for event_0, as shown in the signups collection, the query reveals they are friends with uid_0 and uid_1. Curly is then added to uid_0's and uid_1's events_where_friends_are_going collections.
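A rough sketch of that signup fan-out with the Python Firestore client; the collection and field names ('users', 'events', 'friends', 'signups', 'events_where_friends_are_going') simply mirror the structure above and are assumptions, not a fixed schema:
from google.cloud import firestore

db = firestore.Client()

def sign_up(uid, event_id):
    # 1. Record the signup on the event document.
    db.collection('events').document(event_id).set(
        {'signups': {uid: True}}, merge=True)

    # 2. Find every user who lists this user as a friend.
    friends_query = db.collection('users').where('friends.' + uid, '==', True)

    # 3. Fan out: add this event/user pair to each friend's timeline data.
    for friend_doc in friends_query.stream():
        friend_doc.reference.set(
            {'events_where_friends_are_going': {event_id: {uid: True}}},
            merge=True)
Reading a user's timeline then costs a single document read, at the price of extra writes on every signup.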

How to specify which key/value pairs to exclude in spaCy's Doc.to_disk(path, exclude=['user_data'])?

My nlp pipeline has some Doc extensions that store 3 items (a string for the file name and two dicts which map non-serializable objects). I'd like to exclude only the non-serializable key/value pairs in the user data, but keep the filename.
doc.to_disk(path, exclude=['user_data'])
works as expected, excluding all user data. There are apparently options to instead exclude either 'user_data_keys' or 'user_data_values' but I find no explanation of their usage, and furthermore I can't think of any good reason to store either all the keys without the values or all the values without the keys!
I would like to exclude both keys and values of only certain fields in the doc.user_data. If this is possible, how is it done?
You will need to specify which keys or values you want to exclude; see the serialization fields documentation:
https://spacy.io/api/doc#serialization-fields
data = doc.to_bytes(exclude=["text", "tensor"])
doc.from_disk("./doc.bin", exclude=["user_data"])
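If you only need to keep the filename and drop the two dicts of non-serializable objects, one possible workaround (not from the docs; drop_unserializable_user_data is a hypothetical helper, and pickling is only a rough proxy for what spaCy can actually serialize) is to filter doc.user_data before writing:
import pickle

def drop_unserializable_user_data(doc):
    # Keep only the user_data entries that can be serialized (e.g. the filename string),
    # dropping the entries that map to non-serializable objects.
    cleaned = {}
    for key, value in doc.user_data.items():
        try:
            pickle.dumps(value)
            cleaned[key] = value
        except Exception:
            continue
    doc.user_data = cleaned
    return doc

drop_unserializable_user_data(doc)
doc.to_disk(path)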
Per this thread here, you can try the following workaround:
def remove_unserializable_results(doc):
    # Drop all user data, then clear every custom extension attribute
    # on the Doc and its Tokens so nothing non-serializable is left.
    doc.user_data = {}
    for x in dir(doc._):
        if x in ['get', 'set', 'has']: continue
        setattr(doc._, x, None)
    for token in doc:
        for x in dir(token._):
            if x in ['get', 'set', 'has']: continue
            setattr(token._, x, None)
    return doc

nlp.add_pipe(remove_unserializable_results, last=True)

How do I use values within a list to specify changing selection conditions and export paths?

I'm trying to split a large CSV of data using a condition. To automate this process, I'm pulling a list of the unique conditions from a column in the data set and want to use this list within a loop to specify the condition and also to name the export file.
I've converted the array of values into a list and have tried fitting my function into a loop; however, I believe syntax is the main problem.
# df1718 is my df
# znlist is my list of values (e.g. 0 1 2 3 4)
# serial is specified at the top e.g. '4'
for x in znlist:
    dftemps = df1718[(df1718.varname == 'RoomTemperature') & (df1718.zone == x)]
    dftemps.to_csv('E:\\path\\test%d_zone(x).csv', serial)
So in theory, I would like each iteration to export the data relevant to the next zone in the list and the export file to be named test33_zone0.csv (for example). Thanks for any help!
EDIT:
The error I am getting is: "delimiter" must be string, not int
So, if the error is in saving the file, try this:
dftemps.to_csv('E:\\path\\test{}_zone{}.csv'.format(str(serial),str(x)))
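Putting it back into the loop from the question, a sketch (assuming df1718, znlist and serial are defined as described there):
for x in znlist:
    dftemps = df1718[(df1718.varname == 'RoomTemperature') & (df1718.zone == x)]
    # e.g. serial = '4', x = 0  ->  'E:\\path\\test4_zone0.csv'
    dftemps.to_csv('E:\\path\\test{}_zone{}.csv'.format(serial, x), index=False)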

Pandas HDF5 Select with Where on non natural-named columns

In my continuing spree of exotic pandas/HDF5 issues, I encountered the following:
I have a series of non-natural-named columns (nb: for a good reason, with negative numbers being "system" ids etc.), which normally doesn't cause an issue:
fact_hdf.select('store_0_0', columns=['o', 'a-6', 'm-13'])
However, my select statement falls over on it:
>>> fact_hdf.select('store_0_0', columns=['o', 'a-6', 'm-13'], where=[('a-6', '=', [0, 25, 28])])
blablabla
File "/srv/www/li/venv/local/lib/python2.7/site-packages/tables/table.py", line 1251, in _required_expr_vars
raise NameError("name ``%s`` is not defined" % var)
NameError: name ``a`` is not defined
Is there any way to work around this? I could rename my columns from e.g. "a-1" to "a_1", but that means reloading all of the data in my system, which is rather a lot! :)
Suggestions are very welcome!
Here's a test table
In [1]: df = DataFrame({ 'a-6' : [1,2,3,np.nan] })
In [2]: df
Out[2]:
   a-6
0    1
1    2
2    3
3  NaN
In [3]: df.to_hdf('test.h5','df',mode='w',table=True)
In [5]: df.to_hdf('test.h5','df',mode='w',table=True,data_columns=True)
/usr/local/lib/python2.7/site-packages/tables/path.py:99: NaturalNameWarning: object name is not a valid Python identifier: 'a-6'; it does not match the pattern ``^[a-zA-Z_][a-zA-Z0-9_]*$``; you will not be able to use natural naming to access this object; using ``getattr()`` will still work, though
NaturalNameWarning)
/usr/local/lib/python2.7/site-packages/tables/path.py:99: NaturalNameWarning: object name is not a valid Python identifier: 'a-6_kind'; it does not match the pattern ``^[a-zA-Z_][a-zA-Z0-9_]*$``; you will not be able to use natural naming to access this object; using ``getattr()`` will still work, though
NaturalNameWarning)
/usr/local/lib/python2.7/site-packages/tables/path.py:99: NaturalNameWarning: object name is not a valid Python identifier: 'a-6_dtype'; it does not match the pattern ``^[a-zA-Z_][a-zA-Z0-9_]*$``; you will not be able to use natural naming to access this object; using ``getattr()`` will still work, though
NaturalNameWarning)
There is a way, but you would have to build this into the code itself. You can do a variable substitution on the column names as follows. Here is the existing routine (in master):
def select(self):
    """
    generate the selection
    """
    if self.condition is not None:
        return self.table.table.readWhere(self.condition.format(), start=self.start, stop=self.stop)
    elif self.coordinates is not None:
        return self.table.table.readCoordinates(self.coordinates)
    return self.table.table.read(start=self.start, stop=self.stop)
If instead you do this
(Pdb) self.table.table.readWhere("(x>2.0)",
condvars={ 'x' : getattr(self.table.table.cols,'a-6')})
array([(2, 3.0)],
dtype=[('index', '<i8'), ('a-6', '<f8')])
e.g. by substituting x with the column reference, you can get the data.
This could be done on detection of invalid column names, but it is pretty tricky.
Unfortunately I would suggest renaming your columns.
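If you do rename, a rough sketch of rewriting the store with underscore names (using the 'df' key from the test table above; format='table' is the current spelling of the older table=True):
import pandas as pd

df = pd.read_hdf('test.h5', 'df')
# 'a-6' -> 'a_6', 'm-13' -> 'm_13', etc.
df = df.rename(columns=lambda c: c.replace('-', '_'))
df.to_hdf('test_renamed.h5', 'df', mode='w', format='table', data_columns=True)
# where= now works because the column name is a valid identifier
subset = pd.read_hdf('test_renamed.h5', 'df', where='a_6 == 2')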