insert and query an OrderedDict in MongoHQ - pymongo

After successfully inserting an OrderedDict object into MongoHQ, I try to query the same OrderedDict back with PyMongo's collection.find_one() command. This fails: the order of the keys is lost, and the dictionary becomes an ordinary dictionary.
My code looks like this:
import collections
import pymongo
test = collections.OrderedDict()
test.update({'test1': 1})
test.update({'test2': 2})
test.update({'test3': 3})
test.update({'test4': 4})
test.update({'test5': 5})
test
>>> OrderedDict([('test1', 1), ('test2', 2), ('test3', 3), ('test4', 4), ('test5', 5)])
db_conn = pymongo.Connection('mongodb://*:*#*.mongohq.com:*/testDatabase')
db_conn.testDatabase['testCollection'].insert(test)
test_new = db_conn.testDatabase['testCollection'].find_one()
print test_new
>>> {u'test1': 1, u'test3': 3, u'test2': 2, u'test5': 5, u'test4': 4, u'_id': ObjectId('52cc777b92c49c146cb5e3db')}
Could you guys help me out? Thanks a lot.

Maybe you want to try
test_new = db_conn.testDatabase['testCollection'].find_one(as_class=collections.OrderedDict)
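As a side note (my own addition, not part of the original answer): the keys themselves are never lost, only their order, because PyMongo decodes documents into plain dicts by default. A minimal stdlib illustration, with no database involved:

```python
import collections

# Rebuild the document from the question; OrderedDict remembers
# the order in which keys were inserted.
test = collections.OrderedDict()
for i in range(1, 6):
    test['test%d' % i] = i

# A plain dict -- what find_one() returns by default -- carries the same
# data, but on Python < 3.7 it made no ordering guarantee, which is why
# the keys came back "shuffled".
plain = dict(test)
print(list(test.keys()))
```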

Related

Keras Sequential with multiple inputs

Given 3 arrays as input, the network should learn what links the data in the 1st, 2nd, and 3rd arrays.
In particular:
1st array contains integer numbers (e.g. 2, 3, 5, 6, 7)
2nd array contains integer numbers (e.g. 3, 2, 4, 6, 2)
3rd array contains integer numbers that are the results of an operation done between the data in the 1st and 2nd arrays (e.g. 6, 6, 20, 36, 14).
As you can see from the example data above, the operation done is a multiplication, so the network should learn this, giving:
model.predict(11,2) = 22.
Here's the code I've used:
import logging
import numpy as np
import tensorflow as tf

primo = np.array([2, 3, 5, 6, 7])
secondo = np.array([3, 2, 4, 6, 2])
risu = np.array([6, 6, 20, 36, 14])

l0 = tf.keras.layers.Dense(units=1, input_shape=[1])
model = tf.keras.Sequential([l0])  # overwritten below, so effectively unused

input1 = tf.keras.layers.Input(shape=(1,), name="Pri")
input2 = tf.keras.layers.Input(shape=(1,), name="Sec")
merged = tf.keras.layers.Concatenate(axis=1)([input1, input2])
dense1 = tf.keras.layers.Dense(
    2,
    input_dim=2,
    activation=tf.keras.activations.sigmoid,
    use_bias=True)(merged)
output = tf.keras.layers.Dense(
    1,
    activation=tf.keras.activations.relu,
    use_bias=True)(dense1)
model = tf.keras.models.Model([input1, input2], output)
model.compile(
    loss="mean_squared_error",
    optimizer=tf.keras.optimizers.Adam(0.1))
model.fit([primo, secondo], risu, epochs=500, verbose=False, batch_size=16)
print(model.predict(11, 2))
My questions are:
Is it correct to concatenate the 2 inputs as I did? I don't understand whether, concatenated this way, the network understands that input1 and input2 are 2 different inputs.
I'm not able to get model.predict() working; every attempt results in an error.
Your model has two inputs, each with shape (None, 1), so you need to use np.expand_dims:
print(model.predict([np.expand_dims(np.array(11), 0), np.expand_dims(np.array(2), 0)]))
Output:
[[20.316557]]
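For comparison (an addition, not from the answer itself): np.expand_dims(np.array(11), 0) simply builds a one-element batch of shape (1,), so writing the inputs as one-element array literals is equivalent:

```python
import numpy as np

# Each named input expects a leading batch dimension; a one-element
# array of shape (1,) is exactly what np.expand_dims(np.array(11), 0)
# produces.
a = np.expand_dims(np.array(11), 0)
b = np.array([11])
print(a.shape, b.shape)  # (1,) (1,)

# With the trained model from above this becomes:
# model.predict([np.array([11]), np.array([2])])
```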

numpy append in a for loop with different sizes

I have a for loop where i changes by 2, and I want to save a value into a numpy array at each iteration, with the array index changing by 1.
n = 8  # steps
# random sequence
rand_seq = np.zeros(n-1)
for i in range(0, (n-1)*2, 2):
    curr_state = i + 3
I want to get curr_state outside the loop, into the rand_seq array (seven values).
Can you help me with that?
Thanks a lot.
A much simpler version (if I understand the question correctly) would be:
np.arange(3, 15+1, 2)
where 3 = start, 15 = stop, 2 = step size.
In general, when using numpy, try to avoid appending elements in a for loop, as this is inefficient. I would suggest checking out the documentation of np.arange(), np.array() and np.zeros(), as in my experience these will solve 90% of array-creation issues.
A straightforward list iteration:
In [313]: alist = []
     ...: for i in range(0, (8-1)*2, 2):
     ...:     alist.append(i+3)
     ...:
In [314]: alist
Out[314]: [3, 5, 7, 9, 11, 13, 15]
or cast as a list comprehension:
In [315]: [i+3 for i in range(0,(8-1)*2,2)]
Out[315]: [3, 5, 7, 9, 11, 13, 15]
Or if you make an array with the same range parameters:
In [316]: arr = np.arange(0,(8-1)*2,2)
In [317]: arr
Out[317]: array([ 0, 2, 4, 6, 8, 10, 12])
you can add the 3 with one simple expression:
In [318]: arr + 3
Out[318]: array([ 3, 5, 7, 9, 11, 13, 15])
With lists, iteration and comprehensions are great. With numpy you should try to make an array, such as with arange, and modify that with whole-array methods (not with iterations).
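If the loop really has to stay (say, curr_state depends on more than i), one sketch is to let enumerate supply a second counter that steps by 1 while i steps by 2:

```python
import numpy as np

n = 8  # steps
rand_seq = np.zeros(n - 1)
# j advances by 1 per iteration while i advances by 2,
# so j indexes rand_seq directly.
for j, i in enumerate(range(0, (n - 1) * 2, 2)):
    curr_state = i + 3
    rand_seq[j] = curr_state
print(rand_seq)  # [ 3.  5.  7.  9. 11. 13. 15.]
```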

Is there a numpy function like np.fill(), but for arrays as fill value?

I'm trying to build an array of some given shape in which all elements are given by another array. Is there a function in numpy which does that efficiently, similar to np.full(), or any other elegant way, without simply employing for loops?
Example: Let's say I want an array with shape
(dim1,dim2) filled with a given, constant scalar value. Numpy has np.full() for this:
my_array = np.full((dim1,dim2),value)
I'm looking for an analogous way of doing this, but I want the array to be filled with another array of shape (filldim1, filldim2). A brute-force way would be this:
my_array = np.array([])
for i in range(dim1):
    for j in range(dim2):
        my_array = np.append(my_array, fill_array)
my_array = my_array.reshape((dim1, dim2, filldim1, filldim2))
EDIT
I was being stupid, np.full() does take arrays as fill value if the shape is modified accordingly:
my_array = np.full((dim1,dim2,filldim1,filldim2),fill_array)
Thanks for pointing that out, @Arne!
You can use np.tile:
>>> shape = (2, 3)
>>> fill_shape = (4, 5)
>>> fill_arr = np.random.randn(*fill_shape)
>>> arr = np.tile(fill_arr, [*shape, 1, 1])
>>> arr.shape
(2, 3, 4, 5)
>>> np.all(arr[0, 0] == fill_arr)
True
Edit: better answer, as suggested by @Arne, directly using np.full:
>>> arr = np.full([*shape, *fill_shape], fill_arr)
>>> arr.shape
(2, 3, 4, 5)
>>> np.all(arr[0, 0] == fill_arr)
True
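One more option (my addition, not from the answers above): np.broadcast_to produces the same shape as a read-only view instead of a copy, which can matter when shape is large; call .copy() on the result if you need it writable:

```python
import numpy as np

shape = (2, 3)
fill_arr = np.arange(20.0).reshape(4, 5)
# broadcast_to does not copy fill_arr; every (4, 5) block is a view
# of the same memory, and the result is read-only.
arr = np.broadcast_to(fill_arr, (*shape, *fill_arr.shape))
print(arr.shape)  # (2, 3, 4, 5)
```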

group by key with pandas series and export to_dict()

I have a dictionary that looks like this:
d = {1:0, 2:0, 3:1, 4:0, 5:2, 6:1, 7:2, 8:0}
And I want to group the keys by their values, such that I get:
pandas_ordered = { 0:[1,2,4,8], 1:[3,6], 2:[5,7] }
But this command fails:
pd.Series(list(d.values())).groupby(list(partition.keys())).to_dict()
Below is an example:
# Example:
import pandas as pd

d = {1:0, 2:0, 3:1, 4:0, 5:2, 6:1, 7:2, 8:0}

def pandas_groupby(dictionary):
    values = list(dictionary.values())
    keys = list(dictionary.keys())
    return pd.Series(values).groupby(keys).to_dict()

pandas_groupby(d)
The above code produces the error:
AttributeError: Cannot access callable attribute 'to_dict' of
'SeriesGroupBy' objects, try using the 'apply' method
Any ideas on how to do this?
Your dict is already given by the groups in your groupby:
d = {1:0, 2:0, 3:1, 4:0, 5:2, 6:1, 7:2, 8:0}
s = pd.Series(d)
s.groupby(s).groups
{0: Int64Index([1, 2, 4, 8], dtype='int64'),
1: Int64Index([3, 6], dtype='int64'),
2: Int64Index([5, 7], dtype='int64')}
But of course, you can always agg and customize:
s.groupby(s).agg(lambda x: tuple(x.index)).to_dict()
{0: (1, 2, 4, 8), 1: (3, 6), 2: (5, 7)}
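To land exactly on the {0: [1, 2, 4, 8], ...} shape from the question (lists rather than tuples), apply works the same way; this variant is my own addition:

```python
import pandas as pd

d = {1: 0, 2: 0, 3: 1, 4: 0, 5: 2, 6: 1, 7: 2, 8: 0}
s = pd.Series(d)
# For each group, collect the index labels (the original dict keys)
# into a list, then convert the resulting Series to a dict.
result = s.groupby(s).apply(lambda x: list(x.index)).to_dict()
print(result)  # {0: [1, 2, 4, 8], 1: [3, 6], 2: [5, 7]}
```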

Is it possible to spread a list inside a list in Kotlin?

Is it possible to do argument unpacking in Kotlin, similar to how it is done in Python? E.g.
>>> a = [1,2,3]
>>> b = [*a,4,5,6]
>>> b
[1, 2, 3, 4, 5, 6]
I know that it is possible in Kotlin as follows:
>>> listOf(1, 2, 3, *listOf(4,5,6).toTypedArray())
[1, 2, 3, 4, 5, 6]
Feels like there is an easier way in Kotlin. Any ideas?
The spread operator works on arrays, so you can do this:
listOf(1, 2, 3, *(arrayOf(4, 5, 6)))
The Python code can be expressed with the following Kotlin code. As already answered by zsmb13, the * operator is also available in Kotlin:
fun main(args: Array<String>) {
    val a = arrayOf(1, 2, 3)
    val b = arrayOf(*a, 4, 5, 6)
    println(b.contentToString())
}
The documentation tells us:
When we call a vararg-function, we can pass arguments one-by-one, e.g. asList(1, 2, 3), or, if we already have an array and want to pass its contents to the function, we use the spread operator (prefix the array with *):
Also related to this question.