How can I access a value in a sequence type? - tensorflow

There are the following attributes in client_output:
weights_delta = attr.ib()
client_weight = attr.ib()
model_output = attr.ib()
client_loss = attr.ib()
After that, I turned client_output into a sequence via a = tff.federated_collect(client_output) and round_model_delta = tff.federated_map(selecting_fn, a), and declared
@tff.tf_computation()  # append
def selecting_fn(a):
    # TODO
    return round_model_delta
In the process of averaging on the server, I want to average the weights_delta by selecting only the clients with a small loss value. So I tried to access it via a.weights_delta, but it doesn't work.

tff.federated_collect returns a tff.SequenceType placed at tff.SERVER, which you can manipulate the same way a client dataset is usually handled inside a method decorated with tff.tf_computation.
Note that you have to use the tff.federated_collect operator in the scope of a tff.federated_computation. What you probably want to do[*] is pass the sequence into a tff.tf_computation using the tff.federated_map operator. Once inside the tff.tf_computation, you can think of it as a tf.data.Dataset object, and everything in the tf.data module is available.
[*] I am guessing. More detailed explanation of what you would like to achieve would be helpful.
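On that guess, here is a minimal sketch of the pattern. Everything below is assumed rather than taken from your code: the element spec, the [10] shape, and LOSS_THRESHOLD are hypothetical placeholders, and weights_delta is treated as a single tensor for simplicity (in practice it is usually a structure you would map over):
import collections
import tensorflow as tf
import tensorflow_federated as tff

LOSS_THRESHOLD = 0.5  # hypothetical cutoff for "small loss"

# Assumed element type -- replace with the real type of client_output.
CLIENT_OUTPUT_SPEC = tff.to_type(collections.OrderedDict(
    weights_delta=tff.TensorType(tf.float32, [10]),
    client_loss=tff.TensorType(tf.float32),
))

@tff.tf_computation(tff.SequenceType(CLIENT_OUTPUT_SPEC))
def selecting_fn(client_outputs):
    # Inside the tf_computation the sequence behaves like a tf.data.Dataset,
    # so the usual tf.data transformations apply.
    kept = client_outputs.filter(lambda o: o['client_loss'] < LOSS_THRESHOLD)
    # Sum the surviving deltas, count them, and return the average.
    total, count = kept.reduce(
        (tf.zeros([10]), tf.constant(0.0)),
        lambda acc, o: (acc[0] + o['weights_delta'], acc[1] + 1.0))
    return total / count
You would then wire it up exactly as in the question, inside a tff.federated_computation: a = tff.federated_collect(client_output) followed by round_model_delta = tff.federated_map(selecting_fn, a).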

How to call traverseLinkedWorkItems method using Velocity script block?

In the Polarion documentation:
https://almdemo.polarion.com/polarion/sdk/doc/javadoc/com/polarion/alm/tracker/model/IWorkItem.html#traverseLinkedWorkitems(java.util.Set,java.util.Set,java.util.Set,com.polarion.alm.tracker.model.IWorkItem.ITerminalCondition)
I have created empty sets using $objectFactory.newSet() to account for the first 3 parameters, and I have tried "null" for the conditional parameter, but nothing works.
This is an example of what I have tried:
#set($project = "Project X")
#set($workItem1 = 'ABC-123')
#set($emptySet = $objectFactory.newSet())
#set($ts1 = $trackerService.getWorkItem($project,$workItem1))
$ts1 ##output: PObject(WorkItem; subterra:data-service:objects:/default/Project X${WorkItem}ABC-123)
$ts1.traverseLinkedWorkItems($emptySet,$emptySet,$emptySet,'null')
The output is always $ts1.traverseLinkedWorkItems($emptySet,$emptySet,$emptySet,'null')
Is there no way to do this in Velocity? I have seen only one other post regarding this question:
https://community.sw.siemens.com/s/question/0D54O000075P0SCSA0/any-way-to-call-traverselinkedworkitems-from-a-velocity-script-block-widget
Have you tried $null as the last argument? As an undefined reference, it will translate to null: $ts1.traverseLinkedWorkItems($emptySet,$emptySet,$emptySet,$null)
Note that this will only work if Velocity is not running in strict mode.

Pymongo: insert_many() gives "TypeError: document must be instance of dict" for list of dicts

I haven't been able to find any relevant solutions to my problem when googling, so I thought I'd try here.
I have a program where I parse through folders for a certain kind of trace file, and then save these in a MongoDB database. Like so:
import pymongo

def parse(trace):
    ...
    post = {'Thing1': thing,
            'Thing2': other_thing,
            # etc.
            }
    return post

def function(source_path):
    posts = []
    ...
    post = parse(trace)
    posts.append(post)
    return posts

posts = function(source_path)
client = pymongo.MongoClient()
db = client.database
collection = db.collection
insert = collection.insert_many(posts)
However, when I get to "insert = collection.insert_many(posts)", it raises an error:
TypeError: document must be an instance of dict, bson.son.SON, bson.raw_bson.RawBSONDocument, or a type that inherits from collections.MutableMapping
According to the debugger, "posts" is a list of about 1000 dicts, which should be valid input according to all of my research. If I construct a smaller list of dicts and call insert_many(), it works flawlessly.
Does anyone know what the issue may be?
Some more debugging revealed the issue to be that the "parse" function sometimes returned None rather than a dict. Easily fixed.
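For anyone hitting the same thing, a minimal guard (assuming, as above, that parse() returns None for traces it cannot handle) is to drop those entries before the insert:
# Filter out traces that parse() could not handle (it returned None).
posts = [post for post in posts if post is not None]
if posts:  # insert_many raises InvalidOperation for an empty list
    insert = collection.insert_many(posts)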

Apache Flink Error Handling and Conditional Processing

I am new to Flink and have gone through the site(s)/examples/blogs to get started. I am struggling with the correct use of operators. Basically I have 2 questions.
Question 1: Does Flink support declarative exception handling? I need to handle parse/validate/... errors.
Can I use org.apache.flink.runtime.operators.sort.ExceptionHandler or similar to handle errors?
Or is a Rich/FlatMap function my best option?
If a Rich/FlatMap is the only option, is there a way to get a handle to the stream inside the Rich/FlatMap function so that Sink(s) could be attached for error processing?
Question 2: Can I conditionally attach different Sink(s)?
Based on certain field(s) in the keyed split streams I need to select different sink(s); do I split the stream again, or use a Rich/FlatMap to handle that?
I am using Flink 1.3.2. Here is the relevant portion of my job
.....
.....
DataStream<String> eventTextStream = env.addSource(messageSource);
KeyedStream<EventPojo, Tuple> eventPojoStream = eventTextStream
        // parse, transform or enrich
        .flatMap(new MyParseTransformEnrichFunction())
        .assignTimestampsAndWatermarks(new EventAscendingTimestampExtractor())
        .keyBy("eventId");
// split stream based on eventType as different reduce and windowing functions need to be applied
SplitStream<EventPojo> splitStream = eventPojoStream
        .split(new EventStreamSplitFunction());
// need to apply reduce function
DataStream<EventPojo> event1TypeStream = splitStream.select("event1Type");
// need to apply reduce function
DataStream<EventPojo> event2TypeStream = splitStream.select("event2Type");
// need to apply time based windowing function
DataStream<EventPojo> event3TypeStream = splitStream.select("event3Type");
....
....
env.execute("Event Processing");
Am I using the correct operators here?
Update 1:
Tried using the ProcessFunction as suggested by @alpinegizmo, but that didn't work, as it depends on a keyed stream, which I don't have until I parse/validate the input. I get "InvalidProgramException: Field expression must be equal to '*' or '_' for non-composite types.".
It's such a common use case where you first parse/validate input and don't have a keyed stream yet, so how do you solve it?
Thanks for your patience and help.
There's one key building block that you've overlooked: take a look at side outputs.
This mechanism provides a type-safe way to produce any number of additional output streams, which can be a clean way to report errors, among other uses. In Flink 1.3 side outputs can only be used with a ProcessFunction, but 1.4 will add side outputs to ProcessWindowFunction.

Odoo - Understanding recordsets in the new api

Please look at the following code block:
class hr_payslip:
    ...
    @api.multi  # same behavior with or without method decorators
    def do_something(self):
        print "self------------------------------->", self
        for r in self:
            print "r------------------------------->", r
As you can see, I'm overriding the 'hr.payslip' model and I need to access some field inside this method. The problem is that what gets printed doesn't make sense to me:
self-------------------------------> hr.payslip(hr.payslip(1,),)
r-------------------------------> hr.payslip(hr.payslip(1,),)
Why is it the same thing inside and outside of the for loop? If it's always a recordset, how would one access a single record's fields?
My lack of understanding is probably connected to this question also:
Odoo - Cannot loop trough model records
Working on RecordSets always means working on RecordSets. When you loop over one RecordSet, you will get RecordSets as the looping variable. But you can only access fields directly when the length of a RecordSet is 1 or 0. You can test it fairly easily (with more than one payslip in the database!):
slips = self.env['hr.payslip'].search([])
# Exception because you cannot access the field directly on multi entries
print slips.id
# Working
print slips.ids
for slip in slips:
    print slip.id
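Applied to the method from the question, a minimal sketch (same Odoo 8-style syntax as above; do_something_single and the use of .id are just illustrative):
@api.multi
def do_something(self):
    # Each `r` is itself a recordset, but of length 1, so direct
    # field access is allowed inside the loop.
    for r in self:
        print r.id

@api.multi
def do_something_single(self):
    # If the method is only ever meant to run on exactly one record,
    # assert that with ensure_one() and read fields off self directly.
    self.ensure_one()
    print self.id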

D3 Graph Example Using In Memory Object

This seems like it should be simple, but I have spent literally hours without any success.
Take the D3 graph example at http://bl.ocks.org/mbostock/950642. The example uses a local file called graph.json. I have set up a Rails app to serve a similar graph, however I don't want to write the JSON out to a file. Rather, I generate the nodes and links into an object such as:
{"nodes":[{"node_type":"Person","name":"Damien","id":"damien_person"}, {"node_type":"Person","name":"Grant","id":"grant_person"}],
"links":[{"source":"damien_person","target":"grant_person","label":"Friends"}]}
Now when I render the D3, I need to update the call d3.json("graph.json", function(json) {...}); to reference my in-memory object rather than the local file (or URL). However, everything I've tried breaks my HTML/JavaScript. For example, I tried setting var dataset = <%= raw(@myInMemoryObject) %>;, and that works for assignment (I did an alert on the dataset), but I can't get the D3 code to use it.
How can I replace the d3.json call in order to use my in-memory object?
Thank you,
Damien
Your idea of using, for example, var dataset = <%= raw(@myInMemoryObject) %>; is the right way to go, but you need to prep your object to be in the right format.
The nodes specified in the links need to be either numeric references to nodes in the nodes array, e.g. 0 for the first, 1 for the second:
var json = {
    "nodes": [{"name":"Damien","id":"a"}, {"name":"Bob","id":"b"}],
    "links": [{"source":0, "target":1, "value":1}]
};
or references to the actual objects that make up the nodes themselves:
var a = {"name":"Damien","id":"a"};
var b = {"name":"Bob","id":"b"};
var json = {
    "nodes": [a, b],
    "links": [{"source":a, "target":b, "value":1}]
};
Relevant discussion is here: https://groups.google.com/forum/?fromgroups=#!topic/d3-js/LWuhBeEipz4
Example here: http://jsfiddle.net/5A9eV/1/