Find placeholders and other tensors in a remote graph - TensorFlow

I'm running a distributed graph in a TensorFlow cluster and want to reconnect to it from a client as part of a prediction service. The only way I've found so far to make this work is via queues for the client's input and output, using shared_name.
In principle, named variables could be used for this as well, but is there any other way, e.g. using placeholders?
I've tried exploring the graph from the client's session, but it only seems to contain nodes added by the client itself.
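For reference, the queue-based approach that does work looks roughly like this on the client (a TF 1.x sketch; the queue names, shapes, device placement, and session target are made up and must match whatever the serving graph used):

import tensorflow as tf  # TF 1.x API

features = [0.0] * 784   # stand-in input vector

# The client rebuilds queue ops that resolve to the server's queues via
# shared_name; sharing only happens when the device/container matches too.
with tf.device("/job:worker/task:0"):
    x = tf.placeholder(tf.float32, shape=[784])
    input_queue = tf.FIFOQueue(capacity=10, dtypes=[tf.float32],
                               shapes=[[784]], shared_name="prediction_input")
    output_queue = tf.FIFOQueue(capacity=10, dtypes=[tf.float32],
                                shapes=[[10]], shared_name="prediction_output")
    enqueue_input = input_queue.enqueue(x)
    dequeue_output = output_queue.dequeue()

with tf.Session("grpc://worker0:2222") as sess:
    sess.run(enqueue_input, feed_dict={x: features})
    prediction = sess.run(dequeue_output)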

Related

CoTurn Data Usage Stats on a Multi-User System

We want to track each user's TURN usage separately. I inspected the TURN REST API; as far as I understand, it is only used to authorize a user that already exists in the coturn DB, and this is the point I don't fully understand. I can use an ICE server list that includes different username/credential pairs, but I must have created these username/credential pairs in the coturn DB beforehand. Am I right? If so, I have no idea how to do this. My imagined flow is: detect the user's request to use TURN from the frontend -> generate credentials as in "CoTURN: How to use TURN REST API?" (which I have already achieved) -> if this is a new user, my backend somehow connects to my EC2 instance and runs the "turnadmin create user" command -> then I allow the WebRTC connection -> then I track the usage of that specific user and send it back to my backend somehow.
Is this the correct scenario? If not, how should it be? Is there another way to manage multiple users and their data usage? Any help would be appreciated.
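For reference, the credential generation I have already achieved follows the TURN REST API scheme (a sketch; the function name is mine, and it assumes coturn is configured with a shared auth secret):

import base64
import hashlib
import hmac
import time

def turn_credentials(user_id, shared_secret, ttl=3600):
    # Ephemeral credentials per the TURN REST API memo: the username is
    # "expiry-timestamp:user-id" and the password is
    # base64(HMAC-SHA1(shared_secret, username)). Embedding the user id
    # in the username is what would let usage be attributed per user.
    username = "%d:%s" % (int(time.time()) + ttl, user_id)
    digest = hmac.new(shared_secret.encode(), username.encode(), hashlib.sha1)
    return username, base64.b64encode(digest.digest()).decode()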
As far as I understand, to get the stats data we must use a Redis database. I tried to use it and managed to display the traffic data (with psubscribe turn/realm/*/user/*/allocation/*/traffic), but the other subscription events never fired (psubscribe turn/realm/*/user/*/allocation/*/traffic/peer or psubscribe turn/realm/*/user/*/allocation/*/total_traffic, even when the allocation is deleted). So I tried to get past traffic data from the Redis DB, but I couldn't find out how: in Redis, the KEYS * command gives only "status" events.
Even if I get this traffic data, I don't see how to use it with multiple users. Currently our project has a single user (in coturn terms), and all other users use TURN through this one user.
By the way, we tried to track usage on the peer connection object we create from the RTCPeerConnection interface, but I noticed the incoming byte counts are lower than what Redis reports. So I think there is some loss, and I should calculate the usage on the TURN side.

How can I set up authenticated links between processes in Elixir?

Background:
I am trying to write a program in Elixir to test distributed algorithms by running them on a set of processes and recording certain statistics. To begin with I will be running these processes on the same machine, but the intention is eventually to have them running on separate machines/VMs.
Problem:
One of the requirements for algorithms I wish to implement is that messages include authentication. That is, whenever a process sends a message to another process, the receiver should be able to verify that this message did indeed come from the sender, and wasn't forged by another process. The following snippets should help to illustrate the idea:
# Sender
a = authenticate(self(), receiver, msg)
send(receiver, {msg, self(), a})

# Receiver
if verify(msg, sender, a) do
  deliver(msg)
end
Thoughts so far:
I have searched far and wide for any documentation of authenticated communication between Elixir processes, and haven't been able to find anything. Perhaps in some way this is already done for me behind the scenes, but so far I haven't been able to verify this. If it were the case, I wonder if it would still be correct when the processes aren't running on the same machine.
I have looked into the possibility of using the SSL/TLS functions provided by Erlang, but with my limited knowledge in this area, I'm not sure how they would apply to my situation of running a set of peer processes, as opposed to the more standard client-server use in HTTPS. If I went down this route, I believe I would have to set up all the keys and signatures myself beforehand, which I believe could be possible using the X509 Elixir package, though I'm not sure whether this is appropriate, and it may be more work than necessary.
In summary:
Is there a standard/pre-existing way to achieve authenticated communication between processes in Elixir?
If yes, will it be suitable for processes communicating between separate machines/VMs?
If no to either of the above, what is the simplest way I could achieve this myself?
As Aleksei and Paweł point out, if something is in your cluster, it is already trusted. It's not quite like authenticating random web requests that could have originated virtually anywhere; you are talking about messages originating from inside your local network of trusted machines. If some nefarious actor is running on one of your servers, you have far bigger problems to worry about than just authenticating messages.
There are very few limitations put on Elixir/Erlang processes running inside a cluster with respect to security: their state can be inspected by any other process, for example. Some of this transparency is by design and necessary in order to have a fault-tolerant system capable of doing hot code reloads, but the conversation about the specific hows and whys is too nuanced for me to do it justice.
If you really need some logging to have an auditable "paper trail" verifying which process sent which message, I think you'll have to roll your own solution, which could rely on a number of common techniques (such as keys + signatures, blockchains, etc.). But keep in mind: these concerns would come up if you were dealing with web requests between different servers anyhow! And there are already protocols for establishing secure connections between computers, so I would not recommend re-inventing those network protocols in your application.
Your time may be better spent working on the algorithms themselves and not trying to re-invent the wheel on security. Your app should focus on the unique stuff that nobody else is doing (algorithms in your case). If you have multiple interconnected VMs passing messages to each other, all the "security" requirements there come with defining the proper access to each machine/subnet, and that requirement holds no matter what application/language you're running on them.
The more I read about what you are trying to achieve, the more I am sure that all you need is the footprint of the calling process.
For synchronous calls (GenServer.handle_call/3), you already get that footprint as the second parameter.
For asynchronous messages, you might add the caller information to the messages themselves. For example, instead of sending a plain :foo message, send {:foo, pid()} or something even more sophisticated like {:foo, {pid(), timestamp(), ip(), ...}}, and have the callee verify those.
That would be safe by all means: the Erlang cluster ensures these messages come from trusted sources, and your internal validation can ensure the source is valid within your internal rules.
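A minimal sketch of that idea in Elixir, combining the caller footprint with an HMAC signature (the module name, pre-shared key, and message shape are my assumptions, not anything built in; :crypto.mac/4 requires OTP 22.1+):

# Tag each message with the sender's footprint and an HMAC over the
# payload, using a pre-shared key (hard-coded here only for the sketch).
defmodule SignedMessage do
  @key "pre-shared-secret"

  def send_signed(receiver, msg) do
    meta = {self(), System.os_time(:millisecond)}
    sig = :crypto.mac(:hmac, :sha256, @key, :erlang.term_to_binary({msg, meta}))
    send(receiver, {msg, meta, sig})
  end

  def verify({msg, meta, sig}) do
    expected = :crypto.mac(:hmac, :sha256, @key, :erlang.term_to_binary({msg, meta}))
    if sig == expected, do: {:ok, msg}, else: {:error, :bad_signature}
  end
end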

Is there a way to set up TensorFlow RL training with a client via HTTP requests (or equivalent)?

Situation:
I have an environment I cannot directly control. I can, however, hook into that environment and make HTTP requests from it. Using POST/GET requests, I'd like to send the current state data and retrieve the most probable next action to take. I'd like to do this for at least one client, but could use more. The desired learning method is reinforcement learning (so I'd send a reward along with the state in the POST request).
I have seen similar behaviour in Unity's implementation of TensorFlow, where it spawns multiple instances of your client to train the AI.
OpenAI's Gym environment is a very popular tool for training and visualizing reinforcement learning problems. If your environment doesn't already exist in Gym, you can make your own. The basic structure of learning in Gym is as follows (a rough client-side version is sketched after the list):
Start in an initial state s
Send an HTTP request to the server with state information. Wait for the server to send a response containing an action.
Your agent executes this action in the environment, ends up in a new state s', and receives a reward r.
Send an HTTP request to the server with the reward r so that you may update your policy.
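Under those assumptions, a rough client-side loop might look like this (the endpoint URL and JSON shapes are invented, and it assumes the classic Gym API where step returns a 4-tuple):

import gym       # classic Gym API: reset() returns just the observation
import requests

SERVER = "http://localhost:5000"  # hypothetical training server

env = gym.make("CartPole-v1")
state = env.reset()
done = False
while not done:
    # Step 2: ask the server for an action given the current state.
    action = requests.post(SERVER + "/act",
                           json={"state": state.tolist()}).json()["action"]
    # Step 3: execute the action in the environment.
    next_state, reward, done, _ = env.step(action)
    # Step 4: report the transition so the server can update its policy.
    requests.post(SERVER + "/learn",
                  json={"state": state.tolist(), "action": action,
                        "reward": reward, "next_state": next_state.tolist(),
                        "done": done})
    state = next_state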
On your server, you want to write a request handler that accepts the state information s, computes the next action, and sends this action back (see the server sketch below).
In terms of selecting an action from your policy: you could use Q-learning, which stores the value of taking each action in a given state, and select actions with an epsilon-greedy approach. You could also implement a neural network whose input is your state vector and whose output is a probability distribution over your possible actions; this is known as a policy gradient method.
Once you receive a reward, you can use it to improve your policy. This is trickier with the policy gradient method, which would require you to send gradient information back in order to leverage automatic backpropagation tools.
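A sketch of the Q-learning variant of that server, with an epsilon-greedy handler and a one-step update (the framework choice, routes, and hyperparameters are mine; a table like this also assumes states can be used as dictionary keys):

import random
from collections import defaultdict

from flask import Flask, jsonify, request  # framework choice is illustrative

app = Flask(__name__)

# Tabular Q-learning state: Q[(state, action)] -> estimated value.
# JSON lists are converted to tuples so states are hashable.
Q = defaultdict(float)
ACTIONS = [0, 1]                 # hypothetical discrete action space
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1

def select_action(state):
    # Epsilon-greedy: explore with probability EPSILON, otherwise greedy.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

@app.route("/act", methods=["POST"])
def act():
    state = tuple(request.get_json()["state"])
    return jsonify({"action": select_action(state)})

@app.route("/learn", methods=["POST"])
def learn():
    body = request.get_json()
    s, a = tuple(body["state"]), body["action"]
    s2, r = tuple(body["next_state"]), body["reward"]
    # One-step Q-learning backup.
    best_next = 0.0 if body["done"] else max(Q[(s2, b)] for b in ACTIONS)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
    return jsonify({"ok": True})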
Note that this method of selecting actions is incredibly slow for training. It would likely be best to train locally and perhaps use a server when deploying the agent if necessary.

TensorFlow session.run() doesn't work with hug

I am struggling with TensorFlow's session.run() under hug in Python.
If I run session.run() for prediction without hug, it works fine.
But if I run it under hug, it produces no result (and no error either).
Has anyone come across this scenario?
Please help me.
My environment:
tensorflow version 1.2.1
hug 2.3.0
python version 3.5.2
I don't know if this is the cause of your problem, but it might be related to the fact that hug is a server and thus has some asynchronous code somewhere. Maybe what is happening is that, because hug is trying to handle the request, it starts the session but doesn't wait for it to run.
Again, I don't know if this is the root cause, or even if this scenario makes sense.
What I can suggest, though, based on the little experience I have with TensorFlow, is to set up a different architecture.
If I understand correctly, what you are trying to do now is: send a request to the hug API server, pick up the data from that request, and feed it to a TensorFlow session. You want to wait for TensorFlow to predict something and return that prediction to the user that made the request.
I would approach this problem in a slightly different way. I would have the client establish a WebSocket connection and use it to send data to the server. Upon receiving this data, the server would place it in a message queue. You could use a real message queue like RabbitMQ or Kafka, but to be honest, if you don't have a production-quality app, you might as well just use Redis or even Mongo as a message queue. The messages in this queue would be picked up by a worker process running TensorFlow, which would use the data in the messages to perform a prediction. The prediction would then be placed in a different queue and picked up by the server, which would return it to the client through the WebSocket.
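For illustration, the worker half of that design might look roughly like this with Redis lists as the queue (queue names are invented, and run_model stands in for whatever wraps your session.run call):

import json

import redis  # assumes the redis-py client and a running Redis instance

r = redis.Redis()

def worker():
    # The TensorFlow worker: block until a job arrives on the request
    # queue, run the prediction, and push the result onto a reply queue
    # keyed by job id.
    while True:
        _, raw = r.blpop("predict:requests")
        job = json.loads(raw)
        result = run_model(job["features"])  # hypothetical session.run wrapper
        r.rpush("predict:results:%s" % job["id"], json.dumps(result))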
The major thing you solve with this approach is separating the API server from the TensorFlow worker. This will allow you to debug and test each part independently. Moreover, you will not be clogging your server while waiting for TensorFlow: the server essentially acts as a scheduler, the worker does the work, and when it finishes it just returns the result.
I hope this helps, and sorry again if my suggestion about the possible root cause is wrong.

What are some good "load balancing issues" to know?

Hey there, I am a recent grad, and looking at a couple of jobs I am applying for, I see that I need to know things like runtime complexity (straightforward enough), caching (memcached!), and load balancing issues (no idea on this!).
So, what kinds of load balancing issues and solutions should I try to learn about, or at least be vaguely familiar with, for .NET or Java jobs?
Googling around gives me things like network load balancing, but wouldn't that usually be administered by someone other than a software developer?
One thing I can think of is session management. By default, a session ID points to some in-memory data on the server. However, when you use load balancing, there are multiple servers. What happens when data is stored in the session on machine 1, but for the next request the user is directed to machine 2? Their session data would be lost.
So you'll have to make sure that either the user gets back to the same machine for every subsequent request (a "sticky" connection), or you don't use in-proc session state but out-of-proc session state, where session data is stored in, for example, a database.
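A toy illustration of the sticky idea, hashing the session ID to a fixed server (the pool and scheme are invented; note it breaks if the pool changes, which is one reason out-of-proc session state is more robust):

import hashlib

SERVERS = ["app1:8080", "app2:8080", "app3:8080"]  # hypothetical pool

def route(session_id):
    # "Sticky" routing: the same session ID always hashes to the same
    # server, so its in-memory session state stays on one machine.
    digest = hashlib.sha256(session_id.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]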
There is a concept of load distribution, where requests are sprayed across a number of servers (usually with session affinity). Here there is no feedback on how busy any particular server may be; we just rely on statistical sharing of the load. You could view the WebSphere HTTP plugin in WAS ND as doing this. It actually works pretty well, even for substantial web sites.
Load balancing tries to be cleverer than that: some feedback on the relative load of the servers determines where new requests go (even then, session affinity tends to be treated as a higher priority than balancing load). The WebSphere On Demand Router, originally delivered in XD, does this. If you read this article you will see the kinds of algorithms used.
You can achieve balancing with network spraying devices; they can consult "agents" running on the servers, which give feedback to the sprayer as a basis for deciding where requests should go. Hence even this hardware-based approach can have a software element. See Dynamic Feedback Protocol.
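In miniature, the feedback idea amounts to something like this (server names and load scores are invented):

# Toy feedback-based balancer: agents on each server report a load score
# (lower is better) and new requests go to the least-loaded server.
loads = {"app1:8080": 0.2, "app2:8080": 0.7, "app3:8080": 0.4}

def report_load(server, load):
    # Called by an agent on each server, as in Dynamic Feedback Protocol.
    loads[server] = load

def pick_server():
    return min(loads, key=loads.get)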
Network combinatorics, and max-flow/min-cut theorems and their uses.