I'm looking for a way to execute multiple commands sequentially. What I do right now is create a new channel for each command and close it afterwards. If I use just one channel I get an error that a channel cannot be reused, but I'm not sure this is the right approach, because opening a channel for each command sounds costly.
What I want to do is open an SSH connection to an OpenWrt device, which ships an executable called uci that can modify the configuration files on the device, and use it like this:
uci set network.lan.ipaddr='192.168.1.2'
uci set network.lan.dns='192.168.1.1'
My code is similar to this:
let tcp = TcpStream::connect("127.0.0.1:22").unwrap();
let mut sess = Session::new().unwrap();
sess.set_tcp_stream(tcp);
sess.handshake().unwrap();
sess.userauth_agent("username").unwrap();
let mut channel = sess.channel_session().unwrap();
channel.exec("ls").unwrap();
channel.wait_close().unwrap();
println!("#1 exit: {}", channel.exit_status().unwrap());
let mut channel = sess.channel_session().unwrap();
channel.exec("ls").unwrap();
channel.wait_close().unwrap();
println!("#2 exit: {}", channel.exit_status().unwrap());
If I don't close the channel and execute two commands sequentially, I get error code -39 (wrong usage).
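One way to avoid opening a channel per command is to chain the uci calls into a single exec invocation. This is only a minimal sketch, assuming the remote shell accepts && and reusing the authenticated sess from above:

use std::io::Read;

// Chaining with && stops at the first failing uci call.
let mut channel = sess.channel_session().unwrap();
channel
    .exec("uci set network.lan.ipaddr='192.168.1.2' && uci set network.lan.dns='192.168.1.1'")
    .unwrap();

// Read whatever the commands print before waiting for the exit status.
let mut output = String::new();
channel.read_to_string(&mut output).unwrap();
channel.wait_close().unwrap();
println!("exit: {}, output: {}", channel.exit_status().unwrap(), output);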
Related
I am continuously listening on Redis streams using the Spring Reactive API (with the Lettuce driver). I am using a standalone connection. It seems like the reactor's event loop opens a new connection every time it reads the messages instead of keeping the connection open. I see a lot of TIME_WAIT ports on my machine when I run my program. Is this normal? Is there a way to tell Lettuce to reuse the connection instead of reconnecting every time?
This is my code:
StreamReceiver<String, MapRecord<String, String, String>> receiver = StreamReceiver.create(factory);
return receiver
        .receive(Consumer.from(keyCacheStreamsConfig.getConsumerGroup(), keyCacheStreamsConfig.getConsumer()),
                 StreamOffset.create(keyCacheStreamsConfig.getStreamName(), ReadOffset.lastConsumed()))
        // flatMap reads 256 messages by default and processes them in the given scheduler
        .flatMap(record -> Mono.fromCallable(() -> consumer.consume(record)).subscribeOn(Schedulers.boundedElastic()))
        .doOnError(t -> {
            log.error("Error processing.", t);
            streamConnections.get(nodeName).setDirty(true);
        })
        .onErrorContinue((err, elem) -> log.error("Error processing message. Continue listening."))
        .subscribe();
It looks like the spring-data-redis library reuses the connection only if the poll timeout is set to 0 in the stream receiver options, which are passed as the second argument to StreamReceiver.create(factory, options). I figured this out by looking into the spring-data-redis source code.
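A minimal sketch of that configuration, based on spring-data-redis' StreamReceiverOptions builder; the blockingReceiver method name is just for illustration:

import java.time.Duration;
import org.springframework.data.redis.connection.ReactiveRedisConnectionFactory;
import org.springframework.data.redis.connection.stream.MapRecord;
import org.springframework.data.redis.stream.StreamReceiver;
import org.springframework.data.redis.stream.StreamReceiver.StreamReceiverOptions;

StreamReceiver<String, MapRecord<String, String, String>> blockingReceiver(ReactiveRedisConnectionFactory factory) {
    // A poll timeout of ZERO makes the receiver block on the read
    // instead of re-polling on a timer, so the connection is reused.
    StreamReceiverOptions<String, MapRecord<String, String, String>> options =
            StreamReceiverOptions.builder()
                    .pollTimeout(Duration.ZERO)
                    .build();
    return StreamReceiver.create(factory, options);
}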
I am trying to understand how pipelining in Redis works. According to one blog I read, for this code:
Pipeline pipeline = jedis.pipelined();
long start = System.currentTimeMillis();
for (int i = 0; i < 100000; i++) {
    pipeline.set("" + i, "" + i);
}
List<Object> results = pipeline.execute();
Every call to pipeline.set() effectively sends the SET command to Redis (you can easily see this by setting a breakpoint inside the loop and querying Redis with redis-cli). The call to pipeline.execute() is when the reading of all the pending responses happens.
So basically, when we use pipelining, each command like the set above is executed on the server as soon as it is issued, but we don't collect the responses until we call pipeline.execute().
However, according to the documentation of pyredis,
Pipelines are a subclass of the base Redis class that provide support for buffering multiple commands to the server in a single request.
I think this implies that when we use pipelining, all the commands are buffered and only sent to the server when we call pipe.execute(), so this behaviour is different from the behaviour described above.
Could someone please tell me what the right behaviour is when using redis-py?
This is not just a redis-py thing. In Redis, pipelining always means buffering a set of commands and then sending them to the server all at once. The main point of pipelining is to avoid extraneous network round trips, which are frequently the bottleneck when running commands against Redis. If each command were sent to Redis before the pipeline was run, this would not be the case.
You can test this in practice. Open up python and:
import redis
r = redis.Redis()
p = r.pipeline()
p.set('blah', 'foo') # this buffers the command. it is not yet run.
r.get('blah') # pipeline hasn't been run, so this returns nothing.
p.execute()
r.get('blah') # now that we've run the pipeline, this returns "foo".
I ran the test you described from the blog, and I could not reproduce the behaviour.
Setting breakpoints in the for loop, and running
redis-cli info | grep keys
does not show the size increasing after every set command.
Speaking of which, the code you pasted seems to be Java using Jedis (which I also used).
In the test I ran, and according to the documentation, there is no execute() method in Jedis, only exec() and sync().
I did see the values being set in redis after the sync() command.
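For reference, a minimal sketch (not from the original blog) of the same loop written against Jedis' Pipeline API, using syncAndReturnAll() instead of the execute() call quoted above:

import java.util.List;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;

Jedis jedis = new Jedis("localhost", 6379);
Pipeline pipeline = jedis.pipelined();
for (int i = 0; i < 100000; i++) {
    pipeline.set("" + i, "" + i);
}
// sync() flushes the buffered commands and reads the replies;
// syncAndReturnAll() does the same and also returns the replies.
List<Object> results = pipeline.syncAndReturnAll();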
Besides, this seems to agree with the pyredis documentation.
Finally, the Redis documentation itself focuses on the networking optimization (quoting its pipelining example):
This time we are not paying the cost of RTT for every call, but just one time for the three commands.
P.S. Could you get the link to the blog you read?
I need help from someone who has some experience with the wpa_supplicant code.
What I understand is that wpa_supplicant does everything needed for a supplicant to connect to an AP (if that is what you want). The steps are:
Scan
Get scan results
AUTH
ASSOC
4-way handshake
data exchange
As I understand it, the first four steps are managed only by wpa_supplicant. That is, wpa_supplicant simply calls the underlying driver to perform these steps, and once the main event loop receives the EVENT_ASSOC message it starts the 4-way handshake.
For my part, it is fine that the first two steps are carried out by the driver, i.e., wpa_supplicant sends a scan request, and the driver performs the scan and feeds back the scan results.
My question is: is it correct that wpa_supplicant cannot generate the necessary packets itself and use, e.g., layer 2 (a raw socket) to send an authentication request to the AP, followed by an association request? Must one simply rely on the driver layer to provide this?
As far as I can see from the code in wpa_supplicant.c
(void wpa_supplicant_associate(struct wpa_supplicant *wpa_s,
struct wpa_bss *bss, struct wpa_ssid *ssid))
this function calls a function pointer into the selected driver, e.g. ".associate = wpa_driver_nl80211_associate", and the driver then sends this down to the underlying nl80211 driver code? So wpa_supplicant cannot generate these packets by itself?
I hope this makes sense; if not, please ask :)
Yes, your understanding is correct. To send the auth/assoc requests, wpa_supplicant constructs the corresponding NL80211 commands, depending on the scenario:
a) in case the SME is maintained in wpa_supplicant
NL80211_CMD_AUTHENTICATE
NL80211_CMD_ASSOCIATE
b) in case the SME is maintained by driver
NL80211_CMD_CONNECT
These commands trigger the corresponding cfg80211_ops hooks (.auth, .assoc, .connect) registered by the Wi-Fi driver, which then constructs the frames and sends them out.
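For illustration only, here is a rough sketch (not taken from wpa_supplicant itself) of issuing NL80211_CMD_CONNECT over generic netlink with libnl-3; attribute handling and error checking are abbreviated:

#include <net/if.h>
#include <string.h>
#include <netlink/genl/genl.h>
#include <netlink/genl/ctrl.h>
#include <linux/nl80211.h>

/* Ask the kernel/driver SME to connect; the driver builds and sends
 * the actual auth/assoc frames, not user space. */
int nl80211_connect(const char *ifname, const char *ssid)
{
    struct nl_sock *sock = nl_socket_alloc();
    if (!sock || genl_connect(sock))
        return -1;

    int family = genl_ctrl_resolve(sock, "nl80211");
    struct nl_msg *msg = nlmsg_alloc();

    genlmsg_put(msg, 0, 0, family, 0, 0, NL80211_CMD_CONNECT, 0);
    nla_put_u32(msg, NL80211_ATTR_IFINDEX, if_nametoindex(ifname));
    nla_put(msg, NL80211_ATTR_SSID, strlen(ssid), ssid);

    int err = nl_send_auto(sock, msg);
    nlmsg_free(msg);
    nl_socket_free(sock);
    return err < 0 ? -1 : 0;
}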
I'm new to Erlang, and I got this task:
Write a function setalarm(T,Message) that starts two processes at the same time. After T milliseconds the first process sends a message to the second process, and that message will be the Message argument.
It's forbidden to use library functions, only primitives (send, receive, spawn).
As a novice I find it useful to write more code, so I suggest this option:
-module(sotest).
-export([setalarm/2, first/3, second/0]).

setalarm(T, Message) ->
    S = spawn(sotest, second, []),
    spawn(sotest, first, [S, T, Message]).

first(Pid, T, Message) ->
    receive
    after T -> Pid ! Message
    end.

second() ->
    receive
        Message -> io:format("The message is ~p~n", [Message])
    end.
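A possible shell session, assuming the module above is saved as sotest.erl (the printed pid and timing are illustrative):

%% 1> c(sotest).
%% {ok,sotest}
%% 2> sotest:setalarm(2000, hello).
%% <0.84.0>              % pid of the first process, returned by setalarm/2
%% The message is hello  % printed by the second process roughly 2000 ms later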
From Don Syme's blog (http://blogs.msdn.com/b/dsyme/archive/2010/01/10/async-and-parallel-design-patterns-in-f-reporting-progress-with-events-plus-twitter-sample.aspx) I tried to implement a Twitter stream listener. My goal is to follow the guidance of the Twitter API documentation, which says that "tweets should often be saved or queued before processing when building a high-reliability system".
So my code needs to have two components:
A queue that piles up and processes each status/tweet json
Something to read the twitter stream that dumps to the queue the tweet in json strings
I chose the following:
An agent to which I post each tweet, that decodes the json, and dumps it to database
A simple http webrequest
I would also like to dump any error from inserting into the database into a text file. (I will probably switch to a supervisor agent for all the errors.)
Two problems:
Is my strategy here any good? If I understand correctly, the agent behaves like a smart queue and processes its messages asynchronously (if it has 10 items in its queue it will process a bunch of them at a time, instead of waiting for the 1st one to finish, then the 2nd, etc.), correct?
According to Don Syme's post, everything before the while is isolated, so the StreamWriter and the database dump are isolated. But because I need this, I never close my database connection...?
The code looks something like:
let dumpToDatabase databaseName =
    // opens database connection
    fun tweet -> inserts tweet in database

type Agent<'T> = MailboxProcessor<'T>

let agentDump =
    Agent.Start(fun (inbox: MailboxProcessor<string>) ->
        async {
            use w2 = new StreamWriter(@"\Errors.txt")
            let dumpError = fun (error: string) -> w2.WriteLine(error)
            let dumpTweet = dumpToDatabase "stream"
            while true do
                let! msg = inbox.Receive()
                try
                    let tw = decode msg
                    dumpTweet tw
                with
                | :? MySql.Data.MySqlClient.MySqlException as ex ->
                    dumpError (msg + ex.ToString())
                | _ as ex -> ()
        }
    )

let filter_url = "http://stream.twitter.com/1/statuses/filter.json"
let parameters = "track=RT&"
let stream_url = filter_url
let stream = twitterStream MyCredentials stream_url parameters
while true do
    agentDump.Post(stream.ReadLine())
Thanks a lot !
Edit of code with processor agent:
let dumpToDatabase (tweets: tweet list) =
    bulk insert of tweets in database

let agentProcessor =
    Agent.Start(fun (inbox: MailboxProcessor<string list>) ->
        async {
            while true do
                let! msg = inbox.Receive()
                try
                    msg
                    |> List.map decode
                    |> dumpToDatabase
                with
                | _ as ex -> Console.WriteLine("Processor " + ex.ToString())
        }
    )

let agentDump =
    Agent.Start(fun (inbox: MailboxProcessor<string>) ->
        let rec loop messageList count = async {
            try
                let! newMsg = inbox.Receive()
                let newMsgList = newMsg :: messageList
                if count = 10 then
                    agentProcessor.Post(newMsgList)
                    return! loop [] 0
                else
                    return! loop newMsgList (count + 1)
            with
            | _ as ex -> Console.WriteLine("Dump " + ex.ToString())
        }
        loop [] 0)

let filter_url = "http://stream.twitter.com/1/statuses/filter.json"
let parameters = "track=RT&"
let stream_url = filter_url
let stream = twitterStream MyCredentials stream_url parameters
while true do
    agentDump.Post(stream.ReadLine())
I think that the best way to describe an agent is that it is a running process that keeps some state and can communicate with other agents (or web pages or databases). When writing an agent-based application, you can often use multiple agents that send messages to each other.
I think that the idea to create an agent that reads tweets from the web and stores them in a database is a good choice (though you could also keep the tweets in memory as the state of the agent).
I wouldn't keep the database connection open all the time - MSSQL (and MySQL likely too) implements connection pooling, so it will not close the connection automatically when you release it. This means that it is safer and similarly efficient to reopen the connection each time you need to write data to the database.
Unless you expect to receive a large number of error messages, I would probably do the same for the file stream as well (when writing, you can open it so that new content is appended to the end).
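To illustrate the reopen-per-write idea, a minimal sketch of a dumpToDatabase that relies on ADO.NET connection pooling; the connection string, table and column names are placeholders:

open MySql.Data.MySqlClient

let dumpToDatabase (connectionString: string) (tweetJson: string) =
    // 'use' disposes the connection at the end of the call, which just
    // returns it to the pool rather than tearing it down physically.
    use conn = new MySqlConnection(connectionString)
    conn.Open()
    use cmd = new MySqlCommand("INSERT INTO tweets (json) VALUES (@json)", conn)
    cmd.Parameters.AddWithValue("@json", tweetJson) |> ignore
    cmd.ExecuteNonQuery() |> ignore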
The way the queue of an F# agent works is that it processes messages one by one (in your example, you're waiting for a message using inbox.Receive()). When the queue contains multiple messages, you'll get them one by one (in a loop).
If you wanted to process multiple messages at once, you could write an agent that waits for, say, 10 messages and then sends them as a list to another agent (which would then perform bulk-processing).
You can also specify a timeout parameter for the Receive method, so you could wait for at most 10 messages as long as they all arrive within one second; this way, you can quite elegantly implement bulk processing that doesn't hold messages for a long time.
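A minimal sketch of that batching idea (not from the original answer), using TryReceive with a timeout so a partial batch is flushed after one second:

type Agent<'T> = MailboxProcessor<'T>

// Collects up to 10 strings and posts them as a list to 'target';
// whatever has accumulated is flushed when the 1-second timeout elapses.
let batchingAgent (target: Agent<string list>) =
    Agent<string>.Start(fun inbox ->
        let rec loop batch count = async {
            let! msg = inbox.TryReceive(timeout = 1000)
            match msg with
            | Some m when count + 1 < 10 ->
                return! loop (m :: batch) (count + 1)
            | Some m ->
                target.Post(List.rev (m :: batch))
                return! loop [] 0
            | None ->
                if not (List.isEmpty batch) then target.Post(List.rev batch)
                return! loop [] 0 }
        loop [] 0)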