Unable to establish a TLS connection using Rust, Mio and TlsConnector

I'm trying to create a websocket client using tokio_tungstenite and mio but I couldn't initialize a stream because of handshake issues. Here is the code I have:
let addr: Vec<_> = ws_url
    .to_socket_addrs()
    .map_err(|err| ClientError {
        message: err.to_string(),
    })
    .unwrap()
    .collect();
println!("{:?}", addr);
let connector = TlsConnector::new().unwrap();
let stream = TcpStream::connect(addr[0]).unwrap();
let mut stream = match connector.connect(ws_url.as_str(), stream) {
    Ok(stream) => Ok(stream),
    Err(err) => match err {
        native_tls::HandshakeError::Failure(err) => Err(ClientError::new(format!(
            "Handshake failed: {}",
            err.to_string()
        ))),
        native_tls::HandshakeError::WouldBlock(mh) => match mh.handshake() {
            Ok(stream) => Ok(stream),
            Err(err) => Err(ClientError::new(format!( // <-- the handshake process was interrupted
                "Handshake failed: {}",
                err.to_string()
            ))),
        },
    },
}?;
This code fails on mh.handshake() with an error: the handshake process was interrupted.
Does anyone know why that happens and how to fix it?

After long research I decided to drop mio entirely and create the event loop manually. This is doable, but too time-consuming for the simple task I'm doing.
If you want a single-threaded event loop, you can just use tungstenite and the set_nonblocking method of the underlying socket:
let url = Url::parse(ws_url.as_str()).unwrap();
match tungstenite::connect(url) {
    Ok((mut sock, _)) => {
        let s = sock.get_mut();
        match s {
            tungstenite::stream::MaybeTlsStream::Plain(s) => s.set_nonblocking(true),
            tungstenite::stream::MaybeTlsStream::NativeTls(s) => {
                s.get_mut().set_nonblocking(true)
            }
            x => panic!("Received unknown stream type: {:?}", x),
        }
        .map_err(|err| ClientError::new(format!("Failed to unblock the socket: {}", err)))
        .unwrap();
        Ok(Box::new(KrakenWsClient { sock }))
    }
    Err(err) => Err(ClientError {
        message: format!(
            "Failed to establish websocket connection: {}",
            err.to_string()
        ),
    }),
}
Then reading will look like this:
fn next_message(&mut self) -> Result<Option<Message>> {
    match self.sock.read_message() {
        Ok(msg) => self.parse_msg(msg),
        Err(err) => match err {
            Error::Io(err) => {
                if err.kind() == ErrorKind::WouldBlock {
                    Ok(None)
                } else {
                    Err(ClientError::new(format!(
                        "Error reading from websocket: {}",
                        err
                    )))
                }
            }
            _ => Err(ClientError::new(format!(
                "Error reading from websocket: {}",
                err
            ))),
        },
    }
}
Remember to control the timing of your event loop to prevent it from using 100% of the CPU, for example by sleeping briefly when next_message() returns Ok(None). Hope this helps!
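A minimal sketch of such a throttled loop (client is the KrakenWsClient from above; handle() is a hypothetical message handler, and the 10 ms interval is an arbitrary choice):
loop {
    match client.next_message() {
        // a message arrived; handle() stands in for your own processing
        Ok(Some(msg)) => handle(msg),
        // nothing to read right now: sleep briefly so the loop
        // does not spin at 100% CPU
        Ok(None) => std::thread::sleep(std::time::Duration::from_millis(10)),
        Err(err) => {
            eprintln!("websocket error: {}", err.message);
            break;
        }
    }
}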

Related

Is there a way to get an OS error code from a std::io::Error?

When I run the following:
use std::fs::File;

fn main() {
    let filename = "not_exists.txt";
    let reader = File::open(filename);
    match reader {
        Ok(_) => println!(" * file '{}' opened successfully.", filename),
        Err(e) => {
            println!("{:?}", &e);
        }
    }
}
The output is:
Os { code: 2, kind: NotFound, message: "No such file or directory" }
Is it possible to get that code as an integer?
Use io::Error::raw_os_error:
match reader {
    Ok(_) => println!(" * file '{}' opened successfully.", filename),
    Err(e) => println!("{:?}", e.raw_os_error()),
}
Output:
Some(2)
See also:
How does one get the error message as provided by the system without the "os error n" suffix?
Yes, use the raw_os_error method on std::io::Error. Example:
use std::fs::File;

fn main() {
    let filename = "not_exists.txt";
    let reader = File::open(filename);
    match reader {
        Ok(_) => println!(" * file '{}' opened successfully.", filename),
        Err(e) => {
            println!("{:?} {:?}", e, e.raw_os_error());
        }
    }
}
playground

Redirect stdio over TLS in Rust

I am trying to replicate the "-e" option in ncat to redirect stdio in Rust to a remote ncat listener.
I can do it over TcpStream by using dup2 and then executing the "/bin/sh" command in Rust. However, I do not know how to do it over TLS as redirection seems to require file descriptors, which TlsStream does not seem to provide.
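For context, a minimal sketch of that TcpStream/dup2 approach (the listener address is a placeholder, and the sketch relies on the libc crate):
use std::net::TcpStream;
use std::os::unix::io::AsRawFd;
use std::process::Command;

fn main() -> std::io::Result<()> {
    // placeholder address of the remote ncat listener
    let stream = TcpStream::connect("127.0.0.1:4444")?;
    let fd = stream.as_raw_fd();
    // point the child's stdin (0), stdout (1) and stderr (2) at the socket
    unsafe {
        libc::dup2(fd, 0);
        libc::dup2(fd, 1);
        libc::dup2(fd, 2);
    }
    let mut child = Command::new("/bin/sh").spawn()?;
    child.wait()?;
    Ok(())
}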
Can anyone advise on this?
EDIT 2 Nov 2020
Someone in the Rust forum has kindly shared a solution with me (https://users.rust-lang.org/t/redirect-stdio-pipes-and-file-descriptors/50751/8) and now I am trying to work on how to redirect the stdio over the TLS connection.
let mut command_output = std::process::Command::new("/bin/sh")
    .stdin(Stdio::piped())
    .stdout(Stdio::piped())
    .stderr(Stdio::piped())
    .spawn()
    .expect("cannot execute command");

let mut command_stdin = command_output.stdin.unwrap();
println!("command_stdin {}", command_stdin.as_raw_fd());
let copy_stdin_thread = std::thread::spawn(move || {
    io::copy(&mut io::stdin(), &mut command_stdin)
});

let mut command_stdout = command_output.stdout.unwrap();
println!("command_stdout {}", command_stdout.as_raw_fd());
let copy_stdout_thread = std::thread::spawn(move || {
    io::copy(&mut command_stdout, &mut io::stdout())
});

let command_stderr = command_output.stderr.unwrap();
println!("command_stderr {}", command_stderr.as_raw_fd());
let copy_stderr_thread = std::thread::spawn(move || {
    io::copy(&mut command_stderr, &mut io::stderr())
});

copy_stdin_thread.join().unwrap()?;
copy_stdout_thread.join().unwrap()?;
copy_stderr_thread.join().unwrap()?;
This question and this answer are not specific to Rust.
You noticed the important fact that the I/O of the redirected process must be file descriptors.
One possible solution in your application is to:
- use socketpair(PF_LOCAL, SOCK_STREAM, 0, fd); this provides two connected bidirectional file descriptors (see the sketch after this list);
- use dup2() on one end of this socketpair for the I/O of the redirected process (as you would do with an unencrypted TCP stream);
- watch both the other end and the TLS stream (in a select()-like manner, for example) in order to
  - receive what becomes available from the socketpair and send it to the TLS stream,
  - receive what becomes available from the TLS stream and send it to the socketpair.
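A minimal sketch of the socketpair() step, using the libc crate (error handling reduced to a single check):
use std::os::unix::io::RawFd;

fn make_socketpair() -> std::io::Result<(RawFd, RawFd)> {
    let mut fds = [0; 2];
    // PF_LOCAL/AF_UNIX, SOCK_STREAM, protocol 0
    let rc = unsafe { libc::socketpair(libc::AF_UNIX, libc::SOCK_STREAM, 0, fds.as_mut_ptr()) };
    if rc != 0 {
        return Err(std::io::Error::last_os_error());
    }
    // fds[0] would then be dup2()-ed onto the redirected process's stdin/stdout/stderr,
    // while fds[1] stays in the parent and is bridged to the TLS stream
    Ok((fds[0], fds[1]))
}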
Note that select() on a TLS stream (its underlying file descriptor, actually) is a bit tricky because some bytes may already have been received (on its underlying file descriptor) and decrypted in the internal buffer while not yet consumed by the application.
You have to ask the TLS stream if its reception buffer is empty before trying a new select() on it.
Using an asynchronous or threaded solution for this watch/recv/send loop is probably easier than relying on a select()-like solution.
Edit, after the update to the question
Since you have now a solution relying on three distinct pipes you can forget everything about socketpair().
The invocation of std::io::copy() in each thread of your example is a simple loop that receives some bytes from its first parameter and sends them to the second.
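Conceptually, it amounts to something like this (a simplified sketch, not the actual standard-library implementation):
use std::io::{Read, Write};

fn copy_loop<R: Read, W: Write>(reader: &mut R, writer: &mut W) -> std::io::Result<u64> {
    let mut buf = [0u8; 8192];
    let mut total = 0u64;
    loop {
        // read some bytes from the source...
        let n = reader.read(&mut buf)?;
        if n == 0 {
            return Ok(total); // EOF on the reader side
        }
        // ...and forward them to the destination
        writer.write_all(&buf[..n])?;
        total += n as u64;
    }
}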
Your TlsStream is probably a single structure performing all the encrypted I/O operations (sending as well as receiving), so you will not be able to provide a &mut reference to it from multiple threads.
The best is probably to write your own loop trying to detect new incoming bytes and then dispatch them to the appropriate destination.
As explained above, I would use select() for that.
Unfortunately, in Rust, as far as I know, we have to rely on low-level facilities such as the libc crate for that (there may be other high-level solutions I am not aware of in the async world...).
I produced a (not so) minimal example below in order to show the main idea; it is certainly far from being perfect, so « handle with care » ;^)
(it relies on native-tls and libc)
Accessing it from openssl gives this
$ openssl s_client -connect localhost:9876
CONNECTED(00000003)
Can't use SSL_get_servername
...
Extended master secret: yes
---
hello
/bin/sh: line 1: hello: command not found
df
Filesystem 1K-blocks Used Available Use% Mounted on
dev 4028936 0 4028936 0% /dev
run 4038472 1168 4037304 1% /run
/dev/sda5 30832548 22074768 7168532 76% /
tmpfs 4038472 234916 3803556 6% /dev/shm
tmpfs 4096 0 4096 0% /sys/fs/cgroup
tmpfs 4038472 4 4038468 1% /tmp
/dev/sda6 338368556 219588980 101568392 69% /home
tmpfs 807692 56 807636 1% /run/user/9223
exit
read:errno=0
fn main() {
    let args: Vec<_> = std::env::args().collect();
    let use_simple = args.len() == 2 && args[1] == "s";
    let mut file = std::fs::File::open("server.pfx").unwrap();
    let mut identity = vec![];
    use std::io::Read;
    file.read_to_end(&mut identity).unwrap();
    let identity =
        native_tls::Identity::from_pkcs12(&identity, "dummy").unwrap();
    let listener = std::net::TcpListener::bind("0.0.0.0:9876").unwrap();
    let acceptor = native_tls::TlsAcceptor::new(identity).unwrap();
    let acceptor = std::sync::Arc::new(acceptor);
    for stream in listener.incoming() {
        match stream {
            Ok(stream) => {
                let acceptor = acceptor.clone();
                std::thread::spawn(move || {
                    let stream = acceptor.accept(stream).unwrap();
                    if use_simple {
                        simple_client(stream);
                    } else {
                        redirect_shell(stream);
                    }
                });
            }
            Err(_) => {
                println!("accept failure");
                break;
            }
        }
    }
}
fn simple_client(mut stream: native_tls::TlsStream<std::net::TcpStream>) {
    let mut buffer = [0_u8; 100];
    let mut count = 0;
    loop {
        use std::io::Read;
        if let Ok(sz_r) = stream.read(&mut buffer) {
            if sz_r == 0 {
                println!("EOF");
                break;
            }
            println!(
                "received <{}>",
                std::str::from_utf8(&buffer[0..sz_r]).unwrap_or("???")
            );
            let reply = format!("message {} is {} bytes long\n", count, sz_r);
            count += 1;
            use std::io::Write;
            if stream.write_all(reply.as_bytes()).is_err() {
                println!("write failure");
                break;
            }
        } else {
            println!("read failure");
            break;
        }
    }
}
fn redirect_shell(mut stream: native_tls::TlsStream<std::net::TcpStream>) {
    // start child process
    let mut child = std::process::Command::new("/bin/sh")
        .stdin(std::process::Stdio::piped())
        .stdout(std::process::Stdio::piped())
        .stderr(std::process::Stdio::piped())
        .spawn()
        .expect("cannot execute command");
    // access useful I/O and file descriptors
    let stdin = child.stdin.as_mut().unwrap();
    let stdout = child.stdout.as_mut().unwrap();
    let stderr = child.stderr.as_mut().unwrap();
    use std::os::unix::io::AsRawFd;
    let stream_fd = stream.get_ref().as_raw_fd();
    let stdout_fd = stdout.as_raw_fd();
    let stderr_fd = stderr.as_raw_fd();
    // main send/recv loop
    use std::io::{Read, Write};
    let mut buffer = [0_u8; 100];
    loop {
        // no need to wait for new incoming bytes on tcp-stream
        // if some are already decoded in the tls-stream
        let already_buffered = match stream.buffered_read_size() {
            Ok(sz) if sz > 0 => true,
            _ => false,
        };
        // prepare file descriptors to be watched for by select()
        let mut fdset =
            unsafe { std::mem::MaybeUninit::uninit().assume_init() };
        let mut max_fd = -1;
        unsafe { libc::FD_ZERO(&mut fdset) };
        unsafe { libc::FD_SET(stdout_fd, &mut fdset) };
        max_fd = std::cmp::max(max_fd, stdout_fd);
        unsafe { libc::FD_SET(stderr_fd, &mut fdset) };
        max_fd = std::cmp::max(max_fd, stderr_fd);
        if !already_buffered {
            // see above
            unsafe { libc::FD_SET(stream_fd, &mut fdset) };
            max_fd = std::cmp::max(max_fd, stream_fd);
        }
        // block this thread until something new happens
        // on these file-descriptors (don't wait if some bytes
        // are already decoded in the tls-stream)
        let mut zero_timeout =
            unsafe { std::mem::MaybeUninit::zeroed().assume_init() };
        unsafe {
            libc::select(
                max_fd + 1,
                &mut fdset,
                std::ptr::null_mut(),
                std::ptr::null_mut(),
                if already_buffered {
                    &mut zero_timeout
                } else {
                    std::ptr::null_mut()
                },
            )
        };
        // this thread is not blocked any more,
        // try to handle what happened on the file descriptors
        if unsafe { libc::FD_ISSET(stdout_fd, &mut fdset) } {
            // something new happened on stdout,
            // try to receive some bytes and send them through the tls-stream
            if let Ok(sz_r) = stdout.read(&mut buffer) {
                if sz_r == 0 {
                    println!("EOF detected on stdout");
                    break;
                }
                if stream.write_all(&buffer[0..sz_r]).is_err() {
                    println!("write failure on tls-stream");
                    break;
                }
            } else {
                println!("read failure on process stdout");
                break;
            }
        }
        if unsafe { libc::FD_ISSET(stderr_fd, &mut fdset) } {
            // something new happened on stderr,
            // try to receive some bytes and send them through the tls-stream
            if let Ok(sz_r) = stderr.read(&mut buffer) {
                if sz_r == 0 {
                    println!("EOF detected on stderr");
                    break;
                }
                if stream.write_all(&buffer[0..sz_r]).is_err() {
                    println!("write failure on tls-stream");
                    break;
                }
            } else {
                println!("read failure on process stderr");
                break;
            }
        }
        if already_buffered
            || unsafe { libc::FD_ISSET(stream_fd, &mut fdset) }
        {
            // something new happened on the tls-stream
            // (or some bytes were already buffered),
            // try to receive some bytes and send them on stdin
            if let Ok(sz_r) = stream.read(&mut buffer) {
                if sz_r == 0 {
                    println!("EOF detected on tls-stream");
                    break;
                }
                if stdin.write_all(&buffer[0..sz_r]).is_err() {
                    println!("write failure on stdin");
                    break;
                }
            } else {
                println!("read failure on tls-stream");
                break;
            }
        }
    }
    let _ = child.wait();
}

Simplifying Rust matching with combinators

I have something like this:
match fnX(
    fnY(x), // Returns Result<(), X>
) // Returns Result<(), Y>
.await
{
    Ok(v) => {
        if v.is_err() {
            error!("error = {}", v);
        }
    }
    Err(e) => error!("error = {}", e),
};
How can I write this with combinators so that I only have to error! once? I don't want to do anything with the Ok value, just print the error whether it comes from fnX or fnY.
I'm assuming that you meant to simplify something like this (removing the .await that is unrelated to the issue):
match fnX(x) { // Returns Result<X, EX>
    Ok(y) => match fnY(y) { // Returns Result<Y, EY>
        Ok(_) => println!("Success!"),
        Err(e) => error!("error = {}", e),
    },
    Err(e) => error!("error = {}", e),
}
If the error types are the same, you can simplify the code with and_then:
match fnX(x).and_then(fnY) {
    Ok(_) => println!("Success!"),
    Err(e) => error!("error = {}", e),
}
If the error types are different, you can use map_err to convert them to a single type:
match fnX(x)
    .map_err(MyError::from)
    .and_then(|y| fnY(y).map_err(MyError::from))
{
    Ok(_) => println!("Success!"),
    Err(e) => error!("error = {}", e),
}
The latter can be simplified using the latest development version of the map_for crate:
match map_for!(y <- fnX(x);
               v <- fnY(y);
               => v)
{
    Ok(_) => println!("Success"),
    Err(e @ MyError { .. }) => error!("error = {}", e),
}
Note that the @ MyError {..} annotation is only required if the compiler is unable to infer the error type automatically.
Full disclaimer: I am the author of the map_for crate.
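For reference, MyError in the snippets above is assumed to be a user-defined error type with From implementations for both underlying error types, roughly along these lines (EX and EY stand for the two concrete error types from the comments, and are assumed to implement Display):
#[derive(Debug)]
struct MyError {
    message: String,
}

impl std::fmt::Display for MyError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "{}", self.message)
    }
}

// EX and EY are placeholders for the concrete error types of fnX and fnY
impl From<EX> for MyError {
    fn from(e: EX) -> Self {
        MyError { message: e.to_string() }
    }
}

impl From<EY> for MyError {
    fn from(e: EY) -> Self {
        MyError { message: e.to_string() }
    }
}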
You don't need such a "combinator".
fnX accepts an argument of type Result<(), X> and returns a Result<(), Y>
When the code is convoluted it may help to separate the expressions, making it more readable.
let result = fnY(x);
match fnX(result).await {
    Ok(v) => {
        // here v is the ok value, in this case ()
    }
    Err(e) => error!("error = {}", e),
};

How do I iterate over a Vec of functions returning Futures in Rust?

Is it possible to loop over a Vec, calling a method that returns a Future on each, and build a chain of Futures, to be evaluated (eventually) by the consumer? Whether to execute the later Futures would depend on the outcome of the earlier Futures in the Vec.
To clarify:
I'm working on an application that can fetch data from an arbitrary set of upstream sources.
Requesting data would check with each of the sources, in turn. If the first source had an error (Err), or did not have the data available (None), then the second source would be tried, and so on.
Each source should be tried exactly once, and no source should be tried until all of the sources before have returned their results. Errors are logged, but otherwise ignored, passing the query to the next upstream data source.
I have some working code that does this for fetching metadata:
/// Attempts to read/write data to various external sources. These are
/// nested types, because a data source may exist as both a reader and a writer
struct StoreManager {
    /// Upstream data sources
    readers: Vec<Rc<RefCell<StoreRead>>>,
    /// Downstream data sinks
    writers: Vec<Rc<RefCell<StoreWrite>>>,
}

impl StoreRead for StoreManager {
    fn metadata(self: &Self, id: &Identifier) -> Box<Future<Option<Metadata>, Error>> {
        Box::new(ok(self.readers
            .iter()
            .map(|store| {
                executor::block_on(store.borrow().metadata(id)).unwrap_or_else(|err| {
                    error!("Error on metadata(): {:?}", err);
                    None
                })
            })
            .find(Option::is_some)
            .unwrap_or(None)))
    }
}
Aside from my unhappiness with all of the Box and Rc/RefCell nonsense, my real concern is with the executor::block_on() call. It blocks, waiting for each Future to return a result, before continuing to the next.
Given that it's possible to call fn_returning_future().or_else(|_| other_fn()) and so on, is it possible to build up a dynamic chain like this? Or is it a requirement to fully evaluate each Future in the iterator before moving to the next?
You can use stream::unfold to convert a single value into a stream. In this case, we can use the IntoIter iterator as that single value.
use futures::{executor, stream, Stream, TryStreamExt}; // 0.3.4

type Error = Box<dyn std::error::Error>;
type Result<T, E = Error> = std::result::Result<T, E>;

async fn network_request(val: i32) -> Result<i32> {
    // Just for demonstration, don't do this in a real program
    use std::{
        thread,
        time::{Duration, Instant},
    };
    thread::sleep(Duration::from_secs(1));
    println!("Resolving {} at {:?}", val, Instant::now());
    Ok(val * 100)
}

fn requests_in_sequence(vals: Vec<i32>) -> impl Stream<Item = Result<i32>> {
    stream::unfold(vals.into_iter(), |mut vals| async {
        let val = vals.next()?;
        let response = network_request(val).await;
        Some((response, vals))
    })
}

fn main() {
    let s = requests_in_sequence(vec![1, 2, 3]);
    executor::block_on(async {
        s.try_for_each(|v| async move {
            println!("-> {}", v);
            Ok(())
        })
        .await
        .expect("An error occurred");
    });
}
Resolving 1 at Instant { tv_sec: 6223328, tv_nsec: 294631597 }
-> 100
Resolving 2 at Instant { tv_sec: 6223329, tv_nsec: 310839993 }
-> 200
Resolving 3 at Instant { tv_sec: 6223330, tv_nsec: 311005834 }
-> 300
To ignore Err and None, you have to shuttle the Error over to the Item, making the Item type a Result<Option<T>, Error>:
use futures::{executor, stream, Stream, StreamExt}; // 0.3.4

type Error = Box<dyn std::error::Error>;
type Result<T, E = Error> = std::result::Result<T, E>;

async fn network_request(val: i32) -> Result<Option<i32>> {
    // Just for demonstration, don't do this in a real program
    use std::{
        thread,
        time::{Duration, Instant},
    };
    thread::sleep(Duration::from_secs(1));
    println!("Resolving {} at {:?}", val, Instant::now());
    match val {
        1 => Err("boom".into()),  // An error
        2 => Ok(None),            // No data
        _ => Ok(Some(val * 100)), // Success
    }
}

fn requests_in_sequence(vals: Vec<i32>) -> impl Stream<Item = Result<Option<i32>>> {
    stream::unfold(vals.into_iter(), |mut vals| async {
        let val = vals.next()?;
        let response = network_request(val).await;
        Some((response, vals))
    })
}

fn main() {
    executor::block_on(async {
        let s = requests_in_sequence(vec![1, 2, 3]);
        let s = s.filter_map(|v| async move { v.ok() });
        let s = s.filter_map(|v| async move { v });
        let mut s = s.boxed_local();
        match s.next().await {
            Some(v) => println!("First success: {}", v),
            None => println!("No successful requests"),
        }
    });
}
Resolving 1 at Instant { tv_sec: 6224229, tv_nsec: 727216392 }
Resolving 2 at Instant { tv_sec: 6224230, tv_nsec: 727404752 }
Resolving 3 at Instant { tv_sec: 6224231, tv_nsec: 727593740 }
First success: 300
is it possible to build up a dynamic chain like this
Yes, by leveraging async functions:
use futures::executor; // 0.3.4

type Error = Box<dyn std::error::Error>;
type Result<T, E = Error> = std::result::Result<T, E>;

async fn network_request(val: i32) -> Result<Option<i32>> {
    // Just for demonstration, don't do this in a real program
    use std::{
        thread,
        time::{Duration, Instant},
    };
    thread::sleep(Duration::from_secs(1));
    println!("Resolving {} at {:?}", val, Instant::now());
    match val {
        1 => Err("boom".into()),  // An error
        2 => Ok(None),            // No data
        _ => Ok(Some(val * 100)), // Success
    }
}

async fn requests_in_sequence(vals: Vec<i32>) -> Result<i32> {
    let mut vals = vals.into_iter().peekable();
    while let Some(v) = vals.next() {
        match network_request(v).await {
            Ok(Some(v)) => return Ok(v),
            Err(e) if vals.peek().is_none() => return Err(e),
            Ok(None) | Err(_) => { /* Do nothing and try the next source */ }
        }
    }
    Err("Ran out of sources".into())
}

fn main() {
    executor::block_on(async {
        match requests_in_sequence(vec![1, 2, 3]).await {
            Ok(v) => println!("First success: {}", v),
            Err(e) => println!("No successful requests: {}", e),
        }
    });
}
See also:
Creating Diesel.rs queries with a dynamic number of .and()'s
is it a requirement to fully evaluate each Future in the iterator before moving to the next
Isn't that part of your own requirements? Emphasis mine:
Requesting data would check with each of the sources, in turn. If the first source had an error (Err), or did not have the data available (None), then the second source would be tried

What is the best variant for appending a new line in a text file?

I am using this code to append a new line to the end of a file:
let text = "New line".to_string();
let mut option = OpenOptions::new();
option.read(true);
option.write(true);
option.create(true);
match option.open("foo.txt") {
Err(e) => {
println!("Error");
}
Ok(mut f) => {
println!("File opened");
let size = f.seek(SeekFrom::End(0)).unwrap();
let n_text = match size {
0 => text.clone(),
_ => format!("\n{}", text),
};
match f.write_all(n_text.as_bytes()) {
Err(e) => {
println!("Write error");
}
Ok(_) => {
println!("Write success");
}
}
f.sync_all();
}
}
It works, but I think it's too difficult. I found option.append(true);, but if I use it instead of option.write(true); I get "Write error".
Using OpenOptions::append is the clearest way to append to a file:
use std::fs::OpenOptions;
use std::io::prelude::*;

fn main() {
    let mut file = OpenOptions::new()
        .write(true)
        .append(true)
        .open("my-file")
        .unwrap();

    if let Err(e) = writeln!(file, "A new line!") {
        eprintln!("Couldn't write to file: {}", e);
    }
}
As of Rust 1.8.0 (commit) and RFC 1252, append(true) implies write(true). This should not be a problem anymore.
Before Rust 1.8.0, you had to use both write and append: the first one allows you to write bytes into the file, the second specifies where the bytes will be written.
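So on a recent Rust the snippet above can be trimmed down. A minimal sketch (create(true) is an addition relative to the answer's snippet, so the file is also created when it does not exist yet):
use std::fs::OpenOptions;
use std::io::prelude::*;

fn main() {
    // append(true) implies write(true) since Rust 1.8.0;
    // create(true) creates the file if it is missing
    let mut file = OpenOptions::new()
        .create(true)
        .append(true)
        .open("my-file")
        .unwrap();

    if let Err(e) = writeln!(file, "Another line!") {
        eprintln!("Couldn't write to file: {}", e);
    }
}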