Using a phantom scope to bypass bounds-checking - indexing

I want to construct an index type, a wrapper around a usize, that is associated with a particular value, such that for any vector created from that value I don't need to bounds-check the index. It seems like phantom lifetimes can be used to do some small amount of dependently typed programming like this. Will this work, or are there things I'm not considering?
In other words, using the module below, is it impossible to write ("safe") code that accesses memory out of bounds?
Also, is there a way to do this without the unit references?
pub mod things {
    use std::iter;

    #[derive(Clone, Copy)]
    pub struct ThingIndex<'scope>(usize, &'scope ());

    pub struct Things {
        nthings: usize,
    }

    pub struct ThingMapping<'scope, V>(Vec<V>, &'scope ());

    impl Things {
        pub fn in_context<F: FnOnce(&Things) -> V, V>(nthings: usize, continuation: F) -> V {
            continuation(&Things { nthings })
        }

        pub fn make_index<'scope>(&'scope self, i: usize) -> ThingIndex<'scope> {
            if i >= self.nthings {
                panic!("Out-of-bounds index!")
            }
            ThingIndex(i, &())
        }

        pub fn make_mapping<'scope, V: Clone>(&'scope self, def: V) -> ThingMapping<'scope, V> {
            ThingMapping(iter::repeat(def).take(self.nthings).collect(), &())
        }
    }

    impl<'scope, V> ThingMapping<'scope, V> {
        pub fn get<'a>(&'a self, ind: ThingIndex<'scope>) -> &'a V {
            unsafe { &self.0.get_unchecked(ind.0) }
        }

        // ...
    }
}
Update:
This doesn't seem to work. I added a test that I expected would fail to compile and it compiled without complaint. Is there a way to repair it and make it work? What if I write a macro?
#[cfg(test)]
mod tests {
    use crate::things::*;

    #[test]
    fn it_fails() {
        Things::in_context(1, |r1| {
            Things::in_context(5, |r2| {
                let m1 = r1.make_mapping(());
                let i2 = r2.make_index(3);
                assert_eq!(*m1.get(i2), ());
            });
        })
    }
}
Note: in_context is loosely based on Haskell's runST function. In Haskell, the type signature of runST requires RankNTypes. I wonder whether this is impossible because the Rust compiler has nothing analogous to RankNTypes.
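For what it's worth, Rust does have an analogue of RankNTypes: higher-ranked trait bounds (for<'a> ...). Two things are missing from the module above: the continuation is not required to work for every possible 'scope, and the brand itself is not invariant (the &() literal is promoted to &'static (), and &'scope () is covariant, so the two scopes in the test are free to unify). Below is a minimal sketch of the "branded lifetime" repair, along the lines of crates such as indexing and generativity; the PhantomData marker also answers the question about avoiding the unit references. This is a sketch, not a verified proof of soundness.

pub mod things {
    use std::iter;
    use std::marker::PhantomData;

    // Invariant lifetime brand: `fn(T) -> T` is invariant in T, so two
    // different brands can never be shrunk or stretched into one another.
    type Brand<'scope> = PhantomData<fn(&'scope ()) -> &'scope ()>;

    #[derive(Clone, Copy)]
    pub struct ThingIndex<'scope>(usize, Brand<'scope>);

    pub struct Things<'scope> {
        nthings: usize,
        brand: Brand<'scope>,
    }

    pub struct ThingMapping<'scope, V>(Vec<V>, Brand<'scope>);

    // The `for<'s>` bound is the analogue of RankNTypes: the continuation must
    // work for every possible brand, so it cannot assume that two contexts
    // share one. Usage: `things::in_context(5, |ctx| { ... })`.
    pub fn in_context<V, F>(nthings: usize, continuation: F) -> V
    where
        F: for<'s> FnOnce(Things<'s>) -> V,
    {
        continuation(Things { nthings, brand: PhantomData })
    }

    impl<'scope> Things<'scope> {
        pub fn make_index(&self, i: usize) -> ThingIndex<'scope> {
            assert!(i < self.nthings, "Out-of-bounds index!");
            ThingIndex(i, PhantomData)
        }

        pub fn make_mapping<V: Clone>(&self, def: V) -> ThingMapping<'scope, V> {
            ThingMapping(iter::repeat(def).take(self.nthings).collect(), PhantomData)
        }
    }

    impl<'scope, V> ThingMapping<'scope, V> {
        pub fn get(&self, ind: ThingIndex<'scope>) -> &V {
            // `ind` carries the same brand as `self`, so it was produced by the
            // same `Things` and was checked against the same `nthings`.
            unsafe { self.0.get_unchecked(ind.0) }
        }
    }
}

With this version the test from the update should no longer compile: m1 carries the outer brand, i2 carries the inner one, and the for<'s> bound plus invariance prevent the two from being unified.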

Related

mock trait implementation for concrete struct

I have a function that I want to test. It takes a struct as a parameter, and the struct implements a trait. This trait has a long-running IO method. I don't want this IO method to actually go and fetch data; I want to mock it and just return a result. I am a little lost about how this can be done. Here is my attempt (not working):
struct TestStruct {
    a: u32,
    b: u32,
}

pub trait TestTrait {
    fn some_long_running_io_method(&self) -> u32 {
        156
    }
}

fn important_func(a: TestStruct) {
    println!("a: {}", a.some_long_running_io_method());
}

impl TestTrait for TestStruct {
    fn some_long_running_io_method(&self) -> u32 {
        self.a + self.b
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use mockall::predicate::*;
    use mockall::*;

    #[cfg(test)]
    mock! {
        pub TestStruct {}
        impl TestTrait for TestStruct {
            fn some_long_running_io_method(&self) -> u32;
        }
    }

    #[test]
    fn test_important_func() {
        let mut mock = MockTestStruct::new();
        mock.expect_some_long_running_io_method()
            .returning(|| 1);
        important_func(mock);
    }
}
I obviously get this error:
error[E0308]: mismatched types
  --> src/test.rs:35:24
   |
35 |         important_func(mock);
   |         -------------- ^^^^ expected struct `TestStruct`, found struct `MockTestStruct`
   |         |
   |         arguments to this function are incorrect
How can I achieve mocking of trait methods?
1) One way is to change the function parameter to accept a trait object instead of the concrete struct, and to implement the trait on MockTestStruct. But then we have dynamic dispatch and it hurts performance; I don't want a performance degradation just for the tests.
2) I also tried re-implementing the trait right where the test is, but conflicting implementations are not allowed in Rust.
3) Make the function accept either TestStruct or MockTestStruct? Probably not a great way either.
Could you please tell me what the idiomatic way to do this is?
You can make your function important_func a generic function and use a trait bound to restrict the type parameter to implementors of your trait. Since generics are monomorphized at compile time, this is still static dispatch, so there is no runtime cost compared to taking the concrete struct.
Here is an example with your code:
struct TestStruct {
    a: u32,
    b: u32,
}

pub trait TestTrait {
    fn some_long_running_io_method(&self) -> u32 {
        156
    }
}

// important_func can now use any type T which implements TestTrait,
// including your mock implementation!
fn important_func<T: TestTrait>(a: T) {
    println!("a: {}", a.some_long_running_io_method());
}

impl TestTrait for TestStruct {
    fn some_long_running_io_method(&self) -> u32 {
        self.a + self.b
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use mockall::predicate::*;
    use mockall::*;

    #[cfg(test)]
    mock! {
        pub TestStruct {}
        impl TestTrait for TestStruct {
            fn some_long_running_io_method(&self) -> u32;
        }
    }

    #[test]
    fn test_important_func() {
        let mut mock = MockTestStruct::new();
        mock.expect_some_long_running_io_method().returning(|| 1);
        important_func(mock);
    }
}
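For reference, production call sites don't need to change: calling important_func with the concrete struct still compiles and is monomorphized, so no trait object or vtable is involved. A hypothetical caller, assuming the definitions above:

fn main() {
    let real = TestStruct { a: 1, b: 2 };
    // Compiles to important_func::<TestStruct>; the trait method is resolved
    // statically, so there is no dynamic-dispatch cost outside the tests.
    important_func(real);
}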

Is there a way to split a trait implementation and definition across different modules?

I'd like to define a trait with many methods:
pub trait DataSetT {
    fn numeric_op_1(&self) { ... }
    fn numeric_op_2(&self) -> f64 { ... }
    // ...
    fn io_op_1(&self) { ... }
    fn io_op_2(&self) -> DataFrame { ... }
    // ...
}
Now, if I define all these methods in the same file, it gets humongous.
For the sake of clean and readable code, I'd like to split these definitions across different files/modules.
For example, the numeric operations would live in:
src/numerics.rs
And the IO operations would live in:
src/io.rs
The same goes for implementing this trait for a struct (overriding the default trait behaviour).
As soon as I try doing that, I get either "not all trait items implemented" or "conflicting definitions".
What is the best-practice solution in this kind of situation?
Without a macro you cannot split a trait definition across different modules: wherever you write trait MyTrait { .. }, the whole trait has to be defined there.
What you can do is define multiple smaller traits and combine them with a super-trait, like this:
// src/ops/a.rs
pub trait OpA {
    fn op_a1(&self);
    fn op_a2(&self) -> f64;
}

// src/ops/b.rs
pub trait OpB {
    fn op_b1(&self);
    fn op_b2(&self);
}

// src/ops/mod.rs
pub trait Op: OpA + OpB {}

// src/ops_impl/mod.rs
struct MyOp {}

impl Op for MyOp {}

// src/ops_impl/a.rs
impl OpA for MyOp {
    fn op_a1(&self) {}
    fn op_a2(&self) -> f64 {
        42.0
    }
}

// src/ops_impl/b.rs
impl OpB for MyOp {
    fn op_b1(&self) {}
    fn op_b2(&self) {}
}
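If the Op super-trait has no items of its own, a blanket implementation can replace the explicit impl Op for MyOp {}, so every type implementing both sub-traits gets Op automatically. This is a small optional variation on the answer above, not something it requires:

// src/ops/mod.rs
pub trait Op: OpA + OpB {}

// Blanket impl: any type that implements OpA and OpB also implements Op,
// so src/ops_impl/mod.rs no longer needs `impl Op for MyOp {}`.
impl<T: OpA + OpB> Op for T {}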

How to split an implementation of a trait into multiple files?

I started working with ws, and I would like to split the Handler trait implementation into multiple files.
So I wrote this in one file, on_open.rs:
impl Handler for Client {
    fn on_open(&mut self, _: Handshake) -> Result<()> {
        println!("Socket opened");
        Ok(())
    }
}
And this in another file, on_message.rs:
impl Handler for Client {
    fn on_message(&mut self, msg: Message) -> Result<()> {
        println!("Server got message '{}'. ", msg);
        Ok(())
    }
}
While compiling it I got the following error:
error[E0119]: conflicting implementations of trait `ws::handler::Handler` for type `models::client::Client`:
 --> src\sockets\on_message.rs:9:1
  |
9 | impl Handler for Client {
  | ^^^^^^^^^^^^^^^^^^^^^^^ conflicting implementation for `models::client::Client`
  |
 ::: src\sockets\on_open.rs:8:1
  |
8 | impl Handler for Client {
  | ----------------------- first implementation here
I'd like to keep the files separate so that each developer can work on a different one. Is there a way to achieve this, or am I forced to have the full trait implementation in a single file?
Although you can have multiple impl blocks for the same type, you can't have two that implement the same trait, hence the conflicting-implementations error indicated by E0119:
Since a trait cannot be implemented multiple times, this is an error.
(If the trait took generic type parameters the situation would be rather different, because every specialisation would be a separate impl block. But even then you couldn't implement the same specialisation more than once.)
If you would like to split the functionality into separate files, you can, but in a slightly different way than you originally thought: split Client's own impl blocks instead of the Handler implementation, as the following minimal, compilable example demonstrates.
As you can see, the Handler trait is implemented for Client in one place, but the methods it delegates to are split across multiple files/modules, and the Handler implementation just references them:
mod handler
{
    pub type Result<T> = ::std::result::Result<T, HandlerError>;

    pub struct HandlerError;

    pub trait Handler
    {
        fn on_open(&mut self, h: usize) -> Result<()>;
        fn on_message(&mut self, m: bool) -> Result<()>;
    }
}

mod client
{
    use super::handler::{ self, Handler };

    struct Client
    {
        h: usize,
        m: bool,
    }

    impl Handler for Client
    {
        fn on_open(&mut self, h: usize) -> handler::Result<()>
        {
            self.handle_on_open(h)
        }

        fn on_message(&mut self, m: bool) -> handler::Result<()>
        {
            self.handle_on_message(m)
        }
    }

    mod open
    {
        use super::super::handler;
        use super::Client;

        impl Client
        {
            pub fn handle_on_open(&mut self, h: usize) -> handler::Result<()>
            {
                self.h = h;
                Ok(())
            }
        }
    }

    mod message
    {
        use super::super::handler;
        use super::Client;

        impl Client
        {
            pub fn handle_on_message(&mut self, m: bool) -> handler::Result<()>
            {
                self.m = m;
                Ok(())
            }
        }
    }
}
Thanks to Peter's answer, I re-wrote my code as below, and it is working fine:
socket.rs
use ws::Handler;
use crate::models::client::Client;
use ws::{Message, Request, Response, Result, CloseCode, Handshake};

impl Handler for Client {
    fn on_open(&mut self, hs: Handshake) -> Result<()> {
        self.handle_on_open(hs)
    }

    fn on_message(&mut self, msg: Message) -> Result<()> {
        self.handle_on_message(msg)
    }

    fn on_close(&mut self, code: CloseCode, reason: &str) {
        self.handle_on_close(code, reason)
    }

    fn on_request(&mut self, req: &Request) -> Result<Response> {
        self.handle_on_request(req)
    }
}
sockets/on_open.rs
use crate::models::client::Client;
use crate::CLIENTS;
use crate::models::{truck::Truck};
use ws::{Result, Handshake};

impl Client {
    pub fn handle_on_open(&mut self, _: Handshake) -> Result<()> {
        println!("socket is opened");
        Ok(())
    }
}
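For completeness, the message handler from the original on_message.rs moves into its own file in exactly the same way. A sketch of the companion file, reusing the body from the question (on_close and on_request follow the same pattern):

// sockets/on_message.rs
use crate::models::client::Client;
use ws::{Message, Result};

impl Client {
    pub fn handle_on_message(&mut self, msg: Message) -> Result<()> {
        println!("Server got message '{}'. ", msg);
        Ok(())
    }
}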

I need help refactoring for error handling in Rust

I would like to refactor this Rust code for calculating the largest series product and make it as efficient and elegant as possible. I feel that
lsp(string_digits: &str, span: usize) -> Result<u64, Error>
could be made much more elegant than it is right now. Could lsp be implemented with a single chain of iterator methods?
#[derive(Debug, PartialEq)]
pub enum Error {
    SpanTooLong,
    InvalidDigit(char),
}

fn sp(w: &[u8]) -> u64 {
    w.iter().fold(1u64, |acc, &d| acc * u64::from(d))
}

pub fn lsp(string_digits: &str, span: usize) -> Result<u64, Error> {
    let invalid_chars = string_digits
        .chars()
        .filter(|ch| !ch.is_numeric())
        .collect::<Vec<_>>();

    if span > string_digits.len() {
        return Err(Error::SpanTooLong);
    } else if !invalid_chars.is_empty() {
        return Err(Error::InvalidDigit(invalid_chars[0]));
    } else if span == 0 || string_digits.is_empty() {
        return Ok(1);
    }

    let vec_of_u8_digits = string_digits
        .chars()
        .map(|ch| ch.to_digit(10).unwrap() as u8)
        .collect::<Vec<_>>();

    let lsp = vec_of_u8_digits
        .windows(span)
        .max_by(|&w1, &w2| sp(w1).cmp(&sp(w2)))
        .unwrap();

    Ok(sp(lsp))
}
Not sure if this is the most elegant way, but I've given it a try; I hope the new version is equivalent to the given program.
Two things are needed in this case: first, a data structure that provides the sliding window "on the fly", and second, a way to end the iteration early if the conversion yields an error.
For the former I've chosen a VecDeque, since span is dynamic. For the latter there is the function process_results in the itertools crate: it converts an iterator over Results into an iterator over the unwrapped type and stops the iteration if an error is encountered.
I've also slightly changed the signature of sp to accept any iterator over u8.
This is the code:
use std::collections::VecDeque;
use itertools::process_results;

#[derive(Debug, PartialEq)]
pub enum Error {
    SpanTooLong,
    InvalidDigit(char),
}

fn sp(w: impl Iterator<Item = u8>) -> u64 {
    w.fold(1u64, |acc, d| acc * u64::from(d))
}

pub fn lsp(string_digits: &str, span: usize) -> Result<u64, Error> {
    if span > string_digits.len() {
        return Err(Error::SpanTooLong);
    } else if span == 0 || string_digits.is_empty() {
        return Ok(1);
    }

    let mut init_state = VecDeque::new();
    init_state.resize(span, 0);

    process_results(
        string_digits
            .chars()
            .map(|ch| ch.to_digit(10).map(|d| d as u8).ok_or(Error::InvalidDigit(ch))),
        |digits| {
            digits
                .scan(init_state, |state, digit| {
                    state.pop_back();
                    state.push_front(digit);
                    Some(sp(state.iter().cloned()))
                })
                .max()
                .unwrap()
        },
    )
}
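A quick sanity check with hypothetical inputs, assuming the lsp above is in scope; the original version should give the same results:

fn main() {
    // Digits 1,0,2,7,8,3,9,5,6,4: the largest product of 3 adjacent digits is 9 * 5 * 6 = 270.
    assert_eq!(lsp("1027839564", 3), Ok(270));
    assert_eq!(lsp("12a34", 2), Err(Error::InvalidDigit('a')));
    assert_eq!(lsp("123", 5), Err(Error::SpanTooLong));
    println!("all assertions passed");
}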

How can I return an iterator over a locked struct member in Rust?

Here is as far as I could get, using rental, partly based on How can I store a Chars iterator in the same struct as the String it is iterating on?. The difference here is that the get_iter method of the locked member has to take a mutable self reference.
I'm not tied to using rental: I'd be just as happy with a solution using reffers or owning_ref.
The PhantomData is present here just so that MyIter bears the normal lifetime relationship to MyIterable, the thing being iterated over.
I also tried changing #[rental] to #[rental(deref_mut_suffix)] and changing the return type of MyIterable.get_iter to Box<Iterator<Item=i32> + 'a> but that gave me other lifetime errors originating in the macro that I was unable to decipher.
#[macro_use]
extern crate rental;

use std::marker::PhantomData;

pub struct MyIterable {}

impl MyIterable {
    // In the real use-case I can't remove the 'mut'.
    pub fn get_iter<'a>(&'a mut self) -> MyIter<'a> {
        MyIter {
            marker: PhantomData,
        }
    }
}

pub struct MyIter<'a> {
    marker: PhantomData<&'a MyIterable>,
}

impl<'a> Iterator for MyIter<'a> {
    type Item = i32;
    fn next(&mut self) -> Option<i32> {
        Some(42)
    }
}

use std::sync::Mutex;

rental! {
    mod locking_iter {
        pub use super::{MyIterable, MyIter};
        use std::sync::MutexGuard;

        #[rental]
        pub struct LockingIter<'a> {
            guard: MutexGuard<'a, MyIterable>,
            iter: MyIter<'guard>,
        }
    }
}

use locking_iter::LockingIter;

impl<'a> Iterator for LockingIter<'a> {
    type Item = i32;

    #[inline]
    fn next(&mut self) -> Option<Self::Item> {
        self.rent_mut(|iter| iter.next())
    }
}

struct Access {
    shared: Mutex<MyIterable>,
}

impl Access {
    pub fn get_iter<'a>(&'a self) -> Box<Iterator<Item = i32> + 'a> {
        Box::new(LockingIter::new(self.shared.lock().unwrap(), |mi| {
            mi.get_iter()
        }))
    }
}

fn main() {
    let access = Access {
        shared: Mutex::new(MyIterable {}),
    };
    let iter = access.get_iter();
    let contents: Vec<i32> = iter.take(2).collect();
    println!("contents: {:?}", contents);
}
As user rodrigo has pointed out in a comment, the solution is simply to change #[rental] to #[rental_mut].
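For reference, that is a one-attribute change inside the rental! block from the question: #[rental_mut] lets the suffix field (iter) hold a mutable borrow of the prefix field (guard), which is what get_iter(&mut self) needs.

rental! {
    mod locking_iter {
        pub use super::{MyIterable, MyIter};
        use std::sync::MutexGuard;

        // #[rental_mut] instead of #[rental]: `iter` may borrow `guard` mutably.
        #[rental_mut]
        pub struct LockingIter<'a> {
            guard: MutexGuard<'a, MyIterable>,
            iter: MyIter<'guard>,
        }
    }
}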