I have a vertex_handle, and I want to get the halfedge_handles around the vertex. Here is my attempt:
HV_circulator hc = v->vertex_begin();
do {
    hc++;
    Polyhedron::Halfedge halfedge = *hc;
    HE_handle hh = &halfedge;
    // blabla~~
} while (hc != v->vertex_begin());
but it does not seem to work well; hh is not what I want to get.
What should I do to convert this circulator to a halfedge_handle? Thanks.
A halfedge circulator is convertible to a halfedge handle.
Thus you simply need to write:
Polyhedron::Halfedge_handle hh = hc;
The following code is used to pack historical financial data into 16 bytes:
type PackedCandle =
    struct
        val H: single
        val L: single
        val C: single
        val V: int

        new(h: single, l: single, c: single, v: int) = { H = h; L = l; C = c; V = v }

        member this.ToByteArray =
            let a = Array.create 16 (byte 0)
            let h = BitConverter.GetBytes(this.H)
            let l = BitConverter.GetBytes(this.L)
            let c = BitConverter.GetBytes(this.C)
            let v = BitConverter.GetBytes(this.V)
            a.[00] <- h.[0]; a.[01] <- h.[1]; a.[02] <- h.[2]; a.[03] <- h.[3]
            a.[04] <- l.[0]; a.[05] <- l.[1]; a.[06] <- l.[2]; a.[07] <- l.[3]
            a.[08] <- c.[0]; a.[09] <- c.[1]; a.[10] <- c.[2]; a.[11] <- c.[3]
            a.[12] <- v.[0]; a.[13] <- v.[1]; a.[14] <- v.[2]; a.[15] <- v.[3]
            printfn "!!" // <- for the second part of the question
            a
    end
Arrays of these are sent across the network, so I need the data to be as small as possible, but since this is tracking about 80 tradable instruments at the same time, performance matters as well.
A tradeoff was made where clients do not get historical data followed by updates, but instead just get chunks of the last 3 days minute by minute, so the same data is sent over and over to simplify the client logic, and I inherited the problem of making this inefficient design as efficient as possible. This is also done over REST polling, which I'm converting to sockets right now to keep everything binary.
So my first question is:
How can I make this faster? In C, where you can cast anything into anything, I could just take a float and write it straight into the array, so nothing would be faster; but in F# it looks like I have to jump through hoops, getting the bytes and then copying them one by one instead of four at a time, etc. Is there a better way?
My second question is that, since this was meant to be evaluated once, I made ToByteArray a property. I'm doing some tests with random values in a Jupyter notebook, but I see that the property seems to be executed twice (indicated by the two "!!" lines). Why is that?
Assuming you have an array to write to (generally you should use a buffer for reading and writing when working with sockets), you can use System.Runtime.CompilerServices.Unsafe.As<TFrom, TTo> to reinterpret memory as another type (the same thing you can do in C/C++):
type PackedCandle =
    // omitting fields & constructor

    override c.ToString() = $"%f{c.H} %f{c.L} %f{c.C} %d{c.V}" // for debugging

    static member ReadFrom(array: byte[], offset) =
        // get a managed(!) pointer and cast it to another type;
        // same as *(PackedCandle*)(&array[offset]) but safe from the GC
        Unsafe.As<byte, PackedCandle> &array.[offset]

    member c.WriteTo(array: byte[], offset: int) =
        Unsafe.As<byte, PackedCandle> &array.[offset] <- c
Usage
let byteArray = Array.zeroCreate<byte> 100 // assume the array comes from a different function

// writing
let mutable offset = 0
for i = 0 to 5 do
    let candle = PackedCandle(float32 i, float32 i, float32 i, i)
    candle.WriteTo(byteArray, offset)
    offset <- offset + Unsafe.SizeOf<PackedCandle>() // "increment the pointer"

// reading
let mutable offset = 0
for i = 0 to 5 do
    let candle = PackedCandle.ReadFrom(byteArray, offset)
    printfn "%O" candle
    offset <- offset + Unsafe.SizeOf<PackedCandle>()
But do you really want to mess with pointers (even managed ones)? Have you measured that this code is a bottleneck?
Update
It's better to use MemoryMarshal instead of raw Unsafe, because the former checks for out-of-range access and enforces at runtime that the type is unmanaged (see here or here):
member c.WriteTo (array: byte[], offset: int) =
    MemoryMarshal.Write(array.AsSpan(offset), &Unsafe.AsRef(&c))

static member ReadFrom (array: byte[], offset: int) =
    MemoryMarshal.Read<PackedCandle>(ReadOnlySpan(array).Slice(offset))
My first question would be: why do you need the ToByteArray operation at all? In the comments, you say that you are sending arrays of these values over the network, so I assume you plan to convert the data to a byte array so that you can write it to a network stream.
I think it would be more efficient (and easier) to instead have a method that takes a BinaryWriter and writes the data to the stream directly (note that StreamWriter writes text, while BinaryWriter writes the raw bytes):
type PackedCandle =
    struct
        val H: single
        val L: single
        val C: single
        val V: int

        new(h: single, l: single, c: single, v: int) = { H = h; L = l; C = c; V = v }

        member this.WriteTo(bw: BinaryWriter) =
            bw.Write(this.H)
            bw.Write(this.L)
            bw.Write(this.C)
            bw.Write(this.V)
    end
If you now have some code for the network communication, it will expose a stream that you need to write to. Assuming it is called stream, you can do just:
use writer = new BinaryWriter(stream)
for a in packedCandles do a.WriteTo(writer)
Regarding your second question, I think this cannot be answered without a more complete code sample.
When I run the code below, I get a DataFrame with one bool column and two double columns. However, when I extract the bool column as a Series, the result is a Series object with types DateTime and float.
It looks like Deedle "casts" the column to another type.
Why is this happening?
open System
open Deedle

let dates =
    [ DateTime(2013,1,1)
      DateTime(2013,1,4)
      DateTime(2013,1,8) ]

let values = [ 10.0; 20.0; 30.0 ]
let values2 = [ 0.0; -1.0; 1.0 ]

let first = Series(dates, values)
let second = Series(dates, values2)
let third: Series<DateTime,bool> = Series.map (fun k v -> v > 0.0) second

let df1 = Frame(["first"; "second"; "third"], [first; second; third])
let sb = df1.["third"]
df1;;
val it : Frame<DateTime,string> =
Deedle.Frame`2[System.DateTime,System.String]
{ColumnCount = 3;
ColumnIndex = Deedle.Indices.Linear.LinearIndex`1[System.String];
ColumnKeys = seq ["first"; "second"; "third"];
ColumnTypes = seq [System.Double; System.Double; System.Boolean];
...
sb;;
val it : Series<DateTime,float> = ...
As the existing answer points out, GetColumn is the way to go. You can specify the generic parameter directly when calling GetColumn and avoid the type annotation to make the code nicer:
let sb = df1.GetColumn<bool>("third")
Deedle frame does not statically keep track of the types of the columns, so when you want to get a column as a typed series, you need to specify the type in some way.
We did not want to force people to write type annotations, because they tend to be quite long and ugly, so the primary way of getting a column is GetColumn where you can specify the type argument as in the above example.
The other ways of accessing a column, such as df?third and df.["third"], are shorthands that assume the column type to be float, because that happens to be quite a common scenario (at least for the most common uses of Deedle in finance), so these two notations give you a simpler way that "often works nicely".
You can use .GetColumn to extract the Series as a bool:
let sb':(Series<DateTime,bool>) = df1.GetColumn("third")
//val sb' : Series<DateTime,bool> =
//series [ 2013/01/01 0:00:00 => False; 2013/01/04 0:00:00 => False; 2013/01/08 0:00:00 => True]
As to your question of why: I haven't looked at the source, but I assume the indexer you use returns an obj, and Deedle then tries to cast it to something, or perhaps it tries to cast everything to float.
Random access to the elements is not allowed.
let vec = vec![1,2,3,4,5,6,7,8,9,0];
let n = 3;
for v in vec.iter().rev().take(n) {
    println!("{}", v);
}
// this printed: 0, 9, 8
// need: 8, 9, 0
for v in vec.iter().rev().skip(n).rev() does not work.
I think the code you wrote does what you're asking it to do.
You are reversing the vec with rev() and then taking the first 3 elements of the reversed vector (therefore 0, 9, 8).
To obtain the last 3 in the original order, you can skip to the end of the vector minus 3 elements, without reversing it:
let vec = vec![1,2,3,4,5,6,7,8,9,0];
let n = vec.len() - 3;
for v in vec.iter().skip(n) {
    println!("{}", v);
}
Neither skip nor take yields a DoubleEndedIterator, so you have to either:
skip, which is O(N) in the number of skipped items
collect the result of .rev().take(), and then rev it, which is O(N) in the number of items to be printed, and requires allocating memory for them
The skip is obvious, so let me illustrate the collect:
let vec = vec![1,2,3,4,5,6,7,8,9,0];
let vec: Vec<_> = vec.iter().rev().take(3).collect();
for v in vec.iter().rev() {
    println!("{}", v);
}
Of course, the inefficiency is due to you shooting yourself in the foot by avoiding random access in the first place...
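A side note, assuming a newer toolchain than this answer targets: since Rust 1.38, Take and Skip do implement DoubleEndedIterator whenever the underlying iterator is both double-ended and exact-size, so on a modern compiler the last n elements can be visited in order without collecting:

```rust
fn main() {
    let vec = vec![1, 2, 3, 4, 5, 6, 7, 8, 9, 0];
    // rev() walks from the back, take(3) keeps the last three elements,
    // and the second rev() restores the original order.
    for v in vec.iter().rev().take(3).rev() {
        println!("{}", v); // prints 8, then 9, then 0
    }
}
```

On older compilers this fails to type-check, which is exactly the situation the answer above describes.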
Based on the comments, I guess you want to iterate specifically through the elements of a Vec or slice. If that is the case, you could use range slicing, as shown below:
let vec = vec![1,2,3,4,5,6,7,8,9,0];
let n = vec.len() - 3;
for v in &vec[n..] {
    println!("{}", v);
}
The big advantage of this approach is that it doesn't require skipping through elements you are not interested in (which may have a significant cost if not optimized away). It just makes a new slice and then iterates through it. In other words, you have the guarantee that it will be fast.
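One caveat worth adding (my note, not part of the original answer): vec.len() - 3 underflows and panics if the vector holds fewer than three elements. A sketch using saturating_sub stays safe for any length:

```rust
fn main() {
    let vec = vec![1, 2];
    // saturating_sub clamps at zero instead of underflowing,
    // so a short vector simply yields all of its elements.
    let n = vec.len().saturating_sub(3);
    for v in &vec[n..] {
        println!("{}", v); // prints 1, then 2
    }
}
```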
I'm trying to understand why this is happening even given the limitations of DataArrays. Suppose you want to map over a DataArray of Int64s:
using DataArrays

da = DataArray([1,2,3,4])
println(typeof(da))
println(typeof(map(a -> a^2, da)))      # returns an Int for this input
println(typeof(map(a -> int(a^2), da))) # cast the element-wise result to Int
println(typeof(int(map(a -> a^2, da)))) # cast the output DataArray{Any,1} to Int
which results in
DataArray{Int64,1}
DataArray{Any,1}
DataArray{Any,1}
Array{Int64,1}
For a plain array, a = [1,2,3,4], map(x -> x^2, a) returns an Array of Int64s as expected. What is it about map and/or DataArrays that causes type information to be lost here? Is there any solution to preserve type information when you're working with a type that doesn't have a constructor converting DataArray{Any,1} to DataArray{ThatType,1}, like Dates.DateTime?
Edit: convert works fine for turning a DataArray{Any,1} into a DataArray{ThatType,1} (well, at least for DateTime).
@which map(a -> a^2, da) # da::DataArray{Int64,1}
map(f::Function,dv::DataArray{T,1}) at /home/omer/.julia/v0.3/DataArrays/src/datavector.jl:114
Checking the source:
https://github.com/JuliaStats/DataArrays.jl/blob/master/src/datavector.jl
# TODO: should this be an AbstractDataVector, so it works with PDV's?
function Base.map(f::Function, dv::DataVector)
    n = length(dv)
    res = DataArray(Any, n)
    for i in 1:n
        res[i] = f(dv[i])
    end
    return res
end
It creates the result as a DataArray{Any,1}:
res = DataArray(Any, n)
You can check the answer given by James Fairbanks (1 Apr 2015, 04:12):
http://blog.gmane.org/gmane.comp.lang.julia.user/month=20150401
I am new to F# and need help solving this problem.
I have a stream of bytes, which comes from a serial port and is represented as a sequence in F#. The stream consists of frames, each beginning with 0xFE and ending with 0xFF. Bytes are transmitted continuously, and for synchronization I must skip some bytes until a 0xFE. But the stream may be corrupted and may lack the 0xFE or 0xFF.
I have a function that iterates through the input from the serial port. The code of the function is:
let getFrame s =
    let r = s |> Seq.skipWhile (fun x -> x <> 0xFEuy)
              |> Seq.takeWhile (fun x -> x <> 0xFFuy)
    if Seq.isEmpty r then r else Seq.skip 1 r
How can I rewrite this code to skip bytes until 0xFE, or to skip a certain number of bytes and, if no 0xFE occurred, return an error in a functional way? The same goes for taking frame bytes until 0xFF.
You can use the Option<'T> type as the result of your function, where Some x means that a frame was captured from the input sequence and None means that no frame data was found:
let getFrame s =
    let r = s |> Seq.skipWhile ((<>) 0xFEuy)
              |> Seq.takeWhile ((<>) 0xFFuy)
    if Seq.isEmpty r then None
    else Some (Seq.skip 1 r)
However, the correct failure-handling implementation depends on the semantics of your getFrame function. For example, what if the 0xFE is present but there's no 0xFF? Does that mean this isn't a data frame? Does it mean the data frame was split across many sequences? Etc.
I think the following function does what you want. The input stream consists of garbage and frames. Garbage can come first or between frames. An error is reported if there is either too much garbage or a frame is too long. Otherwise the next frame is returned.
let [<Literal>] FrameStart = 0xFE
let [<Literal>] FrameEnd = 0xFF
let [<Literal>] MaxGarbageLength = 5
let [<Literal>] MaxFrameLength = 5

type State<'T> =
    | Garbage of int
    | Frame of int * 'T list

let getFrame stream =
    let getNextRest stream =
        match Seq.isEmpty stream with
        | true -> None
        | false -> Some(Seq.head stream, Seq.skip 1 stream)
    let rec parse state stream =
        match getNextRest stream with
        | None -> None
        | Some(next, rest) ->
            match state with
            | Garbage n when n >= MaxGarbageLength -> None
            | Garbage n ->
                match next with
                | FrameStart -> parse (Frame(0, [])) rest
                | _ -> parse (Garbage(n+1)) rest
            | Frame(n, _) when n >= MaxFrameLength -> None
            | Frame(n, content) ->
                match next with
                | FrameEnd -> Some(content, rest)
                | _ -> parse (Frame(n+1, content @ [next])) rest
    parse (Garbage 0) stream
To get two frames from a stream:
[<Test>]
let ``can parse two frames with garbage in between``() =
    let stream = Seq.ofList [1;2;3;FrameStart;4;5;6;FrameEnd;7;8;FrameStart;9;0;FrameEnd]
    let (frame1, rest) = (getFrame stream).Value
    frame1 |> should equal [4;5;6]
    rest |> should equal [7;8;FrameStart;9;0;FrameEnd]
    let (frame2, rest) = (getFrame rest).Value
    frame2 |> should equal [9;0]
    rest |> should equal []
Errors are correctly detected by returning None (note that MaxGarbageLength is 5, so the following reports an error):
[<Test>]
let ``none is returned when there is too much garbage``() =
    let stream = [1;2;3;4;5;6;FrameStart;7;8;9;FrameEnd]
    (getFrame stream).IsNone |> should equal true
This seems to work and should be easy to extend/modify. But it looks like quite a bit of code to me. Improvements welcome.