Accessing Variables by Reference

I am trying to figure out the basic way to reference a simple type in Swift.
In C, it's no issue:
int a = 42;
int* refA = &a;
*refA = 43;
// At this point, a is 43
In Swift, it seems that I can't do this.
var a:Int = 42
println ( a )
// var aRef:Int = &a // Nope.
// var aRef:Int& = &a // Nah.
// inout var:Int aRef = &a // Nyet
// var inout:Int aRef = &a // Non
// var aRef:Int* = &a // What are you, stupid, or stubborn?
//
// aRef = 43 // If any of the above worked, I could do this.
println ( a ) // How can I get this to print "43"?
I can't find anything in the docs that say I can do this. I know about inout as a function parameter modifier, but I'd like to be able to do this outside of functions.
There are some basic reasons that I'd like to do this. Declaring classes for everything introduces some overhead (mostly planning and writing, as opposed to execution time).

Values cannot be passed by reference in Swift (except for inout parameters); this is one of the things that makes it "Objective-C without the C". You might have to rethink your approach with the possibilities and restrictions of the language in mind.

In general, trying to use Language A as if it were Language B on a feature-for-feature basis is a good way to get yourself into round-peg-square-hole issues. Instead, step back a bit -- what problems do you solve using Feature X in Language B? Language A might have different (and even perhaps more elegant) ways to address those problems.
Not being able to create a pointer (or C++-style reference) to any arbitrary value is probably part of Swift's type/memory safety. Adding that level of indirection makes it harder for a compiler to make reasoned deductions about code (in particular, the ownership of memory addresses), opening all kinds of possibilities for undefined behavior (read: bugs, security holes).
Swift is designed to eat bugs. By using carefully-designed semantics for values vs. references, augmented with inout for specific uses, Swift can more carefully control memory ownership and help you write more reliable code. Without any loss in expressivity, really -- that expressivity just takes different forms. :)
If you really want to put a round peg into a square hole, you can cut down your "planning and writing" overhead with a single generic implementation that wraps any value in a reference. Something like this, maybe:
class Handle<T> {
    var val: T
    init(_ val: T) {
        self.val = val
    }
}
Note that with this, you still need to plan ahead -- since you can't create pointers/references to arbitrary things that already exist, you'll have to create something through a Handle when you want to be able to treat it like a reference type later.
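For example, here is a minimal sketch (not from the original answer) of how the Handle wrapper recovers the behavior of the C snippet in the question:
let a = Handle(42)   // wrap the value at creation time
let aRef = a         // copies the reference, not the value
aRef.val = 43        // mutate through the second reference
println(a.val)       // prints "43", just like the C example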
And with some custom operator definitions, you might even be able to make it look a little bit like C/C++. (Maybe post on Terrible Swift Ideas if you do.)

For the record, your desired behavior is not all that difficult to achieve:
1> func inc (inout x: Int) { x = x + 1 }
2> var val:Int = 10
val: Int = 10
3> inc(&val)
4> val
$R0: Int = 11
What you lose is the 'performance' of a built-in binary operation, but what you gain from functional abstraction far outweighs the cost of not using a built-in.
func incrementor (by: Int) (inout x: Int) { x = x + by } // a bug in Swift prevents using '_' as this parameter's name
var incBy10 = incrementor(10)
incBy10(&val)


F# equivalent to Kotlin's ?. operator

I just started my first F# project and, coming from the JVM world, I really like Kotlin's nullability syntax and was wondering how I could achieve similarly compact syntax in F#.
Here's an example:
class MyClass {
    fun doSomething() {
        // ...
    }
}

// At some other place in the code:
val myNullableValue: MyClass? = null
myNullableValue?.doSomething()
What this does:
If myNullableValue is not null, i.e. there is some data, doSomething() is called on that object.
If myNullableValue is null (like in the code above), nothing happens.
As far as I see, the F# equivalent would be:
type MyClass() =
    member this.doSomething() = ()

type CallingCode() =
    let callingCode() =
        let myOptionalValue: MyClass option = None
        match myOptionalValue with
        | Some(x) -> x.doSomething()
        | None -> ()
A statement that is 1 line long in Kotlin is 3 lines long in F#. My question is therefore whether there's a shorter syntax that accomplishes the same thing.
There is no built-in operator for doing this in F# at the moment. I suspect that the reason is that working with undefined values is just less frequent in F#. For example, you would never define a variable, initialize it to null and then have some code that may or may not set it to a value in F#, so the usual way of writing F# eliminates many of the needs for such operator.
You still need to do this sometimes, for example when using option to represent something that can legitimately be missing, but I think this is less frequent than in other languages. You also may need something like this when interacting with .NET, but then it's probably good practice to handle nulls first, before doing anything else.
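For example, one common boundary pattern (an illustration, not from the original answer) is to convert a possibly-null .NET value into an option as soon as you receive it, using Option.ofObj from FSharp.Core:
// GetEnvironmentVariable returns null when the variable is not set;
// Option.ofObj turns that null into None right at the boundary.
let home = System.Environment.GetEnvironmentVariable("HOME")
let homeOpt = Option.ofObj home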
Aside from pattern matching, you can use Option.map or an F# computation expression (there is no standard one, but it's easy to use a library or define one - see the definition below). Then you can write:
let myOptionalValue: MyClass option = None

// Option #1: Using the `opt` computation expression
opt { let! v = myOptionalValue
      return v.doSomething() }

// Option #2: Using the `Option.map` function
myOptionalValue |> Option.map (fun v -> v.doSomething())
For reference, my definition of opt is:
type OptionBuilder() =
    member x.Bind(v, f) = Option.bind f v
    member x.Return v = Some v
    member x.ReturnFrom o = o
    member x.Zero() = None

let opt = OptionBuilder()
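One side note (not from the original answer): because doSomething() returns unit, Option.map yields a unit option that you usually just discard; Option.iter expresses the Kotlin one-liner more directly:
// Calls the function only when a value is present, and returns plain unit.
myOptionalValue |> Option.iter (fun v -> v.doSomething())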
The ?. operator has been suggested as an addition to F#:
https://github.com/fsharp/fslang-suggestions/issues/14
Some day it will be added, I hope soon.

F# and ILNumerics

I have just downloaded the latest version of ILNumerics, to be used in my F# project. Is it possible to leverage this library in F#? I have tried simple computations and it seems very cumbersome (in F#).
I would like to set up a constrained (or even unconstrained) optimization problem. The usual Rosenbrock function would do, and then I will use my own function. I am having a hard time even getting an Array defined. The only kind of array I could define was a RetArray, for example with this code:
let vector = ILMath.vector<float>(1.0, 2.0)
The compiler signals that vector is a RetArray; I think this is because it is being returned from a function (i.e. ILMath.vector). If I define another similar vector, I can, e.g., sum vectors simply by writing
let a = ILMath.vector<float>(1.0, 2.0)
let b = ILMath.vector<float>(3.2, 2.2)
let c = a + b
and I get
RetArray<float> = seq [4.2; 4.2]
but if I try to retrieve the value of c, again, writing, for example in FSI,
c;;
I get
Error: Object reference not set to an instance of an object.
What is the suggested way of using ILNumerics in F#? Can the library be used natively from F#, or am I forced to call my F# code from a C# library in order to use the whole of ILNumerics? Apart from the problem cited, I have trouble understanding the basic logic of ILNumerics when ported to F#.
For example, what would be the F# equivalent of the C# using scope, as in the example code:
using (ILScope.Enter(inData)) {
    ...
}
Just to elaborate a bit on brianberns' answer, there are a couple of things you could do to make it easier for yourself.
I would personally not go down the route of defining a custom operator, especially one that overrides an existing one. Instead, perhaps you should consider using a computation expression to work with the ILMath types. This will allow you to hide a lot of the ugliness that comes with libraries built on non-F# idioms (e.g. implicit type conversions).
I don't have access to ILMath, so I have implemented these dummy alternatives in order to get my code to compile. If you have the real library, you should be able to leave this module out and the rest of the code should work as intended:
open System

module ILMath =
    type RetArray<'t> = { Values: 't seq }
    and Array<'t> = { OtherValues: 't seq } with
        static member op_Implicit(x: RetArray<_>) = { OtherValues = x.Values }
        static member inline (+) (x1, x2) = { Values = (x1.OtherValues, x2.OtherValues) ||> Seq.map2 (+) }

type ILMath =
    static member vector<'t>([<ParamArray>] vs: 't []) = { ILMath.Values = vs }
If you have never seen or implemented a computation expression before, you should check the documentation I referenced. Basically, it adds some nice syntactic sugar on top of some ugliness, in a way that you decide. My sample implementation adds just the let! (desugars to Bind) and return (desugars to Return, duh) keywords.
type ILMathBuilder() =
    member __.Bind(x: ILMath.RetArray<'t>, f) =
        f (ILMath.Array<'t>.op_Implicit(x))
    member __.Return(x: ILMath.RetArray<'t>) =
        ILMath.Array<'t>.op_Implicit(x)

let ilmath = ILMathBuilder()
This should be defined and instantiated (the ilmath variable) at the top level. This allows you to write
let c = ilmath {
    let! a = vector(1.0, 2.0)
    let! b = vector(3.2, 2.2)
    return a + b
}
Of course, this implementation supports only a very few constructs and requires, for instance, that a value of type RetArray<'t> is always returned. Extending the ILMathBuilder type according to the documentation is the way to go from here.
The reason that the second access of c fails is that ILNumerics is doing some very unusual memory management, which automatically releases the vector's memory when you might not expect it. In C#, this is managed via implicit conversion from vector to Array:
// C#
var A = vector<int>(1, 2, 3); // bad!
Array<int> A = vector<int>(1, 2, 3); // good
F# doesn't have implicit type conversions, but you can invoke the op_Implicit member manually, like this:
open ILNumerics
open type ILMath // open static class - new feature in F# 5
let inline (!) (x: RetArray<'t>) =
    Array<'t>.op_Implicit(x)

[<EntryPoint>]
let main argv =
    let a = !vector<float>(1.0, 2.0)
    let b = !vector<float>(3.2, 2.2)
    let c = !(a + b)
    printfn "%A" c
    printfn "%A" c // the second access no longer fails
    0
Note that I've created an inline helper function called ! to make this easier. Every time you create an ILNumerics vector in F#, you must call this function to convert it to an array. (It's ugly, I know, but I don't see an easier alternative.)
To answer your last question, the equivalent F# code is:
use _scope = Scope.Enter(inData)
...

Object-oriented programming in Go -- use "new" keyword or nah?

I am learning Go, and I have a question based on the following code:
package main
import (
    "fmt"
)

type Vector struct {
    x, y, z int
}

func VectorFactory(x, y, z int) *Vector {
    return &Vector{x, y, z}
}

func main() {
    vect := VectorFactory(1, 2, 3)
    fmt.Printf("%d\n", vect.x*vect.y*vect.z)
}
Here I've defined a type Vector with x, y, and z, and I've defined function VectorFactory which declares a pointer to a Vector and returns that pointer. I use this function to create a new Vector named vect.
Is this bad code? Should I be using the new keyword rather than building a Factory?
Do I need to delete the Vector after using it, like in C++? If so, how?
Thanks. I'm still waiting for my Go book to be delivered.
Prefer NewThing to ThingFactory.
Don't make a NewThing function unless you have complex initialisation, or you're intentionally not exporting parts of a struct. Selectively setting only some fields of a struct is not complex; that can be accomplished with a composite literal using field names (labels), as shown in the sketch below. Complex would be things like "the value of slot Q depends on what the value of slot Zorb is". Unexported struct fields can be useful for information hiding, but should be used with care.
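For illustration, a small sketch using the Vector type from the question (NewVector is a name chosen here for illustration; it is the conventional form Go code uses instead of VectorFactory):
package main

import "fmt"

type Vector struct {
    x, y, z int
}

// NewVector is only worth writing when initialisation is non-trivial;
// here it simply mirrors the question's VectorFactory.
func NewVector(x, y, z int) *Vector {
    return &Vector{x: x, y: y, z: z}
}

func main() {
    // Selective initialisation needs no constructor at all:
    // field names (labels) set x and z, and y defaults to 0.
    v := Vector{x: 1, z: 3}
    fmt.Println(v, NewVector(1, 2, 3))
}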
Go is garbage-collected; any piece of data that is not referenced is eligible to be collected. Start out by not worrying about it; later, make a point of clearing any references to data you're no longer interested in, so as to avoid accidental liveness ("accidental liveness" is essentially the GC equivalent of a memory leak).
If you expect to print your data structures frequently, consider giving them a String method (this does not exactly correspond to the print you do, but is likely more generally useful for a vector):
func (v Vector) String() string {
    return fmt.Sprintf("V<%d, %d, %d>", v.x, v.y, v.z)
}
Unless "vect" really means something to you, prefer either "v" or "vector" as a name.

Unpacking stack objects (such as structs) using "if let"

This is rather a Swift compiler optimization question about the Swift optional stack object (such as struct) and "if let".
In Swift "if let" provides you a syntactic sugar to work with optionals.
What about structs that live on the stack? As a C++ programmer, I would not introduce an unnecessary copy of a stack object, especially just to check its presence in the container. Is the struct copied, with all its members, recursively, every time you use "if let", or is the Swift compiler optimized enough to create a local variable by reference, or by using other tricks?
For example, we have this struct packaged into an optional:
struct MyData {
    var a = 1
    var b = 2
    // lots more storage...
    func description() -> String {
        return "MyData: a=" + String(a) + ", b=" + String(b)
    }
}

var optionalData: MyData? = nil
optionalData = MyData()
Since the struct is on the stack, is there an unnecessary copy from the container optionalData to the local var data when unpacking, or does the fact that data is a constant allow the copy to be optimized away?
if let data = optionalData { // is data a copy or a reference?
    println(data.description())
}
Since the struct is on the stack, is there an unnecessary copy from the container optionalData to the local var data when unpacking, or does the fact that data is a constant allow the copy to be optimized away?
It is unlikely that the compiler is actually emitting code to make a copy. let essentially gives another name to an expression.
With classes, "let x = y" will allow you to write through x to the object y refers to (because you are just copying a reference), i.e.
let x = y
x.foo = bar
y.foo // => bar
but with structs, this is not the case. You aren't allowed to write to a let struct or call any mutating methods on it. This allows the Swift compiler to treat let x = y, where y is a struct, as a no-op.
However, this code probably does make a copy of y:
y.foo = bar
let x = y
y.foo = baz
x.foo // => bar
It has to, because you wrote to the thing you were copying from. This is known as "copy-on-write", and it's an optimization that's made possible by using let semantics.
To answer your final question:
if let data = optionalData { // is data a copy or a reference?
    println(data.description())
}
data is assuredly a reference in this case. Actually, it probably does not exist at all; the compiler is going to emit the same code as if you had written:
if (optionalData != nil) {
    println(optionalData!.description())
}

Why would a programming language need such way to declare variables?

I'm learning C, and I kinda know how to program in Mathematica.
On Mathematica, I can declare a variable simply by writing:
a=9
a="b"
a=9.5
And it seems that Mathematica understands naturally what kind of variable this is, simply by reading it and finding some kind of pattern (int, char, float). I guess Python has the same feature.
While on C, I must say what it is first:
int num;
char ch;
float f;
num=9;
ch='b';
f=9.5;
I'm aware that this extends to other languages. So my question is: why would a programming language need this kind of variable declaration?
References on the topic are going to be very useful.
Mathematica, Python, and other "dynamically typed" languages have variables that consist of a value plus a type, whereas "statically typed" languages like C have variables that consist of just a value. Not only does this mean that less memory is needed for storing variables, but a dynamically typed language has to set and examine the variable's type at runtime to know what kind of value it contains, whereas in a statically typed language the type, and thus what operations can and need to be performed on it, is known at compile time. As a result, statically typed languages are typically considerably faster.
In addition, modern statically typed languages (such as C# and C++11) have type inference, which often makes it unnecessary to mention the type. In some very advanced statically typed languages with type inference like Haskell, large programs can be written without ever specifying a type, providing the efficiency of statically typed languages with the terseness and convenience of dynamically typed languages.
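For illustration (a sketch, not from the original answer), C++11's auto keeps static typing while dropping the annotations from the question's C snippet:
#include <string>

int main() {
    auto num = 9;   // deduced as int at compile time
    auto ch = 'b';  // deduced as char
    auto f = 9.5;   // deduced as double
    // auto oops = num + std::string("b"); // still rejected at compile time
    return 0;
}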
Declarations are necessary for C to be compiled into an efficient binary. Basically, it's a big part of why C is much faster than Mathematica.
In contrast to what most other answers seem to suggest, this has little to do with types, which could easily be inferred (let alone efficiency). It is all about unambiguous semantics of scoping.
In a language that allows non-trivial nesting of language constructs it is important to have clear rules about where a variable belongs, and which identifiers refer to the same variable. For that, every variable needs an unambiguous scope that defines where it is visible. Without explicit declarations of variables (whether with or without type annotations) that is not possible in the general case.
Consider a simple function (you can construct similar examples with other forms of nested scope):
function f() {
    i = 0
    while (i < 10) {
        doSomething()
        i = i + 1
    }
}

function g() {
    i = 0
    while (i < 20) {
        f()
        i = i + 1
    }
}
What happens? To tell, you need to know where i will be bound: in the global scope, or in the local function scopes? The latter implies that the variables in the two functions are completely separate, whereas the former makes them shared -- which, in this particular example, causes an infinite loop (although the global scope may be what is intended in other examples).
Contrast the above with
function f() {
    var i = 0
    while (i < 10) {
        doSomething()
        i = i + 1
    }
}

function g() {
    var i = 0
    while (i < 20) {
        f()
        i = i + 1
    }
}
vs
var i

function f() {
    i = 0
    while (i < 10) {
        doSomething()
        i = i + 1
    }
}

function g() {
    i = 0
    while (i < 20) {
        f()
        i = i + 1
    }
}
which makes the different possible meanings perfectly clear.
In general, there are no good rules that are able to (1) guess what the programmer really meant, and (2) remain sufficiently stable under program extension and refactoring. It gets nastier the bigger and more complex programs get.
The only way to avoid hairy ambiguities and surprising errors is to require explicit declarations of variables -- which is what all reasonable languages do. (This is language design 101 and has been for 50 years, which, unfortunately, doesn't prevent new generations of language "designers" from repeating the same old mistake over and over again, especially in so-called scripting languages. Until they learn the lesson the hard way and correct the mistake, e.g. JavaScript in ES6.)
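For illustration (a sketch; the answer only alludes to ES6's fix), declaring with let gives each variable an unambiguous block scope, and strict mode turns undeclared assignment into an error:
"use strict";
let i = 0;      // block-scoped, explicitly declared
{
    let i = 99; // a distinct variable, visible only inside this block
}
// i is still 0 here
// j = 1;       // ReferenceError: j is not defined (no declaration)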
Variable types are necessary for the compiler to be able to verify that correct value types are assigned to a variable. The underlying needs vary from language to language.
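A quick sketch in C (not part of the original answer) of what that verification buys:
#include <stdio.h>

int main(void) {
    int num = 9;
    // num = "b"; // rejected at compile time: a string is not an int
    char ch = 'b';
    printf("%d %c\n", num, ch);
    return 0;
}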
This doesn't have anything to do with types. Consider JavaScript, which has variable declarations without types:
var x = 8;
y = 8;
The reason is that JavaScript needs to know whether you are referring to an old name, or creating a new one. The above code would leave any existing xs untouched, but would destroy the old contents of an existing y in a surrounding scope.
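A minimal sketch (not from the original answer) of that difference in action:
var x = 1, y = 1;
function f() {
    var x = 8; // declares a new, local x; the outer x is untouched
    y = 8;     // no declaration: assigns to the outer y
}
f();
// here x is still 1, but y is now 8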
For a more extensive example, see this answer.
Fundamentally, you are comparing two kinds of languages: ones that are interpreted by high-level virtual machines or interpreters (Python, Mathematica, ...) and others that are compiled down to native binaries (C, C++, ...) executed on a physical machine. If you can define your own virtual machine, that gives you amazing flexibility in how dynamic and powerful your language can be, whereas a physical machine is quite limited, with a small set of structures, a basic instruction set, and so on.
While it's true that some languages that are compiled to virtual machines or interpreted still require types to be declared (Java, C#), this is done mainly for performance. Imagine having to deduce the type at run time, or having to use a base type for every possible type: this would consume quite a bit of resources, not to mention make it quite hard to implement a JIT (just-in-time compiler) that would run some things natively.