mCRL2 problem with traffic light changing color - locking

I'm trying to model a traffic lights problem with mCRL2. I don't know if my code is 100% correct, but it compiles.
Basically I have 3 colors and 3 traffic lights, and with this code I can switch their colors. However, I want to add restrictions such as: if one traffic light is green, the others cannot change to any color; they can only change their color once the green traffic light has changed to yellow.
sort
  Color = struct green | red | yellow;

map
  next: Color -> Color;

eqn
  next(green)  = yellow;
  next(yellow) = red;
  next(red)    = green;

act
  toRed, toGreen, toRedYellow, toYellow;

proc
  Lighter1(c: Color) = (c == green)  -> toYellow . Lighter1(next(c))
                     + (c == yellow) -> toRed    . Lighter1(next(c))
                     + (c == red)    -> toGreen  . Lighter1(next(c));

  Lighter2(c: Color) = (c == green)  -> toYellow . Lighter2(next(c))
                     + (c == yellow) -> toRed    . Lighter2(next(c))
                     + (c == red)    -> toGreen  . Lighter2(next(c));

  Lighter3(c: Color) = (c == green)  -> toYellow . Lighter3(next(c))
                     + (c == yellow) -> toRed    . Lighter3(next(c))
                     + (c == red)    -> toGreen  . Lighter3(next(c));

init
  Lighter1(green) || Lighter2(red) || Lighter3(green);
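One way to get that restriction (a sketch only, I have not run it through the toolset, and note that the current init already starts Lighter1 and Lighter3 on green at the same time) is to fold the three lights into a single process that carries all three colors, and to guard toGreen so it is only enabled while no other light is green. Keeping the sort, map, eqn and act sections as they are, the proc and init sections would become something like:

proc
  % a light may only turn green while no other light is green
  Lights(c1, c2, c3: Color) =
      (c1 == red && c2 != green && c3 != green) -> toGreen  . Lights(green,  c2, c3)
    + (c1 == green)                             -> toYellow . Lights(yellow, c2, c3)
    + (c1 == yellow)                            -> toRed    . Lights(red,    c2, c3)
    + (c2 == red && c1 != green && c3 != green) -> toGreen  . Lights(c1, green,  c3)
    + (c2 == green)                             -> toYellow . Lights(c1, yellow, c3)
    + (c2 == yellow)                            -> toRed    . Lights(c1, red,    c3)
    + (c3 == red && c1 != green && c2 != green) -> toGreen  . Lights(c1, c2, green)
    + (c3 == green)                             -> toYellow . Lights(c1, c2, yellow)
    + (c3 == yellow)                            -> toRed    . Lights(c1, c2, red);

init
  Lights(green, red, red);

An alternative that keeps the three Lighter processes separate would be a fourth "controller" process that hands out a single green token, with the toGreen actions synchronised against it via comm and allow.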


Calculating the 2D turn movement required given an incoming and outgoing direction

Consider a 2D square-tiled grid (chessboard-like) which contains conveyor-belt-like structures that can curve and move game pieces around.
I need to calculate the turn movement (TURN_LEFT, TURN_RIGHT or STAY), depending on
- the direction from which a piece moves onto the field
- the direction from which the underlying belt exits the field
Example:
      1     2
1 | >X> | >v  |
2 |     |  v  |
The belt makes a RIGHT turn. As such, the result of calcTurn(LEFT, DOWN) should be TURN_RIGHT, meaning the X game piece will be rotated 90° to the right when it moves over the curve at (1,2).
I already implemented a function but it only works on some of my test cases.
import kotlin.math.atan2

enum class Direction {
    NONE,
    UP,
    RIGHT,
    DOWN,
    LEFT;

    fun isOpposite(other: Direction) = this == UP && other == DOWN
            || this == DOWN && other == UP
            || this == LEFT && other == RIGHT
            || this == RIGHT && other == LEFT
}

data class Vec2(val x: Float, val y: Float)

fun Direction.toVec2() = when (this) {
    Direction.NONE -> Vec2(0f, 0f)
    Direction.UP -> Vec2(0f, 1f)
    Direction.RIGHT -> Vec2(1f, 0f)
    Direction.DOWN -> Vec2(0f, -1f)
    Direction.LEFT -> Vec2(-1f, 0f)
}

fun getTurnMovement(incomingDirection: Direction, outgoingDirection: Direction): Movement {
    if (incomingDirection.isOpposite(outgoingDirection) || incomingDirection == outgoingDirection) {
        return Movement.STAY
    }
    val incVec = incomingDirection.toVec2()
    val outVec = outgoingDirection.toVec2()
    val angle = atan2(
        incVec.x * outVec.x - incVec.y * outVec.y,
        incVec.x * outVec.x + incVec.y * outVec.y
    )
    return when {
        angle < 0 -> Movement.TURN_RIGHT
        angle > 0 -> Movement.TURN_LEFT
        else -> Movement.STAY
    }
}
I can't quite figure out what's going wrong here, especially since some test cases work (like DOWN+LEFT=TURN_LEFT) while others don't (like DOWN+RIGHT, which gives STAY instead of the expected TURN_LEFT).
You're trying to calculate the angle between two two-dimensional vectors, but are doing so incorrectly.
Mathematically, given two vectors (x1,y1) and (x2,y2), the angle between them is the angle of the second to the x-axis minus the angle of the first to the x-axis. In equation form: arctan(y2/x2) - arctan(y1/x1).
Translating that to Kotlin, you should instead use:
val angle = atan2(outVec.y, outVec.x) - atan2(incVec.y, incVec.x)
I'd note that you could also achieve your overall goal by just delineating the cases in a when statement, since you only have a small number of possible directions, but perhaps you want a more general solution.
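Put together, the corrected function looks roughly like this (a sketch that reuses Direction, Vec2 and toVec2 from the question; Movement is assumed to be the STAY/TURN_LEFT/TURN_RIGHT enum the question mentions but doesn't show):

import kotlin.math.atan2

enum class Movement { STAY, TURN_LEFT, TURN_RIGHT }

fun getTurnMovement(incomingDirection: Direction, outgoingDirection: Direction): Movement {
    if (incomingDirection.isOpposite(outgoingDirection) || incomingDirection == outgoingDirection) {
        return Movement.STAY
    }
    val incVec = incomingDirection.toVec2()
    val outVec = outgoingDirection.toVec2()
    // Angle of the outgoing vector minus angle of the incoming vector,
    // instead of the component-wise products used before.
    val angle = atan2(outVec.y, outVec.x) - atan2(incVec.y, incVec.x)
    return when {
        angle < 0 -> Movement.TURN_RIGHT
        angle > 0 -> Movement.TURN_LEFT
        else -> Movement.STAY
    }
}

With the vectors from the question this gives a negative angle for calcTurn(LEFT, DOWN), i.e. TURN_RIGHT, and a positive one for DOWN+RIGHT, i.e. TURN_LEFT. The raw difference can fall outside (-π, π]; if you ever need the actual turn angle rather than just its sign, wrap it back into that range.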
It's not answering your question of why your code isn't working, but here's another general approach you could use for wrapping ordered data like this:
enum class Direction {
    UP, RIGHT, DOWN, LEFT;

    companion object {
        // storing this means you only need to generate the array once
        private val directions = values()
        private fun getPositionWrapped(pos: Int) = directions[(pos).mod(directions.size)]
    }

    // using getters here as a general example
    val toLeft get() = getPositionWrapped(ordinal - 1)
    val toRight get() = getPositionWrapped(ordinal + 1)
    val opposite get() = getPositionWrapped(ordinal + 2)
}
It takes advantage of the fact that enums are ordered, with an ordinal property giving the position of a particular constant. It also uses (x).mod(y), which, unlike the % operator, always returns a non-negative result, so negative values wrap around:
x       |  6  5  4  3  2  1  0 -1 -2 -3 -4 -5
x mod 4 |  2  1  0  3  2  1  0  3  2  1  0  3
which makes it easy to grab the next or previous (or however far you want to jump) index, acting like a circular array.
Since you have a NONE value in your example (which obviously doesn't fit into this pattern), I'd probably represent that with a nullable Direction? instead, since it's more of a lack of a value than an actual kind of direction. Depends what you're doing, of course!
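As a hypothetical usage sketch (my example, not part of the answer above): with the wrapped enum, the turn can be read off by comparing the outgoing direction against the incoming one. Which rotation counts as "left" or "right" depends on your coordinate convention, so the two branches may need to be swapped for your grid.

// Movement mirrors the values named in the question.
enum class Movement { STAY, TURN_LEFT, TURN_RIGHT }

fun calcTurn(incoming: Direction, outgoing: Direction): Movement = when (outgoing) {
    incoming.toRight -> Movement.TURN_RIGHT
    incoming.toLeft -> Movement.TURN_LEFT
    else -> Movement.STAY // same direction or opposite
}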

Does Elm share objects or duplicate them?

I'm working on an e-commerce website with a front-end in Elm, and I am wondering whether objects are shared or duplicated. I'm pretty sure of the answer but I just want to make sure.
Basically, I have a ProductVariant containing some Colour. At the moment ProductVariant has a field colour_ids : List ColourId (which is what I got from the JSON), but I am thinking of replacing the ids with the colours themselves: colours : List Colour. I can do that when I decode the JSON, and then it's done; I don't need to look up colours in the colours dictionary. Am I correct to assume that each Colour will be shared between the different variants, or will each colour be duplicated, thus taking more memory?
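For concreteness, the change being considered could be sketched like this (ResolvedVariant, resolve and the concrete field types are placeholders of mine, not from the question), resolving the ids once right after decoding:

import Dict exposing (Dict)

type alias ColourId =
    Int

type alias Colour =
    { name : String }

type alias ProductVariant =
    { colour_ids : List ColourId }

type alias ResolvedVariant =
    { colours : List Colour }

-- Look each id up once; ids with no matching colour are dropped.
resolve : Dict ColourId Colour -> ProductVariant -> ResolvedVariant
resolve colourDict variant =
    { colours =
        List.filterMap (\id -> Dict.get id colourDict) variant.colour_ids
    }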
I made a simple program to see the compiled JS output. With Elm code
type alias Color =
    { red : Int, green : Int, blue : Int }

black : Color
black =
    { red = 255, green = 255, blue = 255 }

white : Color
white =
    { red = 0, green = 0, blue = 0 }

red : Color
red =
    { red = 255, green = 0, blue = 0 }

colors : List Color
colors =
    [ black
    , white
    , red
    , { red = 123, green = 234, blue = 11 }
    , black
    , { red = 123, green = 234, blue = 11 }
    , red
    , { red = 123, green = 234, blue = 11 }
    ]
The output JS code contains
var author$project$SharedData$colors = _List_fromArray(
[
author$project$SharedData$black,
author$project$SharedData$white,
author$project$SharedData$red,
{blue: 11, green: 234, red: 123},
author$project$SharedData$black,
{blue: 11, green: 234, red: 123},
author$project$SharedData$red,
{blue: 11, green: 234, red: 123}
]);
This shows that the compiler is able to reuse the pre-defined colors black, white and red, but when creating a new record, even with exactly the same data, there will always be a duplicate.
I don't know how your data is organized, but for the above example case, instead of trying to optimize the data structure, I would simply store the colors as hex code strings.
I'm not 100% sure how JS engines handle strings, but on many other platforms there is only one instance of a given string in the heap. For the above toy app this would mean using "7BEA0B" instead of { red: 123, green: 234, blue: 11 }.
Following #kaskelotti's advice, I made a small program that fetches the same item twice from a dictionary and checked with the JS debugger whether the data is shared.
import Html exposing (text, div)
import Dict

a = { t = "My name is a" }

b = { t = "My name is B" }

d = Dict.fromList [ ( "a", a ), ( "b", b ) ]

mytest = List.filterMap (\key -> Dict.get key d) [ "a", "b", "a" ]

main =
    let
        x = mytest

        ( y, z ) =
            case x of
                [ a1, _, a2 ] -> ( a1, a2 )
                _ -> ( a, b )
    in
    div [] (List.map (\o -> text o.t) mytest)
By setting a breakpoint in main I could check that y and z are actually the same (y === z is true). Also, modifying y.t modifies z.t.

SceneKit avoid lighting on specific node

In SceneKit I'm building a node made of lines to draw the XYZ axes at the center of the scene, like in Cinema 4D.
I would like these 3 nodes not to participate in the global lighting and to remain visible even if the light is dark / non-existent / too strong. In the picture below you can see that the Z axis appears too heavily lit and can't be seen.
Is there a way to stop a node from participating in the scene's lighting, like with category masks for physics?
In that case, how would the node be lit so that it still appears?
SCNLight has a categoryBitMask property. This lets you choose which nodes are affected by the light (Although this is ignored for ambient lights). You could have 2 light source categories, one for your main scene, and another that only affects your lines.
Here is a simple example with 2 nodes, each lit with a different colour light:
import UIKit
import SceneKit

struct LightType {
    static let light1: Int = 0x1 << 1
    static let light2: Int = 0x1 << 2
}

class GameViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        let scene = SCNScene(named: "art.scnassets/scene.scn")!

        let lightNode1 = SCNNode()
        lightNode1.light = SCNLight()
        lightNode1.light!.type = .omni
        lightNode1.light!.color = UIColor.yellow
        lightNode1.position = SCNVector3(x: 0, y: 10, z: 10)
        lightNode1.light!.categoryBitMask = LightType.light1
        scene.rootNode.addChildNode(lightNode1)

        let lightNode2 = SCNNode()
        lightNode2.light = SCNLight()
        lightNode2.light!.type = .omni
        lightNode2.light!.color = UIColor.red
        lightNode2.position = SCNVector3(x: 0, y: 10, z: 10)
        lightNode2.light!.categoryBitMask = LightType.light2
        scene.rootNode.addChildNode(lightNode2)

        let sphere1 = scene.rootNode.childNode(withName: "sphere1", recursively: true)!
        sphere1.categoryBitMask = LightType.light1

        let sphere2 = scene.rootNode.childNode(withName: "sphere2", recursively: true)!
        sphere2.categoryBitMask = LightType.light2

        let scnView = self.view as! SCNView
        scnView.scene = scene
    }
}
I think it would be much easier to set the material's lighting model to constant:
yourNode.geometry?.firstMaterial?.lightingModel = SCNMaterial.LightingModel.constant
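If the axes are built from several child nodes, a small helper can apply this to the whole subtree (a sketch; applyConstantLighting and axesNode are names of mine, not from the answer):

import SceneKit

// Make a node and all of its descendants ignore the scene's lights
// by switching every material to the constant lighting model.
func applyConstantLighting(to node: SCNNode) {
    node.enumerateHierarchy { child, _ in
        child.geometry?.materials.forEach { $0.lightingModel = .constant }
    }
}

// e.g. applyConstantLighting(to: axesNode)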

Count colors in image: `NSCountedSet` and `colorAtX` are very slow

I'm making an OS X app which creates a color scheme from the main colors of an image.
As a first step, I'm using NSCountedSet and colorAtX to get all the colors from an image and count their occurrences:
func sampleImage(#width: Int, height: Int, imageRep: NSBitmapImageRep) -> (NSCountedSet, NSCountedSet) {
    // Store all colors from image
    var colors = NSCountedSet(capacity: width * height)
    // Store the colors from left edge of the image
    var leftEdgeColors = NSCountedSet(capacity: height)

    // Loop over the image pixels
    var x = 0
    var y = 0
    while x < width {
        while y < height {
            // Instruments shows that `colorAtX` is very slow
            // and using `NSCountedSet` is also very slow
            if let color = imageRep.colorAtX(x, y: y) {
                if x == 0 {
                    leftEdgeColors.addObject(color)
                }
                colors.addObject(color)
            }
            y++
        }
        // Reset y every x loop
        y = 0
        // We sample a vertical line every x pixels
        x += 1
    }
    return (colors, leftEdgeColors)
}
My problem is that this is very slow. In Instruments, I see there are two big bottlenecks: NSCountedSet and colorAtX.
So first I thought of replacing NSCountedSet with a pure Swift equivalent, but the new implementation was, unsurprisingly, much slower than NSCountedSet.
For colorAtX, there's this interesting SO answer, but I haven't been able to translate it to Swift (and I can't use a bridging header to Objective-C for this project).
My problem when trying to translate it is that I don't understand the unsigned char and char parts in the answer.
What should I try in order to scan the colors faster than with colorAtX?
- Continue working on adapting the Objective-C answer, because it's a good answer? Despite being stuck for now, maybe I can achieve this later.
- Use another Foundation/Cocoa method that I don't know of?
- Anything else that I could try to improve my code?
TL;DR
colorAtX is slow, and I don't understand how to adapt this Objective-C answer to Swift because of unsigned char.
The fastest alternative to colorAtX() would be iterating over the raw bytes of the image using let bitmapBytes = imageRep.bitmapData and composing the colour yourself from that information, which should be really simple if it's just RGBA data. Instead of your for x/y loop, do something like this...
let bitmapBytes = imageRep.bitmapData

var colors = Dictionary<UInt32, Int>()

var index = 0
for _ in 0..<(width * height) {
    let r = UInt32(bitmapBytes[index++])
    let g = UInt32(bitmapBytes[index++])
    let b = UInt32(bitmapBytes[index++])
    let a = UInt32(bitmapBytes[index++])

    let finalColor = (r << 24) + (g << 16) + (b << 8) + a

    if colors[finalColor] == nil {
        colors[finalColor] = 1
    } else {
        colors[finalColor]!++
    }
}
You will have to check the order of the RGBA values though, I just guessed!
The quickest way to maintain a count might just be a [UInt32: Int] dictionary of packed pixel values to counts, doing something like colors[color]++. Later on, if you need to, you can convert such a value to an NSColor using NSColor(calibratedRed:green:blue:alpha:).
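For example, a small helper to unpack a counted 0xRRGGBBAA value back into an NSColor could look like this (a sketch; the function name is mine, and it assumes the RGBA byte order guessed above):

import AppKit

// Convert a packed 0xRRGGBBAA value back into an NSColor.
func color(fromPacked value: UInt32) -> NSColor {
    let r = CGFloat((value >> 24) & 0xFF) / 255.0
    let g = CGFloat((value >> 16) & 0xFF) / 255.0
    let b = CGFloat((value >> 8) & 0xFF) / 255.0
    let a = CGFloat(value & 0xFF) / 255.0
    return NSColor(calibratedRed: r, green: g, blue: b, alpha: a)
}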

Calculating Color Differences between two RGB values; or two HSV values

I have an application in which a user selects an area on the screen to detect color. The color detection code works fine and it scans and stores both the RGB and HSV values.
So here is the flow of the program:
The user hits "Calibrate", whereby the selected area of the screen calibrates the current "white" values and stores them in the app. This does not necessarily have to be white; for instance, the user could calibrate anything as white. But we are assuming they know what white is and calibrate correctly.
So a user gets a white strip of paper and calibrates the app, getting values such as these for RGB and HSV:
WR:208 WG:196 WB:187 WH:25.814 WS:0.099194 WV:0.814201
Now the RGB of 208,196,187 obviously does not produce a "true" white, but this is what the user has calibrated it for, since the lighting cannot be perfect.
So.
Since I have the "calibrated white", I can now get a second set of values from the camera. However, I want to know when a color passes a certain level of difference from that white.
So for instance, I take a strip of paper of varying degrees of purple. On one end the purple is indistinguishable from white, and along the length of the paper it gradually gets closer to a solid, deep purple.
Assume the user has the device calibrated to the white values stated previously.
Let's say the user tests the strip and receives the new RGB and HSV values accordingly:
R:128 G:49 B:92 H:326 S:0.66 V:0.47
Now this is where I get stuck. Assuming I have retrieved both of these sets of values, how can I scale the colors using their respective V values? And how can I accurately calculate whether a color is, say, "20% different from the calibrated white" or something of the sort?
I did some simple comparisons like this:
NSLog(@"testing");
if (r <= whiteR * whiteV && g <= whiteG * whiteV && b <= whiteB * whiteV) {
    NSLog(@"TEST PASSED");
    [testStatusLabel setText:[NSString stringWithFormat:@"PASS"]];
    [testStatusLabel setTextColor:[UIColor greenColor]];
}
else {
    [testStatusLabel setText:[NSString stringWithFormat:@"FAIL"]];
    [testStatusLabel setTextColor:[UIColor redColor]];
}
but tests that shouldn't pass are passing; for instance, when I calibrate a strip as white and then test another white piece of paper, it sometimes passes...
Please ask for further clarification if need be.
Edit 1:
I found some pseudo-code here that may do the trick between two HEX (RGB) color codes, but it does not take into account different V (lightness/brightness) values.
function color_meter(cwith, ccolor) {
    if (!cwith && !ccolor) return;

    var _cwith = (cwith.charAt(0) == "#") ? cwith.substring(1, 7) : cwith;
    var _ccolor = (ccolor.charAt(0) == "#") ? ccolor.substring(1, 7) : ccolor;

    var _r = parseInt(_cwith.substring(0, 2), 16);
    var _g = parseInt(_cwith.substring(2, 4), 16);
    var _b = parseInt(_cwith.substring(4, 6), 16);

    var __r = parseInt(_ccolor.substring(0, 2), 16);
    var __g = parseInt(_ccolor.substring(2, 4), 16);
    var __b = parseInt(_ccolor.substring(4, 6), 16);

    var p1 = (_r / 255) * 100;
    var p2 = (_g / 255) * 100;
    var p3 = (_b / 255) * 100;

    var perc1 = Math.round((p1 + p2 + p3) / 3);

    var p1 = (__r / 255) * 100;
    var p2 = (__g / 255) * 100;
    var p3 = (__b / 255) * 100;

    var perc2 = Math.round((p1 + p2 + p3) / 3);

    return Math.abs(perc1 - perc2);
}