I'm working on an e-commerce website with a front-end in Elm, and I am wondering whether objects are shared or duplicated. I'm pretty sure of the answer, but I just want to make sure.
Basically, I have a ProductVariant containing some Colour. At the moment ProductVariant has a field colour_ids : List ColourId (which is what I got from the JSON), but I am thinking of replacing the ids with the colours themselves: colours : List Colour. I can do that when I decode the JSON, and then it's done; I don't need to look up colours in the colours dictionary. Am I correct to assume that each Colour will be shared between the different variants, or will each colour be duplicated, thus taking more memory?
I made a simple program to see the compiled JS output. With the Elm code
type alias Color = { red: Int, green: Int, blue: Int }
black: Color
black = { red = 0, green = 0, blue = 0 }
white: Color
white = { red = 255, green = 255, blue = 255 }
red: Color
red = { red = 255, green = 0, blue = 0 }
colors: List Color
colors =
    [ black
    , white
    , red
    , { red = 123, green = 234, blue = 11 }
    , black
    , { red = 123, green = 234, blue = 11 }
    , red
    , { red = 123, green = 234, blue = 11 }
    ]
The output JS code contains
var author$project$SharedData$colors = _List_fromArray(
[
author$project$SharedData$black,
author$project$SharedData$white,
author$project$SharedData$red,
{blue: 11, green: 234, red: 123},
author$project$SharedData$black,
{blue: 11, green: 234, red: 123},
author$project$SharedData$red,
{blue: 11, green: 234, red: 123}
]);
This shows that the compiler is able to reuse the pre-defined colors black, white and red, but when a new record is created, even with exactly the same data, there will always be a duplicate.
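The same rule can be sketched in plain JavaScript (a small illustration mirroring the compiled output above; the variable names are mine):

```javascript
// A named constant is emitted once and referenced everywhere it is used,
// so every use points at the same object; a record literal compiles to a
// fresh allocation each time it appears.
const red = { red: 255, green: 0, blue: 0 };
const colors = [red, { red: 255, green: 0, blue: 0 }, red];

console.log(colors[0] === colors[2]); // true  - both slots reference the shared object
console.log(colors[0] === colors[1]); // false - structurally equal, but a separate allocation
```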
I don't know how your data is organized, but for the example above, instead of trying to optimize the data structure, I would simply store the colors as hex-code strings.
I'm not 100% sure how JS engines handle strings, but on many other platforms there is only one instance of a given string in the heap. For the toy app above this would mean using "7BEA0B" instead of { red = 123, green = 234, blue = 11 }.
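For example, a small helper (my own sketch, not part of the original app) that produces that hex string from the channel values:

```javascript
// Convert 8-bit RGB channels to the compact hex-string representation.
function rgbToHex(r, g, b) {
  return [r, g, b]
    .map(c => c.toString(16).padStart(2, '0')) // each channel as two hex digits
    .join('')
    .toUpperCase();
}

rgbToHex(123, 234, 11); // "7BEA0B"
```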
Following kaskelotti's advice, I made a small program that fetches the same item twice from a dictionary, and checked with the JS debugger whether the data is shared.
import Html exposing (text,div)
import Dict
a = {t= "My name is a"}
b = {t= "My name is B"}
d = Dict.fromList [("a", a), ("b", b)]
mytest = List.filterMap (\key -> Dict.get key d) ["a", "b", "a"]
main =
    let
        x = mytest

        (y, z) =
            case x of
                [a1, _, a2] -> (a1, a2)
                _ -> (a, b)
    in
    div [] (List.map (\o -> text o.t) mytest)
By setting a breakpoint in main I could check that y and z are actually the same object (y === z is true). Also, modifying y.t modifies z.t.
I use Code A to draw a shape with a path. I want to fill the shape with one color and draw the border with a different color and width. I get Image A as expected.
Code A launches the drawPath operation two times; maybe that's not a good way. Can I use drawPath to both fill the shape and draw the border in only one call?
Code A
fun setProcess(drawScope: DrawScope, dataList: List<Double>) {
    drawScope.drawIntoCanvas {
        val step = xAxisLength / maxPointCount
        val shadowPath = Path()
        shadowPath.moveTo(0f.toX, 0f.toY)
        for (i in dataList.indices) {
            ...
        }
        shadowPath.close()
        it.drawPath(shadowPath, paintTablePath)
        it.drawPath(shadowPath, paintTableBorder)
    }
}

val paintTablePath = Paint().also {
    it.isAntiAlias = true
    it.style = PaintingStyle.Fill
    it.strokeWidth = 1f
    it.color = Color(0xffdfecfe)
}

val paintTableBorder = Paint().also {
    it.isAntiAlias = true
    it.style = PaintingStyle.Stroke
    it.strokeWidth = 3f
    it.color = Color.Red
}
Image A
What you ask for is available in neither Compose nor Android Canvas, as you can check here (that question is about the Android Canvas, but the same applies to the Jetpack Compose Canvas).
You don't need to use the overloads that take a Paint unless you need some of Paint's properties or functions such as drawText.
DrawScope already has functions such as
fun drawPath(
    path: Path,
    color: Color,
    /*@FloatRange(from = 0.0, to = 1.0)*/
    alpha: Float = 1.0f,
    style: DrawStyle = Fill,
    colorFilter: ColorFilter? = null,
    blendMode: BlendMode = DefaultBlendMode
)
And instead of passing a DrawScope as a parameter, it helps to create extension functions on DrawScope, as the default Compose source code often does.
inline fun DrawScope.rotate(
    degrees: Float,
    pivot: Offset = center,
    block: DrawScope.() -> Unit
) = withTransform({ rotate(degrees, pivot) }, block)

inline fun DrawScope.clipRect(
    left: Float = 0.0f,
    top: Float = 0.0f,
    right: Float = size.width,
    bottom: Float = size.height,
    clipOp: ClipOp = ClipOp.Intersect,
    block: DrawScope.() -> Unit
) = withTransform({ clipRect(left, top, right, bottom, clipOp) }, block)
I have the following data class representing a color from red/green/blue values:
data class HexColor(
    val red: Byte,
    val blue: Byte,
    val green: Byte
)
I'm trying to create an enumeration with defined colors (RED, YELLOW, PURPLE, ...) and colors from other red/green/blue combinations (using HexColor):
enum class Color {
    RED,
    YELLOW,
    PURPLE,
    // ...
    HEX_COLOR: HexColor // how to represent this?
}
The previous code does not compile; it just shows the general idea I want to implement.
How can I define my enum to represent a Color both as defined values without parameters (constants like YELLOW, for example) and as a HexColor (a data class with parameters)? Is it possible?
The idea is to use like this, or something similar :
val red = Color.RED
val blue = Color.HEX_COLOR(0, 255, 0)
It's not really possible with an enum class (all enum instances need to be defined at compile time), but you can try something like this:
class Color private constructor(val r: Float, val g: Float, val b: Float) {
    companion object {
        val RED = Color(1.0f, 0.0f, 0.0f)
        val GREEN = Color(0.0f, 1.0f, 0.0f)
        val BLUE = Color(0.0f, 0.0f, 1.0f)
        fun HEX_COLOR(r: Float, g: Float, b: Float) = Color(r, g, b)
    }
}
In SceneKit I'm building a node made of lines to draw the XYZ axes at the center of the scene, like in Cinema 4D.
I would like these 3 nodes not to participate in the global lighting and to remain visible even if the light is dark / nonexistent / too strong. In the picture below you can see that the Z axis appears too heavily lit and can't be seen.
Is there a way to stop a node from participating in the scene's lighting, as with category masks for physics?
In that case, how would the node be lit so that it still appears?
SCNLight has a categoryBitMask property. This lets you choose which nodes are affected by the light (Although this is ignored for ambient lights). You could have 2 light source categories, one for your main scene, and another that only affects your lines.
Here is a simple example with 2 nodes, each lit with a different colour light:
struct LightType {
    static let light1: Int = 0x1 << 1
    static let light2: Int = 0x1 << 2
}

class GameViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let scene = SCNScene(named: "art.scnassets/scene.scn")!

        let lightNode1 = SCNNode()
        lightNode1.light = SCNLight()
        lightNode1.light!.type = .omni
        lightNode1.light!.color = UIColor.yellow
        lightNode1.position = SCNVector3(x: 0, y: 10, z: 10)
        lightNode1.light!.categoryBitMask = LightType.light1
        scene.rootNode.addChildNode(lightNode1)

        let lightNode2 = SCNNode()
        lightNode2.light = SCNLight()
        lightNode2.light!.type = .omni
        lightNode2.light!.color = UIColor.red
        lightNode2.position = SCNVector3(x: 0, y: 10, z: 10)
        lightNode2.light!.categoryBitMask = LightType.light2
        scene.rootNode.addChildNode(lightNode2)

        let sphere1 = scene.rootNode.childNode(withName: "sphere1", recursively: true)!
        sphere1.categoryBitMask = LightType.light1
        let sphere2 = scene.rootNode.childNode(withName: "sphere2", recursively: true)!
        sphere2.categoryBitMask = LightType.light2

        let scnView = self.view as! SCNView
        scnView.scene = scene
    }
}
I think it would be much easier to set the material's lighting model to constant.
yourNode.geometry?.firstMaterial?.lightingModel = SCNMaterial.LightingModel.constant
I'm making an OS X app which creates a color scheme from the main colors of an image.
As a first step, I'm using NSCountedSet and colorAtX to get all the colors from an image and count their occurrences:
func sampleImage(#width: Int, height: Int, imageRep: NSBitmapImageRep) -> (NSCountedSet, NSCountedSet) {
    // Store all colors from the image
    var colors = NSCountedSet(capacity: width * height)
    // Store the colors from the left edge of the image
    var leftEdgeColors = NSCountedSet(capacity: height)
    // Loop over the image pixels
    var x = 0
    var y = 0
    while x < width {
        while y < height {
            // Instruments shows that `colorAtX` is very slow
            // and using `NSCountedSet` is also very slow
            if let color = imageRep.colorAtX(x, y: y) {
                if x == 0 {
                    leftEdgeColors.addObject(color)
                }
                colors.addObject(color)
            }
            y++
        }
        // Reset y every x loop
        y = 0
        // We sample a vertical line every x pixels
        x += 1
    }
    return (colors, leftEdgeColors)
}
My problem is that this is very slow. In Instruments, I see there are two big bottlenecks: NSCountedSet and colorAtX.
So first I thought of replacing NSCountedSet with a pure Swift equivalent, but the new implementation was, unsurprisingly, much slower than NSCountedSet.
For colorAtX, there's this interesting SO answer but I haven't been able to translate it to Swift (and I can't use a bridging header to Objective-C for this project).
My problem when trying to translate this is that I don't understand the unsigned char and char parts of the answer.
What should I try to scan the colors faster than with colorAtX?
Continue working on adapting the Objective-C answer because it's a good answer? Despite being stuck for now, maybe I can achieve this later.
Use another Foundation/Cocoa method that I don't know of?
Anything else that I could try to improve my code?
TL;DR
colorAtX is slow, and I don't understand how to adapt this Objective-C answer to Swift because of unsigned char.
The fastest alternative to colorAtX() would be iterating over the raw bytes of the image using let bitmapBytes = imageRep.bitmapData and composing the colour yourself from that information, which should be really simple if it's just RGBA data. Instead of your x/y loop, do something like this...
let bitmapBytes = imageRep.bitmapData
var colors = Dictionary<UInt32, Int>()
var index = 0
for _ in 0..<(width * height) {
    let r = UInt32(bitmapBytes[index++])
    let g = UInt32(bitmapBytes[index++])
    let b = UInt32(bitmapBytes[index++])
    let a = UInt32(bitmapBytes[index++])
    let finalColor = (r << 24) + (g << 16) + (b << 8) + a
    if colors[finalColor] == nil {
        colors[finalColor] = 1
    } else {
        colors[finalColor]!++
    }
}
You will have to check the order of the RGBA values though, I just guessed!
The quickest way to maintain a count might just be a [UInt32: Int] dictionary of pixel values to counts, doing something like colors[color]++. Later on, if you need to, you can convert that to an NSColor using NSColor(calibratedRed:green:blue:alpha:).
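The same byte-packing idea can be sketched in JavaScript (the function name and the flat-RGBA input format are my assumptions, not part of the answer above):

```javascript
// Count colour occurrences by packing each pixel's RGBA bytes into a single
// 32-bit integer key; `bytes` is a flat array [r, g, b, a, r, g, b, a, ...].
function countColors(bytes) {
  const counts = new Map();
  for (let i = 0; i < bytes.length; i += 4) {
    // >>> 0 keeps the packed value an unsigned 32-bit integer
    const key = ((bytes[i] << 24) | (bytes[i + 1] << 16) | (bytes[i + 2] << 8) | bytes[i + 3]) >>> 0;
    counts.set(key, (counts.get(key) || 0) + 1);
  }
  return counts;
}

// Two opaque red pixels and one opaque blue pixel:
countColors([255, 0, 0, 255, 255, 0, 0, 255, 0, 0, 255, 255]);
```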
I have an application in which a user selects an area on the screen to detect color. The color detection code works fine and it scans and stores both the RGB and HSV values.
So here is the flow of the program:
The user hits "Calibrate", which calibrates the selected area of the screen as the current "white" values and stores them in the app. This does not necessarily have to be white; for instance, the user could calibrate anything as white. But we are assuming they know what white is and calibrate correctly.
So a user gets a white strip of paper and calibrates the app for the values as such for RGB and HSV:
WR:208 WG:196 WB:187 WH:25.814 WS:.099194 WV:0.814201
Now the RGB of 208,196,187 obviously does not produce a "true" white but this is what the user has calibrated it for since the lighting cannot be perfect.
So.
Since I have the "calibrated white", I can now obtain a second set of values from the camera. However, I want to know when a color passes a certain level of difference from that white.
So for instance, I take a strip of paper with varying degrees of purple. On one end the purple is indistinguishable from white, and along the length of the paper it gradually gets closer to a solid, deep purple.
Assuming the user has the device calibrated to the white values stated previously.
Let's say the user tests the strip and receives the new RGB and HSV values accordingly:
R:128 G:49 B:92 H:326 S:0.66 V:0.47
Now this is where I get stuck. Assuming I have retrieved both of these sets of values, how can I scale the colors using their respective V's? And how can I accurately calculate whether a color is, say, "20% different from the calibrated white"?
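One possible metric (a sketch of my own, not the app's existing code) is to normalize each channel by the calibrated white and measure the distance from it:

```javascript
// Distance of a sample from the calibrated white, with each channel
// normalized by the corresponding white channel: 0 means "same as the
// calibrated white"; 0.2 would correspond to a "20% different" threshold.
function diffFromWhite(r, g, b, whiteR, whiteG, whiteB) {
  const dr = r / whiteR - 1;
  const dg = g / whiteG - 1;
  const db = b / whiteB - 1;
  return Math.min(1, Math.sqrt((dr * dr + dg * dg + db * db) / 3));
}

// The purple sample against the calibrated white from the question:
diffFromWhite(128, 49, 92, 208, 196, 187); // ≈ 0.57, well past a 20% threshold
```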
I did some simple comparisons like such:
NSLog(@"testing");
if (r <= whiteR * whiteV && g <= whiteG * whiteV && b <= whiteB * whiteV) {
    NSLog(@"TEST PASSED");
    [testStatusLabel setText:[NSString stringWithFormat:@"PASS"]];
    [testStatusLabel setTextColor:[UIColor greenColor]];
}
else {
    [testStatusLabel setText:[NSString stringWithFormat:@"FAIL"]];
    [testStatusLabel setTextColor:[UIColor redColor]];
}
but tests that shouldn't pass are passing; for instance, when I calibrate a strip to white and then test another white piece of paper, it sometimes passes...
Please ask for further clarification if need be.
Edit 1:
I found some pseudo-code here that may do the trick between two HEX (RGB) color codes, but it does not take into account different V (lighting/brightness) values.
function color_meter(cwith, ccolor) {
    if (!cwith && !ccolor) return;

    var _cwith = (cwith.charAt(0) == "#") ? cwith.substring(1, 7) : cwith;
    var _ccolor = (ccolor.charAt(0) == "#") ? ccolor.substring(1, 7) : ccolor;

    var _r = parseInt(_cwith.substring(0, 2), 16);
    var _g = parseInt(_cwith.substring(2, 4), 16);
    var _b = parseInt(_cwith.substring(4, 6), 16);
    var __r = parseInt(_ccolor.substring(0, 2), 16);
    var __g = parseInt(_ccolor.substring(2, 4), 16);
    var __b = parseInt(_ccolor.substring(4, 6), 16);

    var p1 = (_r / 255) * 100;
    var p2 = (_g / 255) * 100;
    var p3 = (_b / 255) * 100;
    var perc1 = Math.round((p1 + p2 + p3) / 3);

    p1 = (__r / 255) * 100;
    p2 = (__g / 255) * 100;
    p3 = (__b / 255) * 100;
    var perc2 = Math.round((p1 + p2 + p3) / 3);

    return Math.abs(perc1 - perc2);
}
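A possible tweak (my assumption, untested against the actual app) is to rescale each colour's mean brightness by its own V before comparing, so that a difference in illumination alone does not register as a colour difference:

```javascript
// Like color_meter above, but each colour's mean channel value is divided by
// its own V (HSV value/brightness) before the percentages are compared.
function colorMeterScaled(rgb1, v1, rgb2, v2) {
  const scaledPercent = ([r, g, b], v) => ((r + g + b) / 3 / 255) / v * 100;
  return Math.abs(scaledPercent(rgb1, v1) - scaledPercent(rgb2, v2));
}

// Calibrated white vs. the purple sample, using the V values from the question:
colorMeterScaled([208, 196, 187], 0.814201, [128, 49, 92], 0.47); // ≈ 20
```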