Insert an array into a node property in Amazon Neptune using openCypher

I am trying to insert an array into a node property in Amazon Neptune using openCypher. Is there a way to do this with openCypher?
I have tried the following query:
MERGE (n:Test { name: 'test', colors : ['blue', 'yellow'] })
Error message:
"detailedMessage": "Expected a simple literal but found List."
If it's not supported, how could AWS release this for production when such a basic feature is not yet available?

Neptune only supports set-based array properties, and these are not part of the openCypher specification.
Neptune does support comparable functionality to what you are looking to achieve through the split() and join() functions, as shown here: https://docs.aws.amazon.com/neptune/latest/userguide/migration-opencypher-rewrites.html#migration-opencypher-rewrites-lists
// For writing data
MERGE (n:Test { name: 'test', colors: 'blue,yellow' })
// For reading data
MATCH (n:Test)
WITH n, [color IN split(n.colors, ',') WHERE NOT color IN ['blue', 'yellow']] AS colors
RETURN n, colors
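If the list arrives as a query parameter instead, the join() function mentioned above can collapse it into the delimited string at write time. A minimal sketch, assuming a $colors list parameter and the join(list, delimiter) form described in the linked migration guide:
// Hypothetical: $colors is a list parameter such as ['blue', 'yellow']
MERGE (n:Test { name: 'test' })
SET n.colors = join($colors, ',')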

Related

Terraform: How Do I Set Up a Resource Based on Configuration

So here is what I want as a module in Pseudo Code:
IF UseCustom, Create AWS Launch Config With One Custom EBS Device and One Generic EBS Device
ELSE Create AWS Launch Config With One Generic EBS Device
I am aware that I can use the 'count' argument within a resource to decide whether it is created or not... So I currently have:
resource "aws_launch_configuration" "basic_launch_config" {
  count = var.boolean ? 0 : 1
  blah
}
resource "aws_launch_configuration" "custom_launch_config" {
  count = var.boolean ? 1 : 0
  blah
  blah
}
Which is great; now it creates the right Launch Configuration based on my 'boolean' variable... But in order to then create the Auto Scaling Group using that Launch Configuration, I need the Launch Configuration name. I know what you're thinking: just output it and grab it, you moron! Well, of course I'm outputting it:
output "name" {
  description = "The Name of the Default Launch Configuration"
  value       = aws_launch_configuration.basic_launch_config.*.name
}
output "name" {
  description = "The Name of the Custom Launch Configuration"
  value       = aws_launch_configuration.custom_launch_config.*.name
}
But from the higher level, where I'm calling the module that creates the Launch Configuration and then the Auto Scaling Group, how the heck do I know which output to use for passing into the ASG?
Is there a different way to grab the value I want that I'm overlooking? I'm new to Terraform, and the whole no-real-conditionals thing is really throwing me for a loop.
This seemed to be the cleanest way I could find, using a ternary operator:
output "name" {
  description = "The Name of the Launch Configuration"
  value       = var.booleanVar == 0 ? aws_launch_configuration.default_launch_config.*.name : aws_launch_configuration.custom_launch_config.*.name
}
Let me know if there is a better way!
You can use the same variable you used to decide which resource to enable to select the appropriate result:
output "name" {
  value = var.boolean ? aws_launch_configuration.custom_launch_config[0].name : aws_launch_configuration.basic_launch_config[0].name
}
Another option, which is a little more terse but arguably also a little less clear to a future reader, is to exploit the fact that you will always have one list of zero elements and one list with one element, like this:
output "name" {
  value = concat(
    aws_launch_configuration.basic_launch_config[*].name,
    aws_launch_configuration.custom_launch_config[*].name,
  )[0]
}
Concatenating these two lists will always produce a single-item list due to how the count expressions are written, and so we can use [0] to take that single item and return it.
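For completeness, a minimal sketch of the calling side (the module name, source path, and variable names here are hypothetical), assuming the module exposes the single name output above:
module "launch_config" {
  source  = "./modules/launch_config"
  boolean = var.use_custom
}

resource "aws_autoscaling_group" "asg" {
  # The module's single "name" output already resolves the basic/custom
  # choice, so the caller never needs to know which resource was created.
  launch_configuration = module.launch_config.name
  min_size             = 1
  max_size             = 3
  vpc_zone_identifier  = var.subnet_ids
}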

Querying data in JSON saved using ReJSON

I have saved a JSON document using ReJSON against a key; now I would like to filter/query the data using ReJSON.
Please let me know how I can do it ... Python preferred.
print("Abount to execute coomnad")
response=redisClient.execute_command('JSON.SET', 'object', '.', json.dumps(data))
print(response)
reply = json.loads(redisClient.execute_command('JSON.GET', 'object'))
print(reply)
Using the above code I was able to set data using ReJSON. Now let's suppose I want to filter data.
My test JSON is:
data = {
    'foo': 'bar',
    'ans': 42
}
How can you filter, say, the JSON documents in which foo has the value bar?
Redis in general, and ReJSON specifically, do not provide search-by-value functionality. For that, you'll have to either index the values yourself (see https://redis.io/topics/indexes) or use RediSearch.
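For example, here is a minimal sketch of the "index the values yourself" approach using a plain Redis set as a secondary index (the idx:foo:bar key name is just a convention made up for this example):
import json
import redis

redisClient = redis.Redis()

data = {'foo': 'bar', 'ans': 42}

# Store the document as before
redisClient.execute_command('JSON.SET', 'object', '.', json.dumps(data))

# Maintain the index yourself: one set per (field, value) pair
redisClient.sadd('idx:foo:bar', 'object')

# "Query" by reading the index set, then fetching each document by key
for key in redisClient.smembers('idx:foo:bar'):
    doc = json.loads(redisClient.execute_command('JSON.GET', key))
    print(doc)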

Want an explanation for the attributes usage in the Spark SQL example code in the official document

In the Spark 2.0.1 SQL document,
I have three questions about the following code snippet:
1) What is attributes? Where can I find its documentation?
2) What does => mean?
3) I searched the RDD APIs but did not find a toDF function. Why can .toDF be called here?
// Create an RDD of Person objects from a text file, convert it to a DataFrame.
val peopleDF = spark.sparkContext.textFile("examples/src/main/resources/people.txt")
  .map(_.split(","))
  .map(attributes => Person(attributes(0), attributes(1).trim.toInt))
  .toDF()
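For what it's worth, here is a sketch of the same snippet with the pieces it assumes spelled out: attributes is just the name the anonymous function gives each split line, => is Scala's function-literal arrow, and .toDF() comes from the implicit conversions in spark.implicits._ rather than from RDD itself:
// The case class the snippet relies on (defined earlier in the guide)
case class Person(name: String, age: Long)

val spark = org.apache.spark.sql.SparkSession.builder()
  .appName("example")
  .master("local[*]")
  .getOrCreate()

// toDF() is not a method on RDD; this import adds it via an implicit conversion
import spark.implicits._

val peopleDF = spark.sparkContext
  .textFile("examples/src/main/resources/people.txt")
  .map(_.split(","))                     // each line becomes an Array[String]
  .map(attributes => Person(attributes(0), attributes(1).trim.toInt))
  .toDF()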

Reading data from SQL Server using Spark SQL

Is it possible to read data from Microsoft SQL Server (and Oracle, MySQL, etc.) into an RDD in a Spark application? Or do we need to create an in-memory set and parallelize that into an RDD?
In Spark 1.4.0+ you can now use sqlContext.read.jdbc
That will give you a DataFrame instead of an RDD of Row objects.
The equivalent to the solution you posted above would be
sqlContext.read.jdbc("jdbc:sqlserver://omnimirror;databaseName=moneycorp;integratedSecurity=true;", "TABLE_NAME", "id", 1, 100000, 1000, new java.util.Properties)
It should pick up the schema of the table, but if you'd like to force it, you can use the schema method after read: sqlContext.read.schema(...insert schema here...).jdbc(...rest of the things...)
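For instance, a sketch of forcing the schema this way (the column names and types here are made up for illustration):
import org.apache.spark.sql.types._

val schema = StructType(Seq(
  StructField("id", IntegerType, nullable = false),
  StructField("name", StringType, nullable = true)))

val df = sqlContext.read
  .schema(schema)
  .jdbc("jdbc:sqlserver://omnimirror;databaseName=moneycorp;integratedSecurity=true;",
    "TABLE_NAME", new java.util.Properties)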
Note that you won't get an RDD of SomeClass here (which is nicer in my view). Instead you'll get a DataFrame of the relevant fields.
More information can be found here: http://spark.apache.org/docs/latest/sql-programming-guide.html#jdbc-to-other-databases
Found a solution to this from the mailing list. JdbcRDD can be used to accomplish this. I needed to get the MS SQL Server JDBC driver jar and add it to the lib for my project. I wanted to use integrated security, so I needed to put sqljdbc_auth.dll (available in the same download) in a location that java.library.path can see. Then the code looks like this:
import java.sql.{DriverManager, ResultSet}
import org.apache.spark.rdd.JdbcRDD

val rdd = new JdbcRDD[SomeClass](sc,
  () => DriverManager.getConnection(
    "jdbc:sqlserver://omnimirror;databaseName=moneycorp;integratedSecurity=true;"),
  "SELECT * FROM TABLE_NAME WHERE ? < X AND X < ?",
  1, 100000, 1000,
  (r: ResultSet) => SomeClass(r.getString("Col1"),
    r.getString("Col2"), r.getString("Col3")))
This gives an RDD of SomeClass. The three numeric parameters are required: they are the lower and upper bounds and the number of partitions. In other words, the source data needs to be partitionable by longs for this to work.

Passing options to JS function with Amber

I'm trying to write the equivalent of:
$( "#draggable" ).draggable({ axis: "y" });
in Amber smalltalk.
My guess was: '#draggable' asJQuery draggable: {'axis' -> 'y'} but that's not it.
Not working in vanilla 0.9.1, but working on master (at least as of two months ago) is:
'#draggable' asJQuery draggable: #{'axis' -> 'y'}
and afaict this is the recommended way.
P.S.: #{ 'key' -> val. 'key2' -> val } is the syntax for inline creation of a HashedCollection, which is implemented (since the aforementioned fix two months ago) so that only public (aka enumerable) properties are the HashedCollection keys. Before the fix, all the methods were enumerable as well, which prevented using it naturally in place of JavaScript objects.
herby's excellent answer points out the recommended way to do it. Apparently, there is now Dictionary-literal support (see his comment below). Didn't know that :-)
Old / Alternate way of doing it
For historical reasons, or for users not using the latest master version, this is an alternative way to do it:
options := <{}>.
options at: #axis put: 'y'.
'#draggable' asJQuery draggable: options.
The first line constructs an empty JavaScript object (it's really a JSObjectProxy).
The second line puts the string "y" in the slot "axis". It has the same effect as:
options.axis = "y"; // JavaScript
Lastly, it is invoked, and passed as a parameter.
Array-literals vs Dictionaries
What you were doing didn't work because in modern Smalltalk (Pharo/Squeak/Amber) the curly brackets are used for array literals, not as object literals as they are in JavaScript.
If you evaluate (print-it) this in a Workspace:
{ #element1. #element2. #element3 }.
You get:
a Array (#element1 #element2 #element3)
As a result, if you have something that looks like a JavaScript object-literal in reality it is an Array of Association(s). To illustrate I give you this snippet, with the results of print-it on the right:
arrayLookingLikeObject := { #key1 -> #value1. #key2 -> #value2. #key3 -> #value3}.
arrayLookingLikeObject class. "==> Array"
arrayLookingLikeObject first class. "==> Association"
arrayLookingLikeObject "==> a Array (a Association a Association a Association)"
I wrote about it here:
http://smalltalkreloaded.blogspot.co.at/2012/04/javascript-objects-back-and-forth.html