Convert GraphResultSet to JSON using GraphSON - DataStax

I'm trying to convert a GraphResultSet object to JSON, similar to what DataStax Studio returns, using GraphSON. Is there any sample code that converts the result object to JSON?
I tried the following from the TinkerPop Blueprints API, but it's not working:
List<GraphNode> gf = ((GraphResultSet) resultSet).all();
Vertex v = (Vertex) gf.get(0).asVertex();
JSONObject json = null;
try {
    json = GraphSONUtility.jsonFromElement((Element) v,
            getElementPropertyKeys((Element) v, false), GraphSONMode.COMPACT);
} catch (JSONException e) {
    e.printStackTrace();
}
I'm getting a GraphResultSet object back from DSE; it contains vertices and edges, and I want to output it in JSON format.

There is currently no direct way to convert a DSE driver graph object into JSON. However, if you are using DSE driver 1.5.0, you can configure the driver to use GraphSON 1.0 if you are looking for simple JSON responses, and then simply output the String representation of each GraphNode:
DseCluster dseCluster = DseCluster.builder()
        .addContactPoint("127.0.0.1")
        .withGraphOptions(new GraphOptions()
                .setGraphName("demo")
                // GraphSON version is set here:
                .setGraphSubProtocol(GraphProtocol.GRAPHSON_1_0))
        .build();
DseSession dseSession = dseCluster.connect();

// create query
GraphStatement graphStatement = [.....];
GraphResultSet resultSet = dseSession.executeGraph(graphStatement);
for (GraphNode gn : resultSet) {
    String json = gn.toString();
}
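If you need a single JSON document for the whole result rather than one string per row, one option is to parse each node and collect the results into a JSON array. This is only a sketch, assuming Jackson (jackson-databind) is on the classpath and that GraphSON 1.0 is configured as above so that GraphNode.toString() yields plain JSON:

import java.io.IOException;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;

// Sketch: collect every row of the GraphResultSet into one JSON array
String toJsonArray(GraphResultSet resultSet) throws IOException {
    ObjectMapper mapper = new ObjectMapper();
    ArrayNode results = mapper.createArrayNode();
    for (GraphNode gn : resultSet) {
        // With GRAPHSON_1_0 configured, toString() returns the node as plain JSON
        results.add(mapper.readTree(gn.toString()));
    }
    return mapper.writerWithDefaultPrettyPrinter().writeValueAsString(results);
}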

You can't directly cast between com.datastax.driver.dse.graph.DefaultVertex and com.tinkerpop.blueprints.Element.
There is a GraphSONUtils class (src) in the DSE Java driver that should be able to handle these conversions, but because it lives in the "internal" package, expect that it may change at any time.

Related

UpdateReportedPropertiesAsync with complex types?

I'm trying to update some Azure IoT Device Twin properties like this:
static async void MainAsync()
{
    DeviceClient deviceClient = DeviceClient.CreateFromConnectionString(connectionString);
    TwinCollection reportedProperties = new TwinCollection();
    dynamic heatingModes = new[]
    {
        new { Id = "OUT2", Name = "Comfort" },
        new { Id = "OUT", Name = "Away" },
    };
    reportedProperties["heatingMode"] = "AWAY";
    reportedProperties["supportedHeatingModes"] = heatingModes;
    await deviceClient.UpdateReportedPropertiesAsync(reportedProperties);
}
The above code does not work: none of the Device Twin properties are updated.
If I comment out this line everything works fine and the heatingMode property is updated as expected:
reportedProperties["supportedHeatingModes"] = heatingModes;
I have also tried using a regular (non-dynamic) type for heatingModes, but that does not work either.
I also tried to manually serialize the object to JSON:
reportedProperties["supportedHeatingModes"] = JsonConvert.SerializeObject(heatingModes);
But the resulting JSON was ugly, with escaped quotes.
Why doesn't updating the supportedHeatingModes property work for objects based on complex types?
Any other workarounds?
Have a look at the MSDN document Understand and use device twins in IoT Hub, where it is described:
All values in JSON objects can be of the following JSON types: boolean, number, string, object. Arrays are not allowed.
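One workaround that follows from this restriction (a sketch only, using the same anonymous-type syntax as in the question, not an official recommendation) is to reshape the array into a nested object, for example keyed by mode, which the twin does accept:

// Sketch of a workaround: twins accept nested objects but not arrays,
// so key the entries instead of putting them into an array.
reportedProperties["supportedHeatingModes"] = new
{
    comfort = new { Id = "OUT2", Name = "Comfort" },
    away = new { Id = "OUT", Name = "Away" }
};
await deviceClient.UpdateReportedPropertiesAsync(reportedProperties);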

Google diff-match-patch : How to unpatch to get Original String?

I am using the Google diff-match-patch Java library to create a patch between two JSON strings and store the patch in a database.
diff_match_patch dmp = new diff_match_patch();
LinkedList<Patch> diffs = dmp.patch_make(latestString, originalString);
String patch = dmp.patch_toText(diffs); // Store patch to DB
Now, is there any way to use this patch to re-create the originalString by passing in the latestString?
I googled this and found a very old comment on the Google diff-match-patch wiki saying:
Unpatching can be done by just looping through the diff, swapping
DIFF_INSERT with DIFF_DELETE, then applying the patch.
But I did not find any useful code that demonstrates this. How could I achieve this with my existing code? Any pointers or code references would be appreciated.
Edit:
The problem I am facing: in the front end I show a revisions module that lists all the transactions of a particular fragment (for example, an employee's details), i.e. which user updated which details and so on. I recreate the fragment JSON by reverse-applying each patch to get the data for a given transaction and render it as a table (using http://marianoguerra.github.io/json.human.js/). But some of the recreated JSON is not valid, and I get a JSON.parse error.
I was looking to do something similar (in C#), and what is working for me with a relatively simple object is the patch_apply method. This use case seems somewhat missing from the documentation, so I'm answering here. The code is C#, but the API is cross-language:
static void Main(string[] args)
{
    var dmp = new diff_match_patch();

    string v1 = "My Json Object";
    string v2 = "My Mutated Json Object";
    var v2ToV1Patch = dmp.patch_make(v2, v1);
    var v2ToV1PatchText = dmp.patch_toText(v2ToV1Patch); // Persist text to db

    string v3 = "Latest version of JSON object";
    var v3ToV2Patch = dmp.patch_make(v3, v2);
    var v3ToV2PatchTxt = dmp.patch_toText(v3ToV2Patch); // Persist text to db

    // Time to re-hydrate the objects
    var altV3ToV2Patch = dmp.patch_fromText(v3ToV2PatchTxt);
    var altV2 = dmp.patch_apply(altV3ToV2Patch, v3)[0].ToString(); // .get(0) in Java
    var altV2ToV1Patch = dmp.patch_fromText(v2ToV1PatchText);
    var altV1 = dmp.patch_apply(altV2ToV1Patch, altV2)[0].ToString();
}
I am attempting to retrofit this as an audit log, where previously the entire JSON object was saved; as the audited objects have become more complex, the storage requirements have increased dramatically. I haven't yet applied this to large, complex objects, but it is possible to check whether a patch was successful by inspecting the second element of the array returned by patch_apply. This is an array of boolean values, all of which should be true if the patch applied correctly. Checking it helps confirm that the object can be successfully re-hydrated, rather than only finding out later via a parsing error. My prototype C# method looks like this:
private static bool ValidatePatch(object[] patchResult, out string patchedString)
{
    patchedString = patchResult[0] as string;
    var successArray = patchResult[1] as bool[];
    foreach (var b in successArray)
    {
        if (!b)
            return false;
    }
    return true;
}
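Since the original question uses the Java port, a rough Java equivalent of the same round trip might look like the sketch below (patchTextFromDb is a placeholder for the stored patch text; in Java, patch_apply returns an Object[] whose first element is the patched text and whose second is a boolean[] of per-hunk results):

diff_match_patch dmp = new diff_match_patch();

// The patch was stored as patch_make(latestString, originalString),
// so applying it to latestString should give originalString back.
LinkedList<Patch> patches = new LinkedList<Patch>(dmp.patch_fromText(patchTextFromDb));
Object[] result = dmp.patch_apply(patches, latestString);
String recoveredOriginal = (String) result[0];

// Check that every hunk applied cleanly before trusting the result
boolean[] applied = (boolean[]) result[1];
for (boolean ok : applied) {
    if (!ok) {
        throw new IllegalStateException("Patch did not apply cleanly");
    }
}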

Reading HDFS extended attributes in HiveQL

I am working on a use case where we would like to add metadata (e.g. load time, data source...) to raw files as HDFS extended attributes (xattrs).
I was wondering if there was a way for HiveQL to retrieve such metadata in queries in the result set.
This would avoid storing such metadata in each record within raw files.
Would a custom Hive SerDe be a way to make such xattrs available? Otherwise, do you see another way to make this possible?
I am still relatively new to this, so bear with me if I have misused any terms.
Thanks
There may be other ways to implement this, but after I discovered the Hive virtual column INPUT__FILE__NAME, which contains the URL of the source HDFS file, I created a user-defined function (UDF) in Java to read its extended attributes. The function can be used in a Hive query like this:
XAttrSimpleUDF(INPUT__FILE__NAME,'user.my_key')
The (quick and dirty) Java source code of the UDF looks like:
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

public class XAttrSimpleUDF extends UDF {

    public Text evaluate(Text uri, Text attr) {
        if (uri == null || attr == null) return null;
        Text xAttrTxt = null;
        try {
            Configuration myConf = new Configuration();
            // Create a FileSystem handle from the file URI
            URI myURI = URI.create(uri.toString());
            FileSystem fs = FileSystem.get(myURI, myConf);
            // Retrieve the value of the extended attribute
            xAttrTxt = new Text(fs.getXAttr(new Path(myURI), attr.toString()));
        } catch (IOException e) {
            e.printStackTrace();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return xAttrTxt;
    }
}
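For completeness, to call the UDF from HiveQL it has to be packaged into a jar and registered first; something along these lines, where the jar path and package name are placeholders for whatever you actually use:

ADD JAR /path/to/xattr-udf.jar;
CREATE TEMPORARY FUNCTION XAttrSimpleUDF AS 'com.example.hive.XAttrSimpleUDF';

SELECT XAttrSimpleUDF(INPUT__FILE__NAME, 'user.my_key') AS load_time, t.*
FROM my_table t;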
I didn't test the performance of this when querying very large data sets.
I wish extended attributes could be retrieved directly as a virtual column, in the same way as the INPUT__FILE__NAME virtual column.

How to generate JSONP in Badgerfish format

I'm trying to create a Spring MVC controller that returns JSONP in BadgerFish format. My code currently creates the JSONP correctly using Jackson, but I do not know how to specify the BadgerFish format. Assuming that callback is the name of the callback function and summary is my JAXB object, my code is currently:
ObjectMapper objectMapper = new ObjectMapper();
return objectMapper.writeValueAsString(new JSONPObject(callback,summary));
Is there any way to do this using Jackson, or do I have to use another framework? I have found an approach for generating BadgerFish output using RESTEasy, but only for JSON.
I actually managed to solve this with Jettison (I did not find a way to do it with Jackson). The required code is:
Marshaller marshaller = null;
Writer writer = new StringWriter();
AbstractXMLStreamWriter xmlStreamWriter = new BadgerFishXMLStreamWriter(writer);
try {
    marshaller = jaxbContextSummary.createMarshaller();
    marshaller.marshal(myObject, xmlStreamWriter);
} catch (JAXBException e) {
    logger.error("Could not construct JSONP response", e);
}
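To turn that BadgerFish JSON into JSONP, the output can then be wrapped in the callback by hand (callback being the callback name from the original question):

// Wrap the BadgerFish JSON produced by Jettison in the JSONP callback
String jsonp = callback + "(" + writer.toString() + ");";
return jsonp;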

Unable to query a different workspace

I was trying to follow the post Inserting test case result using Java Rally Rest API failing when workspace is different from default set on account to query a test case in a workspace ("/workspace/6749437088") that is not the default workspace, but the query does not return that test case and, in fact, returns nothing. Below is the code I am using. If I instead query with 'not equal', I notice that it returns test cases from the user's default workspace. I am using C# with the Rally REST API runtime v4.0.30319, version 1.0.15.0. Any suggestions? Thanks.
private string GetRallyObj_Ref(string ObjFormttedId)
{
    string tcref = string.Empty;
    try
    {
        string reqType = _Helper.GetRallyRequestType(ObjFormttedId.Substring(0, 2).ToLower());
        Request request = new Request(reqType);
        request.Workspace = "/workspace/6749437088";
        request.Fetch = new List<string>()
        {
            // Other fields can be retrieved here
            "Workspace",
            "Name",
            "FormattedID",
            "ObjectID"
        };
        //request.Project = null;
        string test = request.Workspace;
        request.Query = new Query("FormattedID", Query.Operator.Equals, ObjFormttedId);
        QueryResult qr = _RallyApi.Query(request);
        string objectid = string.Empty;
        foreach (var rslt in qr.Results)
        {
            objectid = rslt.ObjectID.ToString();
            break;
        }
        tcref = "/" + reqType + "/" + objectid;
    }
    catch (Exception ex)
    {
        throw ex;
    }
    return tcref;
}
Sorry, I found the issue: I was feeding the code a project ref, not a workspace ref. I found the correct workspace by using pieces of the code from the answer to this post: Failed when query users in workspace via Rally Rest .net api. By querying the workspace refs of the username I am using, I found the correct workspace ref. Thanks anyway, Kyle.
The code above seems like it should work, so this may be a defect; I'll look into that. In the meantime, if you are just trying to read a specific object from Rally by ObjectID, you should be able to do so like this:
restApi.GetByReference("/testcase/12345",
    "Results", "Verdict", "Duration"); // fetch fields