Ember Data: serializing embedded records asymmetrically

Ember: 1.5.1 ember.js
Ember Data: 1.0.0-beta.7.f87cba88
I have a need for asymmetrical (de)serialization for one relationship type: sideloaded records on deserializing and embedded on serializing.
I have asked for this in the standard way:
RailsEmberTest.PlanItemSerializer = DS.ActiveModelSerializer.extend(DS.EmbeddedRecordsMixin, {
  attrs: {
    completions: { serialize: 'records', deserialize: 'ids' } // was: embedded: 'always'
  }
});
However, it doesn't seem to work. Following the execution through, I find that at line 498 of Ember Data, the serializer decides whether or not to embed a relationship:
embed = attrs && attrs[key] && attrs[key].embedded === 'always';
At this stage, the attrs hash is well-formed, with completions containing the attributes as above. However, this line results in embed being false, and consequently the record is not embedded.
Overriding the value of embed to true makes it all hunky-dory.
Any ideas why Ember Data is ignoring the settings? I suspect that in my version the only supported option is embedded, and that I need to upgrade to a later version to take advantage of the asymmetrical serialize and deserialize settings.
However, given the possible manifold changes, I am fearful of upgrading!
I'd be very grateful for your advice.

Courtesy of the London Ember meetup, I now know that it was simply down to the version of Ember Data! Now upgraded to the latest beta with no trouble.

system_cpu_usage is NaN when compiled to native

In my Quarkus application I'm using Micrometer to collect metrics (as in this guide: https://quarkus.io/guides/micrometer).
In JVM mode everything works fine, but in native mode system_cpu_usage is NaN.
I tried bumping Micrometer to 1.8.4 and adding:
{
  "name": "com.sun.management.OperatingSystemMXBean",
  "allPublicMethods": true
},
to my reflect-config.json, but no luck. I also tried generating the reflect-config (and the other native configuration files) with the GraalVM tracing agent, but still no luck.
This may be a bug.
Micrometer is looking for a few known implementations of the MXBean:
https://github.com/micrometer-metrics/micrometer/blob/b087856355667abf9bf2386265edef8642e0e077/micrometer-core/src/main/java/io/micrometer/core/instrument/binder/system/ProcessorMetrics.java#L55
private static final List<String> OPERATING_SYSTEM_BEAN_CLASS_NAMES = Arrays.asList(
        "com.ibm.lang.management.OperatingSystemMXBean", // J9
        "com.sun.management.OperatingSystemMXBean" // HotSpot
);
so that it can find the methods that it should be invoking...
https://github.com/micrometer-metrics/micrometer/blob/b087856355667abf9bf2386265edef8642e0e077/micrometer-core/src/main/java/io/micrometer/core/instrument/binder/system/ProcessorMetrics.java#L80
this.operatingSystemBean = ManagementFactory.getOperatingSystemMXBean();
this.operatingSystemBeanClass = getFirstClassFound(OPERATING_SYSTEM_BEAN_CLASS_NAMES);
Method getCpuLoad = detectMethod("getCpuLoad");
this.systemCpuUsage = getCpuLoad != null ? getCpuLoad : detectMethod("getSystemCpuLoad");
this.processCpuUsage = detectMethod("getProcessCpuLoad");
(Note specifically getFirstClassFound, which is constrained to the class names in that list.)
This is speculation on my part, but I suspect Graal is returning a different type, which is possible judging from here:
https://github.com/oracle/graal/blob/6ba65dad76a4f54fa59e1ed2a62dedd3afe39928/substratevm/src/com.oracle.svm.core/src/com/oracle/svm/core/jdk/management/ManagementSupport.java#L166
It would take some digging to know which, but I would open an issue with Micrometer so we can sort it out.
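If you want to do that digging yourself, a tiny probe compiled into the native image will show what Graal actually returns (a sketch; the class name MxBeanProbe is mine, only the JDK calls are real):
import java.lang.management.ManagementFactory;

// Prints the runtime type of the OS MXBean and whether it implements the
// HotSpot interface that Micrometer's detectMethod() reflects on.
public class MxBeanProbe {
    public static void main(String[] args) {
        Object bean = ManagementFactory.getOperatingSystemMXBean();
        System.out.println(bean.getClass().getName());
        System.out.println(bean instanceof com.sun.management.OperatingSystemMXBean);
    }
}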

Why do the persist() and cache() methods shorten the DataFrame plan in Spark?

I am working with Spark 3.0.1 and generating a large DataFrame. At the end of the calculations I save the DataFrame's plan in JSON format; I need it later.
But there is one problem: if I persist the DataFrame, its plan in JSON format is completely truncated, i.e. all the data lineage disappears.
For example, I do this:
val myDf: DataFrame = ???
val myPersistDf = myDf.persist
//toJSON method cuts down my plan
val jsonPlan = myPersistDf.queryExecution.optimizedPlan.toJSON
As a result, only information about the current columns remains.
With Spark 3.1.2, for example, there is no such problem: the plan is not cut.
It is also worth noting that if you do not call the toJSON method, the plan is not cut:
// Plan is not being cut.
val textPlan = myPersistDf.queryExecution.optimizedPlan.toString
I made a small test project to confirm this:
https://github.com/MinorityMeaning/CutPlanDataFrame
Please help me figure it out. I need to get the full plan in JSON format.
UPD(1):
Now I'm trying to convert each node to JSON separately. It doesn't work perfectly yet, but I think this is the right direction.
val jsonPlan = s"[${getJson(result_df.queryExecution.optimizedPlan).mkString(",")}]"
def getJson(lp: TreeNode[_]): Seq[String] = {
val children = (lp.innerChildren ++ lp.children.map(c => c.asInstanceOf[TreeNode[_]])).distinct
JsonMethods.compact(JsonMethods.render(JsonMethods.parse(lp.toJSON)(0))) +:
getJson(t.asInstanceOf[TreeNode[_]])))
children.flatMap(t => getJson(t))
}
UPD(2):
OK, here's how I finally solved this problem.
I downloaded the Spark 3.0.1 sources from GitHub, replaced the TreeNode class in the project with the file from Spark 3.1.2, and recompiled. As a result I got a spark-catalyst_2.12-3.0.1.jar, which replaced the existing original jar.
Switching to another version of Spark was not an option for me, and I have not found any other solution to the problem.
Thank you guys for the hints. Your advice helped.
You can cherry-pick the two commits below onto Spark 3.0.1 to fix this issue (apply the older one first, e.g. git cherry-pick 9804f07c17 1603775934):
* 1603775934 - [SPARK-35411][SQL][FOLLOWUP] Handle Currying Product while serializing TreeNode to JSON (8 months ago) <Tengfei Huang>
* 9804f07c17 - [SPARK-35411][SQL] Add essential information while serializing TreeNode to json (9 months ago) <Tengfei Huang>
Well, you already kind of have your answer.
TL;DR: Upgrade.
You could look into the GitHub repo at the logical-plan code and look for differences in how logical plans are serialized between 3.0.1 and 3.1.2 (or look at the code for persist and see how it changed). You could then backport a patch to 3.0.1. But you'd still need to build a new version of Spark and deploy it so that the full plan is returned, and if you are doing all that work, why not upgrade to 3.1.2, which you know works (or to some later version of Spark)?
(You must have some dependency on a sub-component that is only compatible with 3.0.1?)

How to mimic OptaPlanner 7's MoveIteratorFactoryToMoveSelectorBridge in OptaPlanner 8

We have implemented a custom construction heuristic with OptaPlanner 7. We didn't use a simple CustomPhaseCommand; instead, we extend MoveSelectorConfig and override buildBaseMoveSelector to return our own MoveFactory wrapped in a MoveIteratorFactoryToMoveSelectorBridge. We decided to do so because it gives us the following advantages:
global termination config is supported out of the box
type-safe configuration from code (no raw Strings)
With OptaPlanner 8 the method buildBaseMoveSelector is gone from the MoveSelectorConfig API, and building a custom config class seems to be prevented by the new implementation of MoveSelectorFactory.
Is it still possible to inject a proper custom construction heuristic into the OptaPlanner 8 configuration, and if yes, how? Or should we be using a CustomPhaseCommand with a custom, self-implemented termination?
EDIT:
For clarity, in OptaPlanner 7 we had the following snippet in our OptaPlanner config (defined in Kotlin code):
ConstructionHeuristicPhaseConfig().apply {
    foragerConfig = ConstructionHeuristicForagerConfig().apply {
        pickEarlyType = FIRST_FEASIBLE_SCORE
    }
    entityPlacerConfig = QueuedEntityPlacerConfig().apply {
        moveSelectorConfigList = listOf(
            CustomMoveSelectorConfig().apply {
                someProperty = 1
                otherProperty = 0
            }
        )
    }
},
CustomMoveSelectorConfig extends MoveSelectorConfig and overrides buildBaseMoveSelector:
class CustomMoveSelectorConfig(
    var someProperty: Int = 0,
    var otherProperty: Int = 0,
) : MoveSelectorConfig<CustomMoveSelectorConfig>() {
    override fun buildBaseMoveSelector(
        configPolicy: HeuristicConfigPolicy?,
        minimumCacheType: SelectionCacheType?,
        randomSelection: Boolean,
    ): MoveSelector {
        return MoveIteratorFactoryToMoveSelectorBridge(
            CustomMoveFactory(someProperty, otherProperty),
            randomSelection
        )
    }
}
To summarize: we really need to plug in our own MoveSelector with the custom factory. I think this is not possible with OptaPlanner 8 at the moment.
Interesting extension.
Motivation for the changes in 8:
The buildBaseMoveSelector method was not public API (the config package was not in the api package; in 7 we only guaranteed XML backwards compatibility for the config package). Now we also guarantee API backwards compatibility for the config package, so including programmatic configuration, because we moved all build* methods out of it.
In 8.2 or later we want to internalize the configuration in the SolverFactory, so we can build thousands of Solver instances faster. For example, class loading would then be done only once, at SolverFactory build time, and no longer at every Solver build.
Anyway, let's first see if you can use the default way to override the moves of the CH, by explicitly configuring the queuedEntityPlacer and its MoveIteratorFactory: https://docs.optaplanner.org/latestFinal/optaplanner-docs/html_single/#allocateEntityFromQueueConfiguration
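Programmatically, that default way would look roughly like this (a sketch against the 8.x public config API; CustomMoveFactory is the question's MoveIteratorFactory implementation, everything else is standard config):
import java.util.Collections;
import org.optaplanner.core.config.constructionheuristic.ConstructionHeuristicPhaseConfig;
import org.optaplanner.core.config.constructionheuristic.placer.QueuedEntityPlacerConfig;
import org.optaplanner.core.config.heuristic.selector.move.MoveSelectorConfig;
import org.optaplanner.core.config.heuristic.selector.move.factory.MoveIteratorFactoryConfig;

// Wire the custom MoveIteratorFactory into the CH through public config classes.
MoveIteratorFactoryConfig moveConfig = new MoveIteratorFactoryConfig();
moveConfig.setMoveIteratorFactoryClass(CustomMoveFactory.class);
// someProperty/otherProperty would need another channel, for example
// moveIteratorFactoryCustomProperties (check availability in your version).

QueuedEntityPlacerConfig placerConfig = new QueuedEntityPlacerConfig();
placerConfig.setMoveSelectorConfigList(Collections.singletonList((MoveSelectorConfig) moveConfig));

ConstructionHeuristicPhaseConfig chConfig = new ConstructionHeuristicPhaseConfig();
chConfig.setEntityPlacerConfig(placerConfig);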
I guess not, because you'd need mimic support: for the selected entity you need to generate n moves, but always for the same entity during one placement (hence the need for mimicking).
Clearly, the changes in 8 prevent users from plugging in their own MoveSelectors (which is an internal API, but anyway). We might be able to add an internal API to allow that again.
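In the meantime, the CustomPhaseCommand fallback raised in the question would look something like this (a sketch; MySolution, MyEntity, MyValue and pickValue are hypothetical, only the OptaPlanner types are real, and the phase's termination is not checked for you inside the loop, which is exactly the drawback discussed above):
import org.optaplanner.core.api.score.director.ScoreDirector;
import org.optaplanner.core.impl.phase.custom.CustomPhaseCommand;

public class MyConstructionHeuristic implements CustomPhaseCommand<MySolution> {

    @Override
    public void changeWorkingSolution(ScoreDirector<MySolution> scoreDirector) {
        MySolution solution = scoreDirector.getWorkingSolution();
        for (MyEntity entity : solution.getEntityList()) {
            // Assign each entity in turn and notify the score director.
            scoreDirector.beforeVariableChanged(entity, "value");
            entity.setValue(pickValue(entity, scoreDirector));
            scoreDirector.afterVariableChanged(entity, "value");
            scoreDirector.triggerVariableListeners();
        }
    }

    private MyValue pickValue(MyEntity entity, ScoreDirector<MySolution> scoreDirector) {
        // Hypothetical placeholder: try candidate values, call
        // scoreDirector.calculateScore() and return the first feasible one.
        return null;
    }
}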

Ruby MongoDB: three newly created objects don't appear to exist during testing

I'm using MongoDB to store some data, and I have a function that fetches the object with the latest timestamp and the one with the oldest. I haven't experienced any issues with this method in development or production, but when I implement a test for it, the test fails roughly 20% of the time: I create three objects with different timestamps but get a nil response, because my dataset contains 0 objects. I'm using RSpec to test this method, and I'm not using Mongoid or MongoMapper. I have read a lot of articles about write_concern and "unsafe writes" possibly being the problem, and I have tried almost all combinations of the relevant parameters (w, fsync, j, wtimeout) without success. Does anyone have any idea how to solve this? Perhaps I have focused too much on the write_concern track and the problem lies somewhere else.
This is the method that fetches the latest and oldest timestamp.
def first_and_last_timestamp(customer_id, system_id)
  last = collection(customer_id).
    find({sid: system_id}).
    sort(["t", Mongo::DESCENDING]).
    limit(1).next()
  first = collection(customer_id).
    find({sid: system_id}).
    sort(["t", Mongo::ASCENDING]).
    limit(1).next()
  { min: first["t"], max: last["t"] }
end
I'm inserting data using this method, where data is a JSON object.
def insert(customer_id, data)
  collection(customer_id).insert(data)
end
I have reverted to the default way of setting up my connection:
Mongo::MongoClient.new(mongo_host, mongo_port)
I'm using the mongo gem (1.10.2). I'm not using any fancy setup for my Mongo database; I've just installed MongoDB using brew on my Mac and started it. The database version is v2.6.1.

RallyDev: ConversationPost - how to query for Discussions on a RallyDev Story

I am using the following RallyApi service to communicate with RallyDev:
https://rally1.rallydev.com/slm/webservice/1.40/RallyService
I have the following method:
public HierarchicalRequirement GetFeedbackById(string usid)
{
    var query = string.Format("(FormattedID = \"{0}\")", usid);
    const string orderByString = "CreationDate desc";
    var rallyService = GetRallyService();
    var rtnval = rallyService.query(Workspace, Projs["XXX"], true, true, "HierarchicalRequirement", query,
        orderByString, true, 1, 20).Results[0] as HierarchicalRequirement;
    return rtnval;
}
Although I am successfully retrieving the HierarchicalRequirement object using the FormattedID, I am not able to load the associated ConversationPost objects for this story, since all the nested complex objects of the HierarchicalRequirement contain only the ref and reffield properties and nothing else.
Could you please let me know if there is a way to eagerly load all the associated discussions when querying for the story, or whether there is a query as follows:
rallyService.query(Workspace, Projs["XXX"], true, true, "ConversationPost", query, orderByString, true, 1, 20)
Using the above, can I search for discussions (ConversationPost) using FormattedID?
Thanks for your help.
Regards,
Varun
You're right on target with your use of rallyService.query(). With SOAP, even with fetchFullObjects=true, any Artifact attributes that are themselves Rally objects are hydrated only with refs to those objects.
Especially if you're just getting started with building your integration, I'd highly recommend using REST:
http://developer.help.rallydev.com/rest-apis
instead of SOAP.
REST is more robust and more performant, and the soon-to-be-released Webservices API 1.41 will be the final API release to have SOAP support. Webservices 2.x will be REST-only, so using REST will be essential for anyone wanting new Webservices features moving forward.
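For Discussions specifically, the REST equivalent of the query sketched in the question would be a GET against the conversationpost endpoint, along these lines (untested; the Artifact.FormattedID attribute path and the US123 ID are my assumptions):
https://rally1.rallydev.com/slm/webservice/1.40/conversationpost.js?query=(Artifact.FormattedID = "US123")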