Built-in MATLAB classes have values for the Description and DetailedDescription attributes:
>> ?handle
ans =
meta.class handle
Package: meta
Properties:
Name: 'handle'
Description: 'Base class for handle classes'
DetailedDescription: ''
[snip]
Similarly, some methods and properties of built-in classes have the same attributes:
>> a = ?containers.Map;
>> a.PropertyList(1)
ans =
meta.property handle
Package: meta
Properties:
Name: 'Count'
Description: 'Number of pairs in the collection'
DetailedDescription: ''
[snip]
How can I set these attributes for my classes/methods/properties?
Set them as attribute arguments on the classdef line:
classdef (Description='A type of story.',...
DetailedDescription='Once upon a time..') MyFairyTaleClass
At the command line:
>> ?MyFairyTaleClass
ans =
meta.class handle
Package: meta
Properties:
Name: 'MyFairyTaleClass'
Description: 'A type of story.'
DetailedDescription: 'Once upon a time..'
Hidden: 0
Sealed: 0
ConstructOnLoad: 0
HandleCompatible: 0
InferiorClasses: {0x1 cell}
This seems to be an undocumented feature.
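The above only sets the class-level attributes. As an untested assumption on my part (again, this is undocumented), the same attribute syntax may also be accepted on properties and methods blocks:
% Untested sketch: Description/DetailedDescription as undocumented block
% attributes on properties and methods; behaviour may differ between releases.
classdef (Description='A type of story.') MyFairyTaleClass
    properties (Description='Name of the hero.')
        hero = 'Cinderella';
    end
    methods (Description='Tells the story.')
        function tell(obj)
            disp(obj.hero);
        end
    end
end
Reading the metadata back is documented, so you can verify whether your release honours them:
>> mc = ?MyFairyTaleClass;
>> {mc.PropertyList.Description}
>> {mc.MethodList.Description}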
/example:
  /{uriParams}:
    get:
      is: [defaultResponses, commonHeaders]
      uriParameters:
        uriParams:
          description: Example description uriParams
      body:
        application/json:
          example: !include examples.example.json
I would like to create a ruleset that checks both the example !include and the traits (defaultResponses, commonHeaders). What I have now works, but only separately: if the "traits" rule and the "example" rule are in the same file, only "traits" runs; if I delete the "traits" rule from the file, the "example" rule runs. I would like them to work together.
I'm also trying to write a ruleset that checks that all field names use camelCase, for example: "camelCase-exampleTwo"
provide-examples:
  message: Always include examples in request and response bodies
  targetClass: apiContract.Payload
  rego: |
    schema = find with data.link as $node["http://a.ml/vocabularies/shapes#schema"]
    nested_nodes[examples] with data.nodes as object.get(schema, "http://a.ml/vocabularies/apiContract#examples", [])
    examples_from_this_payload = { element |
      example = examples[_]
      sourcemap = find with data.link as object.get(example, "http://a.ml/vocabularies/document-source-maps#sources", [])
      tracked_element = find with data.link as object.get(sourcemap, "http://a.ml/vocabularies/document-source-maps#tracked-element", [])
      tracked_element["http://a.ml/vocabularies/document-source-maps#value"] = $node["#id"]
      element := example
    }
    $result := (count(examples_from_this_payload) > 0)
traits:
  message: common default
  targetClass: apiContract.EndPoint
  propertyConstraints:
    apiContract.ParametrizedTrait:
      core.name:
        pattern: defaultResponses
camel-case-fields:
  message: Use camelCase.
  targetClass: apiContract.EndPoint
  if:
    propertyConstraints:
      shacl.name:
        in: ['path']
  then:
    propertyConstraints:
      shacl.name:
        pattern: "^[a-z]+([A-Z][a-z]+)*$"
I have case classes like these
case class PrimeraRemoteCopyConfig(
    links: Option[Vector[JsObject]] = None,
    status: Option[Vector[JsObject]] = None,
    targets: Option[Vector[JsObject]] = None,
    groups: Option[Vector[JsObject]] = None,
    groupTargets: Option[Vector[JsObject]] = None,
    groupVolumes: Option[Vector[JsObject]] = None)

object PrimeraRemoteCopyConfig {
  implicit val _format = Json.format[PrimeraRemoteCopyConfig]
}

case class PrimeraConfig(
    systemUid: String,
    tenantId: String,
    systemWWN: String,
    remoteCopyConfig: Option[PrimeraRemoteCopyConfig] = None)

object PrimeraConfig {
  implicit val _format = Json.format[PrimeraConfig]
}
And I have a Spark Dataset that uses state management via flatMapGroupsWithState.
However, I am getting:
22/05/10 17:07:48 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.0.199, 53277, None)
Exception in thread "main" java.lang.UnsupportedOperationException: No Encoder found for play.api.libs.json.JsValue
- map value class: "play.api.libs.json.JsValue"
- field (class: "scala.collection.Map", name: "underlying")
- array element class: "play.api.libs.json.JsObject"
- option value class: "scala.collection.immutable.Vector"
- field (class: "scala.Option", name: "links")
- option value class: "model.PrimeraRemoteCopyConfig"
- field (class: "scala.Option", name: "remoteCopyConfig")
- root class: "model.PrimeraConfig"
at org.apache.spark.sql.errors.QueryExecutionErrors$.cannotFindEncoderForTypeError(QueryExecutionErrors.scala:1000)
I tried defining an encoder for JsObject using Encoders.product[JsObject], but it does not work. What am I doing wrong here?
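For reference, a minimal reconstruction of that attempt (the EncoderAttempt wrapper and its placement are my guesses, not the original code):
object EncoderAttempt {
  import org.apache.spark.sql.{Encoder, Encoders}
  import play.api.libs.json.JsObject

  // Attempted workaround: ask Spark for a reflection-derived product encoder
  // for JsObject. Spark's derivation still descends into JsObject's underlying
  // Map[String, JsValue] and finds no encoder for JsValue, which is exactly
  // what the stack trace above reports, so this implicit does not help.
  implicit val jsObjectEncoder: Encoder[JsObject] = Encoders.product[JsObject]
}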
Suppose I have the following class:
class ModelConfig(pydantic.BaseModel):
    name: str = "bert"
If I were to instantiate it with model_config = ModelConfig(name2="hello"), this simply ignores the unknown name2 argument and keeps name="bert". Is there a way to make pydantic raise an error for unknown arguments?
You can do this using the forbid value of the extra model config option.
For example:
class ModelConfig(pydantic.BaseModel, extra=pydantic.Extra.forbid):
    name: str = "bert"
Passing model_config = ModelConfig(name2="hello") will now raise a validation error.
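For example, a quick check (assuming pydantic v1, where Extra lives at pydantic.Extra):
import pydantic

class ModelConfig(pydantic.BaseModel, extra=pydantic.Extra.forbid):
    name: str = "bert"

try:
    ModelConfig(name2="hello")
except pydantic.ValidationError as e:
    print(e)  # reports that the field "name2" is not permitted
In pydantic v2 the equivalent is model_config = ConfigDict(extra="forbid") in the class body.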
I am getting a weird error while passing a class and a date to an ActiveJob job with the Sidekiq adapter.
[1] pry(main)> StripeTransactionsSyncJob.perform_later(Stripe::SyncCharges, nil, 3.days.ago.to_date)
ActiveJob::SerializationError: Unsupported argument type: Class
from /home/amit/.rvm/gems/ruby-2.5.8@immosite/gems/activejob-5.0.7.2/lib/active_job/arguments.rb:83:in `serialize_argument'
[2] pry(main)> StripeTransactionsSyncJob.perform_later('Stripe::SyncCharges', nil, 3.days.ago.to_date)
ActiveJob::SerializationError: Unsupported argument type: Date
from /home/amit/.rvm/gems/ruby-2.5.8@immosite/gems/activejob-5.0.7.2/lib/active_job/arguments.rb:83:in `serialize_argument'
As per the doc, ActiveJob should support both types of arguments out of the box. What is wrong here?
The guide you referenced in your post is for Rails v6.1.4; see the version info in the top-right corner of that page.
The guide for v5.0 doesn't explicitly list the supported argument types. Looking at the source code (see below) for the version of Rails you are using, i.e. 5.0.7.2,
def serialize_argument(argument)
  case argument
  when *TYPE_WHITELIST
    argument
  when GlobalID::Identification
    convert_to_global_id_hash(argument)
  when Array
    argument.map { |arg| serialize_argument(arg) }
  when ActiveSupport::HashWithIndifferentAccess
    result = serialize_hash(argument)
    result[WITH_INDIFFERENT_ACCESS_KEY] = serialize_argument(true)
    result
  when Hash
    symbol_keys = argument.each_key.grep(Symbol).map(&:to_s)
    result = serialize_hash(argument)
    result[SYMBOL_KEYS_KEY] = symbol_keys
    result
  else
    raise SerializationError.new("Unsupported argument type: #{argument.class.name}")
  end
end
you can see that the argument types you passed, Class and Date, are not supported, hence you are getting the SerializationError.
Note: whenever referring to the API docs or guides, I recommend viewing them for the specific version of Rails you are using.
Class/Date/DateTime/Time etc. were not supported in Rails 5.0, so I need to pass the data to the job in String form.
For reference, here is the (simplified) method that does the serialization:
def serialize_argument(argument)
  case argument
  when *[ NilClass, String, Integer, Float, BigDecimal, TrueClass, FalseClass ]
    argument
  when GlobalID::Identification
    convert_to_global_id_hash(argument)
  when Array
    argument.map { |arg| serialize_argument(arg) }
  when ActiveSupport::HashWithIndifferentAccess
    result = serialize_hash(argument)
    result[WITH_INDIFFERENT_ACCESS_KEY] = serialize_argument(true)
    result
  when Hash
    symbol_keys = argument.each_key.grep(Symbol).map(&:to_s)
    result = serialize_hash(argument)
    result[SYMBOL_KEYS_KEY] = symbol_keys
    result
  else
    raise SerializationError.new("Unsupported argument type: #{argument.class.name}")
  end
end
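For example, a minimal sketch of the String-only approach (the job body, parameter names, and sync logic below are my guesses, not the real job):
class StripeTransactionsSyncJob < ApplicationJob
  queue_as :default

  # Every argument arrives as an ActiveJob-safe type (String / nil);
  # the job rebuilds the richer objects itself.
  def perform(sync_class_name, options, date_string)
    sync_class = sync_class_name.constantize # "Stripe::SyncCharges" -> Stripe::SyncCharges
    date = Date.parse(date_string)           # "2022-05-07" -> Date
    # ... run the sync using sync_class, options and date ...
  end
end

# Enqueue with Strings only:
StripeTransactionsSyncJob.perform_later('Stripe::SyncCharges', nil, 3.days.ago.to_date.iso8601)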
For internal status logging in my Jenkins pipeline I have created a "template" map which I want to use in multiple stages that run independently in parallel:
def status = [
    a: '',
    b: [
        b1: '',
        b2: '',
        b3: ''
    ],
    c: [
        c1: '',
        c2: ''
    ]
]
I want to pass this status template to multiple functions/executors running in parallel. Inside the parallel branches I want to modify the status independently. See the following minimal example:
def status = [
    a: '',
    b: [
        b1: '',
        b2: '',
        b3: ''
    ],
    c: [
        c1: '',
        c2: ''
    ]
]
def label1 = "windows"
def label2 = ''
parallel firstBranch: {
    run_node(label1, status)
}, secondBranch: {
    run_node(label2, status)
},
failFast: true|false
def run_node(label, status) {
    node(label) {
        status.b.b1 = env.NODE_NAME + "_" + env.EXECUTOR_NUMBER
        sleep(1)
        echo "env.NODE_NAME_env.EXECUTOR_NUMBER: ${status.b.b1}"
        // expected: env.NODE_NAME_env.EXECUTOR_NUMBER
        this.a_function(status)
        echo "env.NODE_NAME_env.EXECUTOR_NUMBER: ${status.b.b1}"
        // expected (still): env.NODE_NAME_env.EXECUTOR_NUMBER (of the current node)
        // is: env.NODE_NAME_env.EXECUTOR_NUMBERmore Info AND probably from the wrong node
    }
}
def a_function(status) {
    status.b.b1 += "more Info"
    echo "env.NODE_NAME_env.EXECUTOR_NUMBERmore Info: ${status.b.b1}"
    // expected: env.NODE_NAME_env.EXECUTOR_NUMBERmore Info
    sleep(0.5)
    echo "env.NODE_NAME_env.EXECUTOR_NUMBERmore Info: ${status.b.b1}"
    // expected: env.NODE_NAME_env.EXECUTOR_NUMBERmore Info
}
Which results in
[firstBranch] env.NODE_NAME_env.EXECUTOR_NUMBER:
LR-Z4933-39110bdb_0
[firstBranch] env.NODE_NAME_env.EXECUTOR_NUMBERmore Info:
LR-Z4933-39110bdb_0more Info
[firstBranch] env.NODE_NAME_env.EXECUTOR_NUMBERmore Info:
LR-Z4933-39110bdb_0more Info
[firstBranch] env.NODE_NAME_env.EXECUTOR_NUMBER:
LR-Z4933-39110bdb_0more Info
[secondBranch] env.NODE_NAME_env.EXECUTOR_NUMBER:
LR-Z4933-39110bdb_0more Info
[secondBranch] env.NODE_NAME_env.EXECUTOR_NUMBERmore Info:
LR-Z4933-39110bdb_0more Infomore Info
[secondBranch] env.NODE_NAME_env.EXECUTOR_NUMBERmore Info:
LR-Z4933-39110bdb_0more Infomore Info
[secondBranch] env.NODE_NAME_env.EXECUTOR_NUMBER:
LR-Z4933-39110bdb_0more Infomore Info
Note that the status in the first branch is overwritten by the second branch, and the other way around.
How can I get independent status variables when passing them as a parameter to functions?
You could define the template map once and, when you need multiple instances of it that you want to modify independently, clone the template map per instance.
Here is a short code snippet to show the idea.
def template = [a: '', b: '']
def instancea = template.clone()
def instanceb = template.clone()
def instancec = template.clone()
instancea.a = 'testa'
instanceb.a = 'testb'
instancec.a = 'testc'
println instancea
println instanceb
println instancec
Of course, you can use a bigger map; the above is only for demonstration.
You are passing status by reference to the function. But even if you do a status.clone(), I suspect this isn't a deep copy of status. status.b probably still points to the same reference. You need to make a deep copy of status and send that deep copy to the function.
I'm not sure a deep copy of a framework map is the right way to do this. You could just send an empty map [:] and let the called functions add the pieces to the map that they need. If you really need to pre-define the content of the map, then I think you should add a class and create new objects from that class.
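If you do want to hand each branch its own pre-filled template, one option (a sketch for a scripted pipeline, assuming the map only holds simple values) is to deep-copy the nested map by round-tripping it through JSON before passing it to the branches:
import groovy.json.JsonOutput
import groovy.json.JsonSlurperClassic

// Serialize the template and parse it back so each branch gets its own,
// fully independent nested copy (JsonSlurperClassic returns plain,
// serializable maps, unlike JsonSlurper's lazy maps).
@NonCPS
def deepCopy(original) {
    new JsonSlurperClassic().parseText(JsonOutput.toJson(original))
}

parallel firstBranch: {
    run_node(label1, deepCopy(status))
}, secondBranch: {
    run_node(label2, deepCopy(status))
},
failFast: true
If script security blocks the Groovy JSON classes, the writeJSON/readJSON steps from the Pipeline Utility Steps plugin can do the same round trip.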