SAP patch SAPK-XXXXX* kind

I know that:
SAP_ABA -> SAPKA?????
SAP_APPL -> SAPKH?????
SAP_BASIS-> SAPKB?????
SAP_BW -> SAPKW?????
SAP_HR -> SAPKE?????
SAP_CRM -> SAPKU?????
But I don't know what SAPK-XXXXX* is. It is often applied to different components. Can anybody explain this kind of package?

These package files are created using the so-called Add-On Assembly Kit (AAK). SAPK is something like the designation of the originating system (think of your normal transports, starting with K; by the way, this is one of the reasons that a SID may never be "SAP", as this would cause collisions here). The dash following SAPK designates an AAK package. This is followed by a version indicator and/or a package type and/or the short package name.


Is there an official Concourse pipeline grammar?

I couldn't find it online after a bit of searching, so I'm asking here. Is there a 'reference' BNF grammar file for Concourse's pipeline YAML? As a side project, I'm trying to create an IntelliJ plugin that could do syntax highlighting and auto-completion for CI/CD Concourse pipelines, and I'd like to avoid manually retyping all that grammar, to minimize error risk and time.
I don't believe there's a "grammar file" - the types are defined in code. For example, the top level pipeline is defined here as:
type Config struct {
    Groups        GroupConfigs    `yaml:"groups" json:"groups" mapstructure:"groups"`
    Resources     ResourceConfigs `yaml:"resources" json:"resources" mapstructure:"resources"`
    ResourceTypes ResourceTypes   `yaml:"resource_types" json:"resource_types" mapstructure:"resource_types"`
    Jobs          JobConfigs      `yaml:"jobs" json:"jobs" mapstructure:"jobs"`
}
The atc.Config.Validate() method is also implemented in code - not driven by an external grammar.
You could probably reason through those source files to determine the structure. It might be possible to generate a JSON schema from the Go types and then use that.
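As a rough sketch of that JSON-schema idea, assuming a third-party reflection library such as github.com/invopop/jsonschema (formerly alecthomas/jsonschema); the simplified stand-in struct below is mine, not Concourse's actual atc.Config:

package main

import (
    "encoding/json"
    "fmt"

    "github.com/invopop/jsonschema"
)

// Simplified stand-in for atc.Config; the real field types live in the Concourse source.
type PipelineConfig struct {
    Groups        []map[string]interface{} `json:"groups"`
    Resources     []map[string]interface{} `json:"resources"`
    ResourceTypes []map[string]interface{} `json:"resource_types"`
    Jobs          []map[string]interface{} `json:"jobs"`
}

func main() {
    // Reflect a JSON schema from the Go type; the same call could be pointed
    // at the real atc.Config instead of this stand-in.
    schema := jsonschema.Reflect(&PipelineConfig{})
    out, _ := json.MarshalIndent(schema, "", "  ")
    fmt.Println(string(out))
}

The resulting schema could then feed the IntelliJ plugin's completion, though it would still need to be kept in sync with the Go types by hand or by a build step.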

rstan::sampling() should not run in #' @examples?

In package development, each example must run in under 5 seconds. However, the pair of stan_model() and rstan::sampling() takes much longer than 5 seconds, as follows:
Examples with CPU or elapsed time > 5s
     user system elapsed
fit  1.25   0.11   32.47
So I wrapped each rstan::sampling() call in \donttest{} in the roxygen #' @examples comments.
Should we simply not run sampling() in #' @examples, or is there some other way to handle this?
I had tried to create my package based on rstan_package_skeleton(path = 'BayesianAAA') when you taught me about it (thank you!!), but I do not understand many things about it.
Previously, rstan_package_skeleton(path = 'BayesianAAA') raised errors on my computer (but now the error does not occur).
So I made my package, say BayesianAAA, without rstan_package_skeleton(); in my original setup I put Model_A.stan, Model_B.stan, Model_C.stan, ... in inst/extdata and refer to my Stan files as follows:
scr <- system.file("extdata", "Model_A.stan", package="BayesianAAA")
scr <- rstan::stan_model(scr)
I have many questions about rstan_package_skeleton(path = 'BayesianAAA').
1) The first question: how do I include my existing .stan files, and how do I refer to them for rstan::stan_model()?
According to the following page, it says that
If we had existing .stan files to include with the package we could use the optional stan_files argument to rstan_package_skeleton to include them.
So I think I should execute something like the following, although I am not sure:
`rstan_package_skeleton(path = 'BayesianAAA', stan_files = "Model_A.stan")`.
But I do not know how to write the call for several Stan files, say Model_A.stan, Model_B.stan, Model_C.stan, from my existing package made without rstan_package_skeleton(). Is the following code correct? I also do not see where the files named in the stan_files argument end up in the new project created by rstan_package_skeleton().
`rstan_package_skeleton(path = 'BayesianAAA', stan_files = c("Model_A.stan", "Model_B.stan", "Model_C.stan"))`.
Here another question arises:
2) The second question: where do I execute rstan_package_skeleton(path = 'BayesianAAA', stan_files = "Model_A.stan")? I executed it from the RStudio console inside my existing package project. Is that correct? The new project then appears, nested inside the old existing project. What should I do?
https://cran.r-project.org/web/packages/rstantools/vignettes/minimal-rstan-package.html
3) I do not know much about the package "rstanarm", but I tried to imitate it for my package; however, I cannot find any .stan file in it. Am I missing something?
I am sorry for my poor English and my lack of study about these things.
I would be grateful if you could tell me.
You generally should not be writing a package that calls stan_model at runtime, unless, like brms or tmbstan, you are generating a Stan program at runtime as opposed to writing it statically. There are dozens of packages on CRAN that provide compiled Stan programs, basically by following the build process developed for rstanarm, which is facilitated by the rstantools::rstan_package_skeleton function, the step-by-step guide, and the developer guidelines, which directly address your question.
CRAN policy permits long installation times but imposes restrictions on the time consumed by examples and unit tests that are much shorter than the time that it takes to compile even a simple Stan program. Thus, it is only possible to adequately test your package if it has pre-compiled Stan programs.
Even then, it can be difficult to sample from a posterior distribution (adequately) in five seconds, so you often have to use small datasets, one chain, a small number of iterations, etc.
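As a minimal sketch of what such an example block might look like in the roxygen comments (the data list is a placeholder for whatever your model expects, and stanmodels$Model_A refers to the pre-compiled model list described below):

#' @examples
#' \donttest{
#' # Keep the run tiny so the example stays fast if it is ever executed
#' fit <- rstan::sampling(stanmodels$Model_A,
#'                        data = list(N = 10, y = rnorm(10)),
#'                        chains = 1, iter = 500, refresh = 0)
#' print(fit)
#' }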
It is best to pass the names of your Stan programs (which should end in a .stan extension, not use a period otherwise, and only have ASCII letters, numbers, and the underscore in their names) to rstantools::rstan_package_skeleton(). If doing so from RStudio, I would call it while not in an existing project. Then:
During installation, all Stan programs will be compiled and saved in the list stanmodels that can then be used by R functions in the package. The rule is that the Stan program compiled from the model code in src/stan_files/foo.stan is stored as list element stanmodels$foo.
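For instance, a hypothetical wrapper in R/ (function and argument names are mine, for illustration only) could look like:

# The pre-compiled model from src/stan_files/Model_A.stan is available
# at run time as stanmodels$Model_A; no call to stan_model() is needed.
fit_model_a <- function(standata, ...) {
  rstan::sampling(stanmodels$Model_A, data = standata, ...)
}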
There are dozens of R packages that have Stan programs in their src/stan_files directory (although the locations of the Stan programs are going to move to inst/stan for the next rstantools release) that for the most part just followed the vignettes and did not have to do any additional steps except write more R functions.

How do I find the version and authority of a Perl 6 module?

In Bar.pm, I declare a class with an authority (author) and a version:
class Bar:auth<Camelia>:ver<4.8.12> {
}
If I use it in a program, how do I see which version of a module I'm using, who wrote it, and how the module loader found it? As always, links to documentation are important.
This question was also asked on perl6-users but died before a satisfactory answer (or links to docs) appeared.
Another wrinkle in this problem is that many people aren't adding that information to their class or module definitions. It shows up in the META.json file but not the code.
(Probably not a satisfying answer, because the facts of the matter are not very satisfying, especially regarding the state of the documentation, but here it goes...)
If the module or class was versioned directly in the source code à la class Bar:auth<Camelia>:ver<4.8.12>, then any code that imports it can introspect it:
use Bar;
say Bar.^ver; # v4.8.12
say Bar.^auth; # Camelia
# ...which is short for:
say Bar.HOW.ver(Bar); # v4.8.12
say Bar.HOW.auth(Bar); # Camelia
The ver and auth methods are provided by:
Metamodel::ClassHOW (although that documentation page doesn't mention them yet)
Metamodel::ModuleHOW (although that documentation page doesn't exist at all yet)
Unfortunately, I don't think the meta-object currently provides a way to get at the source path of the module/class.
By manually going through the steps that use and require take to load compilation units, you can at least get at the prefix path (i.e. which location from $PERL6LIB or use lib or -I etc. it was loaded from):
my $comp-spec = CompUnit::DependencySpecification.new: short-name => 'Bar';
my $comp-unit = $*REPO.resolve: $comp-spec;
my $comp-repo = $comp-unit.repo;
say $comp-repo.path-spec; # file#/home/smls/dev/lib
say $comp-repo.prefix; # "/home/smls/dev/lib".IO
$comp-unit is an object of type CompUnit.
$comp-repo is a CompUnit::Repository::FileSystem.
Neither of those documentation pages exists yet, and $*REPO is only briefly mentioned in the list of dynamic variables.
If the module is part of a properly set-up distribution, you can get at the meta-info defined in its META6.json (as posted by Lloyd Fournier in the mailing list thread you mentioned):
if try $comp-unit.distribution.meta -> %dist-meta {
    say %dist-meta<ver>;
    say %dist-meta<auth>;
    say %dist-meta<license>;
}

Psychopy and pylink example

I'm working on integrating an experiment in PsychoPy with the EyeLink eyetracking system. The way to do this seems to be through pylink. Unfortunately I'm really unfamiliar with pylink, and I was hoping there was a sample experiment that combines the two, but I haven't been able to find one. If anyone could share an example or point me towards a more accessible manual than the pylink API that SR Research provides, I'd be really grateful.
Thanks!
I am glad you found your solution. I have not used iohub, but we do use psychopy and an eyelink and therefore some of the following code may be of use to others who wish to invoke more direct communication. Note that our computers use Archlinux. If none of the following makes any sense to you, don't worry about it, but maybe it will help others who are stumbling along the same path we are.
Communication between experimental machine and eye tracker machine
First, you have to establish communication with the Eyelink. If your experimental machine is turned on and plugged into a live Eyelink computer, then on Linux you first have to bring your Ethernet interface up and then set the default address that the Eyelink uses (this also works for the Eyelink 1000 - they kept the same address). Note that your interface will probably have a different name than enp4s0; try simply running ip link and look for something similar. NB: these commands are typed into a terminal.
#To set up connection with Eyelink II computer:
#ip link set enp4s0 up
#ip addr add 100.1.1.2/24 dev enp4s0
Eyetracker functions
We have found it convenient to write some functions for talking to the Eyelink computer. For example:
Initialize Eyetracker
sp refers to the tuple of screenx, screeny sizes.
def eyeTrkInit(sp):
    el = pl.EyeLink()
    el.sendCommand("screen_pixel_coords = 0 0 %d %d" % sp)
    el.sendMessage("DISPLAY_COORDS 0 0 %d %d" % sp)
    el.sendCommand("select_parser_configuration 0")
    el.sendCommand("scene_camera_gazemap = NO")
    el.sendCommand("pupil_size_diameter = %s" % ("YES"))
    return el
NB: the pl prefix comes from import pylink as pl. Also, note that there is another Python library called pylink that you can find online; it is probably not the one you want. Go through the Eyelink forum and get pylink from there. It is old, but it still works.
Calibrate Eyetracker
el is the eyetracker object initialized above, sp is the screen size, and cd is the color depth, e.g. 32.
def eyeTrkCalib(el, sp, cd):
    pl.openGraphics(sp, cd)
    pl.setCalibrationColors((255, 255, 255), (0, 0, 0))
    pl.setTargetSize(int(sp[0]/70), int(sp[1]/300))
    pl.setCalibrationSounds("", "", "")
    pl.setDriftCorrectSounds("", "off", "off")
    el.doTrackerSetup()
    pl.closeGraphics()
    #el.setOfflineMode()
Open datafile
You can talk to the eye tracker and do things like opening a file
def eyeTrkOpenEDF(dfn, el):
    el.openDataFile(dfn + '.EDF')
Drift correction
Or drift correct
def driftCor(el, sp, cd):
    blockLabel = psychopy.visual.TextStim(expWin, text="Press the space bar to begin drift correction",
                                          pos=[0, 0], color="white", bold=True,
                                          alignHoriz="center", height=0.5)
    notdone = True
    while notdone:
        blockLabel.draw()
        expWin.flip()
        if keyState[key.SPACE] == True:
            eyeTrkCalib(el, sp, cd)
            expWin.winHandle.activate()
            keyState[key.SPACE] = False
            notdone = False
Sending and getting messages
There are a number of built-in variables you can set, or you can add your own. Here is an example of sending a message from your Python program to the Eyelink:
eyelink.sendMessage("TRIALID "+str(trialnum))
eyelink.startRecording(1,1,1,1)
eyelink.sendMessage("FIX1")
tFix1On=expClock.getTime()
Gaze contingent programming
Here is a portion of some code that uses the eyelink's most recent sample in the logic of the experimental program.
while notdone:
    if recalib == True:
        dict['recalib'] = True
        eyelink.sendMessage("RECALIB END")
        eyelink.startRecording(1, 1, 1, 1)
        recalib = False
    eventType = eyelink.getNextData()
    if eventType == pl.STARTFIX or eventType == pl.FIXUPDATE or eventType == pl.ENDFIX:
        sample = eyelink.getNewestSample()
        if sample != None:
            if sample.isRightSample():
                gazePos = sample.getRightEye().getGaze()
            if sample.isLeftSample():
                gazePos = sample.getLeftEye().getGaze()
            gazePosCorFix = [gazePos[0] - scrx/2, -(gazePos[1] - scry/2)]
            posPix = posToPix(fixation)
            eucDistFix = sqrt((gazePosCorFix[0] - posPix[0])**2 + (gazePosCorFix[1] - posPix[1])**2)
            if eucDistFix < tolFix:
                core.wait(timeFix1)
                notdone = False
                eyelink.resetData()
                break
Happy Hacking.
Rather than PyLink, you might want to look into using the ioHub system within PsychoPy. This is a more general-purpose eye tracking system that also allows for saving data in a common format (integrated with PsychoPy events), and provides tools for data analysis and visualisation.
ioHub is built to be agnostic to the particular eye tracker you are using. You just need to create a configuration file specific to your EyeLink system, and thereafter use the generic functions ioHub provides for calibration, accessing gaze data in real-time, and so on.
There are some teaching resources accessible here: http://www.psychopy.org/resources/ECEM_Python_materials.zip
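As a very rough sketch of what that generic interface looks like in practice (the device settings below are placeholders, and the exact configuration keys should be checked against the ioHub documentation for your EyeLink; normally they would live in a YAML configuration file rather than an inline dict):

from psychopy.iohub import launchHubServer

# Declare an EyeLink device to ioHub; defaults are filled in for any
# settings not given here.
devices_config = {
    'eyetracker.hw.sr_research.eyelink.EyeTracker': {'name': 'tracker'},
}
io = launchHubServer(**devices_config)

tracker = io.devices.tracker          # generic eye tracker interface
tracker.runSetupProcedure()           # tracker-driven calibration / validation
tracker.setRecordingState(True)       # start recording

gaze = tracker.getLastGazePosition()  # most recent gaze position, or None

tracker.setRecordingState(False)
io.quit()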
For future readers, I wanted to share my library for combining pylink and PsychoPy. I've recently updated it to work with Python 3. It provides simple-to-use, high-level functions.
https://github.com/colinquirk/templateexperiments/tree/master/eyelinker
You could also work at a lower level with the PsychoPyCustomDisplay class (see the pylink docs for more info about EyeLinkCustomDisplay).
For an example of it in use, see:
https://github.com/colinquirk/ChangeDetectionEyeTracking
(At the time of writing, this experiment code is not yet python 3 ready, but it should still be a useful example.)
The repo also includes other modules for creating experiments and recording EEG data, but they are not necessary if you are just interested in the eyelinker code.

Branching with clearcase remote client

I am trying to branch a file in ClearCase Remote Client.
I have the branch set up and the config spec is updated to handle the branch.
But I can't find the option, and the googling isn't helping much.
The way I understand your question, it sounds like you want to somehow select a command from a ClearCase RC menu and have the branch explicitly created(?)
ClearCase has no explicit "Generate Branch for this File" command; you would want the "Checkout" command in this case. Branching is indirect and is a result of checking out a version of a file in a view whose config spec contains the -mkbranch operation. I.e. the following config spec will create the dev_1.0_branch once I check a file out (for any and all VOBs and files):
element * CHECKEDOUT
element * .../dev_1.0_branch/LATEST
element * /main/LATEST -mkbranch dev_1.0_branch
The first line is standard for views in which you are doing development, line 2 will assure that I see any file that has a dev_1.0_branch (particularly important for the checkout+mkbranch to work as expected :-), and line 3 will select the latest version of any file that does not have a dev_1.0_branch and will create the branch if (and only if) the file version selected by that rule is checked out.
Please let me know if any of the above sounds greek to you, particularly any of the config spec rules. Having worked with ClearCase for a long time, I assume and use a lot of its terminology and concepts as if it's common knowledge :-P.
One thing of note: if you checkout the file, then immediately uncheckout the file, you will leave an empty branch on that file (i.e. in the above you would have a file with a version such as foo.c@@/main/dev_1.0_branch/0, but no /main/dev_1.0_branch/1 version). Many sites prefer to keep the version tree clean and remove empty branches (a script for doing so can be found in an IBM Rational technical article).
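For reference, removing such an empty branch from the command line would look something like this (using the element from the example above):

cleartool rmbranch -force -nc foo.c@@/main/dev_1.0_branch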
Just to be clear, I'm familiar with ClearCase Base & ClearCase MultiSite, but have not worked with the Remote Client yet.
--- 2009-Jun-29 Update
In response to Paul's comment below, if you want to be selective in what files are branched, you can modify the "*" to be more specific. For example, if you want to only branch foo.c in the FOODEV VOB, but leave everything else on main:
UNIX config spec:
element * CHECKEDOUT
element * .../my_dev_branch/LATEST
element /vobs/FOODEV/src/foo.c -mkbranch my_dev_branch
element * /main/LATEST
(For Windows, you would want to use Windows conventions, i.e. \FOODEV\src\foo.c.)
You can also select a directory and all elements below the directory (again UNIX config spec):
element * CHECKEDOUT
element * .../my_dev_branch/LATEST
element /vobs/FOODEV/src/mycomponent/... -mkbranch my_dev_branch
element * /main/LATEST
The man page for config_spec (cleartool man config_spec from the command line on Windows or UNIX) provides decent guidance in the "Pattern" section for how to write the element/version selector (2nd column).
You can do a lot of complex version selection with the config specs. Please let me know if you would like more details or specifics.
Here's a config spec that I used for fixing a particular bug, with names changed to disguise some of the guilty.
element * CHECKEDOUT
element * .../TEMP.bugnum171238.jleffler/LATEST
mkbranch -override TEMP.bugnum171238.jleffler
include /clearcase/cspecs/project/version-1.23.45
To create the branch, in each VOB, I used a command:
ct mkbrtype -c 'Branch for bug 171238' TEMP.bugnum171238.jleffler#/vobs/project
Previously, we used config specs with -mkbranch rules appended to the various element lines.