I have a FITS file of a CMB full-sky map, with T, Q, U values.
Using the function healpy.fitsfunc.getformat(t), the output is the "FITS convention format string of data type t".
For my FITS file, I get "A29". Is this a standardized FITS convention?
It seems like it would be useful to know a good pattern for making an n-step composition or pipeline from a binary function. Maybe it's obvious or common knowledge.
What I was trying to do was R.either(predicate1, predicate2, predicate3, ...), but R.either is one of these binary functions. I thought R.composeWith might be part of a good solution but didn't get it to work right. I suspect R.o is at the heart of it, or perhaps R.chain somehow.
Maybe there's a totally different way to make an n-ary either that would be better than a "compose-with"(R.either)... I'm interested if so, but I'm trying to ask a more general question than that.
One common way of converting a binary function into one that accepts many arguments is with R.reduce. This requires the arguments of the binary function and its return value to all have the same type.
For your example with R.either, it would look like:
const eithers = R.reduce(R.either, R.F)
const fooOr42 = eithers([ R.equals("foo"), R.equals(42) ])
This accepts a list of predicate functions that will each be given as arguments to R.either.
The fooOr42 example above is equivalent to:
const fooOr42 = R.either(R.either(R.F, R.equals("foo")), R.equals(42))
You can also make use of R.unapply if you want to convert the function from accepting a list of arguments to accepting a variable number of arguments.
const eithers = R.unapply(R.reduce(R.either, R.F))
const fooOr42 = eithers(R.equals("foo"), R.equals(42))
The approach above can be used for any type whose values can be combined to produce a value of the same type, i.e. any type with a "monoid" instance. This just means that we have a binary function that combines two values of the type and some "empty" value, which together satisfy some simple laws:
Associativity: combine(a, combine(b, c)) == combine(combine(a, b), c)
Left identity: combine(empty, a) == a
Right identity: combine(a, empty) == a
Some examples of common types with a monoid instance include:
arrays, where the empty list is the empty value and concat is the binary function.
numbers, where 1 is the empty value and multiply is the binary function.
numbers, where 0 is the empty value and add is the binary function.
In the case of your example, we have predicates (functions returning a boolean value), where the empty value is R.F (a.k.a. (_) => false) and the binary function is R.either. You can also combine predicates using R.both with an empty value of R.T (a.k.a. (_) => true), which will ensure the resulting predicate satisfies all of the combined predicates, as sketched below.
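A minimal sketch of that dual combination with R.both (the predicates here are just illustrative):
const alls = R.reduce(R.both, R.T)
const smallEven = alls([ n => n % 2 === 0, n => n < 10 ])
smallEven(4)  //=> true
smallEven(12) //=> false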
It is probably also worth mentioning that you could alternatively just use R.anyPass :)
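For the running example, that would look something like:
const fooOr42 = R.anyPass([ R.equals("foo"), R.equals(42) ])
fooOr42(42)    //=> true
fooOr42("bar") //=> false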
I want to convert a string into an xstring. I know that there is a function module named "SCMS_STRING_TO_XSTRING",
but since it is no longer considered good practice to use function modules, a class-based solution would be my preferred way to go.
I know that there is a class
cl_abap_conv_in_ce
but I can only verify that this class converts xstrings into strings. I want the reverse. Does anyone have experience with how to do that in a class-based way?
In the meantime, I found the solution on my own. For those who might be interested:
DATA(lo_conv) = cl_abap_conv_out_ce=>create( ). " output converter, default encoding (UTF-8)
lo_conv->write( data = lv_content ).            " lv_content holds the source string
DATA(lv_xstring) = lo_conv->get_buffer( ).      " the converted xstring
The help text for XSTRING provides a nice functional method for this:
cl_abap_codepage=>convert_to( )
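A minimal call might look like this (a sketch; the variable names are placeholders, and the codepage defaults to UTF-8 if not specified):
DATA(lv_string) = `Hello world`.
DATA(lv_xstring) = cl_abap_codepage=>convert_to( source = lv_string ).
" explicit variant: cl_abap_codepage=>convert_to( source = lv_string codepage = `UTF-8` )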
Firstly, you need to decide how you want it encoded. UTF-8? UTF-16? Just plain HEX?
For UTF-8, you can do the following using system calls (instead of function calls):
First do a global once-off initialization:
STATICS: g_conv_utf8 TYPE xstring. " used for conversion
DATA: l_flags TYPE c LENGTH 1.
SYSTEM-CALL CONVERT ID 20
  SRCENC 'SET LOCALE LANGUAGE'
  DSTENC 'UTF-8'
  REPLACEMENT '#'
  TYPE l_flags
  CINFO g_conv_utf8.
And then do subsequent calls: l_string -> l_xstring (+ l_length)
SYSTEM-CALL CONVERT ID 24
  DATA l_string
  ENDIAN ' '
  IGNORE_CERR 'X'
  N -1
  BUFFER l_xstring
  LEN l_length
  CINFO g_conv_utf8.
This is the essence of what cl_abap_codepage=>convert_to( ) does internally.
Given a .tfrecord file, we can define a record iterator
record_iterator = tf.python_io.tf_record_iterator(path=record)
Then parse it using
example = tf.train.SequenceExample()
for element in record_iterator:
    example.ParseFromString(element)
The question is: how do we infer all the field names in the context?
If we know the structure in advance, we can say example.context.feature["width"]. In addition, str(example.context) returns a string with the entire record structure. However, I was wondering if there is any built-in function to get the field names and avoid parsing this string (e.g. by searching for "key").
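For illustration, since context.feature is a protobuf map field, I believe its keys can be listed directly; a sketch under that assumption (record is the file path from above):
import tensorflow as tf

example = tf.train.SequenceExample()
for element in tf.python_io.tf_record_iterator(path=record):
    example.ParseFromString(element)
    print(list(example.context.feature.keys()))  # context field names of this record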
I am trying to convert a column in a dataframe from float to string. I have tried
df = readtable("data.csv", coltypes = {String, String, String, String, String, Float64, Float64, String});
but I got this complaint:
syntax: { } vector syntax is discontinued
I also have tried
dfB[:serial] = string(dfB[:serial])
but it didn't work either. So, I'd like to know the proper approach to change a column's data type in Julia.
Thanks.
On your first attempt, Julia tells you what the problem is: you can't make a vector with {}, you need to use []. Also, the name of the keyword argument should be eltypes rather than coltypes.
On the second try, you don't have a float, you have a Vector of floats. So to change the type you need to change the type of all elements. In Julia, elementwise operations on vectors are generalized by the 'dot' syntax, e.g. string.(collect(dfB[:serial])). The collect is currently needed to cast the DataArray to a normal Array first; this will fail if the DataArray contains NAs. IMHO the DataFrames interface is still rather wonky, so expect a few headaches like this ATM.
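Putting both fixes together might look like this (a sketch; the column types and the df/dfB names are taken from the question, and the :serial column is assumed to contain no NAs):
df = readtable("data.csv", eltypes = [String, String, String, String, String, Float64, Float64, String]);
dfB[:serial] = string.(collect(dfB[:serial]))  # elementwise conversion of each Float64 to a String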
The WRITE statement has a lot of options, so I was wondering, does it call CONVERSION_EXIT_* functions, or how does it print the primitive data types in so many ways?
And if it does use CONVERSION_EXIT_*s, what are those?
The primitive data types (DATA foo TYPE n LENGTH 10) do not have any conversion exits (ALPHA, etc.) assigned to them.
You can choose them manually, for example with
WRITE ... TO ... USING EDIT MASK '==ALPHA'.
or they can be assigned to a data dictionary domain (transaction code SE11). In that case, they are called implicitly, for example:
by the screen (dynpro) processing (unless turned off explicitly).
by WRITE
DATA(langu) = CONV syst-langu( 'E' ). " domain SYLANGU has conv.exit ISOLA
DATA text TYPE c LENGTH 2.
WRITE langu TO text. " conv.exit ISOLA converts 'E' into 'EN'
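For comparison, the manual variant with EDIT MASK might look like this (a sketch; ALPHA's output conversion removes the leading zeros):
DATA lv_nr  TYPE n LENGTH 10 VALUE '0000000042'.
DATA lv_txt TYPE c LENGTH 10.
WRITE lv_nr TO lv_txt USING EDIT MASK '==ALPHA'. " lv_txt now contains '42'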
Apart from WRITE, ABAP itself does very little to support conversion exits, which is a good thing: the conversion should take place only at the input/output boundaries of the program and not internally.
It's a good idea to keep all of the data in the internal format as long as you're working on it and only convert it right before the output takes place.