LilyPond: Overriding Chord Grid Names - chord-diagram

I have a set of guitar chords that I'm notating with chord grid diagrams, and most of them have non-standard names (quartal voicings). I can add a custom name under the notation between the treble clef and the TAB. Is there a way to override the chord name used in the fretboard diagram?
In the output of the code below, the issue is the "A7 sus 4 b10 b13" above the chord grid. I would like to be able to replace that text with either of the following:
- Custom text, like "A:Quartal"
- Flats or sharps with note numbers, similar to what LilyPond generates. In this case, a:Min11 \flat 6 => "A^min11b6"
%%%%%%%%% ChordGrid Name Issue Code
\include "predefined-guitar-fretboards.ly"
%%% A Quartal
aQuartal = \relative c' { < a d g c f >1-\markup { \super "Quartal" } }
\storePredefinedDiagram #default-fret-table \aQuartal
#guitar-tuning
#"x; 12-1; 12-1; 12-1; 13-2; 13-2;"
%%% Exceptions
chExceptionMusic = {
  \aQuartal
}
chExceptions = #(append
  (sequential-music-to-chord-exceptions chExceptionMusic #t)
  ignatzekExceptions)
formNames = \chordmode {
  % NOTE: Adding \set chordNameExceptions doesn't work here!
  \set chordNameExceptions = #chExceptions
  \aQuartal
}
music = {
  \set chordNameExceptions = #chExceptions
  \aQuartal \bar "||"
}
\version "2.22.2" % necessary for upgrading to future LilyPond versions.
\book {
  \header { title = "ChordGrid Name Issue" }
  %%% Chords
  \score {
    <<
      %%% Chord Names
      \new ChordNames {
        \set chordChanges = ##t
        \formNames
      }
      %%% Chord diagrams
      \new FretBoards {
        \override FretBoard.size = #1.2
        \override FretBoard.fret-diagram-details.number-type = #'roman-lower
        \music
      }
      %%% Staff
      \new Staff \with { instrumentName = "Ex. 1" } {
        \clef "treble_8"
        \new Voice {
          \music
        }
      }
      %%% TAB
      \new TabStaff { \music }
    >>
    \layout {
      \context {
        \Score
        \override SpacingSpanner.base-shortest-duration = #(ly:make-moment 1/16)
      }
    }
  }
}

The chord exceptions should be entered in absolute (not relative) mode, and expressed as chords transposed (manually) so that C is the root note:
%%% Exceptions
chExceptionMusic = {
  <c f bes ees' aes'>1-\markup { \super "Quartal" }
  <c ees g aes f'>1-\markup { \super { "min11 " \flat "6" } }
}

qmk: Make a layer respond to two modifier keys being pressed?

In a qmk_firmware configuration, I can map a single modifier key to a new keymap layer using MO(x).
How can I do this such that two modifier keys must be pressed simultaneously?
Example: I would like to map [Fn][Ctrl]-[d] to generate a Unicode delta, [Fn][Ctrl]-[e] to generate a Unicode epsilon, etc.
You have to put your specific code into the custom Quantum function process_record_user:
bool process_record_user(uint16_t keycode, keyrecord_t *record) {
  // enter your code for [Fn][Ctrl]-[α] here
  return true;
}
You can try the Quantum tab in the Firmware Builder. Even though it is end-of-life, you'll see what is meant.
You can also set UNICODEMAP_ENABLE = yes and use the Unicode Map.
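For example, a minimal sketch of what could go into that stub (my illustration, not part of the original answer; it assumes a layer named _FN, UNICODEMAP_ENABLE = yes, and that the d position is transparent in the Fn layer so the keycode arrives as KC_D):
bool process_record_user(uint16_t keycode, keyrecord_t *record) {
  // Sketch: send a Unicode delta for [Fn][Ctrl]-[d].
  if (record->event.pressed && IS_LAYER_ON(_FN) &&
      (get_mods() & MOD_MASK_CTRL) && keycode == KC_D) {
    unregister_mods(MOD_MASK_CTRL);  // Keep Ctrl out of the Unicode input sequence.
    send_unicode_string("δ");
    return false;  // Skip normal handling of this key press.
  }
  return true;
}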
Something like this should do (half) the trick:
In the function layer_state_set_user, check for the Fn layer (some bit arithmetic similar to update_tri_layer_state) and for the Ctrl state (get_mods() & MOD_MASK_CTRL). When the Fn layer is active and Ctrl is down, deactivate Ctrl (del_mods(MOD_MASK_CTRL)) and return state | x; otherwise return state & ~x.
Unfortunately, this will probably require Ctrl to be pressed before Fn.
At least that's what happens in my similar shift + layer _L4 => layer _S_L4 implementation in my keymap. It requires shift to be pressed before LT(_L4,…).
For the other direction (_L4 + shift => _S_L4), I have MO(_S_L4) on the shift keys in layer _L4, but that is somehow disabled by my layer_state_set_user.
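A minimal sketch of that idea (the layer names _FN and _FN_CTRL are placeholders of mine, and this is untested; the key-order caveat above still applies):
// Promote _FN to _FN_CTRL while Ctrl is physically held.
layer_state_t layer_state_set_user(layer_state_t state) {
  if (layer_state_cmp(state, _FN) && (get_mods() & MOD_MASK_CTRL)) {
    del_mods(MOD_MASK_CTRL);  // Hide Ctrl from the OS while the layer is on.
    return state | ((layer_state_t)1 << _FN_CTRL);
  }
  return state & ~((layer_state_t)1 << _FN_CTRL);
}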
What should "Fn + E" and "Fn + D" do when Ctrl is not held? For the sake of example, I'll assume you want them to do PgUp and PgDn.
Here is how I would go about implementing this:
1. Enable Unicode input: in rules.mk, add UNICODEMAP_ENABLE = yes. In config.h, add #define UNICODE_SELECTED_MODES UC_WINC if you are on Windows. See the Unicode documentation for other OSs and options.
2. Define an "Fn" layer in the keymap.
3. Define a couple of custom keycodes, and place them in the Fn layer at the d and e positions.
4. Handle the custom keys in process_record_user(), using get_mods() to test whether Ctrl is held. See also macros that respond to mods. Then use send_unicode_string() to type the Unicode symbol itself.
Code sketch:
// Copyright 2022 Google LLC.
// SPDX-License-Identifier: Apache-2.0
enum layers { BASE, FN, /* Other layers... */ };

enum custom_keycodes {
  UPDIR = SAFE_RANGE,
  FN_E,
  FN_D,
  // Other custom keys...
};

const uint16_t keymaps[][MATRIX_ROWS][MATRIX_COLS] PROGMEM = {
  [BASE] = LAYOUT(...),
  [FN] = LAYOUT(..., FN_E, ..., FN_D, ...),
  ...
};

const uint32_t unicode_map[] PROGMEM = {};

// On press: type a Unicode symbol if Ctrl is held, otherwise register the
// default keycode. On release: unregister whatever was registered.
static void symbol_key(uint16_t* registered_key,
                       keyrecord_t* record,
                       uint16_t default_keycode,
                       const char* symbol,
                       const char* uppercase_symbol) {
  if (record->event.pressed) {  // On key press.
    const uint8_t mods = get_mods();
    const bool ctrl_held = (mods & MOD_MASK_CTRL) != 0;
    const bool shifted = (mods & MOD_MASK_SHIFT) != 0;
    if (ctrl_held) {  // Is the Ctrl mod held?
      unregister_mods(MOD_MASK_CTRL);  // Clear the Ctrl mod.
      // Type Unicode symbol according to whether shift is active.
      send_unicode_string(shifted ? uppercase_symbol : symbol);
      *registered_key = KC_NO;
    } else {
      *registered_key = default_keycode;
      register_code16(*registered_key);
    }
  } else {  // On key release.
    unregister_code16(*registered_key);
    *registered_key = KC_NO;
  }
}

bool process_record_user(uint16_t keycode, keyrecord_t* record) {
  switch (keycode) {
    case FN_E: {
      static uint16_t registered_key = KC_NO;
      symbol_key(&registered_key, record, KC_PGUP, "ε", "Ε");
    } return false;

    case FN_D: {
      static uint16_t registered_key = KC_NO;
      symbol_key(&registered_key, record, KC_PGDN, "δ", "Δ");
    } return false;
  }
  return true;
}

How can I do a specific sort using Kotlin?

I have this ArrayList in Kotlin:
val a = ArrayList<String>()
a.add("eat")
a.add("animal")
a.add("internet")
And I would like to sort the elements of my ArrayList by frequency of "e", e.g. I would like to have a new ArrayList such as:
a[0] = "animal" // there is no e in animal
a[1] = "eat" // there is one e in eat
a[2] = "internet" // there are two e's in internet
I thought of using Collections.sort(a), but since my sort is specific, it won't work...
Do you have any ideas?
Thank you!
You can also do this without converting each String to a CharArray first (as in the currently accepted answer), which I don't know why you'd do:
a.sortBy { it.count { it == 'e' } }
Plus, you might want to name nested its:
a.sortBy { word -> word.count { character -> character == 'e' } }
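Note that sortBy sorts the ArrayList in place. To get a new list instead, as the question asks, sortedBy can be used (a note of mine, not from the original answer; it returns a sorted List<String> and leaves a untouched):
val sorted = a.sortedBy { word -> word.count { it == 'e' } }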
Writing on my phone so the syntax might not be exactly correct, but something like:
a.sortBy { it.toCharArray().count { it == 'e' } }

How do I format a signed integer to a sign-aware hexadecimal representation?

My initial intent was to convert a signed primitive number to its hexadecimal representation in a way that preserves the number's sign. It turns out that the current implementations of LowerHex, UpperHex and relatives for signed primitive integers will simply treat them as unsigned. Regardless of what extra formatting flags I add, these implementations appear to simply reinterpret the number as its unsigned counterpart for formatting purposes. (Playground)
println!("{:X}", 15i32); // F
println!("{:X}", -15i32); // FFFFFFF1 (expected "-F")
println!("{:X}", -0x80000000i32); // 80000000 (expected "-80000000")
println!("{:+X}", -0x80000000i32); // +80000000
println!("{:+o}", -0x8000i16); // +100000
println!("{:+b}", -0x8000i16); // +1000000000000000
The documentation in std::fmt is not clear on whether this is supposed to happen, or is even valid, and UpperHex (or any other formatting trait) does not mention that the implementations for signed integers interpret the numbers as unsigned. There seem to be no related issues on Rust's GitHub repository either. (Post-addendum notice: Starting from 1.24.0, the documentation has been improved to properly address these concerns, see issue #42860)
Ultimately, one could implement specific functions for the task (as below), with the unfortunate downside of not being very compatible with the formatter API.
fn to_signed_hex(n: i32) -> String {
    if n < 0 {
        // unsigned_abs avoids overflow when n is i32::MIN.
        format!("-{:X}", n.unsigned_abs())
    } else {
        format!("{:X}", n)
    }
}
assert_eq!(to_signed_hex(-15i32), "-F".to_string());
Is this behaviour for signed integer types intentional? Is there a way to do this formatting procedure while still adhering to a standard Formatter?
Is there a way to do this formatting procedure while still adhering to a standard Formatter?
Yes, but you need to make a newtype in order to provide a distinct implementation of UpperHex. Here's an implementation that respects the +, # and 0 flags (and possibly more, I haven't tested):
use std::fmt::{self, Formatter, UpperHex};

struct ReallySigned(i32);

impl UpperHex for ReallySigned {
    fn fmt(&self, f: &mut Formatter) -> fmt::Result {
        let prefix = if f.alternate() { "0x" } else { "" };
        let bare_hex = format!("{:X}", self.0.abs());
        f.pad_integral(self.0 >= 0, prefix, &bare_hex)
    }
}

fn main() {
    for &v in &[15, -15] {
        for &v in &[&v as &UpperHex, &ReallySigned(v) as &UpperHex] {
            println!("Value: {:X}", v);
            println!("Value: {:08X}", v);
            println!("Value: {:+08X}", v);
            println!("Value: {:#08X}", v);
            println!("Value: {:+#08X}", v);
            println!();
        }
    }
}
This is like Francis Gagné's answer, but made generic to handle i8 through i128. It relies on the external num-traits crate:
use std::fmt::{self, Formatter, UpperHex};
use num_traits::Signed;

struct ReallySigned<T: PartialOrd + Signed + UpperHex>(T);

impl<T: PartialOrd + Signed + UpperHex> UpperHex for ReallySigned<T> {
    fn fmt(&self, f: &mut Formatter) -> fmt::Result {
        let prefix = if f.alternate() { "0x" } else { "" };
        let bare_hex = format!("{:X}", self.0.abs());
        f.pad_integral(self.0 >= T::zero(), prefix, &bare_hex)
    }
}

fn main() {
    println!("{:#X}", -0x12345678);
    println!("{:#X}", ReallySigned(-0x12345678));
}

Specman - how to perform sub-type multi-extend, also for different kinds of sub-types

I want to achieve the following functionality:
extend RED GREEN BLUE packet {...}
This line will cause the struct members in the curly brackets to be added to all the specified subtypes of a certain enumerated type. The result will look like this:
extend RED packet {...}
extend BLUE packet {...}
extend GREEN packet {...}
extend BIG MEDIUM RED BLUE GREEN packet {...}
This line will extend all possible combinations of the items from each enumerated type with the struct members that appear in the curly brackets. The result will look like this:
the result will look like this:
extend MEDIUM RED packet {...}
extend MEDIUM BLUE packet {...}
extend MEDIUM GREEN packet {...}
extend BIG RED packet {...}
extend BIG BLUE packet {...}
extend BIG GREEN packet {...}
Thanks,
This macro solves the problem, but there is a small limitation: since it is a 'define as computed' macro, the struct you are going to apply it to must be defined in a different file from the one that uses the macro.
A simple use case is shown below (suppose the macro is in a file called dac.e):
define <multi_when'statement> "ext_s \[<detr'name>,...\] <base'type> \{<sm'exp>,...\}" as computed {
    var our_struct: rf_struct = rf_manager.get_type_by_name(<base'type>).as_a(rf_struct);
    var fields: list of rf_field = our_struct.as_a(rf_struct).get_declared_fields();
    var rf_type_list: list of rf_type;
    var list_of_0_index: list of uint;
    var field_names: list of string;
    var list_of_enums: list of rf_enum;
    var temp_index: uint = 0;
    var used_types_list: list of rf_type;
    var enumerations: list of string;
    var indices: list of uint;
    var num_of_each_enum: list of uint;
    var size_of_final_list_of_enumerations_to_be_used: uint = 1;
    var enum_items_list: list of rf_enum_item;
    var final_list_of_enumerations_to_be_used: list of string;
    var multiplication_list_algrtm1: list of uint;
    var multiplication_list_algrtm2: list of uint;
    var multiplication_uint_algrtm: uint = 1;
    if (<detr'names>.is_empty()) {
        error("you did not supply any when subtypes");
    };
    for each (field) in fields {
        rf_type_list.add(field.get_type());
        field_names.add(field.get_name());
    };
    for each (typ) in rf_type_list {
        if (rf_type_list[index] is not a rf_enum) {
            rf_type_list.delete(index);
            field_names.delete(index);
        };
    };
    if (rf_type_list.is_empty()) {
        error("the type ", <base'type>, " does not have any enumerated type fields.");
    };
    for each (typ) using index (typ_index) in rf_type_list {
        num_of_each_enum.add(0);
        if (indices.is_empty()) {
            indices.add(0);
        } else {
            indices.add(indices[typ_index - 1])
        };
        enum_items_list = typ.as_a(rf_enum).get_items();
        for each (enum_item) in <detr'names> {
            if (enum_items_list.has(it.get_name() == enum_item)) {
                out(enum_item, " is found in ", typ.get_name());
                enumerations.add(append(enum_item, "'", field_names[typ_index]));
                indices[typ_index] += 1;
                num_of_each_enum[typ_index] += 1;
            };
        };
    };
    for each in num_of_each_enum do { // avoid enums that are not used
        if (it == 0) {
            list_of_0_index.add(index);
        };
    };
    if (!list_of_0_index.is_empty()) {
        list_of_0_index = list_of_0_index.reverse();
        for each in list_of_0_index {
            num_of_each_enum.delete(it);
            indices.delete(it);
            field_names.delete(it);
        }
    };
    enumerations = enumerations.unique(it);
    if (enumerations.is_empty()) {
        error("no legal enumerated values were used in the ext_s macro, please check that the arguments in square brackets are in the form of [<enum_item1>,<enum_item2>,...]");
    };
    // remove the last index (not relevant) and add 0 at indices[0]
    indices.add0(0);
    indices.delete(indices.size() - 1);
    for each in num_of_each_enum do {
        size_of_final_list_of_enumerations_to_be_used *= it;
    };
    for each in num_of_each_enum do {
        multiplication_uint_algrtm *= it;
        multiplication_list_algrtm1.add(size_of_final_list_of_enumerations_to_be_used / multiplication_uint_algrtm);
        multiplication_list_algrtm2.add(size_of_final_list_of_enumerations_to_be_used / multiplication_list_algrtm1[index]);
    };
    // build the final list of strings to be used in the extend statement:
    for i from 1 to size_of_final_list_of_enumerations_to_be_used {
        final_list_of_enumerations_to_be_used.add("");
    };
    for k from 0 to indices.size() - 1 do {
        temp_index = 0;
        for j from 0 to multiplication_list_algrtm2[k] - 1 do {
            for i from 0 to multiplication_list_algrtm1[k] - 1 do {
                final_list_of_enumerations_to_be_used[temp_index] =
                    append(final_list_of_enumerations_to_be_used[temp_index], " ",
                        enumerations[indices[k] + j % num_of_each_enum[k]]);
                temp_index += 1;
            };
        };
    };
    for each in final_list_of_enumerations_to_be_used do {
        result = appendf("%s extend %s %s {", result, it, <base'type>);
        for each in <sm'exps> do {
            result = appendf("%s %s;", result, it);
        };
        result = append(result, "};");
    };
    print result;
};
Note that this macro solves an interesting problem: suppose you have a list of items of several types (for example {a1, a2, b1, b2, c1, c2, c3, ...}), and you do not know in advance how many types appear in the list (in this example there are 3 types - a, b, c - but there could be more or fewer). The question is: how do you create a list of all possible combinations of items from all types (for example: 0. a1-b1-c1, 1. a1-b1-c2, ..., 11. a2-b2-c3) without knowing how many types are in the list? You can follow the code to figure out the algorithm (it uses a list of indices, the number of items of each type, and so on).
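For intuition, here is the same enumeration idea as a standalone sketch (mine, in C rather than e; the group data is made up for the example). The combination index n is read as a mixed-radix number whose digit k selects the item from group k:
#include <stdio.h>

int main(void) {
    // Example groups; the algorithm only relies on their sizes.
    const char *groups[][3] = {{"a1", "a2", NULL},
                               {"b1", "b2", NULL},
                               {"c1", "c2", "c3"}};
    const int sizes[] = {2, 2, 3};
    const int num_groups = 3;

    int total = 1;  // Product of group sizes: 12 combinations here.
    for (int k = 0; k < num_groups; k++) total *= sizes[k];

    for (int n = 0; n < total; n++) {
        int div = total;
        for (int k = 0; k < num_groups; k++) {
            div /= sizes[k];  // Positional weight of digit k.
            printf("%s%s", groups[k][(n / div) % sizes[k]],
                   k + 1 < num_groups ? "-" : "\n");
        }
    }
    return 0;
}
This prints a1-b1-c1, a1-b1-c2, ..., a2-b2-c3, matching the enumeration described above.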
The file that should be loaded in addition to the macro file (dac.e) is struct.e:
<'
type t1: [A1, A2, A3, A4, A5];
type t2: [B1, B2, B3];
type t3: [C1, C2, C3];
struct s {
  a: uint(bits: 4);
  t_1: t1;
  t_2: t2;
  t_3: t3;
};
'>
And the test file is:
<'
import dac.e;
import struct.e;
// use the macro
ext_s [A1,A2,B1,B2] s {x:int, keep x==6, y:int, keep y==10};
extend sys {
  s1: A1 B1 s;
  s2: A2 B1 s;
  s3: A1 s;
  run() is also {
    print s1;
    print s2;
    print s3;
  };
};
'>
Please comment if you have any questions.

My simple ANTLR grammar ignores certain invalid tokens when parsing

I asked a question a couple of weeks ago about my ANTLR grammar (My simple ANTLR grammar is not working as expected). Since asking that question, I've done more digging and debugging and gotten most of the kinks out. I am left with one issue, though.
My generated parser code is not picking up invalid tokens in one particular part of the processed text. The lexer properly breaks things into tokens, but the parser does not kick out invalid tokens in some cases. In particular, when the invalid token is at the end of a phrase like "A and B", the parser ignores it - it's like the token isn't even there.
Some specific examples:
"A and B" - perfectly valid
"A# and B" - parser properly picks up the invalid # token
"A and #B" - parser properly picks up the invalid # token
"A and B#" - here's the mystery - the lexer finds the # token and the parser IGNORES it (!)
"(A and B#) or C" - further mystery - the lexer finds the # token and the parser IGNORES it (!)
Here is my grammar:
grammar QvidianPlaybooks;
options{ language=CSharp3; output=AST; ASTLabelType = CommonTree; }
public parse
: expression
;
LPAREN : '(' ;
RPAREN : ')' ;
ANDOR : 'AND'|'and'|'OR'|'or';
NAME : ('A'..'Z');
WS : ' ' { $channel = Hidden; };
THEREST : .;
// ***************** parser rules:
expression : anexpression EOF!;
anexpression : atom (ANDOR^ atom)*;
atom : NAME | LPAREN! anexpression RPAREN!;
The code that then processes the resulting tree looks like this:
... from the main program
QvidianPlaybooksLexer lexer = new QvidianPlaybooksLexer(new ANTLRStringStream(src));
QvidianPlaybooksParser parser = new QvidianPlaybooksParser(new CommonTokenStream(lexer));
parser.TreeAdaptor = new CommonTreeAdaptor();
CommonTree tree = (CommonTree)parser.parse().Tree;
ValidateTree(tree, 0, iValidIdentifierCount);
// recursive code that walks the tree
public static RuleLogicValidationResult ValidateTree(ITree Tree, int depth, int conditionCount)
{
    RuleLogicValidationResult rlvr = null;
    if (Tree != null)
    {
        CommonErrorNode commonErrorNode = Tree as CommonErrorNode;
        if (null != commonErrorNode)
        {
            rlvr = new RuleLogicValidationResult();
            rlvr.IsValid = false;
            rlvr.ErrorType = LogicValidationErrorType.Other;
            Console.WriteLine(rlvr.ToString());
        }
        else
        {
            string strTree = Tree.ToString();
            strTree = strTree.Trim();
            strTree = strTree.ToUpper();
            if ((Tree.ChildCount != 0) && (Tree.ChildCount != 2))
            {
                rlvr = new RuleLogicValidationResult();
                rlvr.IsValid = false;
                rlvr.ErrorType = LogicValidationErrorType.Other;
                rlvr.InvalidIdentifier = strTree;
                rlvr.ErrorPosition = 0;
                Console.WriteLine(String.Format("CHILD COUNT of {0} = {1}", strTree, Tree.ChildCount));
            }
            // if the current node is valid, then validate the two child nodes
            if (null == rlvr || rlvr.IsValid)
            {
                // output the tree node
                for (int i = 0; i < depth; i++)
                {
                    Console.Write(" ");
                }
                Console.WriteLine(Tree);
                rlvr = ValidateTree(Tree.GetChild(0), depth + 1, conditionCount);
                if (rlvr.IsValid)
                {
                    rlvr = ValidateTree(Tree.GetChild(1), depth + 1, conditionCount);
                }
            }
            else
            {
                Console.WriteLine(rlvr.ToString());
            }
        }
    }
    else
    {
        // this tree is null, return an "it's valid" result
        rlvr = new RuleLogicValidationResult();
        rlvr.ErrorType = LogicValidationErrorType.None;
        rlvr.IsValid = true;
    }
    return rlvr;
}
Add EOF to the end of your start rule. :)
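In other words, anchor the entry rule so the parser has to consume the whole token stream before succeeding, e.g. (using the rule names from the grammar above):
parse : anexpression EOF!;
Without an EOF anchor, the parser can stop after the longest valid expression and silently drop whatever tokens follow, which is exactly why the # in "A and B#" and "(A and B#) or C" goes unreported.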