How can I ignore special characters in Prisma ORM?
I need to filter by freijó, but with the special character the query does not return the record:
name: {
  contains: search,
  mode: 'insensitive',
}
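For context, here is a sketch of the full query that fragment would sit in; the model (product) and field (name) are placeholders for whatever the actual schema uses.
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

// Placeholder model and field names; the search term keeps its accent.
async function findByName(search: string) {
  return prisma.product.findMany({
    where: {
      name: {
        contains: search,     // e.g. 'freijó'
        mode: 'insensitive',  // case-insensitive, as in the fragment above
      },
    },
  });
}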
I'm performing some validation tests on several XML files, some of which contain hyphens in their names. I've created a parameterized test case containing the file names (excluding extensions), but GoogleTest fails because:
Note: test names must be non-empty, unique, and may only contain ASCII alphanumeric characters or underscore. Because PrintToString adds quotes to std::string and C strings, it won't work for these types.
class ValidateTemplates : public testing::TestWithParam<string>
{
public:
    struct PrintToStringParamName
    {
        template <class ParamType>
        string operator() (const testing::TestParamInfo<ParamType>& info) const
        {
            auto file_name = static_cast<string>(info.param);
            // Remove the file extension because the generated test name may
            // only contain ASCII alphanumeric characters or underscores
            size_t last_index = file_name.find_last_of(".");
            return file_name.substr(0, last_index);
        }
    };
};

INSTANTIATE_TEST_CASE_P(
    ValidateTemplates,
    ValidateTemplates,
    testing::ValuesIn(list_of_files),
    ValidateTemplates::PrintToStringParamName());
I had the idea of printing the filename with non-alphanumeric characters swapped out for underscores in PrintToStringParamName. But I'd rather keep the parameterized names the same as the file names if possible.
Is there a way to get around this limitation somehow? I can't permanently change the file names and I can't use another testing framework.
That is not possible. You have already quoted the relevant comment from the documentation. The reason is that Google Test uses the test name to generate C++ identifiers (class names), and C++ identifiers are limited to alphanumeric characters (and underscores, but you should not use underscores in test names).
The closest you can get is to change the implementation of PrintToStringParamName::operator()() and remove or replace the non-alphanumeric characters in the filename.
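A minimal sketch of that workaround, assuming the struct is used exactly like the one in the question: every character that is not ASCII alphanumeric (including the '.' of the extension) is swapped for an underscore.
#include <algorithm>
#include <cctype>
#include <string>
#include <gtest/gtest.h>

struct PrintToStringParamName
{
    template <class ParamType>
    std::string operator() (const testing::TestParamInfo<ParamType>& info) const
    {
        auto name = static_cast<std::string>(info.param);
        // Replace anything googletest would reject with an underscore.
        std::replace_if(name.begin(), name.end(),
                        [](unsigned char c) { return std::isalnum(c) == 0; },
                        '_');
        return name;
    }
};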
I'm trying to write a Raku grammar that can parse commands that ask for programming puzzles.
This is a simplified version just for my question, but the commands combine a difficulty level with an optional list of languages.
Sample valid input:
No language: easy
One language: hard javascript
Multiple languages: medium javascript python raku
I can get it to match one language, but not multiple languages. I'm not sure where to add the :g.
Here's an example of what I have so far:
grammar Command {
    rule TOP { <difficulty> <languages>? }
    token difficulty { 'easy' | 'medium' | 'hard' }
    rule languages { <language>+ }
    token language { \w+ }
}
multi sub MAIN(Bool :$test) {
    use Test;
    plan 5;
    # These first 3 pass.
    ok Command.parse('hard', :token<difficulty>), '<difficulty> can parse a difficulty';
    nok Command.parse('no', :token<difficulty>), '<difficulty> should not parse random words';
    # Why does this parse <languages>, but <language> fails below?
    ok Command.parse('js', :rule<languages>), '<languages> can parse a language';
    # These last 2 fail.
    ok Command.parse('js', :token<language>), '<language> can parse a language';
    # Why does this not match both words? Can I use :g somewhere?
    ok Command.parse('js python', :rule<languages>), '<languages> can parse multiple languages';
}
This works, even though my test #4 fails:
my token wrd { \w+ }
'js' ~~ &wrd; #=> 「js」
Extracting multiple languages works with a regex using this syntax, but I'm not sure how to use that in a grammar:
'js python' ~~ m:g/ \w+ /; #=> (「js」 「python」)
Also, is there an ideal way to make the order unimportant so that difficulty could come anywhere in the string? Example:
rule TOP { <languages>* <difficulty> <languages>? }
Ideally, I'd like for anything that is not a difficulty to be read as a language. Example: raku python medium js should read medium as a difficulty and the rest as languages.
There are two things at issue here.
To specify a subrule in a grammar parse, the named argument is always :rule, regardless of whether in the grammar it's a rule, token, method, or regex. Your first two tests are passing because they represent valid full-grammar parses (that is, TOP): the unknown :token named argument is simply ignored.
That gets us:
ok Command.parse('hard', :rule<difficulty>), '<difficulty> can parse a difficulty';
nok Command.parse('no', :rule<difficulty>), '<difficulty> should not parse random words';
ok Command.parse('js', :rule<languages> ), '<languages> can parse a language';
ok Command.parse('js', :rule<language> ), '<language> can parse a language';
ok Command.parse('js python', :rule<languages> ), '<languages> can parse multiple languages';
# Output
ok 1 - <difficulty> can parse a difficulty
ok 2 - <difficulty> should not parse random words
ok 3 - <languages> can parse a language
ok 4 - <language> can parse a language
not ok 5 - <languages> can parse multiple languages
The second issue is how implied whitespace is handled in a rule. In a token, the following are equivalent:
token foo { <alpha>+ }
token bar { <alpha> + }
But in a rule, they would be different. Compare the token equivalents for the following rules:
rule foo { <alpha>+ }
token foo { <alpha>+ <.ws> }
rule bar { <alpha> + }
token bar { [<alpha> <.ws>] + }
In your case, you have <language>+, and since language is \w+, it's impossible to match two words: the first <language> consumes all the \w it can, and nothing in the pattern matches the space before the next word. The easy solution, though, is just to change <language>+ to <language> +.
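A sketch of just that change, with everything else left as in the question:
grammar Command {
    rule TOP { <difficulty> <languages>? }
    token difficulty { 'easy' | 'medium' | 'hard' }
    rule languages { <language> + }   # was: <language>+
    token language { \w+ }
}

say so Command.parse('js python', :rule<languages>);   # True - test 5 now passes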
To allow the <difficulty> token to float around, the first solution that jumps to my mind is to match it and bail in a <language> token:
token language { <!difficulty> \w+ }
<!foo> will fail if, at that position, it can match <foo>. This will work almost perfectly until you get a language like 'easyFoo'. The easy fix there is to ensure that the difficulty token always occurs at a word boundary:
token difficulty {
    [
    | easy
    | medium
    | hard
    ]
    >>
}
where >> asserts a word boundary on the right.
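Putting the pieces together, here is a sketch of a grammar in which the difficulty can sit anywhere and every other word is read as a language (it folds the languages rule straight into TOP, so adjust to taste):
grammar Command {
    rule  TOP        { <language> * <difficulty> <language> * }
    token difficulty { [ easy | medium | hard ] >> }
    token language   { <!difficulty> \w+ }
}

# 'medium' is captured as <difficulty>, the remaining words as <language>:
say so Command.parse('raku python medium js');   # True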
I know the list of special characters that can be indexed using Apache Lucene. Can someone tell me if there are any special characters that cannot be indexed using the Apache Lucene library?
From: https://lucene.apache.org/core/2_9_4/queryparsersyntax.html#Escaping%20Special%20Characters
Lucene supports escaping special characters that are part of the query syntax. The current list of special characters is:
+ - && || ! ( ) { } [ ] ^ " ~ * ? : \
So basically it looks like you can index anything; you just have to escape it.
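If the goal is to get those characters through the query parser, here is a sketch of the escaping step. It assumes the classic QueryParser of a 4.x/5.x Lucene (older releases take an extra Version argument and keep the class in a different package), and the field name content is a placeholder.
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.Query;

public class EscapeExample {
    public static void main(String[] args) throws Exception {
        QueryParser parser = new QueryParser("content", new StandardAnalyzer());
        // QueryParser.escape prefixes each syntax character with a backslash,
        // e.g. "(1+1):2" becomes "\(1\+1\)\:2".
        Query query = parser.parse(QueryParser.escape("(1+1):2"));
        System.out.println(query);
    }
}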
I am using Lucene version 5.0.0.
In my search string, there is a minus character, as in "test-".
I read that the minus sign is a special character in Lucene, so I have to escape it, as described in the query parser documentation:
Escaping Special Characters:
Lucene supports escaping special characters that are part of the query syntax. The current list of special characters is:
+ - && || ! ( ) { } [ ] ^ " ~ * ? : \ /
To escape these characters, use the \ before the character. For example, to search for (1+1):2 use the query:
\(1\+1\)\:2
To do that I use the QueryParser.escape method:
query = parser.parse(QueryParser.escape(searchString));
I use the ClassicAnalyzer because I noticed that the StandardAnalyzer has some problems with escaping special characters.
The problem is that the parser deletes the special characters, so the query ends up with the term
content:test
How can I set up the parser and searcher to search for the real value "test-"?
I also created my own query with the content test-, but that also didn't work. I received 0 results, even though my index has entries like:
Test-VRF
Test-IPLS
I am really confused about this problem.
While escaping special characters for the queryparser deals with part of the problem, it doesn't help with analysis.
Neither the classic nor the standard analyzer will keep punctuation in the indexed form of the field. For each of these examples, the indexed form will be two terms:
test and vrf
test and ipls
This is why a manually constructed query for "test-" finds nothing. That term does not exist in the index.
The goal of these analyzers is to index words. As such, punctuation is mostly eliminated and is not searchable. Phrase queries for "test vrf", "test-vrf", and "test_vrf" are all effectively identical. If that is not what you need, you'll need to look at other analyzers.
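A small sketch that makes this visible (assuming Lucene 5.x and its no-argument StandardAnalyzer constructor): feed "Test-VRF" through the analyzer and print the terms that would actually be indexed.
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class AnalyzeExample {
    public static void main(String[] args) throws Exception {
        Analyzer analyzer = new StandardAnalyzer();
        try (TokenStream ts = analyzer.tokenStream("content", "Test-VRF")) {
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                System.out.println(term.toString());   // prints "test", then "vrf"
            }
            ts.end();
        }
    }
}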
The way to fix this issue is to store the field content in a NOT_ANALYZED way:
Field field = new Field(key.toLowerCase(), value, Field.Store.YES, Field.Index.NOT_ANALYZED);
Anyone who has the same problem has to take care of how the contents are stored in the index.
To query the index, create the query in this way
searchString = QueryParser.escape(searchString);
and use, for example, a WhitespaceAnalyzer.
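A sketch of both sides, under the assumption of Lucene 5.x, where the old Field.Index.NOT_ANALYZED constructor is gone and StringField is the not-analyzed equivalent; field and variable names are placeholders.
import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.Query;

public class NotAnalyzedExample {
    // Index side: the value is stored and indexed as one term, exactly as given.
    static Document buildDoc(String key, String value) {
        Document doc = new Document();
        doc.add(new StringField(key.toLowerCase(), value, Field.Store.YES));
        return doc;
    }

    // Query side: escape the input and use an analyzer that does not split on '-'.
    static Query buildQuery(String field, String searchString) throws Exception {
        QueryParser parser = new QueryParser(field, new WhitespaceAnalyzer());
        return parser.parse(QueryParser.escape(searchString));
    }
}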
I'm looking for a regex in order to transform something like
{test}hello world{/test} and {again}i'm coming back{/again} into hello world i'm coming back.
I tried {[^}]+}, but with this regex I can't get only what is inside the test and again tags. Is there a way to complete this regex?
Doing this properly is generally beyond the capabilities of regular expressions. However, if you can guarantee that those tags will never be nested and your input will never contain curly brackets that do not signify tags, then this regex could do the matching:
\{([^}]+)}(.*?)\{/\1}
Explanation:
\{       # a literal {
(        # capture the tag name
[^}]+)   # everything until the end of the tag (you already had this)
}        # a literal }
(        # capture the tag's value
.*?)     # any characters, but as few as possible to complete the match
         # note that the ? makes the repetition ungreedy, which is important if
         # you have the same tag twice or more in a string
\{       # a literal {
/        # a literal /
\1       # use the tag's name again (capture no. 1)
}        # a literal }
So this uses a backreference \1 to make sure that the closing tag contains the same word as the opening tag. Then you will find the tag's name in capture 1 and the tag's value/content in capture 2. From here you can do with these whatever you want (for instance, put the values back together).
Note that you should use the SINGLELINE or DOTALL option if you want your tags to span multiple lines.
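For instance, here is a small sketch of the "put the values back together" step (Java is just one possible host language here; the pattern itself is the one above):
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ExtractTagValues {
    public static void main(String[] args) {
        String input = "{test}hello world{/test} and {again}i'm coming back{/again}";
        // DOTALL so the tag contents may span multiple lines, as noted above.
        Pattern p = Pattern.compile("\\{([^}]+)}(.*?)\\{/\\1}", Pattern.DOTALL);
        Matcher m = p.matcher(input);
        List<String> values = new ArrayList<>();
        while (m.find()) {
            values.add(m.group(2));   // group 1 is the tag name, group 2 its content
        }
        System.out.println(String.join(" ", values));   // hello world i'm coming back
    }
}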