Field input value is longer than screen field (ABAP)

I have a small problem with a batch input. When running the batch input from an ABAP program I receive the following error:
Field BKPF-BLART . input value is longer than screen field P
PARAMETERS:
p_bldat TYPE CHAR10, "Document date.
p_budat TYPE CHAR10, "Posting date.
p_xblnr TYPE XBLNR, "Reference.
p_bktxt TYPE BKTXT, "Header text.
p_blart TYPE BLART, "Document type.
...........
PERFORM OPEN_GROUP.
PERFORM BDC_DYNPRO USING 'SAPMF05A' '0100'.
PERFORM BDC_FIELD USING 'BKPF-BLDAT' 'p_bldat'.
PERFORM BDC_FIELD USING 'BKPF-BUDAT' 'p_budat'.
PERFORM BDC_FIELD USING 'BKPF-XBLNR' 'p_xblnr'.
PERFORM BDC_FIELD USING 'BKPF-BKTXT' 'p_bktxt'.
PERFORM BDC_FIELD USING 'BKPF-BLART' 'p_blart'.
PERFORM BDC_FIELD USING 'BKPF-MONAT' 'p_monat'.
......
I tried using the CONDENSE statement and changing the data type of my variable p_blart to CHAR2.

You are not passing the parameters as you think, but character literals that happen to be your parameter names.
It should be done like this (without quotes around the parameter names):
PERFORM BDC_FIELD USING 'BKPF-BLDAT' p_bldat.
PERFORM BDC_FIELD USING 'BKPF-BUDAT' p_budat.
PERFORM BDC_FIELD USING 'BKPF-XBLNR' p_xblnr.
PERFORM BDC_FIELD USING 'BKPF-BKTXT' p_bktxt.
PERFORM BDC_FIELD USING 'BKPF-BLART' p_blart.
PERFORM BDC_FIELD USING 'BKPF-MONAT' p_monat.
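To see why the quotes matter: a typical BDC_FIELD form (a sketch of the usual recording-generated helper; your actual FORM may differ) just copies both arguments into the BDCDATA table, so the quoted literal 'p_blart' arrives as a seven-character value for the two-character screen field BKPF-BLART, which is exactly the length error reported:
FORM bdc_field USING fnam fval.
  CLEAR bdcdata.
  bdcdata-fnam = fnam. "screen field name, e.g. 'BKPF-BLART'
  bdcdata-fval = fval. "value typed into it, copied as-is
  APPEND bdcdata.
ENDFORM.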

SuiteScript 2.0 - Logical operator for multiple filters

I have a Transaction saved search in my NetSuite account, and this saved search has some filter conditions defined in the NetSuite UI.
Using SuiteScript 2.0, I am loading this search, taking a copy of all the filters defined in the NetSuite UI (say defaultfilters), applying a filter for "trandate" as a filter expression, and then pushing defaultfilters into the saved search's filter collection.
But what is happening is that NetSuite only considers the "trandate" filter, not the ones defined in the UI.
I assume that somehow an "OR" logical operator is being applied between the two filters.
The same issue is discussed in another question:
SuiteScript 2.0 search.createFilter with formula not working
Please guide me on this issue.
Thanks
You can do the following to add search filters or filter expressions.
Load the search:
var savedSearch = search.load({ id: SEARCH_ID });
Push custom filters into the savedSearch object as below:
// for a search filter
savedSearch.filters.push(search.createFilter(SEARCH_FILTER_OBJECT));
// for filter expression
var filterExpressions = savedSearch.filterExpressions;
filterExpressions.push('and', [FIELDID, SEARCH_OPERATOR, VALUES]);
savedSearch.filterExpressions = filterExpressions;
As for using a formula in filterExpressions: if you use formulanumeric as the field name, your operator should be equalto and not is, whereas if you use formulatext as the field name you can use the is operator, as per NetSuite's Search Operators.
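Putting the pieces together, a minimal sketch (the search id and date value here are placeholder assumptions, not from the original post; search.load, filterExpressions, and run().each are standard N/search APIs):
require(['N/search'], function (search) {
    // Load the saved search; filterExpressions starts out holding
    // the filters defined in the NetSuite UI.
    var savedSearch = search.load({ id: 'customsearch_my_transactions' });
    var filterExpressions = savedSearch.filterExpressions;
    // Append the new condition with an explicit 'and' so it is combined
    // conjunctively with the UI filters rather than replacing them.
    filterExpressions.push('and', ['trandate', 'onorafter', '1/1/2020']);
    savedSearch.filterExpressions = filterExpressions;
    savedSearch.run().each(function (result) {
        // process each result ...
        return true;
    });
});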

Apache Flink Error Handling and Conditional Processing

I am new to Flink and have gone through the site, examples, and blogs to get started. I am struggling with the correct use of operators. Basically I have 2 questions.
Question 1: Does Flink support declarative exception handling? I need to handle parse/validate/... errors.
Can I use org.apache.flink.runtime.operators.sort.ExceptionHandler or something similar to handle errors?
Or is a RichFlatMapFunction my best option?
If RichFlatMapFunction is the only option, is there a way to get a handle to the stream inside the function so that sinks could be attached for error processing?
Question 2: Can I conditionally attach different sinks?
Based on certain fields in the keyed split streams I need to select different sinks; do I split the stream again, or use a RichFlatMapFunction to handle that?
I am using Flink 1.3.2. Here is the relevant portion of my job
.....
.....
DataStream<String> eventTextStream = env.addSource(messageSource);
KeyedStream<EventPojo, Tuple> eventPojoStream = eventTextStream
// parse, transform or enrich
.flatMap(new MyParseTransformEnrichFunction())
.assignTimestampsAndWatermarks(new EventAscendingTimestampExtractor())
.keyBy("eventId");
// split stream based on eventType as different reduce and windowing functions need to be applied
SplitStream<EventPojo> splitStream = eventPojoStream
.split(new EventStreamSplitFunction());
// need to apply reduce function
DataStream<EventPojo> event1TypeStream = splitStream.select("event1Type");
// need to apply reduce function
DataStream<EventPojo> event2TypeStream = splitStream.select("event2Type");
// need to apply time based windowing function
DataStream<EventPojo> event3TypeStream = splitStream.select("event3Type");
....
....
env.execute("Event Processing");
Am I using the correct operators here?
Update 1:
Tried using the ProcessFunction as suggested by @alpinegizmo, but that didn't work because it depends on a keyed stream, which I don't have until I parse/validate the input. I get "InvalidProgramException: Field expression must be equal to '*' or '_' for non-composite types."
It's such a common use case where you first parse/validate the input and don't have a keyed stream yet, so how do you solve it?
Thanks for your patience and help.
There's one key building block that you've overlooked. Take a look at side outputs.
This mechanism provides a typesafe way to produce any number of additional output streams. This can be a clean way to report errors, among other uses. In Flink 1.3 side outputs can only be used with ProcessFunction, but 1.4 will add side outputs to ProcessWindowFunction.
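For illustration, here is a rough sketch of how side-output error routing could look for the job above (EventPojo.parse and errorSink are hypothetical stand-ins; this assumes Flink 1.3's DataStream#process, which applies a ProcessFunction before any keyBy, addressing the concern in Update 1):
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

// Tag for records that fail to parse; the anonymous subclass is
// required so the element type survives erasure.
final OutputTag<String> parseErrors = new OutputTag<String>("parse-errors") {};

SingleOutputStreamOperator<EventPojo> parsed = eventTextStream
    .process(new ProcessFunction<String, EventPojo>() {
        @Override
        public void processElement(String value, Context ctx, Collector<EventPojo> out) {
            try {
                out.collect(EventPojo.parse(value)); // hypothetical parser
            } catch (Exception e) {
                ctx.output(parseErrors, value);      // bad input goes to the side output
            }
        }
    });

// The error stream gets its own sink, independent of the main flow.
parsed.getSideOutput(parseErrors).addSink(errorSink);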

Add field/string length to logstash event

I'm trying to add a string-length field to an index. Ideally, I'd like to use the Kibana scripted-field feature, as I can add this field later, but I keep getting a null_pointer_exception with the following code. I'm trying to sort a visualization based on the field's length.
doc['field'].value ? doc['field'].length() : 0
Is this correct?
I thought it was because my field isn't always set (sparse data), but I added the ?: 0 to combat that, which didn't work.
Any ideas?
You can define a scripted field in Kibana, of type int, language painless, and try this:
return (doc['field'].value != null ? doc['field'].value.length() : 0);
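If the null pointer comes from the field simply being absent on some documents, a variant worth trying (a sketch, assuming a keyword-mapped field) checks whether any doc value exists before dereferencing it:
// return 0 for documents where the field has no value at all
return doc['field'].size() != 0 ? doc['field'].value.length() : 0;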

REGEX_EXTRACT error in PIG

I have a CSV file with 3 columns: tweetid, tweet, and userid. However, within the tweet column there are comma-separated values.
Here is one row of data:
`396124437168537600`,"I really wish I didn't give up everything I did for you, I'm so mad at my self for even letting it get as far as it did.",savava143
I want to extract all 3 fields individually, but REGEX_EXTRACT is giving me an error with this code:
a = LOAD 'tweets' USING PigStorage(',') AS (f1,f2,f3);
b = FILTER a BY REGEX_EXTRACT(f1,'(.*)\\"(.*)',1);
The error is:
error: Filter's condition must evaluate to boolean.
In the use case shared, reading the data using PigStorage(',') will result in missing savava143 (the last field value).
A = LOAD '/Users/muralirao/learning/pig/a.csv' USING PigStorage(',') AS (f1,f2,f3);
DUMP A;
Output: A. Observe that the last field value is missing.
(396124437168537600,"I really wish I didn't give up everything I did for you, I'm so mad at my self for even letting it get as far as it did.")
For the use case shared, to extract all the values from a CSV file whose field values contain ',', we can use either CSVExcelStorage or CSVLoader.
Approach 1 : Using CSVExcelStorage
Ref : http://pig.apache.org/docs/r0.12.0/api/org/apache/pig/piggybank/storage/CSVExcelStorage.html
Input : a.csv
396124437168537600,"I really wish I didn't give up everything I did for you, I'm so mad at my self for even letting it get as far as it did.",savava143
Pig Script :
REGISTER piggybank.jar;
A = LOAD 'a.csv' USING org.apache.pig.piggybank.storage.CSVExcelStorage() AS (f1,f2,f3);
DUMP A;
Output : A
(396124437168537600,I really wish I didn't give up everything I did for you, I'm so mad at my self for even letting it get as far as it did.,savava143)
Approach 2 : Using CSVLoader
Ref : http://pig.apache.org/docs/r0.9.1/api/org/apache/pig/piggybank/storage/CSVLoader.html
The below script makes use of CSVLoader(); DUMP A will result in the same output seen earlier.
A = LOAD 'a.csv' USING org.apache.pig.piggybank.storage.CSVLoader() AS (f1,f2,f3);
The error is that you do not want to FILTER based on a regex but GENERATE new fields based on a regex. To filter, you need to know whether the line has to be filtered out, hence the boolean requirement.
Therefore, you have to use :
b = FOREACH a GENERATE REGEX_EXTRACT(FIELD, REGEX, INDEX_OF_GROUP_TO_RETURN);
However, as @Murali Rao said, your values are not just comma separated but CSV (think about how you will handle a comma in a tweet: it is not a field separator, just some content).
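Combining both corrections, a sketch against the a.csv sample above (the regex is just an illustration that pulls the numeric id out of f1; adapt it to your real fields):
REGISTER piggybank.jar;
A = LOAD 'a.csv' USING org.apache.pig.piggybank.storage.CSVExcelStorage() AS (f1,f2,f3);
-- GENERATE new fields from the regex instead of using it in a FILTER condition
B = FOREACH A GENERATE REGEX_EXTRACT(f1, '(\\d+)', 1) AS tweetid, f2 AS tweet, f3 AS userid;
DUMP B;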

Error: Unable to cast object of type 'NHibernate.Hql.Ast.HqlBitwiseAnd' to type 'NHibernate.Hql.Ast.HqlBooleanExpression'

I'm using S#arp Architecture and NHibernate 3.1.
I'd like to randomly select 10 rows from a table, and I'm following the advice given on this page: Linq Orderby random ThreadSafe for use in ASP.NET.
The solution involves doing XOR in the database and my query looks like this
repo.FindAll()
.OrderBy(q => ((~(q.QuestionId & seed)) & (q.QuestionId | seed)))
.Take(10);
And I get the following error:
Unable to cast object of type 'NHibernate.Hql.Ast.HqlBitwiseAnd' to type 'NHibernate.Hql.Ast.HqlBooleanExpression'
When I remove '~' from the query I don't get the error. Is this a bug in NHQueryProvider, in that it cannot understand '~' as a bitwise operation?
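As a side note on the expression itself: ~(a & b) & (a | b) is bitwise XOR written out with AND/OR/NOT, so one possible workaround to sketch (untested against NHibernate 3.1's LINQ provider) is to use the ^ operator directly and avoid '~' altogether:
// ~(q.QuestionId & seed) & (q.QuestionId | seed) == q.QuestionId ^ seed
repo.FindAll()
    .OrderBy(q => q.QuestionId ^ seed)
    .Take(10);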