BAPI or FM for searching FI documents - SAP

I am searching for a BAPI to find FI documents based on input criteria (document type, posting date, ...) - the same as the Document List screen in FB03, not the initial screen with only three inputs (Document Number, Company Code, Fiscal Year).
As I don't have the document number, I need a search-enabled BAPI.
I am using BAPI_ACC_DOCUMENT_POST for posting.
Any ideas?

I need to answer my own question - I was hoping to skip these two days of investigation by getting an answer here :)
BAPI_ACC_CO_DOCUMENT_FIND is the correct BAPI for searching posted FI documents. What I found out is that if I want to search by posting date, I also have to provide the Controlling Area (and if I don't, nothing is returned instead of an error).
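In case it helps anyone calling it from outside ABAP, here is a rough sketch using the pyrfc connector. The connection details, import parameter names and result table name (CONTROLLINGAREA, POSTING_DATE_FROM/TO, DOC_LIST) are only illustrative guesses, not the verified signature - check the actual interface of BAPI_ACC_CO_DOCUMENT_FIND in SE37 before relying on them:
from pyrfc import Connection

# Connection details are placeholders for a real SAP system
conn = Connection(ashost="sap-host", sysnr="00", client="100",
                  user="RFC_USER", passwd="secret")

# Parameter and table names below are guesses for illustration only -
# look up the real interface of the BAPI in SE37 first.
result = conn.call("BAPI_ACC_CO_DOCUMENT_FIND",
                   CONTROLLINGAREA="1000",          # required when searching by posting date
                   POSTING_DATE_FROM="20240101",
                   POSTING_DATE_TO="20240131")

for doc in result.get("DOC_LIST", []):
    print(doc)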


Show content based on whether the user has a certain tag in Mailchimp

I went through the Merge Tags here and here, but couldn't figure out the syntax that would allow me to show content based on whether the user has a certain Tag or not.
Help?
My goal, in case it helps:
A user subscribes and is queued for a welcome mail one day later. In the meantime that user may get tagged (my way of segmenting them), so the next day, when that user receives the welcome mail, the content needs to be tailored based on the tag the user got.
I got a response from their support saying
merge tags do not work with Tags just yet
Here's the whole thing:
While we do have conditional merge tags available, I'm afraid we do not have any that would work with Tags. To be transparent, Tags were added only a few months ago, and there are some features in our application that have not been updated to work with Tags just yet.
Because conditional merge tags do not work with Tags yet, the best option would be to create multiple automations and send them out based on each tag. If you do it that way, you'll be able to target those in specific tags with specific content.
I dug a little deeper from the first link. There is another link, Use Conditional Merge Tag Blocks, which contains the definitions and examples below:
Name: IF-ELSE
Definition: Use ELSE to indicate alternative content to display if the *|MERGE|* tag value is false.
Example:
*|IF:MERGE|* content to display *|ELSE:|* alternative content to display *|END:IF|*
Name: ELSEIF
Definition: Use ELSEIF to specify a new *|MERGE|* tag to be matched against if the first *|MERGE|* tag value is false.
Example:
*|IF:TRANSACTIONS >= 20|* Enjoy this 40% off coupon! *|COUPON40|*
*|ELSEIF:TRANSACTIONS >= 10|* Enjoy this 20% off coupon! *|COUPON20|*
*|ELSE:|* Enjoy this 10% off coupon! *|COUPON10|* *|END:IF|*
More examples with definitions can be found here.
Hope this is the answer you were after.

A file named Butterfly7198.txt was found and it's boggling me

I stumbled across a file while changing images for a Toontown Rewritten Context Pack I've been working on. The file is marked
Butterfly7198.txt
and upon opening it, I was greeted with the following prompt, which reads
"Hi. This isn't part of any grand story arc or anything. We're not using this as
some gimmick to announce some bold new feature. There's no big pot of gold at the end of this rainbow. We're looking for people with technical talent who want to help work on
Toontown Rewritten. You can discuss this file and its contents online if you wish. However, it IS meant to be solved alone, so please don't share answers or spoilers. Good luck. -Butterfly 7198"
Then I was greeted with a weird chain of letters and numbers, which I could only assume meant it was encoded in some language I had no idea about. It goes as follows:
A/MNCrYNG1ljAAAAAAAAAAADAAAAQAAAAHNEAAAAZAAAZAEAbAAAWgEAZAAAZAEAbAIAWgMAZAIA
hAAAWgQAZAMAhAAAWgUAZAQAZQYAZgEAZAUAhAAAgwAAWVoHAGQBAFMoBgAAAGn/////TmMBAAAA
AwAAAAQAAABDAAAAczoAAABkAQB9AQB4LQB0AABkAwCDAQBEXR8AfQIAdAEAagIAfAEAfAAAF4MB
AGoDAIMAAH0BAHETAFd8AQBTKAQAAABOdAAAAABpAAQAAGkAABAAKAQAAAB0BgAAAHhyYW5nZXQI
AAAAX2hhc2hsaWJ0BgAAAHNoYTI1NnQGAAAAZGlnZXN0KAMAAAB0AQAAAG10AQAAAGh0AQAAAGko
AAAAACgAAAAAcxEAAABzZWNyZXRfbWVzc2FnZS5weXQFAAAAX2lzaGEEAAAAcwgAAAAAAQYBEwAd
AWMCAAAACQAAAAgAAABDAAAAcyMBAAB0AABkAQCDAQB9AgBkAgB9AwB4XwB0AQBkAQCDAQBEXVEA
fQQAfAMAfAIAfAQAGXQCAHwAAHwEAHQDAHwAAIMBABYZgwEAFzd9AwB8AgB8AwBkAwBAGXwCAHwE
ABkCfAIAfAQAPHwCAHwDAGQDAEA8cR8AV2QCAH0EAGQCAH0DAGQEAH0FAHiWAHwBAERdjgB9BgB4
UQB0AQBkBQCDAQBEXUMAfQcAfAQAZAYAF2QDAEB9BAB8AwB8AgB8BAAZF2QDAEB9AwB8AgB8AwAZ
fAIAfAQAGQJ8AgB8BAA8fAIAfAMAPHGgAFd8AgB8AgB8BAAZfAIAfAMAGRdkAwBAGX0IAHwFAHQE
AHQCAHwGAIMBAHwIAEGDAQA3fQUAcY0AV3wFAFMoBwAAAE5pAAEAAGkAAAAAaf8AAABSAAAAAGn9
AwAAaQEAAAAoBQAAAHQFAAAAcmFuZ2VSAQAAAHQDAAAAb3JkdAMAAABsZW50AwAAAGNocigJAAAA
dAMAAABrZXl0BAAAAGRhdGF0AQAAAFN0AQAAAGpSBwAAAHQDAAAAb3V0dAEAAABidAEAAAB4dAEA
AABLKAAAAAAoAAAAAHMRAAAAc2VjcmV0X21lc3NhZ2UucHl0BQAAAF9tcmM0CQAAAHMgAAAAAAEM
AQYBEwEmASkBBgEGAQYBDQETAQ4BEgEhARoBHgF0BQAAAFJvYm90YwAAAAAAAAAAAQAAAEIAAABz
RwAAAGUAAFoBAGQAAIQAAFoCAGQBAIQAAFoDAGQCAIQAAFoEAGQDAIQAAFoFAGQEAIQAAFoGAGQF
AIQAAFoHAGQGAIQAAFoIAFJTKAcAAABjAQAAAAEAAAACAAAAQwAAAHMWAAAAZAMAfAAAXwAAZAIA
fAAAXwEAZAAAUygEAAAATmkAAAAAUgAAAAAoAgAAAGkAAAAAaQAAAAAoAgAAAHQLAAAAX1JvYm90
X19wb3N0CwAAAF9Sb2JvdF9fc3RrKAEAAAB0BAAAAHNlbGYoAAAAACgAAAAAcxEAAABzZWNyZXRf
bWVzc2FnZS5weXQIAAAAX19pbml0X18cAAAAcwQAAAAAAQkBYwEAAAABAAAAAgAAAEMAAABzDQAA
AHwAAGoAAGQBAIMBAFMoAgAAAE50AQAAAGQoAQAAAHQEAAAAbW92ZSgBAAAAUhkAAAAoAAAAACgA
AAAAcxEAAABzZWNyZXRfbWVzc2FnZS5weXQIAAAAbW92ZURvd24gAAAAcwIAAAAAAWMBAAAAAQAA
AAIAAABDAAAAcw0AAAB8AABqAABkAQCDAQBTKAIAAABOdAEAAABsKAEAAABSHAAAACgBAAAAUhkA
AAAoAAAAACgAAAAAcxEAAABzZWNyZXRfbWVzc2FnZS5weXQIAAAAbW92ZUxlZnQjAAAAcwIAAAAA
AWMBAAAAAQAAAAIAAABDAAAAcw0AAAB8AABqAABkAQCDAQBTKAIAAABOdAEAAAByKAEAAABSHAAA
ACgBAAAAUhkAAAAoAAAAACgAAAAAcxEAAABzZWNyZXRfbWVzc2FnZS5weXQJAAAAbW92ZVJpZ2h0
JgAAAHMCAAAAAAFjAQAAAAEAAAACAAAAQwAAAHMNAAAAfAAAagAAZAEAgwEAUygCAAAATnQBAAAA
dSgBAAAAUhwAAAAoAQAAAFIZAAAAKAAAAAAoAAAAAHMRAAAAc2VjcmV0X21lc3NhZ2UucHl0BgAA
AG1vdmVVcCkAAABzAgAAAAABYwIAAAAJAAAABAAAAEMAAABzbgEAAHQAAHwBAIMBAHQBAGsDAHMw
AHQCAHwBAIMBAGQBAGsDAHMwAHwBAGQCAGsHAHI8AHQDAIMAAIIBAG4AAHwAAGoEAFwCAH0CAH0D
AGkEAGQRAGQEADZkEgBkBgA2ZBMAZAcANmQUAGQIADZ8AQAZXAIAfQQAfQUAZAMAfAIAfAQAFwQD
awEAb5IAZAkAawAAbgIAAgFvtABkAwB8AwB8BQAXBANrAQBvsgBkCQBrAABuAgACAXO7AHQFAFN8
AQBkBgBrAwByzQB8AgBuBwB8AgB8BAAXfQYAfAEAZAgAawMAcukAfAMAbgcAfAMAfAUAF30HAGQB
AHwGAGQKABR8AQBkCwBrBgBkCQAUF3wHABc+ZAwAQHIbAXQFAFN8AAAEagYAfAEANwJfBgB8AgB8
BAAXfAMAfAUAF2YCAHwAAF8EAHgmAGQVAERdHgB9CAB8AABqBgBqBwB8CABkEACDAgB8AABfBgBx
SAFXdAgAUygWAAAATmkBAAAAdAQAAABkbHJ1aQAAAABSGwAAAGn/////Uh4AAABSIAAAAFIiAAAA
aSAAAABpQAAAAHQCAAAAZHVsiQAAAMwhqSWNKClirAW/QlF2SzalSZQbJVnGYblNWUQGaxJ3yRqJ
Fyl50iotVjxwYyWCGTF/+WkPFnNwQwtmNJ1UiEiNWtpvp1XmJIBS8UrmIKoZ43ZWaPcLo01iEmEQ
KSUsdKJRv1U+NW4IQi3WerApRUV7KRhweibvL1pwSiDPJMNZhXw5DpN0n2fPPXYt514QI6p4jGJI
HfA1nhjFVwY6YCRSRV1ltX44J3I7dVWhXlIoSgK0aattVFlhRzxZWRSME6kh11FGKWRrZX0XAjsN
4m1qQl0LLgeLJjwUfQqHc+JGoy4teT5iRhbAENNldj3MGkU9xTPZVXJFwESuOXMvwUzsKGYL2AMI
Sfp//3//SCtT1AB0AgAAAHVkdAIAAABscnQCAAAAcmxSAAAAACgCAAAAaQAAAABpAQAAACgCAAAA
af////9pAAAAACgCAAAAaQEAAABpAAAAACgCAAAAaQAAAABp/////ygEAAAAUiYAAABSJQAAAFIn
AAAAUigAAAAoCQAAAHQEAAAAdHlwZXQDAAAAc3RyUgsAAAB0CgAAAFZhbHVlRXJyb3JSFwAAAHQF
AAAARmFsc2VSGAAAAHQHAAAAcmVwbGFjZXQEAAAAVHJ1ZSgJAAAAUhkAAABSGwAAAFITAAAAdAEA
AAB5dAIAAABkeHQCAAAAZHl0AgAAAGN4dAIAAABjeXQCAAAAYnQoAAAAACgAAAAAcxEAAABzZWNy
ZXRfbWVzc2FnZS5weVIcAAAALAAAAHMeAAAAAAEwAAwBDwEsAUAABAEcARwBJAAEAQ8BFwENARwB
YwEAAAAEAAAABAAAAEMAAABzcAAAAHwAAGoAAFwCAH0BAH0CAHwAAGoAAGQHAGsDAHI0AGQCAGQB
AHwBABhkAQB8AgAYZgIAFlN0AQB8AABqAgCDAQB9AwB8AwBqAwBkAwCDAQBzVgBkBABTdAQAfAMA
ZAUAIHQFAGoGAGQGAIMBAIMCAFMoCAAAAE5pHwAAAHM3AAAAWW91IGFyZSBzdGlsbCAlZCBsZWZ0
IGFuZCAlZCBhYm92ZSB3aGVyZSB5b3Ugc2hvdWxkIGJlIXMCAAAABIVzHwAAAENoZWF0ZXIhIEdv
IHNvbHZlIGl0IGNvcnJlY3RseSFp/f///3OAAAAAUWZ2ZlhZbjROVHpCR084eWo5cjBOTjk0ekZ6
VEphczEyUDIvak1Uc2QzUFFlNjJIeVh0WXZIaGxrMXFkbHR4SnhpZ0Fxd1ozczlqK2E4dGhBZVlp
M242TWY3RDU4eCtEZzhDWEkvS1FSRzB6UGhyYVl6TGRnNGJ2TVpJYTl4Yz0oAgAAAGkfAAAAaR8A
AAAoBwAAAFIXAAAAUggAAABSGAAAAHQIAAAAZW5kc3dpdGhSFQAAAHQJAAAAX2JpbmFzY2lpdAoA
AABhMmJfYmFzZTY0KAQAAABSGQAAAFITAAAAUi8AAAB0AQAAAGsoAAAAACgAAAAAcxEAAABzZWNy
ZXRfbWVzc2FnZS5weXQFAAAAc29sdmU6AAAAcxAAAAAAAQ8BDwEWAg8BDwAEAhABKAkAAAB0CAAA
AF9fbmFtZV9fdAoAAABfX21vZHVsZV9fUhoAAABSHQAAAFIfAAAAUiEAAABSIwAAAFIcAAAAUjkA
AAAoAAAAACgAAAAAKAAAAABzEQAAAHNlY3JldF9tZXNzYWdlLnB5UhYAAAAbAAAAcw4AAAAGAQkE
CQMJAwkDCQMJDigIAAAAdAcAAABoYXNobGliUgIAAAB0CAAAAGJpbmFzY2lpUjYAAABSCAAAAFIV
AAAAdAYAAABvYmplY3RSFgAAACgAAAAAKAAAAAAoAAAAAHMRAAAAc2VjcmV0X21lc3NhZ2UucHl0
CAAAADxtb2R1bGU+AQAAAHMIAAAADAEMAgkFCRI=
I was wondering if anyone knows exactly what this was encoded with, or whether there is someone out there who can help me decode it.
(Note: I tried using a decoding website, but none of the languages I checked matched what I was trying to decode. This has been boggling me ever since I started working on this two years ago.)
It appears to be Base64-encoded, at least in parts. Putting it through a Base64 decoder gives the text below, which includes a few hints as to what this might be.
Yc#sDddlZddlZdZdZdefdYZdS(iNcCs:d}x-tdD]}tj||j}qW|S(Ntii(txranget_hashlibtsha256tdigest(tmthti((ssecret_message.pyt_ishasc Cs#td}d}x_tdD]Q}|||t||t|7}||d#||||<||d#d|dS(NiR(ii(t_Robot__post_Robot__stk(tself((ssecret_message.pytinits cCs
|jdS(Ntd(tmove(R((ssecret_message.pytmoveDown scCs
|jdS(Ntl(R(R((ssecret_message.pytmoveLeft#scCs
|jdS(Ntr(R(R((ssecret_message.pyt moveRight&scCs
|jdS(Ntu(R(R((ssecret_message.pytmoveUp)sc Csnt|tks0t|dks0|dkrd#rtS|j|7||||f|x&dD]}|jj|d|_qHWtS(NitdlruiRiRR R"i i#tdul!%()bBQvK6I%YaMYDkw)y*-V5nB-z)EE{)pz&/ZpJ $Y|9tg=v-^#xbH5W:`$RE]e~8'r;uU^R(JimTYaGbFev=E=3UrED9s/L(fIH+StudtlrtrlR(ii(ii(ii(ii(R&R%R'R(( ttypetstrRt
ValueErrorRtFalseRtreplacetTrue( RRRtytdxtdytcxtcytbt((ssecret_message.pyR,s0,#$
cCsp|j\}}|jdkr4dd|d|fSt|j}|jdsVdSt|d tjdS(Nis7You are still %d left and %d above where you should be!ssCheater! Go solve it correctly!isQfvfXYn4NTzBGO8yj9r0NN94zFzTJas12P2/jMTsd3PQe62HyXtYvHhlk1qdltxJxigAqwZ3s9j+a8thAeYi3n6Mf7D58x+Dg8CXI/KQRG0zPhraYzLdg4bvMZIa9xc=(ii(RRRtendswithRt _binasciit
a2b_base64(RRR/tk((ssecret_message.pytsolve:s( t__name__t
__module__RRRR!R#RR9(((ssecret_message.pyRs (thashlibRtbinasciiR6RRtobjectR(((ssecret_message.pyts
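The readable identifiers that surface (xrange, _hashlib, sha256, secret_message.py, Robot, moveUp/moveDown/moveLeft/moveRight, solve) suggest the blob is a marshalled Python 2 code object rather than plain text. Here is a minimal sketch of how you could pull those hints out yourself; it assumes the base64 portion of the file has been saved on its own as blob.txt (that filename is just for illustration):
import base64
import string

def printable_runs(data, min_len=4):
    # Pull runs of printable characters out of a bytes blob,
    # similar to what the Unix `strings` utility does.
    keep = set(string.printable) - set("\t\n\r\x0b\x0c")
    runs, current = [], []
    for byte in data:
        ch = chr(byte)
        if ch in keep:
            current.append(ch)
        else:
            if len(current) >= min_len:
                runs.append("".join(current))
            current = []
    if len(current) >= min_len:
        runs.append("".join(current))
    return runs

# blob.txt is assumed to hold only the base64 part of Butterfly7198.txt
with open("blob.txt") as f:
    blob = "".join(f.read().split())
decoded = base64.b64decode(blob)
print(printable_runs(decoded))  # surfaces xrange, sha256, secret_message.py, Robot, ...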

Downloading all full-text articles in PMC and PubMed databases

According to one of the answered questions by the NCBI Help Desk, we cannot "bulk-download" PubMed Central. However, can I use the NCBI E-utilities to download all full-text papers in the PMC database using Efetch, or at least find all corresponding PMCIDs using Esearch, in the Entrez Programming Utilities? If yes, then how? If the E-utilities cannot be used, is there any other way to download all full-text articles?
First of all, before you go downloading files in bulk, I highly recommend you read the E-utilities usage guidelines.
If you want full-text articles, you're going to want to limit your search to open access files. Furthermore, I suggest also restricting your search to Medline articles if you want articles that are any good. Then you can do the search.
Using Biopython, this gives us:
from Bio import Entrez

# NCBI asks you to identify yourself; the address here is only a placeholder
Entrez.email = "your.name@example.org"

search_query = 'medline[sb] AND "open access"[filter]'
# getting search results for the query
search_results = Entrez.read(Entrez.esearch(db="pmc", term=search_query, retmax=10, usehistory="y"))
You can use the search function on the PMC website and it will display the generated query that you can copy/paste into your code.
Now that you've done the search, you can actually download the files:
handle = Entrez.efetch(db="pmc", rettype="full", retmode="xml", retstart=0, retmax=int(search_results["Count"]), webenv=search_results["WebEnv"], query_key=search_results["QueryKey"])
You might want to download in batches by varying retstart and retmax in a loop, in order to avoid flooding the servers.
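A minimal sketch of that loop, reusing the same efetch call; the batch size and the output file names are only illustrative:
batch_size = 100  # illustrative; pick something the usage guidelines allow
count = int(search_results["Count"])
for start in range(0, count, batch_size):
    handle = Entrez.efetch(db="pmc", rettype="full", retmode="xml",
                           retstart=start, retmax=batch_size,
                           webenv=search_results["WebEnv"],
                           query_key=search_results["QueryKey"])
    # write each batch to its own file so one failed request doesn't cost everything
    with open("pmc_batch_%d.xml" % start, "w") as out:
        out.write(handle.read())
    handle.close()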
If handle contains only one file, handle.read() contains the whole XML file as a string. If it contains more, the articles are contained in <article></article> nodes.
The full text is only available as XML, and the default parser doesn't handle XML namespaces, so you're going to be on your own with ElementTree (or another parser) to parse your XML.
Here, the articles are found thanks to the internal history of the E-utilities, which is accessed with the webenv and query_key arguments and enabled by the usehistory="y" argument passed to Entrez.esearch().
A few tips about XML parsing with ElementTree: you can't delete a grandchild node directly, so you're probably going to want to delete some nodes recursively. node.text returns the text in node, but only up to the first child, so you'll need something along the lines of "".join(node.itertext()) if you want to get all the text in a given node.
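A minimal sketch of both tricks, assuming the XML from efetch is already in a string called xml_data and using a made-up tag name ("table-wrap") as the node to strip:
import xml.etree.ElementTree as ET

def remove_descendants(parent, tag):
    # ElementTree only lets you remove direct children, so recurse
    for child in list(parent):
        if child.tag == tag:
            parent.remove(child)
        else:
            remove_descendants(child, tag)

root = ET.fromstring(xml_data)               # xml_data = the string returned by handle.read()
for article in root.iter("article"):
    remove_descendants(article, "table-wrap")   # "table-wrap" is just an example tag
    full_text = "".join(article.itertext())     # all text, not just up to the first child
    print(full_text[:200])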
According to one of the answered questions by NCBI Help Desk , we cannot "bulk-download" PubMed Central.
Downloading MEDLINE (https://www.nlm.nih.gov/bsd/medline.html) together with the PMC Open Access subset (https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/) will get you a good portion of it (I don't know the percentage). It will indeed miss the PMC full-text articles whose license doesn't allow redistribution, as explained at https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/.

How do I access the "See Also" Field in the Wiktionary API?

Many of the Wiktionary pages for Chinese Characters (Hanzi) include links at the top of the page to other similar-looking characters. I'd like to use the Wiktionary API to send a single character in the query and receive a list of similar characters as the response. Unfortunately, I can't seem to find any query that includes the "See Also" field. Is this kind of query possible?
The “see also” field is just a line of wiki code in the page source, and there is no way for the API to know that it's different from any other piece of text on the page.
If you are happy with using only the English version of Wiktionary, you can fetch the wikicode: index.php?title=太&action=raw, and then parse the result for the template also. In this case, the line you are looking for is {{also|大|犬}}.
To check if the template is used on the page at all, query the API for titles=太&prop=templates&tltemplates=Template:also
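A minimal sketch of both steps in Python with requests; the URLs are the English Wiktionary endpoints mentioned above, and the regex is a simplification that assumes the template sits on a single line:
import re
import requests

# Fetch the raw wikicode of the page and look for the {{also|...}} template
raw = requests.get("https://en.wiktionary.org/w/index.php",
                   params={"title": "太", "action": "raw"}).text
match = re.search(r"\{\{also\|([^}]*)\}\}", raw)
if match:
    print(match.group(1).split("|"))   # e.g. ['大', '犬']

# Or ask the API whether the page uses Template:also at all
api = requests.get("https://en.wiktionary.org/w/api.php",
                   params={"action": "query", "titles": "太",
                           "prop": "templates",
                           "tltemplates": "Template:also",
                           "format": "json"}).json()
print(api["query"]["pages"])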
Similar templates are available in other language editions of Wiktionary, in case you want to use sources other than the English one. The current list is:
br:Patrom:gwelet
ca:Plantilla:vegeu
cs:Šablona:Viz
de:Vorlage:Siehe auch
el:Πρότυπο:δείτε
es:Plantilla:desambiguación
eu:Txantiloi:Esanahi desberdina
fi:Malline:katso
fr:Modèle:voir
gl:Modelo:homo
id:Templat:lihat
is:Snið:sjá einnig
it:Template:Vedi
ja:テンプレート:see
no:Mal:se også
oc:Modèl:veire
pl:Szablon:podobne
pt:Predefinição:ver também
ru:Шаблон:Cf
sk:Šablóna:See
sv:Mall:se även
It has been suggested that the WikiData project be expanded to cover Wiktionary. If and when that happens, you might be able to query the WikiData API for that kind of stuff!

Google Custom Search API: how to return country-specific results only

I am writing some PHP code which takes a given search phrase and URL and searches through the Google search results until it finds the URL (only the first 100 results). My problem is that this only works for the US. I have tried adding the "&cr=" option, but it still only returns US results.
The full URL I am using for the request is:
https://www.googleapis.com/customsearch/v1?key=API_KEY&cx=CX_VALUE&q=KEYWORD&cr=COUNTRY&alt=JSON
Does anyone have experience with this? I want to be able to see UK results. I tried inserting &cr=countryUK, but it still only returns US results.
Thanks :)
Regards,
Stian
Use the gl=<country code> param to limit it to your country of choice (so gl=gb for the UK).
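So the full request URL from the question would look something like this (API_KEY, CX_VALUE and KEYWORD are the same placeholders as above):
https://www.googleapis.com/customsearch/v1?key=API_KEY&cx=CX_VALUE&q=KEYWORD&gl=gb&alt=JSON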
More info here:
http://googleajaxsearchapi.blogspot.com/2009/10/web-search-in-your-country.html