I've been tasked with coding the front end and back end of a web interface that gives an overview of all the invoices for a specific customer in our SAP Business One system.
So far so good, but now I also need to fetch those invoices as PDF files and display them in a new browser tab; the PDF gets generated on the fly in our system, I guess. I found this page from SAP: https://help.sap.com/viewer/284ff5baa45f4057a251ff4266d4fcd1/2011.500/en-US/fae9de62113646cf843291a38210b94e.html
But I don't really get the point of it. It also says "SAP S/4HANA Cloud", and I'm not sure whether that is somehow integrated into B1 or something entirely different. I'm just looking for a proper REST API to get invoices as PDF files by a specific identifier.
So I don't know, maybe you guys can help me out. I'm using PHP, by the way, and so far I've only used SAP libraries to build cURL calls against our local system that end up looking like this:
https://sys-sap/b1s/v1/Invoices?$select=DocEntry,DocNum,DocType,CardCode&$filter=CardCode eq 30088
and it returns the invoice data I want. Now I need the PDFs. Thanks in advance!
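For context, stripped of the library helpers, those calls boil down to something like the sketch below (hostname, company DB and credentials are placeholders; the login request is the standard Service Layer login step):

<?php
// Log in to the Service Layer first; the B1SESSION cookie lands in a
// cookie jar and is reused for the actual query.
$base = 'https://sys-sap/b1s/v1';
$jar  = tempnam(sys_get_temp_dir(), 'b1'); // holds the session cookie

$ch = curl_init($base . '/Login');
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => json_encode([
        'CompanyDB' => 'MY_COMPANY_DB', // placeholder
        'UserName'  => 'manager',       // placeholder
        'Password'  => 'secret',        // placeholder
    ]),
    CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_COOKIEJAR      => $jar,
]);
curl_exec($ch);
curl_close($ch);

// Run the invoice query from above, reusing the session cookie.
$url = $base . '/Invoices?$select=DocEntry,DocNum,DocType,CardCode'
     . '&$filter=' . rawurlencode('CardCode eq 30088');
$ch = curl_init($url);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_COOKIEFILE     => $jar,
]);
$invoices = json_decode(curl_exec($ch), true);
curl_close($ch);

print_r($invoices['value'] ?? $invoices);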
Related
I created a website that accepts appointments online using Wix Bookings. I have already managed to export the appointments, with all their details, to JSON format.
I am trying to do the same so that I can create appointments from an external app, but for this I first need to call getServiceAvailability() from the wix-bookings API, and I can't find a way to do it. I have tried every way I know and get no results. I am not a programming expert, which is why I'm asking for help this way; if someone who knows Wix well and how to work with its API could help me, I would at least have a basis for how to do it. Thank you.
If you want to get the availability information from an external app, you can create an HTTP function to expose that information.
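For example, once published, Wix exposes backend HTTP functions under the site's /_functions/ path, so your external app can call the endpoint like any other REST API. A minimal PHP sketch, where the function name availability and the serviceId parameter are hypothetical stand-ins for whatever you implement:

<?php
// Hypothetical endpoint: a backend HTTP function named "availability"
// that the Wix site exposes under its /_functions/ path.
$serviceId = 'my-service-id'; // placeholder
$url = 'https://www.example-wix-site.com/_functions/availability'
     . '?serviceId=' . urlencode($serviceId);

$json = file_get_contents($url);
if ($json === false) {
    exit("Request failed\n");
}

print_r(json_decode($json, true));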
Is it possible to retrieve the DRL of a document (e.g. https://host:port/ewebtop/drl/objectId/0900a58e80970f7b) via a .NET application? The idea is that when a user clicks the link, they can edit the document, and when they close it, the document is automatically saved back to Documentum.
First of all: a link is a link. What you decide to do with it is up to you. The default handler in the browser will just redirect you to the Webtop application. If you have SSO, you can have the document opened for editing. There are some extra arguments that can be provided (view/edit).
The object id is the only varying part of the URL, so you can easily construct this in code.
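For example, a trivial sketch in PHP (host, port, and the object id are the placeholders from the question):

<?php
// Everything in the DRL is fixed except the object id.
function drlFor(string $objectId): string
{
    return 'https://host:port/ewebtop/drl/objectId/' . rawurlencode($objectId);
}

echo drlFor('0900a58e80970f7b');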
Secondly: what is your goal? There is no way to make the document upload itself into the Documentum repository. You could write a plugin for every application to handle that, but it seems like a big task, especially when it comes to dealing with security.
The problem is that upon check-in, the user must provide some information, at least the new version number...
If you're building a thick client in .NET, I would go with DFS; that's the only real option here.
I work with a company that outsources its website. I'm trying to retrieve data from the site without having to contact those who run it directly. The table data I'm trying to retrieve can be found here:
http://pointstreak.com/prostats/scoringleaders.html?leagueid=49&seasonid=5967
My methodology thus far has been to use Google Chrome's Developer Tools to inspect the page source, but when I filter the Network tab for XHR, only the info for the current games shows up. Is there any way to scrape this data (I have no idea how to do that; any resources or direction would be appreciated), or another way to get it? Am I missing it in the Developer Tools?
If I had to contact those who run the website, what exactly should I ask for? I'm trying to get JSON data that I can easily turn into my own UITableViewController.
Thank you.
Just load the page source and parse the HTML.
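A minimal sketch of that in PHP, assuming the scoring table is served in the raw HTML (the //table//tr XPath is a guess and will need adjusting to the page's real markup):

<?php
// Fetch the page and pull the stats table out of the raw HTML.
$html = file_get_contents(
    'http://pointstreak.com/prostats/scoringleaders.html?leagueid=49&seasonid=5967'
);

$doc = new DOMDocument();
libxml_use_internal_errors(true); // the page is unlikely to be valid XHTML
$doc->loadHTML($html);
libxml_clear_errors();

$rows  = [];
$xpath = new DOMXPath($doc);
foreach ($xpath->query('//table//tr') as $tr) {
    $cells = [];
    foreach ($tr->childNodes as $cell) {
        if ($cell->nodeName === 'td' || $cell->nodeName === 'th') {
            $cells[] = trim($cell->textContent);
        }
    }
    if ($cells) {
        $rows[] = $cells;
    }
}

// You wanted JSON you can feed a UITableViewController, so emit JSON.
echo json_encode($rows, JSON_PRETTY_PRINT);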
Depending on your usage, there may well be a copyright issue: the page has an explicit copyright notice, so you will need to obtain explicit permission for your use.
I have a client who uploads his properties into third-party software created by a company called 'Estates IT', which then sends that file, as a .blm, to Rightmove, which processes it.
This client wants us to take that .blm file and output the data on a newly designed site we are building. Does anyone have experience with, or know of methods for, working with .blm files? As far as I know it is a static file.
I don't suppose this will be of help to you anymore, but if anyone else is looking for ways to handle the Rightmove BLM file:
You can find several solutions implemented here
Using this repo, for example, you can just:
$blm = new \BLM\Reader(dirname(__FILE__) . '/test.blm');
var_dump($blm->toArray());
Most solutions I've seen are implemented in PHP though, and seem to be pretty old.
In case the website is a WordPress site, there seem to be a couple of plugins for this too (example).
I'm not too sure Rightmove still outputs a BLM in the latest version of its API.
This is a pretty simple process. We are currently working the other way round, i.e. our system manages an estate agent's property listings and sends them to Rightmove for listing.
The Rightmove API definition can be found at www.rightmove.co.uk/adf.html
Each .blm file comprises a definition section and a data section, which should make it quite straightforward to parse.
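As an illustration, here is a minimal hand-rolled parse in PHP. It assumes the common BLM delimiters ('^' ends a field, '~' ends a record) rather than reading them from the #HEADER# section, so a proper library such as the reader mentioned above is the safer choice:

<?php
// Minimal BLM parse: field names come from the #DEFINITION# section,
// records from the #DATA# section. Delimiters are assumed, not read
// from the #HEADER# section as a robust parser would do.
function parseBlm(string $path): array
{
    $text = file_get_contents($path);

    if (!preg_match('/#DEFINITION#\s*(.+?)\s*#DATA#/s', $text, $def) ||
        !preg_match('/#DATA#\s*(.+?)\s*#END#/s', $text, $data)) {
        throw new RuntimeException("Unrecognised BLM layout in $path");
    }

    $fields = array_values(array_filter(array_map('trim', explode('^', $def[1]))));

    $rows = [];
    foreach (array_filter(array_map('trim', explode('~', $data[1]))) as $record) {
        $values = array_slice(explode('^', $record), 0, count($fields));
        $rows[] = array_combine($fields, array_pad($values, count($fields), ''));
    }
    return $rows;
}

print_r(parseBlm(__DIR__ . '/test.blm'));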
I want to be able to retrieve dynamic data from a web page (share prices). I started out by retrieving the HTML code before realising that, as it is live data, the HTML code would be of little use. Although I am looking to capture specific data, all I wish to do is process a web page that I specify and have it return the text of that site rather than the HTML code. Basically, a copy and paste of the entire page would be great.
Any ideas would be really appreciated!
'Screen scraping' by parsing HTML is so early 2000s... what I would do is read up on Amazon's Mechanical Turk. You can develop a queued architecture where you submit URLs to the Mechanical Turk service. The service would automatically distribute these bits of work to users, who would then do the dirty task of copying and pasting out the valuable stock quote information you require. Users around the world would anxiously await delivery of the next URL to their Mechanical Turk inbox... pining for the opportunity to copy/paste out another share price for your application. Sure, it might take a few minutes to update your prices, but hey, they would be HAND parsed by REAL people around the globe! Just think of the possibilities!
Well, the HTML contains the text of the website, so you "just" need to parse the HTML.
EDIT: If the data is not in the HTML but loaded dynamically, the situation is different. As I see it, you have two options:
Find out how the data is loaded (i.e. read the JavaScript on the page). If it is updated via some web service, you could query the same web service in your program (see the sketch after these two options).
Use a web browser to get the data and then read the dynamic HTML tree of the page. Maybe the WPF WebBrowser control can help you with this, but I'm not sure, since I've never done this myself.
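To illustrate the first option: once the browser's network tools reveal which URL the page's JavaScript pulls its quotes from, your program can request that URL directly. A sketch in PHP against an entirely made-up JSON endpoint (the same few lines translate directly to .NET):

<?php
// Made-up endpoint standing in for whatever URL the page's JavaScript
// actually requests; find the real one in the browser's network tools.
$url = 'https://example.com/api/quotes?symbol=' . urlencode('MSFT');

$quote = json_decode(file_get_contents($url), true);

// The field names are equally hypothetical.
printf("%s: %s\n", $quote['symbol'] ?? '?', $quote['last'] ?? '?');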
Is it possible to find this same data provided in a ready-to-consume format rather than scraping HTML for it? It seems like there are probably public web services for stock quotes.
For example: a quick search for "Stock price webservice" turned up http://www.webservicex.net/stockquote.asmx, an ASMX web service that is easy to consume in .NET.
In your Visual Studio project you can add a reference to this service via the "Add Web Reference" command; the dialog you're given varies depending on whether your project targets .NET 2.0 or .NET 3.0/3.5.
I added a reference to the service named StockPriceProxy:
' Helper that wraps the generated StockPriceProxy.StockQuote proxy.
Public Function GetQuote(ByVal symbol As String) As String
    Using quoteService As New StockPriceProxy.StockQuote
        Return quoteService.GetQuote(symbol)
    End Using
End Function