Is there a public website that converts Swagger JSON to PDF or HTML?

Has anyone made a site where you just enter your Swagger URL (.../swagger/docs/v1)
and the website converts it to HTML, PDF, DOC or whatever in a nice readable format? I'd think such a site would get a lot of traffic (hint).
I know there are tools on GitHub you can download that will do the conversion, but I'd think someone has made a public site, which would save me some time.

You can use this website: Swagger Editor.
Paste in your Swagger file and, from the menu, select 'Generate Client' -> 'HTML' (or 'Dynamic HTML').
You can also use Swagger Code Generator if you want to add this step to an automated build flow.
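For example (a sketch; the spec URL and output directory are placeholders, and this assumes you have the swagger-codegen CLI jar):
java -jar swagger-codegen-cli.jar generate \
    -i http://yourserver/swagger/docs/v1 \
    -l html \
    -o ./api-docs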

Try this one: swagger2html.
The document generated by swagger-codegen is indeed not very readable. This project produces a neat appearance (Bootstrap CSS) and shows the fields of the request/response models in a straightforward manner.

Related

Any PDF plugin for CakePHP?

I need to add to a page/view with a form an option to generate a PDF file; looking for information I have seen that CakePdf exists, but there is hardly any documentation on how to implement/install it, and it seems to be outdated.
Are there other options compatible with CakePHP?
The best option is https://wkhtmltopdf.org/ via https://github.com/mikehaertl/phpwkhtmltopdf ... we use this to generate PDF documents in our CakePHP solution.
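A minimal sketch with that library (the URL and output path are placeholders):
use mikehaertl\wkhtmlto\Pdf;

// Renders the given page (or a local HTML file) to PDF via the wkhtmltopdf binary
$pdf = new Pdf('https://example.com/invoice/view/1');
if (!$pdf->saveAs('/tmp/invoice.pdf')) {
    echo $pdf->getError();
}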

TYPO3 - Insert Medipim API

I want to display products on a web page via the Medipim API. I have never done this before, and I have never worked with TYPO3.
Hence, two questions:
In which (config) file do I place the authentication (I have an ID and a secret key), and what exactly does that code look like?
When I want to call up the products, how do I use this in the TYPO3 page environment? Do I have to choose an HTML page, or can I just enter it in the TYPO3 editor on a page?
You probably need an extension which converts the data you get from Medipim to HTML. I expect you get the information as JSON, XML, or CSV.
As you won't want to publish your access credentials, you probably will not use a JavaScript call from the browser to access the API; instead you need some PHP.
Using PHP in TYPO3 is done in extensions, so you should learn about building extensions in TYPO3. As a helper you might use the TYPO3 extension "Extension Builder" (EB). As you have no local records, you only need the extension skeleton with just one plugin from the EB.
Depending on your usage (will an editor select products from Medipim (option A), or should the visitor be able to select products (option B)?), you need a plugin with either an option for backend editors to insert the desired product identification, or just an input mask.
You can configure your plugin with TypoScript so an integrator can enter the authentication information just once (see the sketch after these steps).
For option A you need to enhance your plugin with a field for the product IDs (keyword: FlexForm).
For option B you need a form.
Then you need to display the product information you get from the API: provide the returned data in variables and use Fluid templates to get a nice display.
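A minimal TypoScript sketch for the configuration step (the extension key tx_medipimproducts is hypothetical; use whatever key your Extension Builder project generated):
plugin.tx_medipimproducts {
    settings {
        medipimApiId = YOUR_ID
        medipimApiSecret = YOUR_SECRET
    }
}
Your plugin's PHP can then read these values from its Extbase settings instead of hard-coding the credentials.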
Without any knowledge of TYPO3 this will be hard work and a lot to learn. The other possibility: hire an experienced TYPO3 developer and let them build this extension for you.

Scrape a part of website and notify on change

The website of my university unfortunately does not provide feeds, but they keep publishing information there that is important for me (deadlines, dates of exams, etc.) as links to PDFs in a certain section of the site.
How can I regularly scrape that section of the site and be notified (Growl, mail, or something alike) when it changes?
Normally I would use wget to mirror it, but how do I extract only parts of the website?
Is there a CLI tool that can extract the XHTML via XPath or similar?
Try this:
wget --spider --server-response http://example.com
This will print the response headers, which might contain a Content-Length header; if that changes, you can notify yourself.
Edit: when it changes, you can download the whole HTML file and grep for a PDF link or whatever else you want to look for (maybe for "<div id='news'>(.*?)</div>").
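A rough shell sketch of that approach (the URL, grep pattern, and mail address are placeholders; run it periodically from cron):
#!/bin/sh
# Fetch the page and pull out the PDF links
wget -qO- http://example.com/news | grep -o 'href="[^"]*\.pdf"' > new_links.txt
# Notify on any difference from the previous run (the first run also triggers a mail)
if ! diff -q new_links.txt old_links.txt > /dev/null 2>&1; then
    mail -s "University page changed" you@example.com < new_links.txt
fi
mv new_links.txt old_links.txt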
Mmm... you should take a look at QueryPath. QueryPath makes it easy to parse HTML. What if the HTML structure changes? What if you want specific elements of the page? QueryPath does the hard work for you. Do you like jQuery? QueryPath is like the jQuery of PHP.
See: http://www.ibm.com/developerworks/opensource/library/os-php-querypath/index.html?S_TACT=105AGX01&S_CMP=HP
See: http://querypath.org/
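For example, a sketch with QueryPath's qp() helper (the URL and selector are placeholders):
require 'QueryPath/QueryPath.php'; // or vendor/autoload.php with Composer

// List the links inside the news section of the page
foreach (qp('http://example.com/news', '#news a') as $link) {
    echo $link->attr('href'), "\n";
}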
You might be interested in looking at pjscrape (disclaimer: this is my project). It's a web-scraping tool built on PhantomJS, giving you full jQuery access to the page in a headless WebKit browser context. It makes it very easy to pull semi-structured data from web pages via the command line, particularly if the page you're scraping has a consistent structure for new elements.
For example, you can pull all the course titles from this course catalog with the following code:
pjs.addScraper(
    // the page you're scraping
    'http://www.ischool.berkeley.edu/courses/catalog',
    // selector for elements you want to pull text from
    '.views-row .views-field-title'
);
// suppress STDOUT logging
pjs.config('log', 'none');
Running this from the command line gives you JSON to STDOUT by default:
~> phantomjs /path/to/pjscrape.js my_script.js
["W10. Introduction to Information","24. Freshman Seminar", ...]
So it would be pretty simple to run this script on a regular basis, capture the output in a file, and then alert you when the new output doesn't match the previous scrape. You can also write your own scraper functions, so there's a lot of flexibility for more complex scraping if a simple selector won't do the trick.
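For instance, a cron entry plus diff could cover the notify-on-change part (paths and mail address are placeholders):
# Run hourly; mail yourself whenever the scrape output changes
0 * * * * phantomjs /path/to/pjscrape.js my_script.js > /tmp/new.json; diff -q /tmp/new.json /tmp/old.json || mail -s "Scrape changed" you@example.com < /tmp/new.json; mv /tmp/new.json /tmp/old.json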

Insert relative link in reStructuredText

I am documenting a library that has a Python component and a JavaScript component. The overall user documentation, and the Python API documentation are in reStructuredText, processed with Sphinx. The JavaScript API is in jsdoc and is processed with jsdoc-toolkit. The principal output format will be HTML. I am new to reST, Sphinx and jsdoc.
I have set up a build system so all the generated html pages are dumped into a single directory tree. I now need to insert into the main page (generated from reST) a link to the generated Javascript documentation. This needs to be a relative link, since the docs may be located in different places on different installations. reST will automatically parse a full URL, but I can't figure out how to make it insert a relative link. Constructs like :ref: and :doc: don't seem to help, because they expect the target to be reST.
Any ideas?
Figured it out. The following inserts a relative reference to the document js/index.html:
`Javascript API <js/index.html>`_

Retrieving dynamic text from a website in VB.NET (VS2008)

I want to be able to retrieve dynamic data from a web page (share prices). I started out by retrieving the HTML code before I realised that, as it is live data, the HTML code would be of little use. Although I am looking to capture specific data, all I wish to do is process a web page that I specify and get back the text of that page rather than the HTML code. Basically, a copy and paste of the entire page would be great.
Any ideas would be really appreciated!
'Screen scraping' by parsing HTML is so early 2000s... what I would do is read up on Amazon's Mechanical Turk. You can develop a queued architecture where you submit URLs to the Mechanical Turk service. The service would automatically distribute these bits of work to users, who would then do the dirty task of copying and pasting out the valuable stock quote information you require. Users around the world would anxiously await delivery of the next URL to their Mechanical Turk inbox... pining for the opportunity to copy/paste out another share price for your application. Sure, it might take a few minutes to update your prices, but hey, they would be HAND parsed by REAL people around the globe! Just think of the possibilities!
Well, the HTML contains the text of the website, so you "just" need to parse the HTML.
EDIT: If the data is not in the HTML but loaded dynamically, the situation is different. As I see it, you have two options:
Find out how the data is loaded (i.e. read the JavaScript on the page). If it is updated via some web service, you could query the same web service in your program.
Use a web browser control to get the data and then read the dynamic HTML tree of the page. Maybe the WPF WebBrowser control can help you with this, but I'm not sure, since I've never done this myself.
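A rough sketch of that second option using the WinForms WebBrowser control (the URL is a placeholder; the control must live in a form, since it needs a message loop):
Private WithEvents browser As New WebBrowser()

Private Sub LoadQuotes()
    browser.ScriptErrorsSuppressed = True
    browser.Navigate("http://example.com/quotes") ' placeholder URL
End Sub

Private Sub browser_DocumentCompleted(ByVal sender As Object, _
        ByVal e As WebBrowserDocumentCompletedEventArgs) _
        Handles browser.DocumentCompleted
    ' The rendered DOM is available here (page scripts may still update it later)
    Dim pageText As String = browser.Document.Body.InnerText
End Sub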
Is it possible to find this same data in a ready-to-consume format rather than scraping HTML for it? It seems like there are probably public web services for stock quotes.
For example: a quick search for "stock price webservice" turned up http://www.webservicex.net/stockquote.asmx, an ASMX web service that is easy to consume in .NET.
In your Visual Studio project you can add a reference to this service via the "Add Web Reference" command; the dialog you're given varies depending on whether your project targets .NET 2.0 or .NET 3.0/3.5.
I added a reference to the service named StockPriceProxy:
Public Function GetQuote(ByVal symbol As String) As String
    Using quoteService As New StockPriceProxy.StockQuote
        Return quoteService.GetQuote(symbol)
    End Using
End Function