Sorry, I am not sure this question belongs here. We are building a solution that requires a "mapping engine", so to speak: we need to map our own DB fields against multiple external APIs, and vice versa. For us this is a very time-consuming and repetitive process, and I have been unable to find any solution out there other than Make, Zapier, etc., which only work with pre-existing integrations. What we are dealing with is a bunch of legacy custom APIs that will never be part of any of these platforms.
I was hoping there is a solution out there that would let me paste the JSON and XML bodies of successful requests and responses from the legacy APIs we work with, then match our internal fields against those fields in a table, save the mappings as API templates, and finally run the whole thing as automation middleware that executes those templates.
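For illustration, the kind of template I have in mind might look something like this (a purely hypothetical shape, not an existing product; all names are made up):

```typescript
// Hypothetical API template: our DB columns on the left, fields discovered by
// pasting a sample request body from the legacy API on the right.
const orderTemplate = {
  endpoint: "https://legacy.partner.example/orders",
  method: "POST",
  fieldMap: {
    "orders.id":          "OrderRef",        // our column -> their JSON/XML field
    "orders.total_cents": "Amount",
    "customers.email":    "BuyerEmailAddr",
  },
};
```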
Does this exist?
We have a medium-sized app (100+ SQL tables), and we often need to integrate it with partner APIs (with our system as the client/consumer). Designing such an integration is a non-trivial process:
We often need to map columns in our database to fields in requests to the partner API.
Some fields in requests to the partner API must be constant, or conditional.
On rare occasions, the output from one API response becomes an input to another API request.
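For what it's worth, a declarative spec covering those three cases could look like the following sketch (entirely hypothetical, not an existing tool):

```typescript
// Hypothetical integration spec: mapped, constant/conditional, and chained fields.
const createShipmentSpec = {
  request: {
    // 1. direct column-to-field mapping
    recipientName: { from: "customers.full_name" },
    // 2. constant and conditional fields
    channel:  { const: "WEB" },
    priority: { when: "orders.total_cents > 50000", then: "EXPRESS", else: "STANDARD" },
    // 3. output of a previous API response feeding this request
    carrierId: { fromResponse: { call: "lookupCarrier", field: "carrier.id" } },
  },
};
```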
There are many resources on the web for documenting REST APIs, and there are specific formats for that (Swagger, RAML, etc.). These formats allow efficient generation of client code and human-readable documentation. However, they are not very helpful for describing how your app integrates with an API. We create lengthy Microsoft Word documents which contain more or less a copy of the partner's API methods, with comments on how each individual field should be used. Such a solution seems sub-optimal.
Googling for better options did not yield many results; SwaggerHub seems to have a "comments" feature which appears to target the problem above, and that's pretty much all.
Question: are there tools, formats, workflows, ideas, etc. which facilitate designing and documenting API integrations like the ones described above?
I don't know which language you use, but I work with apiDoc:
https://apidocjs.com/
It is great for generating REST API documentation from code comments in Node.js, and it can be used with many other languages.
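For example, an endpoint is documented with a comment block like this (following apiDoc's usual annotation pattern):

```typescript
/**
 * @api {get} /user/:id Request user information
 * @apiName GetUser
 * @apiGroup User
 *
 * @apiParam {Number} id The user's unique ID.
 *
 * @apiSuccess {String} firstname First name of the user.
 * @apiSuccess {String} lastname  Last name of the user.
 */
function getUser() { /* handler implementation */ }
```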
Like most people, we're pretty impressed with BigQuery. We're willing to put up with it being based on proprietary "Dremel" in exchange for not having to configure a ton of servers in our LAN, on EC2, or anywhere else.
The REST API is excellent, and we're incorporating that into our apps, but we still find ourselves using the BQ Browser interface as well. We'd like to incorporate something like a 'generic SQL window' into our app, without divulging that the backend is BQ or that data is stored in Google at all, for that matter. Does Google provide a way to use their BQ browser tool in a white-label manner?
Note also that even extending access to the existing browser tool is problematic. It relies on user accounts existing in one's own domain, something that can't be done, in our case, with a customer's email address. The REST interface solves this with service-level accounts, but that doesn't get you to the SQL window/browser tool.
If the folks at Google are listening (and I know that you are), consider the benefits of white-labeling the browser tool: I think you'd find a lot of software companies integrating it into their suites of products and, then, running circles around any Hadoop/CDH/EMR/Impala/Hive combination.
So, to summarize: how does a software developer import or emulate the BQ browser tool (with all its autocompletes, query history, etc.) in their own web-based app?
The initial version of the BigQuery web interface was considered just an 'example' UI that anyone could create themselves. It uses only the public BigQuery API to talk to BigQuery.
There are a couple of Google-internal things we've added since then, such as the current design of 'saved queries', and an auth shortcut so that users don't have to explicitly grant permission to the UI to access BigQuery data. But it is still mostly plain-ol-javascript talking to BigQuery via the REST API the same way anybody else does.
The javascript is obfuscated, but my understanding is that this is just for compression purposes so that it downloads more quickly.
The SQL highlighting is done by CodeMirror with special configuration for the BigQuery SQL variant.
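If you want to replicate that part yourself, a minimal sketch using CodeMirror 5's standard SQL mode looks roughly like this (the BigQuery-specific keyword configuration would be up to you):

```typescript
import CodeMirror from "codemirror";
import "codemirror/mode/sql/sql"; // registers the SQL mode

// Turn a plain <textarea id="query"> into a SQL-highlighting editor.
const editor = CodeMirror.fromTextArea(
  document.getElementById("query") as HTMLTextAreaElement,
  {
    mode: "text/x-sql", // generic SQL; extend keywords for the BigQuery dialect
    lineNumbers: true,
  }
);
```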
I'll talk to the other members of the BigQuery team about open-sourcing the javascript code in the Web UI. It may be difficult to do at this point, but it doesn't hurt to have the conversation. I'll bring this up with the team and update this thread. The most likely answer is "we'll think about it", but hopefully we can go beyond thinking about it and actually start working on it too :-)
Let me know if that sounds like it would meet your needs. It might not solve the auth problems you mention, since your users likely won't have BigQuery accounts, but you may be able to solve that by proxying oauth2 access tokens.
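A rough sketch of that proxying idea, assuming a Node 18+/Express backend and a service account with BigQuery access (the route name and PROJECT_ID are placeholders):

```typescript
import express from "express";
import { GoogleAuth } from "google-auth-library";

const app = express();
app.use(express.json());

// The service account's credentials never leave our server; the browser only
// ever talks to this endpoint.
const auth = new GoogleAuth({
  scopes: ["https://www.googleapis.com/auth/bigquery"],
});

app.post("/api/query", async (req, res) => {
  const client = await auth.getClient();
  const { token } = await client.getAccessToken();
  // Forward the query to the BigQuery REST API (jobs.query) with our token.
  const r = await fetch(
    "https://bigquery.googleapis.com/bigquery/v2/projects/PROJECT_ID/queries",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ query: req.body.sql, useLegacySql: false }),
    }
  );
  res.status(r.status).json(await r.json());
});

app.listen(3000);
```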
I recently finished one of my first AgilityJS projects, which is a web-based file browser that lets you create and manage folders and files, and navigate around the folder tree. I followed the various AgilityJS recommendations regarding the design and ended up with all my HTML and Javascript in a single Javascript file.
Now, I would like to provide a "read-only" version of this app which does not have the ability to add/edit/remove files and folders. I'd like to have 2 user types on the website, one type which can only read the files and folders, and another user type who can administer.
My question is: how do I propagate these permission differences to my AgilityJS app? I know how to secure my endpoints and operations on the server side, but I'm wondering about the best way to do this on the client side. Should I create a separate version of the app with a limited set of functionality? Should I simply hide certain buttons/features? Are there theories, frameworks, etc. which deal with this issue? Any pointer in the right direction would be helpful.
LOL - probably one could write books about that topic. Some very basic ideas:
I would start with the philosophical debate around MVC. Some people argue, with MVC in mind, that no piece of code and no piece of the data model should ever be implemented twice: business logic and the model belong on the server. The opposite view focuses on serving users at any cost, even if that means maintaining code or the model twice for the sake of avoiding extra round trips. The way in between defines a master source for business code and the model, and makes sure all other places follow that leading master (the master is changed first). Take your choice. Your answer to that question sets the boundaries for what the user interface can and has to look like.
You need to think hard about a permissions concept. Looking at Microsoft, I would assume they invested a couple of dozen person-years across all their applications to work out their permission concepts. The ideal permission concept depends very much on your application, so it is close to impossible to work this out without knowing at least a little about it. In any case, the permission concept has to define policies for roles, groups, access rights, access levels, context-driven permissions (e.g., based on IP address), and permission black- or whitelisting (the permissions each user has at creation). An example from Microsoft: http://office.microsoft.com/en-us/windows-sharepoint-services-help/permission-levels-and-permissions-HA010100149.aspx
Data on the client is not secure! Whatever you do on the client (data hiding, encryption, compression...), there are ways to read that data or to revert those measures. Somebody can also send data to your server for an update form the client never even displayed. So as soon as you start to implement permissions, make sure that users are permitted to read all data you send to clients, and include permission checks every time data is added to or updated in the database.
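To make the last point concrete, here is a minimal server-side sketch (Express-style, all names hypothetical) that re-checks the role on every write, no matter what the client UI hides:

```typescript
import express from "express";

type Role = "reader" | "admin";

// In a real app, req.user would be populated by your authentication middleware.
interface AuthedRequest extends express.Request {
  user?: { id: string; role: Role };
}

// Server-side gate: runs on every request, regardless of what the UI showed.
function requireRole(role: Role) {
  return (req: AuthedRequest, res: express.Response, next: express.NextFunction) => {
    if (req.user?.role !== role) {
      res.status(403).json({ error: "forbidden" });
      return;
    }
    next();
  };
}

const app = express();
app.get("/files", (_req, res) => res.json([]));                               // readers and admins
app.post("/files", requireRole("admin"), (_req, res) => res.sendStatus(201)); // admins only
```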
I have a website that revolves around transactions between two users. Each user needs to agree to the same terms. If I offer an API so other websites can implement this into their own sites, I want to make sure those websites cannot mess with the process by injecting extra fields or anything else that is irrelevant to my application. Is this possible?
If I were to implement such a thing, I would give other websites tokens/URLs/widgets that link back to my website. So, for example, website X wants to use my service to get users A and B to agree to the same terms. Its page would have an embedded form/frame generated from my website, and user B would also receive an email with a link to my website's page (or a page on website X with a form/frame generated from my server).
Consider how different sites use eBay to let users pay: you buy everything on the site, but when you pay, either you are taken to an eBay page and come back after payment, or the website has a small form/frame that is directly linked to eBay.
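A sketch of how such a token/URL could be generated on my server (Node crypto; the URL and field names are hypothetical):

```typescript
import { createHmac, randomUUID } from "crypto";

// The partner site never builds this URL itself; it asks our server for it,
// so the parties and terms of the agreement cannot be tampered with.
function agreementUrl(partnerId: string, userA: string, userB: string, secret: string): string {
  const payload = `${partnerId}:${userA}:${userB}:${randomUUID()}`;
  const sig = createHmac("sha256", secret).update(payload).digest("hex");
  return `https://my-service.example/agree?p=${encodeURIComponent(payload)}&sig=${sig}`;
}
```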
But this is my solution, one way of doing it. Hope this helps.
It depends on how your API is implemented. It takes considerably more work, thought, and engineering to build an API that can take literally any kind of data, or that accepts additional, named key/value pairs as fields.
If you have implemented your API in this manner, then it's quite possible that users of this API could use it to extend functionality or build something slightly different by passing in additional data.
However, if your API is built so that specific values must be passed and those fields are required, then it becomes much more difficult for the API to be used in a manner that differs from what you originally intended.
For example, Google has many different APIs for different purposes, and each API has a very specific set of required parameters that a developer must supply in order to make a successful HTTP request. While the goal of these APIs is to allow developers to extend functionality, they allow access to only very specific pieces of data.
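In code, "specific values must be passed" can be as simple as validating the request body against a closed list of fields and rejecting anything extra. A hypothetical sketch:

```typescript
// Only the declared fields are accepted; anything extra rejects the request.
const REQUIRED_FIELDS = ["buyerId", "sellerId", "terms"] as const;

function validate(body: Record<string, unknown>): string | null {
  for (const f of REQUIRED_FIELDS) {
    if (!(f in body)) return `missing required field: ${f}`;
  }
  for (const k of Object.keys(body)) {
    if (!(REQUIRED_FIELDS as readonly string[]).includes(k)) {
      return `unexpected field: ${k}`;
    }
  }
  return null; // valid
}
```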
Lastly, you can use authentication to prevent unauthorized access to your API. The specific implementation details depend largely on the platform you're working with as well as how the API will be used. For instance, if users must login to use services provided by your API, then a form of OAuth may suffice. However, if other servers will consume your API, then the authorization will have to take place in the HTTP headers.
For more information on API best practices, see 7 Rules of Thumb When You Build an API, and a slideshow from a Google engineer titled How to Design a Good API and Why it Matters.
What things should a developer designing and implementing an API for a community based website know before starting the heavy coding? There are a bunch of APIs out there like Twitter API, Facebook API, Flickr API, etc which are all good examples. But how would you build your own API?
What technologies would you use? I think it's a good idea to use a REST-like interface so that the API is accessible from different platforms/clients/browsers/command-line tools (like curl). Am I right? I know that all the principles of web development should be met, like caching, availability, scalability, security, protection against potential DoS attacks, validation, etc. And when it comes to APIs, some of the most important things are backward compatibility and documentation. Am I missing something?
On the other hand, thinking from the user's point of view (I mean the developer who is going to use your API), what would you look for in an API? Good documentation? Lots of code samples?
This question was inspired by Joel Coehoorn's question "What should a developer know before building a public web site?".
This question is a community wiki, so I hope you will help me put in one place all the things that should be addressed when building an API for a community based website.
If you really want to define a REST API, then do the following:
Forget all technology issues other than HTTP and media types.
Identify the major use cases where a client will interact with the API.
Write client code that performs those use cases against a hypothetical HTTP server. The only information the client should start with is the response from a GET request to the root API URL. The client should identify the media type of the response from the HTTP Content-Type header, and it should parse the response. That response should contain links to other resources that allow the client to perform all of the API's required operations.
When creating a REST api it is easier to think of it as a "user interface" for a machine rather than exposing an object model or process model. Imagine the machine navigating the api programmatically by retrieving a response, following a link, processing the response and following the next link. The client should never construct a URL based on its knowledge of how the server organizes resources.
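As a sketch, a client for such an API might look like the following, assuming a made-up application/vnd.example+json media type whose documents carry a links array:

```typescript
// The only URL this client ever hard-codes is the root of the API.
async function placeOrder(rootUrl: string, order: object): Promise<Response> {
  const rootResponse = await fetch(rootUrl, {
    headers: { Accept: "application/vnd.example+json" },
  });
  const doc = await rootResponse.json();

  // Follow the link by its relation name, never by constructing the URL.
  const orders = doc.links.find((l: { rel: string; href: string }) => l.rel === "orders");
  return fetch(orders.href, {
    method: "POST",
    headers: { "Content-Type": "application/vnd.example+json" },
    body: JSON.stringify(order),
  });
}
```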
How those links are formatted and identified is critical. The most important decision you will make in defining a REST API is your choice of media types. You either need to find standard ways of representing that link information (think Atom, microformats, Atom link relations, HTML5 link relations), or, if you have specialized needs and don't need really wide reach to many clients, you could create your own media types.
Document how those media types are structured and what links/link relations they may contain. Specific information about media types is critical to the client. Having a server return Content-Type: application/xml is useless to a client if it wants to do anything more than parse the response; the client cannot know what is contained in a response of type application/xml. Some people believe you can use XML Schema to define this, but there are several disadvantages to doing so, and it violates the REST "self-descriptive message" constraint.
Remember that what the URL looks like has absolutely no bearing on how the client should operate. The only exception to this is that a media type may specify the use of templated URIs and may define the parameters of those templates. The structure of the URL will become significant when it comes to choosing a server-side framework. The server controls the URL structure; the client should not care. However, do not let the server-side framework dictate how the client interacts with the API, and be very cautious about choosing a framework that requires you to change your API. HTTP should be the only constraint regarding the client/server interaction.