I've been assigned a project where we already have an up-and-running website, and one of our clients wants to be able to track statistics from it.
We want to make this available to all our clients as soon as we finish development. Note that each 'client' has their own 'subdomain', so to speak, e.g. www.website.com/client1, www.website.com/client2, etc., and we want to track usage separately for each of these clients.
We will need to create statistics based on the usage of our own platform, pull in data registered by Google Analytics, and also pull in data from a third party, which they will offer via an API of their own (they have a third-party solution that uses the data accessible via our API).
All this data needs to be shown on a webpage with graphs and tables.
I wanted to make sure we choose the right architecture from the start, in order to avoid scalability issues later on.
I've started reading about private and public APIs lately.
For now, we do not have another (internal) application that would use our own statistics; it would just be the website using them. But to be able to scale up later if needed, when another application wants to use the statistics, I think a private API would benefit us greatly.
To allow third parties to use the statistical data we choose to expose, I was thinking of creating a public API.
Is a private & public API combination the correct way to go about this?
One of the questions I am stuck with is what the architecture for these APIs should look like. Right now we already have a public API for vacancy data. This 'API' is basically just a PHP class (controller) inside our CodeIgniter application. It gets called via its URL and returns a JSON object with the results (e.g. www.website.com/api/vacancy/xxx).
To create a (proper) private & public API solution/architecture, should the API be decoupled from the website (CodeIgniter)? What are the common go-to solutions for this?
Or is it fine to keep it in our current platform the way it is now (and have people call the stats API via www.website.com/api/stats/xxx, for example)?
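For reference, the existing vacancy 'API' is along these lines (a minimal sketch in CodeIgniter 3 style; the class, model, and method names here are illustrative, not our actual code):

<?php
// application/controllers/api/Vacancy.php (illustrative path)
class Vacancy extends CI_Controller {

    // Called via www.website.com/api/vacancy/123
    public function show($id)
    {
        $this->load->model('vacancy_model');        // hypothetical model
        $vacancy = $this->vacancy_model->get($id);

        // Return the result as a JSON object, as described above
        $this->output
             ->set_content_type('application/json')
             ->set_output(json_encode($vacancy));
    }
}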
It's almost always right to go with a microservices-like architecture, so your initial thoughts sound reasonable. Doing this will give you the ability to scale and deploy your API independently, and will also help you avoid performance side effects on your site (and vice versa). Pay attention to how you access your main site's data from within the new API if you don't want to end up with a monolithic application.
Regarding the API, I would suggest implementing a protocol like OAuth2 in order to achieve the flexibility you (might) need. You can also use Swagger to document and test your API.
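To give a rough idea of what that buys you at the endpoint level, here is a minimal bearer-token check (a sketch only; a real setup would use an OAuth2 server library, and introspectToken() is a hypothetical call to it):

<?php
// Reject any request that doesn't carry a valid OAuth2 bearer token.
$auth = $_SERVER['HTTP_AUTHORIZATION'] ?? '';
if (!preg_match('/^Bearer\s+(\S+)$/', $auth, $m) || !introspectToken($m[1])) {
    http_response_code(401);
    header('WWW-Authenticate: Bearer');
    exit(json_encode(['error' => 'invalid_token']));
}
// Token accepted: hand the request on to the stats endpoint.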
All of this might help you a lot, but first you have to ask yourself whether you really need to go this deep, or whether you just need a simple solution.
I think multitenancy is the best choice. Generally speaking, multitenancy means every customer has their own database: the data is separate, while the codebase is shared and already exists. As I understand it, the project is already in progress, so you would not have to redesign or rewrite anything.
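To make that concrete, the tenant database could be selected from the URL segment, along these lines (a CodeIgniter-flavoured sketch; the stats_clientX naming convention and the credentials are assumptions):

<?php
// Pick the per-client database from the first URL segment,
// e.g. www.website.com/client1/... -> database 'stats_client1'
$client = $this->uri->segment(1);
$params = [
    'hostname' => 'localhost',
    'username' => 'stats_user',
    'password' => 'secret',
    'database' => 'stats_' . $client,   // one database per tenant
    'dbdriver' => 'mysqli',
];
$tenantDb = $this->load->database($params, TRUE);   // returns the connection
$rows = $tenantDb->get('page_views')->result();     // queries only this tenant's data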
I'm working on a game-companion app, and I was looking for a neat way to fetch data from multiple APIs when my app starts (and maybe also when the user refreshes the current page with swipe-to-refresh).
The thing is, I planned a dashboard-style home screen (mostly reduced data from every API), from which the user can navigate to different pages (all fed by different APIs) and get detailed information for the specific section. I'm not sure how I can provide the API data 'globally'.
I feel like my main.dart would be way too overloaded, but I can't think of any other way.
What's your thought on that?
A few approaches:
You can use a persistent DB and write a wrapper DAO over it, such that anybody with access to this DAO object can read from and write to the DB. Packages like sqflite help in this regard.
If you are adopting the BLoC approach to state management in your Flutter app, then you can create blocs that can be used to share data across your app. flutter_bloc is a good place to start with this.
Via a singleton: if you have sensitive data that needs to be available globally within the app, you can create a singleton class that reads from and writes to a secure vault. For this secure vault, flutter_secure_storage is convenient, or shared_preferences for non-sensitive data.
Like most people, we're pretty impressed with BigQuery. We're willing to put up with it being based on proprietary "Dremel" in exchange for not having to configure a ton of servers in our LAN, on EC2, or anywhere else.
The REST API is excellent, and we're incorporating that into our apps, but we still find ourselves using the BQ Browser interface as well. We'd like to incorporate something like a 'generic SQL window' into our app, without divulging that the backend is BQ or that data is stored in Google at all, for that matter. Does Google provide a way to use their BQ browser tool in a white-label manner?
Note also, that even extending access to the existing browser tool is problematic. It relies on user-accounts existing in one's own domain - something that can't be done, in our case, with a customer's email address. The REST interface solves this with service-level accounts, but that doesn't get you to the SQL window/browser tool.
If the folks at Google are listening (and I know that you are), consider the benefits of white-labeling the browser tool: I think you'd find a lot of software companies integrating it into their product suites and then running circles around any Hadoop/CDH/EMR/Impala/Hive combination.
So, to summarize: how does a software developer import or emulate the BQ browser tool (with all its autocompletes, query histories, etc.) in their own web-based app?
The initial version of the BigQuery web interface was considered just an 'example' UI that anyone could create themselves. It uses only the public BigQuery API to talk to BigQuery.
There are a couple of Google-internal things we've added since then, such as the current design of 'saved queries', and an auth shortcut so that users don't have to explicitly grant permission to the UI to access BigQuery data. But it is still mostly plain-ol-javascript talking to BigQuery via the REST API the same way anybody else does.
The JavaScript is obfuscated, but my understanding is that this is just for compression purposes so that it downloads more quickly.
The SQL highlighting is done by CodeMirror with special configuration for the BigQuery SQL variant.
I'll talk to the other members of the BigQuery team about open-sourcing the JavaScript code in the Web UI. It may be difficult to do at this point, but it doesn't hurt to have the conversation; I'll bring it up with the team and update this thread. The most likely answer will be "we'll think about it", but hopefully we can actually start working on it too :-)
Let me know if that sounds like it would meet your needs. It might not solve the auth problems you mention, since your users likely won't have BigQuery accounts, but you may be able to solve that by proxying OAuth2 access tokens.
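For what it's worth, a rough sketch of that proxying idea in PHP (the project name is a placeholder, and getServiceAccountToken() is an assumed helper whose token acquisition isn't shown; the point is that the browser never sees Google credentials):

<?php
// Hypothetical proxy endpoint: the client posts a query to your server,
// which forwards it to the BigQuery REST API with a server-held token.
$body  = json_decode(file_get_contents('php://input'), true) ?: [];
$query = $body['query'] ?? '';
$token = getServiceAccountToken();   // assumed helper, token cached server-side

$ch = curl_init('https://www.googleapis.com/bigquery/v2/projects/YOUR_PROJECT/queries');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_HTTPHEADER     => [
        'Authorization: Bearer ' . $token,
        'Content-Type: application/json',
    ],
    CURLOPT_POSTFIELDS     => json_encode(['query' => $query]),
]);

header('Content-Type: application/json');
echo curl_exec($ch);   // relay BigQuery's JSON response to the caller
curl_close($ch);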
So. I have embarked on the journey of learning Laravel in the last couple of weeks, and am thoroughly enjoying it.
It has come time for a site redesign and I thought it was about time to tighten up some of our functionality, so I am making the switch from CodeIgniter to Laravel.
I was wondering whether it is worth starting off with a RESTful API layer in Laravel (easy enough to create) and using it as a base even for the web application. In the future we are likely to build a mobile app that will need to use the API. So:
Is it worth having the web application connect to the API?
And what is the easiest way to make calls to the API without having to write a bazillion lines of cURL every time I want to make a request?
It is definitely worth it.
I am currently redesigning messy PHP code for an established hosting company, turning it into beautiful Laravel code. I already have a mobile app working with it: Laravel makes it easy to return JSON data with one line,
Response::json('OK', 200); // plain payload plus an HTTP status code
or
Response::eloquent(Auth::user()); // serialize an Eloquent model as JSON
or
$tasks = Task::all();
Response::eloquent($tasks); // works for a collection of models, too
You don't need to use cURL as far as I know. You can make all requests with simple AJAX; jQuery would simplify that.
Also, using an MVC JavaScript framework would help make the client-side application structure more elegant, and the advantage is that you can easily package it with PhoneGap when you're ready for your API to take some real testing.
Today I posted on my blog about a simple example that you can try to see if this approach is worth your time : http://maxoffsky.com/code-blog/login-to-laravel-web-application-from-phonegap-or-backbone/
Check it out and accept the answer if you think it's on the right track.
As always, your results may vary, but this is exactly what I'm going through at the moment. I'm migrating a large .Net site with this API architecture and I've decided to keep it for Laravel.
I personally decided for this because:
More scalable. I can set up api.domain.com and then add additional boxes/VMs/whatever as our traffic grows. In fact, you could load balance just the API via round robin or multiple DNS entries for that domain.
Future-proofing for new sites and apps. Sounds like you're in the same situation. I can see an app or two being added in the next year or so.
Low cost. You'll already be laying out your controllers, so really it can be just a matter of making them RESTful and making small tweaks to accommodate.
To be fair, some counter points:
Possibly additional load time, from processing through the API, though this should be minimal.
Additional security items to consider if you'd like to lock things down to just your app.
Either way, welcome to Laravel!
and what is the easiest way to make calls to the API without having to write a bazillion lines of cURL every time I want to make a request?
@Sneaksta: try the Postman Chrome extension for calling REST services. You can create forms in the extension and pass data from these forms to your REST services.
https://chrome.google.com/webstore/detail/postman-rest-client/fdmmgilgnpjigdojojpjoooidkmcomcm?utm_source=chrome-ntp-icon
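If you do end up calling the API server-side rather than via AJAX, a small wrapper keeps the cURL boilerplate in one place. A minimal sketch, with assumed names and URL:

<?php
// Hypothetical helper: all the cURL plumbing lives here, call sites stay short.
function apiRequest(string $method, string $url, array $data = []): array
{
    $ch = curl_init($url);
    $options = [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_CUSTOMREQUEST  => strtoupper($method),
        CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
    ];
    if ($data) {
        $options[CURLOPT_POSTFIELDS] = json_encode($data);
    }
    curl_setopt_array($ch, $options);
    $body = curl_exec($ch);
    curl_close($ch);
    return json_decode($body, true) ?: [];
}

// Usage: one line per request instead of a bazillion.
$tasks = apiRequest('GET', 'https://www.website.com/api/tasks');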
I have a website that revolves around transactions between two users, where each user needs to agree to the same terms. If I offer an API so other websites can implement this on their own sites, I want to make sure those websites cannot mess with the process by injecting extra fields or anything else irrelevant to my application. Is this possible?
If I were to implement such a thing, I would let other websites use tokens/URLs/widgets that link them to my website. So, for example, website X wants to use my service to get users A and B to agree to the same terms. Their page would have an embedded form/frame generated from my website, and user B would also receive an email with a link to my website's page (or to a page on website X with a form/frame generated from my server).
Consider how different sites use eBay to enable users to pay. You buy everything on the site, but when you pay, either you are taken to an eBay page and come back after payment, or the website has a small form/frame that is directly linked to eBay.
But this is my solution, one way of doing it. Hope this helps.
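To sketch the token part of that idea (purely illustrative; the signing scheme, field names, and shared secret are assumptions): the embedding site gets a URL whose fields are covered by an HMAC signature, so it cannot add or alter fields without invalidating the request.

<?php
// Generate a signed URL that website X embeds in a form/frame.
$secret = 'shared-secret-issued-to-website-x';   // given to X at registration
$params = [
    'transaction_id' => '12345',
    'user_a'         => 'a@example.com',
    'user_b'         => 'b@example.com',
    'terms_id'       => 'standard-v2',
    'expires'        => time() + 3600,
];
// Sign exactly these fields; verification recomputes the HMAC over the
// same fields, so any extra or modified field is rejected.
$params['sig'] = hash_hmac('sha256', http_build_query($params), $secret);

$embedUrl = 'https://www.mysite.com/agree?' . http_build_query($params);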
It depends on how your API is implemented. It takes considerably more work, thought, and engineering to build an API that can accept literally any kind of data, or one that can take additional named key/value pairs as fields.
If you have implemented your API in this manner, then it's quite possible that users of this API could use it to extend functionality or build something slightly different by passing in additional data.
However, if your API is built to where specific values must be passed and these fields are required, then it becomes much more difficult for your API to be used in a manner that differs from what you originally intended.
For example, Google has many different APIs for different purposes, and each API has a very specific set of required parameters that a developer must supply in order to make a successful HTTP request. While the goal of these APIs is to allow developers to extend functionality, they allow access only to very specific pieces of data.
Lastly, you can use authentication to prevent unauthorized access to your API. The specific implementation details depend largely on the platform you're working with as well as how the API will be used. For instance, if users must login to use services provided by your API, then a form of OAuth may suffice. However, if other servers will consume your API, then the authorization will have to take place in the HTTP headers.
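A bare-bones sketch of those two points together (all names here are assumed, not from any particular framework): required fields are enforced exactly, and the key arrives in an HTTP header.

<?php
// Enforce the exact contract: only the required fields, nothing extra,
// and a valid key in the Authorization header.
$required = ['user_a', 'user_b', 'terms_id'];
$input    = json_decode(file_get_contents('php://input'), true) ?: [];

if (!isValidKey($_SERVER['HTTP_AUTHORIZATION'] ?? '')) {   // hypothetical lookup
    http_response_code(401);
    exit;
}

$missing    = array_diff($required, array_keys($input));
$unexpected = array_diff(array_keys($input), $required);
if ($missing || $unexpected) {
    http_response_code(422);
    exit(json_encode(['error' => 'request must contain exactly: ' . implode(', ', $required)]));
}
// Input matches the contract exactly: proceed.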
For more information on API best practices, see 7 Rules of Thumb When You Build an API, and a slideshow from a Google Engineer titled How to Design a Good API and Why That Matters.
I'm going to try to phrase this as a generic question.
A company runs a website that has a lot of valuable information on it. This information is queried from an internal private database. So technically, the information in the database is the valuable part.
If this company wished to develop an API that developers could use to access their database of valuable & useful information, what approach should the company take?
It's important to give developers what they need, but it is also important to keep competing websites from using the API to scrape everything and siphon off all the traffic from the company's website.
Is there some way the API could be used to drive traffic back to the original company's website? Something that gives users a reason to keep going there.
This is a design consideration that my company is struggling with that I can imagine other web-based services have come across before.
Institute API keys; don't make the API public. Maybe make the signup process more involved than 'anyone with an e-mail address'.
Rate limit the API based on keys; if you're running more than X requests a minute, you're likely mining the database (see the sketch after this list).
Don't provide a 'fetch everything' API. Make users already know something in order to get information about it. Don't reveal everything you know.
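For point 2, a per-key rate limit can be as small as this (a sketch assuming the APCu cache and a limit of 60 requests/minute; the key extraction is illustrative):

<?php
// Count requests per API key in a one-minute window.
$apiKey = $_GET['key'] ?? '';
$bucket = 'rate:' . $apiKey . ':' . floor(time() / 60);

$count = apcu_fetch($bucket) ?: 0;
if ($count >= 60) {                      // over the limit: likely mining
    http_response_code(429);
    exit(json_encode(['error' => 'rate limit exceeded']));
}
apcu_store($bucket, $count + 1, 120);    // expire after the window passes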
I've seen a lot of companies giving out API keys and stating a TOS that all developers must adhere to. For example, any page that uses data from the API must include your logo and a link back to your website. If any developer is found breaking the rules, the API key can be cancelled and your data is safe again.
Who is meant to use the API?
A good general method of solving this problem is to limit access to the data to end users (rather than allowing applications or developers at it). Provide both applications and users with identification, and make sure that accessing a subset of the data requires a combination of a user key and an application key.
Following this pattern, each user will have access to a very limited subset of the data (presumably the data they require for their own specific use), and you can put measures in place to enforce this. Any attempt at data mining will become obvious.
This type of approach meshes well with capability-type security models on the server side.
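A sketch of the combined check (all names assumed): a request must carry both a registered application key and an authenticated end-user token, and every query is then scoped to that pair.

<?php
function authorize(string $appKey, string $userToken): array
{
    $app  = lookupApplication($appKey);   // hypothetical registry lookup
    $user = lookupUser($userToken);       // hypothetical session/token check
    if (!$app || !$user) {
        http_response_code(401);
        exit;
    }
    // Scoping results to the (app, user) pair keeps any single
    // credential's view of the data small, so bulk mining stands out.
    return ['app_id' => $app['id'], 'user_id' => $user['id']];
}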