Cross Cloud Storage Adapter? [closed] - api

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking us to recommend or find a tool, library or favorite off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam. Instead, describe the problem and what has been done so far to solve it.
Closed 8 years ago.
Do any APIs/Libraries/tools exist that act as adapters/provider interfaces for accessing different cloud storage services through a common interface? Something similar to ODBC or OLE-DB, except for cloud storage instead of databases.
For example, if I wrote a front end for taking notes, used such an API, and let the user configure which cloud storage provider they have an account with, the library would handle translating my cloud.Save() call into the commands specific to whichever provider was being used. This would allow my front-end app to be cloud-storage-provider agnostic.
So maybe I write some Chrome extension or portable thumb-drive app for storing notes, or encrypting and storing passwords, or some such, and you tell it which cloud storage provider you have an account with, and it uses that for syncing. That way your use of the tool doesn't tie you to a specific cloud provider: as long as you back up your data, you could migrate to another provider and just reconfigure the app should you become unhappy with the current one or should it go bankrupt.
WebDAV for example is one potential candidate since it seems some storage services offer it, but that is not quite what I have in mind, since it depends on the storage providers to offer that as an option. I also don't know enough about WebDAV to know if it really would serve in the capacity I'm imagining. But feel free to post that as an option with pros/cons for comment/discussion.
I imagine something more like a middle layer external to each cloud provider. Since each provider offers a different web service for interacting with files, the middle layer would have an adapter for each backend, but on the front end it would expose a common, provider-agnostic API.
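To make that concrete, here is a rough sketch of the kind of interface I'm imagining (purely hypothetical names, Python just for illustration); each backend adapter would translate the calls into that provider's own web service requests:

    from abc import ABC, abstractmethod

    class CloudStorage(ABC):
        """Provider-agnostic interface the front-end app codes against."""

        @abstractmethod
        def save(self, path: str, data: bytes) -> None: ...

        @abstractmethod
        def load(self, path: str) -> bytes: ...

    class DropboxAdapter(CloudStorage):
        # One adapter per backend; each translates save/load into that
        # provider's own web service calls.
        def save(self, path: str, data: bytes) -> None:
            ...  # call the provider's upload endpoint here

        def load(self, path: str) -> bytes:
            ...  # call the provider's download endpoint here

    def storage_from_config(config: dict) -> CloudStorage:
        # The user's configuration picks the adapter; the front end only
        # ever sees the CloudStorage interface.
        adapters = {"dropbox": DropboxAdapter}  # plus GoogleDriveAdapter, OneDriveAdapter, ...
        return adapters[config["provider"]]()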
Does anything of this type exist?
Even just an open source GUI that lets you store files with any provider would be a start, since its source code would contain the beginnings of such a middle layer. I would think someone has already made a tool that helps you unify all the free GB you can get from various services, sort of a JBOD layer for the cloud (although that is not the goal of this post; the point is that a tool accessing many different services implies it has the beginnings of a middle layer for standardizing access to them).
My main interest, though, is in abstractions for personal cloud storage services that would be appropriate for applications used by individuals, putting control of storage in the individual's hands so they are free to move between personal cloud storage services. What I've found so far seems more oriented toward CDNs, websites, or server-side services.
Please make separate posts per suggestion so that votes and comments/discussion can take place specific to that suggestion.

Kloudless provides a common API to several different cloud storage APIs (Dropbox, Box, GDrive, OneDrive, etc.). Kloudless also provides SDKs in popular languages and UI widgets to handle authentication and other user interactions.
You can find more information and sign up here: https://kloudless.com/
Disclosure: I work at Kloudless.

Apache jclouds presents cloud-agnostic abstractions, with stable implementations of ComputeService and BlobStore.
Visit https://jclouds.apache.org/
Apache jclouds® is an open source multi-cloud toolkit for the Java platform that gives you the freedom to create applications that are portable across clouds while giving you full control to use cloud-specific features.

Apache Libcloud: "a unified interface to the cloud"
http://libcloud.apache.org/
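For what it's worth, a minimal sketch of Libcloud's storage driver API as I understand it; the credentials and container name below are placeholders, and the backend is chosen by a single Provider constant so the rest of the code never mentions a specific service:

    from libcloud.storage.types import Provider
    from libcloud.storage.providers import get_driver

    # Pick the backend at runtime from user configuration.
    driver_cls = get_driver(Provider.GOOGLE_STORAGE)   # or Provider.S3, Provider.AZURE_BLOBS, ...
    driver = driver_cls("access-key-id", "secret-key") # placeholder credentials

    container = driver.get_container("notes")  # assumes the container already exists
    driver.upload_object_via_stream(iter([b"my note text"]), container, "note-1.txt")

    for obj in driver.list_container_objects(container):
        print(obj.name, obj.size)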

A couple of months ago I did a survey of personal cloud storage aggregator services and applications, and one seems relevant to your question.
Oxtio is a service that connects multiple cloud storage services and includes a WebDAV interface for accessing its own service.

Cloud storage providers each have different specifics, which makes it hard to use exactly one interface for all (or even some) of them. The CloudBlackbox package of our SecureBlackbox product offers a unified interface to the major storage providers (S3, Azure, Google Drive, SkyDrive/OneDrive, Dropbox) with a focus on data security, but due to those provider specifics we have individual classes (descendants of one superclass) to serve each provider. SecureBlackbox is available for use from .NET, Java, and C++ on Windows, and Delphi.

Check out Boto, a highly regarded Python library which provides an abstraction layer atop Amazon's S3 and Google Cloud Storage.
https://github.com/boto/boto
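A minimal sketch using boto's bucket/key API (placeholder credentials and bucket name); the same calls work against S3 or Google Cloud Storage, only the connection call differs:

    import boto

    # connect_s3() for Amazon S3, connect_gs() for Google Cloud Storage --
    # the bucket/key calls below are the same either way.
    conn = boto.connect_s3("access-key-id", "secret-key")  # placeholder credentials

    bucket = conn.get_bucket("my-notes-bucket")  # assumes the bucket already exists
    key = bucket.new_key("notes/note-1.txt")
    key.set_contents_from_string("my note text")

    print(key.get_contents_as_string())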

- StorageMadeEasy (SME)
- Otixo (but they no longer offer a free tier, as of Feb 2013)
- Joukuu
- Gladinet
- Egistec CloudHub
...
All of the above let you connect several cloud storage services, but they do not actually combine them.
If you want to combine several personal cloud storage services, you need to build it yourself, which is what I have been doing for the past few months.
So far I have combined several clouds (Dropbox, Box, Google Drive, SkyDrive) using their Android APIs/SDKs, and I handle the data splitting/merging/compression/encryption inside my Android application (not a good choice, just for the sake of a prototype).
In the future I may add more providers that have an API, such as Amazon S3 and SugarSync, but right now there is a lack of manpower.
If you just want to connect multiple clouds on Android (not combine them), you can try ES File Explorer, ASTRO File Manager, or several other applications.

I think WebDAV is the ultimate protocol:
webdav->dropdav->dropbox
webdav->box.net
webdav->DAV-pocket->google drive
webdav->Otixo(free for 14 days)->SugarSync
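In case it helps illustrate why WebDAV works as a common layer: it is just HTTP verbs, so any generic HTTP client can save, load, and list files against whichever DAV endpoint the chains above expose. A minimal sketch with Python's requests library (the endpoint URL and credentials are hypothetical):

    import requests

    BASE = "https://dav.example.com/webdav"  # hypothetical WebDAV endpoint
    AUTH = ("username", "app-password")      # placeholder credentials

    # Save a file: WebDAV upload is a plain HTTP PUT.
    requests.put(f"{BASE}/notes/note-1.txt", data=b"my note text", auth=AUTH).raise_for_status()

    # Load it back: a plain GET.
    text = requests.get(f"{BASE}/notes/note-1.txt", auth=AUTH).content

    # List the folder: PROPFIND returns XML describing each resource.
    listing = requests.request("PROPFIND", f"{BASE}/notes/", auth=AUTH, headers={"Depth": "1"})
    print(listing.status_code, len(listing.text))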

Related

SPA with Backend API and new B2B API - how to deploy

I have currently delivered an SPA (Vue.js) web application with a Java API backend. Everything is currently sitting in AWS, with the frontend in CloudFront and the backend in ECS connecting to an RDS instance.
As part of the next phase of delivery, we are creating a B2B API. My question is one of architectural design and deployment strategy: is it commonplace to just extend the existing API with B2B functionality, or should I keep them separate with an API gateway in front? We envisage that B2B use will eventually outgrow the SPA use case, so the initial deployment configuration needs to have the most flexibility to grow and expand.
Is there some sort of best practice here? I imagine that a lot of code would be similar between the two backends as well.
Thanks,
Terry
First off: deciding on service boundaries is one of the most difficult problems in service-oriented architecture design, and the answer strongly depends on your exact domain requirements.
Usually I would split service implementations by domain/function as well as by organizational concerns (e.g. separate teams developing them), and not by their target audience. This avoids awkward situations where team responsibility is unclear, etc. If it grows into a very large project, there may also be a need for multiple layers of services and shared libraries, and at that point you would likely run into necessary refactorings/restructurings.
So if there is a very large overlap in function between your B2B and regular APIs, you may not want to split the implementation.
However, you also have to consider how service access is provided, and an API Gateway could help with providing different endpoints for the different audiences, different charging models, different auth options, etc. Depending on your exact requirements, an API Gateway may not be enough, and you may also need another thin service layer that uses common domain services.

How does the Bezos API mandate affect Database Design?

The Bezos API Mandate speaks volumes about how externalized APIs must be designed.
However, it is unclear from the points listed in the mandate how databases for microservices are maintained.
Do teams (services) use a shared schema and manage data handling/processing with their own separate microservice (a DAO service)?
Do teams (services) have their own isolated schemas and database engines?
Thank you!
Please go through the twelve-factor principles as they apply to microservices.
The answer to your question, in simple words, is that every microservice has its own isolated database (perhaps a dedicated table, or in NoSQL a separate bucket for that microservice). Most importantly, only that microservice can interact with its database: all other services must go through that service (e.g. via REST/HTTP or a message bus).
This link gives a detailed explanation:
https://12factor.net/backing-services
Also see:
https://www.nginx.com/blog/microservices-reference-architecture-nginx-twelve-factor-app/
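As a minimal sketch of that ownership rule (the service names, URL, and order_db data-access helper are all hypothetical): the order service reads only its own database and calls the customer service's REST API for everything else.

    import requests

    CUSTOMER_SERVICE = "http://customer-service.internal/api"  # hypothetical internal URL

    def get_order_with_customer(order_id, order_db):
        # Read from the order service's OWN database (order_db is this
        # service's hypothetical data-access layer)...
        order = order_db.fetch_order(order_id)

        # ...but never touch the customer service's database directly:
        # go through its HTTP API instead.
        resp = requests.get(f"{CUSTOMER_SERVICE}/customers/{order['customer_id']}")
        resp.raise_for_status()
        order["customer"] = resp.json()
        return order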

Can BigQuery's browser interface be white-labeled?

Like most people, we're pretty impressed with BigQuery. We're willing to put up with it being based on proprietary "Dremel" in exchange for not having to configure a ton of servers in our LAN, on EC2, or anywhere else.
The REST API is excellent, and we're incorporating that into our apps, but we still find ourselves using the BQ Browser interface as well. We'd like to incorporate something like a 'generic SQL window' into our app, without divulging that the backend is BQ or that data is stored in Google at all, for that matter. Does Google provide a way to use their BQ browser tool in a white-label manner?
Note also, that even extending access to the existing browser tool is problematic. It relies on user-accounts existing in one's own domain - something that can't be done, in our case, with a customer's email address. The REST interface solves this with service-level accounts, but that doesn't get you to the SQL window/browser tool.
If the folks at Google are listening (and I know that you are), consider the benefits of white-labeling the browser tool: I think you'd find a lot of software companies integrating it into their suites of products and, then, running circles around any Hadoop/CDH/EMR/Impala/Hive combination.
So, to summarize: how does a software developer import or emulate the BQ browser tool (with all its autocompletes, query histories, etc.) in their own web-based app?
The initial version of the BigQuery web interface was considered just an 'example' UI that anyone could create themselves. It uses only the public BigQuery API to talk to BigQuery.
There are a couple of Google-internal things we've added since then, such as the current design of 'saved queries', and an auth shortcut so that users don't have to explicitly grant permission to the UI to access BigQuery data. But it is still mostly plain-ol-javascript talking to BigQuery via the REST API the same way anybody else does.
The JavaScript is obfuscated; however, my understanding is that this is just for compression purposes so that it downloads more quickly.
The SQL highlighting is done by CodeMirror with special configuration for the BigQuery SQL variant.
I'll talk to the other members of the BigQuery team about open-sourcing the javascript code in the Web UI. It may be difficult to do at this point, but it doesn't hurt to have a conversation about it. I'll bring this up with the team and update this thread. The most likely answer will be "We'll think about it", but hopefully we can also think about it and start working on it too :-)
Let me know if that sounds like it would meet your needs. It might not solve the auth problems you mention, since your users likely won't have BigQuery accounts, but you may be able to solve that by proxying oauth2 access tokens.

how to build google gadget with persistent storage

I'm trying to make a Google gadget that stores some data (say, statistics of users' actions) in a persistent way (i.e. the statistics accumulate over time and across multiple users). I also want the data to be hosted on Google's free hosting, possibly together with the gadget itself.
Any ideas on how to do that?
I know the Google Gadgets API has tools for working with remote data, but then the question is where to host it. Google Wave seemed to be an option, but it is no longer supported.
You should get a server and host it there.
You then have the best control over the code, the performance, and the data itself.
There are several hosting providers out there who provide hosting for a reasonable price.
Naming some: Hostgator.com (US), Hetzner.de (DE), http://swedendedicated.com (SE, never used, just a quick search on the internet).

Google Analytics API for commercial website

This question is for anyone who has used the Google Analytics API on a commercial website.
For instance, you have a website where members can upload music and pay for a membership to track, via Analytics, how many people visited their uploads.
Does Google allow the Analytics API to be used for commercial purposes?
The Analytics API may be used for both commercial and noncommercial purposes in ways consistent with these API Terms.
http://code.google.com/apis/analytics/docs/gdata/gdataTermsOfService.html
Please refer to the lengthy Terms of Service :)
EDIT: As @yc pointed out, this question is about the API (thanks btw).
While I don't think my original answer is totally correct, I think it is worth mentioning the "Privacy" paragraph in the "Regular" Analytics TOS:
PRIVACY. You will not (and will not allow any third party to) use the Service to track or collect personally identifiable information of Internet users, nor will You (or will You allow any third party to) associate any data gathered from Your website(s) (or such third parties' website(s)) with any personally identifying information from any source as part of Your use (or such third parties' use) of the Service. You will have and abide by an appropriate privacy policy and will comply with all applicable laws relating to the collection of information from visitors to Your websites. You must post a privacy policy and that policy must provide notice of your use of a cookie that collects anonymous traffic data.
I am by no means a lawyer, but just want to point out that you need to be careful about what data you collect. Especially when using Event Tracking and Custom Variables.
From the API page:
What does the Google Analytics Data Export API cost?
The Google Analytics Data Export API is free. We intend to always provide a basic level of service for free. As we continue to build out more advanced features and functionality, we may revisit this later.