Angular JS / SQLite / Query - sql

I'm building a mobile app with AngularJS and SQLite for offline storage.
Does anybody have an idea how to structure the SQL statements?
I have my controllers and they call the factories, but is it a good idea to write the
statements in the factories? HTTP requests are not useful in this case. Is there a further "abstraction layer"?
<!-- language: lang-js -->
app.controller('loginController', function loginController($scope, loginFactory) {
    $scope.loginFactory = function() {
        return loginFactory.login($scope.firstnameLogin, $scope.passwordLogin);
    };
});

app.factory('loginFactory', function() {
    return {
        login: function(firstnameLogin, passwordLogin) {
            // HERE THE SQL STATEMENT? //
        }
    };
});
Edit: Added some code.

I've never actually used JavaScript to connect to a SQLite database in the manner you are asking about, but it does seem that if you're using HTML5, you can connect to a data store in the browser and store your data in a SQL-like manner.
Adobe has a walkthrough of how to connect and do your standard CRUD activities with this data store. After reading through the document, I would guess that you can use the basics to build up an AngularJS factory that returns the Create, Read, Update and Delete methods for your mobile application to use.
Good luck, hope this helps.
Store data in the HTML5 SQLite database
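To make that concrete, here is a minimal, hypothetical sketch of such a factory. It assumes the browser's WebSQL `openDatabase` API and a `users` table; the function name `makeLoginFactory` and the table/column names are invented for illustration. The factory-building logic is a plain function taking the database handle, so the SQL wrapper stays testable and all statements live in one layer:

```javascript
// Hypothetical sketch: wrap the WebSQL callback API behind a factory.
// `db` is expected to expose transaction(fn), whose tx provides
// executeSql(sql, params, onSuccess, onError), as window.openDatabase() does.
function makeLoginFactory(db) {
    return {
        // Calls back with true if a matching user row exists.
        login: function (firstname, password, onResult, onError) {
            db.transaction(function (tx) {
                tx.executeSql(
                    'SELECT id FROM users WHERE firstname = ? AND password = ?',
                    [firstname, password],
                    function (tx, result) { onResult(result.rows.length > 0); },
                    function (tx, err) { onError(err); }
                );
            });
        }
    };
}
```

In an AngularJS app you would register this in the factory layer, e.g. `app.factory('loginFactory', function () { return makeLoginFactory(window.openDatabase('mydb', '1.0', 'mydb', 2 * 1024 * 1024)); })`, keeping the SQL out of the controllers.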

Related

query to cfscript properly in coldfusion?

CLARIFICATION: The question here is, "how do I stop using <cfquery> to retrieve data and replace it with a call to an API?"
I have an SQL query that I need help converting to cfscript. As you can see, I have tried converting it, but I need some verification that I am on the right path. If not, can anyone help me convert it? Thanks for the help. Here is my code.
CFSCRIPT:
<cfset jsonDatas = fileRead("c:\Users\Desktop\MyApi.json")>
<cfset jsonData = deserializeJSON(jsonDatas) />
<cfif arrayLen(jsonData)>
    <cfloop array="#jsonData#" index="prop">
        <cfoutput>
            <cfscript>
                // writedump(jsonData); for (item in jsonData[1]) {
                if (prop.payGrade == 0) {
                    prop.divisionNbr;
                    prop.probationBeginDate;
                    prop.legacyStatus;
                    prop.payStep;
                    prop.creationDate;
                }
            </cfscript>
        </cfoutput>
    </cfloop>
</cfif>
I am going to go for broke here. Do you have data in a DB and you just want to return a JSON representation of it? As in
<cfquery name="result">
SELECT ...
FROM whereever
</cfquery>
<cfoutput>#SerializeJSON(result, 'struct')#</cfoutput>
And then something else tries to consume this data? I have been looking at your questions, and I don't know if we are on the generating-data side of the world or the consuming-data side.
You are asking about two different things.
SQL defines the query run on the database.
CFSCRIPT is just a script style syntax for ColdFusion code (as opposed to tags).
Your example code is
Here is some SQL
Here I am reading JSON and converting it to a ColdFusion struct using script style code instead of tags.
Are you trying to go from currently calling a database, which returns data as a ColdFusion query object, to calling an API, which returns data as a JSON packet? You need to then convert the JSON data into the same or similar CF structures you currently use?
UPDATE: If you want to replace the existing query with an API call:
Does the API already exist?
Does it return the same data as the current query?
Is your team moving to APIs to decouple existing code?
Do you need to know HOW to convert a <cfquery> call to an API call?
This is really a comment, but I need a lot of space to write it.
Are you looking to get data from a remote source and display it on a web page? Something like this?
I think we need to clarify which are client side technologies and which are server side technologies.
Update based on comments
Consider a VueJS solution
<!-- Showing stuff on screen -->
<div id="app">
{{ info }}
</div>
I can show stuff on the screen. This is similar, but not identical to #info#
<!-- Getting data -->
<script>
new Vue({
    el: '#app',
    data () {
        return {
            info: null
        }
    },
    mounted () {
        axios
            .get('https://api.coindesk.com/v1/bpi/currentprice.json')
            .then(response => (this.info = response))
    }
})
</script>
And this is getting data from a remote source and putting that data into a javascript variable.
Explanation
So why am I doing this with Javascript rather than ColdFusion? Javascript runs on the browser; ColdFusion runs on the server. If you want to consume an API on the browser, you have to use browser based technologies.
My example is in VueJS, but Angular and React are also options. It is a bit dated, but jQuery can do this kind of stuff too.
Code Source: https://v2.vuejs.org/v2/cookbook/using-axios-to-consume-apis.html

"Default Apollo Queries" VS "AsyncData" (Nuxt.js)

I'm building a site with Nuxt/Vue, and it's using a GraphQL backend API. We access this using the Apollo module for Nuxt.
In a page component, you can do this (I think this is called a Smart Query, but I'm not sure):
apollo: {
    pages: {
        query: pagesQuery,
        update(data) {
            return _get(data, "pageBy", {});
        }
    }
}
But you can also do the query like this I think, using the Nuxt asyncData hook:
asyncData(context) {
    let client = context.app.apolloProvider.defaultClient;
    return client.query({ query, variables })
        .then(({ data }) => {
            // do what you want with data, then return it
            return data;
        });
}
I'm not sure what the difference is between these two ways, and which is better. Does anyone know? I couldn't find an explanation in the docs anywhere.
Yeah, good question. The code you have shown at the top is indeed called a Smart Query. In fact
Each query declared in the apollo definition (that is, which doesn't start with a $ char) in a component results in the creation of a smart query object.
A Nuxt project using the @nuxtjs/apollo module can use these out of the box. The beauty of the smart query is the options it comes with, one of which is 'prefetch'. This, as it sounds, allows prefetching and is set to true by default. It can also accept a variables object or a function. You can see the docs here.
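As a hypothetical sketch of that option (the `slug` route parameter is assumed, and `pagesQuery` is a placeholder for a real GraphQL document), a smart query with an explicit prefetch function might look like this:

```javascript
// Hypothetical sketch: a smart-query definition whose prefetch function
// derives query variables from the route during server-side rendering.
var pagesQuery = {}; // placeholder for a real gql`query { ... }` document

var componentOptions = {
    apollo: {
        pages: {
            query: pagesQuery,
            // prefetch may be a boolean, a variables object, or a
            // function of the SSR context that returns variables
            prefetch: function (context) {
                return { slug: context.route.params.slug };
            },
            variables: function () {
                return { slug: this.$route.params.slug };
            }
        }
    }
};
```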
This means that the outcome of a smart query or an asyncData query will essentially be the same. They should be resolved in the same timeframe.
So why choose one or the other? It probably comes down to preference, but with all the options a smart query allows you can do a lot more, and you can include subscriptions, which might not be possible in asyncData.
More about smart queries here.

AngularAMD: the app depends on services but services depend on the app

I am clearly missing something very basic.
The instructions are to create an app, like this:
define(['angularAMD'], function (angularAMD) {
    var app = angular.module(app_name, ['webapp']);
    ... // Setup app here. E.g.: run .config with $routeProvider
    return angularAMD.bootstrap(app);
});
And then create subsequent items like this:
define(['app'], function (app) {
    app.factory('Pictures', function (...) {
        ...
    });
});
And there is this helpful line:
Any subsequent module definitions would simply need to require app to create the desired AngularJS services
Well that's just great for subsequent module definitions, but app.config and app.run need lots of prerequisite modules that I am supposed to create -- as would any application beyond the level of a toy. So there is obviously some simple solution that I am missing. How do I create services that the app depends on?
You can simply use 'angularAMD' injection to create services.
For example,
define(['angularAMD'], function (angularAMD) {
    angularAMD.service('LoggerService', ['$log', function ($log) {
        return function (msg) {
            $log.log('message:', msg);
        };
    }]);
});
The services created using this method are available before the application is bootstrapped. Hence app can depend on these services.
More similar code can be found at angular-AMD sample app.

Store and Sync local Data using Breezejs and MVC Web API

I want to use the breezejs API for storing data in local storage (IndexedDB or WebSQL) and also want to sync the local data with SQL Server.
But I have failed to achieve this and am also unable to find a sample app of this type built with breezejs, knockout and MVC Web API.
My requirement is:
1) If the internet connection is available, the data will come from SQL Server via the MVC Web API.
2) If the internet connection is down, the application will retrieve data from the cached local storage (IndexedDB or WebSQL).
3) As soon as the connection is back, the local data will sync to SQL Server.
Please let me know: can I achieve this requirement by using the breezejs API or not?
If yes, please provide some links and a sample.
If no, what else can we use to achieve this type of requirement?
Thanks.
You can do this, but I would suggest simply using localStorage. Basically, every time you read from the server or save to the server, you export the entities and save them to local storage. Then, when you need to read the data, if the server is unreachable, you read it from localStorage, use importEntities to get it into the manager, and then query locally.
function getData() {
    var query = breeze.EntityQuery
        .from("{YourAPI}");
    return manager.executeQuery(query)
        .then(saveLocallyAndReturnPromise)
        .fail(tryLocalRestoreAndReturnPromise);

    // If the query was successful remotely, save the data in case the
    // connection is lost later
    function saveLocallyAndReturnPromise(data) {
        // Should add error handling here. This code
        // assumes the local processing will be successful.
        var cacheData = manager.exportEntities();
        window.localStorage.setItem('savedCache', cacheData);
        // Return the queried data as a promise so that this detour is
        // transparent to the viewmodel
        return Q(data);
    }

    function tryLocalRestoreAndReturnPromise(error) {
        // Assume any error just means the server is inaccessible.
        // Simplified for the example; more robust error handling is warranted.
        var cacheData = window.localStorage.getItem('savedCache');
        // NOTE: should handle an empty saved cache here by throwing an error
        manager.importEntities(cacheData); // restore saved cache
        var localQuery = query.using(breeze.FetchStrategy.FromLocalCache);
        return manager.executeQuery(localQuery); // this is a promise
    }
}
This is a code skeleton for simplicity. You should catch and handle errors, add an isConnected function to determine connectivity, etc.
If you are doing editing locally, there are a few more hoops to jump through. Every time you make a change to the cache, you will need to export either the whole cache or the changes (probably depending on the size of the cache). When there is a connection, you will need to test for local changes first and, if found, save them to the server before requerying the server. In addition, any schema changes made while offline complicate matters tremendously, so be aware of that.
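As a rough, hypothetical sketch of that save-before-requery step (the function name `syncThenQuery` and the `isConnected` flag are invented for illustration; `manager` is assumed to be a Breeze EntityManager):

```javascript
// Hypothetical sketch: push pending local edits before requerying the
// server; fall back to the local cache when offline.
function syncThenQuery(manager, query, isConnected) {
    if (!isConnected) {
        // Offline: query the local cache only.
        return manager.executeQuery(query.using(breeze.FetchStrategy.FromLocalCache));
    }
    if (!manager.hasChanges()) {
        // Nothing to push; requery the server directly.
        return manager.executeQuery(query);
    }
    // Save pending local edits first, then requery the server.
    return manager.saveChanges().then(function () {
        return manager.executeQuery(query);
    });
}
```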
Hope this helps. A robust implementation is a bit more complex, but this should give you a starting point.

Sharing code between Worklight Adapters

In most cases I've dealt with so far the Worklight Adapter implementation has been pretty trivial, just a few lines of JavaScript.
On the current project, using WL 5.0.6, we have several adapters, each with several procedures. Our particular backends require some common logic to set up requests and interpret responses. This seems ideal for refactoring into a shared library, except that as far as I can see there's no "library" concept in the adapter environment unless we want to drop down into Java.
Are there any patterns for code-reuse between adapters?
I think you are right. There is currently no way of importing custom JavaScript libraries.
There is a way to include/load JavaScript files in the Mozilla Rhino engine by using the "load(xyz.js)" function, but I've noticed that this will make your Worklight adapter undeployable. If you deploy a second *.js file within an adapter, you'll get the following error message:
Adapter deployment failed: Procedure 'getStories' is not implemented in the adapter's JavaScript file.
It seems like Worklight Server can only handle one JavaScript file per adapter.
I have shared common functionality between adapters by implementing it in Java code and including the jar file in the Worklight war file. This came in handy for invoking stored procedures via JDBC that can handle multiple out parameters, and also for retrieving PDF content from internal backend services. The jar needs to be in the lib dir of the worklight.war web app that the adapter will be deployed to.
Example of creating a java object in the adapter:
var parm = new org.apache.http.message.BasicNameValuePair("QBS_Reference_ID",refId);
One way to share JavaScript between adapters is to follow a pattern somewhat like this:
CommonAdapter-impl.js:
var commonObject = {
    invokeBackend: function (input, options) {
        // Do stuff to check/modify input & options
        var response = WL.Server.invokeHttp(input);
        // Do stuff to check/modify response
        return response;
    }
};

function getCommonObject() {
    return commonObject;
}
NormalAdapter-impl.js:
function getSomeData(dataNumber) {
    var input = {
        method: 'get',
        returnedContentType: 'json',
        path: '/restservices/getsomedata'
    };
    return _getCommonObject().invokeBackend(input);
}

function _getCommonObject() {
    var invocationData = {
        adapter: 'CommonAdapter',
        procedure: 'getCommonObject',
        parameters: []
    };
    return WL.Server.invokeProcedure(invocationData);
}
In this particular case, the common adapter is being used to provide a "wrapper" function around WL.Server.invokeHttp, but it can be used to provide any other utility functions as well.
The reason for this pattern in particular is that it allows the WL.Server.invokeHttp to run in the context of the "calling" adapter, which means the endpoint (hostname, port number, etc.) specified in the calling adapter's .xml file will be used. It does mean that the short _getCommonObject() function has to be repeated in each "normal" adapter, but this is a fairly small piece of boilerplate.
Note that this pattern has been used with Worklight 6.1 - there is no guarantee it will work in future or past versions.