Stream API with fetch in a React Native app - react-native

I was trying to use the Streams API with fetch in a React Native app. I implemented it with the help of a great example from jakearchibald.com. The code is something similar to:
fetch('https://html.spec.whatwg.org/').then(function(response) {
  console.log('response::-', response);
  var reader = response.body.getReader();
  var bytesReceived = 0;

  reader.read().then(function processResult(result) {
    if (result.done) {
      console.log("Fetch complete");
      return;
    }
    bytesReceived += result.value.length;
    console.log(`Received ${bytesReceived} bytes of data so far`);
    return reader.read().then(processResult);
  });
});
The Streams API reference is:
https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API
But it seems the fetch implementation in React Native is a little different from the browsers', and it is not easy to use streams the same way as on the web.
There is already an unresolved issue on react-native about this:
https://github.com/facebook/react-native/issues/12912
On the web we can access the stream via response.body.getReader(), where response is just the normal result returned from a fetch call to a streaming URL, but in React Native there is no way to access body (and hence getReader) on the response of a fetch call.
To overcome this I tried the rn-fetch-blob npm package, because it supports streams. But that seems to work only with local file paths, since its readStream function doesn't appear to accept Authorization or other necessary headers. So I tried RNFetchBlob.fetch with the remote URL and the necessary headers, and then called the readStream method on the response, but that always tells me there is no stream for the current response.
RNFetchBlob.fetch('GET', 'https://html.spec.whatwg.org/')
  .progress((received, total) => {
    console.log('progress', received / total);
  })
  .then((resp) => {
    // resp.path() is only available when the request is configured to write
    // to a file (e.g. RNFetchBlob.config({ fileCache: true }).fetch(...)).
    const path = resp.path();
    console.log('resp success:-', resp);
    RNFetchBlob.fs.readStream(path, 'utf8').then((stream) => {
      let data = '';
      stream.open();
      stream.onData((chunk) => {
        data += chunk;
      });
      stream.onEnd(() => {
        console.log('readStream::-', data);
      });
    });
  })
  .catch((err) => {
    console.log('trackAppointmentStatus::-', err);
  });
I may be doing something wrong in both of my approaches, so a little guidance may help me or someone else in the future. Or I may need to find a way to do it natively by writing a bridge.
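One workaround that is sometimes used (a hedged sketch, not from the original post) is to fall back on React Native's XMLHttpRequest implementation and read responseText incrementally from readyState changes. This assumes the RN XHR exposes partial responseText while the request is in the LOADING state, which may not hold for every RN version or for binary responses, and it streams text rather than raw bytes:

// Hedged sketch: incremental text reading via XMLHttpRequest in React Native.
// Assumes partial responseText is available during readyState 3 (LOADING).
function streamText(url, onChunk, onDone) {
  const xhr = new XMLHttpRequest();
  let seen = 0; // number of characters already handed to onChunk

  xhr.open('GET', url);
  // xhr.setRequestHeader('Authorization', 'Bearer <token>'); // headers work as usual

  xhr.onreadystatechange = () => {
    if (xhr.readyState === 3 || xhr.readyState === 4) {
      const text = xhr.responseText || '';
      if (text.length > seen) {
        onChunk(text.slice(seen)); // hand out only the new part
        seen = text.length;
      }
      if (xhr.readyState === 4) {
        onDone();
      }
    }
  };
  xhr.send();
}

// Usage:
// streamText('https://html.spec.whatwg.org/',
//   chunk => console.log('chunk of', chunk.length, 'chars'),
//   () => console.log('Fetch complete'));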

Related

Get a JSON file from an AppScript backend, using an AppScript front end, without getting a CORS error?

I'm trying to build an API-driven front end in Google Apps Script that calls a REST API hosted on Apps Script to make some database queries.
I am currently simply trying to retrieve a JSON file with a GET request.
Everything I try, I get "CORS Missing Allow Origin".
My understanding of CORS is that I might experience this with a POST request (but maybe some people have phrased their requests in a way that gets this to work?).
I have a sense that the situation has changed over time, and what has worked in previous SO threads doesn't seem to work for me now.
Sigh. I feel like Google's Documentation Team would benefit from a dedicated article explaining how this is supposed to work.
If anyone can shed light on how I can get this to work, I'd be most grateful.
client side code:
useEffect(() => {
  fetch('https://script.google.com/macros/s/AKfycbz3_hgjZe0E35ZI2mw7aNs3ASkYCct77qIzL_WTOQMu_ZZeax9WpHpPIwm-MFPhZAW77g/exec/get/all', {
    redirect: "follow",
    headers: {
      "Content-Type": "text/plain",
    },
  })
    .then(result => result.json())
    .then(rowData => setRowData(rowData))
}, []);
Server side code:
export function doGet(e) {
  if (e.pathInfo.startsWith('get/all')) {
    return getAllRecords(e);
  } else if (e.pathInfo.startsWith('get')) {
    return getRecord(e);
  } else {
    return getAllRecords(e);
    // return HtmlService.createHtmlOutput('Error: invalid path- ' + e.pathInfo + '\n\n' + e.parameter + e);
  }
}

function getAllRecords(e) {
  // Connect to the MySQL database using the JDBC connector
  const conn = Jdbc.getConnection(url, username, password);

  // Construct the SELECT statement
  const sql = `SELECT * FROM cars LIMIT 100`;

  // Execute the SELECT statement
  const stmt = conn.prepareStatement(sql);
  const results = stmt.executeQuery();

  // Collect the returned rows into an array of plain objects
  const records = [];
  while (results.next()) {
    const record = {
      id: results.getInt('id'),
      name: results.getString('name'),
      make: results.getString('make'),
      price: results.getInt('price')
    };
    records.push(record);
  }

  return ContentService.createTextOutput(JSON.stringify(records)).setMimeType(ContentService.MimeType.TEXT);
  // return ContentService.createTextOutput(JSON.stringify(records)).setMimeType(ContentService.MimeType.JAVASCRIPT);
}
I've tried various combinations of MIME type and request headers, and I'll try any combinations people suggest.
In order to use pathInfo in this case, it is required to use the access token. I thought that this might be the reason for your current issue. But when the access token is used, I'm worried that it might not be useful for your actual situation. So, in this answer, I would like to propose the following 2 patterns.
Pattern 1:
In this pattern, your script is modified using the access token. In this case, please modify your Javascript as follows.
From:
fetch('https://script.google.com/macros/s/AKfycbz3_hgjZe0E35ZI2mw7aNs3ASkYCct77qIzL_WTOQMu_ZZeax9WpHpPIwm-MFPhZAW77g/exec/get/all', {
  redirect: "follow",
  headers: {
    "Content-Type": "text/plain",
  },
})
  .then(result => result.json())
  .then(rowData => setRowData(rowData))
To:
const accessToken = "###"; // Please set your access token.

fetch('https://script.google.com/macros/s/AKfycbz3_hgjZe0E35ZI2mw7aNs3ASkYCct77qIzL_WTOQMu_ZZeax9WpHpPIwm-MFPhZAW77g/exec/get/all?access_token=' + accessToken)
  .then(result => result.json())
  .then(rowData => setRowData(rowData))
When you use the access token, please include the scopes of the Drive API. Please be careful about this.
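As a hedged sketch of how such a token might be obtained (not part of the original answer; it assumes the Apps Script project's manifest declares a Drive scope, and tokens retrieved this way expire after about an hour):

// appsscript.json (manifest) - example scope; adjust to your project.
// {
//   "oauthScopes": ["https://www.googleapis.com/auth/drive.readonly"]
// }

// Run this once from the Apps Script editor and copy the logged value
// into the accessToken constant on the client side.
function logAccessToken() {
  console.log(ScriptApp.getOAuthToken());
}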
Pattern 2:
In this pattern, I would like to propose a modification without using the access token. When the access token cannot be used, unfortunately, pathInfo cannot be used. So in this pattern, a query parameter is used instead of pathInfo.
Please modify your Javascript as follows.
From:
fetch('https://script.google.com/macros/s/AKfycbz3_hgjZe0E35ZI2mw7aNs3ASkYCct77qIzL_WTOQMu_ZZeax9WpHpPIwm-MFPhZAW77g/exec/get/all', {
  redirect: "follow",
  headers: {
    "Content-Type": "text/plain",
  },
})
  .then(result => result.json())
  .then(rowData => setRowData(rowData))
To:
fetch('https://script.google.com/macros/s/AKfycbz3_hgjZe0E35ZI2mw7aNs3ASkYCct77qIzL_WTOQMu_ZZeax9WpHpPIwm-MFPhZAW77g/exec?value=get%2Fall') // or ?value=get
  .then(result => result.json())
  .then(rowData => setRowData(rowData))
And also, please modify doGet of your Google Apps Script as follows.
Modified script:
function doGet(e) {
  if (e.parameter.value == "get/all") {
    return getAllRecords(e);
  } else if (e.parameter.value == "get") {
    return getRecord(e);
  } else {
    return getAllRecords(e);
  }
}
Note:
This modification supposes that your getAllRecords(e) works fine. Please be careful about this.
It also supposes that your Web App is deployed with "Execute as: Me" and "Who has access to the app: Anyone". Please be careful about this.
When you modify the Google Apps Script of the Web App, please redeploy it as a new version. By this, the modified script is reflected in the Web App. Please be careful about this.
You can see the details of this in my report "Redeploying Web Apps without Changing URL of Web Apps for new IDE (Author: me)".
This is a sample modification. So, please adjust it for your actual situation.
Reference:
Taking advantage of Web Apps with Google Apps Script (Author: me)

hapi 18 eventsourcing not working without stream.end()

What I am trying to achieve:
I am trying to use the HTML5 EventSource API https://developer.mozilla.org/de/docs/Web/API/EventSource to push events to my client application (JavaScript).
Working example code with plain node http:
With a plain example node implementation it works perfectly and as expected. Example code: https://www.html5rocks.com/en/tutorials/eventsource/basics/
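For context, the kind of plain-node SSE handler that tutorial describes looks roughly like this (a hedged sketch, not the exact tutorial code):

const http = require('http');

http.createServer((req, res) => {
  // SSE needs these headers and an unbuffered, chunked response.
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
  });

  let data = 0;
  const timer = setInterval(() => {
    data++;
    res.write('event: message\n');
    res.write('data: ' + data + '\n\n'); // blank line terminates the event
  }, 1000);

  req.on('close', () => clearInterval(timer));
}).listen(8000);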
Problem:
When I try to integrate EventSource (or SSE) into my API endpoint, which is based on hapi (currently using the latest - 18.1.0), it does not work.
My route handler code, mixed with some code I found:
const Stream = require('stream');

class ResponseStream extends Stream.PassThrough {
  setCompressor (compressor) {
    this._compressor = compressor;
  }
}

const stream = new ResponseStream();
let data = 0;

setInterval(() => {
  data++;
  stream.write('event: message\n');
  stream.write('data:' + data + '\n\n');
  console.log('write data...', data);
  // stream.end();
}, 1000);

return h
  .response(stream)
  .type('text/event-stream')
  .header('Connection', 'keep-alive')
  .header('Cache-Control', 'no-cache')
Findings:
I already searched, and it seems that since hapi 17.x they have exposed the flush method of the compressor (https://github.com/hapijs/hapi/issues/3658, section "features").
But it still does not work.
The only way it sends a message is to uncomment the stream.end() line after sending the data. The problem, obviously, is that I can't send further data if I close the stream :/
If I kill the server (with the stream.end() line commented out), the data gets transmitted to the client in a single transmission. I think the problem is still somewhere in the gzip buffering, even when flushing the stream.
There are some code examples in the hapi GitHub repo, but I got none of them working with hapi 17 or 18 (all examples were for hapi <= 16) :/
Does someone know how to solve the problem, or have a working EventSource example with the latest hapi? I would kindly appreciate any help or suggestions.
Edit - Solution
The solution from the post below does work, but I also had an nginx reverse proxy in front of my API endpoint. It seems the main problem was not my code: it was nginx, which had also buffered the EventSource messages.
To avoid this sort of problem, add the X-Accel-Buffering: no header in your hapi response, and it works flawlessly.
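As a hedged sketch of that fix (not verbatim from the post), the header can be set on the hapi response so nginx does not buffer the event stream:

// X-Accel-Buffering is read by nginx (not by hapi itself) and disables
// proxy buffering for this SSE response.
return h.response(stream)
  .type('text/event-stream')
  .header('Cache-Control', 'no-cache')
  .header('Connection', 'keep-alive')
  .header('X-Accel-Buffering', 'no');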
Well I just tested with Hapi 18.1.0 and managed to create a working example.
This is my handler code:
handler: async (request, h) => {
  // Requires: const Stream = require('stream');
  class ResponseStream extends Stream.PassThrough {
    setCompressor(compressor) {
      this._compressor = compressor;
    }
  }

  const stream = new ResponseStream();
  let data = 0;

  setInterval(() => {
    data++;
    stream.write('event: message\n');
    stream.write('data:' + data + '\n\n');
    console.log('write data...', data);
    stream._compressor.flush();
  }, 1000);

  return h.response(stream)
    .type('text/event-stream')
}
And this is the client code, just to test:

var evtSource = new EventSource("http://localhost/");

evtSource.onmessage = function(e) {
  console.log("Data: " + e.data);
};

evtSource.onerror = function(e) {
  console.log("EventSource failed.", e);
};
These are the resources where I found my way to a working example:
https://github.com/hapijs/hapi/blob/70f777bd2fbe6e2462847f05ee10a7206571e280/test/transmit.js#L1816
https://github.com/hapijs/hapi/issues/3599#issuecomment-485190525

getaddrinfo ENOTFOUND API Google Cloud

I'm trying to execute the API.AI tutorial for building a weather bot for Google Assistant (the one here: https://dialogflow.com/docs/getting-started/basic-fulfillment-conversation).
I did everything successfully: created the bot within API.AI, created the fulfillments, installed Node.js on my PC, connected the Google Cloud Platform, etc.
Then I created the index.js file by copying it exactly as stated in the API.AI tutorial, with my API key from World Weather Online (see below).
But when I use the bot, it doesn't work. On the Google Cloud Platform the error is always the same:
Error: getaddrinfo ENOTFOUND api.worldweatheronline.com
api.worldweatheronline.com:80
at errnoException (dns.js:28)
at GetAddrInfoReqWrap.onlookup (dns.js:76)
No matter how often I try, I get the same error, so I don't actually reach the API. I tried to see if anything changed on WWO's side (URL, etc.), but apparently not. I updated Node.js and still have the same issue. I refreshed the Google Cloud Platform completely and it didn't help.
That one I really can't debug. Could anyone help?
Here's the code from API.ai:
'use strict';

const http = require('http');
const host = 'api.worldweatheronline.com';
const wwoApiKey = '[YOUR_API_KEY]';

exports.weatherWebhook = (req, res) => {
  // Get the city and date from the request
  let city = req.body.result.parameters['geo-city']; // city is a required param

  // Get the date for the weather forecast (if present)
  let date = '';
  if (req.body.result.parameters['date']) {
    date = req.body.result.parameters['date'];
    console.log('Date: ' + date);
  }

  // Call the weather API
  callWeatherApi(city, date).then((output) => {
    // Return the results of the weather API to Dialogflow
    res.setHeader('Content-Type', 'application/json');
    res.send(JSON.stringify({ 'speech': output, 'displayText': output }));
  }).catch((error) => {
    // If there is an error let the user know
    res.setHeader('Content-Type', 'application/json');
    res.send(JSON.stringify({ 'speech': error, 'displayText': error }));
  });
};

function callWeatherApi (city, date) {
  return new Promise((resolve, reject) => {
    // Create the path for the HTTP request to get the weather
    let path = '/premium/v1/weather.ashx?format=json&num_of_days=1' +
      '&q=' + encodeURIComponent(city) + '&key=' + wwoApiKey + '&date=' + date;
    console.log('API Request: ' + host + path);

    // Make the HTTP request to get the weather
    http.get({host: host, path: path}, (res) => {
      let body = ''; // var to store the response chunks
      res.on('data', (d) => { body += d; }); // store each response chunk
      res.on('end', () => {
        // After all the data has been received parse the JSON for desired data
        let response = JSON.parse(body);
        let forecast = response['data']['weather'][0];
        let location = response['data']['request'][0];
        let conditions = response['data']['current_condition'][0];
        let currentConditions = conditions['weatherDesc'][0]['value'];

        // Create response
        let output = `Current conditions in the ${location['type']}
${location['query']} are ${currentConditions} with a projected high of
${forecast['maxtempC']}°C or ${forecast['maxtempF']}°F and a low of
${forecast['mintempC']}°C or ${forecast['mintempF']}°F on
${forecast['date']}.`;

        // Resolve the promise with the output text
        console.log(output);
        resolve(output);
      });
      res.on('error', (error) => {
        reject(error);
      });
    });
  });
}
Oh boy, in fact the reason was the most stupid one ever. I didn't enable "billing" on Google Cloud Platform, and that's why it blocked everything (even though I'm using a free trial of the API). They just wanted my credit card number. It works now.
I had the same issue trying to hit my db. Billing wasn't the fix as I had billing enabled already.
For me it was the knexfile.js setup for MySQL - specifically the connection object. In that object, you should replace the host key with socketPath and prepend /cloudsql/ to the value. Here's an example:
connection: {
  // host: process.env.APP_DB_HOST, // The problem
  socketPath: `/cloudsql/${process.env.APP_DB_HOST}`, // The fix
  database: process.env.APP_DB_NAME,
  user: process.env.APP_DB_USR,
  password: process.env.APP_DB_PWD
}
Where process.env.APP_DB_HOST is your Instance connection name.
PS: I imagine that even if you're not using Knex, the host or server parameter of a typical DB connection string will have to be replaced with socketPath when connecting to Google Cloud SQL.
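For example, with the plain mysql2 driver the equivalent would look roughly like this (a hedged sketch; the env variable names are reused from the snippet above and the instance connection name is a placeholder):

// Hedged sketch: connecting to Cloud SQL over its unix socket with mysql2.
const mysql = require('mysql2');

const connection = mysql.createConnection({
  // Instead of host/port, point at the Cloud SQL unix socket.
  socketPath: `/cloudsql/${process.env.APP_DB_HOST}`, // e.g. project:region:instance
  database: process.env.APP_DB_NAME,
  user: process.env.APP_DB_USR,
  password: process.env.APP_DB_PWD,
});

connection.query('SELECT 1', (err, rows) => {
  if (err) throw err;
  console.log(rows);
});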

Firefox add-on SDK: Get http response headers

I'm new to add-on development and I've been struggling with this issue for a while now. There are some questions here that are somewhat related, but they haven't helped me find a solution yet.
So, I'm developing a Firefox add-on that reads one particular header whenever a web page is loaded in any tab in the browser.
I'm able to observe tab loads, but I don't think there is a way to read HTTP headers inside the following (simple) code, only the URL. Please correct me if I'm wrong.
var tabs = require("sdk/tabs");

tabs.on('open', function(tab) {
  tab.on('ready', function(tab) {
    console.log(tab.url);
  });
});
I'm also able to read response headers by observing http events like this:
var {Cc, Ci} = require("chrome");

var httpRequestObserver = {
  init: function() {
    var observerService = Cc["@mozilla.org/observer-service;1"].getService(Ci.nsIObserverService);
    observerService.addObserver(this, "http-on-examine-response", false);
  },

  observe: function(subject, topic, data) {
    if (topic == "http-on-examine-response") {
      subject.QueryInterface(Ci.nsIHttpChannel);
      this.onExamineResponse(subject);
    }
  },

  onExamineResponse: function (oHttp) {
    try {
      var header_value = oHttp.getResponseHeader("<the_header_that_i_need>"); // Works fine
      console.log(header_value);
    } catch (err) {
      console.log(err);
    }
  }
};
The problem (and a major source of personal confusion) is that when I'm reading the response headers I don't know which request the response is for. I want to somehow map the request (especially the request URL) to the response header ("the_header_that_i_need").
You're pretty much there; take a look at the sample code here for more things you can do.
onExamineResponse: function (oHttp) {
  try {
    var header_value = oHttp.getResponseHeader("<the_header_that_i_need>");
    // URI is the nsIURI of the response you're looking at
    // and spec gives you the full URL string
    var url = oHttp.URI.spec;
  } catch (err) {
    console.log(err);
  }
}
Also, people often need to find the related tab, which is answered in "Finding the tab that fired an http-on-examine-response event".
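To make the mapping the question asks about concrete, a hedged sketch (the headersByUrl collector is hypothetical, everything else reuses the observer from the question) could simply record the header keyed by the channel's URL:

var {Cc, Ci} = require("chrome");

var headersByUrl = {}; // hypothetical collector: response URL -> header value

var httpRequestObserver = {
  init: function() {
    var observerService = Cc["@mozilla.org/observer-service;1"].getService(Ci.nsIObserverService);
    observerService.addObserver(this, "http-on-examine-response", false);
  },
  observe: function(subject, topic, data) {
    if (topic == "http-on-examine-response") {
      subject.QueryInterface(Ci.nsIHttpChannel);
      this.onExamineResponse(subject);
    }
  },
  onExamineResponse: function(oHttp) {
    try {
      var header_value = oHttp.getResponseHeader("<the_header_that_i_need>");
      var url = oHttp.URI.spec;           // request URL from the same channel
      headersByUrl[url] = header_value;   // URL and header now travel together
    } catch (err) {
      // getResponseHeader throws if the header is absent; ignore those responses.
    }
  }
};

httpRequestObserver.init();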

How to consume WCF soap web service in node.js

I tried a lot of the examples available on the net using the node module wcf.js, but could not get any appropriate result. I'm using the below URL:
https://webservice.kareo.com/services/soap/2.1/KareoServices.svc?wsdl
Anyone who can explain this with the help of code would be really helpful. I want to know how to access the WSDL in Node.js.
Thanks.
Please have a look at wcf.js
In short you can follow these steps:
npm install wcf.js
Write your code like this:
var Proxy = require('wcf.js').Proxy;
var BasicHttpBinding = require('wcf.js').BasicHttpBinding;

var binding = new BasicHttpBinding();

// Ensure the proxy variable created below has a working wsdl link that actually loads wsdl
var proxy = new Proxy(binding, "http://YourHost/YourService.svc?wsdl");

/* Ensure your message below looks like a valid working SOAP UI request */
var message = "<soapenv:Envelope xmlns:soapenv='http://schemas.xmlsoap.org/soap/envelope/' xmlns:sil='http://YourNamespace'>" +
  "<soapenv:Header/>" +
  "<soapenv:Body>" +
  "<sil:YourMethod>" +
  "<sil:YourParameter1>83015348-b9dc-41e5-afe2-85e19d3703f9</sil:YourParameter1>" +
  "<sil:YourParameter2>IMUT</sil:YourParameter2>" +
  "</sil:YourMethod>" +
  "</soapenv:Body>" +
  "</soapenv:Envelope>";

/* For the message you created above, ensure it works properly in SOAP UI, or rather copy a working request from SOAP UI */
/* proxy.send's second argument is the soap action; you can find the soap action in your wsdl */
proxy.send(message, "http://YourNamespace/IYourService/YourMethod", function (response, ctx) {
  console.log(response);
  /* Your response is in XML, which can either be used as it is or parsed to JSON etc. */
});
You don't have that many options.
You'll probably want to use one of:
node-soap
douche
soapjs
I tried node-soap to get the INR/USD rate with the following code:
app.get('/getcurr', function(req, res) {
  var soap = require('soap');
  var args = {FromCurrency: 'USD', ToCurrency: 'INR'};
  var url = "http://www.webservicex.net/CurrencyConvertor.asmx?WSDL";

  soap.createClient(url, function(err, client) {
    client.ConversionRate(args, function(err, result) {
      console.log(result);
      res.json(result); // send the rate back so the HTTP request doesn't hang
    });
  });
});
Code Project has a neat sample that uses wcf.js, whose APIs are WCF-like, so there is no need to learn a new paradigm.
I think that an alternative would be to:
use a tool such as SoapUI to record the input and output XML messages
use node request to form the input XML message and send (POST) the request to the web service (note that standard JavaScript templating mechanisms such as ejs or mustache could help you here), and finally
use an XML parser to deserialize the response data into JavaScript objects
Yes, this is a rather dirty and low-level approach, but it should work without problems; a sketch of it follows below.
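A hedged sketch of that approach (the endpoint, SOAPAction and envelope body are hypothetical placeholders reused from elsewhere in this page, and Node's built-in https module plus the xml2js parser stand in for the request module):

// Hedged sketch: POST a recorded SOAP envelope and parse the XML reply.
const https = require('https');
const { parseStringPromise } = require('xml2js'); // npm install xml2js

const envelope = `<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Header/>
  <soapenv:Body>
    <!-- body copied from a working SoapUI request -->
  </soapenv:Body>
</soapenv:Envelope>`;

const req = https.request({
  hostname: 'www.your-domain.com',        // hypothetical host
  path: '/stock.svc',                     // hypothetical service path
  method: 'POST',
  headers: {
    'Content-Type': 'text/xml; charset=utf-8',
    'SOAPAction': 'http://tempuri.org/API/GetStockDetail', // taken from the WSDL
    'Content-Length': Buffer.byteLength(envelope),
  },
}, (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', async () => {
    const parsed = await parseStringPromise(body); // XML -> plain JS objects
    console.log(JSON.stringify(parsed, null, 2));
  });
});

req.on('error', console.error);
req.write(envelope);
req.end();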
Also, there's an existing question.
In my case, I used https://www.npmjs.com/package/soap. By default the forceSoap12Headers option was set to false, which prevented node-soap from generating a correct SOAP message according to SOAP 1.2 (check "I am confused about SOAP namespaces" for more details). After I set it to true, I was able to make a call to a .NET WCF service. Here is a TypeScript code snippet that worked for me.
import * as soap from 'soap';
import { IOptions } from 'soap';
// ...
const url = 'https://www.your-domain.com/stock.svc?wsdl';
const opt: IOptions = {
  forceSoap12Headers: true,
};

soap.createClient(url, opt, (err, client: soap.Client) => {
  if (err) {
    throw err;
  }

  const wsSecurityOptions = {
    hasTimeStamp: false,
  };
  const wsSecurity = new soap.WSSecurity('username', 'password', wsSecurityOptions);
  client.setSecurity(wsSecurity);

  client.addSoapHeader({ Action: 'http://tempuri.org/API/GetStockDetail' }, undefined, 'wsa', 'http://www.w3.org/2005/08/addressing');
  client.addSoapHeader({ To: 'https://www.your-domain.com/stock.svc' }, undefined, 'wsa', 'http://www.w3.org/2005/08/addressing');

  const args = {
    symbol: 'GOOG',
  };

  client.GetStockDetail(
    args,
    (requestErr, result) => {
      if (requestErr) {
        throw requestErr;
      }
      console.log(result);
    },
  );
});
Here are a couple of links to the node-soap documentation and usage examples:
https://github.com/vpulim/node-soap/tree/master/test
https://github.com/vpulim/node-soap