I have two websites: a client website and a pricing Web API website. I have a performance issue with the pricing website: it often suspends itself due to low usage, and the first call takes time while it initializes. If I repeat the request immediately afterwards, it is very quick.
I know it will be used on certain pages of the client website, so I wish to call it when such a page loads so it's ready when the user's real request comes in seconds later. Please note the pricing Web API site is not reachable from the client browser; only the client website can access it, on the server side.
I don't know the best approach to this; I don't wish to impact the performance of the client website. I have considered a 1px x 1px iframe calling a page, but I'm concerned it will block the page. Is an Ajax call more appropriate, and if so, how do I call something on the client website that in turn calls the web service? Is there a better approach I haven't considered?
This is a known issue on shared hosting environments; a workaround is fine, but I would suggest upgrading your hosting. My host has a DotNetNuke option, which essentially means it reserves memory on the server and doesn't recycle the app pool due to inactivity. Compared to a VPS this is cheaper.
If it is not shared hosting, adjusting the IIS application pool settings (such as the idle time-out and recycling intervals) could help you.
Anyway, back to your workaround:
You say the client cannot access the Web API, only the back end of your website can. That seems odd, because a Web API exposes REST GET/POST methods, but either way you could make an async call to your Web API on the server side that does not wait for the response, or make a JavaScript call to your API.
Assuming your website is also ASP.NET:
public static async Task StartupWebapi()
{
    string requestUrl = "http://yourwebapi.com/api/startup";
    using (var client = new HttpClient())
    {
        //client.Timeout = new TimeSpan(0, 0, 20); // timeout if needed
        try
        {
            HttpResponseMessage response = await client.GetAsync(requestUrl);
            if (response.IsSuccessStatusCode)
            {
                // The body isn't actually needed; the call only exists to wake the API up.
                string resultString = await response.Content.ReadAsStringAsync();
            }
        }
        catch (HttpRequestException)
        {
            // Ignore failures here: a failed warm-up call shouldn't affect the client website.
        }
    }
}
Then call it somewhere in your code, at least when your client website starts (for example from Application_Start):
HostingEnvironment.QueueBackgroundWorkItem(ct => SomeClass.StartupWebapi());
Or in JavaScript, which is executed asynchronously:
$.ajax({
  url: "http://yourwebapi.com/api/startup",
  type: "GET",
  success: function (response) {
  },
  error: function (response) {
  }
});
See this question for some other workarounds.
I have an endpoint written in Express.js:
router.post("/mkBet", async (req, res) => {
console.log(req.body)
const betToPush = new Bet({
addr: req.body.address,
betAmount: req.body.amount,
outcome: req.body.didWin,
timePlaced: Math.round(+new Date()/1000)
})
try {
const newBet = await betToPush.save()
res.status(201).json(newBet)
} catch(err) {
res.status(400).json({message: err.message})
}})
And I am trying to make it so that it can only be called when an action is performed on the front end, so users cannot call it with custom arguments to make the outcome always true. What would be the best way to achieve this?
It is not possible to securely tell what client sent the request. In other words, a request from your client and a different one can be identical in general.
Talking about unmodified browsers (as opposed to non-browser tools like Postman), you can tell the origin of the request (roughly, the URL loaded in the browser when the request was sent) with some confidence from the Origin and Referer request headers. However, your backend needs authentication and authorization anyway.
With the latter two in place, i.e. with proper security implemented on the server side, it no longer matters which client sends the requests.
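As an illustration only (the requireAuth middleware, the settleBet helper, and the req.user fields below are assumptions, not part of your code), the idea is to authenticate the request and compute the outcome on the server instead of trusting req.body.didWin:
// Sketch: authenticate the caller and decide the outcome server-side,
// so the client never gets to claim "didWin" itself.
// Assumes some session/JWT middleware has already populated req.user.
function requireAuth(req, res, next) {
  if (!req.user) return res.status(401).json({ message: "Not logged in" });
  next();
}

router.post("/mkBet", requireAuth, async (req, res) => {
  const didWin = settleBet(req.user.id, req.body.amount); // server-side game logic (assumed helper)
  const betToPush = new Bet({
    addr: req.user.address,          // taken from the authenticated user, not the request body (assumed field)
    betAmount: req.body.amount,
    outcome: didWin,
    timePlaced: Math.round(Date.now() / 1000)
  });
  try {
    const newBet = await betToPush.save();
    res.status(201).json(newBet);
  } catch (err) {
    res.status(400).json({ message: err.message });
  }
});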
I have a standalone Google Apps Script deployed as a web app. The app is executed as me, because I want it to access files stored on my Drive, and because I want it to generate Google Sheets files that have some ranges protected from the user that are still editable by the script. However, I want these files to be segregated into folders, and each folder is assigned to a user, so I need to know who the user is each time the app runs.
Session.getActiveUser().getEmail() doesn't work since the web app is deployed as me and not as the user. My other thought was to make the app "available to everyone, even anonymous" (right now it's just "available to everyone") to skip Google's login screen, and use some kind of third-party authentication service or script. Building my own seems like overkill because this seems like it should already exist, but so far my research has only turned up things like Auth0 which seem incompatible with my simple Google Apps Script-based app, or else I'm too inexperienced to figure out how to use them.
Does anyone have a suggestion for how to authenticate users for this kind of web app? Preferably something that comes with a beginner-friendly tutorial or documentation? Or, is there another way for me to find out who's running the app while still executing it as myself?
I am so new to this I'm not even sure I'm asking this question in the right way, so suggested edits are taken gratefully.
I can think of two ways you might approach this where the Web App is deployed to execute as the user accessing it:
Scenario A: Create a service-account to access files stored on your Drive and to generate google sheets.
Scenario B: Create a separate Apps Script project deployed as an API Executable and call its functions from the main Web App.
These methods are viable but there are a number of pros and cons to each.
Both require OAuth2 authentication, but that bit is fairly easy to handle thanks to Eric Koleda's OAuth2 library.
Also, in both scenarios, you'll need to bind/link your main Apps Script project to a GCP project and enable the appropriate services, in your case Google Sheets and Google Drive APIs (see documentation for more details).
For Scenario A, the service account must be created under the same GCP project. For Scenario B, the secondary Apps Script project for the API executable must also be bound to the same GCP project.
Issues specific to Scenario A
You'll need to share the files and folders you want to access/modify (and/or create content in) with the service account. The service account has its own email address, and you can share Google Drive files/folders with it as you would with any other Gmail account.
For newly created content, permissions could be an issue, but thankfully files created under a folder inherit that folder's permissions so you should be good on that front.
However, you'll have to use the REST APIs for the Drive and Sheets services directly, calling them via UrlFetch along with an access token (generated using the OAuth2 library) for the service account.
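As a rough sketch of what such a call might look like (getServiceAccountService is a placeholder for an OAuth2 library service configured with the service account's credentials; it is not defined here):
// Sketch (Scenario A): list the files in a folder via the Drive REST API,
// authenticating with the service account's access token.
function listFolderContents(folderId) {
  var token = getServiceAccountService().getAccessToken(); // hypothetical OAuth2 service for the service account
  var url = 'https://www.googleapis.com/drive/v3/files?q=' +
      encodeURIComponent("'" + folderId + "' in parents");
  var response = UrlFetchApp.fetch(url, {
    headers: { Authorization: 'Bearer ' + token },
    muteHttpExceptions: true
  });
  return JSON.parse(response.getContentText()).files;
}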
Issues specific to Scenario B
You'll need to set up a separate Apps Script project and build out a public API (a collection of non-private functions) that can be called by a third party.
Once the script is bound to the same GCP project as the main Web App, you'll need to generate extra OAuth2 credentials from GCP console under the IAM (Identity Access Management) panel.
You'll use the Client ID and Client Secret to generate a refresh token specific to your account (using the OAuth2 library). Then you'll use this refresh token in your main Web App to generate the requisite access token for the API executable (also using the OAuth2 library). As in the previous scenario, you'll need to use UrlFetch to invoke the methods on the API Executable using the generated access token.
One thing to note: you cannot use triggers within the API executable code, as they are not allowed.
Obviously, I've glossed over a lot of the details but that should be enough to get you started.
Best of luck.
Now that I've implemented TheAddonDepot's suggested Scenario B successfully, I wanted to share a few details that might help other newbies.
Here's what the code in my web app project looks like:
function doGet(e) {
  // Use user email to identify user folder and pass to var data
  var userEmail = Session.getActiveUser().getEmail();
  // Check user email against database to fetch user folder name and level of access
  var userData = executeAsMe('getUserData', [userEmail]);
  console.log(userData);
  var appsScriptService = getAppsScriptService();
  if (!appsScriptService.hasAccess()) { // This block should only run once, when I authenticate as myself to create the refresh token.
    var authorizationUrl = appsScriptService.getAuthorizationUrl();
    var htmlOutput = HtmlService.createHtmlOutput('<a href="' + authorizationUrl + '" target="_blank">Authorize</a>.');
    htmlOutput.setTitle('FMID Authentication');
    return htmlOutput;
  } else {
    var htmlOutput = HtmlService.createHtmlOutputFromFile('Index');
    htmlOutput.setTitle('Web App Page Title');
    if (userData == 'user not found') {
      var data = { "userEmail": userEmail, "userFolder": null };
    } else {
      var data = { "userEmail": userData[0], "userFolder": userData[1] };
    }
    return appendDataToHtmlOutput(data, htmlOutput);
  }
}

function appendDataToHtmlOutput(data, htmlOutput, idData) { // Passes data from Google Apps Script to HTML via a hidden div with id=idData
  if (!idData)
    idData = "mydata_htmlservice";
  // data is encoded after stringifying to guarantee a safe string that will never conflict with the html
  var strAppend = "<div id='" + idData + "' style='display:none;'>" + Utilities.base64Encode(JSON.stringify(data)) + "</div>";
  return htmlOutput.append(strAppend);
}

function getAppsScriptService() { // Used to generate script OAuth access token for API call
  // See https://github.com/gsuitedevs/apps-script-oauth2 for documentation
  // The OAuth2Service class contains the configuration information for a given OAuth2 provider, including its endpoints, client IDs and secrets, etc.
  // This information is not persisted to any data store, so you'll need to create this object each time you want to use it.
  // Create a new service with the given name. The name will be used when persisting the authorized token, so ensure it is unique within the scope
  // of the property store.
  return OAuth2.createService('appsScript')
      // Set the endpoint URLs, which are the same for all Google services.
      .setAuthorizationBaseUrl('https://accounts.google.com/o/oauth2/auth')
      .setTokenUrl('https://accounts.google.com/o/oauth2/token')
      // Set the client ID and secret, from the Google Developers Console.
      .setClientId('[client ID]')
      .setClientSecret('[client secret]')
      // Set the name of the callback function in the script referenced
      // above that should be invoked to complete the OAuth flow.
      .setCallbackFunction('authCallback')
      // Set the property store where authorized tokens should be persisted.
      .setPropertyStore(PropertiesService.getScriptProperties())
      // Enable caching to avoid exhausting PropertiesService quotas
      .setCache(CacheService.getScriptCache())
      // Set the scopes to request (space-separated for Google services).
      .setScope('https://www.googleapis.com/auth/drive https://www.googleapis.com/auth/spreadsheets')
      // Requests offline access.
      .setParam('access_type', 'offline')
      // Consent prompt is required to ensure a refresh token is always
      // returned when requesting offline access.
      .setParam('prompt', 'consent');
}

function authCallback(request) { // This should only run once, when I authenticate as WF Analyst to create the refresh token.
  var appsScriptService = getAppsScriptService();
  var isAuthorized = appsScriptService.handleCallback(request);
  if (isAuthorized) {
    return HtmlService.createHtmlOutput('Success! You can close this tab.');
  } else {
    return HtmlService.createHtmlOutput('Denied. You can close this tab.');
  }
}

function executeAsMe(functionName, paramsArray) {
  try {
    console.log('Using Apps Script API to call function ' + functionName.toString() + ' with parameter(s) ' + paramsArray.toString());
    var url = '[API URL]';
    var payload = JSON.stringify({ "function": functionName, "parameters": paramsArray, "devMode": true });
    var params = {
      method: "POST",
      headers: { Authorization: 'Bearer ' + getAppsScriptService().getAccessToken() },
      payload: payload,
      contentType: "application/json",
      muteHttpExceptions: true
    };
    var results = UrlFetchApp.fetch(url, params);
    var jsonResponse = JSON.parse(results).response;
    if (jsonResponse == undefined) {
      var jsonResults = undefined;
    } else {
      var jsonResults = jsonResponse.result;
    }
    return jsonResults;
  } catch (error) {
    console.log('error = ' + error);
    if (error.toString().indexOf('Timeout') > 0) {
      console.log('Throwing new error');
      throw new Error('timeout');
    } else {
      throw new Error('unknown');
    }
  } finally {
  }
}
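One piece not shown above: the Index.html page has to read that hidden div back out. A minimal sketch of how that could look (assuming the default id "mydata_htmlservice" and plain-ASCII data):
// Inside Index.html: recover the data object embedded by appendDataToHtmlOutput.
var encoded = document.getElementById('mydata_htmlservice').textContent;
var data = JSON.parse(atob(encoded)); // base64-decode, then parse the JSON
console.log(data.userEmail, data.userFolder);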
I generated the OAuth2 credentials at https://console.cloud.google.com/ under APIs & Services > Credentials > Create Credentials > OAuth Client ID, selecting "Web application". I had to add 'https://script.google.com/macros/d/[some long ID]/usercallback' as an authorized redirect URI, but I apologize as I did this two weeks ago and can't remember how I figured out what to use there :/ Anyway, this is where you get the client ID and client secret used in function getAppsScriptService() to generate the access token.
The other main heads up I wanted to leave here for others is that while Google Apps Scripts can run for 6 minutes before timing out, URLFetchApp.fetch() has a 60s timeout, which is a problem when using it to call a script via the API that takes more than 60s to execute. The Apps Script you call will still finish successfully in the background, so you just have to figure out how to handle your timeout error and call a follow-up function to get whatever the original function should have returned. I'm not sure if that makes sense, but here's the question I asked (and answered) on that issue.
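As a rough sketch of that pattern (the function names below are placeholders, not my actual code):
// Sketch: catch the 'timeout' error thrown by executeAsMe, then fetch the result
// from a follow-up function once the original call has finished in the background.
var result;
try {
  result = executeAsMe('someSlowFunction', ['some parameter']); // hypothetical long-running function
} catch (error) {
  if (error.message === 'timeout') {
    Utilities.sleep(30000); // simplistic: give the original call time to finish
    result = executeAsMe('getSomeSlowFunctionResult', ['some parameter']); // hypothetical follow-up
  } else {
    throw error;
  }
}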
I tried to integrate an external SOAP-based API using ServiceNow client-side script options. My intention is to initiate an external API call when an incident is created.
But I am getting an uncaught reference error: sn_ws is not defined.
function onSubmit() {
  try {
    var s = new sn_ws.SOAPMessageV2('global.IQTrack', 'VerifyApiKey');
    s.setStringParameterNoEscape('VerifyApiKey.apiKey', 'dfghdhgdjh');
    var response = s.execute();
    var responseBody = response.getBody();
    var status = response.getStatusCode();
  }
  catch (ex) {
    alert(ex);
  }
}
Is this the way to initiate an API call? If so, why am I getting "sn_ws is not defined"?
That's because sn_ws is a server-side API.
You need to either use GlideAjax, or a client-side webservices API such as XMLHttpRequest.
You can find an excellent article on GlideAjax, here: http://snprotips.com/blog/2016/2/6/gliderecord-client-side-vs-server-side
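To illustrate the GlideAjax route, here is a rough sketch (the script include name IQTrackAjax and the sysparm_api_key parameter are made up for this example; the script include must be marked client callable):
// Client script: ask the server to make the SOAP call and hand back the status code.
function checkApiKey() {
  var ga = new GlideAjax('IQTrackAjax'); // name of a client-callable script include (assumed)
  ga.addParam('sysparm_name', 'verifyApiKey');
  ga.addParam('sysparm_api_key', 'dfghdhgdjh');
  ga.getXMLAnswer(function (answer) {
    alert('VerifyApiKey returned status ' + answer);
  });
}

// Script include (server side, client callable): runs sn_ws where it is actually available.
var IQTrackAjax = Class.create();
IQTrackAjax.prototype = Object.extendsObject(AbstractAjaxProcessor, {
  verifyApiKey: function () {
    var s = new sn_ws.SOAPMessageV2('global.IQTrack', 'VerifyApiKey');
    s.setStringParameterNoEscape('VerifyApiKey.apiKey', this.getParameter('sysparm_api_key'));
    var response = s.execute();
    return response.getStatusCode();
  },
  type: 'IQTrackAjax'
});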
If your aim is to initiate the message once a ticket is created, then you should definitely be doing this server-side, not in a client script.
As noted, sn_ws is a server-side API.
I think the GlideAjax method will help you get around this issue.
Please go through the link below; I think it will help you sort it out.
http://wiki.servicenow.com/index.php?title=GlideAjax#gsc.tab=0
An alternative is to use a client-side web services API like XMLHttpRequest.
Is there a way to intercept a resource request and give it a response directly from the handler? Something like this:
page.onRequest(function(request){
request.reply({data: 123});
});
My use case is for using PhantomJS to render a page that makes calls to my API. In order to avoid authentication issues, I'd like to intercept all http requests to the API and return the responses manually, without making the actual http request.
onResourceRequest almost does this, but doesn't have any modification capabilities.
Possibilities that I see:
I could store the page as a Handlebars template, and render the data into the page and pass it off as the raw html to PhantomJS (instead of a URL). While this would work, it would make changes difficult since I'd have to write the data layer for each webpage, and the webpages couldn't stand alone.
I could redirect to localhost, and have a server there that listens and responds to the requests. This assumes that it would be ok to have an open, un-authenticated version of the API on localhost.
Add the data via page.evaluate to the page's global window object. This has the same problems as #1: I'd need to know a-priori what data the page needs, and write server side code unique to each page.
I recently needed to do this when generating pdfs with phantom js.
It's slightly hacky, but seems to work.
var page = require('webpage').create(),
    server = require('webserver').create(),
    totallyRandomPortnumber = 29522,
    ...

//in my actual code, totallyRandomPortnumber is created by a java application,
//because phantomjs will report the port in use as '0' when listening to a random port
//thereby preventing its reuse in page.onResourceRequested...
server.listen(totallyRandomPortnumber, function(request, response) {
  response.statusCode = 200;
  response.setHeader('Content-Type', 'application/json;charset=UTF-8');
  response.write(JSON.stringify({data: 'somevalue'}));
  response.close();
});

page.onResourceRequested = function(requestData, networkRequest) {
  if (requestData.url.indexOf('interceptme') != -1) {
    networkRequest.changeUrl('http://localhost:' + totallyRandomPortnumber);
  }
};
In my actual application I'm sending some data to phantomjs to overwrite request/responses, so I'm doing more checking on urls both in server.listen and page.onResourceRequested.
This feels like a poor-mans-interceptor, but it should get you (or whoever this may concern) going.
Okay,
I have an MVC 4 application and I am trying to create an asynchronous ActionResult within it.
Objective: the user has a "download PDF" icon on the web page, and the download takes a lot of time. While the server is busy generating the PDF, the user should still be able to perform other actions on the web page.
(Clicking the "download PDF" link sends an Ajax request to the server; the server fetches some data and pushes back the PDF.)
What is happening is that when I make the Ajax call to download the PDF, it starts the process but blocks every other request until it returns to the browser. It is a simple blocking request.
What I have tried so far:
1) Used AsyncController as a base class of controller.
2) Made the action an async Task<ActionResult> DownloadPDF(), and wrapped the whole PDF-generation logic into a helper. This helper is awaited inside DownloadPDF().
Something like this:
public async Task<ActionResult> DownloadPDF()
{
    string filepath = await CreatePDF();
    //create a file stream and return it as ActionResult
}

private async Task<string> CreatePDF()
{
    // creates the PDF and returns the path as a string
    return filePath;
}
YES, the Operations are session based.
Am I missing something somewhere?
Objective: the user has a "download PDF" icon on the web page, and the download takes a lot of time. While the server is busy generating the PDF, the user should still be able to perform other actions on the web page.
async will not do this. As I describe in my MSDN article, async yields to the ASP.NET runtime, not the client browser. This only makes sense; async can't change the HTTP protocol (as I mention on my blog).
However, though async cannot do this, AJAX can.
What is happening is that when I make the Ajax call to download the PDF, it starts the process but blocks every other request until it returns to the browser. It is a simple blocking request.
AFAIK, the request code you posted is completely asynchronous: it returns its thread to the ASP.NET thread pool while the PDF is being created. However, there are several other aspects to concurrent requests. In particular, one common hangup is that by default ASP.NET session state serializes requests from the same session, so they cannot run concurrently.
1) Used AsyncController as a base class of controller.
This is unnecessary. Modern controllers inspect the return type of their actions to determine whether they are asynchronous.
YES, the Operations are session based.
It sounds to me like the ASP.NET session is what is limiting your requests. See Concurrent Requests and Session State. You'll have to either turn session state off or make it read-only (for example, with [SessionState(SessionStateBehavior.ReadOnly)] on the controller) in order to have concurrent requests within the same session.