How to work around maximum execution time when uploading to S3 Bucket?

I am using the S3-for-Google-Apps-Script library to export full attachments from Gmail to an S3 bucket. I changed the S3 code to upload the actual content of the attachment rather than an encoded string, as detailed in this post.
However, when attempting to upload an attachment larger than roughly 5 MB, Apps Script throws the following error: "Maximum Execution Time Exceeded". I used timestamps to measure the elapsed time and confirm that the time is spent inside the s3.putObject(bucket, objectKey, file) call.
It may also be helpful to note that a file barely over the limit still gets uploaded to my S3 bucket, but Apps Script reports to the user that the execution time (30 seconds) has been exceeded, disrupting the user flow.
Reproducible Example
This is basically a simple button that scrapes the current email for all attachments; if they are PDFs, it calls the export function, which exports those attachments to our S3 instance. The problem is that when a file is larger than 5 MB, it throws the error:
"exportHandler exceeded execution time"
If you're trying to reproduce this, be aware that you need to copy an instance of S3-for-Google-Apps-Script and initialize it as a separate library in Apps Script, with the changes made here.
To link the libraries, open the Libraries dialog in the Apps Script editor and add the respective library ID, version, and development mode. You'll also need to save your AWS access key and secret key in script properties (PropertiesService), as detailed in the library documentation.
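For reference, here is a minimal one-time setup sketch for storing those keys. The property names are my assumption and simply match the ones that exportAttachment reads further down; adjust them to your own naming.
// One-time setup: store the AWS credentials in script properties.
// The property names are assumptions chosen to match exportAttachment below.
function saveAwsCredentials() {
  PropertiesService.getScriptProperties().setProperties({
    awsAccessKeyId: 'YOUR_ACCESS_KEY_ID',
    awsSecretAccessKey: 'YOUR_SECRET_ACCESS_KEY',
    awsBucket: 'your-bucket-name'
  });
}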
An initial button that triggers an export of a single attachment on the current Gmail thread:
export default function testButton() {
  const Card = CardService.newCardBuilder();
  const exportButtonSection = CardService.newCardSection();
  const exportWidget = CardService.newTextButton()
    .setText('Export File')
    .setOnClickAction(CardService.newAction().setFunctionName('exportHandler'));
  exportButtonSection.addWidget(exportWidget);
  Card.addSection(exportButtonSection);
  return Card.build();
}
Export an attachment to a specified S3 bucket. Note that S3Modified is an instance of S3-for-Google-Apps-Script modified in accordance with the post outlined above; it lives in a separate Apps Script file. s3.putObject is where processing an attachment takes a long time (and, I think, where the error occurs).
The credentials (awsAccessKeyId, awsSecretAccessKey, and awsBucket) are stored in PropertiesService.
function exportAttachment(attachment) {
  const fileName = attachment.getName();
  const timestamp = Date.now();
  const credentials = PropertiesService.getScriptProperties().getProperties();
  const s3 = S3Modified.getInstance(credentials.awsAccessKeyId, credentials.awsSecretAccessKey);
  s3.putObject(credentials.awsBucket, fileName, attachment, { logRequests: true });
  const timestamp2 = Date.now();
  Logger.log('difference: %s ms', timestamp2 - timestamp);
}
This gets all the attachments in the current email thread that are PDFs. The function is essentially the same as the one on the Apps Script site for handling Gmail attachments, except that it specifically looks for PDFs (not a requirement for the code):
function getAttachments(event) {
  const gmailAccessToken = event.gmail.accessToken;
  const messageIdVal = event.gmail.messageId;
  GmailApp.setCurrentMessageAccessToken(gmailAccessToken);
  const mailMessage = GmailApp.getMessageById(messageIdVal);
  const thread = mailMessage.getThread();
  const messages = thread.getMessages();
  const filteredAttachments = [];
  for (let i = 0; i < messages.length; i += 1) {
    const allAttachments = messages[i].getAttachments();
    for (let j = 0; j < allAttachments.length; j += 1) {
      if (allAttachments[j].getContentType() === 'application/pdf') {
        filteredAttachments.push(allAttachments[j]);
      }
    }
  }
  return filteredAttachments;
}
The global handler that gets the attachments and exports them to the S3 bucket when the button is clicked:
function exportHandler(event) {
  const currAttachment = getAttachments(event).flat()[0];
  exportAttachment(currAttachment);
}
global.export = exportHandler;
To be absolutely clear, the bulk of the time is spent in the second code sample (exportAttachment), since that is where the object is put into S3.
The timestamps log how much time that function takes: roughly 2 seconds for a 300 KB file, about 20 seconds for 4 MB, and around 30 seconds for anything over 5 MB. This call contributes the most to the maximum execution time.
So this is what leads me to my question, why do I get the maximum execution time exceeded error and how can I fix it? Here are my two thoughts on potential solutions:
Why does the execution limit occur? The quotas say that the runtime limit for a custom function is 30 seconds, and the runtime limit for the script is 6 minutes.
After some research, I only found custom function mentions in the context of AddOns in Google Sheets, but the function where I'm getting the error is a global function (so that it can be recognized by a callback) in my script. Is there a way to change it to not be recognized as a custom function so that I'm not limited to the 30-second execution limit?
Now, how can I work around this execution limit? Is this an issue with the recommendation to modify the S3 library in this post? Essentially, the modification suggests that we export the actual bytes of the attachment rather than the encoded string.
This definitely increases the load that Apps Script has to handle which is why it increases the execution time required. How can I work around this issue? Is there a way to change the S3 library to improve processing speed?

Regarding the first question
From https://developers.google.com/gsuite/add-ons/concepts/actions#callback_functions
Warning: The Apps Script Card service limits callback functions to a maximum of 30 seconds of execution time. If the execution takes longer than that, your add-on UI may not update its card display properly in response to the Action.
Regarding the second question
The answer to Google Apps Script Async function execution on Server side suggests a "hack": use an "open link" action to call something that can run the long-running task asynchronously.
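A rough sketch of that idea, assuming a separately deployed web app (the URL below is a placeholder) that performs the slow export outside the card callback:
// Sketch of the "open link" workaround: the button opens a placeholder web-app URL
// that runs the long export outside the 30-second card callback limit.
function buildExportLinkCard() {
  const openLink = CardService.newOpenLink()
    .setUrl('https://script.google.com/macros/s/DEPLOYMENT_ID/exec'); // placeholder deployment URL
  const exportButton = CardService.newTextButton()
    .setText('Export File')
    .setOpenLink(openLink);
  return CardService.newCardBuilder()
    .addSection(CardService.newCardSection().addWidget(exportButton))
    .build();
}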
Related
How to use HtmlService in Gmail add-on using App Script
Handling Gmail Addon Timeouts
Can't serve HTML on the Google Apps Script callback page in a GMail add-on
Answer to rev 1.
Regarding the first question
In Google Apps Script, a custom function is a function to be used in a Google Sheets formula. There is no way to extend this limit. Reference: https://developers.google.com/apps-script/guides/sheets/functions
The onOpen and onEdit simple triggers also have a 30-second execution time limit. Reference: https://developers.google.com/apps-script/guides/triggers
Functions executed from the Google Apps Script editor, a custom menu, an image with an assigned function, installable triggers, client-side code, or the Google Apps Script API have an execution time limit of 6 minutes for regular Google accounts (those with an @gmail.com address); G Suite accounts, on the other hand, have a 30-minute limit.


Resume interrupted uploads via filepond

I'm using FilePond to handle chunked uploads. Everything works fine, except one thing: is there any way to continue interrupted uploads? For example, a customer starts uploading a large video over a mobile connection but abandons it at around 40%. A few hours later, she wants to continue the upload over Wi-Fi: same file, but a different browser and a different IP address. In this case I'd like to continue the upload from the last completed chunk, not from the beginning.
As the documentation says:
If one of the chunks fails to upload after the set amount of retries in chunkRetryDelays the user has the option to retry the upload.
In my case there are no failed chunk uploads. The customer simply selects the same file to upload again.
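For context, a minimal sketch of the chunked-upload configuration assumed in this question; the endpoint paths and the custom hash header name are placeholders of mine, not FilePond requirements.
// Sketch of the assumed chunked-upload setup; endpoints and the hash header are placeholders.
FilePond.setOptions({
    chunkUploads: true,
    chunkSize: 5000000,
    chunkRetryDelays: [500, 1000, 3000],
    server: {
        url: '/upload',
        process: {
            url: '/process',
            // custom header with a unique hash of the file/user, stored in the DB server-side
            headers: { 'Upload-Hash': 'sha256-of-file-and-user' }
        },
        patch: '/patch/'
    }
});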
This is exactly what I want:
As FilePond remembers the previous transfer id the process now starts of with a HEAD request accompanied by the transfer id (12345) in the URL. server responds with Upload-Offset set to the next expected chunk offset in bytes. FilePond marks all chunks with lower offsets as complete and continues with uploading the chunk at the requested offset.
During the upload, I send a custom header with a unique hash identifying the file/user and store it in the DB. When the customer uploads the same file and an incomplete version already exists, I am able to find it and send back an Upload-Offset header. This part is clear to me. But I couldn't get FilePond to send a HEAD/GET request before starting the chunk upload to fetch the correct offset; it always starts from zero.
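For reference, a minimal Express-style sketch of that server-side part; the route shape, the lookup helper, and the port are my assumptions, not anything FilePond prescribes.
// Minimal Express sketch: answer the HEAD request with the next expected byte offset
// so the client can mark lower-offset chunks as complete and resume from there.
const express = require('express');
const app = express();

// Hypothetical lookup: find a partially uploaded file by transfer id / stored hash.
async function findUploadByTransferId(transferId) {
  // e.g. SELECT bytes_received FROM uploads WHERE transfer_id = $1
  return { bytesReceived: 0 }; // placeholder
}

app.head('/upload/:transferId', async (req, res) => {
  const upload = await findUploadByTransferId(req.params.transferId);
  if (!upload) return res.sendStatus(404);
  // Tell the client where to resume.
  res.set('Upload-Offset', String(upload.bytesReceived));
  res.sendStatus(200);
});

app.listen(3000);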
I already checked this question, but my case is different. I don't want to continue a paused upload, I'd like to handle an abandoned but later re-uploaded file.
Looking at the filepond.js (4.30.3) source code, I can create a workaround by simply assigning a value to state.serverId. In that case requestTransferOffset is fired and the upload continues from the given offset.
// let's go!
if (!state.serverId) {
    requestTransferId(function(serverId) {
        // stop here if aborted, might have happened in between request and callback
        if (state.aborted) return;
        // pass back to item so we can use it if something goes wrong
        transfer(serverId);
        // store internally
        state.serverId = serverId;
        processChunks();
    });
} else {
    requestTransferOffset(function(offset) {
        // stop here if aborted, might have happened in between request and callback
        if (state.aborted) return;
        // mark chunks with lower offset as complete
        chunks
            .filter(function(chunk) {
                return chunk.offset < offset;
            })
            .forEach(function(chunk) {
                chunk.status = ChunkStatus.COMPLETE;
                chunk.progress = chunk.size;
            });
        // continue processing
        processChunks();
    });
}
...but I don't think this is a clean way to do it.
Has anybody faced this issue yet? Or did I miss something; is there a simpler way to continue interrupted uploads?

When requesting Azure TTS with an SSML statement, there is a delay of about 2 seconds

As shown in the code below, when Azure TTS is requested with an SSML statement, the response is delayed by about 2 seconds.
public static async Task SynthesizeAudioAsync()
{
    var config = SpeechConfig.FromSubscription("xxxxxxxxxKey", "xxxxxxxRegion");
    using var synthesizer = new SpeechSynthesizer(config, null);
    var ssml = File.ReadAllText("C:/ssml.xml");
    var result = await synthesizer.SpeakSsmlAsync(ssml); // <=== The delay is right here
    using var stream = AudioDataStream.FromResult(result);
    await stream.SaveToWaveFileAsync("C:/file.wav");
}
What is the problem?
Or is it normal for the response speed to be delayed by 2 seconds?
var result = await synthesizer.SpeakSsmlAsync(ssml);
As far as I know, the time taken is proportional to the amount of data in the SSML that has to be converted to speech.
What happens in the backend is this: the SSML data is relayed to Azure Speech Services, the speech service processes the text, builds the required audio bytes using the ML model, and returns them to the requesting client.
I am assuming the delay you are talking about is the latency of that round trip. If the text to be synthesized is large, building the required audio bytes will take considerable time.
Since you are using await, the code will be waiting until it is completed.

How to create "Round Robin Call Forwarding Function" in Twilio Stack

I have researched high and low through multiple websites and have not found a single fully documented solution for round-robin call forwarding within the Twilio stack, let alone within Twilio Studio. The last time this question was asked in detail was in 2013, so your help is greatly appreciated. I am looking for a solution to the following, to educate myself and others:
[Round Robin Scenario]
Mentioned by Phil Krnjeu on Aug 1 '13 at 23:04, "I'm trying to create a website that has a phone number on it (say, a phone number for a school). When you call that number, it has different secretary offices (A,B,C, D). I want to create something where the main number is called, and then it goes and calls phone number A the first time, the second time someone calls the main number, number B is called, C, then D. Once D is called (which would be the 4th call), the 5th call goes back to A."
The response to the above question was to use an IVR Screening & Recording application, which requires the caller to pick an agent; that is not a true round-robin solution. The solution I and many others are looking for requires the system to know which agents are in a group and which agent is next to receive a call.
[Key Features Needed]
The ability to add forwarding numbers, as identified above (A, B, C, D), as a group or as IVR extensions, e.g. 1 = Management, 2 = Sales, etc.
A subsequent calling rule that records state in a DB of some sort. For example, agents A through D each start with a flag of 1 (not yet called). When agent A has been forwarded a call, its flag becomes 0 (called), and the script stops and allows the call to be answered by the user or their voicemail. The next call comes in, is forwarded to user B and assigned a 0 (called) value, and the script stops again.
After the caller finishes the call or finishes leaving a voicemail, the script needs to end the call.
[Final Destination]
The round robin should end its call at the forwarded phone number's voicemail.
[Known Issues]
Forwarding a call to multiple numbers not stopping when someone answers
[Options]
Once this question is posted, I am sure someone will soon ask what if I wanted the call to be forwarded to a Twilio voicemail instead of using the forwarded phone number's voicemail (which could be, let's say, a cell phone). I do not necessarily need this feature; however, an additional comment on it would be very helpful to the community. Thank you for your time.
I have limited knowledge of programming besides having the ability to review articles posted by other users. One article I researched in detail that did not work for me was, "IVR: Screening & Recording with PHP and Laravel."
The solution I am looking for would preferably be built with the new Twilio Studio interface; if that is not possible, then any other solution would be helpful to all.
Sam here from the Twilio Support Team. You can build what you've described using Twilio's Runtime suite, Studio, and Functions.
I wrote a blog post with detailed instructions and screenshots here, and I've included a summarized version below as well.
CREATE YOUR VARIABLE
First, you need to create a serverless Variable which will be used as the round robin counter. The variable must be inside an Environment, which is inside a Service. This is the only part of the application where you will need your own laptop. You can see how to create these with any of the SDKs or using curl in the docs.
Create a Service
Create an Environment
Create a Variable
Be sure to copy the SIDs of your Service, Environment, and Variable since you will need that for your function.
For convenience, this is how you create the Variable in NodeJS.
const accountSid = 'your_account_sid';
const authToken = 'your_auth_token';
const client = require('twilio')(accountSid, authToken);

client.serverless.services('ZSXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')
  .environments('ZEXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')
  .variables
  .create({key: 'key', value: 'value'})
  .then(variable => console.log(variable.sid));
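If you prefer to create the Service and Environment from Node as well rather than with curl, a sketch along the same lines; the uniqueName/friendlyName/domainSuffix values are placeholders, so check the Serverless API docs for the current required fields.
// Sketch: create the Service and Environment with the same client; names are placeholders.
client.serverless.services
  .create({uniqueName: 'round-robin', friendlyName: 'Round Robin'})
  .then(service => client.serverless.services(service.sid)
    .environments
    .create({uniqueName: 'production', domainSuffix: 'prod'})
    .then(environment => console.log(service.sid, environment.sid)));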
CREATE YOUR FUNCTION
Create the following Environment Variables here in the console and set each of them to the respective SID you saved earlier.
RR_SERVICE_SID
RR_ENV_SID
RR_VAR_SID_CTR
Next, make sure you check the Enable ACCOUNT_SID and AUTH_TOKEN checkbox above the Environment Variables section under Credentials.
Be sure your Twilio Client version in the Dependencies section is set to the latest, so we can be sure it includes the Serverless resources. At the time of writing (March 2020), the default Client version does not include them, so we upgraded to 3.41.1, which was the latest.
Go here in the console and create a blank Function.
Copy and paste the following code and replace the numbers with the ones you would like to include in your Round Robin (make sure the environment variables you just created match what's in the code).
exports.handler = function(context, event, callback) {
  // Number List
  let numbers = [
    "+18652142345", //Sam
    "+18651092837", //Tina
    "+19193271892", //Matt
    // Copy and paste line above to add another number.
  ];
  // Initialize Twilio Client
  let client = context.getTwilioClient();
  // Fetch Round Robin Ctr
  client.serverless.services(context.RR_SERVICE_SID)
    .environments(context.RR_ENV_SID)
    .variables(context.RR_VAR_SID_CTR)
    .fetch()
    .then(variable => {
      // Use counter value to determine number to call
      let number = numbers[variable.value];
      // Create var with new ctr value
      let ctr = variable.value;
      // If the counter has reached the last number in the round robin, reset it to 0;
      // otherwise increment it so the next call goes to the next number.
      if (ctr == numbers.length - 1) {
        ctr = 0;
      } else {
        ctr++;
      }
      // Update counter value
      client.serverless.services(context.RR_SERVICE_SID)
        .environments(context.RR_ENV_SID)
        .variables(context.RR_VAR_SID_CTR)
        .update({value: ctr})
        .then(resp => {
          // Return the number to call
          let response = {number};
          // Return our response and a null error value
          callback(null, response);
        });
    });
};
CREATE YOUR STUDIO FLOW
Click the red plus sign to create a new Flow here.
Give the Flow a name and click Next.
Scroll to the bottom of the templates and click 'Import from JSON' and click Next.
Paste the Flow JSON shown here and click Next.
Click the RoundRobin function widget and select the Function you just created under the Default service.
Click the FunctionError widget, click MESSAGING & CHAT CONFIG, and change the SEND MESSAGE TO number to a number that you would like to notify by text in the event of a Function failure.
Click the DefaultNumber widget and change the default number that will be forwarded to in the event of a Function failure.
Click the Publish button at the top of your Flow.
CONFIGURE YOUR TWILIO NUMBER
Go here in the console.
Click the Twilio number you would like to configure.
Scroll down to the A CALL COMES IN dropdown in the Voice section and select Studio Flow.
Select your new Flow in the Select a Flow dropdown to the right.
Click Save at the bottom.
And that's it. You're now all set to test!

CSRF failure in custom mongoose pre-hook (Keystone.js)

I'm using the Keystone LocalFile type to handle image uploads. Similar to the Cloudinary autoCleanup option, I want to be able to delete the uploaded file itself, in addition to the corresponding Mongo entry, when deleting entries through the admin UI.
In this case, I want to delete an "Album" and its corresponding album cover.
var fs = require('fs');

Album.schema.pre('remove', function (next) {
    var path = this._original.album_cover.path + "/" + this._original.album_cover.filename;
    fs.unlink(path, function () {
        console.log('deleted');
        next();
    });
});
I get "CSRF failure" when using the fs module. I thought all CSRF protection was handled internally with Keystone.
Anyone know of a better solution to this?
Took a 10 minute break and came back and it seems to be working now. I also found this, which seems to be the explanation.
"Moreover double check your session timeout. In my dev settings the session duration is set to 3 minutes. So, if I end up editing something for more than that time, Keystone will return a CSRF error on save because the new session (generate in the meantime) invalidates the old token."
https://github.com/keystonejs/keystone/issues/1330

store.load() triggers read + create

I'm developing an app where a list is automatically refreshed every 15 seconds. To do so, I load the store from the server every 15 seconds (sending the params) via a PHP page linked to a PostgreSQL DB. So far, so good, and it works OK.
But I have noticed that every time the store is loaded, it sends two requests to the server (read + create). While the read request is necessary to load new elements into the store, the create is completely useless: it sends the whole store as payload and receives nothing, using the network for nothing.
How can I make the store to read, and only read, from the server when it loads?
Thanks
Some weeks ago I had some unexpected creates too. Googling taught me that there is an issue in Sencha with store.load(): loaded records stay phantoms after loading. A store.sync() will create all records in the store that are phantoms (meaning they are not yet in the back end).
I have the following code in my load callbacks:
callback: function(records, operation, success) {
    var x = records.length;
    for (var i = 0; i < x; i++) {
        records[i].phantom = false;
    }
}
This solved my problem.
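For completeness, a sketch of where that callback sits in the periodic refresh; the store name and params are placeholders of mine.
// Sketch: clear the phantom flag on every periodic load so a later sync() sends no creates.
myStore.load({
    params: { /* your request params */ },
    callback: function (records, operation, success) {
        for (var i = 0; i < records.length; i++) {
            records[i].phantom = false;
        }
    }
});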