I'm working on an Expo project where users take videos and upload them for organization and viewing. I've been trying to upload videos taken with the expo-camera API to a storage bucket in AWS, using expo-file-system's background upload functionality, but I've come across an error that I have no idea how to handle. It happens more often the bigger the video is, and if the video is over a minute long, the app is likely to crash when I start the upload. Here is the FileSystem call I make to upload the video:
FileSystem.uploadAsync(awsurl, localVideo, {
  successActionStatus: 201,
  sessionType: FileSystem.FileSystemSessionType.BACKGROUND,
  httpMethod: 'POST',
  fieldName: 'file',
  mimeType: 'video/mov',
  uploadType: FileSystem.FileSystemUploadType.MULTIPART,
  parameters: awsparams
});
and here is the error that occurs:
Error: Unable to upload the file: 'Error Domain=NSURLErrorDomain Code=-997 "Lost connection to background transfer service" UserInfo={NSErrorFailingURLStringKey=https://s3.us-west-2.amazonaws.com/{{bucket}}, NSErrorFailingURLKey=https://s3.us-west-2.amazonaws.com/{{bucket}}, _NSURLErrorRelatedURLSessionTaskErrorKey=(
"BackgroundUploadTask <40D1EB30-0F5E-457B-8A19-18610D4A6C3A>.<1>"
), _NSURLErrorFailingURLSessionTaskErrorKey=BackgroundUploadTask <40D1EB30-0F5E-457B-8A19-18610D4A6C3A>.<1>, NSLocalizedDescription=Lost connection to background transfer service}'
I've tried multiple combinations of fieldName, mimeType, and uploadType, but have had no success getting large uploads to work. Any ideas on how to fix this, or effective ways to debug it, would be great! If anyone knows of a way in Expo to break videos into smaller chunks for upload, that would also be very helpful (a rough sketch of what I mean is at the end of this post).
Expo version is 43
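For reference, here is roughly what I had in mind for chunking, in case that helps frame the question; it assumes a hypothetical chunkUploadUrl endpoint that reassembles the parts server-side (not a real AWS API), and relies on expo-file-system's position/length read options, which only work with base64 encoding, so I'm not sure how practical it is:

import * as FileSystem from 'expo-file-system';

// Read the video in base64 slices and POST each slice to a
// hypothetical endpoint that reassembles them on the server.
const CHUNK_SIZE = 5 * 1024 * 1024; // 5 MB per part, arbitrary

async function uploadInChunks(localVideo, chunkUploadUrl) {
  const info = await FileSystem.getInfoAsync(localVideo, { size: true });
  const totalSize = info.size;

  for (let position = 0, part = 0; position < totalSize; position += CHUNK_SIZE, part++) {
    // position/length only work with base64 encoding
    const data = await FileSystem.readAsStringAsync(localVideo, {
      encoding: FileSystem.EncodingType.Base64,
      position,
      length: Math.min(CHUNK_SIZE, totalSize - position),
    });

    await fetch(chunkUploadUrl, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ part, totalSize, data }),
    });
  }
}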
I am building a Strapi app that creates a PDF, after which I want to send the user an email with the PDF attached.
The issue is that the moment the PDF is saved in the upload/files directory, the server reloads; the script is interrupted and the email is never sent.
What I want is for the server not to restart on the upload, so the script keeps running and the email is sent.
I am using Strapi v3 and the upload plugin.
I have already tried using watchIgnoreFiles pointed at the upload/files directory in admin.js, but the Strapi server still restarts (https://forum.strapi.io/t/change-file-without-triggering-a-reload-of-strapi/4026/8). I am also in doubt whether this is the right approach, since I am not changing a file but creating a new one, which is content rather than a system file. My admin.js file is as follows:
module.exports = ({ env }) => ({
  admin: {
    autoOpen: false,
    watchIgnoreFiles: [
      './extensions/upload/files/',
      '**/extensions/upload/files/'
    ]
  }
});
Thank you for your help.
I am able to add multiple IDs to the id array, and on local it loads well and properly adds the scripts.
However, when the code makes it to production, it throws invalid-ID errors because the IDs are getting concatenated (probably due to the minified files, but I'm not sure). How can I fix this?
Pictures of the local and deployed versions are attached:
Google Tag Manager on production
Google Tag Manager on my local machine
Here is my config code using vue-gtm:
import Vue from 'vue';
import VueGtm from 'vue-gtm';

Vue.use(VueGtm, {
  id: ['GTM-M4X2575', 'GTM-T98TL4V'],
  enabled: true,
  debug: true
});
Am I missing anything?
Using Stencil 3.0.3 and Node 12.21, the Cornerstone theme was working and suddenly stopped with a weird server error:
Debug: internal, implementation, error
Error: The BigCommerce server responded with a 500 error
    at Object.internals.getResponse (/usr/local/lib/node_modules/@bigcommerce/stencil-cli/server/plugins/renderer/renderer.module.js:128:15)
    at processTicksAndRejections (internal/process/task_queues.js:97:5)
    at async internals.implementation (/usr/local/lib/node_modules/@bigcommerce/stencil-cli/server/plugins/renderer/renderer.module.js:39:20)
    at async module.exports.internals.Manager.execute (/usr/local/lib/node_modules/@bigcommerce/stencil-cli/node_modules/@hapi/hapi/lib/toolkit.js:45:28)
    at async Object.internals.handler (/usr/local/lib/node_modules/@bigcommerce/stencil-cli/node_modules/@hapi/hapi/lib/handler.js:46:20)
    at async exports.execute (/usr/local/lib/node_modules/@bigcommerce/stencil-cli/node_modules/@hapi/hapi/lib/handler.js:31:20)
    at async Request._lifecycle (/usr/local/lib/node_modules/@bigcommerce/stencil-cli/node_modules/@hapi/hapi/lib/request.js:312:32)
    at async Request._execute (/usr/local/lib/node_modules/@bigcommerce/stencil-cli/node_modules/@hapi/hapi/lib/request.js:221:9)
I have tried reinstalling BigCommerce Stencil and doing a clean build, and I still get the same error. At this point I have no idea what could be causing this issue.
Importantly, there were no code changes between the day everything was working and the day it stopped.
Are you using CloudFlare as a CDN? I've run into this issue and have had to edit my hosts file.
I add the following three lines.
The first one is for the store domain, i.e. if you have MyShoeStore.com.
The other two contain the auto-generated store hash for the BigCommerce store, which should be visible in your URL bar when you are in the BigCommerce control panel (logged into the admin view; that's the easiest way to get it, though there are others).
35.186.223.98 mystoredomain.com
35.186.223.98 store-abcdefg[store-hash].mybigcommerce.com
35.186.223.98 www.store-abcdefg[store-hash].mybigcommerce.com
For some reason, Stencil-CLI gets thrown for a loop when trying to connect to a BigCommerce store that is running behind the CloudFlare CDN. This has caused 500 errors in Stencil-CLI and a failure to render during local development, but the store can still be uploaded and hosted in production without a problem.
I'm not sure whether this is the actual issue behind your 500 error (it can also be caused by a syntax error in one of the Handlebars templates, but Stencil-CLI will usually output an informative error in that case).
The other situation I have seen this in is trying to run Cornerstone on a very old BigCommerce store that is still running the old template framework (Blueprint); in that situation, certain pages will 500 but others will work.
This may be helpful for others who are running into this.
I installed a proxy tool to capture the traffic between Stencil and BigCommerce, hoping to find more details on this error.
Unfortunately, as soon as I did that, the site worked again...
I am trying to upload and encode a file on Azure Media Services. If the video format is MPEG-4 it uploads successfully, but if the format is MPEG-PS it fails.
Error Code:
ErrorExecutingTaskUnsupportedFormat
Error Message:
An error has occurred. Stage: AnalyzeInputMedia. Code: System.IO.InvalidDataException. System.IO.InvalidDataException: Failed to create MediaItem for blob-ea71728299ee44a5b9866e478292a2a0: Invalid data found when processing input!
I believe the mentioned exception is caused by an unsupported input file format. The official docs say only MPEG-TS is supported:
The input protocols supported by Live Encoding are: RTMP, RTP (MPEG-TS) and Smooth Streaming. You can send in a live feed where the video is encoded with MPEG-2 (up to 422 Profile), or H.264...
The difference between MPEG-TS and MPEG-PS:
MPEG-TS - transport stream, used for communication and broadcasting applications;
MPEG-PS - program stream, used for storage applications (e.g. DVD).
Simply put, Azure Media Services supports only MPEG-TS containers.
I am new to Amazon S3. I get the following error when trying to access a file from Amazon S3 using a simple Java method.
2016-08-23 09:46:48 INFO request:450 - Received successful response:200, AWS Request ID: F5EA01DB74D0D0F5
Caught an AmazonClientException, which means the client encountered an
internal error while trying to communicate with S3, such as not being
able to access the network.
Error Message: Unable to store object contents to disk: Read timed out
The exact same lines of code worked yesterday: I was able to download 100% of a 5 GB file in 12 minutes. Today I'm in a better-connected environment, but only 2% or 3% of the file is downloaded before the program fails.
Code that I'm using to download:
s3Client.getObject(new GetObjectRequest("mybucket", file.getKey()), localFile);
You need to set the connection timeout and the socket timeout in your client configuration.
Click here for a reference article
Here is an excerpt from the article:
Several HTTP transport options can be configured through the com.amazonaws.ClientConfiguration object. Default values will suffice for the majority of users, but users who want more control can configure:
Socket timeout
Connection timeout
Maximum retry attempts for retry-able errors
Maximum open HTTP connections
Here is an example of how to do it:
Downloading files >3Gb from S3 fails with "SocketTimeoutException: Read timed out"
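To make that concrete, here is a minimal sketch of the kind of configuration meant above, using the AWS SDK for Java v1; the timeout values, retry count, bucket, key, file name, and credentials provider are placeholders, not values from the original post:

import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.GetObjectRequest;
import java.io.File;

public class S3DownloadWithTimeouts {
    public static void main(String[] args) {
        // Raise the HTTP transport timeouts so a slow, long-running
        // download is not cut off by the SDK defaults.
        ClientConfiguration clientConfig = new ClientConfiguration()
                .withConnectionTimeout(60 * 1000)   // 60 s to establish the connection
                .withSocketTimeout(10 * 60 * 1000)  // allow 10 min of socket inactivity
                .withMaxErrorRetry(5);              // retry transient failures a few more times

        AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider(), clientConfig);

        // Same kind of download call as in the question, now using the tuned client.
        File localFile = new File("localcopy.bin");
        s3Client.getObject(new GetObjectRequest("mybucket", "path/to/large-file.bin"), localFile);
    }
}

With the client built this way, the getObject call has much more headroom before a slow read trips the socket timeout.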