React Native FileSystem file could not be read? What the Blob?

I am trying to send an audio file recorded with the expo-av library to my server over WebSockets.
A WebSocket will only let me send a String, ArrayBuffer or Blob. I spent the whole day trying to find out how to convert my .wav recording into a Blob, but without success. I tried to use the expo-file-system method FileSystem.readAsStringAsync to read the file as a string, but I get an error that the file could not be read. How is that possible? I passed it the correct URI (using recording.getURI()).
I tried to re-engineer my approach to use a fetch and FormData POST request with the same URI, and the audio gets sent correctly. But I would really like to use WebSockets so that later I could try to stream the sound to the server in real time instead of recording it first and then sending it.

You can try this ... but I can't find a way to read the blob itself:
// this is from my code ...
const recording = new Audio.Recording();
// ... the recording is prepared, started and stopped here ...
const info = await FileSystem.getInfoAsync(recording.getURI() || "");
console.log(`FILE INFO: ${JSON.stringify(info)}`);
// get the file as a blob
const response = await fetch(info.uri);
const blob = await response.blob(); // slow - takes a lot of time
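Another option worth trying (a sketch, not verified against the original setup; the socket variable and the 'buffer' polyfill import are assumptions): read the recording as base64 with expo-file-system, decode it to bytes, and send the resulting ArrayBuffer over the WebSocket. The missing encoding option may also be why the plain readAsStringAsync call failed, since the default UTF-8 read does not handle binary data.
import * as FileSystem from 'expo-file-system';
import { Buffer } from 'buffer'; // the 'buffer' polyfill package for React Native

async function sendRecordingOverSocket(uri, socket) {
  // Read the .wav file as a base64 string (binary data needs an explicit encoding).
  const base64 = await FileSystem.readAsStringAsync(uri, {
    encoding: FileSystem.EncodingType.Base64,
  });
  // Decode base64 into bytes and copy out a clean ArrayBuffer for WebSocket.send().
  const bytes = Buffer.from(base64, 'base64');
  const arrayBuffer = bytes.buffer.slice(bytes.byteOffset, bytes.byteOffset + bytes.byteLength);
  socket.send(arrayBuffer);
}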

Related

Error: Network Connection Lost - saving form data (file) to R2 bucket

I have this handler in my worker:
const data = await event.request.formData();
const key = data.get('filename');
const file = data.get('file');
if (typeof key !== 'string' || !file) {
  return res.send(
    { message: 'Post body is not valid.' },
    undefined,
    400
  );
}
await BUCKET.put(key, file);
return new Response(file);
If I comment out the await BUCKET.put(key, file); line, then I get the response of the file as expected. But with that line in the function, I get the error:
Uncaught (in promise) Error: Network connection lost.
I have confirmed that by changing the put to a get, I can retrieve files from that bucket, so there doesn't seem to be a problem with the connection itself.
Are you still having this problem? I'll need your account ID to figure out what's going on. If you DM me (Vitali) on Discord your account ID (& this SO link for context) I can probably help you out (or email me directly at cloudflare.com using vlovich as the account if you don't have/don't want to sign up on Discord). I'm the tech lead for R2.
EDIT 2022-09-07.
I just noticed that you're calling formData on the request. This is causing you to read the object into RAM. Workers has a 128 MiB limit so what's likely happening is that you're exceeding that limit (probably egregiously since we do give some buffer) and thus Cloudflare is terminating your Worker.
What you'll want to do is make sure you upload the file raw (not as a form) and access the raw ReadableStream. Alternatively, you can try writing a TransformStream to parse out the payload in a streaming fashion if you're confident the file payload (& any metadata you need) will come after the name. Usually it's easier to change your upload mechanism.
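For illustration, here is a minimal sketch of that raw-upload approach (not the original poster's worker: the PUT route, the key-from-path convention and the BUCKET binding name are assumptions):
export default {
  async fetch(request, env) {
    if (request.method !== 'PUT') {
      return new Response('Method not allowed', { status: 405 });
    }
    // Hypothetical convention: the object key comes from the path, e.g. PUT /my-file.bin
    const key = new URL(request.url).pathname.slice(1);
    if (!key) {
      return new Response('Missing object key', { status: 400 });
    }
    // request.body is a ReadableStream; R2's put() accepts it directly,
    // so the object is streamed to the bucket instead of being buffered into RAM.
    const object = await env.BUCKET.put(key, request.body, {
      httpMetadata: { contentType: request.headers.get('content-type') ?? 'application/octet-stream' },
    });
    return new Response(`Stored ${object.key}`, { status: 201 });
  },
};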

Sending files using pure HTTP request using telegram bot

Hello everyone. I was trying to send files from my bot like http://api.telegram.org/botTOKEN/sendDocument?document=http://my_path&chat_id but it doesn't support .txt, .docx and other formats. Any help please?
According to https://core.telegram.org/bots/api#sending-files:
Sending by URL: In sendDocument, sending by URL will currently only work for gif, pdf and zip files.
You may try to use this approach instead:
Post the file using multipart/form-data in the usual way that files are uploaded via the browser. 10 MB max size for photos, 50 MB for other files.
This answer might give you some ideas on how to do that.
UPDATE
It is also a good idea to look at someone's library to understand how it works there.
For example, I use longman/telegram-bot from this repo. There is an encodeFile method in the Request class.
The method is as follows:
public static function encodeFile($file)
{
    $fp = fopen($file, 'rb');
    if ($fp === false) {
        throw new TelegramException('Cannot open "' . $file . '" for reading');
    }
    return $fp;
}
Which means a simple fopen call with the 'rb' parameter is enough to prepare the file for upload.
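For the multipart/form-data route in JavaScript, a small sketch (assumes Node 18+ run as an ES module, where fetch, FormData and Blob are global; the token, chat id and file name are placeholders):
import { readFile } from 'node:fs/promises';

const token = 'YOUR_BOT_TOKEN';
const chatId = 'YOUR_CHAT_ID';

// Build the multipart body: the 'document' part carries the raw file bytes,
// so any format (.txt, .docx, ...) can be sent, subject to the 50 MB limit.
const form = new FormData();
form.append('chat_id', chatId);
form.append('document', new Blob([await readFile('./notes.docx')]), 'notes.docx');

const res = await fetch(`https://api.telegram.org/bot${token}/sendDocument`, {
  method: 'POST',
  body: form,
});
console.log(await res.json());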

Soundcloud API /stream endpoint giving 401 error

I'm trying to write a react native app which will stream some tracks from Soundcloud. As a test, I've been playing with the API using python, and I'm able to make requests to resolve the url, pull the playlists/tracks, and everything else I need.
With that said, when making a request to the stream_url of any given track, I get a 401 error.
The current url in question is:
https://api.soundcloud.com/tracks/699691660/stream?client_id=PGBAyVqBYXvDBjeaz3kSsHAMnr1fndq1
I've tried it without the ?client_id..., I have tried replacing the ? with &, I've tried getting another client_id, I've tried it with allow_redirects as both true and false, but nothing seems to work. Any help would be greatly appreciated.
The streamable property of every track is True, so it shouldn't be a permissions issue.
Edit:
After doing a bit of research, I've found a semi-successful workaround. The /stream endpoint of the API is still not working, but if you change your destination endpoint to http://feeds.soundcloud.com/users/soundcloud:users:/sounds.rss, it'll give you an RSS feed that's (mostly) the same as what you'd get by using the tracks or playlists API endpoint.
The link contained therein can be streamed.
Okay, I think I have found a generalized solution that will work for most people. I wish it were easier, but it's the simplest thing I've found yet.
1. Use the API to pull tracks from the user. You can use linked_partitioning and the next_href property to gather everything, because there's a maximum limit of 200 tracks per call.
2. Using the data pulled down in the JSON, you can use the permalink_url key to get the same thing you would type into the browser.
3. Make a request to the permalink_url and access the HTML. You'll need to do some parsing, but the URL you'll want will be something to the effect of:
"https://api-v2.soundcloud.com/media/soundcloud:tracks:488625309/c0d9b93d-4a34-4ccf-8e16-7a87cfaa9f79/stream/progressive"
You could probably use a regex to parse this out simply (see the sketch after this list).
4. Make a request to this URL, adding ?client_id=..., and it'll give you YET ANOTHER URL in its return JSON.
5. Using the URL returned from the previous step, you can link directly to that in the browser, and it'll take you to your track content. I checked on VLC by inputting the link and it streams correctly.
Hopefully this helps some of you out with your developing.
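To make steps 3 to 5 concrete, here is a rough JavaScript sketch of the scraping workaround (assumptions: the progressive stream URL appears verbatim in the track page HTML, and CLIENT_ID is your own client id):
const CLIENT_ID = 'YOUR_CLIENT_ID';

async function getStreamUrl(permalinkUrl) {
  // Step 3: fetch the public track page and pull the api-v2 progressive URL out of the HTML.
  const html = await (await fetch(permalinkUrl)).text();
  const match = html.match(
    /https:\/\/api-v2\.soundcloud\.com\/media\/soundcloud:tracks:\d+\/[0-9a-f-]+\/stream\/progressive/
  );
  if (!match) throw new Error('No progressive stream URL found in the page HTML');

  // Step 4: that URL returns JSON containing the actual (time-limited) media URL.
  const { url } = await (await fetch(`${match[0]}?client_id=${CLIENT_ID}`)).json();
  return url; // Step 5: this URL can be played directly (e.g. in VLC or the browser)
}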
Since I have the same problem, the answer from #Default motivated me to look for a solution. But I did not understand the workaround with the permalink_url in steps 2 and 3. An easier solution could be:
Fetch, for example, the user's track likes using the api-v2 endpoint like this:
https://api-v2.soundcloud.com/users/<user_id>/track_likes?client_id=<client_id>
In the response we can find the needed URL, as mentioned by #Default in his answer:
collection: [
  {
    track: {
      media: {
        transcodings: [
          ...
          {
            url: "https://api-v2.soundcloud.com/media/soundcloud:tracks:713339251/0ab1d60e-e417-4918-b10f-81d572b862dd/stream/progressive"
            ...
          }
        ]
      }
    }
  }
  ...
]
Make a request to this URL with client_id as a query param and you get another URL with which you can stream/download the track.
Note that api-v2 is still not public, and the request from your client will probably be blocked by CORS.
As mentioned by #user208685 the solution can be a bit simpler by using the SoundCloud API v2:
1. Obtain the track ID (e.g. using the public API at https://developers.soundcloud.com/docs)
2. Get JSON from https://api-v2.soundcloud.com/tracks/TRACK_ID?client_id=CLIENT_ID
3. From the JSON, parse the progressive MP3 stream URL
4. From the stream URL, get the MP3 file URL
5. Play the media from the MP3 file URL
Note: this link is only valid for a limited amount of time and can be regenerated by repeating steps 3 to 5.
Example in node (with node-fetch):
const fetch = require('node-fetch');

const clientId = 'YOUR_CLIENT_ID';

(async () => {
  let response = await fetch(`https://api.soundcloud.com/resolve?url=https://soundcloud.com/d-o-lestrade/gabriel-ananda-maceo-plex-solitary-daze-original-mix&client_id=${clientId}`);
  const track = await response.json();
  const trackId = track.id;

  response = await fetch(`https://api-v2.soundcloud.com/tracks/${trackId}?client_id=${clientId}`);
  const trackV2 = await response.json();
  const streamUrl = trackV2.media.transcodings.filter(
    transcoding => transcoding.format.protocol === 'progressive'
  )[0].url;

  response = await fetch(`${streamUrl}?client_id=${clientId}`);
  const stream = await response.json();
  const mp3Url = stream.url;
  console.log(mp3Url);
})();
For a similar solution in Python, check this GitHub issue: https://github.com/soundcloud/soundcloud-python/issues/87

Sails Skipper: how to read and validate a csv file and exclude the invalid file types during upload?

I'm trying to write a controller that uploads a file to an S3 location. However, before the upload I need to validate whether the incoming file is a CSV or not, and then I need to read the file to check for header columns etc. I got the type of the file as per the snippet below:
req.file('foo')._files[0].stream
But how do I read the entire file stream and check for headers and data etc.? There were other similar questions (like Sails.js Skipper: How to read the uploaded file stream during upload?), but the solution mentioned there is to use the skipper-csv adapter, which I cannot use as I already use skipper-s3 to upload to S3.
Can someone please post an example on how to read the upstreams and perform any validations before the upload?
Here is how my problem got solved: I make a copy of the stream to validate before the actual upload. I then run my validations on the original stream and, once they pass, I upload the copied stream to my desired location.
For reading the CSV stream, I found an npm package, csv-parser (https://github.com/mafintosh/csv-parser), which I felt made it easy to handle events like headers and data.
For creating the copy of the stream, I used the following logic:
const { PassThrough } = require('stream');

const upstream = req.file('file');
const fileStreamMap = {};
const fileStreamMapCopy = {};

_.each(upstream._files, (file) => {
  const fileName = file.stream.filename; // assumes skipper exposes the original file name here
  const stream = new PassThrough();
  const streamCopy = new PassThrough();
  file.stream.pipe(stream);
  file.stream.pipe(streamCopy);
  fileStreamMap[fileName] = stream;
  fileStreamMapCopy[fileName] = streamCopy;
});

// validate and upload files to S3, if valid.
validateAndUploadFile(fileStreamMap, fileStreamMapCopy);
validateAndUploadFile() contains my custom validation logic for my csv upload.
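For reference, a hedged sketch of what the header check inside validateAndUploadFile might look like with csv-parser (the REQUIRED_HEADERS list and the validateCsvHeaders name are illustrative, not from the original code):
const csv = require('csv-parser');

const REQUIRED_HEADERS = ['id', 'name', 'email']; // hypothetical expected columns

function validateCsvHeaders(stream) {
  return new Promise((resolve, reject) => {
    stream
      .pipe(csv())
      .on('headers', (headers) => {
        // csv-parser emits the parsed header row once, before any 'data' events.
        const missing = REQUIRED_HEADERS.filter((h) => !headers.includes(h));
        if (missing.length) {
          reject(new Error(`Missing columns: ${missing.join(', ')}`));
        } else {
          resolve(headers);
        }
      })
      .on('error', reject);
  });
}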
Also, we can use aws-sdk (https://www.npmjs.com/package/aws-sdk) for the S3 upload.
Hope this helps someone.

ASP.NET Web API - Reading querystring/formdata before each request

For reasons outlined here I need to review a set of values from the querystring or formdata before each request (so I can perform some authentication). The keys are the same each time and should be present in each request; however, they will be located in the querystring for GET requests, and in the formdata for POST and others.
As this is for authentication purposes, this needs to run before the request; at the moment I am using a MessageHandler.
I can work out whether I should be reading the querystring or formdata based on the method, and when it's a GET I can read the querystring OK using Request.GetQueryNameValuePairs(); however the problem is reading the formdata when it's a POST.
I can get the formdata using Request.Content.ReadAsFormDataAsync(), however formdata can only be read once, and when I read it here it is no longer available for the request (i.e. my controller actions get null models)
What is the most appropriate way to consistently and non-intrusively read querystring and/or formdata from a request before it gets to the request logic?
Regarding your question of which place would be better: in this case I believe AuthorizationFilters to be better than a message handler, but either way I see that the problem is related to reading the body multiple times.
After doing Request.Content.ReadAsFormDataAsync() in your message handler, can you try doing the following?
Stream requestBufferedStream = Request.Content.ReadAsStreamAsync().Result;
// Resetting to 0, as ReadAsFormDataAsync might have read the entire stream, leaving the
// position at the end; no bytes would then be read during parameter binding and you
// would see null values.
requestBufferedStream.Position = 0;
Note: the ability of a request's content to be read a single time only or multiple times depends on the host's buffer policy. By default, the host's buffer policy is set to always Buffered; in this case, you will be able to reset the position back to 0. However, if you explicitly set the policy to Streamed, then you cannot reset back to 0.
What about using ActionFilterAttributes?
This code worked well for me:
public HttpResponseMessage AddEditCheck(Check check)
{
    var request = ((System.Web.HttpContextWrapper)Request.Properties.ToList<KeyValuePair<string, object>>().First().Value).Request;
    var i = request.Form["txtCheckDate"];
    return Request.CreateResponse(HttpStatusCode.OK);
}