Agora Cloud Recording [Web Page Recording]: Error 404 (no Route matched with those values) - agora.io

I am trying to use Agora's new recording mode, described here.
But it doesn't work for me: I always get a 404 error with the message 'no Route matched with those values'.
Here is my URL path:
'/v1/apps/{appid}/cloud_recording/resourceid/{my-resource-id}/mode/web/start'
I already checked the Cloud Recording RESTful API reference to find the pattern for the request body, but it doesn't say anything about Web Page Recording (maybe because it's still in beta for now).
Here is my start request body, which I copied from the tutorial, but it doesn't match the RESTful API:
const extensionServiceConfig = {
  errorHandlePolicy: "error_abort",
  extensionServices: [{
    serviceName: "web_recorder_service",
    errorHandlePolicy: "error_abort",
    serviceParam: {
      url: "myurl",
      audioProfile: 0,
      videoWidth: 1280,
      videoHeight: 720,
      maxRecordingHour: 2
    }
  }]
};

The "404 no Route matched" error is caused by Cloud Recording not being enabled on your project. To enable it, open the Products & Usage section of the Agora.io Dashboard, select the project name from the drop-down in the upper left-hand corner, and click the Duration link below Cloud Recording.
Here is a step by step guide I wrote: https://medium.com/agora-io/agora-cloud-recording-quickstart-guide-with-postman-demo-c4a6b824e708
Agora also offers a Postman collection that provides a sample body for making a request; if viewed through the Postman app, it will generate the JavaScript snippet for you.
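As a minimal sketch of how the pieces fit together once the feature is enabled: the snippet below assembles the start URL and wraps the extensionServiceConfig from the question in a clientRequest object. The clientRequest wrapper fields (cname, uid) and the Basic-auth scheme are assumptions carried over from the non-web Cloud Recording docs, so verify them against the Postman collection before relying on them.

```javascript
// Sketch: assembling the web-mode start request.
// appId and resourceId are placeholders; resourceId comes from the
// earlier /acquire call.
const appId = "your-app-id";
const resourceId = "your-resource-id";

const url =
  "https://api.agora.io/v1/apps/" + appId +
  "/cloud_recording/resourceid/" + resourceId +
  "/mode/web/start";

// Assumption: extensionServiceConfig sits inside the same
// clientRequest wrapper used by the other recording modes.
const body = {
  cname: "your-channel-name",
  uid: "527841",
  clientRequest: {
    extensionServiceConfig: {
      errorHandlePolicy: "error_abort",
      extensionServices: [{
        serviceName: "web_recorder_service",
        errorHandlePolicy: "error_abort",
        serviceParam: {
          url: "https://example.com/page-to-record",
          audioProfile: 0,
          videoWidth: 1280,
          videoHeight: 720,
          maxRecordingHour: 2
        }
      }]
    }
  }
};

// The actual call (customerKey/customerSecret are the RESTful API
// credentials from the Agora console):
// fetch(url, {
//   method: "POST",
//   headers: {
//     "Authorization": "Basic " +
//       Buffer.from(customerKey + ":" + customerSecret).toString("base64"),
//     "Content-Type": "application/json"
//   },
//   body: JSON.stringify(body)
// });
```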

Related

Twitter API: Error: Request failed with code 403 (image upload)

There are a few things I don't understand about the Twitter API.
I am using the following package: https://www.npmjs.com/package/twitter-api-v2
which allows me to connect to a Twitter account.
I retrieve all the credentials of the connected account.
(Among other things, I manage to retrieve user data and send simple tweets.)
I am trying to upload an image.
So I have the following code:
const result = await client.v1.post(
  "media/upload.json",
  {
    command: "INIT",
    media: file,
  },
  {
    prefix: "https://upload.twitter.com/1.1/",
  }
);
And I got the following error returned to me:
Error: Request failed with code 403 - You currently have Essential access which includes access to Twitter API v2 endpoints only. If you need access to this endpoint, you'll need to apply for Elevated access via the Developer Portal. You can learn more here: https://developer.twitter.com/en/docs/twitter-api/getting-started/about-twitter-api#v2-access-leve (Twitter code 453)
I have read and write permissions
(since I manage to send tweets).
I understand that the v2 API of Twitter does not allow image upload, but here I am using v1.1, am I not?
Can you give me some information?

Trying to get Vimeo API to recognize the upload file

I am using the Expo managed workflow with React Native and trying desperately not to have to eject. The only sticking point in the entire app is allowing users of the app to upload video in a usable format. I am hoping to use Vimeo as the video hosting service.
I am authenticated and verified and interacting with the Vimeo API.
On my express server when I run this code:
let Vimeo = require('vimeo').Vimeo;
let client = new Vimeo({vimeo creds});
client.request({
  method: 'GET',
  path: '/tutorial'
}, function (error, body, status_code, headers) {
  if (error) {
    console.log(error);
  }
  console.log(body);
})
I see this message:
message: 'Success! You just interacted with the Vimeo API. Your dev environment is configured correctly, and the client ID, client secret, and access token that you provided are all working fine.',
next_steps_link: 'https://developer.vimeo.com/api/guides/videos/upload',
token_is_authenticated: true
I successfully uploaded a video from my express server using this code and placing the file on the same directory as the server code.
client.upload(
  "(unknown)",
  {
    'name': 'Test Video 1',
    'description': 'The description goes here.'
  },
  function (uri) {
    console.log('Your video URI is: ' + uri);
  },
  function (bytes_uploaded, bytes_total) {
    var percentage = (bytes_uploaded / bytes_total * 100).toFixed(2)
    console.log(bytes_uploaded, bytes_total, percentage + '%')
  },
  function (error) {
    console.log('Failed because: ' + error)
  }
)
Then I added the same functionality to my web app locally for testing, but not on an Express server: I am trying to do it within a React Native app running in a browser (for starters). I still get the message saying I am successfully interacting with the Vimeo API and that I am authenticated.
However, when I try to add the file in the React Native app with the exact same code (except that this is how I am accessing the video file from React Native: filename = require({pathToFile});),
and the failed response says:
"Failed because: Unable to locate file to upload."
I do not know if the message is incorrect and the Vimeo API is having trouble understanding the file, or if it is because I am no longer running on a server.
So I keep trying to make sense of how to do this but wonder if I am missing something at this stage of testing.
Because...
I am able to upload the video to Firebase Storage by using expo-image-picker.
When I try to give the Vimeo API the file from what I have retrieved from the filesystem, it still says it is unable to locate file to upload.
In this case the string I am trying to upload looks like this: data:video/quicktime;base64,AAAAHGZ0eXBtcDQyAAAAAW…5AQLMKwEjd8ABNvCVAUgYoQFR1yYBVZYxAVX9NwFXEy8BYh0m
Any suggestions for a way to get the Vimeo API to recognize this file?
I really want to use Vimeo for this project. I can't find anything on Stack Overflow related to this except this one question, which no one has attempted to answer:
Failed because: Unable to locate file to upload but path is correct (Vimeo API)

Sonos Music API service reporting and manifest file

We've built a SMAPI implementation that is serving up audiobooks. We're able to browse books and play them, but we're running into problems getting reporting to work correctly. We saw that the reporting endpoints for SMAPI have been deprecated, so we're attempting to follow the directions from the "Add reporting" page.
We added a reporting path at https://<our_service>/v1/reporting and added endpoints for requests to /context and /timePlayed off of that base path. We're able to hit them directly ourselves, so they're running.
We also created and hosted a manifest file at https://<our_service>/v1/files/manifest.json, which we're also able to hit directly and get the JSON file.
{
  "schemaVersion": "1.0",
  "endpoints": [
    {
      "type": "reporting",
      "uri": "https://<our_service>/v1/reporting"
    }
  ],
  "strings": {
    "uri": "https://<our_service>/v1/files/strings.xml",
    "version": 1
  }
}
After that we added our service for testing using the customsd page. We're still able to navigate the menus and play audiobooks, but Sonos appears to be sending the deprecated reporting requests to our SOAP service instead of the new reporting endpoints.
We found this question where someone appeared to be using a SMAPI implementation along with the new endpoints, but we haven't been able to figure out what we're doing differently that's causing the problem. Any ideas or suggestions would be much appreciated.
It looks like you have unsupported version numbers for the reporting endpoint and the manifest URI. v1 is not supported; acceptable version numbers are v1.0, v2.0, or later. For reference, see:
The example under Add a manifest file with an endpoint in Add reporting.
POST /timePlayed for a list of features for each version.
Cloud queue base URL and API version in Play audio (cloud queue) for details about the URL and API version format.
Updated with more details:
The endpoint doesn’t have to have report at the end, it can be called anything.
The order doesn’t matter. Both /v2.1/reporting and /stuff/report/v2.3 are valid.
The reporting endpoint doesn’t have to be HTTPS, it can be insecure HTTP.
The manifest URL cannot be insecure, it must use HTTPS.
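Applying that version fix to the manifest from the question, a corrected file might look like the following sketch. Only the reporting URI's version segment is changed here; whether the strings URI needs the same treatment is an open assumption, so check the Sonos docs. The URL at which the manifest itself is hosted, i.e. the one registered on the customsd page, also needs a supported version segment such as v1.0.

```json
{
  "schemaVersion": "1.0",
  "endpoints": [
    {
      "type": "reporting",
      "uri": "https://<our_service>/v1.0/reporting"
    }
  ],
  "strings": {
    "uri": "https://<our_service>/v1/files/strings.xml",
    "version": 1
  }
}
```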

PhantomJS and jquery screen scraping - no output

I am attempting to do screen scraping using PhantomJS.
I have copied some phantomjs code from this site: http://snippets.aktagon.com/snippets/534-How-to-scrape-web-pages-with-PhantomJS-and-jQuery
Starting with that script, I have modified into this: http://jsfiddle.net/dqfTa/ (see javascript)
My aim is to collect the prices from a website, which are the inner html of the ".price" tags, into a javascript array. Right now I am trying to console.log() them to my screen.
I am running PhantomJS v1.6 and jQuery v1.8 through the Ubuntu 12.04 console. I am setting the user agent to "iPhone".
Here is my output:
nwo#aws-chaos-us-w-1:~/sandbox$ phantomjs usingjqueryandphantom.js
hello
success
============================================
Step "0"
============================================
It never gets past step 0. Take a look at my code: I added a console.log("h1"); but it won't output it. What am I doing wrong here?
PhantomJS requires you to hook into the console output coming from its page context. From the API reference:
This callback is invoked when there is a JavaScript console message on
the web page. The callback may accept up to three arguments: the
string for the message, the line number, and the source identifier.
By default, console messages from the web page are not displayed.
Using this callback is a typical way to redirect it.
page.onConsoleMessage = function(msg) {
  console.log("This message came from the webpage: " + msg);
};

Log in to my web from a chrome extension

I've got a website where logged-in users can fill out a form to send information. I wanted my users to be able to do this from a Chrome extension too. I managed to get the form to send information working, but I only want logged-in users to be able to do that. It's like a Twitter or Springpad extension: when users first open the extension, they have to log in or register. I saw the following answer on Stack Overflow: Login to website with chrome extension and get data from it
I gave it a try and put this code in background.html:
function login() {
  $.ajax({
    url: "http://localhost/login", type: "GET", dataType: "html", success: function() {
      $.ajax({
        url: "http://localhost/login", type: "POST", data: {
          "email": "me#alberto-elias.com",
          "password": "mypassword",
        },
        dataType: "html",
        success: function(data) {
          // now you can parse your report screen
        }
      });
    }
  });
}
In my popup.html I put the following code:
var bkg = chrome.extension.getBackgroundPage();
$(document).ready(function() {
  $('#pageGaffe').val(bkg.getBgText());
  bkg.login();
});
And on my server, which is in Node.js, I've got a console.log that shows user information on login, so I saw that when I load my extension, it does log in. The problem is how to let users log in by themselves instead of hard-coding my details, how to stay logged in within the extension, and, when submitting the form, how to send each user's own details to the web app.
I hope I've managed to explain myself correctly.
Thanks.
Before answering this question I would like to bring to your notice that you can make cross-origin XHRs from your content scripts as of Chrome 13 if you have declared proper permissions. Here is the extract from the page:
Version note: As of Chrome 13, content scripts can make cross-origin requests to the same servers as the rest of the extension. Before Chrome 13, a content script couldn't directly make requests; instead, it had to send a message to its parent extension asking the extension to make a cross-origin request.
Coming to the point. You simply have to make an XmlHttpRequest to your domain from your extension (content script or background page) and wait for the response.
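For that request to be allowed at all, the extension has to declare your server as a permitted origin in its manifest. A minimal sketch (manifest v2, matching this answer's era, using the localhost server from the question; the name and background page are placeholders):

```json
{
  "name": "My login extension",
  "version": "0.1",
  "manifest_version": 2,
  "background": { "page": "background.html" },
  "permissions": [
    "http://localhost/*"
  ]
}
```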
At Server
Read the request and the session cookie. If the session is valid, send a proper response; otherwise send an error code (401 or anything else).
At client
If the response is proper, display it; otherwise display a login link directing to the login page of your website.
How it works:
It will work if cookies are enabled in the user's browser. Whenever a user logs in to your website, your server sets a session cookie that resides in the user's browser. From then on, this cookie is transmitted with every request the browser sends to your domain, even if the request comes from a Google Chrome extension.
Caveat
Make sure you display proper cues to user indicating that they are logged in to your application. Since your UI will be mostly inside the extension, it is possible that user might not be aware of their valid session with your website and they might leave an active session unattended which might be abused if user is accessing it from a public internet kiosk.
Reference
You can take a look at a similar implementation that I have done with my extension here.
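The flow above boils down to one credentialed request plus a branch on the status code. A minimal sketch, assuming a hypothetical /session endpoint on your server (200 when the cookie is valid, 401 otherwise) and using fetch for brevity; the same logic works with the XMLHttpRequest of this answer's era:

```javascript
// Sketch: ask the server whether the session cookie is still valid.
// "/session" is a hypothetical endpoint, not part of the question's app.
// fetchImpl is injected so the popup can pass in window.fetch.
function checkSession(fetchImpl) {
  return fetchImpl("http://localhost/session", {
    // attach the site's session cookie to the cross-origin request
    credentials: "include"
  }).then(function (res) {
    if (res.status === 401) {
      return { loggedIn: false }; // popup should show a login link
    }
    return { loggedIn: true };    // popup can show the form
  });
}

// In the popup, roughly:
// checkSession(window.fetch.bind(window)).then(function (s) {
//   s.loggedIn ? showForm() : showLoginLink();
// });
```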
So the user logs into your server and you know that. I am a bit confused about whether you then want the user to browse your website with those credentials or a third-party website with those credentials.
If it is your website, then you should be able to set a cookie that indicates whether they are logged in, and detect this server-side when they navigate your site.
If it is a third-party site, then the best you can do is create a content script that either fills out the login form and auto-submits it for them, or analyzes the login POST data and sends it along yourself, then forces a refresh.