Issues with creating default broadcast (POST https://www.googleapis.com/youtube/v3/liveBroadcasts) - youtube-livestreaming-api

I'm trying to create a default broadcast for my live stream with privacy set to 'unlisted' or 'private', but it's always created with privacy 'public', even though the privacyStatus field is provided in the request body:
REQUEST:
const res = await this.request(callback => youtube.liveBroadcasts.insert({
auth: auth,
part: 'snippet,contentDetails,status',
resource: {
snippet: {
title: "Some Title",
description: "Some description",
scheduledStartTime: "2020-03-11T12:08:43.087Z,
isDefaultBroadcast: true
},
status: {
privacyStatus: 'unlisted'
},
}
}, callback))
CHUNK OF RESPONSE:
data:
...
status:
{ lifeCycleStatus: 'ready',
privacyStatus: 'public',
recordingStatus: 'notRecording',
selfDeclaredMadeForKids: false }},
...
Is this normal behaviour, or am I doing something wrong? BTW, update works fine.
If this is normal behaviour, it should be mentioned somewhere here:
https://developers.google.com/youtube/v3/live/docs/liveBroadcasts/insert
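Since you say update works fine, a possible workaround until the docs clarify this is to set the privacy with a follow-up liveBroadcasts.update call right after the insert. A sketch, assuming the same auth object and that res.data.id holds the id of the broadcast created above:
// sketch: patch the default broadcast's privacy right after creating it
await this.request(callback => youtube.liveBroadcasts.update({
  auth: auth,
  part: 'status',
  resource: {
    id: res.data.id, // id returned by the insert above
    status: { privacyStatus: 'unlisted' }
  }
}, callback))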

Keystone.js 6 access denied adminMeta

I want to seed data onConnect, but I get an access denied error when using this query:
{
keystone: keystone {
adminMeta {
lists {
key
description
label
singular
plural
path
fields {
path
}
}
}
}
I get this error even though I'm using sudo (context.sudo().graphql.raw):
[
Error: Access denied
at /Users/sidalitemkit/work/web/yet/wirxe/wirxe-app/node_modules/@keystone-next/admin-ui/system/dist/admin-ui.cjs.dev.js:552:19
at processTicksAndRejections (node:internal/process/task_queues:94:5)
at async Promise.all (index 0)
at async Promise.all (index 0) {
locations: [ [Object] ],
path: [ 'keystone', 'adminMeta' ]
}
]
Here's my config:
export default auth.withAuth(
config({
db: {
adapter: 'prisma_postgresql',
url:
'postgres://admin:aj093bf7l6jdx5hm@wirxe-app-database-do-user-9126376-0.b.db.ondigitalocean.com:25061/wirxepool?schema=public&pgbouncer=true&sslmode=require',
onConnect: initialiseData,
},
ui: {
isAccessAllowed: (context) => !!context.session?.data,
},
lists,
session: withItemData(
statelessSessions({
maxAge: sessionMaxAge,
secret: sessionSecret,
}),
{ User: 'email' },
),
}),
);
I figured out that when I do:
isAccessAllowed: (context) => true
it works.
Any advice here?
context.sudo() disables access control, so there could be some issue with your query. isAccessAllowed: (context) => true relates to the Admin UI and not to the backend GraphQL implementation. This could be a bug; please open an issue in the repo. They should be able to fix it quickly.
I don't see a sample initialiseData to try myself. Also, the GraphQL API is designed such that if you try to access a non-existing item, it may give you an access denied error even when there is no access control (all access set to true).
There is also another API that makes creating the initial items easier. You can use the new list API, available as context.sudo().lists.<ListName>.createOne or createMany, like this:
const user = await context.sudo().lists.User.createOne({
data: {
name: 'Alice',
posts: { create: [{ title: 'My first post' }] },
},
query: 'id name posts { id title }',
});
or
const users = await context.sudo().lists.User.createMany({
data: [
{
data: {
name: 'Alice',
posts: { create: [{ title: 'Alices first post' }] },
},
},
{
data: {
name: 'Bob',
posts: { create: [{ title: 'Bobs first post' }] },
},
},
],
query: 'id name posts { id title }',
});
For more details, see the List Items API and Database Items API in the preview documentation.
You can find a working example in the KeystoneJS repository (the blog example).
You have to await initialiseData() and pass the context to it; the onConnect hook already provides this context for you.
Also, you can check for an argument like '--seed-data' so seeding only runs when you ask for it, and run the code as:
keystone --seed-data
export default auth.withAuth(
config({
db: {
adapter: 'prisma_postgresql',
url:
'postgres://admin:aj093bf7l6jdx5hm@wirxe-app-database-do-user-9126376-0.b.db.ondigitalocean.com:25061/wirxepool?schema=public&pgbouncer=true&sslmode=require',
async onConnect(context) {
if (process.argv.includes('--seed-data')) {
await initialiseData(context);
}
},
},
ui: {
isAccessAllowed: (context) => !!context.session?.data,
},
lists,
session: withItemData(
statelessSessions({
maxAge: sessionMaxAge,
secret: sessionSecret,
}),
{ User: 'email' },
),
}),
);
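For reference, since initialiseData isn't shown in the question, here's a minimal sketch of what it could look like using the sudo list API described above (the User field names are hypothetical; adjust them to your list definitions):
async function initialiseData(context) {
  // hypothetical seed data; field names depend on your User list
  await context.sudo().lists.User.createOne({
    data: { email: 'admin@example.com', name: 'Admin' },
    query: 'id email',
  });
}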

Validate request body separately from request as a whole

I have a question about validating a PUT request. The body of the request is an array of objects. I want the request to succeed if the body contains an array of at least length one, but I also need to run a separate validation on each object in the array and pass the results back in the response. So my PUT body would be:
[1, 2, {id: "thirdObject"}]
The response should be 200 even though the first two items are not even objects. The request just needs to succeed if an array of length 1 is passed in the body. The response needs to be something like:
[{id: firstObject, status: 400, error: should be object}, {id: secondObject, status: 400, error: should be object}, { id: thirdObject, status: 204 }]
Currently I am validating the body as such with fluent schema:
body: S.array().items(myObjectSchema)
.minItems(1)
This results in a 400 if any of the items in the body don't match myObjectSchema. Do you have any idea how to achieve this?
The validation doesn't tell you when an item passes (e.g. { id: thirdObject, status: 204 }), so you need to manage that yourself.
To do that, you need to create an error handler that reads the validation error and merges it with the request body:
const fastify = require('fastify')()
const S = require('fluent-schema')
fastify.put('/', {
handler: () => { /** this will never be executed if the schema validation fails */ },
schema: {
body: S.array().items(S.object()).minItems(1)
}
})
const errorHandler = (error, request, reply) => {
const { validation, validationContext } = error
// check if we have a validation error
if (validation) {
// here the validation error
console.log(validation)
// here the body
console.log(request.body)
reply.send(validation)
} else {
reply.send(error)
}
}
fastify.setErrorHandler(errorHandler)
fastify.inject({
method: 'PUT',
url: '/',
payload: [1, 2, { id: 'thirdObject' }]
}, (_, res) => {
console.log(res.json())
})
This will log:
[
{
keyword: 'type',
dataPath: '[0]',
schemaPath: '#/items/type',
params: { type: 'object' },
message: 'should be object'
},
{
keyword: 'type',
dataPath: '[1]',
schemaPath: '#/items/type',
params: { type: 'object' },
message: 'should be object'
}
]
[ 1, 2, { id: 'thirdObject' } ]
As you can see, thanks to validation[].dataPath you can tell which elements of the body array are not valid and merge that information with the request body to build your response.
Note that the handler will not be executed in this scenario. If you need to execute it regardless of validation, you should do the validation work in a preHandler hook and skip the default schema validation (since it is blocking):
Edit:
const fastify = require('fastify')()
const S = require('fluent-schema')
let bodyValidator
fastify.decorateRequest('hasError', function () {
if (!bodyValidator) {
bodyValidator = fastify.schemaCompiler(S.array().items(S.object()).minItems(1).valueOf())
}
const valid = bodyValidator(this.body)
if (!valid) {
return bodyValidator.errors
}
return false // no validation errors
})
fastify.addHook('preHandler', (request, reply, done) => {
const errors = request.hasError()
if (errors) {
console.log(errors)
// show the same errors as before
// you can merge here or set request.errors = errors to let the handler read them
reply.send('here merge errors and request.body')
return
}
done() // needed to continue if you don't reply.send
})
fastify.put('/', { schema: { body: S.array() } }, (req, reply) => {
console.log('handler')
reply.send('handler')
})
fastify.inject({
method: 'PUT',
url: '/',
payload: [1, 2, { id: 'thirdObject' }]
}, (_, res) => {
console.log(res.json())
})
I don't know the schema syntax you are using, but using draft 7 of the JSON Schema (https://json-schema.org/specification-links.html, and see also https://json-schema.org/understanding-json-schema for some reference material), you can do:
{
"type": "array",
"minItems": 1
}
If you want to ensure that at least one, but not necessarily all items match your object type, then add the "contains" keyword:
{
...,
"contains": ... reference to your object schema here
}
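Put together, a minimal sketch (the contains subschema below is a placeholder; substitute your actual object schema):
{
  "type": "array",
  "minItems": 1,
  "contains": {
    "type": "object",
    "required": ["id"]
  }
}
This accepts [1, 2, {"id": "thirdObject"}], because at least one item matches the subschema, while still rejecting an empty array.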

How to document rest api using aws cdk

I'm creating a REST API using AWS CDK version 1.22 and I would like to document my API using CDK as well, but I do not see any documentation generated for my API after deployment.
I've dug into the AWS docs, the CDK examples, and the CDK reference, but I couldn't find concrete examples that helped me understand how to do it.
Here is my code:
const app = new App();
const api = new APIStack(app, 'APIStack', { env }); // basic api gateway
// API Resources
const resourceProps: APIResourceProps = {
gateway: api.gateway,
}
// dummy endpoint with some HTTP methods
const siteResource = new APISiteStack(app, 'APISiteStack', {
env,
...resourceProps
});
const siteResourceDocs = new APISiteDocs(app, 'APISiteDocs', {
env,
...resourceProps,
});
// APISiteDocs is defined as follow:
class APISiteDocs extends Stack {
constructor(scope: Construct, id: string, props: APIResourceProps) {
super(scope, id, props);
new CfnDocumentationVersion(this, 'apiDocsVersion', {
restApiId: props.gateway.restApiId,
documentationVersion: config.app.name(`API-${config.gateway.api.version}`),
description: 'Spare-It API Documentation',
});
new CfnDocumentationPart(this, 'siteDocs', {
restApiId: props.gateway.restApiId,
location: {
type: 'RESOURCE',
method: '*',
path: APISiteStack.apiBasePath,
statusCode: '405',
},
properties: `
{
"status": "error",
"code": 405,
"message": "Method Not Allowed"
}
`,
});
}
}
Any help/hint is appreciated. Thanks.
I have tested with CDK 1.31, and it is possible to use the CDK's default deployment option and also add a documentation version to the stage. I used deployOptions.documentationVersion in the REST API definition to set the version identifier of the API documentation:
import * as cdk from '@aws-cdk/core';
import * as apigateway from "@aws-cdk/aws-apigateway";
import {CfnDocumentationPart, CfnDocumentationVersion} from "@aws-cdk/aws-apigateway";
export class CdkSftpStack extends cdk.Stack {
constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
super(scope, id, props);
const documentVersion = "v1";
// create the API
const api = new apigateway.RestApi(this, 'books-api', {
deploy: true,
deployOptions: {
documentationVersion: documentVersion
}
});
// create GET method on /books resource
const books = api.root.addResource('books');
books.addMethod('GET');
// create documentation for GET method
new CfnDocumentationPart(this, 'doc-part1', {
location: {
type: 'METHOD',
method: 'GET',
path: books.path
},
properties: JSON.stringify({
"status": "successful",
"code": 200,
"message": "Get method was succcessful"
}),
restApiId: api.restApiId
});
new CfnDocumentationVersion(this, 'docVersion1', {
documentationVersion: documentVersion,
restApiId: api.restApiId,
description: 'this is a test of documentation'
});
}
}
From what I can gather, if you use the CDK's default deployment options, which create the stage and deployment on your behalf, it won't be possible to attach a documentation version to that stage.
Instead, the solution is to set the RestApi's options object to deploy: false and define the stage and deployment manually.
stack.ts code
import * as cdk from '@aws-cdk/core';
import * as apigateway from '@aws-cdk/aws-apigateway';
import { Stage, Deployment, CfnDocumentationPart, CfnDocumentationVersion, CfnDeployment } from '@aws-cdk/aws-apigateway';
export class StackoverflowHowToDocumentRestApiUsingAwsCdkStack extends cdk.Stack {
constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
super(scope, id, props);
// create the API, need to not rely on CFN's automatic deployment because we need to
// make our own deployment to set the documentation we create
const api = new apigateway.RestApi(this, 'books-api',{
deploy: false
});
// create GET method on /books resource
const books = api.root.addResource('books');
books.addMethod('GET');
// create documentation for GET method
const docpart = new CfnDocumentationPart(this, 'doc-part1', {
location: {
type: 'METHOD',
method: 'GET',
path: books.path
},
properties: JSON.stringify({
"status": "successful",
"code": 200,
"message": "Get method was succcessful"
}),
restApiId: api.restApiId
});
const doc = new CfnDocumentationVersion(this, 'docVersion1', {
documentationVersion: 'version1',
restApiId: api.restApiId,
description: 'this is a test of documentation'
});
// not sure if this is necessary but it made sense to me
doc.addDependsOn(docpart);
const deployment = api.latestDeployment ? api.latestDeployment: new Deployment(this,'newDeployment',{
api: api,
description: 'new deployment, API Gateway did not make one'
});
// create stage of api with documentation version
const stage = new Stage(this, 'books-api-stage1', {
deployment: deployment,
documentationVersion: doc.documentationVersion,
stageName: 'somethingOtherThanProd'
});
}
}
Created a feature request for this option here.
I had the exact same problem. The CfnDocumentationVersion resource has to be created after all of your CfnDocumentationPart resources. Using your code as an example, it should look something like this:
class APISiteDocs extends Stack {
constructor(scope: Construct, id: string, props: APIResourceProps) {
super(scope, id, props);
new CfnDocumentationPart(this, 'siteDocs', {
restApiId: props.gateway.restApiId,
location: {
type: 'RESOURCE',
method: '*',
path: APISiteStack.apiBasePath,
statusCode: '405',
},
properties: JSON.stringify({
"status": "error",
"code": 405,
"message": "Method Not Allowed"
}),
});
new CfnDocumentationVersion(this, 'apiDocsVersion', {
restApiId: props.gateway.restApiId,
documentationVersion: config.app.name(`API-${config.gateway.api.version}`),
description: 'Spare-It API Documentation',
});
}
}
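If you'd rather not rely on declaration order alone, you can also make the dependency explicit with addDependsOn, as the previous answer does; CloudFormation will then always create the documentation parts before the documentation version.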

Fine Uploader Concurrent Chunking S3

I've been trying to get Fine Uploader (fresh from NPM, 5.12.0-alpha) set up to push some data to S3, and I've been having some issues with chunking. I believe I have enabled chunking based on the Concurrent Chunking example, but I have not seen multiple chunks being uploaded in the XHR console.
const fu = require('fine-uploader/lib/s3');
const SA = require('superagent');
let x = new fu.s3.FineUploaderBasic({
request: {
endpoint: 'they-taken-mah-bucket.s3.amazonaws.com'
},
credentials: {
accessKey: 'invalid',
expiration: new Date(),
secretKey: 'invalid',
sessionToken: 'invalid'
},
objectProperties: {
bucket: 'they-taken-my-bucket',
key: 'filename'
},
autoUpload: false,
debug: true,
callbacks: {
onComplete: function(){
moveUpload({from:'active', to:'finished', hash: activeUpload.hash}).then( function() { good(hash); });
},
onError: function(id, name, reason, xhrCache){
moveUpload({from:'active', to:'error', hash: activeUpload.hash}).then( () => bad(new Error('upload error - '+reason)) );
},
onProgress: function(id, name, uploaded, total){
const elapsed = (Date.now() - t.getTime()) / 1000;
const rate = uploaded / elapsed;
updateUploadProgress({hash: activeUpload.hash, progress: (100*uploaded/total).toFixed(0), rate: rate});
},
chunking: {
enabled: true,
concurrent: {
enabled: true
}
},
maxConnections: 5,
retry: {
enableAuto: true,
maxAutoAttempts: 10
},
onCredentialsExpired: function () {
return fetchCredentials();
}
}
});
The behavior I'm seeing: http://recordit.co/z5VnLR63eT
Essentially I see the OPTIONS request, which goes fine, and the upload starts correctly, but I only see one outbound connection, and the content type is not what I would expect: it's multipart form instead of raw. Though perhaps I'm wrong in this expectation; I would have expected it to just be a raw binary POST.
Any advice would be most appreciated.
Your options are not set correctly, and this is why concurrent chunking is not enabled.
You defined the chunking option inside the callbacks section. Move it out of callbacks (along with maxConnections and retry).
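Roughly, the corrected shape is the following (a sketch; credentials, objectProperties, and the callback bodies are elided as in your snippet):
let x = new fu.s3.FineUploaderBasic({
  request: {
    endpoint: 'they-taken-mah-bucket.s3.amazonaws.com'
  },
  // ...credentials, objectProperties, autoUpload, debug as before...
  callbacks: {
    // ...onComplete, onError, onProgress, onCredentialsExpired as before...
  },
  // top-level options, siblings of callbacks rather than children of it:
  chunking: {
    enabled: true,
    concurrent: {
      enabled: true
    }
  },
  maxConnections: 5,
  retry: {
    enableAuto: true,
    maxAutoAttempts: 10
  }
});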

Why success callback is not called in extjs form submission?

I'm trying to upload a file using Ext JS forms and, in case of success or failure, show appropriate messages. But I can't get the desired result: neither the success nor the failure callback fires in the form.submit action.
What I've done till now is:
Creating a form with this script:
new Ext.FormPanel({
fileUpload: true,
frame: true,
url: '/profiler/certificate/update',
success: function() {
console.log(arguments);
},
failure: function() {
console.log(arguments);
}
}).getForm().submit()
/*
The response Content-Type is text/html (with charset=utf-8);
The response JSON is: { "success": true }
*/
Setting the response Content-Type to text/html based on this answer.
Sending an appropriate JSON result back, based on Ext JS docs. The response captured via Fiddler is:
{"success":false}
or
{"success":true}
I even set the response Content-Type to application/json. But still no success.
I've read links like this and this, but none of them helped. Please note that I also tried another script which creates a form, with an upload field in it, and a save button, and I submitted the form in the handler of the save button. But still no callback is fired.
Here's a working example (JavaScript code):
Ext.onReady(function () {
Ext.define('ImagePanel', {
extend: 'Ext.form.Panel',
fileUpload: true,
title: 'Upload Panel',
width: 300,
height: 100,
onUpload: function () {
this.getForm().submit({
url: 'upload.php',
scope: this,
success: function (formPanel, action) {
var data = Ext.decode(action.response.responseText);
alert("Success: " + data.msg);
},
failure: function (formPanel, action) {
var data = Ext.decode(action.response.responseText);
alert("Failure: " + data.msg);
}
});
},
initComponent: function () {
var config = {
items: [
{
xtype: 'fileuploadfield',
buttonText: 'Upload',
name: 'uploadedFile',
listeners: {
'change': {
scope: this,
fn: function (field, e) {
this.onUpload();
}
}
}
}
]
};
Ext.apply(this, Ext.apply(this.initialConfig, config));
this.callParent(arguments);
}
});
var panel = Ext.create('ImagePanel', {
renderTo: Ext.getBody()
});
});
And PHP code:
<?php
if (isset($_FILES)) {
$temp_file_name = $_FILES['uploadedFile']['tmp_name'];
$original_file_name = $_FILES['uploadedFile']['name'];
echo '{"success": true, "msg": "'.$original_file_name.'"}';
} else {
echo '{"success": false, "msg": "No Files"}';
}
I have been struggling with this for quite some time now as well. Here's my code:
Ext.getCmp('media-upload-form').getForm().doAction('submit', {
url: './services/recordmedia/upload',
method: 'post',
waitMsg: 'Please wait...',
params: {
entityId: this.entityId,
},
failure: function(form, action){
alert(_('Error uploading file'));
this.fireEvent('file-upload');
this.close();
},
success: function(form, action){
this.fireEvent('file-upload');
this.close();
},
scope: this
})
The response was always wrapped in <pre> tags by the browser, which caused the Ext JS lib not to call the callbacks. To fix this:
make sure your server returns the correct JSON: {"success":true}
make sure that the Content-Type is set to text/html
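In the PHP handler from the earlier answer, that would mean something like this (a sketch):
// send the JSON body with a text/html content type so the browser
// doesn't wrap the iframe response in <pre> tags
header('Content-Type: text/html');
echo '{"success": true, "msg": "ok"}';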
Actually, this is well covered by the docs for Ext.form.Panel and Ext.form.Basic. The problem with your code is that there are no "success" or "failure" config options on the form panel. You should put them in the config object passed to the submit action. So your code should look like:
new Ext.FormPanel({
fileUpload: true,
frame: true
}).getForm().submit({
url: '/profiler/certificate/update',
success: function() {
console.log(arguments);
},
failure: function() {
console.log(arguments);
}
});
Note the difference: in Ext 4, there is a form component (Ext.form.Panel), which is basically a view component concerned with how your form looks, and then there is the underlying form class (e.g. Ext.form.Basic) concerned with the functionality. Form submissions are handled by Ext.form.Basic (or whatever is returned by your form.getForm()).