I am trying to test/run/learn some WebRTC code and am having trouble understanding this code from Mozilla:
...
var pc = new RTCPeerConnection();
...
pc.createOffer(function(offer) {
pc.setLocalDescription(new RTCSessionDescription(offer), function() {
// send the offer
}, error);
}, error);
The problem I have in understanding this is that pc.createOffer already returns an "offer" object with two properties: type and sdp. So, why is "new RTCSessionDescription(offer)" passed as an argument to pc.setLocalDescription and not "offer" itself as returned by pc.createOffer?
I read the RTCSessionDescription interface documentation here. What did I miss?
Note that the MDN page you reference says "Draft. This page is not complete."
The code will work as written - because an RTCSessionDescription can effectively be cloned by passing one into another's constructor - but you're right that it is redundant, so I've updated the page to remove it. The object returned from createOffer will always suffice. Thanks for catching it.
Note that you still need to call new RTCSessionDescription in situations where the offer/answer comes in over the wire, for now.
But you'll be glad to know that's being fixed soon as well: the spec recently changed, so you won't ever have to call the new RTCSessionDescription constructor. Much simpler. Note, though, that browsers like Firefox haven't been updated to allow this simpler syntax yet. This is just a convenience, and there is no behavioral difference.
Update: adapter.js has a shim for this, so you can get rid of them now if you're using adapter.js.
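For reference, here's a rough sketch of what the newer promise-based flow looks like once the constructor is no longer needed (sendOffer here is a stand-in for whatever signaling mechanism you use):

var pc = new RTCPeerConnection();

pc.createOffer()
  .then(function(offer) {
    // The plain offer object can be passed straight in; no
    // new RTCSessionDescription(offer) wrapper is required.
    return pc.setLocalDescription(offer);
  })
  .then(function() {
    // send the offer over your signaling channel
    sendOffer(pc.localDescription);
  })
  .catch(function(error) {
    console.error('Failed to create or set the offer:', error);
  });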
I have the following code...
async function GetFirstAssessment() {
try {
const response = await axios.get('http://192.168.254.10/App/GetFirstAssessment/');
return response.data;
} catch (error) {
alert('error: ' + error);
console.error(error);
}
};
It's been working fine for some time, but suddenly it no longer works and eventually times out. Or I don't even know if it "times out", since I believe the default timeout for axios is 0, but eventually it does error with the message "Error: Network Error". I've also added a picture of the stack trace, but I don't think it's helpful.
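In case it's relevant, here's a variant I could try with an explicit timeout and more detailed error logging (the 10-second value is arbitrary; error.response and error.request are standard axios error fields):

async function GetFirstAssessment() {
  try {
    const response = await axios.get('http://192.168.254.10/App/GetFirstAssessment/', {
      timeout: 10000 // fail fast instead of hanging indefinitely
    });
    return response.data;
  } catch (error) {
    // error.response is set if the server replied with a non-2xx status;
    // error.request is set if the request went out but no response came back
    console.error(error.message, error.response || error.request);
  }
}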
I can put the url in a browser and it returns the json I'm expecting, so the problem is definitely not on the server side. I am testing from an android device connected via usb and developing with cli (not expo).
If there is any other information I can provide please let me know... not sure what to do next because this really makes no sense. I would wonder if it was a security issue except that it was working perfectly earlier. Also I have updated some other code that calls this, but even after reverting I still have the same problem... and seeing as how it is making it to the catch, I don't see how any other code could be affecting this.
I did just install the standalone react native debugger. I believe it has worked since I installed it, though I'm not 100% certain on that. Still doesn't work after closing it.
I set a break point in the server code on the first line of the api method, but it doesn't get hit. Not sure how to troubleshoot further up the chain though. I also just thought to check fiddler and it doesn't show any request coming in, though I honestly don't know if it should normally or not.
I'm trying to make my online bot undetectable. I read a number of articles on how to do it, and I put all the tips together and used them. One of them is to change window.navigator.webdriver.
I managed to change window.navigator.webdriver within puppeteer by this code:
await page.evaluateOnNewDocument(() => {
Object.defineProperty(navigator, 'webdriver', {
get: () => undefined
});
});
I'm bypassing this test just fine:
However this test is still laughing at me somehow:
Why is WEBDRIVER inconsistent?
Try this.
First, remove the property definition; it will not work if you both define the getter and delete from the prototype.
Object.defineProperty(navigator, 'webdriver', { get: () => undefined }) // <-- delete this part
Replace your code with this one:
delete navigator.__proto__.webdriver;
Result:
Why does it work?
Deleting the property directly only removes it from the instance rather than the actual definition. The getter and setter are still there on the prototype, so the browser can still find them.
However, if you remove it from the actual prototype, it will no longer exist on any instance.
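Put together, a minimal sketch using the same evaluateOnNewDocument hook from the question, just with the prototype delete instead of the getter override:

await page.evaluateOnNewDocument(() => {
  // Remove the accessor from the prototype itself so no navigator instance exposes it
  delete navigator.__proto__.webdriver;
});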
Additional Tips
You mentioned you want to make your app undetectable. There are many plugins that achieve this; for example, the package called puppeteer-extra-plugin-stealth includes some cool anti-bot-detection techniques. Sometimes it's better to reuse an existing package than to re-create a solution over and over again.
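For example, a rough sketch of how that plugin is typically wired up (assuming puppeteer-extra and puppeteer-extra-plugin-stealth are installed):

const puppeteer = require('puppeteer-extra');
const StealthPlugin = require('puppeteer-extra-plugin-stealth');

// Registers a bundle of evasions (navigator.webdriver among them) automatically
puppeteer.use(StealthPlugin());

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto('https://example.com');
  // ... run your bot logic here ...
  await browser.close();
})();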
PS: I might be wrong about the above explanation; feel free to correct me so I can improve the answer.
In Meteor JS I want to perform a task before adding an object to my collection. So I created my own method, eg: addObject like so:
Meteor.methods({
...
addObject: function(obj) {
/*
// this is what i'm trying to figure out...
if ( !MyColl.allow('insert', Meteor.userId, obj) )
throw Meteor.Error(403, 'Sorry');
*/
MyColl.update({ ... }, { ... }, { 'multi': true });
MyColl.insert(obj);
},
...
});
But I noticed that .allow is no longer being called because it's "trusted" code. The thing is, the method is on the server but is being called from the client (through ObjectiveDDP), so I still need a way to validate that the client has permission to call addObject. Is there any way to manually call .allow() on a collection from my server code? I tried it but got an internal server error, and I'm not sure what the syntax should be... I couldn't find anything in the Meteor docs.
Edit:
I just found out that this works:
var allowedToInsert = MyColl._validators.insert.allow[0];
if (!allowedToInsert)
throw new Meteor.Error(403, 'Invalid permissions.');
But calling private APIs such as _validators is probably a no-no. Does anyone know of a more 'best practices' way?
You can do validation in the addObject method. For example, if you only want logged-in users to be able to add an object, you can write:
if (!Meteor.user()) throw new Meteor.Error();
at the beginning of the addObject method.
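A minimal sketch of what that could look like inside your method (the 403 code and message are just examples):

Meteor.methods({
  addObject: function(obj) {
    // Reject the call unless a user is logged in
    if (!Meteor.userId())
      throw new Meteor.Error(403, 'You must be logged in to add an object.');

    MyColl.insert(obj);
  }
});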
Personally I never use allow.
On a related note, the collection2 and simple-schema packages can often help a lot with validation.
The validation pattern you are using might not be the best way to do things.
If you assume that methods can be called from the client, you shouldn't 'hack' it into doing something it's not meant to do.
If you are calling a method that changes data in the database you should check within that method that the currently logged in user has permission to do so.
However, are you sure you want to do this? Meteor also has Collection.allow and Collection.deny methods that you can use to define insert/update/remove permissions. That's the recommended way to handle permissions, so what you are doing is an anti-pattern. Perhaps in your case it is strictly necessary, but if not, you might want to rethink the use case.
Like another response suggests, using something like collection2 or simple-schema to validate data structure is also a good idea.
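For completeness, a small sketch of the allow style of permission check (the insert rule shown is only an example):

// Runs on the server for inserts initiated from the client
MyColl.allow({
  insert: function(userId, doc) {
    // Only allow logged-in users to insert
    return !!userId;
  }
});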
I get the following error within Magento CE 1.6.1.0
Warning: session_start() [function.session-start]: Cannot send session cookie - headers already sent by (output started at /home/dev/env/var/www/user/dev/wdcastaging/lib/Zend/Controller/Response/Abstract.php:586) in /home/dev/env/var/www/user/dev/wdcastaging/app/code/core/Mage/Core/Model/Session/Abstract/Varien.php on line 119
when accessing /api/soap/?wsdl
Apparently, a session_start() is being attempted after the entire contents of the WSDL file have already been output, resulting in the error.
Why is Magento attempting to start a session after outputting all the datums? I'm glad you asked. So it looks like controller_front_send_response_after is being hooked by Mage_Persistent in order to call synchronizePersistentInfo(), which in turn ends up causing that session_start() to fire.
The interesting thing is that this wasn't always happening; initially the WSDL loaded just fine for me. I racked my brains trying to see what customization might have been made to our install to cause this, but the tracing I've done seems to indicate that this is all happening entirely inside of core.
We have also experienced a tiny bit of (completely unrelated) strangeness with Mage_Persistent which makes me a little more willing to throw my hands up at this point and SO it.
I've done a bit of searching on SO and have found some questions related to the whole "headers already sent" thing in general, but not this specific case.
Any thoughts?
Oh, and the temporary workaround I have in place is simply disabling Mage_Persistent via the persistent/options/enable config data. I also did a little digging into whether it might be possible to observe an event in order to disable this module only for the WSDL controller (since that seems to be the only one having problems), but it looks like that module relies exclusively on this config flag to determine its enabled status.
UPDATE: Bug has been reported: http://www.magentocommerce.com/bug-tracking/issue?issue=13370
I'd report this as a bug to the Magento team. The Magento API controllers all route through standard Magento action controller objects, and all of these objects inherit from the Mage_Api_Controller_Action class. This class has a preDispatch method
class Mage_Api_Controller_Action extends Mage_Core_Controller_Front_Action
{
public function preDispatch()
{
$this->getLayout()->setArea('adminhtml');
Mage::app()->setCurrentStore('admin');
$this->setFlag('', self::FLAG_NO_START_SESSION, 1); // Do not start standart session
parent::preDispatch();
return $this;
}
//...
}
which includes setting a flag to ensure normal session handling doesn't start for API methods.
$this->setFlag('', self::FLAG_NO_START_SESSION, 1);
So, it sounds like there's code in synchronizePersistentInfo that assumes the existence of a session object, and when it uses it the session is initialized, resulting in the error you've seen. Normally this isn't a problem, as every other controller has initialized a session at this point, but the API controllers explicitly turn it off.
As far as fixes go, your best bet (and probably the quick answer you'll get from Magento support) will be to disable the persistent cart feature for the default configuration setting, but then enable it for the specific stores that need it. This will let carts persist on the stores that need the feature while keeping it disabled everywhere else.
Coming up with a fix on your own is going to be uncharted territory, and I can't think of a way to do it that isn't terribly hacky/unstable. The most straightforward way would be a class rewrite of synchronizePersistentInfo that calls its parent method unless you've detected that this is an API request.
This answer is not meant to replace the existing answer. But I wanted to drop some code in here in case someone runs into this issue, and comments don't really allow for code formatting.
I went with a simple local code pool override of Mage_Persistent_Model_Observer_Session to exit out of the function for any URL routes that are within /api/*
I'm not expecting this fix to need to be very long-lived or upgrade-friendly, because I'm expecting them to fix this in the next release or so.
public function synchronizePersistentInfo(Varien_Event_Observer $observer)
{
...
if ($request->getRouteName() == 'api') {
return;
}
...
}
I'm very new at this; in fact, this is my first Dojo attempt. I'm trying to get data from a website with:
<script
text="text/javascript"
src="http://o.aolcdn.com/dojo/1.3/dojo/dojo.xd.js"
djConfig="parseOnLoad:true,isDebug:true"
></script>
<script type="text/javascript">
//How are we supposed to know what else to include in the dojo thing? like query?
dojo.addOnLoad(function(){
console.log("hi");
dojo.xhrPost({
url: "http://www.scrapbookingsuppliesrus.com/catalog/layout", //who knows how to set relative urls?
handleAs: "json", //get json data from server
load: function(response, ioArgs){
console.log("got");
console.log(response); //helps with the debugging
return response; //that way goods should be a very large array of the data we want
},
error: function(response, ioArgs){
console.log("nope didn't make it", response+' '+ioArgs); //helps with the debugging
return response; //who knows what this does
} //last item, thus no extra comma
});
});
</script>
But nothing happens. While I'm at it, what exactly are the response and ioArgs variables? They're supposed to magically be the response to the request, which I guess is specially defined already. But who knows. Further, I figured every attempt would trigger something in load or error, but alas.
There used to be an error that I was going to a prohibited URI, but then Firebug would reference a very large Dojo script where it was impossible to tell why it was broken. What environment are the rest of you developing on?
Well, there are a couple of issues.
Let's start with the very easy ones first.
When you are developing you want to use the "uncompressed" version of Dojo, which can be found by appending .uncompressed.js to the path of the Dojo library being used:
http://o.aolcdn.com/dojo/1.3/dojo/dojo.xd.js.uncompressed.js
This will make it much easier to see what breaks, if it is in core-Dojo that is.
Next, the djConfig parameter. Dojo can handle the attribute string form you used, but traditionally it has been defined with a global object. Note that a global djConfig object must be defined before the script tag that loads the Dojo library, so start a script block above it and define the djConfig object in there:
<script>
djConfig = {
parseOnLoad: true,
isDebug: true
};
</script>
<script src="path to dojo"></script>
Next simplest: I use IntelliJ IDEA to develop; it has built-in code sense for Dojo, which makes life much easier. Otherwise, the standard setup is Firefox + Firebug.
The complex stuff:
It seems that you are requesting data using the XHR method. Hopefully you are aware that this means your script and the data being accessed must reside on the same domain; if they don't, you'll get security errors. How can this be solved? You use the technique called cross-domain scripting, which Dojo also supports via the dojo.io.script.get functionality.
The more complex stuff:
Dojo works with something called "Deferred" objects. The xhr call returns a Deferred right away, and you attach callbacks to it that run later, when the response actually arrives; that's the concept of "deferring" a block of code to a later time. The way this would then look in your case is like this:
var deferred = dojo.xhrPost({
url: "http://www.scrapbookingsuppliesrus.com/catalog/layout", //who knows how to set relative urls?
handleAs: "json" //get json data from server
});
if(deferred) {
deferred.addCallback(function(response){
console.log("got");
console.log(response); //helps with the debugging
return response; //that way goods should be a very large array of the data we want
});
deferred.addErrback(function(error){
console.log("nope didn't make it", error); //helps with the debugging
return error; //pass the error along to any later errbacks
});
}
And this should work out now.
As a personal note, i would not recommend using XHR but rather the dojo.io.script.get methodology, which over the long term is much more portable.
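A rough sketch of what that could look like, assuming the endpoint supports JSONP and accepts a callback parameter named "callback":

dojo.require("dojo.io.script");

dojo.addOnLoad(function(){
  dojo.io.script.get({
    url: "http://www.scrapbookingsuppliesrus.com/catalog/layout",
    callbackParamName: "callback", // the query parameter the server expects for JSONP
    load: function(response){
      console.log("got", response);
    },
    error: function(error){
      console.log("nope didn't make it", error);
    }
  });
});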
There were several very basic errors in your code.
<script ... djConfig="parseOnLoad:true,isDebug:true"/></script>
Here you use the short form of the script tag (/>), which is forbidden, and at the same time include the closing </script> too.
console.log("hi")
There is a semicolon missing at the end of that statement.
You try to load data from http://www.scrapbookingsuppliesrus.com/catalog/layout. Is your script also running on that domain? Otherwise, the security restrictions on cross-domain Ajax (google it) will prevent you from loading the data.
This question was also asked on the dojo-interest list and there are a few replies to the question on that list.
If your page is being served from www.scrapbookingsuppliesrus.com, then the code you've posted looks correct. The way you declare djConfig will work, and specifying load and error in the argument to xhrPost is correct. You're going to have to dig in and debug.
Take a look in Firebug's Console window. You should see the POST request, complete with request and response HTTP headers and the response text. If you don't see it, then I suspect something else has gone wrong that prevents your onLoad function from being called; I'd throw a console.log at the top of that too. If your onLoad function isn't getting called, you may want to click the little down arrow in the Script tab and set "Break on all errors".
As to what the response and ioArgs are.
Response is, well, the response. If the request was successful, it will be the JSON object (or XML DOM object, or HTML DOM object, or text, depending on handleAs). If the request failed, it will contain an object with details about the error.
ioArgs is an object that includes a slew of lower-level details about the xhr request. As an aside, I've found that if you attach a callback to the deferred returned by xhrGet:
var dfd = dojo.xhrGet(args);
dfd.addCallbacks(function(response) {...}, function(error){...});
then the callback is not passed the ioArgs argument, just the response (or error).
dojocampus.org is the official dojo documentation site, and includes a lot of detail and examples.
Dojo: The Definitive Guide from O'Reilly is an excellent book. If you're going to be doing a lot of dojo dev., it's a great resource.
First, let's start with the basics, then you can add all the functionality.
You only need this to implement Ajax (take advantage of Dojo):
function sendData(dataToPost) {
dojo.xhrPost({
url: "http://myUrl.html",
postData: "data="+dataToPost,
handleAs: "text",
load: function(text){
getData(text);
},
error: function(error){
alert(error);
}
});
}
Now you can use the data in your "getData" function
function getData(text){
myVariable = text;
}
You can find many tutorials showing how it's implemented. This is the most complete example I found; you can see how it's implemented, and all the code is available:
http://www.ibm.com/developerworks/java/library/j-hangman-app/index.html