I'm trying to insert code into a client's website specifically for IE11, which renders a DIV incorrectly (no other browser does). But when I echoed the user agent string, IE11 reported Mozilla/5.0... what?
I'm using:
if (strpos($_SERVER['HTTP_USER_AGENT'], 'Trident/7.0; rv:11.0') !== false) {
    echo '<div style="position:relative; top:-15px;">';
}
[non IE code]
if (strpos($_SERVER['HTTP_USER_AGENT'], 'Trident/7.0; rv:11.0') !== false) {
    echo '</div>';
}
Help?
This may help.
IE 11 changed its user agent string, which is why you see Mozilla/5.0 when you output it: IE11 identifies itself as something like Mozilla/5.0 (Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko. Note also that extra tokens (e.g. Touch;) can appear between Trident/7.0 and rv:11.0, so matching the exact substring 'Trident/7.0; rv:11.0' can miss some IE11 installs; checking for 'Trident/7.0' alone is more reliable. For more information:
IE 11 sends different User-Agent header to different subdomains
http://msdn.microsoft.com/en-us/library/ie/hh869301%28v=vs.85%29.aspx
I'm using OWA in Safari, and when I call getAccessTokenAsync it returns 13001; it works fine in Chrome.
Windows (IE, Edge, Chrome) works fine
Mac (Chrome) works fine
Mac (Safari) returns 13001
I have tried passing { forceAddAccount: true }.
Office.onReady().then(function (value) {
    Office.context.auth.getAccessTokenAsync({ forceConsent: false }, function (result) {
        console.log('Checking token: ' + result.status);
    });
});
Expected result: status:'succeeded'
Actual result: 13001, the user is not signed in to Office
You need to check whether third-party cookies are enabled or disabled in Safari. If they are disabled, follow the instructions given here to enable them:
https://www.clubrunnersupport.com/article/710-enable-third-party-cookies-for-safari
I was checking the active links on a website with Selenium WebDriver and Java. I collected the links into a list, but while verifying them I get a 403 Forbidden response for every link on the site. It is a public website that anyone can access, and the links work fine when clicked manually. I want to know why it is not returning 200 and what can be done in this situation.
This is for Selenium WebDriver with Java.
for (int j = 0; j < activelinks.size(); j++) {
    String href = activelinks.get(j).getAttribute("href");
    System.out.println("Active Link address and status >>> " + href);
    HttpURLConnection connection = (HttpURLConnection) new URL(href).openConnection();
    connection.connect();
    String response = connection.getResponseMessage();
    int responsecode = connection.getResponseCode();
    connection.disconnect();
    System.out.println(href + ">>" + response + " " + responsecode);
}
I expect the response code to be 200, but the actual output is 403.
I believe you need to add the relevant cookies to the HttpURLConnection, or better yet consider switching to the OkHttp library, which is used under the hood of the Selenium Java client.
So you basically need to fetch the cookies from the browser using the driver.manage().getCookies() function and generate a proper Cookie request header for the subsequent calls.
Example code:
// Build a Cookie header value from the browser's cookies
StringBuilder cookieBuilder = new StringBuilder();
driver.manage().getCookies()
        .forEach(cookie -> cookieBuilder
                .append(cookie.getName())
                .append("=")
                .append(cookie.getValue())
                .append(";"));

OkHttpClient client = new OkHttpClient().newBuilder().build();
for (WebElement activelink : activelinks) {
    Request request = new Request.Builder()
            .url(activelink.getAttribute("href"))
            .addHeader("Cookie", cookieBuilder.toString())
            .build();
    // newCall(...).execute() throws IOException, so declare or handle it
    Response urlResponse = client.newCall(request).execute();
    String response = urlResponse.message();
    int responsecode = urlResponse.code();
    System.out.println(activelink.getAttribute("href") + ">>" + response + " " + responsecode);
}
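Alternatively, if you want to stay with HttpURLConnection, here is a minimal sketch of the same cookie-forwarding idea for a single link, assuming the cookieBuilder built above:
// Forward the browser's cookies over a plain HttpURLConnection
HttpURLConnection connection = (HttpURLConnection) new URL(
        activelink.getAttribute("href")).openConnection();
connection.setRequestProperty("Cookie", cookieBuilder.toString());
connection.connect();
System.out.println(activelink.getAttribute("href") + ">>"
        + connection.getResponseMessage() + " " + connection.getResponseCode());
connection.disconnect();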
If you need nothing but the response code, consider using the HEAD method instead of fetching the full URLs; this saves traffic and makes your test much faster.
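A minimal sketch of the HEAD variant with OkHttp, assuming the same client and cookieBuilder as above, inside the same loop:
// HEAD request: same status code, but no response body is transferred
Request headRequest = new Request.Builder()
        .url(activelink.getAttribute("href"))
        .head()
        .addHeader("Cookie", cookieBuilder.toString())
        .build();
try (Response headResponse = client.newCall(headRequest).execute()) {
    System.out.println(headRequest.url() + " >> " + headResponse.code());
}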
403 Forbidden
The HTTP 403 Forbidden client error status response code indicates that the server understood the request but refuses to authorize it.
This status is similar to 401, but in this case, re-authenticating will make no difference. The access is permanently forbidden and tied to the application logic, such as insufficient rights to a resource.
Reason
I don't see any such issue in your code block. However, there is a possibility that the WebDriver-controlled browser client is getting detected, and hence the subsequent requests are being blocked. Detection can be based on numerous factors, including:
User agent
Plugins
Languages
WebGL
Browser features
Missing image
You can find a couple of detailed discussions in:
How does recaptcha 3 know I'm using selenium/chromedriver?
Selenium and non-headless browser keeps asking for Captcha
Solution
A generic solution is to use a proxy, or rotating proxies, e.g. from the Free Proxy List.
You can find a detailed discussion in Change proxy in chromedriver for scraping purposes
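As an illustration, a minimal sketch of pointing ChromeDriver at a proxy (proxy.example.com:8080 is a hypothetical address, not a recommendation):
import org.openqa.selenium.Proxy;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.CapabilityType;

// Route the WebDriver-controlled browser through the proxy
Proxy proxy = new Proxy();
proxy.setHttpProxy("proxy.example.com:8080");
proxy.setSslProxy("proxy.example.com:8080");

ChromeOptions options = new ChromeOptions();
options.setCapability(CapabilityType.PROXY, proxy);
WebDriver driver = new ChromeDriver(options);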
Outro
You can find a couple of relevant discussions in:
Can a website detect when you are using selenium with chromedriver?
Selenium webdriver: Modifying navigator.webdriver flag to prevent selenium detection
Failed to load resource: the server responded with a status of 429 (Too Many Requests) and 404 (Not Found) with ChromeDriver Chrome through Selenium
Had the same problem; the user agent was the issue in my case (read more here: https://www.javacodegeeks.com/2018/05/how-to-handle-http-403-forbidden-error-in-java.html).
Also check which request methods are allowed on your website. You can do that by looking at any endpoint in the "Network" tab in Chrome DevTools; it should list the allowed request methods. In my case I couldn't use HEAD, but GET did the trick.
Code:
List<WebElement> links = driver.findElements(By.tagName("a"));
boolean brokenLink = false;
for (WebElement link : links) {
    String url = link.getAttribute("href");
    if (url == null || url.isEmpty()) {
        continue; // skip anchors without an href
    }
    HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
    conn.setRequestMethod("GET");
    conn.setRequestProperty("Content-Type", "application/json");
    // A browser-like User-Agent is what avoided the 403 in my case
    conn.setRequestProperty("User-Agent",
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36");
    conn.connect();
    int httpCode = conn.getResponseCode();
    if (httpCode >= 400) {
        System.out.println("BROKEN LINK: " + url + " " + httpCode);
        brokenLink = true;
    } else {
        System.out.println("Working link: " + url + " " + httpCode);
    }
}
// Assert once after the loop so every link gets checked and reported
Assert.assertFalse(brokenLink);
I'm developing an Angular 2 application (single-page application). My page is never "reloaded", but its content changes according to user interactions.
I'm having some cache problems especially with images.
Context:
My page contains an editable image list:
<ul>
    <li><img src="myImageController/1">Edit</li>
    <li><img src="myImageController/2">Edit</li>
    <li><img src="myImageController/3">Edit</li>
</ul>
When I want to edit an image (Edit link), my DOM content is completely replaced to show another Angular component with a file-upload component.
The myImageController returns the Last-Modified header, plus Cache-Control: no-cache, must-revalidate.
After a refresh (F5), my page sends a request for every img src, which is correct: if an image has been modified it is downloaded; if not, I just get a 304, which is fine.
Note: my images are stored in the database as blob fields.
Problem:
When my page content is dynamically reloaded by my single-page app and contains img tags, the browser does not issue a GET request but immediately takes the image from cache. I assume this is a browser optimization to avoid fetching the same resource multiple times on the same page.
Wrong solutions:
The first solution is to append something like ?time=(new Date()).getTime() to generate unique URLs and defeat the browser cache. But then the request never sends the If-Modified-Since header, and I download the complete image every time.
Doing a "real" refresh: the first page load in Angular apps is quite slow, and I don't want to reload everything.
Tests
To simplify the problem, I tried creating a static HTML page containing 3 images with the exact same link to my controller: /myImageController/1. With the Chrome developer tools, I can see that only one GET request is issued. If I manage to get multiple server calls in this case, it would probably solve my problem.
Thank you for your help.
The 5th version of the HTML specification describes this behavior: the browser may reuse images regardless of cache-related HTTP headers. Check this answer for more information. You probably need to use XMLHttpRequest and blobs; in that case you also need to consider the Same-origin policy.
You can use the following function to make sure the user agent performs every request:
var downloadImage = function ( imgNode, url ) {
var xhr = new XMLHttpRequest();
xhr.open("GET", url, true);
xhr.responseType = "blob";
xhr.onreadystatechange = function () {
if (xhr.readyState == 4) {
if (xhr.status == 200 || xhr.status == 304) {
var blobUrl = URL.createObjectURL(xhr.response);
imgNode.src = blobUrl;
// You can also use imgNode.onload callback to release blob resources.
setTimeout(function () {
URL.revokeObjectURL(blobUrl);
}, 1000);
}
}
};
xhr.send();
};
For more information, check the New Tricks in XMLHttpRequest2 article by Eric Bidelman, the Working with files in JavaScript, Part 4: Object URLs article by Nicholas C. Zakas, and the URL.createObjectURL() and Same-origin policy MDN pages.
You can use the random ID trick. This changes the URL so that the browser reloads the image. Note that this can be done in the query parameters to force a full cache break, or in the hash to allow the browser to revalidate the image against the cache (and avoid re-downloading it if unchanged).
function reloadWithCache(img: HTMLImageElement, url: string) {
    // Changing only the hash gives the browser a "new" URL while still
    // letting it revalidate the cached image
    img.src = url.replace(/#.*/, "") + "#" + Math.random();
}

function reloadBypassCache(img: HTMLImageElement, url: string) {
    // A unique query parameter forces a full re-download every time
    let sep = url.indexOf("?") === -1 ? "?" : "&";
    img.src = url + sep + "nocache=" + Math.random();
}
Note that if you are using reloadBypassCache regularly, you are better off fixing your cache headers. This function will always hit your origin server, leading to higher running costs and making CDNs ineffective.
I read Basic authentication with Selenium in Internet Explorer 10
I changed my registry key, and when I put the user and password in the URL I don't see the basic authentication popup, but the page does not actually load. I see a blank page!
I see my URL in IE but nothing happens; I see a white page.
Must I change something in IE too?
It is not possible without some workarounds.
I also needed the same feature, and a previous SO answer confirms that it is either impossible, or possible only with a high probability of failure.
One thing I learned about Protractor is not to try anything too complicated with it, or I'll have a bad time.
As for the feature: I ended up making Protractor initiate a Node.js task, which uses request to perform the authentication and hand back the data.
Taken straight from the request module:
request.get('http://some.server.com/').auth('username', 'password', false);
// or
request.get('http://some.server.com/', {
    'auth': {
        'user': 'username',
        'pass': 'password',
        'sendImmediately': false
    }
});
// or
request.get('http://some.server.com/').auth(null, null, true, 'bearerToken');
// or
request.get('http://some.server.com/', {
    'auth': {
        'bearer': 'bearerToken'
    }
});
I am testing with Selenium RC 1.0.3 and I am getting a Permission denied error message in IE when I run my IDE script from the command line.
I am trying to run an IDE script in Internet Explorer using Selenium RC 1.0.3
from the command line:
java -jar selenium-server.jar -htmlsuite "*iexploreproxy" "url address/where" "C:\Users\sat\Documents\selenium\suite.html" "C:\Users\sat\Documents\selenium scripts\results.htm"

At this point the IE window pops up, as below:
I get a security warning saying "Do you want to view only the webpage content that was delivered securely?" I hit Yes, and then I see this error in the test runner window:
Webpage error details
Message: Access is denied.
Line: 177
Char: 9
Code: 0
URI: xx.xx.xx.xxx/selenium-server/core/scripts/selenium-testrunner.js
UPDATE:
I looked at line 177, char 9 in the script, and it points to:
var runInterval = 0;
/** SeleniumFrame encapsulates an iframe element */
var SeleniumFrame = classCreate();
objectExtend(SeleniumFrame.prototype, {
initialize : function(frame) {
this.frame = frame;
addLoadListener(this.frame, fnBind(this._handleLoad, this));
},
getWindow : function() {
return this.frame.contentWindow;
},
getDocument : function() {
return this.frame.contentWindow.document; // line 177, char 9
},
_handleLoad: function() {
this._attachStylesheet();
this._onLoad();
if (this.loadCallback) {
this.loadCallback();
}
Do you know what the error is about? Why do I get it? I see my test cases and everything in the test runner window, but I can't run them in the IE browser. I searched the web to no avail.
I do not have much experience with running tests from the command line, but have you tried starting Selenium RC with administrator permissions?
Any particular reason for not using the new Selenium 2 IWebDriver and a test framework?
The error might be caused by the use of iframe/frameset and IE's security settings. By default, if a site uses iframe/frameset and the frame content originates from two different root domains, IE treats this as a security risk. Try adding the sites to your trusted sites list in IE. Have you tried using the Firefox driver instead (it does not have this security restriction)?