Custom User Agent for a WebView - windows-8

Can I set a custom User-Agent for a WebView?
I need to show the mobile versions of websites.

It's easy to do:
// WebView.NavigateWithHttpRequestMessage expects a Windows.Web.Http.HttpRequestMessage
string ua = "Mozilla/5.0 (iPhone; CPU iPhone OS 6_0 like Mac OS X) " +
            "AppleWebKit/536.26 (KHTML, like Gecko) Version/6.0 Mobile/10A5376e Safari/8536.25";
var httpRequestMessage = new HttpRequestMessage(HttpMethod.Get, new Uri(url));
httpRequestMessage.Headers.Add("User-Agent", ua);
webView1.NavigateWithHttpRequestMessage(httpRequestMessage);

Per this MSDN Forum posting, you cannot. Could you host a lightweight proxy service (say, an Azure Web Site) to proxy the request for you?
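If you do go the proxy route, here is a rough sketch of what such a service could look like (an ASP.NET Web API controller; the /api/proxy route and the url query parameter are illustrative assumptions, not part of the original suggestion). The WebView navigates to the proxy instead of the target site, and the proxy fetches the page with the mobile User-Agent:

using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;

public class ProxyController : ApiController
{
    private const string MobileUa =
        "Mozilla/5.0 (iPhone; CPU iPhone OS 6_0 like Mac OS X) " +
        "AppleWebKit/536.26 (KHTML, like Gecko) Version/6.0 Mobile/10A5376e Safari/8536.25";

    // GET /api/proxy?url=https://example.com (hypothetical route)
    public async Task<HttpResponseMessage> Get(string url)
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("User-Agent", MobileUa);
            // GetAsync buffers the body, so the response can be handed straight back to the caller.
            return await client.GetAsync(url);
        }
    }
}

Note that a real proxy would also have to rewrite relative links and resource URLs in the returned HTML, which is the main cost of this approach.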

You can load the HTML with a custom user agent yourself and then pass the HTML to the WebView.
Loading the HTML:
var handler = new HttpClientHandler { AllowAutoRedirect = false };
var client = new HttpClient(handler);
client.DefaultRequestHeaders.Add("user-agent",
    "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; WOW64; Trident/6.0)");
var response = await client.GetAsync(url);
response.EnsureSuccessStatusCode();
var html = await response.Content.ReadAsStringAsync();
Assign the HTML to the WebView:
WebView.NavigateToString(html);
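One caveat with NavigateToString: the WebView no longer knows the page's original URL, so relative links, images, and stylesheets will not resolve. A common workaround is to inject a <base> element before handing the HTML over - a rough sketch, assuming the page contains a literal <head> tag:
html = html.Replace("<head>", "<head><base href=\"" + url + "\">");
WebView.NavigateToString(html);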

Related

How to use a UserAgent, headers and CookieContainer on WebView2?

Using a WebView2 control, I am trying to load a webpage, but after I log into it, it seems to have some sort of block for a generic, poorly configured browser, because it keeps loading instead of proceeding after the login. So I would like to add a CookieContainer and specify that cookies should be used, add headers that specify that decompression is supported and which decompression methods are handled, and set the user agent on the WebView2 control, the same way this answer works for HttpRequest.
Looking online I only found some code that I've tried to put together, but it's C# and I'm trying to convert it to VB.NET; no online tool has succeeded in converting it yet.
Private Sub webView2_NavigationStarting(sender As Object, e As Microsoft.Web.WebView2.Core.CoreWebView2NavigationStartingEventArgs) Handles webView2.NavigationStarting
    webView2.AddScriptToExecuteOnDocumentCreated("
        window.WebView2.addEventListener('beforenavigate', function(event) {
            event.preventDefault();
            var xhr = new XMLHttpRequest();
            xhr.open(event.detail.verb, event.detail.uri, true);
            xhr.setRequestHeader('User-Agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.51 Safari/537.36');
            xhr.setRequestHeader('Cache-Control', 'no-cache');
            xhr.setRequestHeader('Accept-Encoding', 'gzip, deflate');
            xhr.onreadystatechange = function() {
                if (xhr.readyState === XMLHttpRequest.DONE) {
                    window.WebView2.injectWebResource(event.detail.id, xhr.responseText);
                }
            };
            xhr.send();
        });
    ")
End Sub
Am I using the right methods?
Edit 1:
I've managed to add the user agent:
Private Sub WebView21_NavigationStarting(sender As Object, args As Microsoft.Web.WebView2.Core.CoreWebView2NavigationStartingEventArgs) Handles WebView21.NavigationStarting
    Dim userAgent As String = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.51 Safari/537.36"
    Dim script As String = $"window.navigator.userAgent = '{userAgent}';"
    WebView21.CoreWebView2.AddScriptToExecuteOnDocumentCreatedAsync(script)
End Sub
but still it doesn't proceed after the login.
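For what it's worth, recent WebView2 SDKs expose the user agent and the cookie store directly on CoreWebView2, which tends to be more reliable than script injection (navigator.userAgent is read-only, so assigning to it does not change the header actually sent). A minimal C# sketch - the VB.NET translation is mechanical, the cookie name, value, and domain are placeholders, and gzip/deflate decompression is already handled by the browser engine, so no Accept-Encoding header is needed:

// Assumes a WinForms form with a WebView2 control named webView2 and a recent WebView2 SDK/runtime.
private async void Form1_Load(object sender, EventArgs e)
{
    await webView2.EnsureCoreWebView2Async(null);

    // Applies to every request the control makes from now on.
    webView2.CoreWebView2.Settings.UserAgent =
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 " +
        "(KHTML, like Gecko) Chrome/99.0.4844.51 Safari/537.36";

    // Cookies live in the control's own store; add one up front only if the site needs it.
    var cookie = webView2.CoreWebView2.CookieManager.CreateCookie(
        "cookieName", "cookieValue", ".example.com", "/");
    webView2.CoreWebView2.CookieManager.AddOrUpdateCookie(cookie);

    webView2.CoreWebView2.Navigate("https://example.com/login");
}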

Converting HTML to PDF from https requiring authentication

I've been trying to convert HTML to PDF from my company's HTTPS site, which requires authentication.
I tried directly converting it with pdfkit first.
pdfkit.from_url("https://companywebsite.com", 'output.pdf')
However, I'm receiving these errors:
Error: Authentication Required
Error: Failed to load https://companywebsite.com,
with network status code 204 and http status code 401 - Host requires authentication
So I added an options argument:
options = {'username': username,
           'password': password}
pdfkit.from_url("https://companywebsite.com", 'output.pdf', options=options)
It loads forever without producing any output.
My second method was to try creating a session with requests:
import pdfkit
import requests
from requests.auth import HTTPBasicAuth

def download(session, username, password):
    # Authenticate, set a browser-like User-Agent, then post the forms-auth payload.
    session.get('https://companywebsite.com', auth=HTTPBasicAuth(username, password), verify=False)
    ua = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36'
    session.headers = {'User-Agent': ua}
    payload = {'UserName': username,
               'Password': password,
               'AuthMethod': 'FormsAuthentication'}
    session.post('https://companywebsite.com', data=payload, headers=session.headers)
    # Save the page I want as HTML, then convert it to PDF with wkhtmltopdf.
    my_html = session.get('https://companywebsite.com/thepageiwant')
    my_pdf = open('myfile.html', 'wb+')
    my_pdf.write(my_html.content)
    my_pdf.close()
    path_wkhtmltopdf = r'C:\Program Files\wkhtmltopdf\bin\wkhtmltopdf.exe'
    config = pdfkit.configuration(wkhtmltopdf=bytes(path_wkhtmltopdf, 'utf8'))
    pdfkit.from_file('myfile.html', 'out.pdf', configuration=config)

session = requests.Session()
download(session, username, password)
Could someone help me? I am getting 200 from session.get, so it's definitely getting the session.
Maybe try using Selenium to access that site and snap a screenshot.

Unable to login into PSN using Python requests module

I am trying to log in to PSN (https://www.playstation.com/en-in/sign-in-and-connect/) using the Python requests module and the API URL taken from the browser's inspect element. Below is the code:
import requests

login_data = {
    'password': "mypasswordhere",
    'username': "myemailhere",
}
header = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36'
}
with requests.Session() as s1:
    url = "https://auth.api.sonyentertainmentnetwork.com/2.0/oauth/token"
    r = s1.post(url, data=login_data, headers=header)
    print(r.text)
With this, I got the response below from the server:
{"error":"invalid_client","error_description":"Bad client credentials","error_code":4102,"docs":"https://auth.api.sonyentertainmentnetwork.com/docs/","parameters":[]}
Is there an alternative method to log in to the PSN network, preferably using the API model instead of Selenium? My objective is to log in to PSN with my credentials and change the password, but I seem to be stuck at the login page only...

Using Selenium with Phantomjs in node not returning results

I have the following Node route using Selenium and the Chrome driver, which works correctly and returns the expected HTML in the console:
app.get('/google', function (req, res) {
  var driver = new webdriver
    .Builder()
    .forBrowser('chrome')
    .build();
  driver.get('https://www.google.com');
  driver
    .manage()
    .window()
    .setSize(1200, 1024);
  driver.wait(webdriver.until.elementLocated({xpath: '//*[@id="lst-ib"]'}));
  return driver
    .findElement({xpath: '//*[@id="lst-ib"]'})
    .sendKeys('stackoverflow' + webdriver.Key.RETURN)
    .then((html) => {
      return driver
        .findElement({xpath: '//*[@id="rso"]/div[1]/div/div/div/div'})
        .getAttribute("innerHTML");
    })
    .then((result) => {
      console.log(result);
    })
    .then(() => {
      res
        .status(200)
        .send('ok');
    });
});
I have also installed the PhantomJS driver and tested that it's working by returning the URL title - it works. When I use the exact route above and replace Chrome with PhantomJS, I get no results returned. There are no errors - just no printout in my console. The status and result are never sent to the browser, so it doesn't appear to be stepping through the promise chain.
Any suggestions?
The issue was that different HTML was being rendered depending on the user agent. By forcing a user agent I was able to retrieve the results I needed.
Here is the snippet that replaces the builder code above to get this working:
.Builder()
// .forBrowser('phantomjs')
.withCapabilities(webdriver.Capabilities.phantomjs()
.set("phantomjs.page.settings.userAgent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36"))
.build();

Using WebClient to Download CSV File From SSRS Report

I'm trying to download a csv file from an SSRS report using the following code.
Const URI As String = "https://blah.blah.com/blah/_layouts/15/ReportServer/RSViewerPage.aspx?rv:RelativeReportUrl=/blah/Production%20Reports/The_File.rdl&rs:format=csv"
Const DESTINATION As String = "C:\MyFile.csv"
Using myWebClient As WebClient = New WebClient()
    With myWebClient
        .Headers.Add("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8")
        .Headers.Add("Accept-Encoding", "gzip, deflate, sdch, br")
        .Headers.Add("Accept-Language", "en-US,en;q=0.8")
        .Headers.Add("Content-Disposition", "attachment; filename=%22The%5FFile.csv%22")
        .Headers.Add("Content-Encoding", "gzip")
        .Headers.Add("Content-Type", "text/csv")
        .Headers.Add("Vary", "Accept-Encoding")
        .Headers.Add("User-Agent", "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36")
        .Headers.Add("Upgrade-Insecure-Requests", "1")
        .Headers.Add("Referer", URI)
        .Headers.Add("Cache-Control", "private")
        .Credentials = New NetworkCredential("<my username>", "<my password>")
        .DownloadFile(URI, DESTINATION)
    End With
End Using
The problem is that the file that gets downloaded isn't a CSV file. When I open it in any text editor, all I see are "garbage" characters, which suggests some sort of encoding is being applied. If I comment out the "Accept-Encoding" header and rerun the code, I get the HTML of the resulting page - not the CSV file I need. Does anyone know how I can download the file correctly? BTW, I'm not sure all of the headers I added are necessary.
You need to change the URI constant from this:
Const URI As String = "https://blah.blah.com/blah/_layouts/15/ReportServer/RSViewerPage.aspx?rv:RelativeReportUrl=/blah/Production%20Reports/The_File.rdl&rs:format=csv"
To this:
Const URI As String = "https://blah.blah.com/blah/_layouts/15/ReportServer/RSViewerPage.aspx?rv:RelativeReportUrl=/blah/Production%20Reports/The_File.rdl&rs:format=csv&rs:Command=Render"
You are missing the rs:Command=Render part of the URI.
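As an aside on the "garbage" characters: sending Accept-Encoding: gzip yourself tells the server it may compress the response, but WebClient will not decompress it for you. A small WebClient subclass (sketched in C# here; a VB.NET version is a direct translation, and the class name is just illustrative) lets HttpWebRequest handle the decompression automatically, in which case the manual Accept-Encoding, Content-Encoding, and Vary headers - the latter two are response headers anyway - can be dropped:

using System;
using System.Net;

public class DecompressingWebClient : WebClient
{
    protected override WebRequest GetWebRequest(Uri address)
    {
        var request = (HttpWebRequest)base.GetWebRequest(address);
        // HttpWebRequest adds the Accept-Encoding header itself and
        // transparently decompresses gzip/deflate responses.
        request.AutomaticDecompression = DecompressionMethods.GZip Or DecompressionMethods.Deflate;
        return request;
    }
}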