I was wondering if anyone had more information on the specific risks of using ChromeDriver, as I was concerned by this statement:
"If possible, run ChromeDriver with a test account that has no access to sensitive local or network data. ChromeDriver should never be run with a privileged account."
I would like to know what the specific risks are when using a privileged account, and what preventative measures, if any, can be taken to protect against them.
Thank you in advance!
How Google Chrome Browser Works
In the article Chrome Browser Security, Stephanie Crawford mentions that Google has leveraged its power as a search engine by creating its Safe Browsing technology, which will automatically warn you if Chrome detects that a site you're visiting contains malware or phishing.
Chrome deploys this security measure through a security feature termed sandboxing: separating each process into an independent space to control how it functions individually. Chrome handles its workload as a series of multiple processes rather than as one large browser process. Each time you open a web page, Chrome launches one or more new processes to run the scripts on that page, and each Chrome extension and app runs in its own process.

Chrome implements sandboxing through this multi-process architecture. The security advantage of sandboxing comes from Chrome being able to control the access token for each process. A process's access token grants that process access to important information about your system, such as its files and registry keys. Chrome intercepts the access token of each process launched from the browser and modifies it to limit that process's access to such information. Chrome's sandboxing thus helps block web pages that try to install malware, capture your personal information, or obtain data from your hard drive.

The drawback of sandboxing is that it can't catch everything. A sandboxed process might still be able to access less secure file systems. It is also likely to miss registry keys and files managed by third-party software, such as a game or chat program that isn't native to the system.
WebDriver driven Chrome
While initiating a WebDriver-controlled Chrome browsing context through Selenium, we have recently been advocating the use of a certain command-line argument (see the sketch after the links below):
--no-sandbox: Disables the sandbox for all process types that are normally sandboxed.
See:
WebDriverException: unknown error: DevToolsActivePort file doesn't exist while trying to initiate Chrome Browser
How to configure ChromeDriver to initiate Chrome browser in Headless mode through Selenium?
unknown error: session deleted because of page crash from unknown error: cannot determine loading status from tab crashed with ChromeDriver Selenium
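For context, here is a minimal sketch of how that argument is passed through the Python Selenium bindings; the URL is a placeholder, and a chromedriver binary is assumed to be on the PATH:

```python
# Minimal sketch: starting a ChromeDriver-controlled browsing context
# with the sandbox disabled. Assumes the selenium Python bindings and
# a chromedriver binary available on the PATH.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
# Disables the sandbox for all process types that are normally sandboxed.
# Only do this with an unprivileged test account: every page you visit
# then runs without Chrome's main OS-level containment layer.
options.add_argument("--no-sandbox")

driver = webdriver.Chrome(options=options)
driver.get("https://www.example.com")  # placeholder URL
print(driver.title)
driver.quit()
```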
No Sandbox
There are a few more sandbox-related flags available, some of which enable the sandboxed processes to run without a job object assigned to them (this is required to allow Chrome to run in RemoteApps or Citrix). These flags can reduce the security of the sandboxed processes, allowing them to make certain API calls such as shutting down Windows or accessing the clipboard, and we also lose the chance to kill some processes until the outer job that owns them finishes.
--allow-no-sandbox-job: Disables usage of sandbox job.
--allow-sandbox-debugging: Allows debugging of sandboxed processes.
--disable-gpu-sandbox: Disables the GPU process sandbox.
--disable-namespace-sandbox: Disables usage of the namespace sandbox.
--disable-seccomp-filter-sandbox: Disables the seccomp filter sandbox (seccomp-bpf) (Linux only).
--disable-setuid-sandbox: Disables the setuid sandbox (Linux only).
--disable-win32k-lockdown: Disables the Win32K process mitigation policy for child processes.
--enable-audio-service-sandbox: Enables the audio service sandbox.
--gpu-sandbox-allow-sysv-shm: Allows shmat() system call in the GPU sandbox.
--gpu-sandbox-failures-fatal: Makes GPU sandbox failures fatal.
--no-sandbox-and-elevated: Disables the sandbox and gives the process elevated privileges (Windows only).
Sandbox
Sandbox leverages the OS-provided security to allow code execution that cannot make persistent changes to the computer or access information that is confidential. The architecture and exact assurances that the sandbox provides are dependent on the operating system.
Windows implementation principles:
Do not re-invent the wheel: It is tempting to extend the OS kernel with a better security model. Don't. Let the operating system apply its security to the objects it controls. On the other hand, it is fine to create application-level objects (abstractions) that have a custom security model.
Principle of least privilege: This should be applied both to the sandboxed code and to the code that controls the sandbox. In other words, the sandbox should work even if the user cannot elevate to super-user.
Assume sandboxed code is malicious code: For threat-modeling purposes, we consider the sandbox compromised (that is, running malicious code) once the execution path reaches past a few early calls in the main() function. In practice, it could happen as soon as the first external input is accepted, or right before the main loop is entered.
Be nimble: Non-malicious code does not try to access resources it cannot obtain. In this case the sandbox should impose near-zero performance impact. It's ok to have performance penalties for exceptional cases when a sensitive resource needs to be touched once in a controlled manner. This is usually the case if the OS security is used properly.
Emulation is not security: Emulation and virtual machine solutions do not by themselves provide security. The sandbox should not rely on code emulation, code translation, or patching to provide security.
There are corresponding implementation documents for Linux and macOS in the Chromium documentation.
Related
When two tests are running in Chrome, I have observed that many Google Chrome (32-bit) processes are running in Task Manager. Is this correct behavior of ChromeDriver?
When multiple automated tests are executed through Google Chrome, you may have observed that there are potentially dozens of Google Chrome processes running, visible in the Processes tab of Windows Task Manager.
As per the article SOLVED: Why Google Chrome Has So Many Processes, for a better user experience Google Chrome initiates a lot of background processes for each tab opened by your automated tests. Google tries to keep the browser stable by separating each web page into as many processes as it deems fit, so that if one process fails on a page, the affected process(es) can be terminated or refreshed without needing to kill or refresh the entire page.
From 2018 onwards, Google Chrome was redesigned to create a new process for each of the following entities:
Tabs
HTML/ASP text on the page
Plugins that are loaded
Apps that are loaded
Frames within the page
In the Chromium Blog post Multi-process Architecture it is mentioned:
Google Chrome takes advantage of these properties and puts web apps and plug-ins in separate processes from the browser itself. This means that a rendering engine crash in one web app won't affect the browser or other web apps. It means the OS can run web apps in parallel to increase their responsiveness, and it means the browser itself won't lock up if a particular web app or plug-in stops responding. It also means we can run the rendering engine processes in a restrictive sandbox that helps limit the damage if an exploit does occur.
In conclusion, the many processes you are seeing are very much in line with the current implementation of google-chrome.
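If you want to observe this yourself, here is a small sketch, assuming the psutil and selenium Python packages, that counts chrome processes while a single automated session is open:

```python
# Small sketch: count how many chrome processes a single automated
# session spawns. Assumes the selenium and psutil packages are installed.
import psutil
from selenium import webdriver

def chrome_process_count() -> int:
    """Count running processes whose name contains 'chrome'."""
    return sum(
        1
        for proc in psutil.process_iter(["name"])
        if proc.info["name"] and "chrome" in proc.info["name"].lower()
    )

before = chrome_process_count()
driver = webdriver.Chrome()
driver.get("https://www.example.com")  # placeholder URL
print(f"chrome processes before: {before}, during: {chrome_process_count()}")
driver.quit()
```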
Outro
You can find a relevant discussion in How to quit all the Firefox processes which gets initiated through GeckoDriver and Selenium using Python
I'd like to crawl a set of random websites received from a URL generator, using Selenium's ChromeDriver with Crawljax to do static code analysis on the captured DOM states.
Is this potentially unsafe for the machine doing the crawling?
My concern is that one of the randomly generated sites is malicious and that execution of JavaScript from ChromeDriver (which is used to capture the new DOM states) infects the machine running the test somehow. Should I be running this in some kind of sandboxed environment?
--edit--
If it matters, the crawler is implemented entirely in Java.
Simple answer: no. Only if you're afraid of cookies, and even if you are, your machine isn't.
It's hard to say it's very secure; you should be aware that there is no absolute security on a network. Recently, a Chrome RCE was published; details:
SSD Advisory – Chrome Turbofan Remote Code Execution – SecuriTeam Blogs
This may well affect Selenium's ChromeDriver.
But you can harden your system, for example by switching your firewall to whitelist mode and only allowing your Python script and Selenium to access the internet on ports 80 and 443.
Even if your system is pwned by an RCE, the malicious code still can't access the internet unless it injects itself into your Python process (which I think is very hard to do from a JavaScript-based browser RCE).
Another option: install a HIPS. If your Python script tries to do anything other than crawl web pages (such as starting another process) or reads/writes unexpected files, you will know about it and can decide what to do.
In my opinion, do your crawling in a VM, enforce firewall rules (Windows Firewall or Linux iptables), and shut down unneeded services in Windows. That's enough.
In short, it's difficult to find the balance between security and convenience, and you should not believe your system is unbreakable.
I have a desktop application that attempts to limit the user to one instance per session (so each user/remote desktop connection can run a copy)
I do this by creating an EventWaitHandle with a "Local\..." prefix on the event name, and if the handle wasn't newly created, I exit the program.
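For reference, here is a minimal sketch of the same pattern, translated to Python with the pywin32 package purely for illustration (the event name is a hypothetical placeholder):

```python
# Per-session single-instance check using a named kernel event.
# Requires the pywin32 package (pip install pywin32). Windows only.
import sys

import win32api
import win32event
import winerror

# The "Local\" prefix scopes the event to the current session, so each
# user / Remote Desktop session can run its own copy of the application.
EVENT_NAME = r"Local\MyAppInstance"  # hypothetical name for this sketch

handle = win32event.CreateEvent(None, 0, 0, EVENT_NAME)

# CreateEvent succeeds even if the event already exists; GetLastError
# tells us whether another instance in this session created it first.
if win32api.GetLastError() == winerror.ERROR_ALREADY_EXISTS:
    print("Another instance is already running in this session.")
    sys.exit(1)

print("No other instance in this session; continuing startup.")
# ... run the application's main loop here, keeping `handle` alive ...
```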
The warning from the verifier tool looks like this:
WARNING
Multi user session test
• Warning: The multi user session test detected the following errors:
◦ An error occurred while performing the testing process.
• Impact if not fixed: Multiple users might not be able to launch the app in concurrent sessions.
• How to fix: Make sure that the app doesn’t block multiple concurrent sessions, either locally or remotely. The app must not depend on global mutexes or other named-objects to check for or block multiple concurrent sessions. If the app can’t allow multiple concurrent sessions per user, use per-user or per-session namespaces for mutexes or other named-objects. See link below for more information:
Remote Desktop Services programming guidelines
http://msdn.microsoft.com/library/windows/desktop/aa383490(v=vs.85).aspx
Any idea on what this error means, and how to get rid of it?
I've tested the program while logged into multiple accounts, and it correctly detects that the program is not running on the new session, despite running on a previous one.
Is there a way to get more detailed descriptions of the failures?
This link suggests that there is a bug in the Windows Application Certification Kit (WACK) 2.2 which is resolved in WACK 3.0, available for the Windows 8.1 Preview. I chose to ignore this particular warning for now.
When you run the Windows App Certification Kit, set it to run for the user (you can choose between user and machine).
Is it possible for a website to automatically find a folder on usb stick and upload all the files in it to the web server by clicking only one button?
The problem is that I don't know how to make the upload form automatically detect the USB stick, as the drive name (i.e. G:, F:, etc.) may vary from computer to computer, so hard-coding the path is not possible.
PS: I'm using the Yii framework for site development, but I can add a new page to handle this in any other language, as the client really wants this feature.
Web sites are not allowed to set default files to upload (it's a major security risk!). Also, web sites cannot scan the hard drive/enumerate what file systems exist on a system, again, for security purposes.
It might be possible to do this with Flash/Silverlight/Java. Java seems the most likely to allow a web developer to do this (the Java plugin seems quite willing to give out every permission under the Sun).
Short answer: No.
Long answer: Allowing automatic uploads in web browsers would be a huge security hole so the browsers intentionally prevent it. Even if you manage to find a hole that permits it, the browser makers will break it as soon as they find out.
However, if you have an environment where an actual separate program can be installed on the end user's computer you could easily write a program to do automated uploads of specified directories when launched.
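As a rough illustration, such a program might look like the following Python sketch; the upload URL, the folder name, and the psutil/requests dependencies are all assumptions of this sketch:

```python
# Minimal sketch: a separate desktop program (not a web page) that finds
# removable drives and uploads every file in a known folder to a server.
from pathlib import Path

import psutil
import requests

UPLOAD_URL = "https://example.com/upload"   # hypothetical endpoint
TARGET_FOLDER = "client_data"               # hypothetical folder name

def removable_mountpoints():
    """Yield mount points that look like removable drives (USB sticks)."""
    for part in psutil.disk_partitions(all=False):
        if "removable" in part.opts.lower():
            yield Path(part.mountpoint)

for mount in removable_mountpoints():
    folder = mount / TARGET_FOLDER
    if not folder.is_dir():
        continue
    for path in folder.iterdir():
        if path.is_file():
            with path.open("rb") as fh:
                # Standard multipart/form-data upload, one file at a time.
                requests.post(UPLOAD_URL, files={"file": (path.name, fh)})
            print(f"uploaded {path}")
```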
I'm writing a suite of applications that all require login to a server. It's come together quite nicely, but I've run into a logistic snag. The nature of the applications require that they be closed and launched again later with some frequency. It is very annoying to have to login every time one of the applications needs to launch.
I'm trying to think of a secure way of perhaps having the login information stored on the local user's machine. Is there a good way to even go about that? Permissions protected config files? The registry? How does Firefox store its passwords? Have you ever had to do something like this?
The suite is more of a protocol than anything, all the applications are written in a variety of languages (Python, C#, Java, etc) and run on a variety of operating systems (Windows, Linux, OSX, etc). I'm not really looking for code examples, but more just general approaches to this problem. Is it wise to have locally stored passwords? How can you have a session login for a suite with such disparate components? Right now I use application.rc config files stored locally to each application, but they are plain text and far from secure.
I'm going with Jeff on this one and assuming that since you mention the registry, you're referring to Windows. I'm also going to assume that you're talking about a desktop application (otherwise you could just use the built-in browser cookies to store the user's session).
Off the top of my head, I'd engineer the application so that when the user logs in to the server, the server returns a unique session id that identifies the authenticated user. I would then store that id along with a salted/encrypted timestamp (which gives you the option of expiring the cached credentials).
The storage mechanism is up to you. You could store them under HKEY_CURRENT_USER in the Windows registry, or in the Application Data folder in Windows. Both give you the option of user-segmented storage.
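A rough sketch of that approach in Python, where the cryptography package, the cache file location, and the 12-hour lifetime are all assumptions of the illustration:

```python
# Minimal sketch of caching a session id with an expiring, encrypted
# timestamp, using the `cryptography` package.
import json
import time
from pathlib import Path
from typing import Optional

from cryptography.fernet import Fernet, InvalidToken

CACHE_FILE = Path.home() / ".myapp_session"   # hypothetical location
MAX_AGE_SECONDS = 12 * 60 * 60                # assumed session lifetime

def save_session(session_id: str, key: bytes) -> None:
    """Encrypt and cache the session id together with a timestamp."""
    payload = json.dumps({"sid": session_id, "ts": time.time()}).encode()
    CACHE_FILE.write_bytes(Fernet(key).encrypt(payload))

def load_session(key: bytes) -> Optional[str]:
    """Return the cached session id, or None if missing/expired/tampered."""
    try:
        payload = Fernet(key).decrypt(CACHE_FILE.read_bytes())
    except (FileNotFoundError, InvalidToken):
        return None
    data = json.loads(payload)
    if time.time() - data["ts"] > MAX_AGE_SECONDS:
        return None  # cached credentials have expired
    return data["sid"]

# Usage: generate `key = Fernet.generate_key()` once per install, store it
# with user-only permissions, then call save_session("abc123", key).
```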
Typically, this sort of thing is done by use of a "cookie": a key which (securely) indicates that the user has successfully logged in to the server resource before. This is how most web sites manage login information, and Firefox (all browsers, really) stores the cookies that sites set on user login. A few important things about cookies: they should be encrypted, to ensure that malicious programs cannot generate one and thereby bypass the login process; they should match server-kept resources (same reason); and they should age out, so that while you can maintain login information on a site for a while, your login is not permanent (which would be another security hole).
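For illustration, here is a minimal Python sketch of the forgery and age-out checks; it uses an HMAC signature rather than full encryption, and the secret and lifetime are placeholders:

```python
# Sketch of the server side: issue a signed, expiring cookie value and
# verify it later, using only the standard library.
import hashlib
import hmac
import time
from typing import Optional

SECRET = b"server-side-secret"   # hypothetical; keep out of source control
LIFETIME = 8 * 60 * 60           # assumed cookie lifetime in seconds

def issue_cookie(user: str) -> str:
    """Return 'user|expiry|signature' suitable for a login cookie."""
    expiry = str(int(time.time()) + LIFETIME)
    sig = hmac.new(SECRET, f"{user}|{expiry}".encode(), hashlib.sha256).hexdigest()
    return f"{user}|{expiry}|{sig}"

def verify_cookie(cookie: str) -> Optional[str]:
    """Return the user name if the cookie is genuine and unexpired."""
    try:
        user, expiry, sig = cookie.split("|")
    except ValueError:
        return None
    expected = hmac.new(SECRET, f"{user}|{expiry}".encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or corrupted cookie
    if time.time() > int(expiry):
        return None  # cookie has aged out
    return user
```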
Personally, I would use an encrypted local config file with some sort of machine ID (motherboard ID, chip ID, HD ID, etc.) as part of the encryption key, so that the config file can't just be copied from one machine to another. I would also include the date and time, so you can expire it when you decide it gets stale.
Alternatively, you can create a host exe or launcher that does the login and then goes to sleep, and wake it up each time you want to launch a new application. The host exe would take the application as a parameter and decide whether or not to ask for login credentials (usually only when the first app is started), then keep the login user and an encrypted password in memory. When the host exe exits, the login info is forgotten, and when you start up again the cycle starts over.
Tomcat 6 supports persistence/replication of sessions, so you should take care to choose the right manager and configure it ;-)
More info: http://tomcat.apache.org/tomcat-6.0-doc/config/manager.html
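As a minimal sketch (not a definitive configuration), a file-backed PersistentManager in context.xml might look like this; the directory value and timing are assumptions, so check the linked documentation for the full option list:

```xml
<!-- context.xml: persist idle sessions to disk so they survive restarts -->
<Context>
  <Manager className="org.apache.catalina.session.PersistentManager"
           maxIdleBackup="30">  <!-- back up sessions idle for 30s or more -->
    <Store className="org.apache.catalina.session.FileStore"
           directory="sessions"/>  <!-- assumed directory under the work dir -->
  </Manager>
</Context>
```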