File upload in Selenium

We have an application that performs many file upload operations. The code below works locally when we keep the file on the classpath, say "src\test\resources\csvImports\sample.csv", because the absolute path resolves correctly. However, when we run from a remote machine or Jenkins, it fails saying the path was not found.
File f = new File("src/test/resources/csvImports/"+fileName);
getDriver().findElement(By.xpath("//input[@type='file']")).sendKeys(f.getAbsolutePath());

You can try building the path from the working directory (note that System.getProperty("user.dir") does not end with a separator, so one must be added):
File f = new File(System.getProperty("user.dir") + "/src/test/resources/csvImports/" + fileName);
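For anyone hitting the same problem from Selenium's Python bindings, the equivalent idea can be sketched as follows (this helper is illustrative, not from the original answer; it assumes the test runner starts in the project root, as it typically does on a Jenkins agent):

```python
import os

def csv_upload_path(file_name):
    # Build an absolute path to the test resource relative to the
    # directory the test runner was launched from, so the same code
    # works locally and on a remote agent.
    return os.path.join(os.getcwd(), "src", "test",
                        "resources", "csvImports", file_name)

# Usage (driver setup omitted):
# driver.find_element(By.XPATH, "//input[@type='file']") \
#       .send_keys(csv_upload_path("sample.csv"))
```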

Related

How to get the path of the current temporary working directory in nextflow

I'm trying to create a file in the process's temporary directory, which Nextflow checks for output, using Nextflow's native scripting language (Groovy).
Here is a minimal example:
#!/usr/bin/env nextflow
nextflow.enable.dsl = 2

process test {
    echo true

    output:
    file('createMe')

    exec:
    path = 'createMe'
    println path
    output = file(path)
    output.append('exemplary file content')
}

workflow {
    test()
}
Simply creating the file in the current directory works when Python is used as the scripting language, but here it fails with this message:
Error executing process > 'test'
Caused by:
Missing output file(s) `createMe` expected by process `test`
Source block:
path = 'createMe'
println path
output = file(path)
output.append('exemplary file content')
Work dir:
/home/some_user/some_path/work/89/915376cbedb92fac3e0a9b18536809
Tip: view the complete command output by changing to the process work dir and entering the command `cat .command.out`
I also tried to set the path to workDir + '/createMe', but the actual working directory seems to be a subdirectory of that path.
There was actually an issue (#2628) opened a few days ago regarding this exact behavior. The solution is to use task.workDir to specify the task work directory:
This is caused by the fact the relative path is always resolved by the
Jvm against the main current launching directory.
Therefore the task work directory should be taken using the attribute
task.workDir e.g.
task.workDir.resolve('test.txt').text = "hello $world"
https://github.com/nextflow-io/nextflow/issues/2628#issuecomment-1034189393

RSelenium makeFirefoxProfile with Windows Task Scheduler

I am navigating a web page in Firefox using the RSelenium package. When I started building my script I used the makeFirefoxProfile function to create a temporary profile, setting the download directory and the associated file type so that the needed file downloads into a specific directory.
While trying to do that I got an error about zip files. After some research I installed Rtools and successfully resolved the error. My script worked as I expected.
Now I want to run that operation periodically on a Windows machine. When I use the taskscheduleR package to create a task for Windows Task Scheduler, I get the same zip error, because Windows does not have a built-in command-line zip tool.
You can check the error below, produced after I tried to run the task:
Error in file(tmpfile, "rb") : cannot open the connection
Calls: makeFirefoxProfile -> file
In addition: Warning messages:
1: In system2(zip, args, input = input, invisible = TRUE) :
'"zip"' not found
2: In file(tmpfile, "rb") :
cannot open file 'C:\Users\user\AppData\Local\Temp\RtmpKCFo30\file1ee834ae3394.zip': No such file or directory
Execution halted
When I run my script within RStudio there is no problem. Thank you for your help.

Pyinstaller does not work with local files

I've made an app with PyQt5 and it works perfectly fine in my environment; now I want to deploy it as .exe and .dmg with PyInstaller.
My app uses two local files, certificate.yml and data.pkl, which contain my AWS certificate data and app data respectively. These are located in the same directory as main.py, which starts my app.
In my main.spec file I've added the following:
a.datas += [('certificate.yml', 'certificate.yml', 'DATA'),
('data.pkl', 'data.pkl', 'DATA')]
and built the .app. However, when I start the .app, it does not find the certificate.yml file and raises the following error:
FileNotFoundError: [Errno 2] No such file or directory: 'certificate.yml'
How can I include my local files with pyinstaller?
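The thread does not include an accepted answer, but one widely used pattern with PyInstaller one-file builds (an assumption here, not confirmed by the thread) is to resolve bundled data files through sys._MEIPASS, the temporary directory PyInstaller unpacks data files into at runtime; a minimal sketch:

```python
import os
import sys

def resource_path(relative_path):
    # Inside a PyInstaller one-file bundle, data files are unpacked
    # into a temp dir exposed as sys._MEIPASS; during normal
    # development that attribute is absent, so fall back to the
    # current directory.
    base_path = getattr(sys, "_MEIPASS", os.path.abspath("."))
    return os.path.join(base_path, relative_path)

# Usage: open the bundled file via the resolved path
# with open(resource_path("certificate.yml")) as f:
#     ...
```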

Different locations of file downloads

I am using webdriver.io for my end-to-end testing. I want to check if proper files are being downloaded.
My problem is the file download location. I want a separate downloads directory for each test browser instance (hence for each test file), so that each test runs against a fresh directory.
I tried to set (in wdio.conf.js):
chromeOptions.prefs['download.default_directory'] = path.join(__dirname, "/downloads/", browserName, process.pid.toString());
using the PID of the process, but it does not work: process.pid is the same for all tests. So how can I accomplish that? How do I set a different download directory (for the Chrome browser) for each test browser instance, and then obtain that directory path in the test itself?
You can use a timestamp, as it will be different for each test. Note that path.join only accepts strings, so the timestamp must be converted:
chromeOptions.prefs['download.default_directory'] =
    path.join(__dirname, "downloads", browserName, Date.now().toString());
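Timestamps can still collide when several browser instances start in the same millisecond. A more robust alternative is to let the OS hand out a guaranteed-unique directory; sketched here in Python purely for illustration (the thread itself uses webdriver.io, where the same idea applies via fs.mkdtemp):

```python
import os
import tempfile

def unique_download_dir(base_dir, browser_name):
    # mkdtemp creates a brand-new directory with an unpredictable
    # suffix, so concurrent test sessions can never share a downloads
    # folder, even if they start at the same instant.
    parent = os.path.join(base_dir, browser_name)
    os.makedirs(parent, exist_ok=True)
    return tempfile.mkdtemp(prefix="dl-", dir=parent)
```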

What is the path for a bootstrapped file for a Pig job running in Amazon EMR

I bootstrap a data file in my EMR job. The bootstrapping succeeds and the file is copied to the /home/hadoop/contents/ folder with the right permissions.
However when I try to access it in the Pig script like below:
userdidstopick = load '/home/hadoop/contents/UserIdsToPick.txt' AS (uid:chararray);
I get an error that the input path does not exist:
hdfs://10.183.166.176:9000/home/hadoop/contents/UserIdsToPick.txt
When running Ruby jobs, the bootstrapped file was always accessible under the /home/hadoop/contents/ folder and everything worked for me.
Is it different for Pig?
By default, Pig on EMR is configured to read from HDFS rather than the local filesystem, which is why the error shows an HDFS location.
There are two ways to solve this:
Either copy the file to S3 and load it directly from S3:
userdidstopick = load 's3_bucket_location/UserIdsToPick.txt' AS (uid:chararray);
Or first copy the file to HDFS (instead of the local filesystem), and then use the same path you are using today.
I would prefer the first option.