Groovy URL getText() returns a PasswordAuthentication instance

I am trying to download the content of a password-protected Gerrit URL in a Jenkins pipeline Groovy script. HTTPBuilder is not accessible, so I am using the URL class with an Authenticator:
// To avoid the pipeline bailing out, since PasswordAuthentication is non-serializable
@NonCPS
def getToString(data) {
    data.toString()
}

def fetchCommit(host, project, version) {
    withCredentials([usernamePassword(credentialsId: 'my-credentials',
                                      usernameVariable: 'user',
                                      passwordVariable: 'PASSWORD')]) {
        proj = java.net.URLEncoder.encode(project, 'UTF-8')
        echo "Setting default authentication"
        Authenticator.default = {
            new PasswordAuthentication(env.user, env.PASSWORD as char[])
        } as Authenticator
        echo "https://${host}/a/projects/${proj}/commits/${version}"
        url = "https://${host}/a/projects/${proj}/commits/${version}".toURL()
        result = getToString(url.getText())
        echo "${result}"
    }
}
The result is a PasswordAuthentication instance, and not the expected data:
[Pipeline] echo
java.net.PasswordAuthentication@3938b0f1
I have been wrestling with this for a while. I have tried different ways to set up the authentication and to read the data, but those mostly end up with an exception. Using eachLine() on the URL does not enter the closure at all. The job also exits far too quickly, giving the impression it does not even try to make a connection.
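One workaround I have been considering (a sketch only, assuming the same withCredentials binding as above; not verified under CPS) is to skip Authenticator entirely and send a Basic Authorization header myself:
// Sketch: fetch the URL with an explicit Basic auth header instead of Authenticator
@NonCPS
def fetchWithBasicAuth(String urlString, String user, String password) {
    def conn = new URL(urlString).openConnection()
    // Build the Basic auth token from the credentials
    def token = "${user}:${password}".toString().getBytes('UTF-8').encodeBase64().toString()
    conn.setRequestProperty('Authorization', "Basic ${token}")
    // Read the response body as a String and close the stream
    return conn.inputStream.withStream { it.text }
}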
Refs:
https://kousenit.org/2012/06/07/password-authentication-using-groovy/

Related

How to create a Hashicorp Vault user using Terraform

I am trying to create a Vault user in Terraform but can't seem to find the appropriate command to do so. I've searched the Terraform Registry and also performed some online searches but all to no avail.
All I'm looking to do is create a user, using the corresponding Terraform command to the Vault CLI command below:
vault write auth/userpass/users/bob password="passworld123" policies="default"
Any suggestions?
@hitman126 I guess you can make use of the 'vault' provider and the 'vault_auth_backend' resource block. I guess your code should look something similar to the below:
terraform {
  required_providers {
    vault = {
      source  = "hashicorp/vault"
      version = "3.5.0"
    }
  }
}

provider "vault" {
}

resource "vault_auth_backend" "example" {
  type = "userpass"
}

resource "vault_generic_secret" "developer_sample_data" {
  path = "secret/foo"

  data_json = <<EOT
{
  "username": "bob",
  "password": "passworld123"
}
EOT
}
In the above code block, path is the full logical path where we write the given data. To write data into the "generic" secret backend mounted in Vault by default, this should be prefixed with 'secret/'.
This might not be a full-fledged solution, but you can try something like this
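If you need to write under the auth/ path itself (which vault_generic_secret cannot do, since it targets the secret/ backend), the provider also has a vault_generic_endpoint resource which, as far as I know, can write to arbitrary API paths. A rough sketch (resource name and values are placeholders):
# Sketch: assumes the vault_generic_endpoint resource from the hashicorp/vault provider
resource "vault_generic_endpoint" "bob" {
  depends_on           = [vault_auth_backend.example]
  path                 = "auth/userpass/users/bob"
  ignore_absent_fields = true

  data_json = <<EOT
{
  "password": "passworld123",
  "policies": ["default"]
}
EOT
}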
Solution 2:
If you have Vault installed on the machine and you would like to achieve the above use case using the vault command alone (if you don't want to use the terraform-vault provider), then you can try the below.
Create one small sh script with the above vault command (vault-write.sh):
touch vault-write.sh
The content of the script can be similar to the below:
#!/bin/sh
vault write auth/userpass/users/bob password="passworld123" policies="default"
Then make the script executable:
chmod +x vault-write.sh
Create a .tf file with a null_resource and a local-exec provisioner to invoke this sh script:
touch vault.tf
The contents of the vault.tf file can be similar to the below:
terraform {
  required_version = "~> 1.1.1"
}

resource "null_resource" "vault_write" {
  provisioner "local-exec" {
    command = "/bin/sh vault-write.sh"
  }
}
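With either solution, the usual Terraform flow applies. One assumption about the setup: the Vault provider and CLI pick up the server address and token from environment variables such as VAULT_ADDR and VAULT_TOKEN, so something like the below should be exported first (values are placeholders):
export VAULT_ADDR="https://vault.example.com:8200"   # hypothetical address
export VAULT_TOKEN="s.xxxxxxxxxxxx"                  # hypothetical token
terraform init     # downloads the hashicorp/vault provider
terraform apply    # creates the auth backend / runs the script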

chrome:headless (macOS) results in "1) AssertionError: expected 'about:blank' to include $target page"

I am using TestCafe in combination with gherkinTestcafe (steps) / cucumber.
I am also using environment variables so that I can run my tests on 2 different environments.
My code is as follows, although through debugging I don't believe this is something strictly code-related, as much as it is related to:
chrome:headless
environment
version of chrome / MacOS
import Enviorments from "../../../../../../AEM_Engine/Enviorment/Enviorments";
import { Helper } from "../../../../../TestActions/Test_specific/Career_helper";
import { AddAuthCredentialsHook } from "../../../../../TestActions/BasicAuth";

const { Before, Given, Then } = require('cucumber');

let publisher = new Publish();
let aemEnv = new Enviorments();
let helper = new Helper;
let careersPage = '/career';

Before('@basicAuth', async testController => {
    const addAuthCredentialsHook = new AddAuthCredentialsHook('$someUserName', '$somePassword');
    await testController.addRequestHooks(addAuthCredentialsHook);
});

Before('@disableCookie', async testController => {
    await testController.addRequestHooks(publisher.mockCookieResponse);
});

Given('I am at Careers page', async testController => {
    await publisher.Navigate(testController, aemEnv.frontEndURL + careersPage);
    await publisher.verifyURL(testController, aemEnv.frontEndURL + careersPage);
});
.
.
.
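For reference, AddAuthCredentialsHook simply injects a basic-auth header through TestCafe's RequestHook API. A simplified sketch of it (the real helper lives in TestActions/BasicAuth, so treat the exact names here as approximate):
import { RequestHook } from 'testcafe';

export class AddAuthCredentialsHook extends RequestHook {
    constructor (username, password) {
        super();
        // Pre-compute the Basic auth token from the given credentials
        this.token = Buffer.from(`${username}:${password}`).toString('base64');
    }

    async onRequest (event) {
        // Attach the Authorization header to every outgoing request
        event.requestOptions.headers['Authorization'] = `Basic ${this.token}`;
    }

    async onResponse () {
        // No response handling needed for this hook
    }
}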
When I wait for the script to run, I get:
1) AssertionError: expected 'about:blank' to include $expectedPage
As I mentioned, I don't believe the problem is in the code. Even if I remove the step for verifying the current URL location, the test fails on the next step after.
Tests pass on:
Chrome (with UI shell)
Other browsers (firefox, safari), headless or with UI shell
Second (staging) environment
When the tests are run and TestCafe starts, I get the following info:
Running tests in:
- HeadlessChrome 99.0.4844 / Mac OS X 10.15.7
Feature: Careers Page Available
(node:87344) Warning: Setting the NODE_TLS_REJECT_UNAUTHORIZED environment variable to '0' makes TLS connections and HTTPS requests insecure by disabling certificate verification.
I tried re-installing some packages, re-writing some of the steps, adding some flags to clear the cache, changing the Chrome port, and similar, but nothing worked.
Any thoughts on what might be causing this and how to solve it?

Gulp task to SSH and then mysqldump

So I've got this scenario where I have separate web and MySQL servers, and I can only connect to the MySQL server from the web server.
So basically every time I have to go like:
step 1: 'ssh -i ~/somecert.pem ubuntu@1.2.3.4'
step 2: 'mysqldump -u root -p'password' -h 6.7.8.9 database_name > output.sql'
I'm new to gulp and my aim was to create a task that could automate all this, so running one gulp task would automatically deliver me the SQL file.
This would make the developers' lives a lot easier, since it would just take one command to download the latest DB dump.
This is where I got so far (gulpfile.js):
////////////////////////////////////////////////////////////////////
// Run: 'gulp download-db' to get latest SQL dump from production //
// File will be put under the 'dumps' folder //
////////////////////////////////////////////////////////////////////
// Load stuff
'use strict'
var gulp = require('gulp')
var GulpSSH = require('gulp-ssh')
var fs = require('fs');

// Function to get home path
function getUserHome() {
    return process.env.HOME || process.env.USERPROFILE;
}
var homepath = getUserHome();

///////////////////////////////////////
// SETTINGS (change if needed)       //
///////////////////////////////////////
var config = {
    // SSH connection
    host: '1.2.3.4',
    port: 22,
    username: 'ubuntu',
    //password: '1337p4ssw0rd', // Uncomment if needed
    privateKey: fs.readFileSync(homepath + '/certs/somecert.pem'), // Comment out if not needed

    // MySQL connection
    db_host: 'localhost',
    db_name: 'clients_db',
    db_username: 'root',
    db_password: 'dbp4ssw0rd',
}

////////////////////////////////////////////////
// Core script, don't need to touch from here //
////////////////////////////////////////////////
// Set up SSH connector
var gulpSSH = new GulpSSH({
    ignoreErrors: true,
    sshConfig: config
})

// Run the mysqldump
gulp.task('download-db', function(){
    return gulpSSH
        // runs the mysql dump
        .exec(['mysqldump -u '+config.db_username+' -p\''+config.db_password+'\' -h '+config.db_host+' '+config.db_name+''], {filePath: 'dump.sql'})
        // pipes output into local folder
        .pipe(gulp.dest('dumps'))
})

// Run search/replace "optional"
SSH into the web server runs fine, but I have an issue when trying to get the mysqldump; I'm getting this message:
events.js:85
throw er; // Unhandled 'error' event
^
Error: Warning:
If I try the same mysqldump command manually from the server SSH, I get:
Warning: mysqldump: unknown variable 'loose-local-infile=1'
Followed by the correct mysqldump info.
So I think this warning message is messing up my script. I would like to ignore warnings in cases like this, but I don't know how to do it, or if it's even possible.
Also, I've read that using the password directly in the command line is not really good practice.
Ideally, I would like to have all the config vars loaded from another file, but this is my first gulp task and I'm not really familiar with how I would do that.
Can someone with experience in Gulp orient me towards a good way of getting this done? Or do you think I shouldn't be using Gulp for this at all?
Thanks!
As I suspected, that warning message was preventing the gulp task from finalizing. I got rid of it by commenting out the loose-local-infile=1 line in /etc/mysql/my.cnf.
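For the side question about loading the config vars from another file: a minimal sketch, assuming a git-ignored config.json next to the gulpfile (the file name and keys are placeholders matching the settings object above):
// gulpfile.js -- load the settings from a separate, git-ignored file
var fs = require('fs');
var config = JSON.parse(fs.readFileSync('./config.json', 'utf8'));

// The private key still has to be read from disk separately
config.privateKey = fs.readFileSync(getUserHome() + '/certs/somecert.pem');
Since Node can require JSON directly, var config = require('./config.json') works as well. And if editing my.cnf is not an option, appending 2>/dev/null to the remote mysqldump command should keep the warning out of gulp-ssh's error stream.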

Reuse the browser session for Selenium WebDriver for Nightwatch.js tests

I need to write multiple tests (e.g. a login test, use-the-application-once-logged-in tests, a logout test, etc.) and need them all to be in separate files. The issue I run into is that after each test, at the beginning of the next test being run, a new browser session starts and it is no longer logged in, so all my tests will fail except the login test.
So, is there a way to use the same browser session to run all of my tests sequentially without having to duplicate my login code? Sorry if this is a repost but I have searched and researched and not found any answers.
OR, is there a way to chain the test files somehow? Like having one file that you run that just calls all the other test files?
Using this function to chain together files:
extend = function(target) {
    var sources = [].slice.call(arguments, 1);
    sources.forEach(function (source) {
        for (var prop in source) {
            target[prop] = source[prop];
        }
    });
    return target;
}
and adding files to this master file like this:
require("./testName.js");
module.exports = extend(module.exports,testName);
and having the test file look like this:
testName = {
    "Test" : function(browser) {
        browser
        // Your test code
    }
};
allowed me to have one file that all the tests could be linked to, and to keep the same browser session the entire time. It runs the tests in the order you require them in the master file, and if you do not call browser.end() until the last test is finished, it will use one browser window for all tests.
Reusing the session is not a good idea, as you may run tests in a different order, but you could place the login code into a before function, or even extract it into custom commands.
Example:
https://github.com/dimetron/backbone_app/blob/master/nightwatch/custom-commands/login.js
1 - In nightwatch config add
"custom_commands_path" : "nightwatch/custom-commands",
2 - Create custom-commands/login.js
exports.command = function(username, password, callback) {
    var self = this;
    this
        .frame(null)
        .waitForElementPresent('input[name=username]', 10000)
        .setValue('input[name=username]', username)
        .waitForElementPresent('input[name=password]', 10000)
        .setValue('input[name=password]', password)
        .click('#submit');
    if (typeof callback === "function") {
        callback.call(self);
    }
    return this; // allows the command to be chained
};
3 - Test code - Before using .login(user, password)
module.exports = {
    before: function(browser) {
        console.log("Setting up...");
        browser
            .windowSize('current', 1024, 768)
            .url("app:8000/")
            .waitForElementVisible("body", 1000)
            .login('user', 'password')
    },

    after: function(browser) {
        browser.end()
        console.log("Closing down...");
    },

    beforeEach: function(browser) {
        browser
            .pause(2000)
            .useCss()
    },

    "Test 1": function(browser) {
        browser
            .assert.containsText("#div1", "some text")
            .pause(5000);
    },

    "Test 2": function(browser) {
        browser
            .assert.containsText("#div2", "some text")
            .pause(5000);
    }
}

Symfony and uploadify

I want to use uploadify with Symfony 1.4, but so far I couldn't.
Uploadify loads correctly, I choose my files, and it says that the files were successfully uploaded, but they are nowhere to be found.
(I'm doing this on localhost)
Is there anybody who met this problem before?
Thanks, Tom
$file = $request->getParameter('file');
$filename = sha1($file->getOriginalName()).$file->getExtension($file->getOriginalExtension());
$file->save(sfConfig::get('sf_upload_dir').'/'.$filename);
In my project the session is stored in cookies, so I found a solution by creating an extra session storage class:
class MySessionStorage extends sfSessionStorage
{
    public function initialize($options = null)
    {
        $request = sfContext::getInstance()->getRequest();

        // work-around for uploadify
        if ($request->getParameter('uploadify') == "onUpload")
        {
            $sessionName = $options["session_name"];
            if ($value = $request->getParameter($sessionName))
            {
                session_name($sessionName);
                session_id($value);
            }
        }
        parent::initialize($options);
    }
}
Then I changed factories.yml to:
all:
  storage:
    class: MySessionStorage
and then the "uploader" param will look like this:
uploader : '<?php echo url_for("attachments/upload?uploadify=onUpload&" . session_name() . "=" . session_id(), true)?>',
I can only guess that it's because you're trying to upload while logged into a system, but Flash does not inherit session data from the browser. This means you will always be denied permission to whatever function you are trying to access, since symfony thinks you're not logged in.
So you need to manually set variables in order for Flash to use the same login session as the browser.
jQuery code (this needs to be in a PHP file; it will not work in a .js file):
$('#file_upload').uploadify({
    // ... config here
    'scriptData': { '<?php echo session_name() ?>': '<?php echo session_id() ?>' }
});
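One caveat: in Uploadify 3.x the option was, as far as I recall, renamed from scriptData to formData, so on newer versions the equivalent would be something like:
$('#file_upload').uploadify({
    // ... config here
    // Assumption: Uploadify 3.x API, where 'formData' replaces the 2.x 'scriptData'
    'formData': { '<?php echo session_name() ?>': '<?php echo session_id() ?>' }
});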