Selenoid [/usr/bin/selenoid: browsers config: read error: open /etc/selenoid/browsers.json: no such file or directory]

While working with Selenoid in Docker, the Docker logs show the error "[/usr/bin/selenoid: browsers config: read error: open /etc/selenoid/browsers.json: no such file or directory]". My volume mapping is "-v $PWD/config/:/etc/selenoid/:ro". If I run "cat $PWD/config/browsers.json", the contents of browsers.json are printed, and I can also verify manually that the file is present.
Below are the commands I am using. I execute them directly through Jenkins. Locally the exact same commands work fine, but in Jenkins they produce this error.
mkdir -p config
cat <<EOF > $PWD/config/browsers.json
{
    "firefox": {
        "default": "57.0",
        "versions": {
            "57.0": {
                "image": "selenoid/firefox:90.0",
                "port": "4444",
                "path": "/wd/hub"
            },
            "58.0": {
                "image": "selenoid/firefox:90.0",
                "port": "4444",
                "path": "/wd/hub"
            },
            "59.0": {
                "image": "selenoid/firefox:90.0",
                "port": "4444",
                "path": "/wd/hub"
            }
        }
    }
}
EOF
chmod +rwx $PWD/config/browsers.json
cat $PWD/config/browsers.json
docker pull aerokube/selenoid:latest
docker pull aerokube/cm:latest
docker pull aerokube/selenoid-ui:latest
docker pull selenoid/video-recorder:latest-release
docker pull selenoid/vnc_chrome:92.0
docker pull selenoid/vnc_firefox:90.0
docker stop selenoid ||true
docker rm selenoid ||true
docker run -d --name selenoid -p 4444:4444 -v /var/run/docker.sock:/var/run/docker.sock \
-v $PWD/config/:/etc/selenoid/:ro aerokube/selenoid

The error is self-explanatory: you don't have browsers.json in the directory you are mounting to /etc/selenoid inside the container. I would recommend using absolute paths instead of the $PWD variable.
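For example, a minimal sketch of the same run command with a hard-coded directory (the Jenkins workspace path below is hypothetical; substitute the directory your job actually checks out into):

# Hypothetical absolute path to the Jenkins workspace; adjust to your job
CONFIG_DIR=/var/jenkins_home/workspace/selenoid-job/config

# Fail fast if the config file is not where we expect it
ls -l "$CONFIG_DIR/browsers.json"

docker run -d --name selenoid -p 4444:4444 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$CONFIG_DIR":/etc/selenoid:ro \
  aerokube/selenoid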

Related

Opening a new WSL2 tab in Windows terminal and executing command

I'm using WSL2 in Windows Terminal. I have an app that needs its front end and back end booted before it can be used, so every time I have to open a terminal window, navigate to a folder and run a command.
I would like to set an alias that would open a new tab, navigate to a folder and run go run .
I saw suggestions for Linux, but none of those work in Windows Terminal with WSL2. Does anyone have experience with this setup?
You could create a profile for this. Something like
{
    "commandline": "wsl.exe -d Ubuntu ping 8.8.8.8",
    "name": "backend",
    "startingDirectory": "\\\\wsl$\\Ubuntu\\home\\zadjii\\path\\to\\project",
},
(of course, replace ping 8.8.8.8 with the actual command you want to run, replace Ubuntu with the name of the distro you're using, and replace home\\zadjii\\path\\to\\project with your actual path, delimited by double-backslashes.)
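If you then want a shell alias inside WSL that opens this profile in a new tab of the current window, something like the following should work (a sketch; "backend" is the profile name from the example above, and wt.exe is called through WSL's Windows interop):

# add to ~/.bashrc in WSL: open the "backend" profile as a new tab in the current window
alias backend='wt.exe -w 0 new-tab -p "backend"'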
Now, if you wanted to get really crazy, you could create an action in the Command Palette which opened up multiple commands all at once:
{
    "name": "Run my project",
    "command": {
        "action": "multipleActions",
        "actions": [
            // Create a new tab with two panes
            { "action": "newTab", "tabTitle": "backend", "commandline": "wsl.exe -d Ubuntu run_my_backend", "startingDirectory": "\\\\wsl$\\Ubuntu\\home\\zadjii\\path\\to\\backend" },
            { "action": "splitPane", "tabTitle": "frontend", "commandline": "wsl.exe -d Ubuntu run_my_frontend", "startingDirectory": "\\\\wsl$\\Ubuntu\\home\\zadjii\\path\\to\\frontend" },
        ]
    }
}
see multipleActions

Windows Terminal profile not showing up

I've just installed WSL2 and am using the Windows Terminal on Win10 1909 (18363.1256). I'm trying to set up 2 different profiles, one that launches a local WSL2 Ubuntu shell, and one that launches another WSL2 shell that will automatically ssh to a specific host.
The local one works great and shows up without an issue; however, I can't seem to get my 2nd profile to show up in the list of profiles.
My settings.json looks like this:
"profiles":
{
"defaults":
{
// Put settings here that you want to apply to all profiles.
"colorScheme": "One Half Dark",
"fontFace": "JetbrainsMono NF",
"fontSize": 11
},
"list":
[
{
"guid": "{2c4de342-38b7-51cf-b940-2309a097f518}",
"hidden": false,
"name": "Ubuntu",
"source": "Windows.Terminal.Wsl",
"startingDirectory": "//wsl$/Ubuntu/home/sensanaty",
"tabTitle": "WSL2"
},
{
"guid": "{15c5814b-7ed1-4cec-bc64-d165274958fa}",
"hidden": false,
"name": "External Host",
"source": "Windows.Terminal.Wsl",
"commandline": "ssh example#123.456.7.89",
"tabTitle": "External Host"
},
]
},
With the above, I only get the Ubuntu profile in my list.
I thought maybe it was the generated guid or something, but I just ran a simple uuidgen and pasted the result into the JSON, so that shouldn't be causing any issues. I've also tried restarting my system, to no avail. The default profiles also show up fine if I disable the option to stop auto-generating them.
Any clue as to what might help me out?
The 'source' attribute is for dynamically generated profiles, for which WSL will create one for each instance installed. You can't control the command line for these dynamically generated profiles. What you need is for your new profile to extend the command line to tell Terminal to use WSL. Remove the 'source' attribute entirely, so that your new profile is static.
In your case, that should be ...
{
    "guid": "{15c5814b-7ed1-4cec-bc64-d165274958fa}",
    "hidden": false,
    "name": "External Host",
    //"source": "Windows.Terminal.Wsl",
    "commandline": "wsl.exe ssh example@123.456.7.89",
    "tabTitle": "External Host"
}//,
As bwolfbarn mentioned, you should also ditch that trailing comma if it really comes at the end of the "list" block.
Here are a few lines from mine as additional examples as well ...
{
    "guid": "{2c4de342-38b7-51cf-b940-2309a097f518}",
    "hidden": false,
    "name": "Ubuntu 20.04 WSL2 tmux",
    //"source": "Windows.Terminal.Wsl",
    "commandline": "wsl.exe -d Ubuntu -e sh -c \"/usr/bin/tmux has-session -t main 2>/dev/null && /usr/bin/tmux attach-session -d -t main || /usr/bin/tmux -2 new-session -t main -s main -c ${HOME}\"",
    "cursorShape": "filledBox"
},
{
    "guid": "{4e04fa7e-76c7-4746-a322-a227e70dde6c}",
    "hidden": false,
    "name": "Ubuntu 20.04 WSL1 tmux",
    //"commandline": "wsl.exe -d Ubuntu20.04_WSL1",
    "commandline": "wsl.exe -d Ubuntu20.04_WSL1 -e sh -c \"/usr/bin/tmux has-session -t main 2>/dev/null && /usr/bin/tmux attach-session -d -t main || /usr/bin/tmux -2 new-session -t main -s main -c ${HOME}\"",
    "cursorShape": "filledBox"
}
Note that you could, I believe, use "wsl.exe -e" (a.k.a. --execute), but it's not really necessary in your case.
If you want your "source": "Windows.Terminal.Wsl" profile to show up in the Windows Terminal menu, the distribution must exist in the registry under
[HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Lxss\{UUID}]
(The registry UUID is not related to the Windows Terminal UUID.)
This registry entry can be created by running "wsl --import" or by cloning an existing entry (if you are comfortable messing with the registry).
If you still don't see your profile after confirming that the registry entry exists, remove all "generatedProfiles" entries in the state.json file located in the same folder as settings.json. This forces Windows Terminal to regenerate state.json. If you generated the Windows Terminal profile UUID yourself, it may ignore it and create its own; in that case you will see duplicate entries for the profile in settings.json. Remove the ones that were created manually and leave the one generated by the terminal.
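A quick way to check which distributions are registered, right from a WSL shell (a sketch; reg.exe is the built-in Windows registry CLI, invoked here through WSL interop):

# list registered WSL distributions and the registry keys they live under
reg.exe query 'HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Lxss' /s /v DistributionName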
At least the last comma should be removed (I commented it in your example) as the element "External Host" is the last of the list.
[
    {
        "guid": "{2c4de342-38b7-51cf-b940-2309a097f518}",
        "hidden": false,
        "name": "Ubuntu",
        "source": "Windows.Terminal.Wsl",
        "startingDirectory": "//wsl$/Ubuntu/home/sensanaty",
        "tabTitle": "WSL2"
    },
    {
        "guid": "{15c5814b-7ed1-4cec-bc64-d165274958fa}",
        "hidden": false,
        "name": "External Host",
        "source": "Windows.Terminal.Wsl",
        "commandline": "ssh example@123.456.7.89",
        "tabTitle": "External Host"
    }//,
]

Selenium isn't able to reach a docker container with docker-compose run

I have the following docker-compose.yml which starts a chrome-standalone container and a nodejs application:
version: '3.7'

networks:
  selenium:

services:
  selenium:
    image: selenium/standalone-chrome-debug:3
    networks:
      - selenium
    ports:
      - '4444:4444'
      - '5900:5900'
    volumes:
      - /dev/shm:/dev/shm
    user: '7777:7777'

  node:
    image: node_temp:latest
    build:
      context: .
      target: development
      args:
        UID: '${USER_UID}'
        GID: '${USER_GID}'
    networks:
      - selenium
    env_file:
      - .env
    ports:
      - '8090:8090'
    volumes:
      - .:/home/node
    depends_on:
      - selenium
    command: >
      sh -c 'yarn install &&
      yarn dev'
I'm running the containers as follows:
docker-compose up -d selenium
docker-compose run --service-ports node sh
and starting the e2e tests from within the shell.
When running the e2e tests, selenium can be reached from the node container (through http://selenium:4444), but node isn't reachable from the selenium container.
I have tested this by VNC'ing into the selenium container and pointing the browser to http://node:8090. (The node container is reachable on the host, however, through http://localhost:8090.)
I first thought that docker-compose run doesn't add the running container to the proper network; however, running docker network inspect test_app gives me the following:
[
    {
        "Name": "test_app_selenium",
        "Id": "df6517cc7b6446d1712b30ee7482c83bb7c3a9d26caf1104921abd6bbe2caf68",
        "Created": "2019-06-30T16:08:50.724889157+02:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.31.0.0/16",
                    "Gateway": "172.31.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "8a76298b237790c62f80ef612debb021549439286ce33e3e89d4ee2f84de3aec": {
                "Name": "test_app_node_run_78427bac2fd1",
                "EndpointID": "04310bc4e564f831e5d08a0e07891d323a5953fa936e099d20e5e384a6053da8",
                "MacAddress": "02:42:ac:1f:00:03",
                "IPv4Address": "172.31.0.3/16",
                "IPv6Address": ""
            },
            "ef087732aacf0d293a2cf956855a163a081fc3748ffdaa01c240bde452eee0fa": {
                "Name": "test_app_selenium_1",
                "EndpointID": "24a597e30a3b0b671c8b19fd61b9254bea9e5fcbd18693383d93d3df789ed895",
                "MacAddress": "02:42:ac:1f:00:02",
                "IPv4Address": "172.31.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "selenium",
            "com.docker.compose.project": "test_app",
            "com.docker.compose.version": "1.24.1"
        }
    }
]
This shows both containers running on the "selenium" network. I'm not sure, however, whether the node container is properly aliased on the network and whether this is expected behaviour.
Am I missing some config here?
It seems that docker-compose run names the container differently in order to avoid clashing with the service name declared in docker-compose.yml, so http://node:8090 was not reachable.
I solved this by adding a --name flag as follows:
docker-compose run --service-ports --name node node sh
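To double-check, you can re-inspect the network and confirm that the run container now appears under the expected name (a sketch; test_app_selenium is the network name from the output above):

# should now list a container named "node" alongside test_app_selenium_1
docker network inspect --format '{{range .Containers}}{{.Name}} {{end}}' test_app_selenium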
EDIT:
It took me a while to notice, but I was overcomplicating the implementation by a lot. The above docker-compose.yml can be simplified by switching to host networking, which puts the containers on the host's network and makes them reachable on localhost on the ports they listen on. Considering that I don't need any encapsulation (it's meant for dev), the following docker-compose.yml sufficed:
version: '3.7'

services:
  selenium:
    image: selenium/standalone-chrome:3
    # NOTE: port definition is useless with network_mode: host
    network_mode: host
    user: '7777:7777'

  node:
    image: node_temp:latest
    build:
      context: .
      target: development
      args:
        UID: '${USER_UID}'
        GID: '${USER_GID}'
    network_mode: host
    env_file:
      - .env
    volumes:
      - .:/home/node
    command: >
      sh -c 'yarn install &&
      yarn dev'
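With host networking both services share the host's network namespace, so from inside the node container the Selenium hub should be reachable on localhost, for example (a sketch; assumes curl is available in the node image):

# from inside the node container: check the hub's status endpoint
curl -f http://localhost:4444/wd/hub/status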

How to consume the REST API of the Apache Nutch Docker image

I pulled the Apache Nutch Docker image and started it with
docker run --name my_nutch -d -p 8899:8899 -e SOLRURL=192.168.99.100:8983 -t meabed/nutch
Any action I try to consume (according to their REST API) returns a 404.
For example:
192.168.99.100:8899/admin
I also tried:
GET http://192.168.99.100:8899/nutch/#/admin
This is what I get in Postman for all GET requests (POST requests return 404):
[
    [
        "admin",
        "Service admin actions"
    ],
    [
        "confs",
        "Configuration manager"
    ],
    [
        "db",
        "DB data streaming"
    ],
    [
        "jobs",
        "Job manager"
    ]
]

Packer.io fails using puppet provisioner: /usr/bin/puppet: line 3: rvm: command not found

I'm trying to build a Vagrant box file using Packer.io and Puppet.
I have this template as a starting point:
https://github.com/puphpet/packer-templates/tree/master/centos-7-x86_64
I added the Puppet provisioner after the shell provisioner:
{
    "type": "puppet-masterless",
    "manifest_file": "../../puphpet/puppet/site.pp",
    "manifest_dir": "../../puphpet/puppet/nodes",
    "module_paths": [
        "../../puphpet/puppet/modules"
    ],
    "override": {
        "virtualbox-iso": {
            "execute_command": "echo 'vagrant' | {{.FacterVars}}{{if .Sudo}} sudo -S -E bash {{end}}/usr/bin/puppet apply --verbose --modulepath='{{.ModulePath}}' {{if ne .HieraConfigPath \"\"}}--hiera_config='{{.HieraConfigPath}}' {{end}} {{if ne .ManifestDir \"\"}}--manifestdir='{{.ManifestDir}}' {{end}} --detailed-exitcodes {{.ManifestFile}}"
        }
    }
}
When I start building the image with
packer-io build -only=virtualbox-iso template.json
I get this error:
==> virtualbox-iso: Provisioning with Puppet...
virtualbox-iso: Creating Puppet staging directory...
virtualbox-iso: Uploading manifest directory from: ../../puphpet/puppet/nodes
virtualbox-iso: Uploading local modules from: ../../puphpet/puppet/modules
virtualbox-iso: Uploading manifests...
virtualbox-iso:
virtualbox-iso: Running Puppet: echo 'vagrant' | sudo -S -E bash /usr/bin/puppet apply --verbose --modulepath='/tmp/packer-puppet-masterless/module-0' --manifestdir='/tmp/packer-puppet-masterless/manifests' --detailed-exitcodes /tmp/packer-puppet-masterless/manifests/site.pp
virtualbox-iso: /usr/bin/puppet: line 3: rvm: command not found
==> virtualbox-iso: Unregistering and deleting virtual machine...
==> virtualbox-iso: Deleting output directory...
Build 'virtualbox-iso' errored: Puppet exited with a non-zero exit status: 127
If I log into the box via tty, I can run both rvm and puppet commands as the vagrant user.
What did I do wrong?
I am trying out the exact same route as you are:
Use the relevant scripts from this repo for provisioning the VM.
Use the puppet scripts from a puphpet.com configuration to further provision the VM using the puppet-masterless provisioner in Packer.
Still working on it, no successful build yet, but I can share the following:
Inspect line 50 of puphpet/shell/install-puppet.sh: the puppet command will trigger rvm when it is executed.
Inspect your Packer output during provisioning. You will read something along the lines of:
...
Creating alias default for ruby-1.9.3-p551
To start using RVM you need to run `source /usr/local/rvm/scripts/rvm` in all
your open shell windows, in rare cases you need to reopen all shell windows.
Cleaning up rvm archives
....
Apparently the command source /usr/local/rvm/scripts/rvm is needed for each user that needs to run rvm. It is executed and added to the bash profiles by the script puphpet/shell/install-ruby.sh. However, this does not seem to affect the context/scope of Packer's puppet-masterless execute_command, which is the reason for the line /usr/bin/puppet: line 3: rvm: command not found in your output.
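You can reproduce the difference by hand inside the box (a quick sketch; /usr/local/rvm/scripts/rvm is the path the puphpet scripts install rvm to):

# likely fails the same way as the provisioner: rvm is not set up in root's non-login shell
sudo -E bash -c "puppet --version"

# works: the rvm environment is sourced before puppet runs
sudo -E bash -c "source /usr/local/rvm/scripts/rvm; puppet --version"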
My current way forward is the following configuration in template.json (the Packer template); the second and third lines will help you get beyond the point where you are currently stuck:
{
    "type": "puppet-masterless",
    "prevent_sudo": true,
    "execute_command": "{{if .Sudo}}sudo -E {{end}}bash -c \"source /usr/local/rvm/scripts/rvm; {{.FacterVars}} puppet apply --verbose --parser future --modulepath='{{.ModulePath}}' {{if ne .HieraConfigPath \"\"}}--hiera_config='{{.HieraConfigPath}}' {{end}} {{if ne .ManifestDir \"\"}}--manifestdir='{{.ManifestDir}}' {{end}} --detailed-exitcodes {{.ManifestFile}}\"",
    "manifest_file": "./puphpet/puppet/site.pp",
    "manifest_dir": "./puphpet/puppet",
    "hiera_config_path": "./puphpet/puppet/hiera.yaml",
    "module_paths": [
        "./puphpet/puppet/modules"
    ],
    "facter": {
        "ssh_username": "vagrant",
        "provisioner_type": "virtualbox",
        "vm_target_key": "vagrantfile-local"
    }
},
Note the following things:
Running puppet as the vagrant user will probably not complete provisioning due to permission issues. In that case we need a way to run source /usr/local/rvm/scripts/rvm under sudo and have it affect the scope of the puppet provisioning command.
The puphpet.com output scripts have /vagrant/puphpet hardcoded in their puppet scripts (e.g. the first line of puphpet/puppet/nodes/Apache.pp). So you might need a Packer file provisioner to upload files to your VM before you execute puppet-masterless, so that it can find its dependencies under /vagrant/.... My packer.json config for this:
{
    "type": "shell",
    "execute_command": "sudo bash '{{.Path}}'",
    "inline": [
        "mkdir /vagrant",
        "chown -R vagrant:vagrant /vagrant"
    ]
},
{
    "type": "file",
    "source": "./puphpet",
    "destination": "/vagrant"
},
Puppet will need some Facter variables as they are expected in the puphpet/puppet/nodes/*.pp scripts. Refer to my template.json above.
As said, no success with a complete puppet provisioning on my side yet, but the above got me beyond the point where you are currently stuck. Hope it helps.
Update:
I replaced my old execute_command for the puppet provisioner
"execute_command": "source /usr/local/rvm/scripts/rvm && {{.FacterVars}}{{if .Sudo}} sudo -E{{end}} puppet apply --verbose --parser future --modulepath='{{.ModulePath}}' {{if ne .HieraConfigPath \"\"}}--hiera_config='{{.HieraConfigPath}}' {{end}} {{if ne .ManifestDir \"\"}}--manifestdir='{{.ManifestDir}}' {{end}} --detailed-exitcodes {{.ManifestFile}}"
with a new one
"execute_command": "{{if .Sudo}}sudo -E {{end}}bash -c \"source /usr/local/rvm/scripts/rvm; {{.FacterVars}} puppet apply --verbose --parser future --modulepath='{{.ModulePath}}' {{if ne .HieraConfigPath \"\"}}--hiera_config='{{.HieraConfigPath}}' {{end}} {{if ne .ManifestDir \"\"}}--manifestdir='{{.ManifestDir}}' {{end}} --detailed-exitcodes {{.ManifestFile}}\""
This will ensure puppet (rvm) is running as root and finishes provisioning successfully.
As an alternative to my other answer, I hereby provide my steps and configuration to get this provisioning scenario working with packer & puphpet.
Assuming the following to be in place:
./: a local directory acting as your own repository, containing:
./ops/: a directory that holds the Packer scripts and required files
./ops/template.json: the Packer template used to build the VM
./ops/template.json expects the following to be in place:
./ops/packer-templates/: a clone of this repo
./ops/ubuntu-14.04.2-server-amd64.iso: the ISO of the Ubuntu release you want running in your VM
./puphpet: the output of walking through the configuration steps on puphpet.com (so this is one level up from ops)
The contents of template.json:
{
    "variables": {
        "ssh_name": "vagrant",
        "ssh_pass": "vagrant",
        "local_packer_templates_dir": "./packer-templates/ubuntu-14.04-x86_64",
        "local_puphput_dir": "../puphpet",
        "local_repo_dir": "../",
        "repo_upload_dir": "/vagrant"
    },
    "builders": [
        {
            "name": "ubuntu-14.04.amd64.virtualbox",
            "type": "virtualbox-iso",
            "headless": false,
            "boot_command": [
                "<esc><esc><enter><wait>",
                "/install/vmlinuz noapic preseed/url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/preseed.cfg ",
                "debian-installer=en_US auto locale=en_US kbd-chooser/method=us ",
                "hostname={{ .Name }} ",
                "fb=false debconf/frontend=noninteractive ",
                "keyboard-configuration/modelcode=SKIP keyboard-configuration/layout=USA keyboard-configuration/variant=USA console-setup/ask_detect=false ",
                "initrd=/install/initrd.gz -- <enter>"
            ],
            "boot_wait": "10s",
            "disk_size": 20480,
            "guest_os_type": "Ubuntu_64",
            "http_directory": "{{user `local_packer_templates_dir`}}/http",
            "iso_checksum": "83aabd8dcf1e8f469f3c72fff2375195",
            "iso_checksum_type": "md5",
            "iso_url": "./ubuntu-14.04.2-server-amd64.iso",
            "ssh_username": "{{user `ssh_name`}}",
            "ssh_password": "{{user `ssh_pass`}}",
            "ssh_port": 22,
            "ssh_wait_timeout": "10000s",
            "shutdown_command": "echo '/sbin/halt -h -p' > shutdown.sh; echo '{{user `ssh_pass`}}'|sudo -S bash 'shutdown.sh'",
            "guest_additions_path": "VBoxGuestAdditions_{{.Version}}.iso",
            "virtualbox_version_file": ".vbox_version",
            "vboxmanage": [
                ["modifyvm", "{{.Name}}", "--memory", "2048"],
                ["modifyvm", "{{.Name}}", "--cpus", "4"]
            ]
        }
    ],
    "provisioners": [
        {
            "type": "shell",
            "execute_command": "echo '{{user `ssh_pass`}}'|sudo -S bash '{{.Path}}'",
            "scripts": [
                "{{user `local_packer_templates_dir`}}/scripts/base.sh",
                "{{user `local_packer_templates_dir`}}/scripts/virtualbox.sh",
                "{{user `local_packer_templates_dir`}}/scripts/vagrant.sh",
                "{{user `local_packer_templates_dir`}}/scripts/puphpet.sh",
                "{{user `local_packer_templates_dir`}}/scripts/cleanup.sh",
                "{{user `local_packer_templates_dir`}}/scripts/zerodisk.sh"
            ]
        },
        {
            "type": "shell",
            "execute_command": "sudo bash '{{.Path}}'",
            "inline": [
                "mkdir {{user `repo_upload_dir`}}",
                "chown -R vagrant:vagrant {{user `repo_upload_dir`}}"
            ]
        },
        {
            "type": "file",
            "source": "{{user `local_repo_dir`}}",
            "destination": "{{user `repo_upload_dir`}}"
        },
        {
            "type": "shell",
            "execute_command": "sudo bash '{{.Path}}'",
            "inline": [
                "rm -fR {{user `repo_upload_dir`}}/.vagrant",
                "rm -fR {{user `repo_upload_dir`}}/ops"
            ]
        },
        {
            "type": "puppet-masterless",
            "execute_command": "{{if .Sudo}}sudo -E {{end}}bash -c \"source /usr/local/rvm/scripts/rvm; {{.FacterVars}} puppet apply --verbose --parser future --modulepath='{{.ModulePath}}' {{if ne .HieraConfigPath \"\"}}--hiera_config='{{.HieraConfigPath}}' {{end}} {{if ne .ManifestDir \"\"}}--manifestdir='{{.ManifestDir}}' {{end}} --detailed-exitcodes {{.ManifestFile}}\"",
            "manifest_file": "{{user `local_puphput_dir`}}/puppet/site.pp",
            "manifest_dir": "{{user `local_puphput_dir`}}/puppet",
            "hiera_config_path": "{{user `local_puphput_dir`}}/puppet/hiera.yaml",
            "module_paths": [
                "{{user `local_puphput_dir`}}/puppet/modules"
            ],
            "facter": {
                "ssh_username": "{{user `ssh_name`}}",
                "provisioner_type": "virtualbox",
                "vm_target_key": "vagrantfile-local"
            }
        },
        {
            "type": "shell",
            "execute_command": "sudo bash '{{.Path}}'",
            "inline": [
                "echo '{{user `repo_upload_dir`}}/puphpet' > '/.puphpet-stuff/vagrant-core-folder.txt'",
                "sudo bash {{user `repo_upload_dir`}}/puphpet/shell/important-notices.sh"
            ]
        }
    ],
    "post-processors": [
        {
            "type": "vagrant",
            "output": "./build/{{.BuildName}}.box",
            "compression_level": 9
        }
    ]
}
Narration of what happens:
execute the basic provisioning of the VM using the scripts that are used to build puphpet boxes (first shell provisioner block)
create a directory /vagrant in the VM and set permissions for vagrant user
upload local repository to /vagrant (important as puphpet/puppet expects it to exist at that location in its scripts)
remove some unneeded stuff from /vagrant after upload
start puppet provisioner with custom execute_command and facter configuration
process the remaining provisioning scripts (to be extended with exec once/always and start once/always files)
Note: you might need to prepare a few more things before the puppet provisioner kicks off. For example, I need a directory in place that will be the docroot of an Apache vhost. Use shell provisioning to complete the template for your own puphpet configuration; a small sketch follows.
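For instance, a pre-puppet shell provisioner could run commands like the following (a sketch; the docroot path /var/www/myapp is hypothetical and should match whatever your puphpet config expects):

# create the vhost docroot before puppet applies the apache configuration
sudo mkdir -p /var/www/myapp
sudo chown -R vagrant:vagrant /var/www/myapp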