I have been trying to get permissions working for my Jellyfin server.
I have a folder on my second hard drive (auto-mounting on start, formatted as exFAT):
jellyfin/
├── Cache
├── Config
...
└── Media
    ├── movies
    │   └── Batman
    │       └── ...
    ├── music
    ├── photos
    └── shows
When I cd into Media > movies, I cannot see any of the folders inside the movies folder, even though I can in a file viewer. It just appears empty.
I tried fixing this by doing
chown -R 1000:1000 jellyfin
since my Jellyfin Docker container runs as 1000:1000.
But it still has the same problem: ls -l returns 0 folders.
Any advice is appreciated
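For reference, exFAT has no concept of Unix ownership, so chown has no effect on it; ownership and permissions are effectively fixed at mount time. A hypothetical /etc/fstab entry mapping the whole filesystem to 1000:1000 would look something like this (UUID and mount point are placeholders):
UUID=XXXX-XXXX  /mnt/storage  exfat  defaults,uid=1000,gid=1000,umask=022  0  0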
I have this variable here, set in a .yaml variables file
patch_plan: 'foo-{{ patch_plan_week_and_day }}-bar'
I want my patch_plan_week_and_day variable to be set dynamically, based on role and environment, which are two other variables set elsewhere (doesn't matter for now), outside this variables file.
For instance, here are 3 cases:
If role = 'master' and environment = 'srvb' then patch_plan_week_and_day = 'Week1_Monday' and thus the end result of patch_plan = 'foo-Week1_Monday-bar'.
If role != 'master' and environment = 'srvb' then patch_plan_week_and_day = 'Week1_Tuesday' and thus the end result of patch_plan = 'foo-Week1_Tuesday-bar'
If role = 'slave' and environment = 'pro' then patch_plan_week_and_day = 'Week3_Wednesday' and thus the end result of patch_plan = 'foo-Week3_Wednesday-bar'
This is the idea of the code:
patch_plan: 'foo-{{ patch_plan_week_and_day }}-bar'
# Patch Plans
## I want something like this:
# case 1
patch_plan_week_and_day: Week1_Monday
when: role == 'master' and environment == 'srvb'
# case 2
patch_plan_week_and_day: Week1_Tuesday
when: role != 'master' and environment == 'srvb'
# case 3
patch_plan_week_and_day: Week3_Wednesday
when: role == 'slave' and environment == 'pro'
I have 14 cases in total.
Put the logic into a dictionary. For example,
patch_plan_week_and_day_dict:
  srvb:
    master: Week1_Monday
    default: Week1_Tuesday
  pro:
    slave: Week3_Wednesday
    default: WeekX_Wednesday
Create the project for testing
shell> tree .
.
├── ansible.cfg
├── hosts
├── pb.yml
└── roles
    ├── master
    │   ├── defaults
    │   │   └── main.yml
    │   └── tasks
    │       └── main.yml
    ├── non_master
    │   ├── defaults
    │   │   └── main.yml
    │   └── tasks
    │       └── main.yml
    └── slave
        ├── defaults
        │   └── main.yml
        └── tasks
            └── main.yml
10 directories, 9 files
shell> cat ansible.cfg
[defaults]
gathering = explicit
inventory = $PWD/hosts
roles_path = $PWD/roles
retry_files_enabled = false
stdout_callback = yaml
shell> cat hosts
localhost
shell> cat pb.yml
- hosts: localhost
  vars:
    patch_plan_week_and_day_dict:
      srvb:
        master: Week1_Monday
        default: Week1_Tuesday
      pro:
        slave: Week3_Wednesday
        default: WeekX_Wednesday
  roles:
    - "{{ my_role }}"
The code of all roles is identical
shell> cat roles/master/defaults/main.yml
patch_plan_role: "{{ (my_role in patch_plan_week_and_day_dict[env].keys()|list)|
                     ternary(my_role, 'default') }}"
patch_plan_week_and_day: "{{ patch_plan_week_and_day_dict[env][patch_plan_role] }}"
shell> cat roles/master/tasks/main.yml
- debug:
    var: patch_plan_week_and_day
Example 1.
shell> ansible-playbook pb.yml -e env=srvb -e my_role=master
...
patch_plan_week_and_day: Week1_Monday
Example 2.
shell> ansible-playbook pb.yml -e env=srvb -e my_role=non_master
...
patch_plan_week_and_day: Week1_Tuesday
Example 3.
shell> ansible-playbook pb.yml -e env=pro -e my_role=slave
...
patch_plan_week_and_day: Week3_Wednesday
A lot of considerations here ...
It seems you are trying to use Ansible as a programming language, which it isn't. You've started to implement something without describing your use case or what the actual problem is. The given example looks like an anti-pattern.
... set dynamically, based on role and environment ...
It is in fact "static" and based on the properties of the systems; you are only trying to generate the values at runtime. Time slots in which patches can or should be applied (the patch window) are facts about a system and are usually configured in the Configuration Management Database (CMDB). So this kind of information should already be there, either in a database, within the Ansible inventory, or as a custom fact on the system itself.
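For example, the patch window could simply live in the inventory as group_vars/host_vars (a minimal sketch; the file names are hypothetical and the values reuse the ones from the question):
# group_vars/srvb.yml
patch_plan_week_and_day: Week1_Tuesday
# host_vars/some-master-host.yml -- overrides the group default for a master node
patch_plan_week_and_day: Week1_Monday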
... which are 2 other variables set elsewhere (doesn't matter now) outside this variables file. ...
It probably does matter, and maybe you could configure the patch cycle or patch window there.
By pursuing your approach further you'll mix up playbook logic with infrastructure description or configuration properties, quickly leading to less readable and probably unmaintainable code. You'll also deny yourself the opportunity to maintain the system configuration within a Version Control System (VCS), the CMDB, or the inventory.
Therefore, avoid CASE, SWITCH, and IF/THEN/ELSE structures and describe the desired state of your systems instead.
Some further reading, in addition to the sources already given:
Best Practices - Content Organization
General tips
In the end, this is what fixed it. Thank you, everyone!
patch_plan: 'foo-{{ patch_plan_week_and_day[environment][role] }}-bar'
patch_plan_week_and_day:
  srvb:
    master: Week1_Monday
    slave: Week1_Tuesday
  pre:
    master: Week1_Sunday
    slave: Week1_Friday
  pro:
    master: Week1_Thursday
    slave: Week1_Wednesday
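A quick way to sanity-check the rendered value (hypothetical task):
- debug:
    msg: "{{ patch_plan }}"
# with environment=srvb and role=master this prints 'foo-Week1_Monday-bar'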
I'm trying to run a fairly simple gzip command across my FASTQ files, but a strange error is returned.
#!/usr/bin/env nextflow
nextflow.enable.dsl=2
params.gzip = "sequences/sequences_split/sequences_trimmed/trimmed*fastq"
workflow {
    gzip_ch = Channel.fromPath(params.gzip)
    GZIP(gzip_ch)
    GZIP.out.view()
}

process GZIP {
    input:
    path read

    output:
    stdout

    script:
    """
    gzip ${read}
    """
}
Error:
Command error:
gzip: trimmed_SRR19573319_R2.fastq: Too many levels of symbolic links
I tried running a loop in the script instead, or running gzip on individual files, which works, but I'd rather use the Nextflow syntax.
By default, Nextflow will try to stage process input files using symbolic links. The problem is that gzip actually ignores symbolic links. From the GZIP(1) man page:
The gzip command will only attempt to compress regular files. In particular, it will ignore symbolic links.
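(You can see this staging by listing a task's work directory; the input file shows up as a symlink back to the original, roughly like this, with the hashes and paths illustrative:)
$ ls -l work/4e/7f21a3.../
lrwxrwxrwx 1 user user 96 Mar  1 12:00 trimmed_SRR19573319_R2.fastq -> /path/to/sequences/sequences_split/sequences_trimmed/trimmed_SRR19573319_R2.fastq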
If the objective is to create a reproducible workflow, it's usually best to avoid modifying the workflow inputs directly anyway. Either use the stageInMode directive to change how the input files are staged in. For example:
process GZIP {
    stageInMode 'copy'

    input:
    path fastq

    output:
    path "${fastq}.gz"

    """
    gzip "${fastq}"
    """
}
Or, preferably, just modify the command to redirect stdout to a file:
process GZIP {
    input:
    path fastq

    output:
    path "${fastq}.gz"

    """
    gzip -c "${fastq}" > "${fastq}.gz"
    """
}
Michael!
I can't reproduce your issue. I created the folder structure you described in my current directory and put three files in it, as you can see below:
➜  ~ tree sequences/
sequences/
└── sequences_split
    └── sequences_trimmed
        ├── trimmed_a_fastq
        ├── trimmed_b_fastq
        └── trimmed_c_fastq
Then I copy-pasted your Nextflow script file (the only change I made was to add the -f option, i.e. gzip -f ${read}), and everything worked fine. The reason you need -f is that Nextflow runs every task in its own subdirectory under work. This means your input files are staged as symbolic links, and gzip will complain that they're not regular files (that's what happened here, on macOS Ventura) or something similar (it may depend on the OS, I'm not sure). The -f option works around this.
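For clarity, the modified script block is just:
script:
"""
gzip -f ${read}
"""
With that change, the run completes: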
N E X T F L O W ~ version 22.10.1
Launching `ex2.nf` [golden_goldstine] DSL2 - revision: 70559e4bcb
executor > local (3)
[ad/447348] process > GZIP (1) [100%] 3 of 3 ✔
➜  ~ tree work
work
├── 0c
│   └── ded66d5f2e56cfa38d85d9c86e4e87
│       └── trimmed_a_fastq.gz
├── 67
│   └── 949c28cce5ed578e9baae7be2d8cb7
│       └── trimmed_c_fastq.gz
└── ad
    └── 44734845950f28f658226852ca4200
        └── trimmed_b_fastq.gz
They're gzip-compressed files (even though they may look just like text files, depending on the demo content). I decided to reply with an answer because it allows me to use Markdown to show you how I did it. Feel free to comment on this answer if you want to discuss this topic.
I am trying to set up an Apache server with the ModSecurity Rule set inside a Docker container. I followed a few tutorials (this, this and this) to build a secure Apache server. But I am unable to make the server work with the rule set.
I get this error:
AH00526: Syntax error on line 855 of /etc/httpd/modsecurity.d/crs-setup.conf:
ModSecurity: Found another rule with the same id
I searched for the error and according to the answers on this page the fault lies in including the same rules twice. But as far as I can see, I am not including the same rules twice and I wonder if the error lies elsewhere.
My project file structure is the following:
.
├── conf
│   └── httpd.conf
├── Dockerfile
├── index.html
├── modsecurity.d
│   ├── crs-setup.conf
│   ├── modsecurity.conf
│   └── rules
The httpd.conf file is the default config file used for an Apache server, and the ModSecurity configuration is inserted via commands in the Dockerfile.
The Dockerfile has the following configuration:
FROM centos:7
RUN yum -y update && \
    yum -y install less which tree httpd mod_security && \
    yum clean all
COPY index.html /var/www/html/
#COPY conf/ /etc/httpd/conf/
COPY modsecurity.d/crs-setup.conf /etc/httpd/modsecurity.d/
COPY modsecurity.d/modsecurity.conf /etc/httpd/modsecurity.d/
COPY modsecurity.d/rules/* /etc/httpd/modsecurity.d/rules/
RUN echo "ServerName localhost" >> /etc/httpd/conf/httpd.conf
RUN echo "<IfModule security2_module>" >> /etc/httpd/conf/httpd.conf
RUN echo " Include modsecurity.d/crs-setup.conf" >> /etc/httpd/conf/httpd.conf
RUN echo " Include modsecurity.d/rules/*.conf" >> /etc/httpd/conf/httpd.conf
RUN echo " SecRuleEngine On" >> /etc/httpd/conf/httpd.conf
RUN echo "</IfModule>" >> /etc/httpd/conf/httpd.conf
EXPOSE 80
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
index.html is just a basic hello file:
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" lang="en">
  </head>
  <body>
    <h1>Hello there</h1>
  </body>
</html>
crs-setup.conf has the following content (excluding all the comments)
SecRuleEngine On
SecDefaultAction "phase:1,log,auditlog,pass"
SecDefaultAction "phase:2,log,auditlog,pass"
SecCollectionTimeout 600
SecAction \
    "id:900990,\
    phase:1,\
    nolog,\
    pass,\
    t:none,\
    setvar:tx.crs_setup_version=310"
modsecurity.conf has only these two lines
SecRequestBodyAccess On
SecStatusEngine On
rules is a directory which contains the ModSecurity rule set.
I also placed the project files on GitHub if anyone wants to have a look at the whole setup.
I found out why I got the error: the ModSecurity configuration file was misnamed and the rule files had been placed in the wrong directory.
The ModSecurity file was named modsecurity.conf, when in fact it should have been mod_security.conf; note the underscore (source). The rule files should have been placed in a folder called activated_rules (source).
In my working configuration I now have the following folder structure:
.
├── conf
│   └── httpd.conf
├── Dockerfile
├── index.html
└── modsecurity.d
    ├── crs-setup.conf
    ├── mod_security.conf
    └── activated_rules
The Dockerfile is as follows
FROM centos:7
RUN yum -y update && \
    yum -y install less which tree httpd mod_security && \
    yum clean all
COPY index.html /var/www/html/
RUN echo "ServerName localhost" >> /etc/httpd/conf/httpd.conf
RUN echo "<IfModule security2_module>" >> /etc/httpd/conf/httpd.conf
RUN echo "Include modsecurity.d/crs-setup.conf" >> /etc/httpd/conf/httpd.conf
RUN echo "Include modsecurity.d/activated_rules/*.conf" >> /etc/httpd/conf/httpd.conf
RUN echo "</IfModule>" >> /etc/httpd/conf/httpd.conf
COPY modsecurity.d/crs-setup.conf /etc/httpd/modsecurity.d/
COPY modsecurity.d/mod_security.conf /etc/httpd/conf.d/
COPY modsecurity.d/rules/* /etc/httpd/modsecurity.d/activated_rules/
EXPOSE 80
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
I'm trying to create a Docker image with the OpenSSL DLLs, but something goes wrong.
Before I build, I have following folder structure on my PC:
/docker-sandbox
├── deployment/
│   ├── libeay32.dll (OpenSSL)
│   ├── ssleay32.dll (OpenSSL)
│   └── deployment.ps1
└── Dockerfile
Dockerfile:
FROM microsoft/windowsservercore
COPY deployment/ C:/deployment
RUN "powershell C:\deployment\deployment.ps1"
And I have the PowerShell script:
#go to correct location
Set-Location C:\deployment
#locate openssl libraries
New-Item -ItemType Directory -Path C:\openssllib
Move-Item -Path '.\libeay32.dll' -Destination 'C:\openssllib\libeay32.dll'
Move-Item -Path '.\ssleay32.dll' -Destination 'C:\openssllib\ssleay32.dll'
After I execute "docker build ." in the docker-sandbox folder, I start the container with the following command:
docker run -i $IMAGE_ID powershell
But the C:\openssllib folder in the container is empty (and there are also no OpenSSL DLLs in the "deployment" folder). Why?
When I remove the RUN line from the Dockerfile and execute "deployment.ps1" in the container by hand, the OpenSSL DLLs end up in the right place. What is the reason for this behavior?
I have a complex project where there are many directories that have POM files, but only some of which are sub-modules (possibly transitively) of a particular parent project.
Obviously, Maven knows the list of relevant files, because it parses all the <module> tags to find them. But I only see a list of the <name>s in the [INFO] output, not the paths to those modules.
Is there a way to have Maven output a list of all the POM files that provided references to projects that are part of the reactor build for a given project?
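(For context, an aggregator POM references its sub-modules by path relative to itself; a hypothetical example:)
<modules>
    <module>core</module>
    <module>plugins/some-plugin</module>
</modules>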
This is quite simple but it only gets the artifactId, from the root (or parent) module:
mvn --also-make dependency:tree | grep maven-dependency-plugin | awk '{ print $(NF-1) }'
If you want the directories
mvn -q --also-make exec:exec -Dexec.executable="pwd"
The following command prints the artifactIds of all sub-modules:
mvn -Dexec.executable='echo' -Dexec.args='${project.artifactId}' exec:exec -q
Example output:
build-tools
aws-sdk-java-pom
core
annotations
utils
http-client-spi
http-client-tests
http-clients
apache-client
test-utils
sdk-core
...
mvn help:evaluate -Dexpression=project.modules
mvn help:evaluate -Dexpression=project.modules[0]
mvn help:evaluate -Dexpression=project.modules[1]
IFS=$'\n'
modules=($(mvn help:evaluate -Dexpression=project.modules | grep -v "^\[" | grep -v "<\/*strings>" | sed 's/<\/*string>//g' | sed 's/[[:space:]]//'))
for module in "${modules[@]}"
do
    echo "$module"
done
Here's a way to do this on Linux outside of Maven, by using strace.
$ strace -o opens.txt -f -e open mvn dependency:tree > /dev/null
$ perl -lne 'print $1 if /"(.*pom\.xml)"/' opens.txt
The first line runs mvn dependency:tree under strace, asking strace to output to the file opens.txt all the calls to the open(2) system call, following any forks (because Java is threaded). This file looks something like:
9690 open("/etc/ld.so.cache", O_RDONLY) = 3
9690 open("/lib/libncurses.so.5", O_RDONLY) = 3
9690 open("/lib/libdl.so.2", O_RDONLY) = 3
The second line asks Perl to print any text inside quotes that happens to end in pom.xml. (The -l flag handles printing newlines, -n wraps the code given in single quotes in a loop that simply reads any files on the command line, and -e introduces the script itself, which uses a regex to find the interesting calls to open.)
It'd be nice to have a maven-native way of doing this :-)
The solution I found is quite simple:
mvn -B -f "$pom_file" org.codehaus.mojo:exec-maven-plugin:1.4.0:exec \
    -Dexec.executable=/usr/bin/echo \
    -Dexec.args='${basedir}/pom.xml' | \
    grep -v '\['
This is a little bit tricky due to the need to grep out the [INFO|WARNING|ERROR] lines to make it usable for scripting, but it saved me a lot of time, since you can put any expression there.
This gets exactly the module names, not the IDs. The result is suitable for use with mvn -pl.
mvn help:evaluate -Dexpression=project.modules -q -DforceStdout | tail -n +2 | head -n -1 | sed 's/\s*<.*>\(.*\)<.*>/\1/'
Or, using the main pom.xml directly:
cat pom.xml | grep "<module>" | sed 's/\s*<.*>\(.*\)<.*>/\1/'
I don't have a direct answer to the question. But using a kind of "module path" as the naming convention for the <name> of my modules works for me. As you'll see, this convention is self-explanatory.
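In practice this just means setting each module's <name> to spell out its location, for example (one of the names you'll see in the reactor output below):
<name>Personal Sandbox - Samples - EJB3 and Cargo Sample - Services</name>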
Given the following project structure:
.
├── pom
│   ├── pom.xml
│   └── release.properties
├── pom.xml
├── samples
│   ├── ejb-cargo-sample
│   │   ├── functests
│   │   │   ├── pom.xml
│   │   │   └── src
│   │   ├── pom.xml
│   │   └── services
│   │       ├── pom.xml
│   │       └── src
│   └── pom.xml
└── tools
    ├── pom.xml
    └── verification-resources
        ├── pom.xml
        └── src
Here is the output of a reactor build:
$ mvn compile
[INFO] Scanning for projects...
[INFO] Reactor build order:
[INFO] Personal Sandbox - Samples - Parent POM
[INFO] Personal Sandbox - Samples - EJB3 and Cargo Sample
[INFO] Personal Sandbox - Tools - Parent POM
[INFO] Personal Sandbox - Tools - Shared Verification Resources
[INFO] Personal Sandbox - Samples - EJB3 and Cargo Sample - Services
[INFO] Personal Sandbox - Samples - EJB3 and Cargo Sample - Functests
[INFO] Sandbox Externals POM
...
This gives, IMHO, a very decent overview of what is happening, it scales well, and it makes it pretty easy to find any module in the file system in case of problems.
Not sure this answers all your needs, though.
I had the same problem but solved it without strace. The mvn exec:exec plugin is used to touch pom.xml in every project, and then find the recently modified pom.xml files:
ctimeref=`mktemp`
mvn --quiet exec:exec -Dexec.executable=/usr/bin/touch -Dexec.args=pom.xml
find . -mindepth 2 -type f -name pom.xml -cnewer "$ctimeref" > maven_projects_list.txt
rm "$ctimeref"
And you have your projects list in the maven_projects_list.txt file.
This is the command I use to list all pom.xml files in a project, run from the root of the project.
find -name pom.xml | grep -v target | sort
What the command does:
find -name pom.xml: what to search for
grep -v target: avoids listing pom.xml files inside target/ directories
sort: lists the results in alphabetical order
An example that lists all modules and the parent of each:
export REPO_DIR=$(pwd)
export REPO_NAME=$(basename ${REPO_DIR})
echo "${REPO_DIR} ==> ${REPO_NAME}"
mvn exec:exec -q \
    -Dexec.executable='echo' \
    -Dexec.args='${basedir}:${project.parent.groupId}:${project.parent.artifactId}:${project.parent.version}:${project.groupId}:${project.artifactId}:${project.version}:${project.packaging}' \
    | perl -pe "s/^${REPO_DIR//\//\\\/}/${REPO_NAME}/g" \
    | perl -pe 's/:/\t/g;'
I prepared the script below because mvn exec:exec runs slowly on GitLab. I haven't found time to investigate it further, but I suspect it tries to get a new runner since it needs a new runtime. So, if you're working with a limited number of runners, using mvn exec:exec to determine the modules affects the overall build time in an unpredictable way.
The snippet below gives you the module name, the packaging, and the path to each module:
#!/bin/bash
set -e;

mvnOptions='--add-opens java.base/java.lang=ALL-UNNAMED';
string=$(MAVEN_OPTS="$mvnOptions" mvn help:active-profiles)
delimiter='Active Profiles for Project*';

modules=()
while read -r line; do
    if [[ $line == $delimiter ]]; then
        module=$(echo $line | sed -E "s/.*'(.*):(.*):(.*):(.*)'.*/\2/");
        packaging=$(echo $line | sed -E "s/.*'(.*):(.*):(.*):(.*)'.*/\3/");
        path=$(MAVEN_OPTS="$mvnOptions" mvn help:evaluate -Dexpression=project.basedir -pl "$module" -q -DforceStdout || true);
        if [[ $path == *" $module "* ]]; then
            path=$(pwd);
        fi
        modules+=("$module" "$packaging" "$path")
    fi;
done <<< "$string"

size="$(echo ${#modules[@]})";
moduleCount=$(( $size / 3 ));

# prints the found modules
if [ $moduleCount -gt 0 ]; then
    echo "$moduleCount module(s) found"
    for (( i=0; i<$moduleCount; ++i)); do
        line=$(($i + 1));
        moduleIndex=$(($i * 3));
        pathIndex=$(($i * 3 + 2));
        module=${modules[moduleIndex]};
        path=${modules[pathIndex]};
        echo "  $line. '$module' at '$path'";
    done;
fi;