I get an error when I build a Dockerfile.
My Dockerfile looks like this:
FROM microsoft/aspnetcore-build:6.0 AS build-env
WORKDIR /app
# copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
# copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# build runtime image
FROM microsoft/aspnetcore:6.0
WORKDIR /app
COPY --from=build-env /app/out .
EXPOSE 80/tcp
ENTRYPOINT ["dotnet", "AuditModule.dll"]
We're using an aws/codebuild/standard:5.0 CodeBuild image to build our own Docker images. I have a buildspec that calls docker build against our Dockerfile and pushes to ECR. The Dockerfile uses Microsoft dotnet base images and calls dotnet publish to build our binaries. This all works fine.
We then added a build stage to our Dockerfile to run unit tests (using dotnet test), and we followed the "FROM scratch" advice combined with docker build --output to try to pull the unit test result files out of the multi-stage target:
docker build --target export-test-results -f ./Dockerfile --output type=local,dest=out .
This works fine locally (an out dir is created containing the files), but when I run this in CodeBuild, I cannot find where the output goes (the command succeeds, but I've no idea where it ends up). I've added ls commands everywhere and cannot locate the out dir, so of course my artifacts step has nothing to archive.
Question is: where is the output being created inside the CodeBuild instance?
My (abbreviated) Dockerfile
ARG VERSION=3.1-alpine3.13
FROM mcr.microsoft.com/dotnet/aspnet:$VERSION AS base
WORKDIR /usr/local/bin
FROM mcr.microsoft.com/dotnet/sdk:$VERSION AS source
#Using pattern here to bypass need for recursive copy from local src folder: https://github.com/moby/moby/issues/15858#issuecomment-614157331
WORKDIR /usr/local
COPY . ./src
RUN mkdir ./proj && \
cd ./src && \
find . -type f -a \( -iname "*.sln" -o -iname "*.csproj" -o -iname "*.dcproj" \) -exec cp --parents "{}" ../proj/ \;
FROM mcr.microsoft.com/dotnet/sdk:$VERSION AS projectfiles
# Copy only the project files with correct directory structure
# then restore packages - this will mean that "restore" will be saved in a layer of its own
COPY --from=source /usr/local/proj /usr/local/src
FROM projectfiles AS restore
WORKDIR /usr/local/src/Postie
RUN dotnet restore --verbosity minimal -s https://api.nuget.org/v3/index.json Postie.sln
FROM restore AS unittests
#Copy all the source files
COPY --from=source /usr/local/src /usr/local/src
RUN cd Postie.Domain.UnitTests && \
dotnet test --no-restore --logger:nunit --verbosity normal || true
FROM scratch as export-test-results
COPY --from=unittests /usr/local/src/Postie/Postie.Domain.UnitTests/TestResults/TestResults.xml ./Postie.Domain.UnitTests.TestResults.xml
My (abbreviated) Buildspec:
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY_SERVER
  build:
    commands:
      - export IMAGE_TAG=:$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7).$CODEBUILD_BUILD_NUMBER
      - export JENKINS_TAG=:$(echo $JENKINS_VERSION_NUMBER | tr '+' '-')
      - echo Build started on `date` with version $IMAGE_TAG
      - cd ./Src/
      - echo Testing the Docker image...
      #see the following for why we use the --output option
      #https://docs.docker.com/engine/reference/commandline/build/#custom-build-outputs
      - docker build --target export-test-results -t ${DOCKER_REGISTRY_SERVER}/postie.api${IMAGE_TAG} -f ./Postie/Postie.Api/Dockerfile --output type=local,dest=out .
artifacts:
  files:
    - '**/*'
  name: builds/$JENKINS_VERSION_NUMBER/artifacts
(I should note that the "artifacts" step above is actually archiving my entire source tree to S3 so that I can prove that the upload is working and also so that I can try to find the "out" dir - but it's not to be found)
I know this is old, but just in case anyone else stumbles across this one: you need to add the Docker BuildKit variable to the CodeBuild environment, otherwise the files will not get exported.
version: 0.2
... etc
phases:
  build:
    commands:
      ... etc
      - echo Testing the Docker image...
      - export DOCKER_BUILDKIT=1
      - docker build --target export-test-results ... etc
      ... etc
If you want to display more output along with this you can also add
- export BUILDKIT_PROGRESS=plain
- export PROGRESS_NO_TRUNC=1
under the DOCKER_BUILDKIT variable.
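With BuildKit enabled, the local exporter writes dest=out relative to the directory docker build is run from, so with the buildspec above the results end up under Src/out. A quick way to confirm this from the build phase is something along these lines (a sketch for debugging only; the ls can be removed once it works):

export DOCKER_BUILDKIT=1
docker build --target export-test-results -f ./Postie/Postie.Api/Dockerfile --output type=local,dest=out .
ls -la out/   # should now contain Postie.Domain.UnitTests.TestResults.xml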
I am trying to set up an Apache server with the ModSecurity Rule set inside a Docker container. I followed a few tutorials (this, this and this) to build a secure Apache server. But I am unable to make the server work with the rule set.
I get this error:
AH00526: Syntax error on line 855 of /etc/httpd/modsecurity.d/crs-setup.conf:
ModSecurity: Found another rule with the same id
I searched for the error and according to the answers on this page the fault lies in including the same rules twice. But as far as I can see, I am not including the same rules twice and I wonder if the error lies elsewhere.
My project file structure is the following:
.
├── conf
│ └── httpd.conf
├── Dockerfile
├── index.html
├── modsecurity.d
│ ├── crs-setup.conf
│ ├── modsecurity.conf
│ └── rules
The httpd.conf file is the default config file for an Apache server, and the ModSecurity configuration is inserted via commands in the Dockerfile.
The Dockerfile has the following content:
FROM centos:7
RUN yum -y update && \
yum -y install less which tree httpd mod_security && \
yum clean all
COPY index.html /var/www/html/
#COPY conf/ /etc/httpd/conf/
COPY modsecurity.d/crs-setup.conf /etc/httpd/modsecurity.d/
COPY modsecurity.d/modsecurity.conf /etc/httpd/modsecurity.d/
COPY modsecurity.d/rules/* /etc/httpd/modsecurity.d/rules/
RUN echo "ServerName localhost" >> /etc/httpd/conf/httpd.conf
RUN echo "<IfModule security2_module>" >> /etc/httpd/conf/httpd.conf
RUN echo " Include modsecurity.d/crs-setup.conf" >> /etc/httpd/conf/httpd.conf
RUN echo " Include modsecurity.d/rules/*.conf" >> /etc/httpd/conf/httpd.conf
RUN echo " SecRuleEngine On" >> /etc/httpd/conf/httpd.conf
RUN echo "</IfModule>" >> /etc/httpd/conf/httpd.conf
EXPOSE 80
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
index.html is just a basic hello file:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" lang="en">
</head>
<body>
<h1>Hello there</h1>
</body>
</html>
crs-setup.conf has the following content (excluding all the comments)
SecRuleEngine On
SecDefaultAction "phase:1,log,auditlog,pass"
SecDefaultAction "phase:2,log,auditlog,pass"
SecCollectionTimeout 600
SecAction \
"id:900990,\
phase:1,\
nolog,\
pass,\
t:none,\
setvar:tx.crs_setup_version=310"
modsecurity.conf has only these two lines
SecRequestBodyAccess On
SecStatusEngine On
rules is a directory which contains the ModSecurity rule set.
I also placed the project files on github if anyone wants to have a look at the whole setup.
I found out why I got the error. The ModSecurity configuration file was misnamed and the rule files had been placed in the wrong directory.
The ModSecurity file was named modsecurity.conf when in fact it should have been mod_security.conf; notice the underscore (source). The rule files should have been placed in a folder called activated_rules (source).
In my working configuration I now have the following folder structure:
.
├── conf
│ └── httpd.conf
├── Dockerfile
├── index.html
└── modsecurity.d
├── crs-setup.conf
├── mod_security.conf
└── activated_rules
The Dockerfile is as follows
FROM centos:7
RUN yum -y update && \
yum -y install less which tree httpd mod_security && \
yum clean all
COPY index.html /var/www/html/
RUN echo "ServerName localhost" >> /etc/httpd/conf/httpd.conf
RUN echo "<IfModule security2_module>" >> /etc/httpd/conf/httpd.conf
RUN echo "Include modsecurity.d/crs-setup.conf" >> /etc/httpd/conf/httpd.conf
RUN echo "Include modsecurity.d/activated_rules/*.conf" >> /etc/httpd/conf/httpd.conf
RUN echo "</IfModule>" >> /etc/httpd/conf/httpd.conf
COPY modsecurity.d/crs-setup.conf /etc/httpd/modsecurity.d/
COPY modsecurity.d/mod_security.conf /etc/httpd/conf.d/
COPY modsecurity.d/rules/* /etc/httpd/modsecurity.d/activated_rules/
EXPOSE 80
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
My project has this structure:
root
|- src/
|- bin/
|- include/
|- CMakeFiles/
|- CMakeLists.txt
When I run the cmake command, it generates some files (CMakeCache.txt and cmake_install.cmake). Is there a way to send them automatically into the CMakeFiles directory, or just to delete them afterwards?
I want to use cmake only (not a command-line combo like cmake -G"Unix Makefiles" && rm -f CMakeCache.txt cmake_install.cmake).
It is better to use an out-of-source build tree.
Create a new directory, build:
root
|- src/
|- bin/
|- include/
|- CMakeFiles/
|- CMakeLists.txt
build
and run cmake in the build directory:
cd build
cmake ../root
All CMake-generated files and build artifacts will be located in the build tree and will not pollute the source directory.
This is the recommended way to work with CMake: http://www.cmake.org/Wiki/CMake_FAQ#Out-of-source_build_trees
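As a minimal sketch of the full workflow with the layout above (cmake --build . is just a generator-agnostic way of running make):

mkdir -p build && cd build
cmake ../root            # configure; cache and generated files stay in build/
cmake --build .          # compile
cd .. && rm -rf build    # "cleaning up" is simply deleting the build tree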
I've found a way to do it, not through CMake itself but by using the generated Makefile:
if(UNIX)
  add_custom_target(distclean @echo cleaning for source distribution)
  add_custom_command(
    COMMENT "distribution clean"
    COMMAND make
    ARGS -C ${CMAKE_CURRENT_BINARY_DIR} clean
    COMMAND find
    ARGS ${CMAKE_CURRENT_BINARY_DIR} -name "CMakeCache.txt" | xargs rm -rf
    COMMAND find
    ARGS ${CMAKE_CURRENT_BINARY_DIR} -name "CMakeFiles" | xargs rm -rf
    COMMAND find
    ARGS ${CMAKE_CURRENT_BINARY_DIR} -name "Makefile" | xargs rm -rf
    COMMAND find
    ARGS ${CMAKE_CURRENT_BINARY_DIR} -name "*.cmake" | xargs rm -rf
    COMMAND find
    ARGS ${CMAKE_CURRENT_SOURCE_DIR} -name "*.qm" | xargs rm -rf
    COMMAND rm
    ARGS -rf ${CMAKE_CURRENT_BINARY_DIR}/install_manifest.txt
    TARGET distclean
  )
endif(UNIX)
and then I just have to do make distclean
I have a complex project where there are many directories that have POM files, but only some of which are sub-modules (possibly transitively) of a particular parent project.
Obviously, Maven knows the list of relevant files because it parses all the <module> tags to find them. But, I only see a list of the <name>s in the [INFO] comments, not the paths to those modules.
Is there a way to have Maven output a list of all the POM files that provided references to projects that are part of the reactor build for a given project?
This is quite simple, but it only gets the artifactId of each module when run from the root (or parent) module:
mvn --also-make dependency:tree | grep maven-dependency-plugin | awk '{ print $(NF-1) }'
If you want the directories
mvn -q --also-make exec:exec -Dexec.executable="pwd"
The following command prints artifactId's of all sub-modules:
mvn -Dexec.executable='echo' -Dexec.args='${project.artifactId}' exec:exec -q
Example output:
build-tools
aws-sdk-java-pom
core
annotations
utils
http-client-spi
http-client-tests
http-clients
apache-client
test-utils
sdk-core
...
mvn help:evaluate -Dexpression=project.modules
mvn help:evaluate -Dexpression=project.modules[0]
mvn help:evaluate -Dexpression=project.modules[1]
IFS=$'\n'
modules=($(mvn help:evaluate -Dexpression=project.modules | grep -v "^\[" | grep -v "<\/*strings>" | sed 's/<\/*string>//g' | sed 's/[[:space:]]//'))
for module in "${modules[@]}"
do
  echo "$module"
done
Here's a way to do this on Linux outside of Maven, by using strace.
$ strace -o opens.txt -f -e open mvn dependency:tree > /dev/null
$ perl -lne 'print $1 if /"(.*pom\.xml)"/' opens.txt
The first line runs mvn dependency:tree under strace, asking strace to output to the file opens.txt all the calls to the open(2) system call, following any forks (because Java is threaded). This file looks something like:
9690 open("/etc/ld.so.cache", O_RDONLY) = 3
9690 open("/lib/libncurses.so.5", O_RDONLY) = 3
9690 open("/lib/libdl.so.2", O_RDONLY) = 3
The second line asks Perl to print any text inside quotes that happens to end in pom.xml. (The -l flag handles printing newlines, the -n wraps the code given in single quotes in a loop that simply reads any files on the command line, and the -e introduces the script itself, which uses a regex to find interesting calls to open.)
It'd be nice to have a maven-native way of doing this :-)
The solution I found is quite simple:
mvn -B -f "$pom_file" org.codehaus.mojo:exec-maven-plugin:1.4.0:exec \
-Dexec.executable=/usr/bin/echo \
-Dexec.args='${basedir}/pom.xml'| \
grep -v '\['
This is a little bit tricky due to the need to grep out the [INFO|WARNING|ERROR] lines to make it usable for scripting, but it saved me a lot of time, since you can put any expression there.
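For example, the filtered output (one pom.xml path per line) can be fed straight into a shell loop. A sketch, reusing $pom_file from the command above:

mvn -B -f "$pom_file" org.codehaus.mojo:exec-maven-plugin:1.4.0:exec \
    -Dexec.executable=/usr/bin/echo \
    -Dexec.args='${basedir}/pom.xml' | grep -v '\[' |
while read -r module_pom; do
    # each line is the pom.xml of one project in the reactor
    echo "reactor POM: $module_pom"
done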
This gets exactly the module name, not the artifactId. The result is suitable for mvn -pl.
mvn help:evaluate -Dexpression=project.modules -q -DforceStdout | tail -n +2 | head -n -1 | sed 's/\s*<.*>\(.*\)<.*>/\1/'
or, using the main pom.xml directly:
cat pom.xml | grep "<module>" | sed 's/\s*<.*>\(.*\)<.*>/\1/'
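Since the result is suitable for mvn -pl, the lines can be joined with commas and passed straight in. A sketch (the -am flag, to also build required modules, is optional):

modules=$(mvn help:evaluate -Dexpression=project.modules -q -DforceStdout \
    | tail -n +2 | head -n -1 | sed 's/\s*<.*>\(.*\)<.*>/\1/' | paste -sd, -)
mvn -pl "$modules" -am clean install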
I don't have a direct answer to the question, but using some kind of "module path" as a naming convention for the <name> of my modules works for me. As you'll see, this convention is self-explanatory.
Given the following project structure:
.
├── pom
│ ├── pom.xml
│ └── release.properties
├── pom.xml
├── samples
│ ├── ejb-cargo-sample
│ │ ├── functests
│ │ │ ├── pom.xml
│ │ │ └── src
│ │ ├── pom.xml
│ │ └── services
│ │ ├── pom.xml
│ │ └── src
│ └── pom.xml
└── tools
├── pom.xml
└── verification-resources
├── pom.xml
└── src
Here is the output of a reactor build:
$ mvn compile
[INFO] Scanning for projects...
[INFO] Reactor build order:
[INFO] Personal Sandbox - Samples - Parent POM
[INFO] Personal Sandbox - Samples - EJB3 and Cargo Sample
[INFO] Personal Sandbox - Tools - Parent POM
[INFO] Personal Sandbox - Tools - Shared Verification Resources
[INFO] Personal Sandbox - Samples - EJB3 and Cargo Sample - Services
[INFO] Personal Sandbox - Samples - EJB3 and Cargo Sample - Functests
[INFO] Sandbox Externals POM
...
This gives IMHO a very decent overview of what is happening, scales correctly, and it's pretty easy to find any module in the file system in case of problems.
Not sure this does answer all your needs though.
I had the same problem but solved it without strace. The mvn exec:exec plugin is used to touch pom.xml in every project, and then find the recently modified pom.xml files:
ctimeref=`mktemp`
mvn --quiet exec:exec -Dexec.executable=/usr/bin/touch -Dexec.args=pom.xml
find . -mindepth 2 -type f -name pom.xml -cnewer "$ctimeref" > maven_projects_list.txt
rm "$ctimeref"
And you have your projects list in the maven_projects_list.txt file.
This is the command I use to list all pom.xml files inside a project, run from the root of the project.
find -name pom.xml | grep -v target | sort
What the command does:
find -name pom.xml: what I'm searching for
grep -v target: avoids listing pom.xml files inside target/ directories
sort: lists the results in alphabetical order
An example to list all modules and the parent of each
export REPO_DIR=$(pwd)
export REPO_NAME=$(basename ${REPO_DIR})
echo "${REPO_DIR} ==> ${REPO_NAME}"
mvn exec:exec -q \
-Dexec.executable='echo' \
-Dexec.args='${basedir}:${project.parent.groupId}:${project.parent.artifactId}:${project.parent.version}:${project.groupId}:${project.artifactId}:${project.version}:${project.packaging}' \
| perl -pe "s/^${REPO_DIR//\//\\\/}/${REPO_NAME}/g" \
| perl -pe 's/:/\t/g;'
I prepared the script below because mvn exec:exec runs slowly on GitLab. I haven't found the time to investigate it further, but I suspect it tries to get a new runner because it needs a new runtime. So if you're working with quite limited runners, using mvn exec:exec to determine the modules affects the overall build time in an unpredictable way.
The snippet below gives you the module name, packaging and path for each module:
#!/bin/bash
set -e;
mvnOptions='--add-opens java.base/java.lang=ALL-UNNAMED';
string=$(MAVEN_OPTS="$mvnOptions" mvn help:active-profiles)
delimiter='Active Profiles for Project*';
modules=()
while read -r line; do
  if [[ $line == $delimiter ]]; then
    module=$(echo $line | sed -E "s/.*'(.*):(.*):(.*):(.*)'.*/\2/");
    packaging=$(echo $line | sed -E "s/.*'(.*):(.*):(.*):(.*)'.*/\3/");
    path=$(MAVEN_OPTS="$mvnOptions" mvn help:evaluate -Dexpression=project.basedir -pl "$module" -q -DforceStdout || true);
    if [[ $path == *" $module "* ]]; then
      path=$(pwd);
    fi
    modules+=("$module" "$packaging" "$path")
  fi;
done <<< "$string"
size="$(echo ${#modules[@]})";
moduleCount=$(( $size / 3 ));
# prints the found modules
if [ $moduleCount -gt 0 ]; then
  echo "$moduleCount module(s) found"
  for (( i=0; i<$moduleCount; ++i)); do
    line=$(($i + 1));
    moduleIndex=$(($i * 3));
    pathIndex=$(($i * 3+2));
    module=${modules[moduleIndex]};
    path=${modules[pathIndex]};
    echo " $line. '$module' at '$path'";
  done;
fi;