ZAP API scan - missing 'c' in config

I ran the following command:
docker run -v /etc/hosts:/etc/hosts -v $(pwd):/zap/wrk:rw -t owasp/zap2docker-weekly zap-api-scan.py -t api.yaml -f openapi -r zap_report.html -config replacer.full_list\(0\).description=auth1 -config replacer.full_list\(0\).enabled=true -config replacer.full_list\(0\).matchtype=REQ_HEADER -config replacer.full_list\(0\).matchstr=X-XXXXX-APIkey -config replacer.full_list\(0\).regex=false -config replacer.full_list\(0\).replacement=123456789
but got the following error:
Traceback (most recent call last):
File "/zap/zap-api-scan.py", line 539, in
File "/zap/zap-api-scan.py", line 246, in main
with open(base_dir + config_file) as f:
IOError: [Errno 2] No such file or directory: '/zap/wrk/onfig'
How is this possible?

The problem is in how you pass parameters to the Python script. The script parses -config as -c onfig and then tries to read its configuration from a file named onfig. You should pass ZAP options using the -z parameter, in the following format: -z "-config aaa=bbb -config ccc=ddd"
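Applied to the command in the question, the fixed invocation would look like this (the backslash escapes around the parentheses are no longer needed once the options sit inside the quoted -z value):
docker run -v /etc/hosts:/etc/hosts -v $(pwd):/zap/wrk:rw -t owasp/zap2docker-weekly zap-api-scan.py -t api.yaml -f openapi -r zap_report.html -z "-config replacer.full_list(0).description=auth1 -config replacer.full_list(0).enabled=true -config replacer.full_list(0).matchtype=REQ_HEADER -config replacer.full_list(0).matchstr=X-XXXXX-APIkey -config replacer.full_list(0).regex=false -config replacer.full_list(0).replacement=123456789"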


GCP Dataproc - Failed to construct kafka consumer, Failed to load SSL keystore dataproc.jks of type JKS

I'm trying to run a Structured Streaming program on GCP Dataproc, which accesses the data from Kafka and prints it.
Access to Kafka is using SSL, and the truststore and keystore files are stored in buckets.
I'm using the Google Storage API to access the bucket and store the files in the current working directory. The truststore and keystore are then passed to the Kafka consumer/producer.
However, I'm getting an error.
Command:
gcloud dataproc jobs submit pyspark /Users/karanalang/Documents/Technology/gcp/DataProc/StructuredStreaming_Kafka_GCP-Batch-feb2-v2.py --cluster dataproc-ss-poc --properties spark.jars.packages=org.apache.spark:spark-sql-kafka-0-10_2.12:3.2.0 --region us-central1
Code is shown below:
from pyspark.sql import SparkSession
from google.cloud import storage
import os

spark = SparkSession.builder.appName('StructuredStreaming_VersaSase').getOrCreate()
kafkaBrokers='<broker-ip>:9094'
topic = "versa-sase"
security_protocol="SSL"
# Google Storage API to access the keys in the buckets
client = storage.Client()
bucket = client.get_bucket('ssl-certs-karan')
blob_ssl_truststore = bucket.get_blob('cap12.jks')
ssl_truststore_location = '{}/{}'.format(os.getcwd(), blob_ssl_truststore.name)
blob_ssl_truststore.download_to_filename(ssl_truststore_location)
ssl_truststore_password="<ssl_truststore_password>"
blob_ssl_keystore = bucket.get_blob('dataproc-versa-sase-p12-1.jks')
ssl_keystore_location = '{}/{}'.format(os.getcwd(), blob_ssl_keystore.name)
blob_ssl_keystore.download_to_filename(ssl_keystore_location)
ssl_keystore_password="<ssl_keystore_password>"
consumerGroupId = "versa-sase-grp"
checkpoint = "gs://ss-checkpoint/"
print(" SPARK.SPARKCONTEXT -> ", spark.sparkContext)
df = spark.read.format('kafka')\
.option("kafka.bootstrap.servers",kafkaBrokers)\
.option("kafka.security.protocol","SSL") \
.option("kafka.ssl.truststore.location",ssl_truststore_location) \
.option("kafka.ssl.truststore.password",ssl_truststore_password) \
.option("kafka.ssl.keystore.location", ssl_keystore_location)\
.option("kafka.ssl.keystore.password", ssl_keystore_password)\
.option("subscribe", topic) \
.option("kafka.group.id", consumerGroupId)\
.option("startingOffsets", "earliest") \
.load()
print(" df -> ", df)
query = df.selectExpr("CAST(value AS STRING)", "CAST(key AS STRING)", "topic", "timestamp") \
.write \
.format("console") \
.option("numRows",100)\
.option("checkpointLocation", checkpoint) \
.option("outputMode", "complete")\
.option("truncate", "false") \
.save("output")
Error:
Traceback (most recent call last):
File "/tmp/3e7304f8e27d4436a2f382280cebe7c5/StructuredStreaming_Kafka_GCP-Batch-feb2-v2.py", line 83, in <module>
query = df.selectExpr("CAST(value AS STRING)", "CAST(key AS STRING)", "topic", "timestamp") \
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 1109, in save
File "/usr/lib/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1304, in __call__
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 111, in deco
File "/usr/lib/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 326, in get_return_value
py4j.protocol.Py4JJavaError
22/02/02 23:11:08 DEBUG org.apache.hadoop.ipc.Client: IPC Client (1416219052) connection to dataproc-ss-poc-m/10.128.0.78:8030 from root sending #171 org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB.allocate
22/02/02 23:11:08 DEBUG org.apache.hadoop.ipc.Client: IPC Client (1416219052) connection to dataproc-ss-poc-m/10.128.0.78:8030 from root got value #171
22/02/02 23:11:08 DEBUG org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 2ms
: An error occurred while calling o84.save.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3) (dataproc-ss-poc-w-0.c.versa-kafka-poc.internal executor 1): org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:823)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:665)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:613)
at org.apache.spark.sql.kafka010.consumer.InternalKafkaConsumer.createConsumer(KafkaDataConsumer.scala:124)
at org.apache.spark.sql.kafka010.consumer.InternalKafkaConsumer.<init>(KafkaDataConsumer.scala:61)
at org.apache.spark.sql.kafka010.consumer.InternalKafkaConsumerPool$ObjectFactory.create(InternalKafkaConsumerPool.scala:206)
at org.apache.spark.sql.kafka010.consumer.InternalKafkaConsumerPool$ObjectFactory.create(InternalKafkaConsumerPool.scala:201)
at org.apache.commons.pool2.BaseKeyedPooledObjectFactory.makeObject(BaseKeyedPooledObjectFactory.java:60)
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.create(GenericKeyedObjectPool.java:1041)
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:342)
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:265)
at org.apache.spark.sql.kafka010.consumer.InternalKafkaConsumerPool.borrowObject(InternalKafkaConsumerPool.scala:84)
at org.apache.spark.sql.kafka010.consumer.KafkaDataConsumer.retrieveConsumer(KafkaDataConsumer.scala:573)
at org.apache.spark.sql.kafka010.consumer.KafkaDataConsumer.getOrRetrieveConsumer(KafkaDataConsumer.scala:558)
at org.apache.spark.sql.kafka010.consumer.KafkaDataConsumer.$anonfun$getAvailableOffsetRange$1(KafkaDataConsumer.scala:359)
at org.apache.spark.util.UninterruptibleThread.runUninterruptibly(UninterruptibleThread.scala:77)
at org.apache.spark.sql.kafka010.consumer.KafkaDataConsumer.runUninterruptiblyIfPossible(KafkaDataConsumer.scala:618)
at org.apache.spark.sql.kafka010.consumer.KafkaDataConsumer.getAvailableOffsetRange(KafkaDataConsumer.scala:358)
at org.apache.spark.sql.kafka010.KafkaSourceRDD.resolveRange(KafkaSourceRDD.scala:123)
at org.apache.spark.sql.kafka010.KafkaSourceRDD.compute(KafkaSourceRDD.scala:75)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
...
Caused by: org.apache.kafka.common.KafkaException: Failed to load SSL keystore /tmp/3e7304f8e27d4436a2f382280cebe7c5/dataproc-versa-sase-p12-1.jks of type JKS
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory$FileBasedStore.load(DefaultSslEngineFactory.java:377)
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory$FileBasedStore.<init>(DefaultSslEngineFactory.java:349)
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory.createKeystore(DefaultSslEngineFactory.java:299)
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory.configure(DefaultSslEngineFactory.java:161)
at org.apache.kafka.common.security.ssl.SslFactory.instantiateSslEngineFactory(SslFactory.java:138)
at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:95)
at org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:74)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:192)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:81)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:105)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:737)
... 53 more
Caused by: java.nio.file.NoSuchFileException: /tmp/3e7304f8e27d4436a2f382280cebe7c5/dataproc-versa-sase-p12-1.jks
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.newByteChannel(Files.java:407)
at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)
at java.nio.file.Files.newInputStream(Files.java:152)
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory$FileBasedStore.load(DefaultSslEngineFactory.java:370)
From my Mac, I'm using PKCS12 files (.p12) and am able to access the Kafka cluster in SSL mode. However, in Dataproc it seems the expected file format is JKS.
Here is the command I used to convert the .p12 file to JKS format:
keytool -importkeystore -srckeystore dataproc-versa-sase.p12 -srcstoretype pkcs12 -srcalias versa-sase-user -destkeystore dataproc-versa-sase-p12-1.jks -deststoretype jks -deststorepass <password> -destalias versa-sase-user
What needs to be done to fix this? It seems the JKS file is not accessible to the Spark program.
Thanks in advance!
I would add the following options if you want to use JKS:
.option("kafka.ssl.keystore.type", "JKS")
.option("kafka.ssl.truststore.type", "JKS")
By the way, this will also work with PKCS12:
.option("kafka.ssl.keystore.type", "PKCS12")
.option("kafka.ssl.truststore.type", "PKCS12")
As someone mentioned earlier, you can check whether it is a JDK compatibility issue by doing something like this:
keytool -v -list -storetype pkcs12 -keystore kafka-client-jdk8-truststore.p12
If you receive a message displaying the keystore contents, you are in the clear; but if you receive a message saying the identifier can't be found, that indicates a mismatch between JDKs.
Per the note from @OneCricketer, I was able to get this working by using --files <gs://cert1>,<gs://cert2> (see the sketch after the commands below).
Also, this works when using cluster mode.
Cluster mode command:
gcloud dataproc jobs submit pyspark /Users/karanalang/Documents/Technology/gcp/DataProc/StructuredStreaming_Kafka_GCP-Batch-feb2-v2.py --cluster dataproc-ss-poc --properties spark.jars.packages=org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.2,spark.submit.deployMode=cluster --region us-central1
Client mode command:
gcloud dataproc jobs submit pyspark /Users/karanalang/Documents/Technology/gcp/DataProc/StructuredStreaming_Kafka_GCP-Batch-feb2-v2.py --cluster dataproc-ss-poc --properties spark.jars.packages=org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.2 --region us-central1
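For reference, here is a sketch of the cluster-mode submit with the certs shipped via --files (the bucket name comes from the question's code and the file names from the driver snippet below; adjust them to your setup):
gcloud dataproc jobs submit pyspark StructuredStreaming_Kafka_GCP-Batch-feb2-v2.py --cluster dataproc-ss-poc --files gs://ssl-certs-karan/ca.p12,gs://ssl-certs-karan/dataproc-versa-sase.p12 --properties spark.jars.packages=org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.2,spark.submit.deployMode=cluster --region us-central1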
Accessing the certs in the Driver:
# access using the cert name
ssl_truststore_location="ca.p12"
ssl_keystore_location="dataproc-versa-sase.p12"
df_stream = spark.readStream.format('kafka') \
.option("kafka.security.protocol", "SSL") \
.option("kafka.ssl.truststore.location", ssl_truststore_location) \
.option("kafka.ssl.truststore.password", ssl_truststore_password) \
.option("kafka.ssl.keystore.location", ssl_keystore_location) \
.option("kafka.ssl.keystore.password", ssl_keystore_password) \
.option("kafka.bootstrap.servers",kafkaBrokers)\
.option("subscribe", topic) \
.option("kafka.group.id", consumerGroupId)\
.option("startingOffsets", "earliest") \
.option("failOnDataLoss", "false") \
.option("maxOffsetsPerTrigger", 10) \
.load()
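In cluster mode, --files places the listed files in the YARN container's working directory for the driver and each executor, which is why the bare file names above resolve. A quick sanity check on the driver (a sketch, using the names from the snippet above):
import os
print(os.path.exists("ca.p12"), os.path.exists("dataproc-versa-sase.p12"))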

Using "snakemake --config" to pass flags to a command

I have read the Snakemake tutorial and it is clear to me how to use "snakemake --config ..." to modify parameters, and these get passed to the command being executed. Can I use "--config" to pass a flag to a command? For example, can I write a Snakefile that will execute either of these commands, based on using "--config"?
muscle -in unaligned.fa -out aligned.fa
muscle -in unaligned.fa -out aligned.fa -msf
Yes, from within a shell command definition in Snakemake, you can directly access config:
rule a:
    input: ...
    output: ...
    shell:
        "muscle -in {input} -out {output} {config[muscle-params]}"
This assumes that you invoke, e.g., snakemake --config muscle-params="-msf", or (even better) have the key defined in your config file.
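For example, a minimal sketch with the key kept in a config file (the file name and rule are hypothetical):
# config.yaml
muscle-params: "-msf"

# Snakefile
configfile: "config.yaml"

rule align:
    input: "unaligned.fa"
    output: "aligned.fa"
    shell:
        "muscle -in {input} -out {output} {config[muscle-params]}"
Setting muscle-params: "" in config.yaml and overriding it with snakemake --config muscle-params="-msf" on the command line then switches between the two muscle invocations from the question.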

"Not a tty" error in Alpine-based duplicity image

This is my first ever question at Stack Overflow, so I hope it'll adhere to the community guidelines:
I've built a Docker image, based on an already existing image, whose sole purpose is to run duplicity in a container to back up files and folders to an Amazon S3 bucket in Europe.
Duplicity worked for a couple of days when run manually inside a container created from the image. Now I've moved on to running containers via unit files on the host with CoreOS, and things don't work anymore; the command also won't work if I run it manually inside a duplicity container.
The run command:
docker run --rm --env-file=<my backup env file>.env --name=<container image> -v <cache container>:/home/duplicity/.cache/duplicity -v <docker volume with gpg keys>:/home/duplicity/.gnupg --volumes-from <docker container of interest> gymnae/duplicity
The env-file contains the following:
PASSPHRASE=<my super secret passphrase>
AWS_ACCESS_KEY_ID=<my aws access key id>
AWS_SECRET_ACCESS_KEY=<my aws access key>
SOURCE_PATH=<where does the data come from>
REMOTE_URL=s3://s3.eu-central-1.amazonaws.com/<my bucket>
PARAMS_CLEAN="--remove-older-than 3M --force --extra-clean"
ENCRYPT_KEY=<derived from the gpg key>
And the init.sh, which is called on docker run, looks like this:
#!/bin/sh
duplicity \
--verbosity 8 \
--s3-use-ia \
--s3-use-new-style \
--s3-use-server-side-encryption \
--s3-european-buckets \
--allow-source-mismatch \
--ssl-no-check-certificate \
--s3-unencrypted-connection \
--volsize 150 \
--gpg-options "--no-tty" \
--encrypt-key $ENCRYPT_KEY \
--sign-key $ENCRYPT_KEY \
$SOURCE_PATH \
$REMOTE_URL
I tried with -i, -it, -t, and just -d, but the result is always the same:
===== Begin GnuPG log =====
gpg: using "<supersecret>" as default secret key for signing
gpg: signing failed: Not a tty
gpg: [stdin]: sign+encrypt failed: Not a tty
===== End GnuPG log =====
GPG error detail: Traceback (most recent call last):
File "/usr/bin/duplicity", line 1532, in <module>
with_tempdir(main)
File "/usr/bin/duplicity", line 1526, in with_tempdir
fn()
File "/usr/bin/duplicity", line 1380, in main
do_backup(action)
File "/usr/bin/duplicity", line 1508, in do_backup
incremental_backup(sig_chain)
File "/usr/bin/duplicity", line 662, in incremental_backup
globals.backend)
File "/usr/bin/duplicity", line 425, in write_multivol
at_end = gpg.GPGWriteFile(tarblock_iter, tdp.name, globals.gpg_profile, globals.volsize)
File "/usr/lib/python2.7/site-packages/duplicity/gpg.py", line 356, in GPGWriteFile
file.close()
File "/usr/lib/python2.7/site-packages/duplicity/gpg.py", line 241, in close
self.gpg_failed()
File "/usr/lib/python2.7/site-packages/duplicity/gpg.py", line 226, in gpg_failed
raise GPGError(msg)
GPGError: GPG Failed, see log below:
===== Begin GnuPG log =====
gpg: using "<supersecret>" as default secret key for signing
gpg: signing failed: Not a tty
gpg: [stdin]: sign+encrypt failed: Not a tty
===== End GnuPG log =====
This "Not a tty" error while gpg tries to sign is weird.
It didn't seem to be a problem before (or maybe some crazy typing on a late-night shift made it work once), but now it just doesn't want to work anymore.
For anyone who struggles with the same problem: I found the answer thanks to the developer of duply:
https://sourceforge.net/p/ftplicity/bugs/76/#74c5
In short, starting with gpg 2.1 you need to add GPG_OPTS='--pinentry-mode loopback' and add allow-loopback-pinentry to .gnupg/gpg-agent.conf.
This brought me much closer to a working setup.
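Applied to the setup above, that might look like the following sketch (the gpg-agent.conf path follows the .gnupg volume from the run command; for plain duplicity, the equivalent of duply's GPG_OPTS is extending --gpg-options):
# once, in the container's GnuPG home (gpg >= 2.1):
echo "allow-loopback-pinentry" >> /home/duplicity/.gnupg/gpg-agent.conf

# in init.sh, change the --gpg-options line to:
--gpg-options "--no-tty --pinentry-mode loopback" \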

CouchDB SSL handshake error

I've installed CouchDB on the Mac via Homebrew (yay Homebrew!):
brew install couchdb
Then I've done a bunch of SSL setup steps (in a shell script) that are detailed in the official documentation (http://docs.couchdb.org/en/1.6.1/config/http.html):
#!/bin/sh
currDir=$(pwd)
mkdir couch_certs
cd couch_certs
openssl genrsa > privkey.pem
openssl req -new -x509 -key privkey.pem -out couchdb.pem -days 1095
chmod 600 privkey.pem couchdb.pem
perl -p -i -e "s#\[daemons\]#[daemons]\nhttpsd = {couch_httpd, start_link, [https]}#" /usr/local/etc/couchdb/default.ini
perl -p -i -e "s#\[ssl\]#[ssl]\ncert_file = ${currDir}/couchdb.pem#" /usr/local/etc/couchdb/default.ini
perl -p -i -e "s#\[ssl\]#[ssl]\nkey_file = ${currDir}/privkey.pem#" /usr/local/etc/couchdb/default.ini
Then (in the same terminal) I launch CouchDB:
couchdb
In a different terminal I test that:
curl -k https://127.0.0.1:6984/
And get a failure:
curl: (35) Server aborted the SSL handshake
What am I doing wrong?
Note that I get the same error when installing CouchDB as an application (section 2.3.1 of http://docs.couchdb.org/en/stable/install/mac.html).
Edit: I think it is an Erlang SSL issue: http://bugs.erlang.org/browse/ERL-74
My root cause was an older version of openssl (the one that came with OS X 10.10.5). After a Homebrew install of openssl and the same key-gen sequence, it all works.
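If you hit the same issue, here is a sketch of the fix, assuming Homebrew's keg-only openssl lands under /usr/local/opt/openssl:
brew install openssl
cd couch_certs
/usr/local/opt/openssl/bin/openssl genrsa > privkey.pem
/usr/local/opt/openssl/bin/openssl req -new -x509 -key privkey.pem -out couchdb.pem -days 1095
Then restart couchdb and retry curl -k https://127.0.0.1:6984/.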

Apache 2.4.x manual build and install on RHEL 6.4

OS: Red Hat Enterprise Linux Server release 6.4 (Santiago)
The current yum installation of Apache on this OS is 2.2.15. I require the latest 2.4.x branch, so I have gone about installing it manually. I have noted the complete procedure I undertook, including unpacking the APR and APR-Util sources into the Apache source tree beforehand, but I guess the following is the most important part of the procedure:
GATHER LATEST APACHE AND APR
$ cd ~
$ mkdir apache-src
$ cd apache-src
$ wget http://apache.insync.za.net//httpd/httpd-2.4.6.tar.gz
$ tar xvf httpd-2.4.6.tar.gz
$ cd httpd-2.4.6
$ cd srclib
$ wget http://apache.insync.za.net//apr/apr-1.5.0.tar.gz
$ tar -xvzf apr-1.5.0.tar.gz
$ mv apr-1.5.0 apr
$ rm -f apr-1.5.0.tar.gz
$ wget http://apache.insync.za.net//apr/apr-util-1.5.3.tar.gz
$ tar -xvzf apr-util-1.5.3.tar.gz
$ mv apr-util-1.5.3 apr-util
INSTALL DEVEL PACKAGES
yum update --skip-broken (There is a dependency issue with the latest Chrome needing the latest libstdc++, which is not available for RHEL and CentOS)
yum install apr-devel
yum install apr-util-devel
yum install pcre-devel
INSTALL
$ cd ~/apache-src/httpd-2.4.6
$ ./configure --prefix=/etc/httpd --enable-mods-shared="all" --enable-rewrite --with-included-apr
$ make
$ make install
NOTE: At the time of running the above, /etc/httpd is empty.
This seems to have gone fine until I attempt to start the httpd service. It seems that every module included in httpd.conf fails to load, with a message similar to this one for mod_rewrite:
httpd: Syntax error on line 148 of /etc/httpd/conf/httpd.conf: Cannot load /etc/httpd/modules/mod_rewrite.so into server: /etc/httpd/modules/mod_rewrite.so: undefined symbol: ap_global_mutex_create
I've gone right through the list of enabled modules in httpd.conf and commented them out one at a time. All trigger an error as above; however, the "undefined symbol" value is often different (so not always ap_global_mutex_create).
Am I missing a step? Although I can find some portion of that error on Google, most of the solutions centre around the .so files not being reachable. That doesn't seem to be the issue here, and the modules are present in /etc/httpd/modules.
You have the correct procedure, but it's incomplete. After the installation you have to enable SSL in httpd.conf and generate the server.crt and server.key files.
Below is the complete procedure:
1. Download Apache
cd /usr/src
wget http://www.apache.org/dist/httpd/httpd-2.4.23.tar.gz
tar xvf httpd-2.4.23.tar.gz
2. Download APR and APR-Util
cd /usr/src
wget -c http://mirror.cogentco.com/pub/apache/apr/apr-1.5.2.tar.gz
wget -c http://mirror.cogentco.com/pub/apache/apr/apr-util-1.5.4.tar.gz
tar xvf apr-1.5.2.tar.gz
tar xvf apr-util-1.5.4.tar.gz
Now put the APR and APR-Util you downloaded into your apache source files.
mv apr-1.5.2 /usr/src/httpd-2.4.23/srclib/apr
mv apr-util-1.5.4 /usr/src/httpd-2.4.23/srclib/apr-util
3. Compile
cd /usr/src/httpd-2.4.23
./configure --enable-so --enable-ssl --with-mpm=prefork --with-included-apr --with-included-apr-util
make
make install
As you can see, in the ./configure command we specify command-line options to include APR and APR-Util.
4. Enable SSL in httpd.conf
The Apache configuration file httpd.conf is located under /usr/local/apache2/conf.
nano /usr/local/apache2/conf/httpd.conf
Uncomment the httpd-ssl.conf Include line and the LoadModule ssl_module line in the /usr/local/apache2/conf/httpd.conf file (remove the leading '#' shown here):
# LoadModule ssl_module modules/mod_ssl.so
# Include conf/extra/httpd-ssl.conf
View the httpd-ssl.conf to review all the default SSL configurations. For most cases, you don’t need to modify anything in this file.
nano /usr/local/apache2/conf/extra/httpd-ssl.conf
The SSL certificate and key are required before we start Apache. The server.crt and server.key files mentioned in httpd-ssl.conf need to be created before we move forward.
cd /usr/local/apache2/conf/extra
egrep 'server.crt|server.key' httpd-ssl.conf
SSLCertificateFile "/usr/local/apache2/conf/server.crt"
SSLCertificateKeyFile "/usr/local/apache2/conf/server.key"
5. Generate the server.crt and server.key files
First, generate server.key using openssl.
cd /usr/src
openssl genrsa -des3 -out server.key 1024
The above command will ask for a password. Make sure to remember it; you will need it when starting Apache later.
Next, generate a certificate request file (server.csr) using the above server.key file.
openssl req -new -key server.key -out server.csr
Finally, generate a self-signed SSL certificate (server.crt) using the above server.key and server.csr files.
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
Copy the server.key and server.crt files to the appropriate Apache configuration directory:
cp server.key /usr/local/apache2/conf/
cp server.crt /usr/local/apache2/conf/
6. Start Apache
/usr/local/apache2/bin/apachectl start
If you are getting the error message below:
AH00526: Syntax error on line 51 of /usr/local/apache2/conf/extra/httpd-ssl.conf:
Invalid command 'SSLCipherSuite', perhaps misspelled or defined by a module not included in the server configuration
Make sure to uncomment the line shown below in httpd.conf:
vi /usr/local/apache2/conf/httpd.conf
# LoadModule socache_shmcb_module modules/mod_socache_shmcb.so
Finally, Apache will prompt you to enter the password for your private key before starting up.
Verify that the Apache httpd process is running in the background.
ps -ef | grep http
You should see something like this:
root 29529 1 0 13:08 ? 00:00:00 /usr/local/apache2/bin/httpd -k start
antoine 29530 29529 0 13:08 ? 00:00:00 /usr/local/apache2/bin/httpd -k start
antoine 29531 29529 0 13:08 ? 00:00:00 /usr/local/apache2/bin/httpd -k start
antoine 29532 29529 0 13:08 ? 00:00:00 /usr/local/apache2/bin/httpd -k start
root 29616 18260 0 13:09 pts/0 00:00:00 grep http
By default, Apache SSL runs on port 443. Open a web browser and verify that you can access your Apache server at https://{your-ip-address}.
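Or check from the shell; the -k flag is needed because the certificate is self-signed:
curl -k https://{your-ip-address}/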
I hope this helps; otherwise, I advise you to see: http://jasonpowell42.wordpress.com/2013/04/05/install-apache-2-4-4-on-centos-6-4/
baprutil-1.la /usr/src/httpd-2.4.27/srclib/apr/libapr-1.la -lrt -lcrypt -lpthread -ldl -lcrypt
/usr/src/httpd-2.4.27/srclib/apr-util/.libs/libaprutil-1.so: undefined reference to `XML_GetErrorCode'
/usr/src/httpd-2.4.27/srclib/apr-util/.libs/libaprutil-1.so: undefined reference to `XML_SetEntityDeclHandler'
/usr/src/httpd-2.4.27/srclib/apr-util/.libs/libaprutil-1.so: undefined reference to `XML_ParserCreate'
/usr/src/httpd-2.4.27/srclib/apr-util/.libs/libaprutil-1.so: undefined reference to `XML_SetCharacterDataHandler'
/usr/src/httpd-2.4.27/srclib/apr-util/.libs/libaprutil-1.so: undefined reference to `XML_ParserFree'
/usr/src/httpd-2.4.27/srclib/apr-util/.libs/libaprutil-1.so: undefined reference to `XML_SetUserData'
/usr/src/httpd-2.4.27/srclib/apr-util/.libs/libaprutil-1.so: undefined reference to `XML_StopParser'
/usr/src/httpd-2.4.27/srclib/apr-util/.libs/libaprutil-1.so: undefined reference to `XML_Parse'
/usr/src/httpd-2.4.27/srclib/apr-util/.libs/libaprutil-1.so: undefined reference to `XML_ErrorString'
/usr/src/httpd-2.4.27/srclib/apr-util/.libs/libaprutil-1.so: undefined reference to `XML_SetElementHandler'
collect2: error: ld returned 1 exit status
make[2]: *** [htpasswd] Error 1
make[2]: Leaving directory `/usr/src/httpd-2.4.27/support'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/usr/src/httpd-2.4.27/support'
make: *** [all-recursive] Error 1
This error is received in the make step if --with-included-apr-util is not specified when running ./configure.
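In other words, go back to the source tree and re-run configure with both bundled-library flags before rebuilding (paths from the log above; flags as in step 3 above):
cd /usr/src/httpd-2.4.27
./configure --enable-so --enable-ssl --with-mpm=prefork --with-included-apr --with-included-apr-util
make
make install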