In "gst-rtsp-server/examples/test-video.c", it seems one can set up TLS certificate and launch rtsp server. I wonder how it would work at the client side, including e.g. command line parameters and certificate authority, etc. Thank you for the tutorial.
Here is more information after some attempts; the most important error, I think, is "Peer failed to perform TLS handshake".
Server side:
$ gst-rtsp-server/examples/test-video
Client side:
$ GST_DEBUG=3 gst-launch-1.0 rtspsrc location=rtsps://127.0.0.1:8554/test protocols=tls ! rtph264depay ! avdec_h264 ! xvimagesink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsps://127.0.0.1:8554/test
0:00:00.055578735 12767 0xa51230 ERROR default gstrtspconnection.c:698:gst_rtsp_connection_connect: failed to connect: Peer failed to perform TLS handshake
0:00:00.055643339 12767 0xa51230 ERROR rtspsrc gstrtspsrc.c:3677:gst_rtsp_conninfo_connect:<rtspsrc0> Could not connect to server. (Generic error)
0:00:00.055679389 12767 0xa51230 WARN rtspsrc gstrtspsrc.c:6148:gst_rtspsrc_retrieve_sdp:<rtspsrc0> error: Failed to connect. (Generic error)
0:00:00.055764506 12767 0xa51230 WARN rtspsrc gstrtspsrc.c:6227:gst_rtspsrc_open:<rtspsrc0> can't get sdp
0:00:00.055793412 12767 0xa51230 WARN rtspsrc gstrtspsrc.c:4525:gst_rtspsrc_loop:<rtspsrc0> we are not connected
ERROR: from element /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0: Could not open resource for reading and writing.
Additional debug info:
gstrtspsrc.c(6148): gst_rtspsrc_retrieve_sdp (): /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0:
Failed to connect. (Generic error)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
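For what it's worth, a hedged client-side sketch: rtspsrc exposes a tls-validation-flags property (and a tls-database property for supplying a CA from application code), so when the server uses the self-signed certificate hardcoded in test-video.c, certificate validation can be relaxed for local testing. This assumes gst-launch accepts a numeric value for the flags:

$ GST_DEBUG=3 gst-launch-1.0 rtspsrc location=rtsps://127.0.0.1:8554/test protocols=tls \
    tls-validation-flags=0 ! rtph264depay ! avdec_h264 ! xvimagesink

Disabling validation is only suitable for testing; a proper setup would add the server's CA to a GTlsDatabase and set it on rtspsrc from application code.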
When I run pacman -Syu to update, it first shows no errors; I update everything as usual, but when I run pacman -Syu again afterwards, it shows the following. What is the reason, and is there any solution?
:: Synchronizing package databases...
core is up to date
extra is up to date
community is up to date
error: failed retrieving file 'core.db' from mirror.erickochen.nl : Failed to connect to mirror.erickochen.nl port 443 after 5241 ms: Connection timed out
error: failed retrieving file 'extra.db' from mirror.erickochen.nl : Failed to connect to mirror.erickochen.nl port 443 after 5202 ms: Connection timed out
error: failed retrieving file 'community.db' from mirror.erickochen.nl : Failed to connect to mirror.erickochen.nl port 443 after 5202 ms: Connection timed out
warning: too many errors from mirror.erickochen.nl, skipping for the remainder of this transaction
:: Starting full system upgrade...
there is nothing to do
Sometimes mirrors go offline. It's recommended to have multiple mirrors so you don't have a single point of failure, and to keep your mirror list updated. Using reflector is recommended, since it also finds fast candidates based on your location.
For the time being, edit /etc/pacman.d/mirrorlist and uncomment a couple of mirrors, then try updating again.
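A hedged sketch of the reflector approach (the --latest count and sort criterion here are arbitrary example choices, not required values):

$ sudo pacman -S reflector
$ sudo reflector --latest 20 --protocol https --sort rate --save /etc/pacman.d/mirrorlist
$ sudo pacman -Syu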
I have Apache Kafka (v. 2.13-3.0.0) installed on a remote Ubuntu server.
I followed this tutorial to secure my cluster:
https://medium.com/egen/securing-kafka-cluster-using-sasl-acl-and-ssl-dec15b439f9d
but when I try to start Kafka with the JAAS config file using the commands:
export KAFKA_OPTS=-Djava.security.auth.login.config=<kafka-binary-dir>/config/kafka_server_jaas.conf
./bin/kafka-server-start.sh ./config/server.properties
I receive the error:
[2021-11-12 10:30:47,864] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2021-11-12 10:30:48,089] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2021-11-12 10:30:48,099] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
java.lang.ClassNotFoundException: kafka.security.auth.SimpleAclAuthorizer
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:398)
at org.apache.kafka.common.utils.Utils.loadClass(Utils.java:417)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:406)
at kafka.security.authorizer.AuthorizerUtils$.createAuthorizer(AuthorizerUtils.scala:31)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1583)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1394)
at kafka.Kafka$.buildServer(Kafka.scala:67)
at kafka.Kafka$.main(Kafka.scala:87)
at kafka.Kafka.main(Kafka.scala)
This is the SSL config in the server.properties file:
########### SECURITY using SCRAM-SHA-512 and SSL
listeners=PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093,SASL_SSL://localhost:9094
advertised.listeners=PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093,SASL_SSL://localhost:9094
security.inter.broker.protocol=SASL_SSL
ssl.endpoint.identification.algorithm=
ssl.client.auth=required
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
sasl.enabled.mechanisms=SCRAM-SHA-512
# Broker security settings
ssl.truststore.location=/home/kafka/Downloads/kafka_2.13-3.0.0/config/truststore/kafka.truststore.jks
ssl.truststore.password=giuseppe
ssl.keystore.location=/home/kafka/Downloads/kafka_2.13-3.0.0/config/keystore/kafka.keystore.jks
ssl.keystore.password=giuseppe
ssl.key.password=giuseppe
# ACLs
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:admin
#zookeeper SASL
zookeeper.set.acl=false
########### SECURITY using SCRAM-SHA-512 and SSL
If I comment out the two ACL rows, I receive this error:
[2021-11-12 11:05:29,301] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-11-12 11:05:29,331] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /tmp/kafka-logs. A Kafka instance in another process or thread is using this directory.
at kafka.log.LogManager.$anonfun$lockLogDirs$1(LogManager.scala:241)
at scala.collection.StrictOptimizedIterableOps.flatMap(StrictOptimizedIterableOps.scala:117)
at scala.collection.StrictOptimizedIterableOps.flatMap$(StrictOptimizedIterableOps.scala:104)
at scala.collection.mutable.ArraySeq.flatMap(ArraySeq.scala:37)
at kafka.log.LogManager.lockLogDirs(LogManager.scala:236)
at kafka.log.LogManager.<init>(LogManager.scala:112)
at kafka.log.LogManager$.apply(LogManager.scala:1283)
at kafka.server.KafkaServer.startup(KafkaServer.scala:254)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
What is the cause? Could it be a wrong configuration?
Thanks.
Update:
Changing the row to:
# ACLs
authorizer.class.name=org.apache.kafka.server.authorizer.Authorizer
I receive this new error:
[2021-11-12 16:51:57,613] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
org.apache.kafka.common.KafkaException: Could not find a public no-argument constructor for org.apache.kafka.server.authorizer.Authorizer
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:392)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:406)
at kafka.security.authorizer.AuthorizerUtils$.createAuthorizer(AuthorizerUtils.scala:31)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1583)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1394)
at kafka.Kafka$.buildServer(Kafka.scala:67)
at kafka.Kafka$.main(Kafka.scala:87)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.lang.NoSuchMethodException:
org.apache.kafka.server.authorizer.Authorizer.<init>()
at java.base/java.lang.Class.getConstructor0(Class.java:3508)
at java.base/java.lang.Class.getDeclaredConstructor(Class.java:2711)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:390)
... 7 more
It seems that if you change
kafka.security.auth.SimpleAclAuthorizer
to
kafka.security.authorizer.AclAuthorizer
it should work; it worked for me.
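For reference, a minimal sketch of the corrected ACL section in server.properties (the super.users value is taken from the question):

# ACLs
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
super.users=User:admin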
Kafka 3.0 removed SimpleAclAuthorizer.
Pull request - https://github.com/apache/kafka/commit/976e78e405d57943b989ac487b7f49119b0f4af4#diff-e0ccf1b5c964d2c303b6a69a8b8b67df5a6bfbae8aa514f580d353c4c6bf8e36
The blog seems to be using version 2.2.0.
I'm trying to publish a message to a Tibco queue on an SSL-enabled Tibco server through JMeter 5.4.1, using the JMS Point-to-Point Logic Controller.
[Screenshot: JMS Point-to-Point Controller config]
But I'm getting the following error message:
2021-06-13 12:25:46,278 ERROR o.a.j.p.j.s.JMSSampler: Not permitted: Failed to connect to any server at: ssl://[server-name]:7352, ssl://[server-name]:7352 [Error: Failed to connect via SSL to [ssl://[server-name]:7352]: Received fatal alert: protocol_version: url that returned this exception = SSL://[server-name]:7352 ]
javax.naming.AuthenticationException: Not permitted: Failed to connect to any server at: ssl://[server-name]:7352, ssl://[server-name]:7352 [Error: Failed to connect via SSL to [ssl://[server-name]:7352]: Received fatal alert: protocol_version: url that returned this exception = SSL://[server-name] ]
at com.tibco.tibjms.naming.TibjmsContext.lookup(TibjmsContext.java:670) ~[tibjms.jar:8.0.0]
at com.tibco.tibjms.naming.TibjmsContext.lookup(TibjmsContext.java:491) ~[tibjms.jar:8.0.0]
at javax.naming.InitialContext.lookup(InitialContext.java:417) ~[?:1.8.0_291]
at org.apache.jmeter.protocol.jms.sampler.JMSSampler.threadStarted(JMSSampler.java:638) [ApacheJMeter_jms.jar:5.4.1]
at org.apache.jmeter.threads.JMeterThread$ThreadListenerTraverser.addNode(JMeterThread.java:784) [ApacheJMeter_core.jar:5.4.1]
at org.apache.jorphan.collections.HashTree.traverseInto(HashTree.java:993) [jorphan.jar:5.4.1]
at org.apache.jorphan.collections.HashTree.traverse(HashTree.java:976) [jorphan.jar:5.4.1]
at org.apache.jmeter.threads.JMeterThread.threadStarted(JMeterThread.java:752) [ApacheJMeter_core.jar:5.4.1]
at org.apache.jmeter.threads.JMeterThread.initRun(JMeterThread.java:740) [ApacheJMeter_core.jar:5.4.1]
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:252) [ApacheJMeter_core.jar:5.4.1]
I tried:
openssl s_client -connect [server-name]:7352
It gave the following output:
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES256-GCM-SHA384
Session-ID:
Session-ID-ctx:
So I added the following line to the jmeter.properties file:
https.default.protocol=TLSv1.2
I also commented out jdk.tls.disabledAlgorithms in the JDK's java.security file (I'm using jdk1.8.0_291):
# jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1, RC4, DES, MD5withRSA, \
# DH keySize < 1024, EC keySize < 224, 3DES_EDE_CBC, anon, NULL, \
# include jdk.disabled.namedCurves
But I'm still getting the same error. Could someone please help?
I think you're using the wrong property (and in the wrong place as well): you're setting the default protocol for HTTPS, while you need to set it for TLS. Add the next line to the system.properties file:
jdk.tls.client.protocols=TLSv1.2
JMeter restart will be required to apply this property.
If that doesn't help, or you get different errors, consider adding the next line there as well:
javax.net.debug=all
and then check the jmeter.log file and stdout for any suspicious entries.
More information:
Configuring JMeter
Apache JMeter Properties Customization Guide
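Putting both suggestions together, a minimal sketch of the relevant lines in JMeter's bin/system.properties (javax.net.debug=all is for diagnostics only and is worth removing once the issue is resolved):

jdk.tls.client.protocols=TLSv1.2
javax.net.debug=all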
I resolved it by using the latest tibjms.jar in JMeter's lib directory, as the Tibco server had been upgraded some hours before I raised this issue.
I'm trying to run a very simple GStreamer pipeline on macOS 10.15 (provided here) to play a video from the Internet; however, I get the error "Unacceptable TLS certificate". I also tried other video URLs but got the same error for all of them. The output is the same when using souphttpsrc instead of uridecodebin. See the pipeline and the console output below.
I remember this kind of pipeline working seamlessly on Ubuntu. What could be the reason for this behaviour on macOS, and is there any way to make the TLS certificate "acceptable"?
$ gst-launch-1.0 uridecodebin uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm ! audioconvert ! autoaudiosink
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Got context from element 'source': gst.soup.session=context, session=(SoupSession)NULL, force=(boolean)false;
ERROR: from element /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstSoupHTTPSrc:source: Secure connection setup failed.
Additional debug info:
../ext/soup/gstsouphttpsrc.c(1383): gst_soup_http_src_parse_status (): /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstSoupHTTPSrc:source:
Unacceptable TLS certificate (6), URL: https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm, Redirect to: (NULL)
ERROR: pipeline doesn't want to preroll.
ERROR: from element /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstSoupHTTPSrc:source: Internal data stream error.
Additional debug info:
../libs/gst/base/gstbasesrc.c(3127): gst_base_src_loop (): /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstSoupHTTPSrc:source:
streaming stopped, reason error (-5)
ERROR: pipeline doesn't want to preroll.
ERROR: from element /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstTypeFindElement:typefindelement0: Stream doesn't contain enough data.
Additional debug info:
../plugins/elements/gsttypefindelement.c(988): gst_type_find_element_chain_do_typefinding (): /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstTypeFindElement:typefindelement0:
Can't typefind stream
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
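A possible direction, sketched under the assumption that the root cause is GStreamer's TLS backend (glib-networking) not finding a system CA store on macOS: souphttpsrc has an ssl-strict property that can be disabled to bypass certificate verification. This is only a diagnostic workaround, not a real fix, since it turns off verification entirely:

$ gst-launch-1.0 souphttpsrc location=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm ssl-strict=false ! decodebin ! audioconvert ! autoaudiosink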
I have a Phoenix/Elixir app that works fine with HTTPS locally; however, when I try to change it to use the production certificates, the server does not respond and no error messages are shown.
In my dev.exs, this was set up with the hostname localhost.
In prod.exs, here are the keys; these were made with my production URL.
I have tried changing localhost to the production URL locally by adding host to the https portion of the config:
https: [port: 443,
host: "produrl.com"
keyfile: "priv/keys/domain.key",
certfile: "priv/keys/domain.crt"],
This throws an error:
sudo MIX_ENV=prod mix phoenix.server
[info] Running LiteChartBe.Endpoint with Cowboy using http://localhost:80
[info] Application lite_chart_be exited: LiteChartBe.start(:normal, []) returned an error: shutdown: failed to start child: LiteChartBe.Endpoint
** (EXIT) shutdown: failed to start child: Phoenix.Endpoint.Server
** (EXIT) shutdown: failed to start child: {:ranch_listener_sup, LiteChartBe.Endpoint.HTTPS}
** (EXIT) shutdown: failed to start child: :ranch_acceptors_sup
** (EXIT) :badarg
{"Kernel pid terminated",application_controller,"{application_start_failure,lite_chart_be,{{shutdown,{failed_to_start_child,'Elixir.LiteChartBe.Endpoint',{shutdown,{failed_to_start_child,'Elixir.Phoenix.Endpoint.Server',{shutdown,{failed_to_start_child,{ranch_listener_sup,'Elixir.LiteChartBe.Endpoint.HTTPS'},{shutdown,{failed_to_start_child,ranch_acceptors_sup,badarg}}}}}}}},{'Elixir.LiteChartBe',start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,lite_chart_be,{{shutdown,{failed_to_start_child,'Elixir.LiteChartBe.Endpoint',{shutdown,{failed_to_start_child,'Elixir.Phoeni
If I simply forward localhost to produrl in my local hosts file, no errors are thrown and nothing connects to the server using https.
The error states that you provided a wrong argument for the configuration of your Endpoint (** (EXIT) :badarg). I suppose that is because you are missing a comma after your host URL.
This probably does not solve your underlying problem, but it is presumably the reason for the error message shown after your change.
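For reference, a sketch of the https block with the comma added (port, URL, and key paths taken from the question):

https: [port: 443,
        host: "produrl.com",
        keyfile: "priv/keys/domain.key",
        certfile: "priv/keys/domain.crt"],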