Ansible automation for Cisco vIOS

I have reached the point of generating the configuration file using NetBox and Jinja2. After that, when I try to push that configuration to the Cisco vIOS-L2 switch that is active in my topology, it shows this error:
fatal: [al-sw1-TS23]: FAILED! => {"changed": false, "module_stderr": "switchport trunk encapsulation dot1q\r\nswitchport trunk encapsulation dot1q\r\n ^\r\n% Invalid input detected at '^' marker.\r\n\r\nal-sw1-TS23(config-vlan)#", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error"}
Any help solving this issue would be appreciated.
I am trying to automate Cisco vIOS-L2 using Ansible.
My expectation is that the configuration file should be delivered successfully to the network device (Cisco vIOS-L2); in my case this node is running in GNS3.
The current result is that deploying the configuration file to the node fails with the error message shown above.
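For context, a minimal sketch of how such a rendered configuration file is commonly pushed with Ansible, assuming the cisco.ios collection and a network_cli connection (the module, connection settings, and src path are assumptions, not taken from the original playbook):

# Hypothetical push play; the src path is a placeholder for the file
# rendered from NetBox data with the Jinja2 template.
- name: Push rendered configuration to the vIOS-L2 switch
  hosts: al-sw1-TS23
  gather_facts: false
  connection: ansible.netcommon.network_cli
  vars:
    ansible_network_os: cisco.ios.ios
  tasks:
    - name: Load the generated configuration
      cisco.ios.ios_config:
        src: "configs/{{ inventory_hostname }}.cfg"   # placeholder path

Note the (config-vlan)# prompt in the error output: the rejected interface command appears to have been sent while the device was still in VLAN configuration mode, so the mode transitions in the rendered file are worth checking (some vIOS-L2 images also reject switchport trunk encapsulation dot1q outright, since only dot1q encapsulation is supported).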

Related

Trivy on EKS unable to scan any images

I am trying to scan all images deployed on an EKS cluster that I am setting up for high security (it will be deployed to a classified IL5 environment). Kubernetes v1.23, and all worker nodes run Bottlerocket OS.
I expect images to be scanned and available in the VulnerabilityReports CRD.
I was able to successfully install Falco on the cluster (which uses containerd). However, when deploying the official Helm chart (0.6.0-rc3), the scan-vulnerability containers start and then immediately error out. I set this environment variable on the trivy-operator deployment:
- name: CONTAINER_RUNTIME_ENDPOINT
  value: /run/containerd/containerd.sock
Output of run with -debug:
{
"level": "error",
"ts": 1668286646.865245,
"logger": "reconciler.vulnerabilityreport",
"msg": "Scan job container",
"job": "trivy-system/scan-vulnerabilityreport-74f54b6cd",
"container": "discovery",
"status.reason": "Error",
"status.message": "2022-11-12T20:57:13.674Z\t\u001b[31mFATAL\u001b[0m\timage scan error: scan error: unable to initialize a scanner: unable to initialize a docker scanner: 4 errors occurred:\n\t* unable to inspect the image (023620263533.dkr.ecr.us-gov-east-1.amazonaws.com/docker.io/istio/pilot:1.15.2): Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\n\t* unable to initialize Podman client: no podman socket found: stat podman/podman.sock: no such file or directory\n\t* containerd socket not found: /run/containerd/containerd.sock\n\t* GET https://023620263533.dkr.ecr.us-gov-east-1.amazonaws.com/v2/docker.io/istio/pilot/manifests/1.15.2: unexpected status code 401 Unauthorized: Not Authorized\n\n\n\n",
"stacktrace": "github.com/aquasecurity/trivy-operator/pkg/vulnerabilityreport.(*WorkloadController).processFailedScanJob\n\t/home/runner/work/trivy-operator/trivy-operator/pkg/vulnerabilityreport/controller.go:551\ngithub.com/aquasecurity/trivy-operator/pkg/vulnerabilityreport.(*WorkloadController).reconcileJobs.func1\n\t/home/runner/work/trivy-operator/trivy-operator/pkg/vulnerabilityreport/controller.go:376\nsigs.k8s.io/controller-runtime/pkg/reconcile.Func.Reconcile\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime#v0.13.1/pkg/reconcile/reconcile.go:102\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime#v0.13.1/pkg/internal/controller/controller.go:121\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime#v0.13.1/pkg/internal/controller/controller.go:320\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime#v0.13.1/pkg/internal/controller/controller.go:273\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime#v0.13.1/pkg/internal/controller/controller.go:234"
}
I confirmed that Bottlerocket uses containerd, as /run/containerd/containerd.sock is specified in my Falco deployment. Even when I mount this as a volume into the pod and set CONTAINER_RUNTIME_ENDPOINT to this path, I get the same error.
Edit
I added the following security context:
seLinuxOptions:
  user: system_u
  role: system_r
  type: control_t
  level: s0-s0:c0.c1023
Initially I mounted dockershim.sock from the host into the pod, then realized that was not necessary; the error messages were a little misleading, and it was really an ECR authentication issue. Furthermore, the SELinux flags needed to be specified at the pod level, not the container level, as in the fragment below.
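For reference, a minimal sketch of where those options sit when applied at the pod level (a fragment of the pod template spec; the container name and image are placeholders):

# Fragment of spec.template.spec in the trivy-operator Deployment (assumed location)
securityContext:                 # pod-level, not per-container
  seLinuxOptions:
    user: system_u
    role: system_r
    type: control_t
    level: s0-s0:c0.c1023
containers:
  - name: trivy-operator
    image: example.registry/aquasecurity/trivy-operator:0.6.0   # placeholder image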

I had a problem with Appium and couldn't fix it

''' An unknown server-side error occurred while processing the
command. Original error: Error executing adbExec. Original error:
'Command 'D:\program\android-sdk\platform-tools\adb.exe -P 5037 -s
f8cb3e08 install -g
E:\appium\resources\app\node_modules\appium\node_modules\io.appium.settings\apks\settings_apk-debug.apk'
exited with code 1'; Stderr: 'adb: failed to install
E:\appium\resources\app\node_modules\appium\node_modules\io.appium.settings\apks\settings_apk-debug.apk:
Security exception: You need the
android.permission.INSTALL_GRANT_RUNTIME_PERMISSIONS permission to use
the PackageManager.INSTALL_GRANT_RUNTIME_PERMISSIONS flag
java.lang.SecurityException: You need the
android.permission.INSTALL_GRANT_RUNTIME_PERMISSIONS permission to use
the PackageManager.INSTALL_GRANT_RUNTIME_PERMISSIONS flag at
com.android.server.pm.PackageInstallerService.createSessionInternal(PackageInstallerService.java:596)
at
com.android.server.pm.PackageInstallerService.createSession(Package';
Code: '1' '''
I've turned on administrator mode.
USB debugging is also enabled.
And the other environment settings are already configured.
These are my parameters:
{
  "platformName": "Android",
  "platformVersion": "7.0",
  "deviceName": "f8cb3e08",
  "appPackage": "com.tencent.qqlive",
  "appActivity": "ona.activity.SplashHomeActivity",
  "noReset": "true",
  "autoGrantPermissions": "true"
}
Use the automation name as a capability:
"automationName": "UiAutomator2"
You can also use "Appium" (the default) instead of "UiAutomator2".
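Putting that together with the capabilities above, the set would look roughly like this (only the automationName entry is new):

{
  "platformName": "Android",
  "platformVersion": "7.0",
  "deviceName": "f8cb3e08",
  "appPackage": "com.tencent.qqlive",
  "appActivity": "ona.activity.SplashHomeActivity",
  "noReset": "true",
  "autoGrantPermissions": "true",
  "automationName": "UiAutomator2"
}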

Phoenix with exq: How do I execute mix test without redis running

I use exq in my Phoenix 1.4.16 application to run some background jobs.
One of them can be as simple as this:
defmodule PeopleJob do
  def perform(request) do
    IO.puts("Hello from PeopleJob:\n#{inspect(request)}")
  end
end
It runs perfectly with Redis in the dev environment.
The problem is that when I push the code to a CI server that has no Redis, all the tests fail.
The test config is like this:
In config/test.exs:
config :exq, queue_adapter: Exq.Adapters.Queue.Mock
In test/test_helper.exs:
Exq.Mock.start_link(mode: :inline)
When I run "mix test" on a machine without redis running, it fails like this:
** (Mix) Could not start application exq: Exq.start(:normal, []) returned an error: shutdown: failed to start child: Exq.Manager.Server
** (EXIT) an exception was raised:
** (RuntimeError)
====================================================================================================
ERROR! Could not connect to Redis!
Configuration passed in: [host: "127.0.0.1", port: 6379, database: 0, password: nil]
Error: :error
Reason: {:badmatch, {:error, %Redix.ConnectionError{reason: :closed}}}
Make sure Redis is running, and your configuration matches Redis settings.
====================================================================================================
(exq) lib/exq/manager/server.ex:393: Exq.Manager.Server.check_redis_connection/1
(exq) lib/exq/manager/server.ex:173: Exq.Manager.Server.init/1
(stdlib) gen_server.erl:374: :gen_server.init_it/2
(stdlib) gen_server.erl:342: :gen_server.init_it/6
(stdlib) proc_lib.erl:249: :proc_lib.init_p_do_apply/3
Actually, I tried all three modes (:redis, :fake and :inline), but mix test fails to start with all of them.
Question: Can I run "mix test" on a machine that has no redis?
The reason is that our company doesn't want to install redis on the Travis CI machine.
I expected that using Exq Mock in the test environment would allow the test to run without redis, but it is not the case.
I figured it out.
In config/test.exs:
config :exq, queue_adapter: Exq.Adapters.Queue.Mock
config :exq, start_on_application: false
In test/test_helper.exs:
Exq.Mock.start_link(mode: :inline)
Adding config :exq, start_on_application: false to config/test.exs solved this problem.
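For completeness, a minimal ExUnit sketch of exercising the job under this setup (the test name, queue name, and arguments are just examples):

defmodule PeopleJobTest do
  use ExUnit.Case, async: false

  test "PeopleJob runs inline without Redis" do
    # With queue_adapter set to Exq.Adapters.Queue.Mock and Exq.Mock
    # started in :inline mode, perform/1 runs synchronously in the
    # test process instead of being pushed to Redis.
    Exq.enqueue(Exq, "default", PeopleJob, [%{"name" => "example"}])
  end
end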

Unable to execute the script file using Ansible playbook for redislabs & jmeter

I am trying to install Redis Labs and JMeter using an Ansible playbook, but I am unable to execute the script via the playbook. Please find my playbooks and the errors below.
ERROR:
fatal: [localhost]: FAILED! => {"changed": true, "msg": "non-zero return code", "rc": 127, "stderr": "/home/ansibleadm/.ansible/tmp/ansible-tmp-1576768466.18-58336526997867/jmeter.sh: line 109: /home/ansibleadm/.ansible/tmp/ansible-tmp-1576768466.18-58336526997867/jmeter: No such file or directory\n", "stderr_lines": ["/home/ansibleadm/.ansible/tmp/ansible-tmp-1576768466.18-58336526997867/jmeter.sh: line 109: /home/ansibleadm/.ansible/tmp/ansible-tmp-1576768466.18-58336526997867/jmeter: No such file or directory"], "stdout": "", "stdout_lines": []}
Note: the error above is for JMeter; I am getting the same kind of error ("No such file or directory") for Redis Labs as well.
cat jmeter.yaml
- hosts: localhost
  user: ansibleadm
  connection: local
  become: yes
  become_method: sudo
  tasks:
    - name: creating jmeter directory
      file: path=/home/ansibleadm/jmeter state=directory mode=0700 owner=ansibleadm group=ansibleadm
    - name: downloading jmeter tar file
      get_url:
        url: http://apache.mirrors.tds.net//jmeter/source/apache-jmeter-5.2.1_src.tgz
        dest: /home/ansibleadm/jmeter
    - name: untar the file
      unarchive:
        src: "/home/ansibleadm/jmeter/apache-jmeter-5.2.1_src.tgz"
        dest: "/home/ansibleadm/jmeter"
    - name: executing jmeter.sh file
      script: "/home/ansibleadm/jmeter/apache-jmeter-5.2.1/bin/jmeter.sh"
2: Please find the redislabs playbook and error:
- hosts: redisgroup
  user: ansibleadm
  become: yes
  become_method: sudo
  tasks:
    - name: creating a directory for redislabs
      file: path=/home/ansibleadm/remote_redis owner=ansibleadm group=ansibleadm mode=0700 state=directory
    - name: defining a variable
      set_fact:
        redis_variable: "/home/ansibleadm/remote_redis"
    - name: copy the tar file from src to destination.
      copy: src=/home/ansibleadm/redislabs-5.4.6-18-rhel7-x86_64.tar dest="{{redis_variable}}/redislabs-5.4.6-18-rhel7-x86_64.tar"
    - name: untar the file
      unarchive:
        src: /home/ansibleadm/redislabs-5.4.6-18-rhel7-x86_64.tar
        dest: "{{redis_variable}}"
    - name: execute the install.sh file in remote server
      shell: "{{redis_variable}}/install.sh -y"
ERROR:
FAILED! => {"changed": true, "cmd": "/home/ansibleadm/remote_redis/install.sh -y", "delta": "0:00:04.792255", "end": "2019-12-20 02:33:32.430351", "msg": "non-zero return code", "rc": 1, "start": "2019-12-20 02:33:27.638096", "stderr": "/home/ansibleadm/remote_redis/install.sh: line 25: rlec_upgrade_tmpdir/upgrade_checks_error_codes.sh: No such file or directory\ntouch: cannot touch ‘/var/opt/redislabs/log/install.log’: No such file or directory\nchmod: cannot access ‘/var/opt/redislabs/log/install.log’: No such file or directory\n/home/ansibleadm/remote_redis/install.sh: line 64: /var/opt/redislabs/log/install.log: No such file or directory", "stderr_lines": ["/home/ansibleadm/remote_redis/install.sh: line 25: rlec_upgrade_tmpdir/upgrade_checks_error_codes.sh: No such file or directory", "touch: cannot touch ‘/var/opt/redislabs/log/install.log’: No such file or directory", "chmod: cannot access ‘/var/opt/redislabs/log/install.log’: No such file or directory", "/home/ansibleadm/remote_redis/install.sh: line 64: /var/opt/redislabs/log/install.log: No such file or directory"], "stdout": "/home/ansibleadm/remote_redis/install.sh: line 25: rlec_upgrade_tmpdir/upgrade_checks_error_codes.sh: No such file or directory\n2019-12-20 02:33:27 [.] Checking prerequisites\n2019-12-20 02:33:27 [.] Checking hardware requirements...\n2019-12-20 02:33:27 [!] The node’s hardware does not meet the minimum requirements for a production system: \nThe node has 2 cores (minimum is 4) and 7 GB RAM (minimum is 15 GB). \nConsider upgrading your hardware in the case of a production system.\n================================================================================\n\u001b[1m\u001b[91mRedis\u001b[90mLabs\u001b[0m Enterprise Cluster installer.\n================================================================================\n\n2019-12-20 02:33:28 \u001b[92m[.] Checking root access\u001b[0m\n2019-12-20 02:33:28 \u001b[33m[!] Running as user root, sudo is not required.\u001b[0m\n2019-12-20 02:33:28 \u001b[92m[.] Updating paths.sh\u001b[0m\n2019-12-20 02:33:28 \u001b[92m[.] Creating socket directory /var/opt/redislabs/run \u001b[0m\n2019-12-20 02:33:29 \u001b[92m[.] Deleting \u001b[1m\u001b[91mRedis\u001b[90mLabs\u001b[0m debug package if exist\u001b[0m\n2019-12-20 02:33:29 \u001b[92m[.] Installing \u001b[1m\u001b[91mRedis\u001b[90mLabs\u001b[0m packages\u001b[0m\n2019-12-20 02:33:29 \u001b[37m[$] executing: 'yum install -y redislabs-5.4.6-18.rhel7.x86_64.rpm redislabs-utils-5.4.6-18.rhel7.x86_64.rpm'\u001b[0m\n\u001b[90mLoaded plugins: enabled_repos_upload, package_upload, product-id, search-\n : disabled-repos, subscription-manager, tracer_upload\nNo package redislabs-5.4.6-18.rhel7.x86_64.rpm available.\nNo package redislabs-utils-5.4.6-18.rhel7.x86_64.rpm available.\nError: Nothing to do\nUploading Enabled Repositories Report\nLoaded plugins: product-id, subscription-manager\n\u001b[0m2019-12-20 02:33:32 \u001b[91m[x] yum install failed\u001b[0m", "stdout_lines": ["/home/ansibleadm/remote_redis/install.sh: line 25: rlec_upgrade_tmpdir/upgrade_checks_error_codes.sh: No such file or directory", "2019-12-20 02:33:27 [.] Checking prerequisites", "2019-12-20 02:33:27 [.] Checking hardware requirements...", "2019-12-20 02:33:27 [!] The node’s hardware does not meet the minimum requirements for a production system: ", "The node has 2 cores (minimum is 4) and 7 GB RAM (minimum is 15 GB). ", "Consider upgrading your hardware in the case of a production system.",
In the last step, change script: to shell:.
The script task uploads the script to the target host and executes the uploaded copy, but it is uploaded into a temporary directory (see the ansible-tmp-XXXXXXX in the error output). The script (jmeter.sh) then tries to find jmeter in that directory, where it obviously is not. Using shell: instead runs the script from its proper place, as sketched below.
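A minimal sketch of what that last JMeter task might look like with shell: instead of script: (the path is taken from the playbook above):

    # shell: runs the file in place on the target instead of uploading a copy
    - name: executing jmeter.sh file
      shell: "/home/ansibleadm/jmeter/apache-jmeter-5.2.1/bin/jmeter.sh"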

Gerrit LDAP setup and getting InitInjector failed error

I am trying to configure LDAP auth in Gerrit, and to encrypt/decrypt the LDAP password in the secure.config file I used the secure-config plugin. I placed that plugin under $gerrit/path/lib and added this line to gerrit.config:
[gerrit]
secureStoreClass = com.googlesource.gerrit.plugins.secureconfig.SecureConfigStore
I followed the instructions from https://gerrit.googlesource.com/plugins/secure-config/
Then I ran init as below and got the following error:
java -jar gerrit-war-2.13.7.war init -d Gerrit/
fatal: InitInjector failed
fatal: Unable to create injector, see the following errors
fatal: 1) Error injecting constructor, java.lang.NullPointerException
fatal: at com.googlesource.gerrit.plugins.secureconfig.PBECodec.<init>(PBECodec.java:47)
fatal: at com.googlesource.gerrit.plugins.secureconfig.PBECodec.class(PBECodec.java:39)
fatal: while locating com.googlesource.gerrit.plugins.secureconfig.PBECodec
fatal: for the 2nd parameter of com.googlesource.gerrit.plugins.secureconfig.SecureConfigStore.<init>(SecureConfigStore.java:46)
fatal: at com.googlesource.gerrit.plugins.secureconfig.SecureConfigStore.class(SecureConfigStore.java:46)
fatal: while locating com.googlesource.gerrit.plugins.secureconfig.SecureConfigStore
fatal: while locating com.google.gerrit.server.securestore.SecureStoreProvider
fatal: at com.google.gerrit.pgm.init.BaseInit$1.configure(BaseInit.java:274)
fatal: while locating com.google.gerrit.server.securestore.SecureStore
fatal: for the 2nd parameter of com.google.gerrit.server.config.GerritServerConfigProvider.<init>(GerritServerConfigProvider.java:40)
fatal: while locating com.google.gerrit.server.config.GerritServerConfigProvider
fatal: at com.google.gerrit.server.config.GerritServerConfigModule.configure(GerritServerConfigModule.java:78)
fatal: while locating org.eclipse.jgit.lib.Config annotated with #com.google.gerrit.server.config.GerritServerConfig()
fatal: for the 1st parameter of com.google.gerrit.server.config.TrackingFootersProvider.<init>(TrackingFootersProvider.java:46)
fatal: at com.google.gerrit.server.config.TrackingFootersProvider.class(TrackingFootersProvider.java:35)
fatal: while locating com.google.gerrit.server.config.TrackingFootersProvider
fatal: at com.google.gerrit.server.config.GerritServerConfigModule.configure(GerritServerConfigModule.java:77)
fatal: while locating com.google.gerrit.server.config.TrackingFooters
fatal: Caused by: java.lang.NullPointerException
You're following the instructions from the master branch, but you're using Gerrit 2.13.7. Have you installed the secure-config plugin from the master branch or from the stable-2.13 one? I saw there's a difference between the master and stable-2.13 instructions in the "How to run" section:
master
Gerrit secure.config properties need to be generated and managed using the Gerrit init wizard. All the passwords entered at init will be stored as encrypted values and then decrypted on-the-fly when needed at runtime.
stable-2.13
This plugin will decode values in secure.config, it will fail if there is an existing secure.config which contains values that are not encrypted. If the values in the current secure.config are not encrypted you will need to either clear out secure.config or back it up by moving it to another file before running this plugin.
See the stable-2.13 instructions here.