Automating Zope5 Database Pack - backup

I tried asking on the Plone forums but no one had any good responses.
I am running Zope5 (no ZEO server, no Plone) with Apache as a frontend proxy.
In the old Zope2 there was a script called zodb-pack that could pack the database from the command line. This is no longer included with Zope5, and I am searching for a way to pack the DB from the command line.
Also, Apache is set up for client certificate authentication, so I cannot do something like:
curl -X POST https://username:password@zope.domain.com
I also don't want to hardcode that type of curl statement anyway, because it would mean embedding the username and password in the script.
My Zope is running in a Docker container, so I thought about doing something like:
source /zope5/bin/activate
python scriptname
with a Python script along the lines of:
from ZODB.config import databaseFromString
from ZODB.serialize import referencesf  # reference-finding function required by pack()
import time

db = databaseFromString("<zodb_config>")
storage = db.storage
storage.pack(time.time(), referencesf)  # pack everything up to now; db.pack() is equivalent
db.close()
but I'm not sure that's the correct way to do this. Basically, I just want the bash script that automates the server backups to pack the Zope DB before backing it up, and for that I need a command I can run from the command line.
I cannot use any solution that requires me to modify how Zope runs, nor one that requires me to stop Zope to perform the pack.
Of course I can manually go to the ZMI's Control Panel and click Pack, but like I said, I am trying to automate it so it can run during off-peak hours.
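Another avenue I have been considering (untested): since Apache already authenticates clients by certificate, the nightly script could call the same pack endpoint that the ZMI Control Panel button posts to, presenting the certificate instead of a username and password. This is only a rough sketch; the URL path, the days:float field, and the certificate paths below are my assumptions based on the classic Control Panel pack form, and it assumes the client certificate is all the authentication the request needs in my setup.
# Hypothetical sketch: trigger the ZMI pack form over HTTPS with a client certificate.
# The endpoint path and the days:float field are assumptions; verify them against the ZMI.
import requests

resp = requests.post(
    "https://zope.domain.com/Control_Panel/Database/main/manage_pack",
    data={"days:float": "0"},  # pack everything older than 0 days
    cert=("/path/to/client.crt", "/path/to/client.key"),  # client cert Apache expects
)
resp.raise_for_status()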

MySQL 8: Permanently change password policy requirements

I am trying to change the password policy requirements in MySQL 8 (note: not MySQL 5.7). I am using Ubuntu 20.04 server (so no GUI).
I can change them within MySQL 8 by using statements such as SET GLOBAL validate_password.policy=LOW; and I can see the changes using SHOW VARIABLES LIKE 'validate_password%';. However, when I restart MySQL using service mysql restart, they return to their default settings.
The Stack Overflow question 36301100 alludes to adding lines to the mysqld.cnf file; however, there is no mysqld.cnf file, just the mysql.cnf file under the /etc/mysql/ directory. When I add any lines such as SET GLOBAL validate_password.policy=LOW; there, the MySQL server fails to start after the service mysql restart command.
Another suggestion given is to remove the password plugin with UNINSTALL COMPONENT 'file://component_validate_password';, however this strikes me as a bit harsh.
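For reference, my understanding is that an option file takes plain variable assignments under a [mysqld] section rather than SQL statements, along the lines of the sketch below (the variable names just mirror the SHOW VARIABLES output above; where exactly the file should live is part of what I am unsure about):
[mysqld]
validate_password.policy=LOW
validate_password.length=8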
Any suggestions? Thanks, Greg

Execute a Spoon job with software

I have a job built in Spoon which runs without problems from the command line, but I would like to know whether there is any software in which I can execute these jobs and watch the execution visually. The idea is to make running these tasks more pleasant for the operations area.
You have two solutions:
Carte:
Use the Carte server, which ships with PDI. Install PDI on any server and launch Carte (specifying the port; an example invocation is shown below), and you can then execute/view/stop/restart jobs and transformations from any browser. Documentation is here.
Of course you can also launch a job/transformation from your own PDI. Just define a new slave server on the left panel (View tab); the default username/password is cluster/cluster. Then, each time you run a job/transformation, choose the Carte server instead of Pentaho/local in the run configuration.
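For reference, starting Carte is usually a one-liner from the PDI installation directory; the hostname and port here are just placeholders:
sh carte.sh localhost 8081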
Logging:
If you just want to follow a job/transformation, you may use database logging: right-click anywhere, Parameters, Logging, Job/Transformation, then define a database, a table, and a logging interval of 2 seconds.
Then, every two seconds, the lines read, lines written, errors, and the log field are written to that database. The database can be read by an external process and displayed on a screen or in a browser.
This method is used in the github/ETL-pilot project, which uses Tomcat (because you probably already have a Tomcat running with a Pentaho server), but it can easily be adapted to Node.js or any other server. (If you do it and open-source it, please add a link to your work on our GitHub.)
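As an illustration of what that external process might read, here is a hedged SQL sketch; the table name job_log and the exact column names are assumptions and should be replaced with whatever you configured in the Logging tab:
SELECT jobname, status, lines_read, lines_written, errors
FROM job_log
ORDER BY id_job DESC
LIMIT 1;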

How to run a command on Zabbix agents?

I want to run a command on Zabbix agents:
Some simple unix commands, to obtain our reporting data.
When there is some processing required on the agent side.
There seem to be a variety of approaches being discussed. So how do I execute such commands on a Zabbix agent?
Run commands from the server directly from a new item.
First, set EnableRemoteCommands=1 in the agent conf file (for all of your agents) to enable this feature.
Create a new item. A field on the "new item" page says 'key'. Enter:
system.run[command]
as the 'key' string, where command is the command you want to run on the agent. Here is an example:
system.run[sysctl dev.cpu.0.temperature | cut -d ' ' -f 2 | tr -d C]
Perhaps you need to run something substantially more complex that is too long to fit in there? Then you will need to make a custom script. Put your custom scripts on a local webserver, or somewhere on the web.
Then you might set the item's key to:
system.run[ command -v script && script || wget script_url -O /path/to/script && script]
This fetches the missing script to the agent the first time the item is executed. However, that is a rather crude hack and not very elegant.
A better way is to go to "Administration" --> "Scripts" in the menu. From there, you can create a new script to use in an item which may be configured to run on any of your agents.
Make a special custom item to re-run your script periodically (like a cron job). The job of the special script item is to update the agent with a collection of your other needed custom scripts.
Of course you could just write all of your custom scripts directly into Zabbix's MySQL database, and it is very tempting to do that. But be aware that they would then be lost if your Zabbix database ever gets fried, corrupted, or lost. Zabbix databases have a habit of growing large, unwieldy, and out of control. So don't do that: store the scripts separately, somewhere else and under version control (git or subversion).
Once that's all sorted, we can finally go ahead and create further custom items to run your custom scripts. Again using:
system.run[script]
as the item's key, just as before, where 'script' is the command (plus any arguments) that executes your custom script locally on the agent.
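For instance, if a custom script has already been placed on the agent, the key could look like this (the script path is purely a hypothetical example):
system.run[/usr/local/bin/disk_report.sh /var]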
Define the user parameter at the client (where zabbix agent is located) at /etc/zabbix/zabbix_agentd.conf. The key should be unique. I am using lsof as an example:
UserParameter=open_file,lsof | wc -l
Restart the agent: service zabbix-agent restart
Test if the key is working using zabbix_get utility. To do that from the zabbix server, invoke the following: /usr/local/bin/zabbix_get -s <HOST/IP of the zabbix agent> -k open_file (It should return a number in this case)
Create an item with the key at the zabbix server, at the template level (the return type should be correctly defined, otherwise zabbix will not accept it):
Type: Zabbix Agent (Active)
Key: open_file
Type of Information: Numeric (unsigned)
Data Type: decimal
You may create a graph using the item to monitor the value at regular intervals.
Here is the official documentation.

SSH "Framework" to write programs that will keep a connection open and feed commands to the servers

I am looking for some kind of framework that will allow me to connect to multiple servers using SSH, keep the connections open, reopen them if they die, and allow me to run commands on them and report back. For example, to check disk space on all the machines right away, I'd do results = object.run("df -h") and it would return an array with the response from all the machines (I am not looking for a monitoring system).
Anyone have any idea?
I would use Python and the Fabric framework. It lets you easily execute commands on a set of servers, e.g. for deployments.
With Fabric (1.x style) you could do:
from fabric.api import run, env

def getSpace(server):
    env.host_string = server  # point Fabric at the given host
    run("df -h")

and invoke it from the shell with:
fab getSpace:234.24.32.1
One way to do this would be to use Python and the paramiko library. Writing the functionality that runs a given command on a specified set of servers is then a simple matter of programming; a minimal sketch follows.
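A hedged sketch of that idea with paramiko (the host list, username, and key path are placeholders, and reconnect-on-failure handling is left out):
import paramiko

def run_on_all(hosts, command, username="admin", key_filename=None):
    """Run one command on every host and return {host: stdout}; credentials are placeholders."""
    results = {}
    for host in hosts:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # skip known_hosts prompts
        client.connect(host, username=username, key_filename=key_filename)
        _, stdout, _ = client.exec_command(command)
        results[host] = stdout.read().decode()
        client.close()
    return results

print(run_on_all(["10.0.0.1", "10.0.0.2"], "df -h"))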

Stop IIS 7 Application Pool from build script

How can I stop and then restart an IIS 7 application pool from an MSBuild script running inside TeamCity? I want to deploy our nightly builds to an IIS server for our testers to view.
I have tried using appcmd like so:
appcmd stop apppool /apppool.name:MYAPP-POOL
... but I have run into elevation issues in Windows 2008 that so far have stopped me from being able to run that command from my TeamCity build process because Windows 2008 requires elevation in order to run appcmd.
If I do not stop the application pool before I copy my files to the web server my MSBuild script is unable to copy the files to the server.
Has anybody else seen and solved this issue when deploying web sites to IIS from TeamCity?
This article describes using an htm file named App_offline.htm to take a site offline. Once IIS detects this file in the root of a web application directory,
ASP.NET 2.0 will shut down the application, unload the application domain from the server, and stop processing any new incoming requests for that application.
In App_offline.htm, you can put a user-friendly message indicating that the site is currently under maintenance.
Jason Lee shows the MSDeploy calls you need to use (plus much more about integrating these steps in your build scripts!).
MSDeploy
-verb:sync
-source:contentPath="[absolute_path]App_offline-Template.htm"
-dest:contentPath="name_of_site/App_offline.htm",computerName="copmuter_name",
username=user_with_administrative priviliges,password=passwort
After deployment you can remove the App_offline.htm file using the following call:
MSDeploy
-verb:delete
-dest:contentPath="name_of_site/App_offline.htm",computerName="computer_name",
username=user_with_administrative_privileges,password=password
The MSBuild Community Tasks project includes an AppPoolController that appears to do what you want (though, as noted, it is dated and at present only supports IIS 6). An example:
<AppPoolController ApplicationPoolName="MyAppPool" Action="Restart" />
Note that you can also provide a username and password if necessary.
Edit: Just noticed that the MSBuild Extension Pack has an Iis7AppPool task that is probably more appropriate.
This is the fairly hacky workaround I ended up using:
1) Set up a limited-access account for your service to run as. Since I'm running a CruiseControl.NET service, I'll call my user 'ccnet'. He does NOT have admin rights.
2) Make a new local user account, and assign to the Administrators group (I'll call him 'iis_helper' for this example). Give him some password, and set it to never expire.
3) Change iis_helper's access permissions to NOT allow local login or remote desktop login, and anything else you might want to do to lock down this account.
4) Log in (either locally or through remote desktop) as your non-admin user, 'ccnet' in this example.
5) Open a command terminal, and use the 'runas' command to execute whatever it is that needs to be run escalated. Use the /savecred option. Specify your new administrative user.
runas /savecred /user:MYMACHINE\iis_helper "C:\Windows\System32\inetsrv\appcmd.exe"
The first time it will prompt you for 'iis_helper's password. After that, it will be stored thanks to the /savecred option (this is why we're running it once from a real command prompt, so we can enter the password once).
6) Assuming that command executed OK, you can now log out. I then logged back in as a local admin and turned off the 'ccnet' user for local interactive login and remote desktop. The account is only used to run a service, not for real logins. This isn't a mandatory step.
7) Set up your service to run as your user account ('ccnet').
8) Configure whatever service is running (CruiseControl.NET in my case) to execute the 'runas' command instead of 'appcmd.exe' directly, the same as before:
replace:
"C:\Windows\System32\inetsrv\appcmd.exe" start site "My Super Site"
with:
runas /savecred /user:MYMACHINE\iis_helper "\"C:\Windows\System32\inetsrv\appcmd.exe\" start site \"My Super Site\""
The thing to note there is that the command should be in one set of quotes, with all the inner quotes escaped (slash-quote).
9) Test, call it a day, hit the local pub.
Edit: I apparently did #9 in the wrong order and had a few too many before testing...
This method also doesn't completely work. It does attempt to run as the administrative account, but it still runs as a non-elevated process under the administrative user, so there are still no admin permissions. I didn't initially catch the failure because the 'runas' command spawns a separate cmd window and then closes right away, so I wasn't seeing the failure output.
It's starting to seem like the only real possibility might be writing a Windows service that runs as admin, whose only purpose is to run appcmd.exe, and then somehow calling that service to start/stop IIS.
Isn't it great how UAC is there to secure things, but in actuality just makes more servers insecure, because anything you want to do has to be done as admin, so it's easier to just always run everything as admin and forget it?
You can try changing the Build Agent service settings to log on as a normal user account instead of SYSTEM (the default); this can be done from the Services control panel (Start | Run | services.msc).
If that doesn't help, you can also try configuring appcmd to always run elevated; refer to this document for details.
In case such an option is not available for appcmd, or it still doesn't work, you can disable UAC completely for this user.
Here you go. You can use this from CC.NET with NAnt or just with NAnt:
http://nantcontrib.sourceforge.net/release/latest/help/tasks/iisapppool.html