Drush: how to check migration rollback status

We are running a complex migration project that pulls data from various older DBs into our new staging website before we run it against the live site.
For reasons, we have to roll back the migrations we have. I am using drush migration:rollback --group=xxxxx.
Unfortunately, this is a multi-hour process, and the SSH connection times out when it runs overnight. When logging back into the server, is there a way to check how far the rollback has gone? The process does not appear in htop.
I found a lot of references for checking on a drush migrate import, but no hits on Google (or elsewhere) for a way to check how far a rollback has gone.
Thanks in advance
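One way to check, sketched under the assumption that the migrate_tools Drush commands are available: the "Imported" counts reported by migrate:status shrink as rows are rolled back, so you can poll that from a second session. Running the rollback under screen also keeps it alive when SSH drops; when the connection times out, the foreground drush process receives a hangup and dies, which is why nothing shows in htop.
screen -dmS rollback drush migration:rollback --group=xxxxx
# from any later session: imported counts decrease while the rollback runs
drush migrate:status --group=xxxxx
# reattach to the running rollback if needed
screen -r rollback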

MySQL error 1449 reappearing even though definer was set to resolve initial error?

On Monday I messed something up in a database.
We have an application running on a VPS, using cPanel and phpMyAdmin, and I had informed the developers that I would be running some queries on the DB to extract information.
So I ran a few large queries using the "Visual Builder" query tool and the web application got stuck. The queries weren't completing, and even refreshing the page did not help. The website wasn't loading and users couldn't log in. So I used WHM to log in as root and kill the queries manually. After I did this, the system was still not running.
Then the database completely freaked out and started throwing error 1449 ("The user specified as a definer does not exist") messages.
After that, the DB somehow fixed itself and the web application was working again. However, we then saw that we could not update some jobs or add new jobs in the system. If you pressed the "SAVE" button on a job, the system just gave an "undefined" message.
The developers had a look and discovered that a missing definer was causing the issue (their screenshot of the error is omitted here; the blacked-out 'user'@'1.0.0.0' in it is the actual cPanel account username).
The devs went ahead and re-added the definer, and the issue was resolved.
However, this did not last: yesterday evening the exact same situation occurred again. The web application was running fine on Tuesday and most of Wednesday, then all of a sudden users couldn't update their jobs again, which means the definer user was removed once more even though nobody did anything in the database.
Has anyone encountered this issue before? I read this thread on the topic, and even though what they say makes sense, I believe the developers did exactly that and yet the error still occurred.
When I log into phpMyAdmin via cPanel, I get a weird user called 'cpses_234ikjih'@'localhost.com'. Does this perhaps have something to do with the error? I believe that before the server went crazy, this user was just the name of the cPanel account (for example, 'cPanelAccountName'@'localhost.com').
To summarize your post, what I'm seeing is that you have a MySQL user, the user disappeared, you recreated the user, and it went away again.
There must be some external factor here. Someone could have access to your database and be deleting the user, maliciously or out of misunderstanding; there could be a scheduled job; or it could be something to do with your web host.
I'd start by auditing the database accounts and restricting access as much as possible. Check any interface that's exposed to the web, such as WordPress, Joomla, or other applications.
You should also enable logging; MySQL supports several levels of it. I think the most useful for you would be the audit log, although honestly I've never used that one specifically. You'd enable it to log future events; the binary log may already contain a record of what has occurred.
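If it helps, here is a minimal sketch of turning on the general query log and checking the binary log from the shell; the binlog path is an example, and the audit plugin setup is not shown:
mysql -e "SET GLOBAL general_log = 'ON'; SET GLOBAL log_output = 'TABLE';"
# after the user disappears again, look for who dropped it
mysql -e "SELECT event_time, user_host, argument FROM mysql.general_log WHERE argument LIKE '%DROP USER%';"
# the binary log (if enabled) may already hold a record of the drop
mysqlbinlog /var/lib/mysql/mysql-bin.000001 | grep -i 'drop user'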
SOLVED
I managed to solve this by changing the MySQL database password and the cPanel account password.
I read a post by someone saying that there was a session file which perhaps stored an old session, and that changing passwords could resolve this. Luckily it did; I have not had error 1449 appear for 5 days now.

Execute a Spoon job with monitoring software

I have a job built in Spoon which runs without problems from the command line, but I would like to know if there is any software with which I can execute these jobs and watch the execution visually. The idea is to make running these tasks more pleasant for the operations team.
You have two solutions:
Carte:
Use the Carte server which ships with PDI. Install PDI on any server and launch Carte (specifying the port); then you can execute/view/stop/restart jobs and transformations from any browser. Documentation is here.
Of course you can also launch a job/transformation from your own PDI: just define a new Slave server (left panel, View tab; default username/password = cluster/cluster). Then each time you run a job/transformation, choose the Carte server instead of Pentaho local in the Run configuration.
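For example, a minimal launch could look like this, assuming PDI is unpacked in /opt/data-integration (path, host, and port are examples):
cd /opt/data-integration
./carte.sh localhost 8081
# then open http://localhost:8081 in a browser (default credentials cluster/cluster)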
Logging:
If you just want to follow the job/transformation, you may use database logging: right-click anywhere, Parameters, Logging, Job/Transformation, then define a database, a table, and a logging interval of 2 seconds.
Then every two seconds the lines_read, lines_written, errors, and log_field are written to the database. That table can be read by an external process and displayed on a screen or in a browser.
This method is used in the github/ETL-pilot project, which uses Tomcat (because you probably already have Tomcat running with a Pentaho server), but it can easily be adapted to Node.js or any other server. (If you do it and open-source it, please add a link to your work on our GitHub.)
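A minimal sketch of an external process reading that log table, assuming MySQL, a schema called pdi_logs, and a transformation log table named TRANS_LOG (both names are examples; the columns below are the standard PDI log fields):
watch -n 2 "mysql pdi_logs -e 'SELECT TRANSNAME, STATUS, LINES_READ, LINES_WRITTEN, ERRORS FROM TRANS_LOG ORDER BY ID_BATCH DESC LIMIT 5'"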

Rundeck project and job sync between 2 instances backed by a MySQL cluster

I have set up 2 Rundecks in 2 VMs and a MySQL cluster, so Rundeck #1 on VM #1 connects to MySQL DB #1 and, similarly, Rundeck #2 on VM #2 connects to MySQL DB #2.
The problem I have now is that whenever I create a project/job in Rundeck #1, I am not able to see it in Rundeck #2. What should I do?
Any help will be appreciated
I would first try to switch the databases, i.e. have Rundeck #2 connect to MySQL DB #1, to see if the jobs become visible.
If they do, then you have a sync issue.
If the jobs are still not visible, then I assume there is some identification problem between the Rundeck instances.
Just my 2 cents.
The issue can be fixed by setting the default storage engine in my.cnf.
So in my case I just modified /etc/my.cnf and introduced the following option below the [mysqld] header:
default-storage-engine=NDBCLUSTER
Then I did a MySQL restart and the table sync started to happen.
Note: delete the Rundeck DB before proceeding with any modifications.
Thanks and hope this helps everyone facing such issues.
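A minimal sketch of the change described above, assuming /etc/my.cnf and a systemd-managed mysqld (adjust both for your distribution):
sudo sed -i '/^\[mysqld\]/a default-storage-engine=NDBCLUSTER' /etc/my.cnf
sudo systemctl restart mysqld
# verify that newly created Rundeck tables use the NDB engine
mysql -e "SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'rundeck'"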

RavenDB periodic backup bundle + web admin does not persist changes

I'm using the latest stable version (3.0.3660) on a VM on Windows Azure and would like to enable periodic backup. I have tried to enable both local backup and backup to Azure, but the GUI doesn't seem to persist the changes. The modal dialog says "Saving..." but nothing more.
Is there a log for this so that I can troubleshoot what doesn't work?
/Erik
I tried it too, and the database was non-responsive for several minutes (a co-worker was waiting for tens of minutes), but after a while it actually did something. I configured the Azure backup, and that went wrong because it couldn't upload a blob that large. The error was logged and can be found in the studio under status > logs.
Running the server standalone (instead of running as a service) doesn't give any additional feedback either.
I managed to get it to work by setting "Raven/AnonymousAccess" to Admin and then saving the changes; not sure why, since I connected with an API key that should have full access.
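For reference, that setting normally goes in the appSettings section of Raven.Server.exe.config (the file name and location may differ for an IIS-hosted instance); a one-line sketch:
<add key="Raven/AnonymousAccess" value="Admin" />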

Heroku doesn't set boolean field in Rails app

It's passing the parameter as replacement_emails, which is correct. From the log:
Parameters: {"utf8"=>"✓", "authenticity_token"=> ... "replacement_emails"=>"1"}, "commit"=>"submit", "id"=>"1"}
But it's not getting set in the database. No error message in the log, nothing. It works in development with SQLite.
Any thoughts on why it works in development but not in production on Heroku?
I came across your question today when I had a similar problem and may be able to explain what was going on.
Running Rails migrations on Heroku doesn't automatically cause your application to restart, so your new code may be seeing an old view of the database through its existing database connection. This can cause some strange behavior, like a column that was just added by a migration appearing not to exist.
A manual restart of the application will cause it to reconnect to the db and see the changes.
A rollback or redeployment will also cause the application to restart and reconnect to the database.
Just remember to restart your application after running Rails migrations.
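For example, a minimal sequence, assuming the Heroku CLI is installed and the app is the default for this git checkout:
heroku run rake db:migrate
# restart the dynos so the app reconnects and picks up the new schema
heroku ps:restart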
After rolling back the deployment and then re-deploying, it just suddenly worked. Not sure what was wrong.