Heroku doesn't set boolean field in rails app - ruby-on-rails-3

It's passing the parameter as replacement_emails, which is correct. From the log:
Parameters: {"utf8"=>"✓", "authenticity_token"=> ... "replacement_emails"=>"1"}, "commit"=>"submit", "id"=>"1"}
But it's not getting set in the database. No error message in the log, nothing. It works in development with SQLite.
Any thoughts on why it works in development but not in production on Heroku?

I came across your question today when I had a similar problem and may be able to explain what was going on.
Running Rails migrations on Heroku doesn't automatically restart your application, so your new code may be seeing an old view of the database through its existing database connections. This can cause some strange behavior, such as a column added by the migration appearing not to exist.
A manual restart of the application will cause it to reconnect to the db and see the changes.
A rollback or redeployment will also cause the application to restart and reconnect to the database.
Just remember to restart your application after running rails migrations.
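A minimal sketch of that sequence, assuming the Heroku CLI and a Rails 3 app (the app name is a placeholder):

```shell
# Run migrations on a one-off Heroku dyno
heroku run rake db:migrate --app my-app

# Restart so every dyno reopens its database connection
# and picks up the new schema
heroku restart --app my-app
```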

After rolling back the deployment and then re-deploying, it suddenly works. Not sure what was wrong.

Related

Drush, how to check migration rollback status

We are running a complex migration project that pulls data from various older DBs into our new staging website before we execute it on live.
For reasons, we have to roll back the migrations we have. I am using drush migration:rollback --group=xxxxx.
Unfortunately, this is a multi-hour process, and the SSH connection times out (running overnight). When logging back into the server, is there a way to check how far the rollback has gone? The process does not appear in htop.
I found a lot of references for checking the drush migrate process, but no hits on Google (or elsewhere) for a way to check how far a rollback has gone.
Thanks in advance
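This thread has no answer here, but one common approach is to detach the rollback from the SSH session and poll Drush's own bookkeeping (a sketch; assumes Drush 9+ with migrate_tools, and the group name is the placeholder from the question):

```shell
# Run the rollback under nohup so an SSH timeout doesn't kill it
nohup drush migrate:rollback --group=xxxxx > rollback.log 2>&1 &

# After reconnecting, the per-migration status table shows how many
# items each migration still has imported (i.e. not yet rolled back)
drush migrate:status --group=xxxxx
tail -f rollback.log
```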

User login failed for <user> in production but same connection string works from local client

I have an Azure database set up, and I have included the connection string below as I believe it should be. The problem is that when I try to run my client app in production, the server returns a 500 internal error. After investigating through remote debugging, I find that it's saying:
"Login failed for user '<my user_id>'"
My Appsettings.json
My connection string provided at runtime when deploying my API
Don't worry about the blacked-out portions... I've verified those to be the same in both.
Now, when running everything locally and calling the exact same database with that very connection string, everything works as it should; I can add records to that production Azure database just fine. But as soon as I try doing the same from my client app in production, I get the dreaded error mentioned above.
Can anyone tell me what might be happening? I've been over and over this and it's driving me mad. I've even gone as far as changing the connection string to be Server=... and I've made sure to append the @servername suffix to the user_id. I believe I've tried just about everything I could find that wasn't 8 years old, including searching similar issues here... nothing seems to be quite like my issue exactly.
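For comparison, the canonical ADO.NET shape for an Azure SQL connection string looks like this (every value here is a placeholder, not the asker's redacted string; the @servername suffix on User ID is the legacy requirement, and newer drivers also accept a bare user name):

```
Server=tcp:myserver.database.windows.net,1433;Initial Catalog=mydb;User ID=myuser@myserver;Password=...;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;
```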
If you need more information let me know and I'll update my question.
Thanks!
EDIT: Adding this to show I've already added all of the outbound IPs from my API App Service to my SQL Server firewall. Can someone tell me if all my settings look good?
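To double-check that list, both sides can be read back with the Azure CLI (a sketch; the resource group and server names are placeholders):

```shell
# Outbound IPs the App Service may use; all of them need firewall rules
az webapp show --resource-group my-rg --name my-api \
  --query outboundIpAddresses --output tsv

# Firewall rules currently defined on the SQL server
az sql server firewall-rule list --resource-group my-rg --server my-sqlserver
```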

MySQL error 1449 reappearing even though definer was set to resolve initial error?

On Monday I messed up with a database.
We have an application running on a VPS, using cPanel and phpMyAdmin, and I informed the developers that I would be doing some queries on the DB to extract information.
So, I did a few large queries using the "Visual Builder" query tool and the web-application got stuck. The queries weren't loading and even refreshing the page did not work. The website wasn't loading and users couldn't log in. So I used WHM to log in as root and kill the queries manually. After I did this, the system was still not running.
Then, the database completely freaked out and I got these error messages:
After doing this, the DB somehow fixed itself and the web application was working again. However, we saw that we could not update some jobs or add new jobs in the system. If you pressed the "SAVE" button on a job, the system just gave an "undefined" message.
The developers had a look and discovered this was causing the issue:
The devs went ahead and added the definer, and the issue was resolved. The blacked-out "user"@"1.0.0.0" is the actual cPanel account username.
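The fix presumably looked something like this (a sketch only; the user, host, password, and database names are placeholders for the blacked-out values):

```shell
# Recreate the missing definer account that the views/routines reference
mysql -u root -p -e "CREATE USER 'user'@'1.0.0.0' IDENTIFIED BY 'secret';"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON appdb.* TO 'user'@'1.0.0.0';"
mysql -u root -p -e "FLUSH PRIVILEGES;"
```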
However, this did not last: yesterday evening the exact same situation occurred. The web application was running fine on Tuesday and most of Wednesday, then all of a sudden users couldn't update their jobs again, which means the definer user was removed once more even though nobody did anything in the database.
Has anyone encountered this issue before? I read this thread on the topic and even though what they say makes sense, I believe the developers did this but the error still occurred.
When I log into phpMyAdmin via cPanel, I get a weird user called "cpses_234ikjih@localhost.com". Does this perhaps have something to do with the error? I believe that before the server went crazy, this user was just the name of the cPanel account (for example: "cPanelAccountName@localhost.com").
To summarize your post, what I'm seeing is that you have a MySQL user, the user disappeared, you recreated the user, and it went away again.
There must be some external factor here. Someone could have access to your database and is deleting the user maliciously or out of misunderstanding, there could be a scheduled job, or it could be something to do with your web host.
I'd start by auditing the database accounts, and restricting access as much as possible. Check any interface that's exposed to the web, such as WordPress, Joomla, or other applications.
You should enable logging; MySQL allows several degrees of logging. I think the most useful for you would be the audit log, although honestly I've never used that specifically. You'd enable it to log future events. The binary log may contain a record of what has already occurred.
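The general query log, for instance, can be toggled at runtime without a restart (it records every statement, so only leave it on long enough to catch the culprit; the file path is a placeholder):

```shell
mysql -u root -p -e "SET GLOBAL general_log_file = '/var/log/mysql/general.log';"
mysql -u root -p -e "SET GLOBAL general_log = 'ON';"
# ...wait for the user to disappear again, then turn it back off:
mysql -u root -p -e "SET GLOBAL general_log = 'OFF';"
```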
SOLVED
I managed to solve this by changing MySQL database password and cPanel account password.
I read one post by someone saying that there was a session file which perhaps stored an old session, and that changing passwords could resolve this. Luckily it did; the error 1449 has not appeared for 5 days now.

Multiple SSIS packages failing during ftp.GetDetailListing?

I have a whole bunch of SSIS packages that are failing during GetDetailListing within an FTP script component of SSIS jobs. These were all working fine for years up until about 1.5 days ago. Currently they are all failing. We have traced it to the fact that the FTP connection itself is not set to passive mode. We can set it on the FtpClientConnection like this:
FtpClientConnection.UsePassiveMode = True
Upon setting the above before the FTP connection is opened, it connects and works without error. What I am trying to determine, before going through and fixing all these packages, is what would have changed to cause all FTPs that were previously not set to passive to now require it. At first I thought it was some sort of network setting, but I have been unable to determine what that would be. I can't believe that all the FTP locations we were connecting to would have performed a security update at the same time.
Any ideas? I am stumped and have been looking at this for > 8 hours now.
This ended up being our Barracuda server. Once it was rebooted, these failures just disappeared. An auto-update to this server must have been the cause of all these issues.

Azure SQL database working when run locally but not when published to Azure

This has been driving me crazy for a few days now and I just can't solve it.
I followed an online tutorial showing how to connect to and use an Azure database using the model-first approach in Entity Framework. With this, you had to set up database migrations to update the Azure database when publishing the website to Azure.
I had already created a database on Azure, so I thought I would take the database-first approach instead (it also didn't require migrations to be set up) and used the Entity Framework wizard to create my model. Everything worked perfectly: when I run my MVC website locally, it connects to my Azure database and shows the data, etc. However, when I publish the website to Azure and click on the tab that uses the controller that gets data from the database, I get an error:
Sorry, an error occurred while processing your request.
I have checked my connection strings and they all seem OK, and as I said, when I run it locally I can get the data from the Azure database. For some reason, though, I can't get the data once it's published.
Any ideas?
Thank you, everyone, for your help.
I didn't even think of turning on Copy Local; however, when I checked it, it was already set to true. So no answer there :(
Next I set the customErrors mode to Off to try to get a more detailed description. The error I got was huge and really didn't make much sense, so I did the usual thing and googled it, and I found this:
Can anyone spot why I keep getting this error testing the EF 5 beta
As soon as I read it, I knew this would fix it. I originally set up the project as a .NET 4.5 project, until I realized that Azure Websites didn't work with 4.5 yet, so I changed it to .NET 4. Once I uninstalled EF and reinstalled it, everything worked.
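For anyone else needing the detailed error instead of the generic page, the customErrors change mentioned above is the standard ASP.NET Web.config tweak:

```xml
<configuration>
  <system.web>
    <!-- Show full error details; switch back to RemoteOnly afterwards -->
    <customErrors mode="Off" />
  </system.web>
</configuration>
```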
Thanks for all your help. This has been stopping me doing anything for a few days :P