SQL (Find & Replace) Entire Database - sql

I am using phpMyAdmin from SiteGround cPanel.
Story: I had Cloudflare set up for a PHP platform, but I then realised it was causing issues, so I removed it. The issue I'm left with is that half of my site is still running off (https://www.example.com).
What I have done so far: in the config files of my script I have already set it so that it runs through HTTPS alone.
What I want to achieve: I noticed that there are some fields in the database that still go through the www. I want to execute a command that will automatically find anything with my old domain (https://www.example.com) and replace it with (https://example.com). The affected fields do not all come from a single column/table; they are all over the place, so an overall find & replace should fix the issue.
I would appreciate any help. Since it is a database, I don't wish to try out random things from different websites providing their feedback. I was recommended to use this website for assistance (if possible).
Thank you in advance.

Probably the most straightforward and quickest way is to simply take a dump of the entire database, open the SQL dump file in a text editor, and do a text replace from [old url] to [new url]. Then import the dump file back into the database. This should work just fine and avoids the headache of uncertainty and risk of doing a write operation across the entire database's tables via some DB query.
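A rough sketch of that workflow from the command line, assuming a MySQL database (the database name, user, and file names below are placeholders; the same export/import can also be done through phpMyAdmin's Export and Import tabs):

# 1. Dump the entire database to a file (keep a copy of this dump as a backup)
mysqldump -u db_user -p my_database > dump.sql

# 2. Replace the old URL with the new one; any text editor works, sed is just quicker
sed -i.bak 's#https://www\.example\.com#https://example.com#g' dump.sql

# 3. Import the modified dump back into the database
mysql -u db_user -p my_database < dump.sql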

Related

Is there a way to extract Access Modules without opening the file?

I ended up corrupting my database to where every time I attempt to open it, I get error 3022, "changes you requested to the table were not successful because they would create duplicate values in the index."
Recovery of the file does not seem possible, and my previous backup is a month old. I have been able to extract everything but the Modules, which are what I need to recover most. None of the standard methods I have found work, because they require the ability to open the database (for example, trying to set it as a VBA reference still gives the same error).
Is there any way to get the modules or code out of the file without opening it?
Edit:
I was finally able to get access to the file. Using DBEngine.CompactDatabase, I was able to do a compact and repair. The issue boils down to the "MSysAccessStorage" table being corrupt; it says "Id is not an index in this table". I now have access to everything except the modules, which I can't open without MSysAccessStorage working.
I'm going to keep poking at it but I'm not sure what options I have for fixing a system table. Any ideas would be helpful.
Unfortunately, the Visual Basic for Applications project has been corrupted. The original database doesn't even have any VBProjects when listing a count. I'm going to call this one a lost cause. Thanks to everyone who tried to help.

What's Elasticsearch, and is it safe to delete Logstash?

I have an internal Apache server for testing purposes; it is not client facing.
I wanted to upgrade the server to Apache 2.4, but there is no space left, so I was trying to delete some files on the server.
After checking file sizes, I found that the folder /var/lib/elasticsearch takes 80 GB of space. For example, /var/lib/elasticsearch/elasticsearch/nodes/0/indices/logstash-2015.12.08 already takes 60 GB. I'm not sure what Elasticsearch is. Is it safe if I delete this logstash index? Thanks!
Elasticsearch is a search engine, like a NoSQL database, and it stores its data in indices. What you are seeing is the data of one index.
Probably someone was using the index around 2015, when the index was timestamped.
I would just delete it.
I'm afraid that only you can answer that question. One use for Logstash + Elasticsearch is to help make sense of system logs. That combination isn't normally set up by default, so I presume someone set it up at some point for some reason, and it has obviously done some logging. Only you can know whether it is still being used, or whether it is safe to delete.
As other answers pointed out, Elasticsearch is a distributed search engine, and I believe an earlier user was pushing application or system logs to this Elasticsearch instance using Logstash. If you can find the source application, check whether the log files are already there; if so, you can go ahead and delete your index. I highly doubt anyone still needs logs from 2015, but it is really your call to check what your application's archiving requirements are and then take the necessary action.
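If you do decide to remove it, here is a sketch of how that is usually done through Elasticsearch's REST API rather than by deleting files on disk (the host and port assume a default local install; the index name is the one from the question):

# See which indices exist and how much space each one takes
curl 'http://localhost:9200/_cat/indices?v'

# Delete the old Logstash index; Elasticsearch frees the disk space itself
curl -XDELETE 'http://localhost:9200/logstash-2015.12.08'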

Logging the last time a user signed in with Node.js

I need to log the last time the user signed in using my Node.js server. I am looking into two options. The persistence requirement is not super high, meaning that a reasonable margin of error in recording this value is acceptable.
Use a SQL DB: whenever the user logs in, it modifies their profile record.
Record it in a text file on the server: whenever the user logs on, this file is opened and updated. The opening, recording, and closing of the file will all be done asynchronously.
I'm thinking that the second option is the better one, because I'm using SQL for many other operations and I prefer not to hit my DB more than necessary.
One concern I have with the second option is the performance hit on the server caused by frequent reads and writes to a local text file.
I'm curious what other people who have gone down this path thought about my reasoning. Any opinions or tips are highly welcomed. Thank you.
Normally you should use a SQL database; it is a much better approach than plain text.
The main problem with a text file is that while logging in you can simply append a line (but what about a couple of users logging in at the same moment? You have no guarantee that every access gets logged), whereas when you want to extract the last login for a user, you have to read (and load) the whole file from the start (or the end), which can be a much worse problem than accessing the DB.
Naturally you can work around all the problems of a text file, but by then you will have written a lot of code just to avoid a simple update query.
I don't think that, with the information you give, you should be worried about the performance of a database access in this case.
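For what it's worth, the "simple update query" being talked about is usually no more than this (the table and column names are made up for illustration):

-- assumes a users table with a last_login timestamp column
UPDATE users SET last_login = NOW() WHERE id = ?;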

Extracting large zip file onto server while pc is turned off

I've got a zip file of 1.6 GB and it takes forever to extract it on a server. I left it running all night, and when I woke up it still wasn't finished. There is no way to keep track of how much time is left or what percentage is done, so I'm not sure the whole thing is working properly. Is there a way to extract that file using the File Manager in cPanel so that it can run while my PC is off, and maybe notify me by email when it's done? I basically need to copy a webshop from the live server to the developers' server, and I'm just losing too much time on this. So if anyone has a better idea of how to extract it, please feel free to suggest it.
P.S. Deleting the files that did extract takes forever too.
P.P.S. I'm a Linux/system admin.
If it's all about copying files from one server to another - why not just use rsync and avoid archiving?
I mean, if extraction is a pain - remove it from the equation :)
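Something along these lines, assuming you have SSH access to both machines (host names and paths are placeholders):

# Copy the webshop straight from the live server to the dev server, no archive step needed
rsync -avz --progress user@live-server:/path/to/webshop/ /path/to/dev/webshop/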
It is not a good idea to use the cPanel File Manager for this task, as the server will probably kill the extract process if it takes too long.
The best way to go about this would be via SSH, while logged in as root. If you need to switch off your computer, you should run it in screen.
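A minimal sketch of that, with placeholder file and directory names:

# Start a detachable session so the job keeps running after you disconnect
screen -S extract

# Inside the screen session, extract the archive
unzip /home/user/site.zip -d /home/user/public_html/

# Detach with Ctrl-A then D, switch your PC off, and reattach later with:
screen -r extract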
You can also use unzipper.php, which you can get from GitHub.
It will require you to upload your zip file and unzipper.php as well. Then browse to www.yourdomain.com/unzipper.php.

Migrations don't run on hosting

I'm using MigratorDotNet to manage Rails-style migrations for my web app. I have a workflow where, if I delete all the tables in the database, I can access an installation view that will run MigratorDotNet and create all the necessary tables.
This works locally. For some reason, when I upload my code to my Arvixe hosting, the migrations just never run. I get this odd error:
There is already an object named 'SchemaInfo' in the database.
This is odd because, prior to running migrations, I manually deleted all the tables in the database (to make sure it wasn't left over from a previous install).
My code essentially boils down to:
new Migrator.Migrator("SqlServer", connectionString.ToString(), migrationsAssembly).MigrateToLastVersion();
I've already verified by logging that the connection string is correct (production/hosting settings), and the assembly is correctly loaded (name and version).
Works locally, but not on Arvixe. How do I troubleshoot this?
This is a dark day.
It turns out (oddly) that the root cause was that my hosting company used a schema other than dbo for my database. Because of this, the error message I saw (SchemaInfo already exists) was referring to their table.
My solution, unfortunately, was to rip out MigratorDotNet and go with FluentMigrator instead. Not only did this solve the problem, it also gave me a more intelligible error message (one referring to the schema names).
While it doesn't seem possible to auto-set the schema, and while I need to switch the schema between my dev and production machines, it's still a solvable problem (and a better API, IMO). I googled, but did not find any way to change the default schema in MigratorDotNet.
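For anyone hitting the same thing, here is a rough sketch of what a FluentMigrator migration with an explicit schema can look like (the schema and table names are invented; on a shared host the schema string would be whatever the provider assigned, and it would need to differ between dev and production as described above):

using FluentMigrator;

[Migration(1)]
public class CreateUsersTable : Migration
{
    // Hypothetical schema name; replace with the schema your host actually uses
    private const string HostSchema = "hosting_schema";

    public override void Up()
    {
        Create.Table("Users").InSchema(HostSchema)
            .WithColumn("Id").AsInt32().PrimaryKey().Identity()
            .WithColumn("Name").AsString(100).NotNullable();
    }

    public override void Down()
    {
        Delete.Table("Users").InSchema(HostSchema);
    }
}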
I'm sorry for the issues that you were having. On shared hosting, unfortunately the only way that we may be able to change the schema is manually. If you are still looking for a solution that requires our assistance, please forward your ticket ID to qa .at. arvixe.com as well as arvand .at. arvixe.com and we can look into the best way to resolve this.