I use Liquibase for migrations and have SQL files with Cyrillic comments, like example.sql:
--liquibase formatted sql
--changeset user:id1
create table example
(
id varchar(100) primary key
);
-- cyrillic comment here
comment on table example is 'Комментарий к таблице.';
My changelog is yaml file like:
databaseChangeLog:
  - include:
      # comment
      file: db/changelog/1.0.0/schema/example.sql
When I run the application on different OSes (my Mac and the Linux server), I always get different checksums for the same files.
Liquibase version:
compile "org.liquibase:liquibase-core:4.3.1"
I don't know why this happens; the files are identical, and all of them are UTF-8 encoded.
One of the answers to my question is to keep all files in UTF-8 with a BOM.
But that's not good for me: I would have to make sure that every SQL file carries a BOM.
Another solution is to move from YAML to XML.
Maybe there is a simpler solution?
Thanks.
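One simpler thing worth checking, assuming the mismatch comes from the JVM's platform default charset rather than from the files themselves, is to pin the encoding at startup so both machines read the changelogs the same way (the jar name is a placeholder):

java -Dfile.encoding=UTF-8 -jar app.jar

If the checksums then agree, the two machines were simply reading the UTF-8 files with different default charsets.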
I'm using Liquibase for my project and would like to rename my changelog files.
From old structure:
databaseChangeLog:
  - include:
      file: db/changelog/add_pokemons.sql
  - include:
      file: db/changelog/add_minions.sql
To new structure:
databaseChangeLog:
  - include:
      file: db/changelog/v001__add_table_pokemons.sql
  - include:
      file: db/changelog/v002__add_table_minions.sql
Unfortunately, the Liquibase documentation does not state a best practice for this.
I was thinking about duplicating my files, so I'd have the old ones plus the new ones in one directory, and then writing a migration that changes the filename column in the databasechangelog table.
What would be the best approach in your opinion?
The problem here is that Liquibase stores the file name in databasechangelog.filename. So if you are not using logicalFilePath yet, you may have a chance.
If the files you've posted are formatted SQL files, then you could do something like this (example for the file db/changelog/add_pokemons.sql):
--liquibase formatted sql logicalFilePath:db/changelog/add_pokemons.sql
and then you can rename the actual file to whatever you want.
In that case nothing should break inside the existing databasechangelog table.
Other than that, there is probably no change available that will do this automatically from within Liquibase.
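Putting it together for the rename in the question: the renamed file db/changelog/v001__add_table_pokemons.sql keeps its old identity via the header

--liquibase formatted sql logicalFilePath:db/changelog/add_pokemons.sql

while the changelog points at the new physical name:

databaseChangeLog:
  - include:
      file: db/changelog/v001__add_table_pokemons.sql

Liquibase will keep recording (and matching) db/changelog/add_pokemons.sql in databasechangelog.filename.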
I need to restore a SQL table from a daily backup, but there are problems with encoding. The backup is made by Virtualmin, with the encoding set to "default". The texts are in French, so they contain accents...
Here is the dump of the webmin backup file:
For the table (a WordPress table), the interesting fields are:
I need to insert part of this table into the live table (after deleting some lines). The table is already created with
default collation utf8mb4_unicode_ci.
When I import the rows into the table, the text is not "converted" into the right charset. For example, the French "é" shows up as "Ã©". And so on.
I tried a few things, such as adding SET commands for utf8mb4 before the INSERT, but no luck; the encoding is never handled correctly. Text in the database itself shows "Ã©" instead of "é", and of course the same thing appears in the browser.
Any suggestion? Thank you!
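A pattern that often repairs this kind of mojibake (UTF-8 bytes that were re-encoded as latin1 somewhere along the way) is to round-trip the affected column through binary. This is only a sketch; the table and column names are hypothetical WordPress ones, so adapt them and test on a copy of the table first:

-- reinterpret the latin1-looking text ("Ã©") as the UTF-8 it originally was ("é")
UPDATE wp_posts
SET post_content = CONVERT(BINARY CONVERT(post_content USING latin1) USING utf8mb4);

If the dump itself is the problem, re-importing it with mysql --default-character-set=utf8mb4 may also be worth a try.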
I'm trying to find good ideas for renaming files.
I have a small site with some editors, and sometimes they upload files with the same name as a previous file. For example:
document.doc
I don't like the solution:
document(1).doc
Because it says nothing about the file, only that there was another 'similar' file before.
I thought about adding a timestamp, but it is not nice to download a file like:
document_1348849299.doc
Do you have any suggestions or a really great way to name files?
I think your approach is a good one. You could format the timestamp to be a bit more attractive, e.g. document_2012-09-28_11.59.00.doc
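A small sketch of that formatting in Python (the function name is just illustrative):

from datetime import datetime
from pathlib import Path

def timestamped(name: str) -> str:
    # insert a readable timestamp between the stem and the extension
    path = Path(name)
    stamp = datetime.now().strftime("%Y-%m-%d_%H.%M.%S")
    return f"{path.stem}_{stamp}{path.suffix}"

print(timestamped("document.doc"))  # e.g. document_2012-09-28_11.59.00.doc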
An alternative would be to keep a database table with files and a table with file versions. Name the file on disk after the file version id.
e.g. SQL tables:

File
    FileID
    FileName
    CreatedBy
    [Whatever]

FileVersion
    FileVersionID
    FileID
    UserName
    UploadTime
    [Whatever]

and on disk, named after the version ID:

~/file_store/
    1.file
    2.file
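A minimal DDL sketch of that layout (column types and the [Whatever] extras are placeholders):

CREATE TABLE File (
    FileID    INT PRIMARY KEY,
    FileName  VARCHAR(255) NOT NULL,  -- the name shown to users on download
    CreatedBy VARCHAR(100) NOT NULL
);

CREATE TABLE FileVersion (
    FileVersionID INT PRIMARY KEY,    -- also the on-disk name: <FileVersionID>.file
    FileID        INT NOT NULL REFERENCES File(FileID),
    UserName      VARCHAR(100) NOT NULL,
    UploadTime    TIMESTAMP NOT NULL
);

Because the disk name is a surrogate key, uploads can never collide, and the original file name stays in the database for downloads.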
I am rewriting a program based on an old FoxBase database consisting of .dbf files. I need a tool that can read these files and help transfer the data to PostgreSQL. Do you perhaps know of a tool of this kind?
pgdbf.sourceforge.net has worked for every DBF file I've fed it. Quoting the site description:
PgDBF is a program for converting XBase databases - particularly
FoxPro tables with memo files - into a format that PostgreSQL can
directly import. It's a compact C project with no dependencies other
than standard Unix libraries.
If you are looking for something to run on Windows and this doesn't compile directly, you could use Cygwin (www.cygwin.com) to build and run pgdbf.
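A typical invocation, assuming the target database (here called mydb) already exists, pipes PgDBF's SQL output straight into psql:

pgdbf mytable.dbf | psql mydb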
As part of the migration path you could use Python and my dbf module. A very simple script to convert the .dbf files to CSV would be:
import sys
import dbf

# export the table named on the command line to a same-named .csv file
dbf.export(sys.argv[1])
which will create a .csv file with the same name as the .dbf file. If you put that code into a script named dbf2csv.py, you can then call it as
python dbf2csv.py dbfname
Hopefully there are some handy tools to get the csv file into PostgreSQL.
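psql itself can handle that last step; a sketch, assuming the target table already exists with columns matching the CSV (pokemons is a placeholder name, and drop HEADER true if the CSV turns out to have no header row):

\copy pokemons FROM 'dbfname.csv' WITH (FORMAT csv, HEADER true)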
I have a bunch of .btr and .lck files and I need to import them into a SQL Server database.
How can I do that?
.LCK files are lock files; you can't (and don't need to) read those directly. The .BTR files are the data files. Do you have DDF files (FILE.DDF, FIELD.DDF, INDEX.DDF)? If so, you should be able to download a trial version of Pervasive PSQL v11 from www.pervasivedb.com. Once you've installed the trial version, you can create an ODBC DSN pointing at your data and then use SSIS, DTS, or any number of programs to export the data from PSQL and import it into MS SQL.
If you don't have DDFs, you will need to either get them or create them; the DDFs describe the record structure of each data file.