Our team is using a Django environment to develop a website. The main issue is that one team member recently updated one of the databases, and the change will not carry through MySQL. We are literally on the same branch, but the database tables are completely different. We are using the current versions of Django, Python, and MySQL as the development environment, and GitHub to share work.
The truth is that Git never syncs your MySQL database. A MySQL database is local to your system. (If you used SQLite, its single database file could be committed to Git.) If you both need to access the same data, you need a database in the cloud so that you are both on the same page.
Also, you need to apply the migrations to get those same tables. This is independent of the system, but it does depend on which migrations have been created and applied.
This will create the same tables and columns. Just run this in the terminal:
python3 manage.py makemigrations
python3 manage.py migrate
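If you do want to share one database as suggested above, each developer's settings.py can point at the same cloud MySQL instance. A minimal sketch; the host, database name, and credentials are placeholders:

    # settings.py - both developers point at the same shared MySQL server.
    # Host, name, and credentials below are placeholders.
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.mysql",
            "NAME": "teamsite",
            "USER": "team",
            "PASSWORD": "...",
            "HOST": "shared-mysql.example.com",
            "PORT": "3306",
        }
    }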
The setup I have is a local machine for development with the dev DB, and a number of remote servers with the production database. While looking for a system to manage the versions of my SQLite database I found Liquibase, but I can't tell whether it will work for what I need: updating the schema of the production databases when I release a new version, applying the changes configured in Liquibase's changelog file for that version. Of course, all the rest of the code is under Git, so if Liquibase only needs the changelog files I can put them in the repository; if it needs something else, that could become a problem.
Yes, it should work. If you are using Liquibase for the first time, it will run all the changesets and store tracking information in your database by creating a separate table (DATABASECHANGELOG) for itself. You should, however, verify that the structure of the local and production databases is the same, so the changesets won't cause errors.
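As an illustration, the release step could simply invoke the Liquibase CLI against the production database. A minimal sketch, assuming Liquibase 4.x flag spellings, the SQLite JDBC driver on Liquibase's classpath, and placeholder paths:

    # release_db_update.py - hypothetical release step: apply the versioned
    # changelog (kept in the Git repo) to the production database.
    import subprocess

    subprocess.run(
        [
            "liquibase",
            "--changelog-file=db/changelog.xml",    # lives in the Git repo
            "--url=jdbc:sqlite:/var/data/app.db",   # production SQLite file
            "update",
        ],
        check=True,  # abort the release if any changeset fails
    )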
I want to use Pentaho for my work. After a bit of research I found that, to store the ktr/kjb files, I can use either a database as a repository or the file system as a repository. However, I don't see any benefit of using a database repository over the file system. The basic purpose of the repository here is to create a common location where I can keep all the developed ktr/kjb files in the production environment. Basically, with a database repository, it will hold all the developed ktr/kjb files in production, and every time I need to run a job/transformation I will connect to the database to get the respective ktr/kjb file (similar to how Informatica stores transformations); a file-based repository, on the other hand, will be a folder holding all the developed files.
Can somebody here explain the pros and cons of both types of repository?
Please let me know if you need any other information.
Thanks in advance.
When several people develop the same jobs/transformations, the database repository will hold the changes and ensure everyone is working with the latest versions.
The pros of a filesystem repository are, of course, ease of backup, no database connection to trouble you, and the possibility of using other, more modern and mature version control systems for the files than the database repository offers.
If you are using the free community edition, I would definitely go with the file repository, along with external file-based version control and migration systems. If you are using the enterprise edition, then you might want to consider the database repository, since you can then use Pentaho's built-in version control and migration systems.
Is it possible to create a package or replace an existing package in a local database using a package from a different database without having to export it from the remote database?
Basically, I have two environments/servers (DEV and QA).
The developers who work on the packages use the development environment, and I would like to update the same packages in the QA environment using the package in DEV (ignore any possible issues for now, e.g. compilation failures).
Is it possible to frequently update the package in QA using the package in DEV as the source (instead of compiling from an .sql file)? Maybe a database link?
Yes, it's possible. You could create a process on your target system that uses the DBMS_METADATA package on the remote system (over a database link) to fetch the DDL for the desired package spec and body, and then uses dynamic SQL on the local system to compile the fetched code.
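For illustration, here is a minimal sketch that does the same thing from an external script with the python-oracledb driver instead of a database link: it fetches the spec and body DDL from DEV via DBMS_METADATA and compiles them in QA. The DSNs, credentials, and package/schema names are assumptions:

    # copy_package.py - hypothetical: pull a package's DDL from DEV and
    # compile it in QA. The account needs SELECT_CATALOG_ROLE (or must own
    # the package) on DEV to read the DDL.
    import oracledb

    dev = oracledb.connect(user="deploy", password="...", dsn="dev-host/devpdb")
    qa = oracledb.connect(user="deploy", password="...", dsn="qa-host/qapdb")

    ddl_sql = "SELECT DBMS_METADATA.GET_DDL(:otype, :oname, :owner) FROM dual"

    with dev.cursor() as src, qa.cursor() as dst:
        for otype in ("PACKAGE_SPEC", "PACKAGE_BODY"):
            src.execute(ddl_sql, otype=otype, oname="MY_PKG", owner="APP")
            ddl = src.fetchone()[0].read()  # GET_DDL returns a CLOB
            dst.execute(ddl)  # runs CREATE OR REPLACE PACKAGE [BODY] ...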
Alternatively, you could use a tool such as Oracle's SQL Developer to migrate the code, using either the database diff functionality to detect differences and prepare the appropriate DDL scripts, or the Cart functionality to pick and choose what gets migrated. However, I'm not sure how well the SQL Developer method can be automated.
I have an application written in Yii that will need a version update from time to time. Currently, when we release a new update, we manually run a shell script that copies/overwrites the application code/source files from our Git repo and sets the appropriate permissions, and at the end of the script we run a Yii command to run our database update. We have versioning on our database updates, and we roll back the changes to the database if one of the SQL statements of a version fails. The issue occurs if a database update fails while the application code/source has already been updated: the application will then fail when it tries to access certain table fields, tables, or views.
What is the best way to handle an application update with versioning, much like the way WordPress handles its updates, or better?
I would like to ask for suggestions on the right approach; it may involve RPM, Git, or other tools.
It would be good to have a detailed list of processes from you guys.
Thanks.
Database updates may include backups and running multiple scripts, and should be handled outside of RPM packaging. There are too many failure modes for RPM scripting to handle flawlessly.
You can always package up the database schema update script in the package, and then check for the correct schema version when your application starts, providing instructions (or a pointer to instructions) on how to upgrade the database, and how to reinstall the last-known-good application, when the wrong schema version is detected.
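As an illustration of that startup check, here is a minimal sketch in Python (the idea is language-agnostic); the schema_version table, the expected version constant, and the database path are assumptions:

    # schema_guard.py - hypothetical startup check: refuse to run when the
    # database schema does not match what this release of the code expects.
    import sys
    import sqlite3  # stand-in driver; use whatever your application uses

    EXPECTED_SCHEMA_VERSION = 42  # bumped in each packaged release

    def check_schema(db_path):
        conn = sqlite3.connect(db_path)
        try:
            (version,) = conn.execute(
                "SELECT version FROM schema_version"
            ).fetchone()
        finally:
            conn.close()
        if version != EXPECTED_SCHEMA_VERSION:
            sys.exit(
                "Schema version %s, expected %s. Run the packaged upgrade "
                "script, or reinstall the last-known-good application."
                % (version, EXPECTED_SCHEMA_VERSION)
            )

    if __name__ == "__main__":
        check_schema("/var/lib/myapp/app.db")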
Why, when I use SQL files to load my initial data via SQL statements, are the SQL files not executed when deploying the app on Heroku (so no data is found in the database by default), and how do I solve it?
Heroku does not use the SQLite database. You have to use Heroku's shared PostgreSQL database, since Heroku is a 'production' environment.
When you push a Django app up to Heroku, it overrides the database settings in your settings.py file. You then have to run a syncdb or South migration against the production PostgreSQL database your app now uses.
By the way, you will need to install Postgres in your development environment and pip install psycopg2 for Postgres/Python support.
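To get the initial data in, the usual Django route is to turn the raw SQL inserts into a fixture and load it after the schema is created. A minimal sketch, assuming a hypothetical app named myapp (in older Django versions, a fixture named initial_data is also loaded automatically by syncdb):

    # Locally, after loading your SQL data once, dump it to a fixture
    # and commit it to the repo:
    python manage.py dumpdata myapp --indent 2 > myapp/fixtures/initial_data.json

    # After deploying, load it into the Heroku Postgres database:
    heroku run python manage.py loaddata initial_data.json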