Upgrade django-synchro for Django 2.2: OperationalError - no such table: django_content_type

I am trying to upgrade django-synchro to Django 2.2. I have already upgraded the project to Django 2.1, but I now have a problem with the ContentType model.
The version upgraded to Django 2.1 can be found here (python runtests.py works and all tests pass).
With Django 2.2.3 I get the error:
django.db.utils.OperationalError: no such table: django_content_type
It seems that at initialisation, migrations are run on the default database while the rest happens on the test database (in memory), so the ContentType rows are not visible to the rest of the code. The error occurs when models.py is loaded:
content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE)
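For context, the failing declaration lives in a model along these lines (the class name below is hypothetical, standing in for the actual synchro model):

from django.contrib.contenttypes.models import ContentType
from django.db import models

class ChangeRecord(models.Model):  # hypothetical name; any model with a ContentType relation hits this
    content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE)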
Any ideas would be much appreciated.
I have looked at the Django 2.2 release notes. There are two backwards-incompatible changes in 2.2 that might cause the error:
TransactionTestCase serialized data loading: Initial data migrations are now loaded in TransactionTestCase at the end of the test, after the database flush. In older versions, this data was loaded at the beginning of the test, but this prevents the test --keepdb option from working properly (the database was empty at the end of the whole test suite). This change shouldn’t have an impact on your tests unless you’ve customized TransactionTestCase’s internals.
Tests: Deferrable database constraints are now checked at the end of each TestCase test on SQLite 3.20+, just like on other backends that support deferrable constraints. These checks aren’t implemented for older versions of SQLite because they would require expensive table introspection there.

For me, the culprit was this note from the Django 2.2 release notes:
Tests will fail on SQLite if apps without migrations have relations to apps with migrations. This has been a documented restriction since migrations were added in Django 1.7, but it fails more reliably now. You’ll see tests failing with errors like no such table: <app_label>_<model_name>. This was observed with several third-party apps that had models in tests without migrations. You must add migrations for such models.
I'm not sure the error message is particularly helpful, but a round of package upgrades, plus checking that I had no really old Django modules lying around, seemed to fix it. A sketch of what adding migrations for a test-only model can look like follows.
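To illustrate that last note: if a test-only app (called testapp here, a hypothetical name) defines a model with a relation to django.contrib.contenttypes, it needs a real migrations package. A minimal hand-written initial migration could look like this:

# testapp/migrations/0001_initial.py  (testapp and TestModel are hypothetical names)
from django.db import migrations, models
import django.db.models.deletion

class Migration(migrations.Migration):
    initial = True

    # Depend on contenttypes so the django_content_type table exists first
    dependencies = [
        ('contenttypes', '0002_remove_content_type_name'),
    ]

    operations = [
        migrations.CreateModel(
            name='TestModel',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False)),
                ('content_type', models.ForeignKey(
                    on_delete=django.db.models.deletion.CASCADE,
                    to='contenttypes.ContentType')),
            ],
        ),
    ]

Alternatively, once the app is in INSTALLED_APPS, python manage.py makemigrations testapp generates an equivalent file.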

Related

Django - Transferring data to a new database

I am using Django as my web framework with Django REST API. Time and time again, when I try to migrate the table on production, I get a litany of errors. I believe my migrations on development are out of sync with production, and as a result, chaos. Thus each time I attempt major migrations on production I end up needing to use the nuclear option - delete all migrations, and if that fails, nuke the database. (Are migrations even supposed to be committed?)
This time however, I have too much data to lose. I would like to preserve the data. I would like to construct a new database with the new schema, and then manually transfer the old database to the new one. I am not exactly sure how to go about this. Does anyone have any suggestions? Additionally, how can I prevent this from occurring in the future?
From what you're saying, it sounds like your migration files are out of whack and you're constantly running into issues relating to database migrations. I would recommend you simply remove all of your migration files and start with a new initial migration after you make all the necessary model changes and restructuring of the schema.
When it comes time to run the migration on your production server, it might make the most sense to migrate with --fake-initial and manually make any remaining database changes outside of Django so the database matches your schema; a sketch follows below.
I might get a lot of backlash about this, and obviously use your best judgement, but from my experience it was much easier to go about the problem this way than to waste time writing custom migration files that try to fix all of your problems.
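A sketch of that reset workflow, assuming a single database whose production schema already matches the current models:

# on development, after deleting the old migration files (keep each migrations/__init__.py)
python manage.py makemigrations
# on production, record the new initial migration as applied without touching existing tables
python manage.py migrate --fake-initial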
Addressing your other questions:
Time and time again, when I try to migrate the table on production, I get a litany of errors.
I highly recommend you take the time to get acquainted with how migrations work by reading the official Django docs; you will save yourself a LOT of headache.
... each time I attempt major migrations on production I end up needing to use the nuclear option - delete all migrations
You shouldn't be deleting your migration files every time there's an issue.
Are migrations even supposed to be committed?
You should definitely be committing your migrations. If you're working on a team, your teammates will use the migration files you created to make the necessary changes on their local DBs, as well as on any dev/prod servers you may have.
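For example, after pulling a committed migration, a teammate (or a deploy script) applies the same schema change with the standard command:

python manage.py migrate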

Can Liquibase detect if it has already run?

I have a small set of scripts that manage the build/test/deployment of an app. Recently I decided I wanted to switch to Liquibase for DB schema management. The scripts will be working both on developer machines, where they regularly blow away and rebuild the database, and on deployed environments, where we will only be adding new changesets.
When this program first runs on a deployed environment I need to detect if Liquibase has run or not and then run changelogSync to sync with the existing tables.
Other than manually checking if the database changelog table exists is there a way for the Liquibase API to let me know that it has already run at least once?
I'm using the Java core library from Groovy.
The easiest way is probably ((StandardChangeLogHistoryService) ChangeLogHistoryServiceFactory.getInstance().getChangeLogService(database)).hasDatabaseChangeLogTable()
The ChangeLogHistoryService interface returned by liquibase.changelog.ChangeLogHistoryServiceFactory doesn't have a method to check if the table exists, but the StandardChangeLogHistoryService implementation does.

Could not load type 'System.Data.Entity.Core.Mapping.EntityContainerMapping'

When I debug the following code, I receive the message "System.TypeLoadException was caught" on the Delete() call.
Using db As New ScholarshipEntities
    db.ApplicationHistories.Where(Function(h) h.HistoryTypeId = 0).Delete()
    db.SaveChanges()
End Using
I am using EF 6.1 in Visual Studio 2013. I also have the EntityFramework.Extended library installed.
I have no trouble querying results. I thought the bug might occur when the Where method has no results, but that is not the case. I also have no problem adding new models (.edmx), which was a problem for some people who hit this exception.
I just recently upgraded to EF 6.1 and installed the Extended library. This is my first time using one of the extended methods. I've uninstalled and reinstalled the NuGet packages with no success.
IntelliTrace shows the following exceptions from the Delete() call (in order):
'EntityFramework.Reflection.DynamicProxy' does not contain a definition for 'InternalQuery'
Cannot implicitly convert type 'EntityFramework.Reflection.DynamicProxy' to 'System.Data.Entity.Core.Objects.ObjectQuery<Scholarship.ApplicationHistory>'
Could not load type 'System.Data.Entity.Core.Mapping.EntityContainerMapping'
I've added an issue on the Extended library's GitHub.
Update
I've reinstalled EF and the EF.Extended library with no luck. I am able to use RemoveRange in its place. I am able to create a new project, install the packages, add a model mapped to the same database, and successfully use Delete. Obviously, the problem is in my current solution.
In my solution, I have an ASP.NET project and a regular library project. In the ASP project, a page's code-behind calls a RemoveHistory method in the library. The library contains classes for the business logic and data access, both of which implement interfaces. The actual Delete occurs in the data access class. My model also resides in this library project.
I may be able to create a completely new project and bring everything over, but that will take quite some time. Even if I did, I want to understand why it doesn't work in the first place, so that I don't have to repeat this process.
If you want to delete certain rows, do it like this:
Using db As New ScholarshipEntities
    db.ApplicationHistories.RemoveRange(db.ApplicationHistories.Where(Function(h) h.HistoryTypeId = 0))
    db.SaveChanges()
End Using
If you want to remove a single entity, do it like this:
Using db As New ScholarshipEntities
    db.ApplicationHistories.Remove(db.ApplicationHistories.Single(Function(h) h.HistoryTypeId = 0))
    db.SaveChanges()
End Using
I "solved" the issue some time ago. I'll eventually go back to try and reproduce the problem to confirm my suspicions.
There were multiple versions of Entity Framework installed in the solution. This didn't appear to affect basic EF functionality, though I'm sure it did in some subtle, potentially buggy fashion.
Every time the solution was opened, NuGet would state that it couldn't complete uninstallation. Uninstalling and restoring via NuGet was unsuccessful, and the packages had to be deleted manually. Once completely removed, I installed the packages again. This resolved the issue.
I wish I could give a more technical answer, but the basic cause was not looking closely enough at the packages folder and configuration.

Core Data Versioning - Multiple mapping models required

I have an existing project that uses Core Data, and I have 3 versions within my xcdatamodeld bundle. So far I have only used lightweight migration, as I have mostly added new attributes and entities; however, I now wish to move an existing attribute into a new entity. I realise that I have to create a mapping model to do this in order to migrate the data between the attributes.
I presume there are users out there with very old versions of the app using version 1 of the model, and others using version 2 and 3.
Questions:
Do I need to create a mapping model from every existing version to the new version, or just from the latest version?
Do I need to change/disable the lightweight migration options on my NSPersistentStoreCoordinator? Currently I have the following options enabled:
NSMigratePersistentStoresAutomaticallyOption
NSInferMappingModelAutomaticallyOption
I presume that lightweight migration will still be required to move from v1 to v2 to v3, while the new mapping model is required to go from v3 to v4. I've had a look around, but I can't find any information on how this all happens, as most tutorials only cover 2 versions.
Thanks
Just from the latest version.
No.
Migrations are sequential (which is the reason why you need to keep all model versions present, even if no migration from the first version is anticipated).
NSInferMappingModelAutomaticallyOption only infers a mapping model when no explicit mapping model is present; if your custom mapping model covers a migration step, Core Data will use it instead of inferring one.

RavenDb Config and DocumentStore abstraction?

I am using RavenDb across multiple projects and solutions to access three different databases that are all part of the same product. For instance, I have multiple MVC projects that fetch user info and some data out of the 'web'-centric database and the 'backend' database, using '-' for the id override (though I need this only for a subset of classes in the 'web' db). Then there is the 'backend' database used by services (as well as the MVC projects). And finally there is a third temp/scratch database, used by another set of services to build the backend db. All of these are accessed from various class libraries and even console test, seed, and integration test apps.
Managing all of these is becoming quite a nuisance. Every time I create a new console app or class library that accesses the db, I have to set up config and Raven packages for the project, make sure indexes are built, and so on. Not to mention running updates for every NuGet update or, in my case, installing a new unstable version of the server/client binaries.
Is there an easier way to manage this?
I tried to abstract the DocumentStore creation and initialization, as well as index creation, into its own project and reference that. But the other projects then had to manually add Newtonsoft.Json (and NLog) from the package directory.
I am also getting the following when I try to abstract the DocumentStore into a class with a static property:
StackTrace of un-disposed document store recorded. Please make sure to dispose any document store in the tests in order to avoid race conditions in tests.
Anyone have any thoughts on handling these issues?
Thanks
I don't think that the manual addition of the references is a big issue, but you can add the actual NuGet references as well.
Note that the 'DocumentStore not disposed' error is something that only happens in the unstable (debug) builds, and won't happen in release builds.