How to set cascade to false for all Grails domains at the global level - grails-orm

How can I set cascade to false for all Grails domains at the global level?
I would also like the option to set it on a specific save operation.

For the first part of your question:
How can I set cascade to false for all Grails domains at the global level?
It's a bit of an undocumented feature, but in your application.groovy file you can add
grails.gorm.default.mapping = {
    '*' cascade: 'none'
}
Be aware that cascade validation will be disabled even if you use validate(deepValidate:true).
Note: even with cascade validation disabled, if you manually validate a nested instance before the outer instance, the validation of the outer instance will still collect the errors of the nested instance. This caused me a fair amount of confusion.
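A minimal Groovy sketch of that gotcha, using hypothetical Parent and Child domain classes (schematic, not a full Grails app):
class Child {
    String name
    static constraints = { name blank: false }
}
class Parent {
    Child child
}

def child = new Child(name: '')        // fails its own constraints
def parent = new Parent(child: child)

child.validate()     // populates child.errors
parent.validate()    // even with cascade validation off, the outer validate()
                     // can still surface child's already-collected errors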
I have only tested this in Grails 3.3; I'm not sure whether it works in other versions.

Related

Flask-migrate change db before upgrade

I have a multi-tenancy structure where each client has their own schema. The structure mirrors the "parent" schema, so any migration needs to happen identically for each schema.
I am using Flask-Script with Flask-Migrate to handle migrations.
What I tried so far is iterating over my schema names, building a URI for them, scoping a new db.session with the engine generated from the URI, and finally running the upgrade function from flask_migrate.
@manager.command
def upgrade_all_clients():
    clients = clients_model.query.all()
    for c in clients:
        application.extensions["migrate"].migrate.db.session.close_all()
        application.extensions["migrate"].migrate.db.session = db.create_scoped_session(
            options={
                "bind": create_engine(generateURIForSchema(c.subdomain)),
                "binds": {},
            }
        )
        upgrade()
    return
I am not entirely sure why this doesn't work, but the result is that it only runs the migration for the db that was set up when the application starts.
My theory is that I am not changing the session that was originally set up when the manager script runs.
Is there a better way to migrate each of these schemas without setting multiple binds and using the --multidb parameter? I don't think I can use SQLALCHEMY_BINDS in the config since these schemas need to be able to be dynamically created/destroyed.
For those who are encountering the same issue, the answer to my specific situation was incredibly simple.
@manager.command
def upgrade_all_clients():
    # assuming: from flask_migrate import upgrade as _upgrade
    clients = clients_model.query.all()
    for c in clients:
        print("Upgrading client '{}'...".format(c.subdomain))
        db.engine.url.database = c.subdomain
        _upgrade()
    return
The database attribute of the db.engine.url is what targets the schema. I don't know if this is the best way to solve this, but it does work and I can migrate each schema individually.

Difference between with and without sudo() in Odoo

What is the difference between:
test = self.env['my.example'].sudo().create({'id':1, 'name': 'test'})
test = self.env['my.example'].create({'id':1, 'name': 'test'})
Both examples work, but what are the advantages of using sudo()?
Odoo 8–12
Calling sudo() (with no parameters) before calling create() will return the recordset with an updated environment with the admin (superuser) user ID set. This means that further method calls on your recordset will use the admin user and as a result bypass access rights/record rules checks [source]. sudo() also takes an optional parameter user which is the ID of the user (res.users) which will be used in the environment (SUPERUSER_ID is the default).
When not using sudo(), if the user who calls your method does not have create permissions on my.example model, then calling create will fail with an AccessError.
Because access rights/record rules are not applied for the superuser, sudo() should be used with caution. It can also have some undesired effects, e.g. mixing records from different companies in multi-company environments, or additional refetching due to cache invalidation (see the Environment swapping section in the Model Reference).
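For example, a hypothetical sketch of the optional user argument on the pre-13 API (user_id stands in for a real res.users id):
# Odoo <= 12: run as a specific user instead of the superuser
test = self.env['my.example'].sudo(user_id).create({'name': 'test'})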
Odoo 13+
Starting with Odoo 13, calling sudo(flag) returns the recordset in an environment with superuser mode enabled or disabled, depending on whether flag is True or False, respectively. Superuser mode does not change the current user; it simply bypasses access rights checks. Use with_user(user) to actually switch users.
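A short sketch of the Odoo 13+ distinction, assuming this runs inside a model method and some_user is a hypothetical res.users record:
# superuser mode: same user, but access rights/record rules are bypassed
records = self.env['my.example'].sudo()
# explicitly disable superuser mode again
records = records.sudo(False)
# actually switch the user the environment runs as
records = self.env['my.example'].with_user(some_user)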
You can check the comments on sudo() in the Odoo source at odoo/models.py, under def sudo():
Returns a new version of this recordset attached to the provided user.
By default this returns a ``SUPERUSER`` recordset, where access
control and record rules are bypassed.
It is the same as:
from odoo import api, SUPERUSER_ID
env = api.Environment(cr, SUPERUSER_ID, {})
In this example we pass SUPERUSER_ID in place of uid when creating an Environment.
If you do not use sudo(), the current user needs create permission on the given object.
.. note::
    Using ``sudo`` could cause data access to cross the
    boundaries of record rules, possibly mixing records that
    are meant to be isolated (e.g. records from different
    companies in multi-company environments).

    It may lead to un-intuitive results in methods which select one
    record among many - for example getting the default company, or
    selecting a Bill of Materials.

.. note::
    Because the record rules and access control will have to be
    re-evaluated, the new recordset will not benefit from the current
    environment's data cache, so later data access may incur extra
    delays while re-fetching from the database.

The returned recordset has the same prefetch object as ``self``.

libgit2: Merge with conflicts

I'm working on implementing merge using libgit2, and I'm having trouble getting it to deal with conflicts (changes to the same line in a file) - the merge just aborts, with nothing written to the index or to the workspace. Resolvable conflicts (changes to different lines) are working fine.
It exits with GIT_ECONFLICT, which apparently indicates that the worktree and/or index aren't clean, but I checked with git status just before calling git_merge() and it's clean.
I'm using default merge options, and checkout options set to GIT_CHECKOUT_SAFE | GIT_CHECKOUT_ALLOW_CONFLICTS. I tried using FORCE instead of SAFE but it didn't help. What else do I need to do so the conflicts are recorded?
Code is here (in Swift):
https://github.com/Uncommon/Xit/blob/ff1bf6312bb1250b1db432035947a282a2cdd362/Xit/XTRepository%2BMergePushPull.swift#L154
It turned out the problem was that my unit test had just done a git commit using the command-line tool, so libgit2's in-memory copy of the index was out of date; using that stale index, libgit2 thought there was a conflict. Reloading the index with git_index_read() before calling git_merge() solved the problem.
This is actually a bug in libgit2; git_merge should be reloading the index itself: https://github.com/libgit2/libgit2/issues/4203
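A rough sketch of the fix in terms of the raw C API, assuming repo is an open git_repository* and their_head is a git_annotated_commit* for the branch being merged (error handling omitted):
git_index *index = NULL;
git_repository_index(&index, repo);
git_index_read(index, 1);   /* force a re-read from disk, discarding stale in-memory state */
git_index_free(index);

git_merge_options merge_opts = GIT_MERGE_OPTIONS_INIT;
git_checkout_options checkout_opts = GIT_CHECKOUT_OPTIONS_INIT;
checkout_opts.checkout_strategy = GIT_CHECKOUT_SAFE | GIT_CHECKOUT_ALLOW_CONFLICTS;

git_merge(repo, (const git_annotated_commit **)&their_head, 1,
          &merge_opts, &checkout_opts);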

Getting a "need project id" error in Keen

I get the following error when I run:
Keen.delete(:iron_worker_analytics, filters: [{:property_name => 'start_time', :operator => 'eq', :property_value => '0001-01-01T00:00:00Z'}])
Keen::ConfigurationError: Keen IO Exception: Project ID must be set
However, when I set the value, I get the following:
warning: already initialized constant KEEN_PROJECT_ID
iron.io/env.rb:36: warning: previous definition of KEEN_PROJECT_ID was here
Keen works fine when I run the app and load the values from an env.rb file, but from the console I cannot get past this.
I am using the ruby gem.
I figured it out. The documentation is confusing. Per the documentation:
https://github.com/keenlabs/keen-gem
The recommended way to set keys is via the environment. The keys you
can set are KEEN_PROJECT_ID, KEEN_WRITE_KEY, KEEN_READ_KEY and
KEEN_MASTER_KEY. You only need to specify the keys that correspond to
the API calls you'll be performing. If you're using foreman, add this
to your .env file:
KEEN_PROJECT_ID=aaaaaaaaaaaaaaa
KEEN_MASTER_KEY=xxxxxxxxxxxxxxx
KEEN_WRITE_KEY=yyyyyyyyyyyyyyy
KEEN_READ_KEY=zzzzzzzzzzzzzzz
If not, make a script to export the variables into your shell or put it before the command you use to start your server.
But I had to set it explicitly via Keen.project_id (which I found by inspecting Keen.methods).
It's sort of confusing: from the docs I assumed I just needed to set the environment variables. Maybe I am misunderstanding the docs, but it was confusing, at least to me.
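For example, a sketch of the explicit console setup the answer describes (key values are placeholders):
require 'keen'

Keen.project_id = 'aaaaaaaaaaaaaaa'
Keen.master_key = 'xxxxxxxxxxxxxxx'
Keen.write_key  = 'yyyyyyyyyyyyyyy'
Keen.read_key   = 'zzzzzzzzzzzzzzz'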

ServiceStack Redis: when using SetEntry, it automatically generates a set with key "ids:" + objectName in the Redis db. How can I disable it?

When using SetEntry, a set with the key "ids:" + objectName is automatically generated in the Redis db.
For example:
typedClient.SetEntry("famyly:username:jhon", new Family { FatherName = "Jhon", ... });
A set with the key "ids:Family" and a member like "2343443" will be automatically created in the Redis db,
and each time I update or modify the same key with SetEntry, the "ids:Family" set grows by another auto-generated member. This set becomes extremely large if I update the key frequently.
How can I disable the auto-generated set? It seems useless in the current circumstances.
Thanks
I ran into this same problem - I discovered that our database contained a couple dozen of these "ids:XXX" sets, each containing tens of millions of items, which were consuming significant amounts of memory.
The solution is to switch to the untyped client. You can still use typed methods on the client, so you're really not giving up any type safety or automatic serialization. There are a couple of ways to create clients; we tend to use the get-in-get-out Exec shortcuts on RedisClientsManager. You should be able to adapt this to the way you do it.
Typed client - creates "ids" sets:
// set:
redis.ExecAs<T>(c => c.SetEntry(key, value));
// get:
T value = redis.ExecAs<T>(c => c.GetValue(key));
Untyped client - no "ids" sets created:
// set:
redis.Exec(c => c.Set(key, value));
// get:
using (var cli = _redis.GetClient())
{
    T value = cli.Get<T>(key);
}
The inferred auto-generated ids are created when you use the high-level Redis typed client. Use IRedisClient.SetEntry on the string-based RedisClient API instead.
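A sketch of that string-based approach, assuming a configured IRedisClientsManager named redisManager and ServiceStack.Text for serialization:
using (var client = redisManager.GetClient())
{
    var family = new Family { FatherName = "Jhon" };

    // string-based API: serialize explicitly; no "ids:*" bookkeeping sets are created
    client.SetEntry("famyly:username:jhon", family.ToJson());

    var loaded = client.GetValue("famyly:username:jhon").FromJson<Family>();
}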