Can I lock a table in Rails? (And should I?)

I have a small table in my Rails app that contains static data (user Roles). It shouldn't ever change. I'm wondering if it's possible to lock the table to keep anyone (namely developers) from accidentally changing it in production.
Or should I have not put that data into the database at all? Should it have been hardcoded somewhere to make editing more difficult and, at least, auditable (git blame)?

The right way to do this is with permissions. Change the ownership of the table to another user, and grant the production database user SELECT only.
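A hedged sketch of that approach, assuming PostgreSQL, a roles table, an owning role roles_owner, and an app user app_user (all names hypothetical). It is wrapped in a migration for auditability, but it must run under a role privileged enough to ALTER and GRANT:

class LockDownRoles < ActiveRecord::Migration
  def self.up
    execute 'ALTER TABLE roles OWNER TO roles_owner'  # hypothetical owning role
    execute 'REVOKE ALL ON roles FROM app_user'       # hypothetical production user
    execute 'GRANT SELECT ON roles TO app_user'       # reads only
  end

  def self.down
    execute 'GRANT ALL ON roles TO app_user'
  end
end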
I would say the right place for these kinds of things is probably the database. That way if they ever need to change, you can change them in the database, and you don't have to redeploy your application (assuming it will notice the change).
You didn't ask this, but your wording brought it to mind so I'll say it anyway: you should never need to explicitly LOCK a table with PostgreSQL. If you think this is something you need to do, you should make sure what you're worried about can actually happen under MVCC and that transactions aren't going to do the right thing for you.

I would probably make use of attr_accessible.
If you write something like:
class Role < ActiveRecord::Base
  attr_accessible # none
end
you could at least prevent any mass assignment from the Rails side, but it does not prevent direct modification by developers with database access.
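With an empty whitelist, mass assignment is filtered out (a hedged console sketch):

role = Role.new(:name => 'superadmin')
role.name        # => nil -- the attribute was never set
role.name = 'x'  # direct assignment still works, as does raw SQL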
See also this thread: How I can set 'attr_accessible' in order to NOT allow access to ANY of the fields FOR a model using Ruby on Rails?

You can use a trigger to prevent updates to the table, sketched below (assuming you can't add a new db user).
Or, use a view and ensure all read requests go through it (probably by removing the ActiveRecord class that corresponds to the table).
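A hedged sketch of the trigger route, assuming PostgreSQL and a roles table (all names hypothetical):

class ProtectRolesTable < ActiveRecord::Migration
  def self.up
    execute <<-SQL
      CREATE FUNCTION forbid_roles_change() RETURNS trigger AS $$
      BEGIN
        RAISE EXCEPTION 'roles is read-only';
      END;
      $$ LANGUAGE plpgsql;

      CREATE TRIGGER roles_read_only
        BEFORE UPDATE OR DELETE ON roles
        FOR EACH ROW EXECUTE PROCEDURE forbid_roles_change();
    SQL
  end

  def self.down
    execute 'DROP TRIGGER roles_read_only ON roles'
    execute 'DROP FUNCTION forbid_roles_change()'
  end
end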

Related

How to manually add a user in IBM Cloudant?

I have a Cloudant database with a lot of deleted docs. Since they can't be destroyed, I would like to make a filtered copy of the non-deleted items to a temporary database, destroy the original one, and copy the temporary database to a fresh database with the same name as before.
The problem is that when I destroy the database, the API keys generated for it are also destroyed...
So the front-end app calling the new database can't access it!
I would like to manually create a user/password, so I can recreate the same user each time I destroy the database.
How can I do that?
Or is there another way to achieve my goal?
To answer your actual question, you can't add "users" to a Cloudant account, only databases. You can, however, make API-keys that span multiple databases, which sounds like it could be what you want:
https://dx13.co.uk/articles/2016/04/11/using-a-cloudant-api-key-with-multiple-cloudant-databases-and-accounts/
But as was noted by bessbd above, if your data model relies on document deletion, you're working against the grain of Cloudant, and sooner or later you'll end up with problems.
And finally -- the doc links appear to work just fine.
Maybe some useful stuff here: https://blog.cloudant.com/2019/11/21/Best-and-Worst-Practices.html
[disclaimer, I wrote that]
Can you please expand a little further on your use case? Why do you want to get rid of the deleted docs? Is there a way to avoid deleting the docs? Also, have you already read https://cloud.ibm.com/docs/services/Cloudant?topic=cloudant-documents#tombstone-documents ?

Effectively make database records read-only

How can I make sure that specific data in the database isn't altered anymore.
We are working with TSQL. Inside the database we store contract revisions. These have a status: draft / active. When the status has become active, the revision may never be altered anymore. A revision can have 8 active modules (each with its own table), each with their own settings and sub-tables. This creates a whole tree of tables with records that may never change anymore when the contract revision has been set to active.
Ideally I would simply mark those records as read-only. But no such thing exists today. The next thing that comes to mind is triggers, but then I would have to add those triggers to a lot of tables, all of which are related to the contract revision.
Now maybe there are other approaches, like a database used only for archiving, on which the user only has insert rights. When a contract revision becomes active, it is moved from one DB to the archive DB (insert is allowed) and can never be altered again (DENY UPDATE|DELETE).
But maybe there are other, more ingenious options I haven't thought of and you have. Maybe involving the CLR or what not.
So how can I make a tree structure of records inside our T-SQL database effectively read-only, in a way that is maintenance-free, easy to understand, quick to set up, and can be applied generically?
Whatever you do (triggers, granted rights...) can be circumvented by a user with higher rights; that much you know for sure...
Is this just to archive this data?
One idea that comes to mind is to create a nested XML document with all the data in one big structure and put it into a side table, then create an INSTEAD OF UPDATE, DELETE trigger that simply does nothing (sketched below). Let these tables be 1:1-related.
You can still work with this data, just not quite as fast as reading from physical tables.
If you want, you might even convert the XML to a string and compute a hash code over it, which you store in a different place to check for manipulation.
The whole process might be done in one single stored procedure call.
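A hedged T-SQL sketch of that trigger (table and trigger names hypothetical):

CREATE TRIGGER trg_ContractArchive_ReadOnly
ON dbo.ContractArchive
INSTEAD OF UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Intentionally do nothing: the UPDATE/DELETE is silently swallowed.
    -- Raise an error instead if callers should notice, e.g.:
    -- RAISERROR('Archived contract revisions are read-only.', 16, 1);
END;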

GetOrCreate in RavenDB, or a better alternative?

I have just started using RavenDB on a personal project and so far inserting, updating and querying have all been very easy to implement. However, I have come across a situation where I need a GetOrCreate method and I'm wondering what the best way to achieve this is.
Specifically I am integrating with OpenID and once authentication has taken place the user is redirected to my site. At this point I'd either like to retrieve their user record from Raven (by querying on the ClaimsIdentifier property) or create a new record. The user's ID is currently being set by Raven.
Obviously I can write this in two statements but without some sort of transaction around the select and the create I could potentially end up with two user records in the database with the same claims identifier.
Is there any way to achieve this kind of functionality? Possibly even more importantly: do you think I'm going down the wrong path? I'm assuming that even if I could create a transaction, it would make scaling out to multiple servers difficult and in any case could add a performance bottleneck.
Would a better approach be to keep the query and create operations as separate statements, check for duplicates when the user is retrieved, and merge at that point? Or do something similar on a scheduled task?
I can't help but feel I'm missing something obvious here so any advice on this problem would be greatly appreciated.
Note: while scaling out to multiple servers may seem unnecessary for a personal project, I'm using it as an evaluation of Raven before using it at work.
Dan, although RavenDB has support for transactions, I wouldn't go that way in your case. Instead, you could just use the user's ClaimsIdentifier as the user document's id, because those are guaranteed to be unique (sketched below).
Alternatively, you can stay with user ids generated by Raven (HiLo, by the way) and use the new UniqueConstraintsBundle, which lets you mark certain properties as unique. Internally it creates an additional document whose id is the value of your unique property.
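A hedged C# sketch of the natural-key approach (the users/<claims-identifier> id format and the User class are assumptions, not RavenDB requirements):

using (var session = documentStore.OpenSession())
{
    session.Advanced.UseOptimisticConcurrency = true;
    var id = "users/" + claimsIdentifier;
    var user = session.Load<User>(id);
    if (user == null)
    {
        user = new User { Id = id, ClaimsIdentifier = claimsIdentifier };
        session.Store(user);
        session.SaveChanges(); // a concurrent create of the same id fails instead of duplicating
    }
}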

Ruby on Rails: Way to test for ActiveRecord Where conditions without hitting database?

I am using Ruby on Rails 2.3.8. I have several User model objects stored in memory and several Where conditions that I want to check these against. Because all this data is stored in memory I want to avoid hitting the database to perform these checks. Is there a way to check these models without hitting the database, i.e. some way to validate a SQL Where condition against an in-memory model object?
To make things more clear, if I were to actually pull the record from the database I would do something like this:
whereCondition = "name LIKE 'James Smith'"
User.find(:first, :conditions => [whereCondition])
I have several Users and several whereConditions available in memory, and what I'd really like to do is something like this:
someUser.meetsCondition?(whereCondition)
Which would return a boolean. Is there some way of doing this without writing my own SQL parser?
Thanks.
If the User is already in memory, then doing something like the following shouldn't reload the object, should it?
user.name =~ /\AJames Smith\z/ # anchored so it matches the whole value, as LIKE does without wildcards
There is no way to do this without parsing it yourself.
However, if you wanted to, you could create a sqlite in-memory database and load the records into it and then you could use your query. I don't know how practical this would be though - it's definitely too much work to be doing on a normal web request, and the cost of doing this could very well be more expensive than just running the query against your real database again anyways. You'll have to experiment.
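A hedged sketch of that idea with the sqlite3 gem (schema and variable names hypothetical):

require 'sqlite3'

db = SQLite3::Database.new(':memory:')
db.execute('CREATE TABLE users (id INTEGER, name TEXT)')
users_in_memory.each do |u|
  db.execute('INSERT INTO users (id, name) VALUES (?, ?)', [u.id, u.name])
end
# The original WHERE fragment can now be evaluated as real SQL:
matching_ids = db.execute("SELECT id FROM users WHERE #{where_condition}").flatten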
No, there is no built-in mechanism to execute SQL logic against in-memory collections. You would have to write your own SQL parser to do what you are wanting.
Rails has caching settings that you can read about on the Rails Guide site. Some of those may suit what you're trying to accomplish.
You also may want to check out Memcached in conjunction with the cached_model gem. I generally use this and update the cached copy of the object in an after_save model callback.
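For the callback idea, a minimal sketch using Rails' generic cache API (cached_model's own helpers may differ):

class User < ActiveRecord::Base
  after_save :refresh_cached_copy

  private

  def refresh_cached_copy
    Rails.cache.write("users/#{id}", self)
  end
end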

Can I change a model from embedded to referenced without losing data?

I made a bad decision as I was designing a MongoDB database to embed a model rather than reference it in an associated model. Now I need to make the embedded model a referenced model, but there is already a healthy amount of data in the database (or document?).
I'm using Mongoid, so I reasoned I can just change embedded_in to referenced_in. Before I start, I figured I'd ask people who know better than I do: how can I transition the embedded data already in the database to documents for the associated model?
class Building
  embeds_many :landlords
  ...
end

class Landlord
  embedded_in :building
  ...
end
Short answer - incrementally.
1. Create a copy of Landlord; name it Landlord2.
2. Make it referenced in Building.
3. Copy all data from Landlord to Landlord2 (see the sketch after this list).
4. Delete Landlord.
5. Rename Landlord2 to Landlord.
Users should not be able to CRUD Landlord during steps 3-5 (ideally). You can still get away with only locking CRUD during steps 4-5; just make sure you apply any updates that happened during the copying before removing Landlord.
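A hedged one-off script for step 3, assuming Mongoid 2.x, the models above, and a Landlord2 that declares referenced_in :building:

Building.all.each do |building|
  building.landlords.each do |embedded|
    Landlord2.create!(embedded.attributes.except('_id').merge('building_id' => building.id))
  end
end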
Just changing the model like you have above will not work; the old data will still be in a different structure in the db.
Very similar to the previous answer: one of the ways I have done this migration before is to do it dynamically, while the system is running and being used by the users.
I had the data layer separated from the logic, which let me add some preprocessors and inject code to do the following.
Let's say we start with the old data model, then release new code that does the following:
On every access to the document, check whether the embedded property exists. If it does, create a new entry associated as a reference, save it to the database, and delete the embedded property from the document. Once this had been running for a couple of days, a lot of my data had been migrated, and I just had to run a similar script for everything that had not been touched. This made migrating the data much easier and simpler, and I did not have to run long-running scripts or take the system offline to perform the conversion.
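A hedged sketch of that on-access migration (the callback and the unset helper depend on your Mongoid version):

class Building
  include Mongoid::Document
  references_many :landlords  # the new, referenced association

  after_initialize :migrate_embedded_landlords

  def migrate_embedded_landlords
    embedded = attributes['landlords']
    return if embedded.blank?
    embedded.each { |attrs| landlords.create!(attrs.except('_id')) }
    unset(:landlords) # removes the old embedded array from this document
  end
end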
You may not have that requirement, so pick accordingly.