I've created a test case to persist data to my db. The fixtures persist correctly but the records I create in the test case aren't showing up in the database. My test looks like the following:
test "new course" do
course = Course.new
course.name = 'test'
assert course.save
end
The test passes, and when I debug through it and call course.save myself, I can see that the record gets a generated ID. Any ideas?
If you're checking whether the data is in the database after the test, you won't find it there, because of how the tests are run.
Rails tests run inside database transactions (by default¹), so the data is created inside the test and all your assertions run, but once it's all said and done that transaction is rolled back, leaving the database in a pristine state.
The reason for doing this is so that the database is always clean and you don't have records lying around that could alter the outcome of later tests.
¹ Although this can be changed by disabling the setting (I forget what this is in Test::Unit land), or by using tools like database_cleaner.
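Conceptually, this is roughly what the test framework does around each test, sketched as plain SQL (the courses table and value here are just for illustration):
BEGIN;
INSERT INTO courses (name) VALUES ('test');  -- course.save succeeds and gets an id
-- ... your assertions run here ...
ROLLBACK;  -- the row never becomes visible outside the test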
I am trying to call sp_rename inside a transaction (BEGIN TRANSACTION), but it fails with this error message:
Can't run sp_rename from within a transaction., Error 17260, Procedure sp_rename, Line 78
The sp_rename code checks for any open transactions:
/*
** Running sp_rename inside a transaction would endanger the
** recoverability of the transaction/database. Disallow it.
** Do the @@trancount check before initializing any local variables,
** because "select" statement itself will start a transaction
** if chained mode is on.
*/
if @@trancount > 0
begin
/*
** 17260, "Can't run %1! from within a transaction."
*/
raiserror 17260, "sp_rename"
return (1)
end
else
begin
set chained off
end
I don't understand why these actions are a danger....
Additionally, I need a way to call this stored procedure within the transaction and then rollback this action.
Any suggestions?
ASE isn't really designed for rolling back schema changes (as you're seeing).
If you want a means of testing your 'framework functionality' consider:
create a new test db
run your scripts against this test db
when done, just drop the test db (a minimal sketch follows this list); an alternative would be to run a series of drop commands to 'undo' all the schema changes
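A minimal sketch of that flow (the database name is a placeholder; sizes and devices are left at their defaults):
-- create a throwaway test db on the default device
create database framework_test_db
go
use framework_test_db
go
-- ... run your framework / schema scripts here ...
use master
go
-- when done, throw the whole thing away
drop database framework_test_db
go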
New databases are initially created as a copy of the model database, so you could go so far as to install some base components in the model database. Keep in mind, though, that the model database (and its contents) is used when creating all new databases (e.g., all temp dbs when starting ASE), so don't add anything to the model database that you wouldn't want showing up in every new database (outside your 'framework functionality' testing).
What you're proposing doesn't sound much different than what I've seen developers regularly do when testing a new 'release':
load a copy of the prod db
apply release package against said db
rinse/repeat until release package completes successfully
key being to start with a newly loaded (or created) database
A variation on the above:
create a new test db
add base components as needed
dump/save a copy of the test db
run your tests
when you want to run your tests again, load that saved dump back into the test db and rerun them (see the sketch below)
basically the same thing as loading a copy of the prod db, but in this case you load a copy of your base test db
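A hedged sketch of that variation using ASE dump/load (the db name and dump file path are placeholders):
-- one-time: build the base test db and save a copy of it
create database framework_test_db
go
-- ... add base components as needed ...
dump database framework_test_db to '/backups/framework_test_db.dmp'
go
-- before each test run: restore the saved copy and bring it back online
load database framework_test_db from '/backups/framework_test_db.dmp'
go
online database framework_test_db
go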
I am doing a test that updates my database each time I run it, and I cannot run the test again with the updated values.
I am recreating the WHOLE database with:
postgres=# drop database mydb;
DROP DATABASE
postgres=# CREATE DATABASE mydb WITH TEMPLATE mycleandb;
CREATE DATABASE
This takes a while
Is there any way I can update just the tables that I changed with tables from mycleandb?
Transactions
You haven't mentioned what your programming language or framework is. Many of them have built-in test mechanisms that take care of this sort of thing. If you are not using one of them, what you can do is start a transaction in each test setup and then roll it back when you tear down the test.
BEGIN;
...
INSERT ...
SELECT ...
DELETE ...
ROLLBACK;
Rollback, as the name suggests, reverses everything that has been done to the database, leaving it in its original condition.
There is one small problem with this approach, though: you can't do integration tests where you intentionally enter incorrect values and cause a query to fail an integrity check. If you do that, the transaction is aborted and no new statements can be executed until it is rolled back.
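For example (assuming a table t whose column a is its primary key), once one statement fails, everything after it errors out until the rollback:
BEGIN;
INSERT INTO t (a) VALUES (1);
INSERT INTO t (a) VALUES (1);  -- violates the primary key; the transaction is now aborted
SELECT * FROM t;               -- ERROR: current transaction is aborted, commands ignored until end of transaction block
ROLLBACK;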
pg_dump/pg_restore
It's possible to use the -t option of pg_dump to dump and then restore one or a few tables. This may be the next best option when transactions are not practical.
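For example, a single table could be refreshed from mycleandb roughly like this (mytable is a placeholder; --clean makes the dump drop and recreate the table before reloading it):
pg_dump --clean -t mytable mycleandb | psql mydb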
Non Durable Settings / Ramdisk
If both of the above options are inapplicable, please see this answer: https://stackoverflow.com/a/37221418/267540
It's on a question about Django testing, but there's very little Django-specific stuff in it. Coincidentally, Django's rather excellent test framework relies on the begin/update/rollback mechanism described above by default.
Test inside a transaction:
begin;
update t
set a = 1;
Check the results and then:
rollback;
Everything will be back to a clean state.
I'm writing a Rails system that ingests data from external sources by spawning multiple processes to fetch the data and update a single db table. I want to write RSpec tests that spawn multiple processes emulating the fetch/write process, to look for concurrency issues.
short question
How can I initialize a table in an RSpec test so that an external process can see the contents of the table? (At least, I think that's the right question. Read on for details...)
longer form
The general structure of my RSpec test is:
it 'external task should update the correct records' do
initialize_my_model_table_with_some_records
spawn_external_tasks_to_update_records
# wait for spawned processes to complete
Process.waitall.each {|pid, status| status.exitstatus.should == 0 }
validate_results
end
But the external process always sees the model table as empty (verified by debug printing). Subsequently, attempts to update the table fail.
I'm pretty sure this is because RSpec is holding the table under a lock so it can do a rollback after the test completes.
So (to repeat the short question): How can I initialize a table in an RSpec test so that an external process can see the initialized contents of the table?
edit #2
I notice that upon entry to a subsequent test, the table is in the state that the previous (external) processes left it in. This makes sense: RSpec can only roll back changes that it 'knows' about, so any changes made by external processes will persist.
This suggests a solution: it appears to work to use a before(:all) block to explicitly initialize the table. But is this the cleanest approach?
environment
Ruby version 1.9.3 (x86_64-darwin10.8.0)
pg (0.13.2)
rails (3.2.1)
rspec (2.9.0)
rspec-rails (2.9.0)
When RSpec runs a test, it wraps it in a database transaction so that it can be rolled back after the test. As a result, any external process won't see changes that RSpec makes to the database. And correspondingly, any changes that an external process makes to the database won't be rolled back after the RSpec test.
One exception to this is inside before(:all) and after(:all) blocks: any changes that RSpec makes will be visible to external processes.
So the example in the OP can be made to work like this:
describe 'with external tasks' do
before(:all) do
initialize_my_model_table_with_some_records
end
after(:all) do
reinitialize_my_model_table
end
it 'should update the correct records' do
spawn_external_tasks_to_update_records
# wait for spawned processes to complete
Process.waitall.each {|pid, status| status.exitstatus.should == 0 }
validate_results
end
end
We have many SQL Server scripts. But there are a few critical scripts that should only be run at certain times under certain conditions. Is there a way to protect us from ourselves with some kind of popup warning?
i.e. When these critical scripts are run, is there a command to ask the user if they want to continue?
(We've already made some rollback scripts to handle these, but it would be better if they could not be accidentally run at all.)
No, there is no such thing.
You can write an application (Windows service?) that will only run the scripts as and when they should be run.
The fact that you are even asking the question shows that this is something that should be automated, the sooner the better.
You can mitigate the problem in the meantime by using IF to test for these conditions and only executing when they are met (a sketch follows below). If this is a series of scripts, you should wrap them in transactions to boot.
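A rough sketch of that kind of guard (the flag table and condition are only placeholders):
-- only proceed when the precondition really holds
IF EXISTS (SELECT 1 FROM dbo.ReleaseFlags WHERE FlagName = 'OkToRunCriticalScript' AND FlagValue = 1)
BEGIN
    BEGIN TRANSACTION;
        -- ... the critical statements go here ...
    COMMIT TRANSACTION;
END
ELSE
BEGIN
    RAISERROR('Precondition not met - critical script skipped.', 16, 1);
END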
One work-around you can use is the following, which would require you to update a value in another table:
CREATE PROC dbo.MyProc
AS
    -- poll until another session sets the flag in dbo.OKToRun
    WHILE (SELECT GoBit FROM dbo.OKToRun) = 0
    BEGIN
        -- WITH NOWAIT flushes the message to the client immediately
        RAISERROR('Waiting for GoBit to be set!', 0, 1) WITH NOWAIT
        WAITFOR DELAY '00:00:10'
    END

    -- reset the flag so the next run has to be approved again
    UPDATE dbo.OKToRun
    SET GoBit = 0

    -- ... DO STUFF ...
This will require you to, in another spid or session, update that table manually before it'll proceed.
This gets a lot more complicated with multiple procedures, so it will only work as a very short-term workaround.
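For reference, the statement the second session would run to let the procedure proceed (using the dbo.OKToRun table from the example above):
UPDATE dbo.OKToRun SET GoBit = 1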
SQL is a query language; it does not have the ability to accept user input.
The only thing I can think of would be to have it variable driven: the first part sets @shouldRunSecond = 1, and the second part is wrapped in
IF @shouldRunSecond = 1
BEGIN
    ...
END
The second portion will be skipped if it is not desired.
The question is: where are these scripts located?
If you have them as .sql files that you open every time before you run them, then you can simply add some "magic numbers" at the beginning of the script that you have to recalculate every time before running it. In the example below, each time before you run your script you have to put the correct day and minute into the IF condition, otherwise the script will not run:
IF DATEPART(dd,GETDATE())!=5 or DATEPART(mi,(GETDATE()))!=43
BEGIN
RAISERROR ('You have tried occasionally to run your dangerous script !!!',16,1);
RETURN
END
--Some dangerous actions
drop database MostImportantCustomer
update Personal set Bonus=0 where UserName=SUSER_SNAME()
If your scripts reside in a stored procedure, you can add some kind of 'I am sure, I know what I am doing' parameter that you always have to pass, for example the current minute multiplied by the current day (a sketch follows).
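A hedged sketch of that idea (the procedure name and body are placeholders):
CREATE PROC dbo.DangerousAction
    @IAmSure int
AS
BEGIN
    -- the caller must pass the current minute multiplied by the current day of the month
    IF @IAmSure != DATEPART(mi, GETDATE()) * DATEPART(dd, GETDATE())
    BEGIN
        RAISERROR('You have tried to run your dangerous script without checking first!', 16, 1);
        RETURN;
    END

    -- ... the dangerous actions go here ...
END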
Hope it helps.
I have seen batch scripts containing SQLCMD ..., so instead of running the .sql script from code or Management Studio, you could add a prompt to the script.
I have (on limited occasions) created an @AreYouSure parameter that must be passed into a stored procedure, then put comments next to the declaration in the stored procedure explaining the danger of running said procedure.
At least that way, no RANDOs will wander into your environment and kick off stored procedures when they don't understand the consequences. The parameter could be worked into an IF statement that checks its value, or it doesn't really have to be used at all, but if it must be passed, then they at least have to figure out what to pass.
If you use this too much, though, others may just start passing a 'Y' or a 1 into every stored procedure without reading the comments. You could switch up the datatypes, but at some point it becomes more work to maintain this scheme than it is worth. That is why I use it only on limited occasions.
I'm trying to find out if a specific MySQL User is still in use in our system (and what queries it is executing).
So I thought of writing a trigger that would kick in anytime user X executes a query, and it would log the query in a log table.
How can I do that?
I know how to write a trigger for a specific table, but not one for a specific user (on any table).
Thanks
You could branch your trigger function on USER().
The easiest approach would be to have the trigger always fire, but only log if the user is X (a sketch follows).
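A minimal sketch of that idea for one table; the watched table orders, the log table query_log and the user name are all assumptions, and since triggers are per-table (and cannot see the query text), it only records that the user touched the table:
-- hypothetical log table
CREATE TABLE query_log (
  logged_at  DATETIME,
  mysql_user VARCHAR(128),
  action     VARCHAR(16)
);

DELIMITER //
CREATE TRIGGER orders_log_user
AFTER INSERT ON orders
FOR EACH ROW
BEGIN
  -- USER() returns 'user@host' for the connected client
  IF USER() LIKE 'suspect_user@%' THEN
    INSERT INTO query_log VALUES (NOW(), USER(), 'INSERT');
  END IF;
END//
DELIMITER ;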
I would look at these options:
A) Write an audit plugin, which filters events based on the user name.
For simplicity, the user name can be hard coded in the plugin itself,
or for elegance, it can be configured by a plugin variable, in case this problem happens again.
See
http://dev.mysql.com/doc/refman/5.5/en/writing-audit-plugins.html
B) Investigate the --init-connect server option.
For example, call a stored procedure, check the value of user() / current_user(),
and write a trace to a log (insert into a table) if a connection from the user was seen.
See
http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_init_connect
This is probably the closest thing to a connect trigger.
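A hedged sketch of option B (the logging schema, table and procedure names are assumptions; note that init_connect is not executed for users with the SUPER privilege, and an error in it aborts the connection):
CREATE DATABASE IF NOT EXISTS logging;

CREATE TABLE logging.connect_trace (
  ts      DATETIME,
  account VARCHAR(128),   -- CURRENT_USER(): the account the server matched
  client  VARCHAR(128)    -- USER(): user@host as sent by the client
);

DELIMITER //
CREATE PROCEDURE logging.trace_connect()
BEGIN
  IF CURRENT_USER() LIKE 'old_app@%' THEN
    INSERT INTO logging.connect_trace VALUES (NOW(), CURRENT_USER(), USER());
  END IF;
END//
DELIMITER ;

-- connecting users need EXECUTE on the procedure (it runs with definer rights by default)
SET GLOBAL init_connect = 'CALL logging.trace_connect()';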
C) Use the performance schema instrumentation.
This assumes 5.6.
Use table performance_schema.setup_instruments to enable only the statement instrumentation (a setup sketch follows the list of tables below).
Use table performance_schema.setup_actors to only instrument sessions for this user.
Then, after the system has been running for a while, look at activity for this user in the following tables:
table performance_schema.users will tell if there was some activity at all
table performance_schema.events_statements_history_long will show the last queries executed
table performance_schema.events_statements_summary_by_user_by_event_name will show aggregate statistics for each statement type (SELECT, INSERT, ...) executed by this user.
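A hedged sketch of the setup for a user named old_app (MySQL 5.6 assumed; only new sessions are affected by setup_actors changes):
-- enable statement instrumentation and the statement consumers
UPDATE performance_schema.setup_instruments
   SET ENABLED = 'YES', TIMED = 'YES'
 WHERE NAME LIKE 'statement/%';

UPDATE performance_schema.setup_consumers
   SET ENABLED = 'YES'
 WHERE NAME LIKE 'events_statements%';

-- instrument only sessions for this user: drop the catch-all row, add a specific one
DELETE FROM performance_schema.setup_actors;
INSERT INTO performance_schema.setup_actors (HOST, USER, ROLE) VALUES ('%', 'old_app', '%');

-- after the system has been running for a while:
SELECT * FROM performance_schema.users WHERE USER = 'old_app';
SELECT SQL_TEXT FROM performance_schema.events_statements_history_long;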
Assuming you have a user defined as 'old_app'@'%', a likely follow-up question will be to find out where (which host(s)) this old application is still connecting from.
performance_schema.accounts will show just that: if traffic for this user is seen, it will show each username @ hostname source of traffic.
There are also statistics aggregated by account; look for the '%_by_account%' tables.
See
http://dev.mysql.com/doc/refman/5.6/en/performance-schema.html
There are also other ways you could approach this problem, for example using MySQL Proxy.
In the proxy you could do interesting things, from logging to transforming queries and pattern matching (check this link also for details on how to test/develop the scripts):
-- set the username
local log_user = 'username'
function read_query( packet )
if proxy.connection.client.username == log_user and string.byte(packet) == proxy.COM_QUERY then
local log_file = '/var/log/mysql-proxy/mysql-' .. log_user .. '.log'
local fh = io.open(log_file, "a+")
local query = string.sub(packet, 2)
fh:write( string.format("%s %6d -- %s \n",
os.date('%Y-%m-%d %H:%M:%S'),
proxy.connection.server["thread_id"],
query))
fh:flush()
end
end
The above has been tested and does what it is supposed to do (although this is a simple variant: it does not log success or failure, and it only logs proxy.COM_QUERY; see the list of all constants to find out what is skipped and adjust for your needs).
Yeah, fire away, but use whatever system you have (cookies, session) to see which user it is, and log only if the specific user (userID, class) matches your credentials.