Does my database save my data or not? - SQL

I'm using a SQL database and writing a program in VB. I use this code to save my data to the database:
DerslerTableAdapter.Insert(CDate(Me.Label15.Text), Me.Label9.Text)
DerslerTableAdapter.Fill(Verilerim.Dersler)
I stop the program, then run it again. I can see the data I saved; that's fine. But I can't see it when I look in "Database Explorer" -> "Tables" -> "Dersler" -> "Show Table Data".
When I run the program again I can still see my data. But then I save my project and run it again, and there is no data; I can't see it. Is the program running properly?

TableAdapter.Fill() just executes the SELECT query, so I believe you need to call TableAdapter.Update() to commit the changes to your database (e.g. DerslerTableAdapter.Update(Verilerim.Dersler)). Overall I would like to see more code to get a feel for what is going on.

Sounds like you are not committing your changes to the database.

Related

SQL Server - insufficient memory (mscorlib) / 'the operation could not be completed'

I have been working on building a new database. I began by building the structure within the database it is replacing, populating it as I created each set of tables. Once I had made additions I would drop what had been created and execute the code to build the structure again, plus a separate file to insert the data. I repeated this until the structure and content were complete, to ensure each stage was as I intended.
The insert file is approximately 30 MB with 500,000 lines of code (I appreciate this is not the best way to do this, but for various reasons I cannot use alternative options). The final insert completed and took approximately 30 minutes.
A new database was created for me, and the structure executed successfully, but the data would not insert. I received the first error message shown below. I have looked into this and it appears I need to use the sqlcmd utility to get around it, although I find this odd as the insert worked in the other database, which is on the same server and has the same autogrow settings.
However, when I attempted to save the file after this error I received the second error message seen below. When I selected OK it took me to my file directory as it would if I selected Save As, I tried saving in a variety of places but received the same error.
I attempted to copy the code into notepad to save my changes but the code would not copy to the clipboard. I accepted I would lose my changes and rebooted my system. If I reopen this file and attempt to save it I receive the second error message again.
Does anyone have an explanation for this behaviour?
Hm. This looks more like an issue with SSMS and not the SQL Server DB/engine.
If you've been doing this a few times, possibly Management Studio ran out of RAM?
Have you tried breaking INSERT into batches/smaller files?
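For what it's worth, a minimal sketch of the batching idea (table and values invented here): GO separators turn one huge script into many small batches that are parsed independently, and running the file through sqlcmd (for example: sqlcmd -S YourServer -d YourDatabase -i insert_data.sql) bypasses the SSMS editor's memory limits entirely.
-- Hypothetical table; each GO ends a batch, so nothing has to hold 30 MB at once.
INSERT INTO dbo.MyTable (Id, Name)
VALUES (1, N'alpha'), (2, N'beta');
GO
INSERT INTO dbo.MyTable (Id, Name)
VALUES (3, N'gamma'), (4, N'delta');
GO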

SSIS Package Not Populating Any Results

I'm trying to load data from my database into an Excel file of a standard template. The package is ready and it's running, throwing a couple of validation warnings stating that truncation may occur because my template has fields of a slightly smaller size than the DB columns I've matched them to.
However, no data is getting populated to my excel sheet.
No errors are reported, and when I click preview for my OLE DB source, it's showing me rows of results. None of these are getting populated into my excel sheet though.
You should first make sure that you have data coming through the pipeline. On the arrow connecting your Source task to your Destination task (I'm assuming you don't have any steps in between), double-click and you'll open the Data Flow Path Editor. Click on Data Viewer, then Add, and click OK. That will allow you to see what is moving through the pipeline.
Something to consider with Excel is that it prefers Unicode data types to non-Unicode. Chances are you have a database collation that is non-Unicode, so you might have to convert the values in a Data Conversion task.
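If editing the source query is acceptable, one lightweight alternative to a Data Conversion task is to cast to NVARCHAR in the OLE DB source itself, so the columns enter the pipeline as Unicode (DT_WSTR) already sized to the template's fields. A minimal sketch, with invented table and column names:
SELECT CAST(CustomerName AS NVARCHAR(50)) AS CustomerName,
       CAST(City AS NVARCHAR(30)) AS City
FROM dbo.Customers;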
Also, you may need to force the package to execute in 32-bit runtime. The VS application develops in a 32-bit environment, so the drivers you have visibility to are 32-bit. If there is no 64-bit equivalent, the package will break when you try to run it. Right-click on your project, click Properties, and under the Debug menu change the Run64BitRuntime setting to FALSE.
You don't provide much information. Add a Data Viewer between your source and your Excel destination to see if data is passing through. To do it, just double-click the data flow path, select Data Viewer, and then add a grid.
Run your app. If you see data, provide more details so we can help you.
Couple of questions that may lead to an answer:
Have you checked that data is actually passed through the SSIS package at run time?
Have you double checked your mapping?
Try converting within the package so you don't have the truncation issue
If you add some more details about what you're running, I may be able to give a better answer.
EDIT: Considering what you wrote in your comment, I'd definitely try the third option. Let us know if this doesn't solve the problem.
Just as an assist for anyone else running into this - I had a similar issue and beat my head against the wall for a long time before I found out what was going on. My export WAS writing data to the file, but because I was using a template file as the destination, and that template file had previous data that had been deleted, the process was appending the data BELOW the previously used rows. So, I was writing out three lines of data, for example, but the data did not start until row 344!!!
The solution was to select the entire spreadsheet in my template file, and delete every bit of it so that I had a completely clean sheet to begin with. I then added my header lines to the clean sheet and saved it. Then I ran the data flow task and...ta-daa!!! Perfect export!
Hopefully this will help some poor soul who runs into this same issue in the future!

iOS Rolling out app updates. Keeping user data intact when DB update required

I have just done a quick search and nothing too relevant came up so here goes.
I have released the first version of an app. I have made a few changes to the SQLite DB since then; in the next release I will need to update the DB structure but retain the user's data.
What's the best approach for this? I'm currently thinking that on app update I will never replace the user's (documents folder, not in bundle) database file but rather alter its structure using SQL queries.
This would involve tracking changes made to the database since the previous release. Script all these changes into SQL queries and run these to bring the DB to the latest revision. I will also need to keep a field in the database to track the version number (keep in line with app version for simplicity).
Unless there are specific hooks or delegate methods that are fired at first run after an update, I will put calls for this logic at the very beginning of the appDelegate, before anything else is run.
While doing this I will display "Updating app" or something to the user.
Next thing: what happens if there is an error somewhere along the line and the update fails? The DB will be out of date and the app won't function properly, as it expects a newer version.
Should I take it upon myself to just delete the user's DB file and replace it with the new version from the app bundle? Or should I just test, test, test until everything is solid on my side, and if an error occurs on the user's side it's something else, in which case I can't do anything about it but discard the data?
Any ideas on this would be greatly appreciated. :)
Thanks!
First of all, the approach you are considering is the correct one. This is known as database migration. Whenever you modify the database on your end, you should collect the appropriate ALTER TABLE... etc. statements into a migration script.
Then the next release of your app should run this code once (as you described) to migrate all the user's data.
As for handling errors, that's a tough one. I would be very wary of discarding the user's data. Better would be to display an error message and perhaps let the user contact you with a bug report. Then you can release an update to your app which hopefully can do the migration with no problems. But ideally you test the process well enough that there shouldn't be any problems like this. Of course it all depends on the complexity of the migration process.
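To make the migration idea concrete, here is a minimal sketch of a single migration step in SQLite SQL (table and column names are invented). SQLite's built-in PRAGMA user_version is a handy place to keep the schema version number mentioned above, and wrapping the step in a transaction means a failed migration leaves the database at the old version rather than half-changed:
-- Hypothetical v1 -> v2 migration; table and column names are invented.
BEGIN TRANSACTION;

ALTER TABLE notes ADD COLUMN created_at TEXT;  -- column added in v2

CREATE TABLE IF NOT EXISTS tags (              -- table added in v2
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);

PRAGMA user_version = 2;  -- record the schema version the app now expects
COMMIT;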

undo changes to a stored procedure

I altered a stored procedure and unknowingly overwrote some changes that were made to it by another developer. Is there a way to undo the changes and get the old script back?
Unfortunately I do not have a backup of that database, so that option is ruled out.
The answer is YES, you can get it back, but it's not easy. All databases log every change made to them. You need to:
Shutdown the server (or at least put it into read-only mode)
Take a full back up of the server
Get a copy of all the db log files going back to before the accident happened
Restore the back up onto another server
Using db admin tools, roll back through the log files until you "undo" the accident
Examine the restored code in the stored proc and code it back into your current version
And most importantly: GET YOUR STORED PROCEDURE CODE UNDER SOURCE CONTROL
Many people don't grok this concept: you can only make changes to a database; you can't roll a stored proc back to a previous version the way you can with application code by replacing files with their previous versions. To "roll back", you must make more changes that drop and redefine your stored proc.
Note to nitpickers: By "roll back" I do not mean "transaction roll back". I mean you've made your changes and decide once the server is back up that the change is no good.
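A minimal sketch of what that looks like in practice (procedure name and body invented): the "roll back" is itself a new change that reinstates the old definition, which is exactly why the definition needs to live in source control.
-- Redeploy the previous known-good definition (pasted back in from source control)
ALTER PROCEDURE dbo.GetOrders
AS
BEGIN
    SELECT OrderId, CustomerId, OrderDate
    FROM dbo.Orders;
END;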
"Is there a way to undo the changes and get the old script back?"
Short answer: Nope.
:-(
In addition to the sound advice to either use a backup or recover from source control (and if you're doing neither of those things, you need to start), you could also consider getting SSMS Tools Pack from @MladenPrajdic. His Management Studio add-in allows you to keep a running history of all the queries you've worked on or executed, so it is very easy to go back in time and see previous versions. While that doesn't help you if someone else worked on the last known good version, if your entire team is using it, anyone can go back and see any version that was executed. You can dictate where it is saved (to your own file system, a network share, or a database), and fine-tune how often auto-save kicks in. Really priceless functionality, especially if you're lazy about backups and/or source control (though again, I stress, you should be doing these things before you touch your production server again).
You could look through the cached execution plans and try to find the one where your colleague made his changes and run the relevant parts again.
EDIT
Although Bohemian looks to have a good answer if you've got the changes in the transaction log, this is what I'm talking about. Review the SQL text for the plan:
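-- One row per cached plan, joined to the text of the SQL that produced it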
SELECT cached.*,
sqltext.*
FROM sys.dm_exec_cached_plans cached
CROSS APPLY sys.dm_exec_sql_text (cached.plan_handle) AS sqltext
But as squillman points out, there is no execution plan for DDL.
You won't be able to get it back from the database side of things. Your options at this point are pretty much limited to 1) recover from backup, 2) go to source control or 3) hope that someone else has a copy still up in an editor somewhere or saved to a file.
If none of these is an option for you, then here's the obligatory "you should take regular backups and use source control"....
I'm way late to the game on this, but I did this same thing this morning and found I had forgotten to save my script at some point in the past and needed to recover it. (It will be in source control after I get done fixing this!!!)
Some people mentioned restoring from a backup, but no one really mentioned how easy this is if you have one. Moreover, you aren't locked into rolling back the production database. I think this is key, and assuming you have a backup I would say this is a much better alternative to what has been voted up as the best answer.
All you have to do is take your backup and restore it to a new database. Pull out the sp you are looking for and voila, you've recovered the missing code.
Don't forget to drop the newly created database after you've recovered the missing code.
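A rough sketch of those steps in T-SQL (all names and paths are invented; RESTORE FILELISTONLY tells you the logical file names to use in the MOVE clauses):
RESTORE FILELISTONLY FROM DISK = N'C:\Backups\MyDb.bak';  -- find the logical file names
GO
RESTORE DATABASE MyDb_Recovery
FROM DISK = N'C:\Backups\MyDb.bak'
WITH MOVE N'MyDb'     TO N'C:\Data\MyDb_Recovery.mdf',
     MOVE N'MyDb_log' TO N'C:\Data\MyDb_Recovery_log.ldf';
GO
USE MyDb_Recovery;
GO
EXEC sp_helptext N'dbo.MyProc';  -- script out the old version of the procedure
GO
USE master;
GO
DROP DATABASE MyDb_Recovery;  -- clean up once the code is recovered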
I had the same problem, and I don't have the confidence to go restoring from log files to another server. I was pretty distraught until I realised the solution was very simple...
Press Ctrl-Z over and over until I had undone my changes, then run the ALTER PROCEDURE again.
Admittedly I was pretty lucky that I still had it there to revert to but it really is the easiest fix. Probably a bit late now though.
If you have previously scripted the stored procedure out from Management Studio's Object Explorer, this will work.
Before expanding or collapsing anything in Object Explorer, just scroll to the stored procedure you have open and script it as CREATE or ALTER. You can get the previous version of the proc, since Object Explorer hasn't refreshed yet. This has always been my life saver.

Is it possible to force an error in an Integration Services data flow to demonstrate its rollback?

I have been tasked with demoing how Integration Services handles an error during a data flow to show that no data makes it into the destination. This is an existing package and I want to limit the code changes to the package as much as possible (since this is most likely a one time deal).
The scenario that is trying to be understood is a "systemic" failure - the source file disappears midstream, or the file server loses power, etc.
I know I can make this happen by having the Error Output of the source set to Failure and introducing bad data but I would like to do something lighter than that.
I suppose I could add a Script Transform task and look for a certain value and throw an error but I was hoping someone has come up with something easier / more elegant.
Thanks,
Matt
Mess up the file that you are trying to import by pasting in some bad data, or save it in another format (UTF-8 or something like that).
We always have a task at the end that closes the data flow in our metadata tables. To test errors, I simply remove the ? that is the variable for the stored proc it runs. It's easy to do, easy to put back the way it was, and it doesn't mess up anything data-wise, as our error trapping then closes the data flow with an error. You could do something similar by adding a task that calls a stored proc with an input variable but assigning no parameters to it, so it will fail. Then once the test is done, simply disable that task.
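Building on that idea, here is a minimal T-SQL sketch of a deliberately failing call (the procedure, table, and parameter names are all invented): an Execute SQL Task that omits a required parameter fails reliably without touching any data.
-- Hypothetical stand-in for the metadata proc described above.
CREATE PROCEDURE dbo.CloseDataFlowRun
    @RunId INT
AS
BEGIN
    UPDATE dbo.MetaDataRuns
    SET ClosedAt = GETDATE()
    WHERE RunId = @RunId;
END;
GO

-- Running this call with no parameter fails the task with
-- "Procedure or function 'CloseDataFlowRun' expects parameter '@RunId',
-- which was not supplied."
EXEC dbo.CloseDataFlowRun;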
Data will make it to the destination if it is not running in a transaction. If you want to prevent loading partial data, you have to use transactions. There is also an option to force the end result of a control flow item to "failed" irrespective of the actual result, but this is not available for data flow items. You will have to either produce an actual error in the data or code in a situation that will create an error. There is no other way...
Could we try the TransactionOption property of the package?
On failure of the data flow it will revert all the data written to the target.
Only on a successful data flow will it commit the data to the target; otherwise it will roll the data back.