Newbie: What is the easiest way to bulk add data to a sqlite database?

I'm a total newbie to SQL, and I'd like to know whether anyone knows of a way to easily "copy and paste" hundreds of entries into a sqlite database. Again, I'm not a professional programmer, so software that could automate the process would be great. (I primarily code in JavaScript, but SQL code can be used as well if you could kindly explain it.)
Essentially, the text I'd be adding would be delimited by a character (the '|' character in my case) for the columns, and line breaks for the rows. It would be added onto a table that's already being used in the database, with columns already set up.
Thanks a lot!! Any suggestions are most appreciated!

You can use DB Browser for SQLite: after creating a new database, go to File > Import > From CSV file. The import dialog lets you pick the field separator, so a '|'-delimited file works too.
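If you would rather script it than use a GUI, below is a minimal sketch using Python's built-in sqlite3 module (Python just because it ships with sqlite support; the same idea works from Node with any sqlite package). The database file, text file, table name and column names are placeholders - swap in your own.

import csv
import sqlite3

# Placeholder names: point these at your own database file, text file and table.
conn = sqlite3.connect("mydata.db")
with open("entries.txt", newline="") as f:
    rows = list(csv.reader(f, delimiter="|"))   # '|' between columns, one line per row

# One "?" per column in the existing table (three columns shown here).
conn.executemany("INSERT INTO my_table (col1, col2, col3) VALUES (?, ?, ?)", rows)
conn.commit()
conn.close()

executemany runs the whole batch inside one transaction, so even a few thousand rows should import almost instantly.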

Related

How to copy a table design from one database to another (VB.NET and Access)

The aim is that when updating the application, the Access database gets updated without altering the data - only the new tables or new columns are applied. So I want to copy the exact table, with its structure, to the old database (VB.NET and an Access database).
What I've tried is detecting the differences between the old database and the new one: combobox1 lists the tables missing from the old database, and combobox2 lists the columns missing from a table that already exists in both databases, along with their data types.
So I want to copy the entire missing table, and then create only the missing columns.
Thank you.
There is no built-in tool to do this.
But, worse yet, there is no "generate change scripts" option in Access (like, say, with SQL Server).
So, how do you approach this issue? What do some of the accounting systems or commercial programs that use ms-access as the database do?
Well, you have to build a kind of "upgrade" system in your software.
This means two things:
To add a new column to a table (for example), you NEVER open up the Access database with Access; you "add" or "write" the code to add that field in question.
In fact, I had an application deployed out in the field - many desktops.
So, I had a code module called Upgrade. And each time I needed a new field or whatever, I would write the code to add that new column.
AS LONG as I always added things through that code module, I was ok. (Never break the rule: for adding new fields or tables, or even increasing the length of some field, use code.)
And it became quite easy after I had some code written. I would in fact often cut + paste a previous bit of code to add a new column to a table.
However, after about 5 years, that messy code module had 800+ lines of code in it!!!
But I ALSO realized that MOST things, like adding a new column, were the same code over and over.
So, what I did next was build an "upgrade" table. It looked like this:
Version   Action     SQL                           RunCode
2.5       AddTable   tblCustomers
2.5       AddField   "sql here to add the field"
etc. etc.
So I had a version number, and I would compare it against the upgrade table. I had an "action" column, and the code would simply loop through this table and do whatever each row said.
So, for example, to add a field you can use Access "DDL" commands (data definition language - most SQL systems support this, and so does Access).
So, say like this:
' any new table code goes here:
If lngVer < 1148 Then
    ' add event Invoice text option
    ExecuteSQLNR "ALTER TABLE dbo.Events ADD InvoiceText ntext NULL"
    ExecuteSQLNR "ALTER TABLE dbo.Events ADD HideEventDate bit NULL default 0"
End If
Or, say, to increase a column's length:
db.Execute "ALTER TABLE tblGroupRemind ALTER COLUMN Anotes text(255)", dbFailOnError
As noted, since oh so many of the commands were VERY similar, I started putting that information into a table, and then I would execute the required upgrades in a loop.
For a whole new table? Well, I thought that was too much code, so I always shipped a blank, empty database - and for new tables, I would place them in that upgrade.accDB file and "transfer/copy" the table from that upgrade database to the real one. That way I could, with great ease, create a whole new table in the Access designers and then add/copy that table to the upgrade.accDB database.
As noted, the above ideas and approaches work quite well.
In fact, over time, I found it LESS hassle while coding away to add the new column in code than having to open up ms-access, then the table, then the designer, and make the changes.
However, the BIG issue with the above?
Well, you have to get all users at least upgraded to your EXISTING schema first, and there are no automated tools for that.
In fact, before I had any automated tools, I would open up Notepad, and if I added some field to some table, I would simply type into Notepad that the new field in such-and-such table was required.
Then, when on a customer site, I would open up their database and go look at the Notepad document for the list of changes I was to make. (That is what I was doing before I started automating the process - and of course it is not always practical to be "on site" or to have the customer's database.)
But, ONCE I had all of the above working?
Then, during development, I would open up my "upgrade" database and add the new row and action (new table, new column, and more).
I even had a column that defined a function to run AFTER that one command. Quite often when you add a new column, or change something in a table, you need to copy data, or at least process some data, after you make that change.
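To make the mechanics concrete, here is a rough sketch of that version-gated upgrade loop - written here in Python with pyodbc rather than the VBA the original system used, purely to show the shape of it. The ODBC driver string, the tblSettings/tblUpgrades tables and their columns, and the run_post_step helper are hypothetical stand-ins, not the actual names from that system.

import pyodbc

APP_VERSION = 2.5   # schema version this build of the application expects

def run_post_step(name):
    """Hypothetical hook: look up and run a data fix-up routine by name."""
    pass

# hypothetical connection to the customer's Access data file
conn = pyodbc.connect(
    r"Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\data\customer.accdb")
cur = conn.cursor()

# schema version currently stored in the customer's data file
db_version = cur.execute("SELECT SchemaVersion FROM tblSettings").fetchone()[0]

# pull every pending step from the upgrade table, oldest first
steps = cur.execute(
    "SELECT Version, Action, SQLText, RunCode FROM tblUpgrades "
    "WHERE Version > ? ORDER BY Version", db_version).fetchall()

for step in steps:
    # a real system would branch on step.Action (e.g. AddTable = copy the table
    # in from the blank upgrade.accDB); here every step just carries plain DDL
    if step.SQLText:
        cur.execute(step.SQLText)
    if step.RunCode:
        run_post_step(step.RunCode)   # optional post-step: copy or fix up data afterwards

cur.execute("UPDATE tblSettings SET SchemaVersion = ?", APP_VERSION)
conn.commit()

The payoff, as described below, is that a data file several versions behind just gets walked forward through every pending row in one pass.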
Once you get the above going?
Then you simply NEVER make changes in the data tables directly, but use your "system" for this. And that works REALLY well.
For one, a customer could open up an older data file - say one from 4 or 5 years ago. The application version number would be detected, and then the upgrade code would run through all the versions to update that database. (And I did this automatically on startup - so they never even knew such an upgrade had occurred.)
So, you just have to make sure that for each change you make, you put that code in your upgrade system, and you are done.
But, for existing systems? You have to look at what changes you made since the last deploy, and write out the "DDL" commands (the ALTER TABLE SQL commands) yourself.
There is no automated way of doing this.
As an FYI?
One of the BEST and most valuable free tools in Visual Studio is the SQL Server schema compare utility. It will not only automatically detect and tell you the changes between two SQL Server databases, but will also apply the upgrade for you. (Very nice.)
But such a system is not available for Access. In fact, that utility is so valuable for SQL Server that you might consider upgrading from Access to SQL Server for this application. With that utility, I can work locally and add fields, columns, tables and even stored procedures to that SQL database. When I am on site (or even connected by VPN), I run that compare tool - it shows the changes, and ALSO has a button to update the target schema.
I don't know of an automated "schema" checker and updater for Access.
So, what I suggest above ONLY works if you put such a system in place, and THEN, as a developer, always make your schema changes through your upgrade system, and never directly in the database with ms-access.

How to query access table with a subdatasheet that requires parameters

I have been tasked with creating a method to copy the contents of an entire database to a central database. There are a number of source databases, all in Access. I've managed to copy the majority of the tables properly, 1:1. I'm using VBScript and ADO to copy the data. It actually works surprisingly well, considering that it's Access.
However
I have 3 tables that include subdatasheets (for those that don't know, a subdatasheet is a visual representation of a one-to-many relationship; you can see related records from another table inside the main table). When my script runs, I get an error: "No value given for one or more required parameters." When I open Access and try to run the same query that I've written in SQL, it pops up message boxes asking for parameters.
If I use the query wizard inside Access to build the select query, no parameters are required and I get no subdatasheet in the result set.
My question is this: how do I write a vanilla SQL query in my VBScript that does not require parameters and just gives me the data that I want?
I've tried copying the SQL from Access and running it through my VBScript and that doesn't seem to do the trick.
Any help is greatly appreciated!
As it turns out, you need to make sure that you've spelled all of the field names properly in your source query. If you've included additional fields that aren't actually in the source or destination table they'll need to be removed too.

Is there a way to update a calculation view through a query on SAP HANA?

I'm working on updating hundreds of calculation views on SAP HANA.
For every calculation view, I need to update the columns of the last aggregation/projection node to set Keep Flag = true.
One way is to update the XML code of every calculation view file, like below:
<attribute id="EQUNR" order="3" attributeHierarchyActive="false"
displayAttribute="false" keepFlag="true">
<descriptions defaultDescription="EQUNR"/>
But my question is: is there a way to update this Keep Flag through a query in the SQL console?
If not, is there any other method you would suggest?
Every idea matters. Thank you, folks!
There is no way to achieve this via SQL.
Although you may be able to author a regex expression that matches some of the target XML tags, there’s no way of correctly updating the repository tables storing the source XML (if you’re using the HANA classic repository).
For HANA 2 HDI files, no database command can change the source code, as these files are not stored in the database.
Beyond this technical issue, it’s probably not a good idea to apply a flag that changes query semantics as a batch update.

INSERTing data from a text file into SQL server (speed? method?)

I've got about a 400 MB .txt file here that is delimited by '|'. Using a Windows Forms app in C#, I'm inserting each row of the .txt file into a table in my SQL Server database.
What I'm doing is simply this (shortened by "..." for brevity):
while ((line = file.ReadLine()) != null)
{
    string[] split = line.Split(new Char[] { '|' });
    SqlCommand cmd = new SqlCommand("INSERT INTO NEW_AnnualData VALUES (@YR1984, @YR1985, ..., @YR2012)", myconn);
    cmd.Parameters.AddWithValue("@YR1984", split[0]);
    cmd.Parameters.AddWithValue("@YR1985", split[1]);
    ...
    cmd.Parameters.AddWithValue("@YR2012", split[28]);
    cmd.ExecuteNonQuery();
}
Now, this is working, but it is taking a while. This is my first time doing anything with a huge amount of data, so I need to make sure that A) I'm doing this in an efficient manner, and B) my expectations aren't too high.
Using a SELECT COUNT() while the loop is going, I can watch the number go up and up over time. So I used a clock and some basic math to figure out the speed that things are working. In 60 seconds, there were 73881 inserts. That's 1231 inserts per second. The question is, is this an average speed, or am I getting poor performance? If the latter, what can I do to improve the performance?
I did read something about SSIS being efficient for exactly this purpose. However, I need this action to come from clicking a button in a Windows Form, not from going through SSIS.
Oooh - that approach is going to give you appalling performance. Try using BULK INSERT, as follows:
BULK INSERT MyTable
FROM 'e:\orders\lineitem.tbl'
WITH
(
FIELDTERMINATOR ='|',
ROWTERMINATOR ='\n'
)
This is the best solution in terms of performance. There is a drawback, in that the file must be present on the database server. There are two workarounds for this that I've used in the past, if you don't have access to the server's file system from where you're running the process. One is to install an instance of SQL Express on the workstation, add the main server as a linked server to the workstation instance, and then run "BULK INSERT MyServer.MyDatabase.dbo.MyTable...". The other option is to reformat the CSV file as XML, which can be done very quickly, and then pass the XML to the server and process it using OPENXML. Both BULK INSERT and OPENXML are well documented on MSDN, and you'd do well to read through the examples.
Have a look at SqlBulkCopy on MSDN, or the nice blog post here. For me that goes up to tens of thousands of inserts per second.
I'd have to agree with Andomar. I really quite like SqlBulkCopy. It is really fast (you need to play around with BatchSizes to make sure you find one that suits your situation.)
For a really in-depth article discussing the various options, check out Microsoft's "Data Loading Performance Guide":
http://msdn.microsoft.com/en-us/library/dd425070(v=sql.100).aspx
Also, take a look at the C# example of using SqlBulkCopy with CSV Reader. It isn't free, but if you can write a fast and accurate parser in less time, then go for it. At least it'll give you some ideas.
I have found SSIS to be much faster than this type of method, but there are a bunch of variables that can affect performance.
If you want to experiment with SSIS, use the Import and Export Wizard in Management Studio to generate an SSIS package that will import a pipe-delimited file. You can save the package and run it from a .NET application.
See this article: http://blogs.msdn.com/b/michen/archive/2007/03/22/running-ssis-package-programmatically.aspx for info on how to run an SSIS package programmatically. It includes options on how to run it from the client, from the server, or wherever.
Also, take a look at this article for additional ways you can improve bulk insert performance in general. http://msdn.microsoft.com/en-us/library/ms190421.aspx

Looking for an SQL scripting language to manipulate DBF files

I have a large collection of DBF files (about 1100 of them) that I need to analyze for a client. Each file contains a single table. I need to perform a query on each table, copy the results into one large results table (which will hold the results from all files), and then move on to the next DBF file. At the end I need to save the results table in a format I can manipulate later. Does anyone know of a scripting language that can make this easy for me?
There are a few caveats:
1.) I need something that works in Vista (something that runs in DOS, python or GNU Octave is okay too).
2.) I'm not a database administrator, but I do have fairly good programming skills.
3.) I have only a basic working knowledge of SQL. I can write the queries, my problem is opening the DBF files and saving the results.
4.) I've actually accomplished this using MS Access, but it's a messy solution. So I'm looking for something that doesn't use Access.
I've been reading up on various scripting languages for SQL. Most of the sites I've seen get into things about servers, setting up relations, security and all those things. These issues are well beyond my understanding and aren't my concern. I just want to query these files, get my results, and get out. Is there something out there that's easily accessible for beginners, yet significantly powerful?
Any help would be much appreciated.
I would do this with SSIS. Looping and Data transformations are fairly easy in SSIS.
My first choice would be Visual FoxPro. It handles .dbf files natively, has an interactive environment and provides SQL SELECT capability. The SELECT statement has an INTO clause that sends query results to another table. Some kinds of MSDN subscription include FoxPro on the DVDs and make it available for download from MSDN.
dBASE is available too.
If necessary, the .dbf file structure is easy to manipulate with code. In the past I've had to write code to modify .dbf files in C and Delphi. It was never more than an afternoon's work. A Google Code Search will probably yield .dbf-related code for any major programming language. The .dbf file format and related file formats are documented on MSDN.
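To give a feel for how approachable the raw format is, here is a small Python sketch that reads the record count and field list straight out of a dBASE III-style header (the file name is a placeholder; FoxPro-era variants add extra header bytes that this ignores):

import struct

with open("some_table.dbf", "rb") as f:
    # fixed 32-byte header: record count at bytes 4-7, header/record sizes at bytes 8-11
    header = f.read(32)
    record_count, header_size, record_size = struct.unpack("<IHH", header[4:12])
    # then one 32-byte descriptor per field, terminated by a 0x0D byte
    fields = []
    while True:
        first = f.read(1)
        if first in (b"\r", b""):
            break
        descriptor = first + f.read(31)
        name = descriptor[:11].split(b"\x00")[0].decode("ascii", "replace")
        field_type = descriptor[11:12].decode("ascii")   # C, N, D, L, M, ...
        fields.append((name, field_type))

print(record_count, "records of", record_size, "bytes, header of", header_size, "bytes")
print(fields)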
I have written a Python dbf module which has very rudimentary SQL support. However, even if the SQL is not up to your needs, it is easy to query dbf files using Python syntax.
Some examples:
import dbf
from collections import defaultdict
To create a results table and add records to it:
results = dbf.Table('results_table', 'name C(50); amount N(10, 4)')
record = results.append()
with record:
    record.name = 'something'
    record.amount = 99.928
# to open an existing table
table = dbf.Table('some_dbf_table').open()
# find all sales >= $10,000
records = table.pql("select * where sales >= 10000")
# find all transactions for customer names that start with Bob
records = table.pql("select * where customer.startswith('Bob')")
# nevermind thin sql veneer, just use python commands
records = table.find("sales >= 10000 and customer.startswith('Bob')")
# sum sales by customer
customer_sales = defaultdict(int)  # if customer not already seen, will default to 0
for record in table:
    customer_sales[record.customer] += record.sales
# now add to results table and print them out
for name, total in sorted(customer_sales.items()):
    result_record = results.append()
    with result_record:
        result_record.name = name
        result_record.amount = total
    print "%s: %s" % (name, total)