I have a text file with a couple hundred records in it. I want to be able to join this information with another table. Currently, the only way I can think of is to create a table with CREATE and then use hundreds of INSERT INTO statements (since INSERT INTO in Teradata doesn't support multiple insert values).
Is there a more efficient way of achieving what I want?
Link the table in MS Access, and paste the contents of the text file directly into the Teradata table. A rather fast method for small files, if you do it non-automated.
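If you want to stay inside Teradata, a common workaround for the missing multi-row VALUES syntax is to batch rows with UNION ALL, since Teradata allows SELECT without a FROM clause. A minimal sketch; the table and column names are illustrative, not from the question:

INSERT INTO my_lookup (id, val)
SELECT 1, 'alpha'
UNION ALL
SELECT 2, 'beta'
UNION ALL
SELECT 3, 'gamma';

For a couple hundred records this is easy to generate with an editor macro; for genuinely large files, Teradata's own bulk tools (BTEQ .IMPORT, FastLoad) are the usual route.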
I'm using Oracle's DB. I have a table, say T. It has the following columns: id, att1, att2, att3. Now for a large amount of data att3 is blank. I've created a csv file which contains data in the format id,att3, and it has a lot of data. How do I update the existing rows from this file?
Is there any way of doing it via PL/SQL?
Which database engine are you using? The answer might vary depending on that.
But this post here How to update selected rows with values from a CSV file in Postgres? is similar to what you're asking, I'm sure you can adapt it to your needs.
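Since you mentioned Oracle, here is a hedged sketch of the same idea there: expose the csv file as an external table, then MERGE it into T. The directory object, file name, and column types below are assumptions you would need to adapt:

CREATE TABLE t_csv (
  id   NUMBER,
  att3 VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir   -- a DIRECTORY object pointing at the csv's folder
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('att3.csv')
);

MERGE INTO t
USING t_csv c
ON (t.id = c.id)
WHEN MATCHED THEN
  UPDATE SET t.att3 = c.att3;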
It's annoying to see all columns in the data preview all the time (especially when a table has a lot of them), and even worse to recreate the column filter after every restart of SQL Developer.
I can't see any option there to save the filters.
Does someone have a workaround for this?
My version is 4.1.0.17.
List the required columns in the select query. Compose the final query with only the columns you want in the select list. Save the query locally as a .sql file with a proper name of your choice. From next time, open this file in the SQL Worksheet. This is applicable if you use the same query quite often; see the small example below.
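For instance, a saved worksheet query might be nothing more than this (the file and column names are illustrative):

-- my_table_columns.sql
SELECT id, status, updated_at
FROM my_table;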
Alternatively, you could create a new table from the existing table, specify the columns in the order you want displayed first, and keep the rest of the columns towards the end:
CREATE TABLE table_new AS
  SELECT col_a, col_b, /* the columns you want first, then the rest */ col_x, col_y
  FROM table_old;
DROP TABLE table_old;
RENAME table_new TO table_old;
I would like to ask how I can create an INSERT script without having to write it manually. Is there some software with a GUI where I can enter data and it will generate the script for me? I do my database work in Oracle SQL Developer, but I can't find anything like that there. Thank you in advance.
If you mean you want to fill a table with dummy test data, there are dozens of ways to do it:
Provided you have access to the data dictionary, here is one easy way to generate 20,000 records:
insert into my_table
select
  rownum,                             -- surrogate key
  dbms_random.string('U', 10),        -- random text; match the number and
  trunc(dbms_random.value(1, 1000))   -- types of columns to your own table
from
  dba_objects a,
  dba_objects b
where
  rownum <= 20000;
This makes use of a Cartesian join with dba_objects, one of the large dictionary views that comes installed with Oracle.
PS: A Cartesian join on large tables/views can become very slow, so use good judgement to restrict the result set.
OTOH, if you want specific data and not some random stuff inserted into the table, you are stuck with Oracle's INSERT..VALUES syntax, wherein you create an INSERT statement for each of the records. You might reduce the effort of converting your data (in CSV or some other standard format) by automating the copy/paste work with features like macros, available in editors such as Notepad++, Sublime Text, etc.
There are also other options like SQL*Loader, where you write a "Control File" to tell the tool how to load the data from an external file into the table. This approach would be cleaner and faster than the INSERT..VALUES approach; a minimal sketch follows.
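For example, a control file for loading a two-column CSV might look like this (a sketch only; the file, table, and column names are assumptions):

LOAD DATA
INFILE 'data.csv'
APPEND
INTO TABLE my_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(id, att1)

You would then run it with something like sqlldr userid=scott/tiger control=load.ctl.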
I've got a table that can contain a variety of different types and fields, and I've got a table definitions table that tells me which field contains which data. I need to select things from that table, so currently I build up a dynamic select statement based on what's in that table definitions table and select it all into a temp table, then work from that.
The actual amount of data I'm selecting is quite big, over 5 million records. I'm wondering if a temp table is really the best way to go around doing this.
Are there other more efficient options of doing what I need to do?
If your data is static and this is for reporting, cache the most popular query results, preferably on the application server, or do multidimensional modeling (cubes). That is the really "more efficient option" here.
Temp tables, table variables, table data types... In any case you will use your tempdb, so if you want to optimize your queries, try to optimize tempdb storage (after checking the IO statistics). You can also create indexes on your temp tables, as in the sketch below.
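A minimal T-SQL sketch of an indexed temp table (the names are illustrative):

CREATE TABLE #staged (
    RecordId   INT NOT NULL,
    FieldValue NVARCHAR(255)
);
CREATE NONCLUSTERED INDEX ix_staged_recordid ON #staged (RecordId);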
You can use Table Variables to achieve the functionality.
If you are using the same structure in multiple queries, you can go for user-defined table types as well.
http://technet.microsoft.com/en-us/library/ms188927.aspx
http://technet.microsoft.com/en-us/library/bb522526(v=sql.105).aspx
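Both look roughly like this (a hedged sketch; the type and column names are illustrative, not from the question):

-- ad-hoc table variable:
DECLARE @rows TABLE (
    RecordId   INT PRIMARY KEY,
    FieldValue NVARCHAR(255)
);

-- reusable user-defined table type:
CREATE TYPE dbo.RecordRow AS TABLE (
    RecordId   INT PRIMARY KEY,
    FieldValue NVARCHAR(255)
);
DECLARE @typedRows dbo.RecordRow;

Note that table variables have no statistics, so for 5 million rows a real temp table with indexes will usually optimize better.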
In an MS Access 2010 database, I have a massive table that I need to shrink in order to make it usable. I am only interested in a subset of the records in the table, so I want to select all the data that I care about and insert it into another table that has all the same fields. The problem is that the table has MANY fields and it would be error-prone to list them all explicitly. Is there some way to simply select all fields and insert into all fields without listing each one explicitly? If so, how do I change the following code to accomplish this?
INSERT INTO massivetable_destination (*)
SELECT * FROM massivetable_source
WHERE State='MS';
I may be misunderstanding you, but if the tables are in the same Access database, it seems you could do the following steps and let Access do all of the heavy lifting for you.
Right click your massive table and select copy.
Right click in the object explorer area and select paste.
Optional - rename the copied table.
Run a delete query on the copied table, removing all records that you do not want. The delete query would look like the following:
DELETE *
FROM MyCopiedTable
WHERE State <> 'MS';
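If you would rather do it in one SQL statement, Access also supports a make-table query, which creates the destination table and copies every field without listing them. A sketch based on the table names in the question (note it creates massivetable_destination rather than inserting into an existing table):

SELECT *
INTO massivetable_destination
FROM massivetable_source
WHERE State = 'MS';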