Postgres copy to select * with a type cast - sql

I have two SQL tables in Postgres: a staging table and the main table. Among the various reasons for the staging table, the data I am uploading has irregular and inconsistent formats for all of the date columns. During the upload process these values go into the staging table as varchars to be manipulated into usable formats.
In the main table the date fields are of type 'date'; in the staging table they are of type varchar.
The question is: does Postgres support a copy expression similar to
insert into production_t select *,textdate::date from staging_t
I need to change the format of a single field during the copy process. I know I can individually type out all of the column names during the insert and typecast the date columns there, but this table has over 200 columns and is one of 10 tables with similar issues. I want to accomplish this insert+typecast in one statement that I can apply to all tables, rather than having to type 2000+ lines of SQL queries.

You have to write out every column in such a query; there is no shorthand.
May I also say that a design with 200 columns is questionable.
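That said, you do not have to type the list by hand. A minimal sketch of one way to generate it from the catalog, assuming the staging table is named staging_t and that the date columns can be recognized by a hypothetical naming pattern:
-- Build the casted column list from the catalog. The LIKE pattern
-- is an assumption for illustration; adapt it to however your date
-- columns are actually named.
SELECT string_agg(
           CASE WHEN column_name LIKE '%date%'
                THEN quote_ident(column_name) || '::date'
                ELSE quote_ident(column_name)
           END,
           ', ' ORDER BY ordinal_position)
FROM information_schema.columns
WHERE table_name = 'staging_t';
Paste the result into insert into production_t select <generated list> from staging_t and repeat once per table, which is far less typing than 2000+ hand-written lines.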

Related

Selecting all columns from a table but casting certain columns

I've read through other similar questions but can't find an answer
I have a SQL query such as:
SELECT * FROM tblName
tblName has several columns, e.g. id, name, date, etc.
I would like to cast id as bigint.
However, in my application, tblName is dynamic. So a user has a list of all the tables in the DB. Let's say 1000 tables. The application then gets all columns from that table. The only column each table has in common is the id column.
The application is using Flask and pyodbc, so any larger numbers get converted to decimal/float, which is a whole other headache. The workaround is to cast the int as bigint within SQL.
Is this possible? I'm unable to rewrite any part of the application, so I'm asking whether it can be done in SQL.
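A hedged sketch of that workaround: since id is the only column every table shares, you can prepend an explicitly cast copy of it to the star expansion. The alias id_big is invented for illustration, and the original id still appears a second time in the output:
-- The cast copy comes first; the rest of the columns follow unchanged.
SELECT CAST(t.id AS bigint) AS id_big, t.*
FROM tblName AS t;
Whether this helps depends on whether the application reads the first column or blindly reads all of them; in the latter case the duplicated id may itself be a problem.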

Trying to avoid polymorphic association in a schema for dynamic fields

I want to create a dynamic fields system. The idea is that the owner will be able to create dynamic fields for, let's say, the customers of his company. The problem is that the database structure I came up with requires the use of polymorphic association.
My structure is the following:
The fields table that consists of the following columns:
ID, FieldName, FieldType (The field type can be avoided, probably)
The field value tables (there are multiple value tables, one for every data type of the dynamic fields, e.g. a table to store the values that are DATETIMEs, a table that stores the values that are DECIMALs, and so on). These tables have identical structure but with a different data type for their value column! They consist of the following columns:
ID, FieldID, CustomerID, FieldValue
Now, in order to get the field value I have to do a bunch of LEFT JOINs between the value tables and the fields table and keep only the value column whose value is not NULL, since only one value column (if any) will have a value! Of course this isn't efficient at all, and I am trying to avoid it. Any suggestions, even if they require a completely different database structure, are welcome. I am also using MySQL along with EntityFrameworkCore.
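For concreteness, the query shape being described looks roughly like this; a sketch only, with the value table names (FieldValuesDateTime, FieldValuesDecimal) and the customer id invented for illustration:
-- One LEFT JOIN per value table; at most one of them matches per field,
-- so COALESCE keeps the single non-NULL value.
SELECT f.ID,
       f.FieldName,
       COALESCE(CAST(dtv.FieldValue AS CHAR),
                CAST(dcv.FieldValue AS CHAR)) AS FieldValue
FROM Fields AS f
LEFT JOIN FieldValuesDateTime AS dtv
       ON dtv.FieldID = f.ID AND dtv.CustomerID = 1
LEFT JOIN FieldValuesDecimal AS dcv
       ON dcv.FieldID = f.ID AND dcv.CustomerID = 1;
Every additional value type adds another LEFT JOIN, which is why this pattern scales poorly.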

Select only a specific number of columns from a table - Hive

How can I select only a specific number of columns from a table in Hive? For example, if I have a table with 50 columns, how can I select just the first 25 columns? Is there any easy way to do it rather than hard coding the column names?
I guess that you're asking about using the order in which you defined your columns in your CREATE TABLE statement. No, that's not possible in Hive at the moment.
You could do the trick by adding a new COLUMN_NUMBER column and using it in your WHERE statements, but in that case I would really think twice about the trade-off between saving some typing and messing up your whole table design with unnecessary columns. Apart from that, if you need to change your table schema in the future (for instance, by adding a new column), adapting previous code that relies on column numbers would be painful.
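One name-based (not positional) alternative worth mentioning: if the column names happen to follow a pattern, Hive's regex column specification can select them by name. A sketch, assuming the columns are literally named col1 through col50 and the table is called my_table:
-- Needed on Hive 0.13+ before regex column names are accepted:
SET hive.support.quoted.identifiers=none;

-- Selects col1 .. col25 by matching their names, not their position.
SELECT `col([1-9]|1[0-9]|2[0-5])` FROM my_table;
This only works when the names share a usable pattern, so it does not answer the general positional case.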

Comparing the data of two tables in the same database in SQL Server

I need to compare the data of two tables within one database, matching the rows using some columns of the tables. The extra rows should be stored in another table called "relationaldata".
While searching, I found a solution:
http://weblogs.sqlteam.com/jeffs/archive/2004/11/10/2737.aspx
but it's not working for me.
Can anyone help me with how to do this? How can I compare the data of two tables within one database using Red Gate (the tool)?
Red Gate SQL Data Compare lets you map two tables in the same database together, provided the columns have compatible data types. You just put the same database as both source and target, then go to the Object Mapping tab, unmap the two tables, and map them to each other.
Data Compare used to use UNION ALL, but it was filling up tempdb, which is what will happen if the table has a high row count. It now does all the "joins" on the local hard disk using a data cache.
I think you can use the EXCEPT clause in SQL Server:
INSERT INTO tableC (Col1, Col2, Col3)
SELECT Col1, Col2, Col3 FROM tableA
EXCEPT
SELECT Col1, Col2, Col3 FROM tableB;
Please refer to the following for more information:
http://blog.sqlauthority.com/2008/08/07/sql-server-except-clause-in-sql-server-is-similar-to-minus-clause-in-oracle/
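Since the question wants the extra rows stored in "relationaldata", the same idea can be applied in both directions; a sketch, with the column names assumed:
-- Rows present in tableA but missing from tableB:
INSERT INTO relationaldata (Col1, Col2, Col3)
SELECT Col1, Col2, Col3 FROM tableA
EXCEPT
SELECT Col1, Col2, Col3 FROM tableB;

-- And, if both directions matter, the reverse:
INSERT INTO relationaldata (Col1, Col2, Col3)
SELECT Col1, Col2, Col3 FROM tableB
EXCEPT
SELECT Col1, Col2, Col3 FROM tableA;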
Hope this helps

SQL query: put results into a table named after the result's name

I have a very large database I would like to split up into tables. I would like to make it so that when I run a DISTINCT, it will create a table for every distinct name. The name of each table will be the data in one of the fields.
EX:
A --------- Data 1
A --------- Data 2
B --------- Data 3
B --------- Data 4
would result in 2 tables, one named A and another named B. Then the entire row of data would be copied into that table.
select distinct [name] from [maintable]
- make table for each name
- select [name] from [maintable]
- copy into table name
- drop row from [maintable]
Any help would be great!
I would advise you against this.
One solution is to create indexes, so you can access the data quickly. If you have only a handful of names, though, this might not be particularly effective, because the index would have to select almost all the records.
Another solution is something called partitioning. The exact mechanism differs from database to database, but the underlying idea is the same. Different portions of the table (as defined by name in your case) would be stored in different places. When a query is looking only for values for a particular name, only that data gets read.
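As one concrete illustration, here is what list partitioning could look like in Postgres syntax (other databases spell this differently; the table and column names are taken from the example above):
-- Parent table partitioned on the name column.
CREATE TABLE maintable (
    name text NOT NULL,
    data text
) PARTITION BY LIST (name);

-- One partition per name; a query filtered on name reads only
-- the matching partition.
CREATE TABLE maintable_a PARTITION OF maintable FOR VALUES IN ('A');
CREATE TABLE maintable_b PARTITION OF maintable FOR VALUES IN ('B');
You get the read behavior you were after, while the table still presents itself as one logical maintable.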
Generally, it is bad design to have multiple tables with exactly the same data columns. Here are some reasons:
Adding a column, changing a type, or adding an index has to be done once per table instead of one time.
It is very hard to enforce a primary key constraint on a column across the tables; you effectively lose the primary key.
Queries that touch more than one name become much more complicated.
Insertions and updates are more complex, because you have to first identify the right table. This often results in overuse of dynamic SQL for otherwise basic operations.
Although there may be some simplifications (security comes to mind), most databases have other mechanisms that are superior to splitting the data into separate tables.
What you want is:
CREATE TABLE new_table
AS (SELECT ...);  -- the data that you want in this table
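For the sample data above, that could look like the following (a sketch; maintable and its name column are assumed from the question):
-- One statement per distinct name; the WHERE clause selects the
-- rows that belong in each new table.
CREATE TABLE a AS
    (SELECT * FROM maintable WHERE name = 'A');

CREATE TABLE b AS
    (SELECT * FROM maintable WHERE name = 'B');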