BCP Export from SQL to CSV File, Metadata, Header Row

I need to export data from an SQL table to CSV format. But I also need:
1. Metadata inserted in the first row of the output. This will be static.
2. Header row after the metadata.
3. Data. BUT fields holding multiple values (e.g. a name field) must be wrapped in double quotes ("") so that the commas separating the values stay inside the field.
Here is my first draft to get data in CSV:
EXEC xp_cmdshell 'bcp "SELECT ITN_USER, SITE_ID, TICKET_NUMBER, VALIDATING_CARRIER_CODE, TICKET_EXPIRATION_DATE, TICKET_CURR_CODE, RESIDUAL_TOTAL_AMT, TICKET_TOTAL_FARE, PASSENGER_NAME, FIRST_ORIG_APT_CODE, FIRST_DEST_APT_CODE, FIRST_DEPART_DATE, TICKET_ISSUE_DATE, CRS_LOCATOR, TICKET_STATUS_ID, TICKET_TYPE, RSVN_SYS_ID, TICKETING_LOCATION, TICKET_BASE_FARE, TICKET_TAX, FARE_CALC_LINE FROM GDSX.dbo.UnusedTickets WHERE INSERT_DATE = ''01-31-2018''" queryout "C:\Users\Public\Documents\filename1_filename2_date.csv" /c /t, -T'
Any helpful tips or suggestions would be greatly appreciated.
This is what I want to achieve:
"josh#gmail.com,vbear#gmail.com"
ITN_USER,SITE_ID,TICKET_NUMBER,VALIDATING_CARRIER_CODE,TICKET_EXPIRATION_DATE,TICKET_CURR_CODE,RESIDUAL_TOTAL_AMT,TICKET_TOTAL_FARE,PASSENGER_NAME,FIRST_ORIG_APT_CODE,FIRST_DEST_APT_CODE,FIRST_DEPART_DATE,TICKET_ISSUE_DATE,CRS_LOCATOR,TICKET_STATUS_ID,TICKET_TYPE,RSVN_SYS_ID,TICKETING_LOCATION,TICKET_BASE_FARE,TICKET_TAX,FARE_CALC_LINE
vbear,abccorpus,0017845439769,AA,08MAY2009,USD,1226.57,1629.00,bear/vernon,MSY,ORD,17MAY2008,08MAY2008,,,electronic,,,,,
jsmith,abccorpus,0167846739059,UA,19JUN2009,USD,354.00,354.00,smith/john,LAX,PDX,25JUN2008,19JUN2008,,,,,,,,
dgarcia,abccorpmx,1327959759566,MX,03AUG2009,MXN,6828.06,6828.06,garcia/diego,MEX,GUA,07AUG2008,03AUG2008,,,electronic,,,,,
Thanks!

Try creating a view and using that view in your BCP statement. You can do all of your formatting in the view. For the metadata row you need to UNION ALL it with the data, so your view will be something like:
Create view abc as
select 'Name' as Name, 'Age' as Age --Metadata
Union All
Select Name, Cast(Age as Varchar(X)) from your_table
Make sure you CAST every non-character column to VARCHAR, since all branches of a UNION must return compatible types. Also note that a UNION ALL by itself does not guarantee row order, so add a sort-key column and an ORDER BY in the BCP query if the metadata row must come out first.
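Applied to the export in the question, the view-based approach might look like the sketch below. This is only an illustration: the view name `UnusedTicketsExport`, the `sort_key` column, and the abbreviated column list are invented here; `QUOTENAME(value, '"')` is a T-SQL function that wraps a value in the given quote character.

```sql
-- Sketch only: view name, sort_key, and the shortened column list are hypothetical.
CREATE VIEW dbo.UnusedTicketsExport AS
SELECT 1 AS sort_key,
       '"josh#gmail.com,vbear#gmail.com"' AS line          -- 1. static metadata row
UNION ALL
SELECT 2, 'ITN_USER,SITE_ID,PASSENGER_NAME'                -- 2. header row (abbreviated)
UNION ALL
SELECT 3,                                                  -- 3. data rows
       ITN_USER + ',' + SITE_ID + ','
       + QUOTENAME(PASSENGER_NAME, '"')                    -- quote the multi-value field
FROM GDSX.dbo.UnusedTickets;
```

The bcp call would then select the single `line` column with `ORDER BY sort_key` (so the metadata row stays first), using `-c` and no field terminator, since the commas are already embedded in the string.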

Related

DMV request for "Description" of the table for Power BI dataset

What I am trying to achieve is to add tables and columns descriptions programmatically to a Power BI dataset.
For this reason, I use SQL Server Analysis Services to get access to the metadata.
I run a simple request:
select *
from $System.TMSCHEMA_PARTITIONS
As a result, I get a table with these column names:
ID
TableID
Name
Description
....
Now I want to select where the "Description" is empty.
select *
from $System.TMSCHEMA_PARTITIONS
where Description IS NULL
But I can't, I always get a syntax error:
Query (3, 7) The syntax for 'Description' is incorrect.
The parser reads it as a keyword and I don't know how to avoid that.
I have tried adding quotes and double quotes to the name of the columns, I tried adding a table reference and all of these combined, but nothing helps.
It works for "TableID" for example.
This one works:
select *
from $System.TMSCHEMA_PARTITIONS
where len([Description]) = 0

How to pass dynamic values into snowflake query?

I have a query to find potential SSN in a table using regex pattern.
Table name: db_name.schema_name.ABC
Column name with sensitive data: senstve_col
select regexp_substr(senstve_col, '\\b[0-9]{3}[ -][0-9]{2}[ -][0-9]{4}\\b') as sensitive_data, * from db_name.schema_name.ABC
I need to do this for 200 tables with 200 different column names. Also, the db_name and schema_name varies for each table.
Is there a way to pass the values dynamically and store the data into a new table in snowflake?
Can someone help with a query to automate the above for multiple tables?
This is how you call the .sql file from Unix:
snowsql -o variable_substitution=True --variable NEXT_TABLE=tblname --variable NEXT_COL=colname -f /home/sagar/snowflake/create_table.sql
And this is how you reference the variables inside the .sql file:
create or replace view ntgrpa_hist.vw_rt_satelliteinfo_latest as
select &NEXT_COL from public.&NEXT_TABLE;
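If the 200 targets can be listed in a driving table, an alternative is to loop over them server-side with Snowflake Scripting instead of 200 snowsql invocations. The following is a sketch under stated assumptions: `scan_targets` and `sensitive_hits` are hypothetical tables you would create yourself, and the block runs as an anonymous block (e.g. in Snowsight) or inside a stored procedure.

```sql
-- Assumed control table, one row per table/column to scan:
--   CREATE TABLE scan_targets (db_nm STRING, schema_nm STRING,
--                              table_nm STRING, col_nm STRING);
-- Assumed result table:
--   CREATE TABLE sensitive_hits (src STRING, sensitive_data STRING);
DECLARE
  c CURSOR FOR SELECT db_nm, schema_nm, table_nm, col_nm FROM scan_targets;
BEGIN
  FOR rec IN c DO
    EXECUTE IMMEDIATE
      'INSERT INTO sensitive_hits ' ||
      'SELECT ''' || rec.db_nm || '.' || rec.schema_nm || '.' || rec.table_nm || ''', ' ||
      -- backslashes doubled again because this literal is built inside a string
      'regexp_substr(' || rec.col_nm || ', ''\\\\b[0-9]{3}[ -][0-9]{2}[ -][0-9]{4}\\\\b'') ' ||
      'FROM ' || rec.db_nm || '.' || rec.schema_nm || '.' || rec.table_nm;
  END FOR;
END;
```

Each iteration builds and runs one INSERT ... SELECT per target table, so the hits for all 200 tables land in a single result table.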

SQL - Loading data from CSV file & using that data to find instances in another table

I am new to SQL and I was wondering whether there is a way to search a table for rows matching values from a column of an external table (CSV file). To put that more concretely: one column in the CSV file contains latitudes and another contains longitudes, and I want to search a table that holds information about several locations, with their latitudes and longitudes specified in the table.
I want to retrieve the information about that particular location with Latitudes and Longitudes as input from a csv file.
Would it look something like this? :
CREATE TABLE MyTable(
latitude DECIMAL(5, 2) NOT NULL,
longitude DECIMAL(5, 2) NOT NULL
);
LOAD DATA INFILE 'C:\Users\Admin\Desktop\Catalog.csv'
INTO TABLE MyTable
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
SELECT
main.object_id,
main.latitude,
main.longitude,
main.Some_Information
FROM
location_info AS main,
MyTable AS temp
WHERE
main.latitude = temp.latitude AND
main.longitude = temp.longitude
I also tried using psql's \copy like:
\copy MyTable FROM 'C:\Users\Admin\Desktop\Catalog.csv' WITH CSV;
As given here -> http://postgresguide.com/utilities/copy.html.
But this didn't work either; there was a syntax error at or near "\", though that could be down to an older version of psql.
Also I am not a Superuser, hence the use of \copy and not COPY FROM.
I also tried using a temporary table and using \copy alongside it. It gave the same error as above.
PostgreSQL does not support the LOAD DATA INFILE syntax you're using (that's MySQL). You'll need to use COPY instead.
Your workflow should look more like:
CREATE TABLE MyTable(
latitude DECIMAL(5, 2) NOT NULL,
longitude DECIMAL(5, 2) NOT NULL
);
COPY MyTable(latitude, longitude)
FROM 'C:\Users\Admin\Desktop\Catalog.csv' WITH CSV;
SELECT
main.object_id,
main.latitude,
main.longitude,
main.Some_Information
FROM
location_info AS main
JOIN MyTable AS temp
on main.latitude = temp.latitude
and main.longitude = temp.longitude
There are three main things to notice here.
I've removed your \ from the COPY command.
I've specified the columns that you're trying to insert into with the COPY command. If the columns are in a different order in the CSV file, simply reorder them in the COPY expression.
I've changed the syntax of your join to a standard ANSI join. The logic is the same, but this is a better standard to use for readability/compatibility reasons.
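One caveat, since the asker mentioned not being a superuser: server-side `COPY ... FROM 'file'` reads the file on the *database server* and requires superuser rights (or, on newer versions, membership in the `pg_read_server_files` role), while psql's `\copy` streams a client-side file over the connection. The load step can equivalently be done client-side:

```sql
-- psql meta-command (must be on one line; no trailing semicolon required):
-- reads Catalog.csv from the client machine, no superuser needed
\copy MyTable(latitude, longitude) FROM 'C:\Users\Admin\Desktop\Catalog.csv' WITH CSV
```

The join query afterwards is unchanged; only the loading step moves to the client.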

Rowcounts of all tables in a database in Netezza

I am migrating data from MS SQL to Netezza and so I need to find the row counts of all tables in a database (in Netezza). Any query for the same would be of an immense help to me as I'm completely new to this. Thanks in advance.
This query gets it directly from _v_table. Note that RELTUPLES is maintained from statistics rather than a table scan, so treat it as an estimate:
SELECT TABLENAME, RELTUPLES FROM _V_TABLE WHERE OBJTYPE = 'TABLE' ORDER BY RELTUPLES
Something like this should work:
select 'select '||chr(39)||tablename||chr(39)||' as entity, count(1) from '||tablename||' union all'
from _v_table
where objtype = 'TABLE';
Copy/paste the result and remove the final "union all".
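The generator above emits one UNION ALL branch per table. After pasting its output and trimming the trailing "union all", the query you actually run is shaped like this (TABLE_A and TABLE_B stand in for whatever table names come back):

```sql
select 'TABLE_A' as entity, count(1) from TABLE_A union all
select 'TABLE_B' as entity, count(1) from TABLE_B;
```

Each branch does a full COUNT, so unlike the RELTUPLES query this returns exact rowcounts at the cost of scanning every table.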
I have never used Netezza but googled and found:
http://www.folkstalk.com/2009/12/netezza-count-analytic-functions.html
SELECT dept_id,
salary,
COUNT(1) OVER() total_cnt
FROM Employees
If you don't know what tables that exists:
http://www.folkstalk.com/2009/11/netezza-system-catalog-views.html
select * from _v_table;
Another way to acquire the row counts for a table (if you have access to the operating system level) is to use the Netezza nz_get_table_rowcount command. You can enter, "nz_get_table_rowcount -h" to get all of the help text on this command, but the format is:
Usage: nz_get_table_rowcount [database] <table>
Purpose: Perform a "SELECT COUNT(*) FROM <table>;" to get its true rowcount.
Thus, this script results in a full table scan being performed.
Inputs: The database name is optional. If not specified, then $NZ_DATABASE
will be used instead.
The table name is required. If only one argument is specified, it
will be taken as the table name.
If two arguments are specified, the first will be taken as the
database name and the second will be taken as the table name.
Outputs: The table rowcount is returned.
Use this command in a shell script to cycle through all of the tables within a database. Use nz_get_table_names to get the list of tables within a database.
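The cycle through all of the tables can be sketched as a small shell loop. This assumes the nz_* scripts are on the PATH and that nz_get_table_names prints one table name per line, as described above:

```shell
#!/bin/sh
# Print "<table> <exact rowcount>" for every table in $DB.
# Each call does a full table scan, per the help text above.
DB=MYDB
for t in $(nz_get_table_names "$DB"); do
    printf '%s\t%s\n' "$t" "$(nz_get_table_rowcount "$DB" "$t")"
done
```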

Write output to file in postgreSQL?

I need to print a table column out to a text file in Postgres, just a simple SELECT column_name FROM table. I'm pretty sure there is a way to do this but I can't remember the syntax. Can anyone help?
Use COPY.
If you need to copy an entire table, you can specify a table name:
COPY country TO '/usr1/proj/bray/sql/country_data';
You can also copy a query result:
COPY (SELECT column_name FROM country WHERE country_name LIKE 'A%')
TO '/usr1/proj/bray/sql/a_list_countries.copy';
The same syntax can be used to import a table:
COPY country FROM '/usr1/proj/bray/sql/country_data';
It is possible to specify additional options, e.g. delimiters, format, etc. In my daily work, I often use:
COPY country TO '/usr1/proj/bray/sql/country_data' DELIMITER ',' CSV;
For a full description of the COPY statement, refer to the linked documentation page above.
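One caveat worth adding: `COPY ... TO 'file'` writes the file on the *database server* and requires superuser rights (or, on newer versions, membership in `pg_write_server_files`). If you are connecting from psql without those rights, the `\copy` meta-command writes to a client-side file instead:

```sql
-- psql meta-command: writes the query result to a file on the client machine
\copy (SELECT column_name FROM country) TO '/tmp/country_column.csv' WITH CSV
```

The `\copy` syntax mirrors COPY (including the query form and the CSV/DELIMITER options), but must be given on a single line.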