How to change column name if it contains certain characters - SQL

How can I change the names of table columns in SQL Server?
The Database table looks like this:
| Column 1 | Column 2 | Column 3 | Q115($) | Q215($) | Q315($) | .... | QXYY($)|
Where new columns are added over time in the format "Quarter"+"Year"+"($)"
I want to write a query that does the following in Microsoft SQL Server:
For all columns whose names contain "($)":
Change Name of Column from QXYY($) to 20YY QX

Changing column names is probably something you want to do manually rather than as a mass change. You can get a list of all the columns in all tables in your DB using the sys.all_columns view, so something like:
select all_objects.name as Table_nm, all_columns.name as Column_nm
from sys.all_columns
inner join sys.all_objects
on all_objects.object_id = all_columns.object_id
where all_columns.name like '%$%'
Renaming a column is done through the sp_rename stored procedure, so you could take the output of that query to build the statements that rename all the offending columns:
https://msdn.microsoft.com/en-us/library/ms188617.aspx
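As a rough illustration, here is a minimal (untested) sketch that generates one sp_rename statement per matching column, assuming every matching name follows the QXYY($) pattern from the question; review the generated statements before running any of them:
-- Build one sp_rename call per column named like Q115($), Q215($), ...
-- The SUBSTRING positions assume the fixed QXYY($) naming convention.
select 'EXEC sp_rename ''' + all_objects.name + '.' + all_columns.name
       + ''', ''20' + SUBSTRING(all_columns.name, 3, 2)
       + ' Q' + SUBSTRING(all_columns.name, 2, 1) + ''', ''COLUMN'';'
from sys.all_columns
inner join sys.all_objects
on all_objects.object_id = all_columns.object_id
where all_columns.name like 'Q[1-4][0-9][0-9]($)'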
Good luck!

Powershell:
push-location;
import-module sqlps -disablenamechecking;
pop-location;
$s = new-object microsoft.sqlserver.management.smo.server '.';
$tbl = $s.databases['MyDB'].tables['MyTbl'];
foreach ($col in $tbl.Columns) {
    # Match against the column's Name property, not the column object itself
    if ($col.Name -match '^Q([1-4])([0-9][0-9])\(\$\)$') {
        $newName = "20$($matches[2]) Q$($matches[1])";
        $col.Rename( $newName );
    }
}

Related

Elasticsearch, Elasticsearch SQL, SHOW COLUMNS or DESCRIBE - is there a possibility to filter the output

I have a simple Elasticsearch SQL query like this:
GET /_sql?format=txt
{
"query" :"""
DESCRIBE "index_name"
"""
}
and it works, and the output is like this:
column | type | mapping
-----------------------------------------------------------
column_name1 | STRUCT | object
column_name1.Id | VARCHAR | text
column_name1.Id.keyword | VARCHAR | keyword
Is there a possibility to prepare the above query using a filter or a WHERE clause, for example something like this:
GET /_sql?format=txt
{
"query":"""
DESCRIBE "index_name"
""",
"filter": {"terms": {"type.keyword": ["STRUCT"]}}
}
or
GET /_sql?format=txt
{
"query":"""
DESCRIBE "index_name"
WHERE "type" = 'STRUCT'
"""
}
That is not possible, no.
While the DESCRIBE SQL command seems to return tabular data, it is not a query: it does not support WHERE clauses and cannot be used within a SELECT statement. That is actually not specific to Elasticsearch; the same holds in RDBMSs.
The same apparently is true for the Elasticsearch filter clause. It works with SELECT SQL statements, but with DESCRIBE or SHOW COLUMNS, while not producing an error, it simply has no effect on the results.
In "real" SQL, you could work around this by querying information_schema.COLUMNS, but that is not an option in Elasticsearch.

PostgreSQL import from CSV: NULL values are text - need null

I exported a bunch of tables (>30) as CSV files from a MySQL database using phpMyAdmin. These CSV files contain NULL values like:
"id","sourceType","name","website","location"
"1","non-commercial","John Doe",NULL,"California"
I imported many such CSVs into a PostgreSQL database with TablePlus. However, the NULL values in the columns appear as text rather than as null.
When my application fetches the data from these columns, it actually retrieves the text 'NULL' rather than a null value.
SQL commands with IS NULL also do not retrieve these rows, probably because the values are stored as text rather than as null values.
Is there a SQL command I can do to convert all text NULL values in all the tables to actual NULL values? This would be the easiest way to avoid re-importing all the tables.
PostgreSQL's COPY command has the NULL 'some_string' option, which allows you to specify which string should be read as a NULL value: https://www.postgresql.org/docs/current/sql-copy.html
This would of course require re-importing all your tables.
Example with your data:
The CSV:
"id","sourceType","name","website","location"
"1","non-commercial","John Doe",NULL,"California"
"2","non-commercial","John Doe",NULL,"California"
The table:
CREATE TABLE import_with_null (id integer, source_type varchar(50), name varchar(50), website varchar(50), location varchar(50));
The COPY statement:
COPY import_with_null (id, source_type, name, website, location) from '/tmp/import_with_NULL.csv' WITH (FORMAT CSV, NULL 'NULL', HEADER);
Test of the correct import of NULL strings as SQL NULL:
SELECT * FROM import_with_null WHERE website IS NULL;
id | source_type | name | website | location
----+----------------+----------+---------+------------
1 | non-commercial | John Doe | | California
2 | non-commercial | John Doe | | California
(2 rows)
The important part, which transforms the string NULL into a SQL NULL value, is NULL 'NULL'; any other marker works too, e.g. NULL 'whatever string'.
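For instance, if the CSV files had used \N as their null marker (a common mysqldump convention), the only thing to change would be the NULL option; this variant is illustrative, not taken from the question:
-- Same import, but treating \N (instead of the string NULL) as SQL NULL:
COPY import_with_null (id, source_type, name, website, location) from '/tmp/import_with_NULL.csv' WITH (FORMAT CSV, NULL '\N', HEADER);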
UPDATE: For whoever comes here looking for a solution, the answers offer two potential approaches.
One of them uses the SQL COPY method, which must be applied as part of the import itself. That solution, provided by Michal T and marked as the accepted answer, is the better way to prevent the problem in the first place.
My solution below uses a script in my application (built in Laravel/PHP) and can be run after the import is already done.
Note: see the comments in the code; you could work out a similar solution in other languages/frameworks.
Thanks to @BjarniRagnarsson's suggestion in the comments above, I came up with a short PHP Laravel script that runs update queries on all columns of type 'string' or 'text', replacing the text 'NULL' with real NULL values.
public function convertNULLStringToNULL()
{
    $tables = DB::connection()->getDoctrineSchemaManager()->listTableNames(); // Get list of all tables
    $results = []; // An array to store the output results
    foreach ($tables as $table) { // Loop through each table
        $columnNames = DB::getSchemaBuilder()->getColumnListing($table); // Get list of all columns
        $columnResults = []; // Array to store the results per column
        foreach ($columnNames as $column) { // Loop through each column
            $columnType = DB::getSchemaBuilder()->getColumnType($table, $column); // Get the column type
            if (
                $columnType == 'string' || // Check if the column type is string or text
                $columnType == 'text'
            ) {
                $query = "update " . $table . " set \"" . $column . "\"=NULL where \"" . $column . "\"='NULL'"; // Build the update query as mentioned in the comments above
                $r = DB::update($query); // Perform the update query
                array_push($columnResults, [
                    $column => $r,
                ]); // Push the column results
            }
        }
        array_push($results, [
            $table => $columnResults,
        ]); // Push the table results
    }
    dd($results); // Output the results
}
Note: I was using Laravel 8 for this.
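For reference, the update that the script issues for each text column boils down to plain SQL like the following, shown here against the import_with_null example table from the accepted answer:
-- Turn the literal string 'NULL' into a real NULL in one column;
-- the script above simply repeats this for every string/text column.
UPDATE import_with_null SET website = NULL WHERE website = 'NULL';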

SQL query: 'IN' operator does not work

I am using CodeIgniter, and the 'in' operator in my SQL query is not working. Please check it and share your ideas.
table
id | coach_name
------------------
9 | GS
------------------
10 | SLR
View and function:
$coachID = explode(',',$list['coach']);
$coachname = $this->rail_ceil_model->display_coach_name($coachID);
Current result:
SLR
Needed result:
GS,SLR
Last query:
SELECT coach_name FROM mcc_coach WHERE id IN('9', '10')
CI code
public function display_coach_name($coachID = '')
{
    $db2 = $this->load->database('rail', TRUE);
    $db2->select('coach_name');
    $db2->from('mcc_coach');
    $db2->where_in('id', $coachID);
    $query = $db2->get();
    echo $db2->last_query(); die;
    if ($query->num_rows() > 0):
        //return $query->row()->coach_name;
    else:
        return 0;
    endif;
}
You must provide an array to the IN operator, so $coachID must be an array, not a string.
If you are writing this query
SELECT coach_name FROM mcc_coach WHERE id IN('9,10')
it means you are applying the IN operator to a single id that contains a comma-separated value.
So the right query will be:
SELECT coach_name FROM mcc_coach WHERE id IN('9','10')

Using the results of a select sub query as the columns to select in the main query. Injection?

I have a table that contains a column storing sql functions, column names and similar snippets such as below:
ID | Columsql
1 | c.clientname
2 | CONVERT(VARCHAR(10),c.DOB,103)
The reason for this is to use selected rows to dynamically build results from the main query that match spreadsheet templates, e.g. Template 1 requires the above client name and DOB.
My Subquery is:
select columnsql from CSVColumns cc
left join Templatecolumns ct on cc.id = ct.CSVColumnId
where ct.TemplateId = 1
order by ct.columnposition
The results of this query are 2 rows of text:
c.clientname
CONVERT(VARCHAR(10),c.DOB,103)
I would wish to pass these into my main statement so it would read initially
Select(
select columnsql from CSVColumns cc
left join Templatecolumns ct on cc.id = ct.CSVColumnId
where ct.TemplateId = 1
order by ct.columnposition
) from Clients c
but perform:
select c.clientname, CONVERT(VARCHAR(10),c.DOB,103) from clients c
to present a results set of client names and DOBs.
So far my attempts at 'injecting' are fruitless. Any suggestions?
You can't do this, at least not directly. What you have to do is, in a stored procedure, build up a varchar/string containing a complete SQL statement; you can execute that string.
declare @convCommand varchar(50);
-- some sql to get 'CONVERT(VARCHAR(10), c.DOB, 103)' into @convCommand.
declare @fullSql varchar(1000);
set @fullSql = 'select c.clientname, ' + @convCommand + ' from clients c;';
exec (@fullSql);
However, that's not the most efficient way to run it - and when you already know what fragment you need to put into it, why don't you just write the statement?
I think the reason you can't do it directly is that SQL injection is a dangerous thing. (If you don't know why, please do some research!) Having got a dangerous string into a table - e.g. 'c.dob from clients c;drop table clients;' - using the column that contains the data to actually execute code would not be a good thing!
EDIT 1:
The original programmer is likely using a C# function:
string newSql = string.Format("select c.clientname, {0} from clients c", "convert...");
The basic format is:
string.Format("hhh {0} ggg{1}.....{n}", s0, s1,....sn);
{0} in the format string is replaced by the string s0; {1} is replaced by the string s1, ...; {n} by the string sn.
This is probably a reasonable way to do it, though why it needs all the fragments is a bit opaque. You can't duplicate that in SQL, except by doing what I suggest above. (SQL doesn't have anything like the same string.Format function.)
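For what it's worth, here is a hedged T-SQL sketch of the stored-procedure approach suggested above, assembling the stored fragments into one executable statement; the table and column names are taken from the question, and the injection caveat above still applies:
declare @cols varchar(max);
-- Concatenate the fragments in template order
-- (FOR XML PATH('') is the usual pre-STRING_AGG concatenation idiom).
select @cols = STUFF((
    select ', ' + cc.columnsql
    from CSVColumns cc
    left join Templatecolumns ct on cc.id = ct.CSVColumnId
    where ct.TemplateId = 1
    order by ct.columnposition
    for xml path('')), 1, 2, '');
declare @fullSql varchar(max) = 'select ' + @cols + ' from Clients c;';
exec (@fullSql);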

Powershell working with SQL Server datasets

I am trying to filter a dataset that I got from a SQL Server database. Here is the scenario...
I'm getting the servername and dbname columns from one of the database servers and returning the result set as return ,$dataSet.Tables[0]. I'm getting the results correctly in a table format.
But now I've got all the server names into a variable from this dataset as below:
$servers=$dataSet_1.servername | select -unique
Now I'm trying to loop through each server and get the list of databases associated with it as follows, but this doesn't look like the right approach, as it gives me all the servers and their database names in every iteration:
foreach($server IN $servers)
{
write-host $server
$dataSet_1 | Where-Object {$dataSet_1.servername -eq $server} | select $_.dbname
}
Could someone suggest the right approach or a way to do this?
Sample output (basically it should iterate over each server and display its database names):
ServerA
dbname
database1
database2
database3
....
ServerB
dbname
database1
database8
database10
....
ServerC
...
Thanks,
"$dataSet_1 | where {$_.servername -eq $server} " is still returning the System.Data.DataTable type.
Does this return the desired output?
$dataSet_1 | where {$_.servername -eq $server} | %{$_.dbname}
Maybe this:
$servers=$dataSet_1.servername | Select-Object -Unique
$filteredSet = $dataSet_1 | Where-Object { $servers -contains $_.servername }
$filteredSet