I have a column in a SQL table that contains JSON lists as text.
I want to find all the rows that have [7] in that column, so I tried something like the following:
SELECT col1, col2 FROM tbl1 WHERE col2 LIKE '[7]'
However, this comes back with nothing (though I can see rows with that value when I inspect the table).
Stranger still, if I try editing the query to the following
SELECT col1, col2 FROM tbl1 WHERE col2 LIKE '%[7]%'
It finds the rows that have [7], but it also returns rows that clearly (I think) violate that pattern, e.g. rows that have [1,7].
Is the square bracket something special that I'm not aware of?
Yes, the square bracket is special: in SQL Server's LIKE syntax, brackets delimit a character class, so '[7]' matches a value consisting of exactly one character from the set (here, just 7), and '%[7]%' matches anything containing a 7. To match the bracket literally, escape it:
SELECT col1, col2 FROM tbl1
WHERE col2 LIKE '%\[7]%' ESCAPE '\'
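A minimal sketch of the escaped pattern using Python's sqlite3 (table and sample data invented for illustration). SQLite does not give brackets the special meaning SQL Server does, but the ESCAPE clause is portable and pins the bracket down either way:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl1 (col1 INTEGER, col2 TEXT)")
conn.executemany("INSERT INTO tbl1 VALUES (?, ?)",
                 [(1, "[7]"), (2, "[1,7]"), (3, "[17]")])

# The escaped bracket is matched literally, so only rows containing the
# exact substring "[7]" qualify -- not every row containing a 7.
rows = conn.execute(
    "SELECT col1, col2 FROM tbl1 WHERE col2 LIKE '%\\[7]%' ESCAPE '\\'"
).fetchall()
print(rows)  # [(1, '[7]')]
```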
We have a view, let's say table1_v.
If I run select * from table1_v, I get an output with a column Year (Jahr).
I want to know the underlying SQL, so I can find it in SQL Developer under the view's details or in all_views.
I wonder because the SQL looks like select year from table1, and such SQL would give me the column name YEAR, not Year (Jahr).
I would expect something like this in the view definition -
select year as "Year (Jahr)" from table1
So my question is: where could the column name be renamed, if not in the view definition, and is that best practice?
If you do want to rename it in the view then give the column an alias:
CREATE VIEW view_name (col1, col2, col3, col4, "Year (Jahr)", col6, col7) AS
SELECT col1, col2, col3, col4, year, col6, col7
FROM table_name;
However, don't: it is not best practice.
If you want mixed case, spaces, or symbol characters in an identifier, you must use a quoted identifier, and you then have to repeat exactly the same quoted identifier everywhere it is reused in queries. Because of the difficulty of matching case exactly, quoted identifiers are not considered best practice; use unquoted identifiers wherever possible.
Where could the column name be renamed if not in the view definition and if it is best practice?
Instead of changing the view, you can alias the column when you SELECT from the view when you want to display the output:
SELECT col1, col2, col3, col4, year AS "Year (Jahr)", col6, col7
FROM view_name;
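The aliasing behavior is easy to see outside Oracle as well. A small sketch with Python's sqlite3 (the single-column table1 is invented; the quoted-identifier rules are the same idea as in Oracle): the view exposes the column under the exact mixed-case alias.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (year INTEGER)")
conn.execute("INSERT INTO table1 VALUES (2024)")

# The alias is a quoted identifier, so the view exposes the column
# under the exact mixed-case name, spaces and parentheses included.
conn.execute(
    'CREATE VIEW table1_v AS SELECT year AS "Year (Jahr)" FROM table1')

cur = conn.execute("SELECT * FROM table1_v")
print(cur.description[0][0])  # Year (Jahr)
```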
I'm implementing a search bar for my program that queries a database based on user input. When the user enters a search phrase containing spaces, for example I am searching for something, the program treats it as space-separated values and searches for each value in several relevant columns of the database. An example query (based on the above phrase) would be:
SELECT * FROM TableName WHERE (
col1 LIKE "%I%" OR col2 LIKE "%I%" OR col3 LIKE "%I%" OR col4 LIKE "%I%" OR col5 LIKE "%I%"
OR col1 LIKE "%am%" OR col2 LIKE "%am%" OR col3 LIKE "%am%" OR col4 LIKE "%am%" OR col5 LIKE "%am%"
)
and so on for each space-separated value in the input. As you can imagine, this produces a very long query.
My question is, is there a better way to search for a single value in multiple columns? Or just a better way to implement a search like this one.
Yes, SQLite provides full-text search via the FTS5 module. First, create a virtual table:
CREATE VIRTUAL TABLE virtual_table USING fts5(col1,col2,col3,col4,col5);
Note that you cannot add types, constraints, or PRIMARY KEY declarations to a CREATE VIRTUAL TABLE statement used to create an FTS5 table.
Populate your table with INSERT (and modify it with UPDATE/DELETE) like any other table:
INSERT INTO virtual_table (col1, col2, col3, col4, col5)
VALUES ('I', 'like', 'doughnuts', 'with', 'chocolate'),
('I', 'am', 'searching', 'for', 'something'),
('We', 'are', 'going', 'to', 'the park');
Then you can use the full power of full-text search functionality. There are three ways to execute a full-text query, see the documentation for more details. One possible option would be using the MATCH operator:
SELECT *
FROM virtual_table
WHERE virtual_table MATCH 'I am searching for something';
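Put together as a runnable sketch with Python's sqlite3 (assuming your SQLite build includes FTS5, which standard builds do): bare words in an FTS5 query are ANDed together, so only rows containing every term, in any column, match.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE VIRTUAL TABLE virtual_table USING fts5(col1,col2,col3,col4,col5)")
conn.executemany(
    "INSERT INTO virtual_table VALUES (?,?,?,?,?)",
    [("I", "like", "doughnuts", "with", "chocolate"),
     ("I", "am", "searching", "for", "something"),
     ("We", "are", "going", "to", "the park")])

# Each bare word must appear somewhere in the row; only row 2 has them all.
rows = conn.execute(
    "SELECT * FROM virtual_table WHERE virtual_table MATCH ?",
    ("I am searching for something",)).fetchall()
print(rows)  # [('I', 'am', 'searching', 'for', 'something')]
```

Passing the user's phrase as a bound parameter (the `?`) also keeps the query safe from injection, which matters for a search bar.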
I have two tables in two different schemas, scha and schb (e.g.). In scha I have several tables whose columns are all varchar, because I had to format some data (it was part of the task).
Now I have the same tables, but with different column types, in schb.
The problem is this: wherever a column has a number-like type (money, numeric, date), I get an error telling me to CAST.
Is there a way I can CAST without copying one column after another, i.e. copying it all in one go?
for example
INSERT INTO schb.customer
SELECT "col1", "col2", "col3" -- (needs casting) ...
FROM scha.customer
Thanks
A SELECT clause is not a list of columns, it is a list of expressions (which usually involve columns). A type cast is an expression so you can put them right into your SELECT. PostgreSQL supports two casting syntaxes:
CAST ( expression AS type )
expression::type
The first is standard SQL, the :: form is PostgreSQL-specific. If your schb.customer.col3 is (for example) numeric(5,2), then you'd say:
INSERT INTO schb.customer (col1, col2, col3)
SELECT col1, col2, cast(col3 as numeric(5,2))
FROM scha.customer
-- or
INSERT INTO schb.customer (col1, col2, col3)
SELECT col1, col2, col3::numeric(5,2)
FROM scha.customer
Note that I've included the column list in the INSERT as well. You don't have to do that, but it is a good idea: you don't have to worry about the column order, and it makes it easy to skip columns (or let columns assume their default values without explicitly telling them to).
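The same INSERT ... SELECT with a standard CAST can be sketched with Python's sqlite3 (schemas and a numeric column invented for illustration; the :: form is PostgreSQL-only, but CAST(expr AS type) is standard SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (col1 TEXT, col2 TEXT, col3 TEXT)")
conn.execute("INSERT INTO src VALUES ('a', 'b', '12.50')")
conn.execute("CREATE TABLE dst (col1 TEXT, col2 TEXT, col3 REAL)")

# The cast is just an expression in the SELECT list, so the whole
# column is converted in one statement.
conn.execute(
    "INSERT INTO dst (col1, col2, col3) "
    "SELECT col1, col2, CAST(col3 AS REAL) FROM src")
value = conn.execute("SELECT col3 FROM dst").fetchone()[0]
print(value)  # 12.5
```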
This question already has answers here:
SQL column names and comparing them to row records in another table in PostgreSQL
(3 answers)
Closed 9 years ago.
I am trying to have a SQL statement where the column names in the SELECT are a subquery. The basic format is:
SELECT (<subquery for columns>) FROM Table;
My subquery returns 4 rows of field names, so I need to make them a single row. I used:
SELECT array_to_string(array_agg(column_names::text),',') FROM Fieldnames;
And then I get a returned format of col1, col2, col3, col4 for my 4 returned rows as a string. If I paste in the raw text of my query, it works fine as:
SELECT (col1, col2, col3, col4) FROM Table;
The issue arises when I put the two together. I get an odd response from psql. I get a:
?column?
col1, col2, col3, col4
with no rows returned for:
SELECT(SELECT array_to_string(array_agg(column_names::text),',') FROM Fieldnames) FROM Table;
Conceptually, I think there are two ways I can address this. I need to get my subquery's result into a form I can use as the column-name list of the outer SELECT, but because it returns multiple rows (each a single varchar holding one column name), I thought I could just paste them together, but I cannot. I am using psql, so I do not have the "#" list trick.
Any advice would be appreciated.
Solution:
Here is why the question is not a duplicate, and how I solved it. In trying to simplify the question to something manageable, it lost some essential detail. I ended up writing a function because I couldn't use # to pass a list to SELECT in PostgreSQL. When you want to select only a subset of columns, you cannot pass a nested (SELECT), even with an AS, although this works in Oracle. As a result, I wrote a function that effectively created a string and then passed it as the SELECT. There seems to be something fundamentally different in how PostgreSQL's SQL parser handles the arguments to SELECT compared with Oracle's, but every DB is different.
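The function-based workaround boils down to dynamic SQL: fetch the names, build the statement text, then execute it. A minimal client-side sketch with Python's sqlite3 (table names invented; in PostgreSQL itself the same idea usually lives in a PL/pgSQL function using EXECUTE). Identifiers cannot be bound as parameters, so real code should validate the names against a whitelist before splicing them in:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE fieldnames (column_names TEXT);
    INSERT INTO fieldnames VALUES ('col1'), ('col3');
    CREATE TABLE t (col1 TEXT, col2 TEXT, col3 TEXT);
    INSERT INTO t VALUES ('a', 'b', 'c');
""")

allowed = {"col1", "col2", "col3"}  # whitelist: identifiers can't be bound
cols = [r[0] for r in conn.execute("SELECT column_names FROM fieldnames")]
assert all(c in allowed for c in cols)

# Build the statement text from the fetched names, then run it.
query = "SELECT {} FROM t".format(", ".join(cols))
rows = conn.execute(query).fetchall()
print(rows)  # [('a', 'c')]
```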
If you enclose several column names in parentheses like you do:
SELECT (col1, col2, col3, col4) FROM tbl;
.. you effectively create an ad-hoc row type from the enclosed columns, which has no name, because you did not provide an alias. Postgres will choose a fallback like ?column?. In later Postgres versions the default name is row since, internally, the above is short syntax for:
SELECT ROW(col1, col2, col3, col4) FROM tbl;
Provide your own name (alias):
SELECT (col1, col2, col3, col4) AS my_row_type FROM tbl;
But you probably just want individual columns. Drop the parentheses:
SELECT col1, col2, col3, col4 FROM tbl;
In my table, certain rows have junk characters like Ё㺞稹㾸䐶ꖆ㩜癈ῤ in certain columns. I am trying to filter out such rows like below:
SELECT Col1, Col2, Col3 FROM MyTable WHERE Col2 NOT LIKE '%[REGEX]%'
Is there any better approach?
If no, how can I generate a proper regex? I don't want any records where Col2 has anything but alphanumeric characters, punctuations, and a space.
SQL Server doesn't support proper regular expressions in LIKE.
The pattern syntax does support negation (^) and ranges or sets of characters, though, so you could use something like:
WHERE Col2 LIKE N'%[^0-9A-Za-z .,;:]%' COLLATE Latin1_General_BIN
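If you do want a real regex, one option outside SQL Server is to push the test into application code. A sketch with Python's sqlite3 (table and data invented): SQLite parses a REGEXP operator but ships no implementation, so we register one backed by the re module, using the same character class as the LIKE pattern above.

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
# SQLite parses "X REGEXP Y" but provides no regexp() function by
# default; supply one backed by Python's re module.
conn.create_function(
    "REGEXP", 2,
    lambda pattern, value: re.search(pattern, value) is not None)

conn.execute("CREATE TABLE MyTable (Col1 TEXT, Col2 TEXT, Col3 TEXT)")
conn.executemany("INSERT INTO MyTable VALUES (?,?,?)",
                 [("a", "clean text, ok;", "x"),
                  ("b", "junk \u0401\u3e9e\u7a39", "y")])

# Keep only rows whose Col2 contains no character outside the allowed set.
rows = conn.execute(
    "SELECT Col1, Col2, Col3 FROM MyTable "
    "WHERE Col2 NOT REGEXP '[^0-9A-Za-z .,;:]'").fetchall()
print(rows)  # [('a', 'clean text, ok;', 'x')]
```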