I've read through other similar questions but can't find an answer.
I have a SQL query such as:
SELECT * FROM tblName
tblName has several columns, e.g. id, name, date, etc.
I would like to cast id as bigint.
However, in my application, tblName is dynamic. So a user has a list of all the tables in the DB. Let's say 1000 tables. The application then gets all columns from that table. The only column each table has in common is the id column.
The application is using Flask and pyodbc, so any larger numbers get converted to decimal/float, which is a whole other headache. The workaround is to cast the int as bigint within SQL.
Is this possible? I'm unable to rewrite any part of the application, so I'm asking whether it can be done in SQL.
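Roughly what I'm hoping for, as a sketch only (assuming the dialect accepts t.* alongside other select items, and tolerating the original id reappearing inside t.*):

SELECT CAST(id AS bigint) AS id_big, t.*  -- cast the one known column, keep every unknown column via t.*
FROM tblName AS t;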
I am pivoting a table of the form id, key, value where there are many types of keys (130). There are too many keys to explicitly enumerate the types in the crosstab() call, or write out the crosstab_N function definition as recommended in the crosstab documentation:
CREATE TYPE tablefunc_crosstab_N AS (
    row_name TEXT,
    category_1 TEXT,
    category_2 TEXT,
        .
        .
        .
    category_N TEXT
);
How do I pivot this table into wide format with columns id, category_1, category_2, ... category_130? I find it hard to believe you can't pivot such tables in SQL without explicitly enumerating the column types. For example in R, using the tidyverse package I would just call dataframe %>% spread(key=key, value=value)
I find it hard to believe you can't pivot such tables in SQL without explicitly enumerating the column types.
A SQL query returns a fixed set of columns. SQL is a declarative language: you need to tell the database which columns you want in the result set so it can understand your requirement and build the proper query plan.
Typical solutions involve dynamic SQL: that is, use a query to generate the actual query string, then execute it. That's an additional level of indirection, which can be somewhat challenging.
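As a rough illustration of that dynamic-SQL route (a sketch only; it assumes the tablefunc extension is installed, the source table is called mytable, id is an int, and the values are text), the query below only generates the crosstab statement text, which you then execute as a second step:

SELECT 'SELECT * FROM crosstab('
    || '''SELECT id, key, value FROM mytable ORDER BY 1, 2'', '
    || '''SELECT DISTINCT key FROM mytable ORDER BY 1'') '
    || 'AS ct(id int, '
    || string_agg(format('%I text', key), ', ' ORDER BY key)  -- one output column per distinct key, same order as the category query
    || ');' AS generated_sql
FROM (SELECT DISTINCT key FROM mytable) AS keys;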
One halfway solution is JSON. If you can live with a result where all key/value pairs are aggregated into a single JSON-object column, you can do:
select id, json_object_agg(key, value) obj
from mytable
group by id
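Individual keys can then be pulled back out of the aggregated object with the JSON operators, for example (the key name here is hypothetical):

SELECT id, json_object_agg(key, value) ->> 'category_1' AS category_1
FROM mytable
GROUP BY id;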
In SQL, in any given table, there is a column named "name" with data type text.
If there are ten entries, suppose one entry in the column is "rohit". I want to show all the entries in the name column after "rohit", and I do not know the row id. Can it be done?
select * from your_table where name > 'rohit'
But in general you should not treat text columns like that.
A database is more than a collection of tables.
Think about how to organize your data and what defines a data row.
Maybe, besides the name, there is another attribute by which you would classify such a row, something like "shall be displayed?", "is modified", or "is active"?
So if you had a second column, say display of type int, and your table looked like
CREATE TABLE MYDATA (
    NAME TEXT,
    DISPLAY INT NOT NULL DEFAULT 1
);
you could flag every row with 1 or 0 depending on whether it should be displayed or not, and then your query could look like
SELECT * FROM MYDATA WHERE DISPLAY=1 ORDER BY NAME
to get your list of values.
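For the original "after rohit" requirement, the flag could be back-filled once, e.g. (purely illustrative):

UPDATE MYDATA SET DISPLAY = 0 WHERE NAME <= 'rohit';  -- hide everything that sorts at or before 'rohit'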
It's not much of a difference with ten rows, and you don't even need indexes here, but if you build something bigger, say 10,000+ rows, you'd be surprised how slow that would become!
In general, TEXT columns are fine to select and display, but should be avoided in WHERE conditions as much as you can. Use descriptive columns, preferably int fields, which can be indexed with extremely high efficiency, so the application doesn't get slower even if the row count goes over 100k.
You can use "default" keyword for it.
CREATE TABLE Persons (
    ID int NOT NULL,
    name varchar(255) DEFAULT 'rohit'
);
I have two SQL tables in Postgres: a staging table and the main table. Among the various reasons for the staging table, the data I am uploading has irregular and inconsistent formats for all of the date columns. During the upload process these values go into the staging table as varchars to be manipulated into usable formats.
In the main table the column type for the date fields is date; in the staging table they are varchar.
The question is, does Postgres support a copy expression similar to
insert into production_t select *,textdate::date from staging_t
I need to change the format of a single field during the copy process. I know I can individually type out all of the column names during the insert and typecast the date columns there, but this table has over 200 columns and is one of 10 tables with similar issues. I want to accomplish this insert-plus-typecast in a single statement I can apply to all tables rather than having to type 2000+ lines of SQL.
You have to write out every column in such a query; there is no shorthand.
And may I say that a design with 200 columns is questionable.
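That said, the column list can be generated from the catalog instead of typed by hand. A hedged sketch (table and column names are the ones from the question; textdate is assumed to be the only column needing the cast): the query below prints the INSERT statement, which you then run as a second step.

SELECT 'INSERT INTO production_t SELECT '
    || string_agg(
           CASE WHEN column_name = 'textdate'
                THEN 'textdate::date'                 -- cast the varchar date column
                ELSE quote_ident(column_name)         -- every other column passes through unchanged
           END, ', ' ORDER BY ordinal_position)
    || ' FROM staging_t;' AS generated_sql
FROM information_schema.columns
WHERE table_name = 'staging_t';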
I have a table with an id and a name.
I'm getting a list of ids and I need their names.
As far as I know, I have two options.
Create a for loop in my code which executes:
SELECT name from table where id=x
where x is always a number.
Or write a single query like this:
SELECT name from table where id=1 OR id=2 OR id=3
The list of ids and names is enormous, so I think you wouldn't want that.
The problem with the ids is that an id is not always a number but a randomly generated id containing numbers and characters, so talking about ranges is not a solution.
I'm asking this from a performance point of view.
What's a nice solution for this problem?
SQLite has limits on the size of a query, so if there is no known upper limit on the number of IDs, you cannot use a single query.
When you are reading multiple rows (note: IN (1, 2, 3) is easier than many ORs), you don't know to which ID a name belongs unless you also SELECT that, or sort the results by the ID.
There should be no noticeable difference in performance; SQLite is an embedded database without client/server communication overhead, and the query does not need to be parsed again if you use a prepared statement.
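For example, selecting the id alongside the name keeps the mapping explicit (the table name and the literal list are just placeholders):

SELECT id, name FROM mytable WHERE id IN (1, 2, 3);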
A "nice" solution is using the INoperator:
SELECT name from table where id in (1,2,3)
Also, the IN operator is syntactic sugar built for exactly this purpose.
SELECT name from table where id IN (1,2,3,4,5,6.....)
Assuming you are given the list of IDs you have to look names up for as an input temp table #InputIDTable:
SELECT name from table WHERE ID IN (SELECT id from #InputIDTable)
This is not query related; what I would like to know is whether it's possible to have a field in a column displayed as the sum of other fields, a bit like Excel does.
As an example, I have two tables:
Recipes
nrecepie integer
name varchar(255)
time integer
and the other
Instructions
nintrucion integer
nrecepie integer
time integer
So, basically, as a recipe has n instructions, I would like that
recipes.time = sum(instructions.time)
Is it possible to do this in the create table script? If so, how?
You can use a view:
CREATE VIEW recipes_with_time AS
SELECT nrecepie, name, SUM(Instructions.time) AS total_time
FROM Recipes
JOIN Instructions USING (nrecepie)
GROUP BY nrecepie, name;
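Reading from the view then looks just like reading a stored column, e.g. (the id value is only an example):

SELECT name, total_time FROM recipes_with_time WHERE nrecepie = 1;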
If you really want to have that data in the real table, you must use a trigger.
This could be done with an INSERT/UPDATE/DELETE trigger. Every time data is changed in table Instructions, the trigger would run and update the time value in Recipes.
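A hedged sketch of such a trigger (names follow the question; assumes PostgreSQL 11+ for EXECUTE FUNCTION, and that Recipes.time should always mirror the summed instruction times):

CREATE OR REPLACE FUNCTION refresh_recipe_time() RETURNS trigger AS $$
DECLARE
    rid integer;
BEGIN
    -- pick the recipe affected by the change (OLD for deletes, NEW otherwise)
    IF TG_OP = 'DELETE' THEN
        rid := OLD.nrecepie;
    ELSE
        rid := NEW.nrecepie;
    END IF;

    UPDATE Recipes r
    SET time = (SELECT COALESCE(SUM(i.time), 0)
                FROM Instructions i
                WHERE i.nrecepie = rid)
    WHERE r.nrecepie = rid;

    RETURN NULL;  -- AFTER trigger: the return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER instructions_time_sync
AFTER INSERT OR UPDATE OR DELETE ON Instructions
FOR EACH ROW EXECUTE FUNCTION refresh_recipe_time();

(An UPDATE that moves an instruction to a different recipe would need both recipe ids refreshed; this sketch only handles the common cases.)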
You can use a trigger to update the time column every time the Instructions table changes, but a more "normal" (less redundant) way would be to compute the time column via a GROUP BY clause on a join between the Instructions and Recipes tables.
In general, you want to avoid situations like that because you're storing derived information (there are exceptions for performance reasons). Therefore, the best solution is to create a view as suggested by AndreKR. This provides an always-correct total that is as easy to SELECT from the database as if it were in an actual, stored column.
It depends on the database vendor... In SQL Server, for example, you can create a column that calculates its value based on the values of other columns in the same row. They are called computed columns, and you do it like this:
Create Table MyTable
(
    colA Integer,
    colB Integer,
    colC Integer,
    SumABC As colA + colB + colC
)
In general, just put the column name you want, the word 'as', and the formula or expression to generate the value. This approach uses no additional storage; it calculates the value each time someone executes a select against it, so the table profile remains narrower and you get better performance. The only downside is that you cannot put an index on a computed column (although there is an option in SQL Server, PERSISTED, that tells the database to store the value whenever the row is created or updated... in which case it can be indexed).
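A hypothetical variant of the table above using that persisted form, which can then be indexed:

Create Table MyTablePersisted
(
    colA Integer,
    colB Integer,
    colC Integer,
    SumABC As (colA + colB + colC) PERSISTED  -- stored on write, indexable
)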
In your example, however, you are accessing data from multiple rows in another table. To do this, you need a trigger, as suggested by other respondents.