Perform an update query for over 200 items - sql

In Access I have an update function where I can update information in an inventory database one item at a time. I enter the item name and am able to update the cost as well as a date.
Is there a way to write an SQL query to perform an update for 200 unique items?
EDIT:
I have:
|ITEM NAME|ITEM COST|DATE CHANGE|
|A        |$2.00    |1/1/1111   |
|B        |$3.50    |1/2/1111   |
|C        |$4.50    |1/3/1111   |
Let's say there are over 200 item names. I'd want to run a query that keeps the item names but updates the prices and dates:
|ITEM NAME|ITEM COST|DATE CHANGE|
|A        |$3.00    |1/4/1111   |
|B        |$1.50    |1/5/1111   |
|C        |$84.50   |1/6/1111   |
I feel the only way to do it is to write one long update statement, but I don't know if there is a better approach.

I'm not sure if you mean that there are duplicate rows, but the DISTINCT keyword might help. If you were to write an update SQL query, it would be something like this:
UPDATE Accomodation
SET AccomodationName = "YourChoice"
WHERE AccomodationPrice IN (SELECT DISTINCT AccomodationPrice
                            FROM Accomodation)
For more information about DISTINCT, see the SQL documentation.

With a data table called Inventory and another table, Updates, with the same structure containing the 200 or so changes, the update would look like:
UPDATE Inventory
INNER JOIN Updates ON Inventory.[Item Name] = Updates.[Item Name]
SET Inventory.[Item Cost] = Updates.[Item Cost],
    Inventory.[Date Change] = Updates.[Date Change];
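If the Updates table does not exist yet, one way to build it is sketched below (the table and column names are assumptions taken from the example above). Access SQL has no multi-row VALUES clause, so each change needs its own INSERT, or the 200 rows can be pasted in from a spreadsheet. The make-table query copies the Inventory structure without copying any rows:
SELECT [Item Name], [Item Cost], [Date Change] INTO Updates
FROM Inventory
WHERE 1 = 0;

INSERT INTO Updates ([Item Name], [Item Cost], [Date Change])
VALUES ("A", 3.00, #1/4/1111#);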

Related

Using ranking functions without an ORDER BY clause

I have a PostgreSQL database with a query that orders rows by some_date and minutes. The result looks like this:
|id|is_smth|some_date |minutes|
|35|true   |2021-11-10|985    |
|36|true   |2021-11-19|684    |
|35|true   |2021-11-25|605    |
|34|false  |null      |null   |
Then I tried applying DENSE_RANK() to it to implement pagination in my application, but it does not work the way I need it to, since DENSE_RANK() has to order the rows by the given columns.
What I am trying to achieve is to give each row a rank based on its id, in the order produced by the original ORDER BY clause, without changing that order. It would look like this:
|id|is_smth|some_date |minutes|rank|
|35|true   |2021-11-10|985    |1   |
|36|true   |2021-11-19|684    |2   |
|35|true   |2021-11-25|605    |1   |
|34|false  |null      |null   |3   |
I have to retrieve the first two ranks AND all of the rows associated with them (in this case, all of the rows with ids 35 and 36). Is that possible?
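One possible approach (a sketch only; it assumes the table is called my_table and that some_date drives the original ordering) is to rank each id by the first some_date at which it appears:
SELECT id, is_smth, some_date, minutes,
       DENSE_RANK() OVER (ORDER BY first_seen NULLS LAST) AS rank
FROM (
    SELECT *,
           MIN(some_date) OVER (PARTITION BY id) AS first_seen
    FROM my_table
) t
ORDER BY some_date NULLS LAST, minutes;
For pagination, wrap this in an outer query and filter on the rank (e.g. WHERE rank <= 2), which keeps every row belonging to the first two ids.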

Selecting limited results from two tables

I apologise if this has been asked before. I'm still not certain how to phrase my question for the title, so I wasn't sure what to search for.
I have a hundred or so databases in the same instance, one for each of my customers, named for the customer, and they all have the same structure. I want to select a single result set that includes the database name along with the most recent date entry in one of the tables. I can pull the database names from sys.databases, but then for each database I want to select the most recent date from Events.Date_Logged so that my result set looks something like this:
|Cust_Name|Latest_Event|
|Customer1|01/02/2020  |
|Customer2|02/02/2020  |
|Customer3|03/02/2020  |
I'm really struggling with the syntax though. I either get just a single row returned or every single event for each customer. I think my joins are as rusty as hell.
Any help would be appreciated.
What I suggest you do:
Declare a result variable (of type table).
Use a cursor to go over every database.
Inside the cursor, do a SELECT TOP 1 ... ORDER BY date DESC to get the most recent record, and save the result in the result variable.
After the cursor, select from the result variable.
That should do the trick.
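A minimal T-SQL sketch of that approach (hedged: it assumes every customer database has a dbo.Events table with a Date_Logged column, and that database names match customer names):
DECLARE @Results TABLE (Cust_Name sysname, Latest_Event datetime);
DECLARE @db sysname, @sql nvarchar(max), @latest datetime;

DECLARE db_cursor CURSOR FOR
    SELECT name FROM sys.databases WHERE database_id > 4;  -- skip system databases

OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- run SELECT TOP 1 ... ORDER BY date DESC inside each database
    SET @sql = N'SELECT TOP 1 @latest = Date_Logged FROM '
             + QUOTENAME(@db) + N'.dbo.Events ORDER BY Date_Logged DESC;';
    SET @latest = NULL;
    EXEC sp_executesql @sql, N'@latest datetime OUTPUT', @latest = @latest OUTPUT;
    INSERT INTO @Results VALUES (@db, @latest);
    FETCH NEXT FROM db_cursor INTO @db;
END;
CLOSE db_cursor;
DEALLOCATE db_cursor;

SELECT Cust_Name, Latest_Event FROM @Results ORDER BY Cust_Name;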

BigQuery select column only if not null

I am an absolute beginner in BigQuery and SQL, so apologies if this is a dumb question. I have a BigQuery table like this:
|Name|Value1|Value2|Value3|Value4|Value5|Value6|
|Ben |19    |45    |null  |19    |13    |null  |
|Bob |34    |null  |12    |null  |45    |43    |
My query selects only the one row that matches the name in the Name column. I want the result to display only the columns that have non-null values. For example, if I do
SELECT * FROM mytable WHERE Name = "Bob"
I want the result to look like
|Name|Value1|Value3|Value5|Value6|
|Bob |34    |12    |45    |43    |
Similarly, if I select for Ben I want the result to look like
|Name|Value1|Value2|Value4|Value5|
|Ben |19    |45    |19    |13    |
I have tried SELECT IF but don't seem to get the syntax right.
You cannot select a variable number of columns, but you may be able to get close with a combination of aggregate/pivot functions. You may be spending more time than it's worth trying to do it. I spent about two hours on the documentation, and I still feel almost clueless (it doesn't help that I don't have an account there, and my own database does not have the same functions).
See Google's BigQuery Documentation for examples.
I think you may be able to do it with UNNEST() and ARRAY(), but you'll lose the original column header information in the process.
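For example, something along these lines might work (a sketch only; mydataset.mytable and the column list are assumptions based on the sample data). The non-null values come back as a positional array rather than as named columns:
SELECT
  Name,
  ARRAY(
    SELECT v
    FROM UNNEST([Value1, Value2, Value3, Value4, Value5, Value6]) AS v
    WHERE v IS NOT NULL
  ) AS non_null_values
FROM `mydataset.mytable`
WHERE Name = "Bob";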
I doubt it can be achieved, because any SQL statement acts on whole records, i.e. on all of the referenced columns; if a column is null, it is still part of every record that is retrieved. SQL statements retrieve rows with every referenced column.
You cannot do that dynamically in SQL. If you need a query like that, you could create it manually, but it depends on the results you want to achieve.
In the case you showed, for example, the query below would work, but you would lose the table's header reference:
SELECT Name, Value1, Value2, Value4, Value5 FROM mytable WHERE Value3 IS NULL AND Value6 IS NULL
UNION ALL
SELECT Name, Value1, Value3, Value5, Value6 FROM mytable WHERE Value2 IS NULL AND Value4 IS NULL
This example shows that this kind of query becomes complicated to build if you have many conditions. Besides that, UNION ALL always needs the same number of columns in each separate query. If you need a generic query to do that, it is not going to be possible.
I hope it helps.

dbfit - How to assert that an item is not returned in the results

I am creating dbfit test cases and I encountered a scenario where I need to check that an item is not included in the results returned by the query. How can I do that?
Thanks,
If you run a query that counts the number of records matching your criteria, then you can check that the result is zero, something like:
|query |select count(id) as NumberOfRecords from dbo.tablename where criteria='matched'|
|NumberOfRecords |
|0 |

Summing n numerical variables by grouping level specific to each

I am working through a GROUP BY problem and could use some direction at this point. I want to summarize a number of variables by a grouping level which is different (but drawn from the same domain of values) for each of the variables to be summed. In pseudo-pseudo code, this is my issue: for each empYEAR variable (there are 20 or so employment-by-year variables in wide format), I want to sum it by the county in which the business was located in that particular year.
The data is a bunch of tables representing business establishments over a 20-year period from Dun & Bradstreet/NETS.
Some more details on the database: it is a number of flat files, all with the same primary key.
The primary key is DUNSNUMBER, which is present in several tables. There are tables detailing, for each year:
employment
county
sales
credit rating (and others)
all organized as follows (this table shows employment, but the other variables are similarly structured, with a year postfix).
|dunsnumber|emp1990|emp1991|emp1992|...|emp2011|
|a         |12     |32     |31     |...|35     |
|b         |       |2      |3      |...|5      |
|c         |1      |1      |       |...|       |
|d         |40     |86     |104    |...|350    |
...
I would ultimately like to have a table that is structured like this:
|county|emp1990|emp1991|emp1992|...|emp2011|sales1990|sales1991|sales1992|...|sales2011|
|A     |
|B     |
|C     |
...
My main challenge right now is this: how can I sum employment (or sales) by county by year, as in the example table above, given that the county grouping variable sometimes changes from year to year and is specified in another table?
It seems like something that would be fairly straightforward to do in, say, R with a long data format, but there are millions of records, so I prefer to keep the initial processing in postgres.
As I understand your question, this sounds relatively straightforward. While I normally prefer normalized data to work with, I don't see that normalizing things beforehand will buy you anything specific here.
It seems to me you want something relatively simple like:
SELECT c.name, c.state, sum(emp1990), sum(emp1991), ...
FROM county c
JOIN emp e ON c.dunsnumber = e.dunsnumber
JOIN sales s ON c.dunsnumber = s.dunsnumber
JOIN ...
GROUP BY c.name, c.state;
I don't see a simpler way of doing this. Very likely you could query the system catalogs or information schema to generate the list of columns to sum up. The rest is a straight GROUP BY and JOIN process as far as I can tell.
If the grouping variable changes name by year, the best thing to do in my experience is to put together a location view based on that union and join against it, as sketched below. This lets you hide the complexity from your main queries and, as long as you don't also join the underlying tables, it should perform quite well.
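A rough sketch of that union-view idea (hedged: the wide layout county(dunsnumber, county1990, ..., county2011) is an assumption; adjust the names to the real schema). Each wide table is unpivoted to long form with one UNION ALL branch per year, joined on dunsnumber and year, and then summed by county:
CREATE VIEW emp_by_county AS
SELECT c.county, e.year, sum(e.emp) AS total_emp
FROM (
    SELECT dunsnumber, 1990 AS year, emp1990 AS emp FROM emp
    UNION ALL
    SELECT dunsnumber, 1991, emp1991 FROM emp
    -- ... one branch per year, through emp2011
) e
JOIN (
    SELECT dunsnumber, 1990 AS year, county1990 AS county FROM county
    UNION ALL
    SELECT dunsnumber, 1991, county1991 FROM county
    -- ... one branch per year, through county2011
) c ON c.dunsnumber = e.dunsnumber AND c.year = e.year
GROUP BY c.county, e.year;
Pivoting the long result back to the wide county-by-year layout can then be done with aggregate FILTER clauses or with crosstab() from PostgreSQL's tablefunc extension.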