Access text count in query design - sql

I am new to Access and am trying to develop a query that will allow me to count the number of occurrences of one word in each field from a table with 15 fields.
The table simply stores test results for employees. There is one table that stores the employee identification - id, name, etc.
The second table has 15 fields - A1 through A15 with the words correct or incorrect in each field. I need the total number of incorrect occurrences for each field, not for the entire table.
Is there an answer through Query Design, or is code required?
A solution, whether through Query Design or code, would be greatly appreciated!

Firstly, one of the reasons you are struggling to obtain the desired result for what should be a relatively straightforward request is that your data does not follow database normalisation rules; consequently, you are working against the natural operation of an RDBMS when querying your data.
From your description, I assume that the fields A1 through A15 are answers to questions on a test.
By representing these as separate fields within your database, aside from the inherent difficulty in querying the resulting data (as you have discovered), if ever you wanted to add or remove a question to/from the test, you would be forced to restructure your entire database!
Instead, I would suggest structuring your table in the following way:
Results
+------------+------------+-----------+
| EmployeeID | QuestionID | Result    |
+------------+------------+-----------+
| 1          | 1          | correct   |
| 1          | 2          | incorrect |
| ...        | ...        | ...       |
| 1          | 15         | correct   |
| 2          | 1          | correct   |
| 2          | 2          | correct   |
| ...        | ...        | ...       |
+------------+------------+-----------+
This table would be a junction table (a.k.a. linking / cross-reference table) in your database, supporting a many-to-many relationship between the tables Employees & Questions, which might look like the following:
Employees
+--------+-----------+-----------+------------+------------+-----+
| Emp_ID | Emp_FName | Emp_LName | Emp_DOB    | Emp_Gender | ... |
+--------+-----------+-----------+------------+------------+-----+
| 1      | Joe       | Bloggs    | 01/01/1969 | M          | ... |
| ...    | ...       | ...       | ...        | ...        | ... |
+--------+-----------+-----------+------------+------------+-----+
Questions
+-------+------------------------------------------------------------+--------+
| Qu_ID | Qu_Desc                                                    | Qu_Ans |
+-------+------------------------------------------------------------+--------+
| 1     | What is the meaning of life, the universe, and everything? | 42     |
| ...   | ...                                                        | ...    |
+-------+------------------------------------------------------------+--------+
With this structure, if ever you wish to add or remove a question from the test, you can simply add or remove a record from the table, without needing to restructure your database or rewrite any of the queries, forms, or reports which depend upon the existing structure.
Furthermore, since the result of an answer is likely to be a binary correct or incorrect, this would be better (and far more efficiently) represented using a Boolean True/False data type, e.g.:
Results
+------------+------------+--------+
| EmployeeID | QuestionID | Result |
+------------+------------+--------+
| 1          | 1          | True   |
| 1          | 2          | False  |
| ...        | ...        | ...    |
| 1          | 15         | True   |
| 2          | 1          | True   |
| 2          | 2          | True   |
| ...        | ...        | ...    |
+------------+------------+--------+
Not only does this consume less memory in your database, but this may be indexed far more efficiently (yielding faster queries), and removes all ambiguity and potential for error surrounding typos & case sensitivity.
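For reference, a minimal Access DDL sketch of this structure might look like the following (the constraint names are arbitrary; LONG and YESNO are the Access DDL types for long integer and Boolean, and the composite primary key also prevents duplicate answers for the same employee and question):
CREATE TABLE Results (
    EmployeeID LONG NOT NULL,
    QuestionID LONG NOT NULL,
    Result     YESNO,
    CONSTRAINT PK_Results PRIMARY KEY (EmployeeID, QuestionID),
    CONSTRAINT FK_Results_Employees FOREIGN KEY (EmployeeID) REFERENCES Employees (Emp_ID),
    CONSTRAINT FK_Results_Questions FOREIGN KEY (QuestionID) REFERENCES Questions (Qu_ID)
);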
With this new structure, if you wanted to see the number of correct answers for each employee, the query can be something as simple as:
select results.employeeid, count(*)
from results
where results.result = true
group by results.employeeid
Alternatively, if you wanted to view the number of employees answering each question correctly (for example, to understand which questions most employees got wrong), you might use something like:
select results.questionid, count(*)
from results
where results.result = true
group by results.questionid
The above are obviously very basic example queries, and you would likely want to join the Results table to an Employees table and a Questions table to obtain richer information about the results.
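For example, a sketch of such a join, using the Employees fields shown above to attach names to the per-employee counts:
SELECT employees.emp_id, employees.emp_fname, employees.emp_lname, COUNT(*) AS CorrectAnswers
FROM results INNER JOIN employees ON results.employeeid = employees.emp_id
WHERE results.result = true
GROUP BY employees.emp_id, employees.emp_fname, employees.emp_lname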
Contrast the above with your current database structure -
Per your original question:
The second table has 15 fields - A1 through A15 with the words correct or incorrect in each field. I need the total number of incorrect occurrences for each field, not for the entire table.
Assuming that you want to view the number of incorrect answers by employee, you are forced to use an incredibly messy query such as the following:
select
employeeid,
iif(A1='incorrect',1,0)+
iif(A2='incorrect',1,0)+
iif(A3='incorrect',1,0)+
iif(A4='incorrect',1,0)+
iif(A5='incorrect',1,0)+
iif(A6='incorrect',1,0)+
iif(A7='incorrect',1,0)+
iif(A8='incorrect',1,0)+
iif(A9='incorrect',1,0)+
iif(A10='incorrect',1,0)+
iif(A11='incorrect',1,0)+
iif(A12='incorrect',1,0)+
iif(A13='incorrect',1,0)+
iif(A14='incorrect',1,0)+
iif(A15='incorrect',1,0) as IncorrectAnswers
from
YourTable
Here, notice that the answer numbers are also hard-coded into the query, meaning that if you decide to add a new question or remove an existing question, not only would you need to restructure your entire database, but queries such as the above would also need to be rewritten.
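(For completeness, since the original question asked for the number of incorrect answers per field rather than per employee: under the current structure that, too, takes one SELECT per answer column, e.g.:)
SELECT 'A1' AS Question, SUM(IIF(A1='incorrect',1,0)) AS Incorrect FROM YourTable
UNION ALL
SELECT 'A2', SUM(IIF(A2='incorrect',1,0)) FROM YourTable
UNION ALL
SELECT 'A3', SUM(IIF(A3='incorrect',1,0)) FROM YourTable
...and so on through A15, whereas the normalised design needs only:
SELECT questionid, COUNT(*) AS Incorrect FROM results WHERE result = false GROUP BY questionid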

Related

Database design for partially changing data points, with history and snapshot functionality?

I'm looking for a best practice or solution, on a conceptual level, to a problem I'm working on.
I have a collection of data points (around 500) which are partially changed, by a user, over time. It is important to be able to tell which values were changed at what point in time. The data might look like this:
Data changed over time:
+------------+-------------+-------------+-------------+-----+---------------+
| Date       | Value no. 1 | Value no. 2 | Value no. 3 | ... | Value no. 500 |
|------------+-------------+-------------+-------------+-----+---------------|
| 1/1/2018   |             |             | 2           |     | 1             |
| 1/3/2018   | 2           | 1           |             |     |               |
| 1/7/2018   |             |             | 4           |     | 8             |
| 1/12/2018  | 5           | 3           |             |     |               |
| ...        |             |             |             |     |               |
+------------+-------------+-------------+-------------+-----+---------------+
It must be possible to take a snapshot at a certain point in time, to get a complete set of data points, that were valid for that particular point in time, like this:
Snapshot taken 1/3/2018 will yield:
+-----------+-----------+-----------+-------+-------------+
| Value 1   | Value 2   | Value 3   | ...   | Value 500   |
|-----------+-----------+-----------+-------+-------------|
| 2         | 1         | 2         | 0     | 1           |
+-----------+-----------+-----------+-------+-------------+
Snapshot taken 1/9/2018 will yield:
+-----------+-----------+-----------+-------+-------------+
| Value 1   | Value 2   | Value 3   | ...   | Value 500   |
|-----------+-----------+-----------+-------+-------------|
| 2         | 1         | 4         | 0     | 8           |
+-----------+-----------+-----------+-------+-------------+
Snapshot taken 1/13/2018 will yield:
+-----------+-----------+-----------+-------+-------------+
| Value 1   | Value 2   | Value 3   | ...   | Value 500   |
|-----------+-----------+-----------+-------+-------------|
| 5         | 3         | 4         | 0     | 8           |
+-----------+-----------+-----------+-------+-------------+
and so on...
I'm not bound by a particular database technology, so either SQL or NoSQL will do. It is probably not possible to satisfy all the requirements in the DB-domain - some will probably have to be addressed in code. But my main question is what database technology is best suited for this task?
I'm not quite sure this fits a time-series database (TSDB), since only a portion of the values are changed at a given time, and it is important to know which values changed. Maybe I'm wrong?
/Chris
My suggestion would be to model this in a sparse format, something like:
CREATE TABLE DataPoint (
    DataID     int,        /* 1 to 500 in your example, or whatever you need to identify it */
    ValidFrom  timestamp,  /* default value 01/01/1970-00:00:00 or a suitable "Epoch" */
    ValidUntil timestamp,  /* default value 31/12/3999-00:00:00, or again something that is in the far future for your case */
    value      Number(7,5) /* again, this may be any data type, or even more than one field if needed, like Price & Currency */
);
What we have just defined is a set of data points and the "interval" in which each one has a specific value, so if you measured DataPoint 1 yesterday and got a value of 89.768, you will insert:
DataId=1
ValidFrom=26/11/2018-14:52:41
ValidUntil=31/12/3999-00:00:00
Value=89.768
Then you measure it again tomorrow and get:
DataId=1
ValidFrom=28/11/2018-14:51:23
ValidUntil=31/12/3999-00:00:00
Value=89.443
(Let's assume that you also have logic so that, when you record a new value, you update the current record and assign ValidUntil=28/11/2018-14:51:23; this is not strictly needed, but it will make the example query simpler.)
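In practice, that logic could be a pair of statements run for each new measurement, sketched here with the same date format as the examples (quote and format the literals as your actual DBMS requires):
-- close the record that is currently valid for this data point
UPDATE DataPoint
SET ValidUntil = '28/11/2018-14:51:23'
WHERE DataID = 1
AND ValidUntil = '31/12/3999-00:00:00';
-- then insert the new measurement as the currently valid record
INSERT INTO DataPoint (DataID, ValidFrom, ValidUntil, value)
VALUES (1, '28/11/2018-14:51:23', '31/12/3999-00:00:00', 89.443);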
One month from now you have accumulated more measurements for data #1, and the same, on different moments, for data #2 to 500.
You now want to find out what the values were at noon on 27/11/2018 (i.e. roughly one month "ago"):
SELECT DataID, Value FROM DataPoint WHERE ValidFrom <= '27/11/2018-12:00:00' AND ValidUntil > '27/11/2018-12:00:00'
This will return:
001,89.768
002,45.678
...,...
500,112.809
Regarding logging who did this, or for what reason, you can either log it separately (saving for example DataPoint Id, Timestamp, UserId...) or make it part of the original table, so that whenever you register a new datapoint you also log who measured it.
Have a look at the SQL Server temporal tables engine, which may be a solution in your case. This approach allows you to run the queries mentioned in the question, for example:
SELECT *
FROM my_data
FOR SYSTEM_TIME AS OF '2018-01-01'
However, the table in the example seems to be very large (maybe denormalised). I would suggest grouping columns by some technical or functional characteristic (vertical partitioning) to avoid further maintenance drawbacks.
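For illustration, a minimal system-versioned table in SQL Server might be declared like this (table and column names are placeholders):
CREATE TABLE dbo.my_data (
    id int NOT NULL PRIMARY KEY,
    value decimal(9,5),
    SysStartTime datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    SysEndTime   datetime2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.my_data_history));
The engine then maintains the history table automatically, and the FOR SYSTEM_TIME AS OF query above reads from it transparently.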

Using SQL in Domo to apply multiple groups or filters

I am trying to filter for unique UserIDs and then see which IDs have a value of >1 in another column. I have tried pretty much every example from SQL - Selecting unique values from one column then filtering based on another, and from at least one or two more sites, but I can't find the appropriate links for those as they turned out to be a little unrelated anyway.
An example of the relevant columns of my dataset are as follows:
+---------+-----------+
| UserIDs | EventType |
+---------+-----------+
| 100     | Start     |
| 100     | Start     |
| 100     | Finish    |
| 100     | Finish    |
| 200     | Start     |
| 200     | Start     |
| 200     | Start     |
| 200     | Finish    |
| 200     | Finish    |
| 200     | Finish    |
| 300     | Start     |
| 400     | Start     |
| 400     | Finish    |
+---------+-----------+
What I am trying to figure out is how many users triggered EventType-Finish more than once. The data I would want from the example above would be:
+-------------------------------------------------+
| Total # of students that battled more than once |
+-------------------------------------------------+
| 2                                               |
+-------------------------------------------------+
None of the Group By stuff seems right because it would just compress the other rows into each other?
For the record I am very new to SQL and programming in general so try not to be too technical in your answer. Anytime I get remotely close to thinking I have solved it, I run it and it gives me syntax errors so I have no idea where else to turn.
Sorry guys, I wrote the output incorrectly; I am actually just looking for the number of students who triggered Finish more than once. Also, what would I use in place of a table name (e.g. FROM "TABLE"), since Domo has a kind of strange way of breaking up the hierarchy? For context, my table is called "Metrics Data", and trying to type that into the table name generally converts the data into SQL code.
Attempt at answer
Try this answer,
SELECT UserIDs, COUNT(1) AS `# of Finishes`
FROM Your_Table
WHERE EventType = 'Finish'
GROUP BY UserIDs
HAVING COUNT(1) > 1
Hope it helps.
I would like to do some conditional aggregation with the help of CASE expressions:
SELECT
    UserIDs AS `UserID`,
    SUM(CASE EventType WHEN 'Finish' THEN 1 ELSE 0 END) AS `# of Finishes`
FROM <table>
GROUP BY UserIDs
HAVING SUM(CASE EventType WHEN 'Finish' THEN 1 ELSE 0 END) > 1
Result:
UserID    # of Finishes
100       2
200       3
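Since the question ultimately asks for a single number (how many students triggered Finish more than once), either of the above can be wrapped in an outer count. A sketch in Domo's MySQL dialect, assuming the input table really is called Metrics Data (the backticks let you reference a name containing a space):
SELECT COUNT(*) AS students_with_multiple_finishes
FROM (
    SELECT UserIDs
    FROM `Metrics Data`
    WHERE EventType = 'Finish'
    GROUP BY UserIDs
    HAVING COUNT(*) > 1
) finishers;
Against the sample data this returns 2, matching the expected output.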

Organizing & normalising RSS Feed categories data

I am having trouble normalising data from an RSS feed into a database.
Each post would have id and categories.
The problem I am having is that categories is a list which is not predefined in size. By 1NF I should split a list up such that each column only has atomic data:
+----+----------+
| id | name     |
+----+----------+
| 1  | flying   |
| 2  | swimming |
| 3  | throwing |
| 4  | sleeping |
| 5  | etc      |
+----+----------+
However, blog posts can have more than one category tagged. This means that the posts table can have a list of ids of the categories tagged.
Alternatively, the categories table can have two ids:
+----+--------+----------+
| id | postId | name     |
+----+--------+----------+
| 1  | 1      | flying   |
| 2  | 1      | swimming |
| 3  | 1      | throwing |
| 4  | 2      | flying   |
| 5  | 2      | swimming |
| 6  | 2      | etc      |
+----+--------+----------+
And the posts table id will reference the postId column. However, there is repeated data, which is not good.
Lastly, another method I had thought of was to put all the categories in one table:
+----+--------+----------+----------+----------+-----+
| id | flying | swimming | throwing | sleeping | etc |
+----+--------+----------+----------+----------+-----+
| 1  | 1      | 1        | 1        | 1        | 1   |
| 2  | 0      | 1        | 0        | 0        | 0   |
| 3  | 1      | 1        | 0        | 0        | 1   |
| 4  | 0      | 0        | 1        | 1        | 1   |
+----+--------+----------+----------+----------+-----+
1s representing present and 0s representing absent, the id in the posts table references id. This method would not have any repeated data. However, categories from blogs can be created at will, making it hard to maintain such a table as I would need to update it every time there is a new category.
How do I put my database in 3NF, eliminating repetition while keeping it maintainable?
TL;DR "Repeated data" is a bugbear. Learn about design and normalization. Start with rows/tables that make clear straightforward relevant statements about an arbitrary situation. So far all you need is:
-- [id] identifies a post with ...
Post(id, ...)
-- post [id] is tagged [name]
Post_Category(id, name)
there is repeated data, which is not good
What exactly do you think "repeated data" is? And why exactly do you think it's "not good"?
There is nothing intrinsically bad about having the same value appear multiple times as a column of a row or part of a value for a column of a row. What matters is whether rows in tables say overlapping things about a situation in certain ways.
Normalization replaces a table by projections of it that join back to it. That means that it replaces tables whose rows say (ie have predicate) "some stuff AND other stuff" about column values by tables whose rows say "some stuff" and "other stuff" separately. Having "AND"s in such a row/table meaning isn't always bad. When there's only one AND, normalization says to decompose to a particular pair of tables exactly when no shared column set always holds a unique set of values in either of the two tables.
put all the categories in one table
Although there is nothing about such a design that would cause normalization to decompose it, your last table is a "bad" design. (Sometimes this kind of design with repeated similar columns is said to violate some notion of "1NF" or "normalization", but that is a misconception.) Eg its rows say "(post [id] is tagged 'flying' AND [flying] = 1 OR post [id] is not tagged 'flying' AND [flying] = 0) AND (post [id] is tagged 'swimming' AND [swimming] = 1 OR post [id] is not tagged 'swimming' AND [swimming] = 0) AND ..." when instead we could just have a table Post_Category with rows saying "post [id] is tagged [name]". Eg we cannot write queries that ask about all categories without mentioning all categories explicitly. Eg if we add a new category then we must add a new column to the table, and if we want our past queries re all categories to keep meaning the same thing, we must rewrite them to include the new column so that they still refer to all categories.
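To make that concrete: with the Post_Category table, a question such as "how many posts carry each tag?" needs no category names at all (a minimal sketch):
SELECT name, COUNT(*) AS post_count
FROM Post_Category
GROUP BY name;
The equivalent against the wide table requires one expression per category column, rewritten every time a category is added.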
PS It's not clear why you introduced ids. There are reasons we do so, but you should do it for a reason. (Normalization does not introduce ids.) Eg introducing post ids if posts are not uniquely identifiable by other information we want to record.

What is a structured way to build a MySQL query?

I consider myself fairly competent in understanding and manipulating C-ish languages; it's not a problem for me to come up with an algorithm and implement it in any C-ish language.
I have tremendous difficulty writing SQL (in my specific case, MySQL) queries. For very simple queries, it isn't a problem, but for complex queries, I become frustrated not knowing where to start. Reading the MySQL documentation is difficult, mainly because the syntax description and explanation isn't organized very well.
For example, the SELECT documentation is all over the map: it starts out with what looks like pseudo-BNF, but then (since the text for aggregate descriptions isn't clickable... like select_expr) it quickly devolves into this frustrating exercise of trying to piece the syntax together yourself by having a number of browser windows open.
Enough whining.
I'd like to know how people, step by step, begin constructing a complex MySQL query. Here is a specific example. I have three tables below. I want to SELECT a set of rows with the following characteristics:
From the userInfo and userProgram tables, I want to select the userName, isApproved, and modifiedTimestamp fields and UNION them into one set. From this set I want to ORDER BY modifiedTimestamp, taking the MAX(modifiedTimestamp) for every user (i.e. there should be only one row per userName, and the timestamp associated with that userName should be as high as possible).
From the user table, I want to match the firstName and lastName that is associated with the userName so that it looks something like this:
+-----------+----------+----------+-------------------+
| firstName | lastName | userName | modifiedTimestamp |
+-----------+----------+----------+-------------------+
| JJ        | Prof     | jjprofUs | 1289914725        |
| User      | 2        | user2    | 1289914722        |
| User      | 1        | user1    | 1289914716        |
| User      | 3        | user3    | 1289914713        |
| User      | 4        | user4    | 1289914712        |
| User      | 5        | user5    | 1289914711        |
+-----------+----------+----------+-------------------+
The closest I've got is a query that looks like this:
(SELECT firstName, lastName, user.userName, modifiedTimestamp
FROM user, userInfo
WHERE user.userName=userInfo.userName)
UNION
(SELECT firstName, lastName, user.userName, modifiedTimestamp
FROM user, userProgram
WHERE user.userName=userProgram.userName)
ORDER BY modifiedTimestamp DESC;
I feel like I'm pretty close but I don't know where to go from here or even if I'm thinking about this in the right way.
> user
+--------------------+--------------+------+-----+---------+-------+
| Field              | Type         | Null | Key | Default | Extra |
+--------------------+--------------+------+-----+---------+-------+
| userName           | char(8)      | NO   | PRI | NULL    |       |
| firstName          | varchar(255) | NO   |     | NULL    |       |
| lastName           | varchar(255) | NO   |     | NULL    |       |
| email              | varchar(255) | NO   | UNI | NULL    |       |
| avatar             | varchar(255) | YES  |     | ''      |       |
| password           | varchar(255) | NO   |     | NULL    |       |
| passwordHint       | text         | YES  |     | NULL    |       |
| access             | int(11)      | NO   |     | 1       |       |
| lastLoginTimestamp | int(11)      | NO   |     | -1      |       |
| isActive           | tinyint(4)   | NO   |     | 1       |       |
+--------------------+--------------+------+-----+---------+-------+
> userInfo
+-------------------+------------+------+-----+---------+-------+
| Field             | Type       | Null | Key | Default | Extra |
+-------------------+------------+------+-----+---------+-------+
| userName          | char(8)    | NO   | MUL | NULL    |       |
| isApproved        | tinyint(4) | NO   |     | 0       |       |
| modifiedTimestamp | int(11)    | NO   |     | NULL    |       |
| field             | char(255)  | YES  |     | NULL    |       |
| value             | text       | YES  |     | NULL    |       |
+-------------------+------------+------+-----+---------+-------+
> userProgram
+-------------------+--------------+------+-----+---------+-------+
| Field             | Type         | Null | Key | Default | Extra |
+-------------------+--------------+------+-----+---------+-------+
| userName          | char(8)      | NO   | PRI | NULL    |       |
| isApproved        | tinyint(4)   | NO   | PRI | 0       |       |
| modifiedTimestamp | int(11)      | NO   |     | NULL    |       |
| name              | varchar(255) | YES  |     | NULL    |       |
| address1          | varchar(255) | YES  |     | NULL    |       |
| address2          | varchar(255) | YES  |     | NULL    |       |
| city              | varchar(50)  | YES  |     | NULL    |       |
| state             | char(2)      | YES  | MUL | NULL    |       |
| zip               | char(10)     | YES  |     | NULL    |       |
| phone             | varchar(25)  | YES  |     | NULL    |       |
| fax               | varchar(25)  | YES  |     | NULL    |       |
| ehsChildren       | int(11)      | YES  |     | NULL    |       |
| hsChildren        | int(11)      | YES  |     | NULL    |       |
| siteCount         | int(11)      | YES  |     | NULL    |       |
| staffCount        | int(11)      | YES  |     | NULL    |       |
| grantee           | varchar(255) | YES  |     | NULL    |       |
| programType       | varchar(255) | YES  |     | NULL    |       |
| additional        | text         | YES  |     | NULL    |       |
+-------------------+--------------+------+-----+---------+-------+
From what I understand of your question, you seem to need a correlated query, which would look like this:
(SELECT firstName, lastName, user.userName, modifiedTimestamp
FROM user, userInfo ui1
WHERE user.userName = ui1.userName
AND modifiedTimestamp = (SELECT MAX(modifiedTimestamp) FROM userInfo ui2 WHERE ui1.userName = ui2.userName))
UNION
(SELECT firstName, lastName, user.userName, modifiedTimestamp
FROM user, userProgram up1
WHERE user.userName = up1.userName
AND modifiedTimestamp = (SELECT MAX(modifiedTimestamp) FROM userProgram up2 WHERE up1.userName = up2.userName))
ORDER BY modifiedTimestamp DESC;
So, how do I proceed to get to this result? The key is: express clearly the information you want to retrieve, without taking mental shortcuts.
Step 1: Choose the fields you need in the different tables of your database. That's what goes between SELECT and FROM. Seems obvious, but it becomes less obvious when it comes to aggregation functions like sums or counts. In that case, you have to say, for example, "I need the count of lines in userInfo for each firstName". See below in GROUP BY.
Step 2: Knowing the field you need, write the joins between the different corresponding tables. That's an easy one...
Step 3: Express your conditions. These can be easy, as when you want data from user for userName='RZEZDFGBH', or more complicated, like in your case: if you want only the most recent modifiedTimestamp, the way to formulate it so you can get the thing done is "such that the modifiedTimestamp is equal to the most recent modifiedTimestamp" (that's where you can easily take a mental shortcut and miss the point).
Step 4: If you have aggregates, it's time to set the GROUP BY clause. For example, if you count all lines in userInfo for each firstName, you would write "GROUP BY firstName":
SELECT firstName,count(*) FROM userInfo GROUP BY firstName
This gives you the number of entries in the table for each different firstName.
Step 5: HAVING conditions. These are conditions on the aggregates. In the previous example, if you wanted only the data for the firstName having more than 5 lines in the table, you could write SELECT firstName,count(*) FROM userInfo GROUP BY firstName HAVING count(*)>5
Step 6: Sort with ORDER BY. Pretty easy...
That's only a short summary. There is much, much more to discover, but it would be too long to write an entire SQL course here... Hope it helps, though!
As f00 says, it's simple(r) if you think of the data in terms of sets.
One of the issues with the question as it stands is that the expected output doesn't match the stated requirements - the description mentions the isApproved column, but this doesn't appear anywhere in either the query or the expected output.
What this illustrates is that the first step in writing a query is to have a clear idea of what you want to achieve. The bigger issue with the question as it stands is that this is not clearly described - instead, it moves from a sample table of expected output (which would be more helpful if we had corresponding samples of expected input data) straight into a description of how you intend to achieve it.
As I understand it, what you want to see is a list of users (by username, with their associated first and last names), together with the last time any associated record was modified on either the userInfo or userProgram tables.
(It isn't clear whether you want to see users who have no associated activity on either of these other tables - your supplied query implies not, otherwise the joins would be outer joins.)
So, you want a list of users (by username, with their associated first and last names):
SELECT firstName, lastName, userName
FROM user
together with a list of times that records were last modified:
SELECT userName, MAX(modifiedTimestamp)
...
on either the userInfo or userProgram tables:
...
FROM
(SELECT userName, modifiedTimestamp FROM userInfo
UNION ALL
SELECT userName, modifiedTimestamp FROM userProgram
) subquery -- <- this is an alias
...
by userName:
...
group by userName
These two sets of data need to be linked by their userName - so the final query becomes:
SELECT user.firstName, user.lastName, user.userName,
MAX(subquery.modifiedTimestamp) last_modifiedTimestamp
FROM user
JOIN
(SELECT userName, modifiedTimestamp FROM userInfo
UNION ALL
SELECT userName, modifiedTimestamp FROM userProgram
) subquery
ON user.userName = subquery.userName
GROUP BY user.userName
In most versions of SQL, this query would return an error, as user.firstName and user.lastName are not included in the GROUP BY clause, nor are they summarised.
MySQL allows this syntax; in other SQLs, since those fields are functionally dependent on userName, adding a MAX in front of each field or adding them to the grouping would achieve the same result.
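For what it's worth, a portable variant of the final query simply adds the name fields to the grouping:
SELECT user.firstName, user.lastName, user.userName,
       MAX(subquery.modifiedTimestamp) last_modifiedTimestamp
FROM user
JOIN
    (SELECT userName, modifiedTimestamp FROM userInfo
     UNION ALL
     SELECT userName, modifiedTimestamp FROM userProgram
    ) subquery
ON user.userName = subquery.userName
GROUP BY user.userName, user.firstName, user.lastName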
A couple of additional points:
UNION and UNION ALL are not identical - the former removes duplicates while the latter does not; this makes the former more processor-intensive.
Since duplicates will be removed by the grouping, it is better to use UNION ALL.
Many people will write this query as user joined to userInfo UNIONed ALL with user joined to userProgram - this is because many SQL engines can optimise this type of query more effectively.
At this point, this represents premature optimisation.
There's a lot of good stuff here. Thanks to everyone who contributed. This is a quick summary of the things I found helpful as well as some additional thoughts in connecting building functions to building queries. I wish I could give everyone SO merit badges/points but I think that there can only be one (answer) so I'm picking Traroth based upon point total and personal helpfulness.
A function can be understood as three parts: input, process, output. A query can be understood similarly. Most queries look something like this:
SELECT stuff FROM data WHERE data is like something
The SELECT portion is the output. There are some capabilities for formatting the output here (i.e. using AS)
The FROM portion is the input. The input should be seen as a pool of data; you will want to make this as specific as possible, using a variety of joins and subqueries that are appropriate.
The WHERE portion is like the process, but there's a lot of overlap with the FROM portion. Both the FROM and WHERE portions can reduce the pool of data appropriately using a variety of conditions to filter out unwanted data (or to only included desired data). The WHERE portion can also help format the output.
Here's how I broke down the steps:
Start with thinking about what your output looks like. This stuff goes into the SELECT portion.
Next, you want to define the set of data that you wish to work on. Traroth notes: "Knowing the field you need, write the joins between the different corresponding tables. That's an easy one..." It depends on what you mean by 'easy'. If you are new to writing queries, you will probably just default to writing inner joins (like I did). This is not always the best way to go. http://en.wikipedia.org/wiki/Join_(SQL) is a great resource for understanding the different kinds of joins possible.
As a part of the previous step think about smaller parts of that data set and build up to the complete data set you are interested in. In writing a function, you can write subfunctions to help express your process in a clearer manner. Similar to that, you can write subqueries. A huge tip from Mark Bannister in creating a subquery AND USING AN ALIAS. You will have to reconfigure your output to use this alias, but this is pretty key.
Last, you can use various methods to pare down your data set, removing data you're not interested in.
One way to think about the data you are operating on is a giant 2-D matrix: JOINs make larger the horizontal aspect, UNIONs make larger the vertical aspect. All the other filters are designed to make this matrix smaller to be appropriate for your output. I don't know if there is a "functional" analogy to JOIN, but UNION is just adding the output of two functions together.
I realize, though, there are lots of ways that building query IS NOT like writing a function. For example, you can build and pare down your data set in both the FROM and WHERE areas. What was key for me was understanding joins and finding out how to create subqueries using aliases.
just learn to think in terms of sets - then it's simple :P
http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html
You can't construct SQL without understanding the data in the tables and the logical result required. There's no background given for what the data in the tables might look like and mean, and the description of the results you're trying to gather doesn't make sense to me, so I'm not going to venture a guess.
On the latter point... it's rare that you'd want a union of timestamp values from multiple sources. Generally, when results like that are gathered, it's for some sort of auditing/tracing. However, when you're discarding all information about the source of the timestamp and just computing a maximum, you have... well, what exactly?
Anyways, one or more examples of data and desired output and maybe something about the application and the whys is a must to make yourself clear.
To the extent I'll make any prediction about the shape of your eventual statement (assuming your task will still be to get a single maximum timestamp per user), it's that it will look something like this:
select u.firstname, u.lastname, user_max_time.userName, user_max_time.max_time
from users u,
( select (sometable).userName, max((sometable).(timestamp column)) max_time
from (data of interest)
group by (sometable).userName) user_max_time
where u.userName = user_max_time.userName
order by max_time desc;
Your task here would then be to replace the ()s inside the user_max_time subselect with something that makes sense and maps to your requirements. In terms of a general approach to complex SQL, the major suggestion is to build the query from the innermost subselects back out (testing along the way to make sure performance is OK and you don't need intermediate tables).
Anyways, if you're having trouble, and can come back with examples, would be happy to help.
Cheers,
Ben

Retrieve comma delimited data from a field

I've created a form in PHP that collects basic information. I have a list box that allows multiple items selected (i.e. Housing, rent, food, water). If multiple items are selected they are stored in a field called Needs separated by a comma.
I have created a report ordered by the person's needs. The people who only have one need are sorted correctly, but the people who have multiple needs are sorted exactly as the string passed to the database (i.e. housing, rent, food, water), which is not what I want.
Is there a way to separate the multiple values in this field using SQL to count each need instance/occurrence as 1 so that there are no comma delimitations shown in the results?
Your database is not in the first normal form. A non-normalized database will be very problematic to use and to query, as you are actually experiencing.
In general, you should be using at least the following structure. It can still be normalized further, but I hope this gets you going in the right direction:
CREATE TABLE users (
    user_id int,
    name    varchar(100)
);
CREATE TABLE users_needs (
    need    varchar(100),
    user_id int
);
Then you should store the data as follows:
-- TABLE: users
+---------+-------+
| user_id | name  |
+---------+-------+
| 1       | joe   |
| 2       | peter |
| 3       | steve |
| 4       | clint |
+---------+-------+
-- TABLE: users_needs
+---------+----------+
| need    | user_id  |
+---------+----------+
| housing | 1        |
| water   | 1        |
| food    | 1        |
| housing | 2        |
| rent    | 2        |
| water   | 2        |
| housing | 3        |
+---------+----------+
Note how the users_needs table is defining the relationship between one user and one or many needs (or none at all, as for user number 4.)
To normalise your database further, you should also use another table called needs, and as follows:
-- TABLE: needs
+---------+---------+
| need_id | name    |
+---------+---------+
| 1       | housing |
| 2       | water   |
| 3       | food    |
| 4       | rent    |
+---------+---------+
Then the users_needs table should just refer to a candidate key of the needs table instead of repeating the text.
-- TABLE: users_needs (instead of the previous one)
+---------+----------+
| need_id | user_id  |
+---------+----------+
| 1       | 1        |
| 2       | 1        |
| 3       | 1        |
| 1       | 2        |
| 4       | 2        |
| 2       | 2        |
| 1       | 3        |
+---------+----------+
You may also be interested in checking out the following Wikipedia article for further reading about repeating values inside columns:
Wikipedia: First normal form - Repeating groups within columns
UPDATE:
To fully answer your question: if you follow the above guidelines, sorting, counting and aggregating the data should become straightforward.
To sort the result-set by needs, you would be able to do the following:
SELECT users.name, needs.name
FROM users
INNER JOIN users_needs ON (users_needs.user_id = users.user_id)
INNER JOIN needs ON (needs.need_id = users_needs.need_id)
ORDER BY needs.name;
You would also be able to count how many needs each user has selected, for example:
SELECT users.name, COUNT(users_needs.need_id) AS number_of_needs
FROM users
LEFT JOIN users_needs ON (users_needs.user_id = users.user_id)
GROUP BY users.user_id, users.name
ORDER BY number_of_needs;
I'm a little confused by the goal. Is this a UI problem or are you just having trouble determining who has multiple needs?
The number of needs is the difference:
Len([Needs]) - Len(Replace([Needs],',','')) + 1
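For example, dropped into a query (a sketch, assuming a table named YourTable holding the Needs field):
SELECT Needs, Len([Needs]) - Len(Replace([Needs],',','')) + 1 AS NeedCount
FROM YourTable;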
Can you provide more information about the Sort you're trying to accomplish?
UPDATE:
I think these Oracle-based posts may have what you're looking for: post and post. The only difference is that you would probably be better off using the method I list above to find the number of comma-delimited pieces rather than doing the translate(...) that the author suggests. Hope this helps - it's Oracle-based, but I don't see why the approach wouldn't carry over.