Can you please help me with this in Crystal Reports?
|field1 | field2 |field3 |
|-----------|-----------|-------|
|code1 | abc | 12.00 |
|code2 | xyz | 11.00 |
|code3 | cde | 12.00 |
|code4 | yabc | 2.00 |
|code5 | xabc | 2.00 |
|code6 | xxyzx | 3.00 |
|code7 | fgfgf | 43.00 |
code8 and so on....
I want to add up all rows where field2 contains "abc", "xyz", and so on; for rows that don't match any of those substrings, just show them as-is.
The result should be something like:
|-----|------|
|ABC | 16.00|
|XYZ | 14.00|
|code3| 12.00|
|code7| 43.00|
code8 and so on...
Note: rows already counted in one of the quoted groups ("abc", "xyz", ...) should not appear separately in the result.
I'm still a newbie on Crystal Reports.
Thank you in advance,
Grace
Create an If-Then-Else formula that returns the value when the condition is true and 0 otherwise.
Then SUM that formula at whatever level of aggregation (report level or group level) you need.
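For example, a formula field along these lines (a sketch; {Table.field2} and {Table.field3} are placeholders for your actual table and field names) returns field3 only when field2 contains "abc", and 0 otherwise:
// {@abcAmount}: hypothetical formula field (Crystal syntax)
If "abc" In LowerCase({Table.field2}) Then
    {Table.field3}
Else
    0
Sum({@abcAmount}) at the report or group level then gives the 16.00 shown above; a similar formula per keyword covers "xyz" and the rest.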
Using MS Access SQL, I have a query (actually a UNION of multiple queries) and need a cumulative sum (actually a statement of account whose items are in chronological order).
How do I get a cumulative sum?
Since there are duplicate dates, I would have to add a new ID; however, SQL in MS Access does not seem to have ROW_ID or anything similar.
So we need to sort donation data into chronological order across multiple tables that contain duplicate dates. First, combine all of the donator tables in one query, which keeps the syntax simple. Then, to put things in order, we need a tie-break rule for the duplicate dates. The dataset has two natural ways to sort duplicates: by donator and by amount. For instance, we could decide that, within a date, bigger donations come first. If the rule is complicated enough, we can abstract it into a public function in a code module and include it in the query so that we can sort by it:
'Saved query [Sorted Donations]:
SELECT BestDonator(q.donator) AS BestDonator, q.*
FROM tblCountries AS q
UNION
SELECT BestDonator(j.donator) AS BestDonator, j.*
FROM tblIndividuals AS j
ORDER BY EvDate ASC, Amount DESC, BestDonator DESC;

Public Function BestDonator(donator As String) As Long
    BestDonator = Len(donator)  'longer names are better :)
End Function
With Sorted Donations we have settled on an order for the duplicate dates and have combined both individual and country donations, so now we can calculate the running sum directly using either DSum or a subquery. There is no need to calculate a row ID. The tricky part is getting the syntax correct. I ended up abstracting the running-sum calculation into a function and omitting BestDonator, because I couldn't easily paste the query together in the query designer and ran out of time to debug it:
Public Function RunningSum(EvDate As Date, Amount As Currency) As Currency
    'Sum every donation that sorts at or before this row:
    'any earlier date, or the same date with an equal or larger amount.
    RunningSum = DSum("Amount", "[Sorted Donations]", _
        "(EvDate < #" & EvDate & "#) OR (EvDate = #" & EvDate & "# AND Amount >= " & Amount & ")")
End Function
Carefully note the OR in the DSum criteria of the RunningSum calculation; that is the tricky part of summing the right amounts.
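The function can then be called from a query over the saved Sorted Donations query. A sketch (assuming the UNION above was saved under that name) that produces the output below:
SELECT sd.donator, sd.EvDate, sd.Amount,
       RunningSum(sd.EvDate, sd.Amount) AS RunningSum
FROM [Sorted Donations] AS sd
ORDER BY sd.EvDate ASC, sd.Amount DESC;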
Output:
| donator    | EvDate     | Amount | RunningSum |
|------------|------------|--------|------------|
| Reiny      | 1/10/2020  |    321 |        321 |
| Czechia    | 3/1/2020   |   7455 |       7776 |
| Germany    | 3/18/2020  |   4222 |      11998 |
| Jim        | 3/18/2020  |    222 |      12220 |
| Australien | 4/15/2020  |  13423 |      25643 |
| Mike       | 5/31/2020  |    345 |      25988 |
| Portugal   | 6/6/2020   |   8755 |      34743 |
| Slovakia   | 8/31/2020  |   3455 |      38198 |
| Steve      | 9/6/2020   |    875 |      39073 |
| Japan      | 10/10/2020 |   5234 |      44307 |
| John       | 10/11/2020 |    465 |      44772 |
| Slowenia   | 11/11/2020 |   4665 |      49437 |
| Spain      | 11/22/2020 |   7677 |      57114 |
| Austria    | 11/22/2020 |   3221 |      60335 |
| Bill       | 11/22/2020 |    767 |      61102 |
| Bert       | 12/1/2020  |    755 |      61857 |
| Hungaria   | 12/24/2020 |   9996 |      71853 |
I've been through the various comments on here about df.replace but I'm still not able to get it working.
Here is a snippet of my code:
# Name columns
df_yearly.columns = ['symbol', 'date', 'annual % price change']
# Parse the date strings (MM/DD/YYYY) into datetimes
df_yearly['date'] = pd.to_datetime(df_yearly['date'], format='%m/%d/%Y')
The df_yearly dataframe looks like this:
   | symbol | date       | annual % price change
---|--------|------------|-----------------------
 0 | APX    | 12/31/2017 |
 1 | APX    | 12/31/2018 | -0.502554278
 2 | AURA   | 12/31/2018 | -0.974450706
 3 | BASH   | 12/31/2016 | -0.998110828
 4 | BASH   | 12/31/2017 |  8.989361702
 5 | BASH   | 12/31/2018 | -0.083599574
 6 | BCC    | 12/31/2017 |  121718.9303
 7 | BCC    | 12/31/2018 | -0.998018734
I want to replace all dates of 12/31/2018 with 06/30/2018. The next section of my code is:
# Replace 31-12-2018 with 30-06-2018 as this is final date in monthly DF
df_yearly_1 = df_yearly.date.replace('31-12-2018', '30-06-2018')
print(df_yearly_1)
But the output is still coming as:
| 0 | 2017-12-31
| 1 | 2018-12-31
| 2 | 2018-12-31
| 3 | 2016-12-31
| 4 | 2017-12-31
| 5 | 2018-12-31
Is anyone able to help me with this? I thought this might be related to me having the date format incorrect in my df.replace statement but I've tried to search and replace 12-31-2018 and it's still not doing anything.
Thanks in advance!!
Try converting the column to strings first with .astype(str).replace:
df_yearly['date'] = df_yearly['date'].astype(str).replace('2018-12-31', '2018-06-30')
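Note that .astype(str) leaves the column as plain strings. If it should stay as datetimes, a sketch of an alternative (using the column names from the question) is to replace the Timestamp value directly:
import pandas as pd

# Replace the datetime value itself, keeping the column's datetime64 dtype.
df_yearly['date'] = df_yearly['date'].replace(
    pd.Timestamp('2018-12-31'), pd.Timestamp('2018-06-30')
)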
I'm using Business Objects to construct a simple report on whether a unit is on or off for a given day. When constructing a vertical table, the data is correct and looks like such:
Unit ID | Status | Date
1 | On | 2016-09-10
1 | On | 2016-09-11
1 | Off | 2016-09-12
2 | Off | 2016-09-10
2 | Off | 2016-09-11
2 | On | 2016-09-12
However, the cross table I've created, with columns of "date" and rows of "Unit ID", is duplicating each Unit ID, giving an entire row of 'On' followed by an entire row of 'Off', like:
____| 2016-09-10 | 2016-09-11 | 2016-09-12
1 | On | On | On
1 | Off | Off | Off
2 | On | On | On
2 | Off | Off | Off
instead of what it should be as:
____| 2016-09-10 | 2016-09-11 | 2016-09-12
1 | On | On | Off
2 | Off | Off | On
Any suggestions as to why it's doing this? The table isn't particularly useful if it has these duplicate rows and I can't understand why it's resulting in this odd table.
It turns out the "Status" field was a dimension, but the cross table needs the field in its body to be a measure. Simply creating a new variable that is a measure equal to "Status" solved the issue.
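For example (a sketch of the fix described above; the variable name is assumed), create a new Web Intelligence variable such as StatusMeasure with its qualification set to Measure and a formula along the lines of
=Max([Status])
and place that in the body of the cross table; the duplicates then collapse to one row per Unit ID.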
So I have been doing pretty well on my project (Link to previous StackOverflow question), and have managed to learn quite a bit, but there is this one problem that has been really dogging me for days and I just can't seem to solve it.
It has to do with using the UNIX_TIMESTAMP call to convert dates in my SQL database to UNIX time-format, but for some reason only one set of dates in my table is giving me issues!
==============
So these are the values I am getting -
#abridged here, see the results from the SELECT statement below to see the rest
#of the fields outputted
| firstVst | nextVst | DOB |
| 1206936000 | 1396238400 | 0 |
| 1313726400 | 1313726400 | 278395200 |
| 1318910400 | 1413604800 | 0 |
| 1319083200 | 1413777600 | 0 |
when I use this SELECT statement -
SELECT SQL_CALC_FOUND_ROWS *, UNIX_TIMESTAMP(firstVst) AS firstVst,
UNIX_TIMESTAMP(nextVst) AS nextVst, UNIX_TIMESTAMP(DOB) AS DOB FROM people
ORDER BY ref DESC;
So my big question is: why in the heck are 3 out of 4 of my DOBs being set to a date of 0 (i.e. 12/31/1969 on my PC)? Why is this not happening in my other fields?
I can see the data quite well using a simpler SELECT statement, and the DOB field looks fine...?
#formatting broken to change some variable names etc.
select * FROM people;
| ref | lastName | firstName | DOB | rN | lN | firstVst | disp | repName | nextVst |
| 10001 | BlankA | NameA | 1968-04-15 | 1000000 | 4600000 | 2008-03-31 | Positive | Patrick Smith | 2014-03-31 |
| 10002 | BlankB | NameB | 1978-10-28 | 1000001 | 4600001 | 2011-08-19 | Positive | Patrick Smith | 2011-08-19 |
| 10003 | BlankC | NameC | 1941-06-08 | 1000002 | 4600002 | 2011-10-18 | Positive | Patrick Smith | 2014-10-18 |
| 10004 | BlankD | NameD | 1952-08-01 | 1000003 | 4600003 | 2011-10-20 | Positive | Patrick Smith | 2014-10-20 |
It's because those DOBs are from before 1 January 1970, which is where the Unix epoch starts; anything earlier would need a negative timestamp, and MySQL's UNIX_TIMESTAMP() simply returns 0 for dates outside its supported range.
From Wikipedia:
Unix time, or POSIX time, is a system for describing instants in time, defined as the number of seconds that have elapsed since 00:00:00 Coordinated Universal Time (UTC), Thursday, 1 January 1970, not counting leap seconds.
A bit more elaboration: basically, what you're trying to do isn't possible with UNIX_TIMESTAMP(). Depending on what it's for, there may be a different way to do this, but Unix timestamps probably aren't the best representation for dates like that.
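If a signed epoch offset is really needed, one possible workaround (a sketch; it ignores time zones and assumes DOB is a DATE column as shown above) is to compute the difference from the epoch yourself, which can go negative:
SELECT ref,
       -- Unlike UNIX_TIMESTAMP(), TIMESTAMPDIFF can return negative values
       -- for dates before 1970-01-01.
       TIMESTAMPDIFF(SECOND, '1970-01-01 00:00:00', DOB) AS dobEpochSeconds
FROM people;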
I am using Microsoft SQL Server 2008.
I have a table that looks something like this:
|======================================================|
| RespondentId | QuestionId | AnswerValue | ColumnName |
|======================================================|
| P123 | 1 | Y | CanBathe |
|------------------------------------------------------|
| P123 | 2 | 3 | TimesADay |
|------------------------------------------------------|
| P123 | 3 | 1.00 | SoapPrice |
|------------------------------------------------------|
| P465 | 1 | Y | CanBathe |
|------------------------------------------------------|
| P465 | 2 | 1 | TimesADay |
|------------------------------------------------------|
| P465 | 3 | 0.99 | SoapPrice |
|------------------------------------------------------|
| P901 | 1 | N | CanBathe |
|------------------------------------------------------|
| P901 | 2 | 0 | TimesADay |
|------------------------------------------------------|
| P901 | 3 | 0.00 | SoapPrice |
|------------------------------------------------------|
I would like to flip the rows to be columns so that this table looks like this:
|=================================================|
| RespondentId | CanBathe | TimesADay | SoapPrice |
|=================================================|
| P123 | Y | 3 | 1.00 |
|-------------------------------------------------|
| P465 | Y | 1 | 0.99 |
|-------------------------------------------------|
| P901 | N | 0 | 0.00 |
|-------------------------------------------------|
(the example data here is arbitrarily made up, so it's silly)
The source table is a temp table with approximately 70,000 rows.
What SQL would I need to write to do this?
Update
I don't even know if PIVOT is the right way to go.
I don't know what column to PIVOT on.
The documentation mentions <aggregation function> and <column being aggregated> and I don't want to aggregate anything.
Thanks in advance.
It is required to use an aggregate function with PIVOT. However, since your (RespondentId, QuestionId) combination is unique, each "group" will have only one row, so you can use MIN() as the aggregate function:
SELECT RespondentId, CanBathe, TimesADay, SoapPrice
FROM (SELECT RespondentId, ColumnName, AnswerValue FROM MyTable) AS src
PIVOT (MIN(AnswerValue) FOR ColumnName IN ([CanBathe], [TimesADay], [SoapPrice])) AS pvt
If a group only contains one row, then MIN(value) = value; in other words, the aggregate function becomes the identity function.
See if this gets you started. You used to have to use CASE statements to make that happen, but it looks like some form of PIVOT is in SQL Server now.
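For reference, a CASE-based sketch of the same reshape (table and column names taken from the question's example; your temp table name may differ) looks like this:
SELECT RespondentId,
       MIN(CASE WHEN ColumnName = 'CanBathe'  THEN AnswerValue END) AS CanBathe,
       MIN(CASE WHEN ColumnName = 'TimesADay' THEN AnswerValue END) AS TimesADay,
       MIN(CASE WHEN ColumnName = 'SoapPrice' THEN AnswerValue END) AS SoapPrice
FROM MyTable
GROUP BY RespondentId;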
PIVOT is a start, but the thing with SQL queries is that you really need to know what columns to expect in the result set before writing the query. If you don't know this, the last time I checked you have to either resort to dynamic SQL or let the client app that retrieves the data do the pivot instead.
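For completeness, a minimal dynamic-SQL sketch (assuming the same MyTable name as the earlier answer; the STUFF/FOR XML trick works on SQL Server 2008, which lacks STRING_AGG):
DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

-- Build a comma-separated, bracketed list of the distinct ColumnName values.
SELECT @cols = STUFF((SELECT ',' + QUOTENAME(ColumnName)
                      FROM MyTable
                      GROUP BY ColumnName
                      FOR XML PATH('')), 1, 1, '');

SET @sql = N'SELECT RespondentId, ' + @cols + N'
FROM (SELECT RespondentId, ColumnName, AnswerValue FROM MyTable) AS src
PIVOT (MIN(AnswerValue) FOR ColumnName IN (' + @cols + N')) AS pvt;';

EXEC sp_executesql @sql;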