Trading simulation: compare two DataFrames (Python, pandas, .loc, .where)

Below are the data I am handling now and the code I have so far:
import numpy as np

buying_list = data[data['Buying'] == 1]
selling_list = data[data['Selling'] == 1]
data['Cut_Off_Signal'] = 0

# Buying & selling & cut-off signal process
def test(data):
    data_index = list(data.index)
    buying_list_index = list(buying_list.index)
    for i in range(len(data_index)):
        for j in range(len(buying_list_index)):
            if buying_list_index[j] <= data_index[i]:
                # data.loc[(data['Cut_Off_Price'][j] < data['Close'][i]) & (data['Cut_Off_Price'][j] >= 1), 'Cut_Off_Signal'] = 1
                data['Cut_Off_Signal'][i] = np.where((data['Cut_Off_Price'][j] < data['Close'][i]) & (data['Cut_Off_Price'] >= 1), 1, 0)
                # data.loc[(data['Cut_Off_Price'][:i] < data['Close'][i]) & (data['Cut_Off_Price'] != 0), 'Cut_Off_Signal'] = 1
    return data
data
Date Open High Low Close Volume Adj Close Buying Cut_Off_Price Selling Cut_Off_Signal
2015-10-13 256000 257000 255000 256000 161200 245982.6 0 0 0 0
2015-10-14 257000 260000 256000 257500 147700 247423.91 0 0 0 0
2015-10-15 257500 260500 256000 259000 139700 248865.21 0 0 0 0
2015-10-16 258000 260000 256500 258000 120400 247904.34 0 0 0 0
2015-10-19 258000 261500 257000 260500 89200 250306.52 0 0 0 0
2015-10-20 258500 260500 257500 259500 93400 249345.65 0 0 0 0
2015-10-21 260000 262000 259000 260000 93700 249826.08 0 0 0 0
2015-10-22 259500 260000 250000 251500 192200 241658.69 0 0 0 0
2015-10-23 252500 254500 249500 250000 147600 240217.39 0 0 0 0
2015-10-26 252000 255500 251500 254000 160900 244060.87 0 0 0 0
2015-10-27 254000 258500 251500 252000 149000 242139.13 1 228000 0 0
2015-10-28 253000 254500 248500 249000 128000 239256.52 0 0 0 0
2015-10-29 247500 250000 240000 242500 349000 233010.87 1 215050 0 0
2015-10-30 243500 245500 241000 241000 250200 231569.56 0 0 0 0
2015-11-02 243500 244000 235500 238500 541300 229167.39 0 0 0 0
2015-11-03 237000 237500 224500 230000 1054600 221000 0 0 0 0
2015-11-04 227500 237000 225500 231000 539400 221960.87 0 0 0 0
2015-11-05 233000 234500 230500 230500 189300 221480.43 0 0 0 0
2015-11-06 230500 231000 226000 227000 226700 218117.39 0 0 0 1
2015-11-09 227000 231000 226500 229000 173100 220039.13 0 0 0 0
2015-11-10 228500 229000 226000 227000 175000 218117.39 0 0 0 0
2015-11-11 225500 233000 222500 229000 342700 220039.13 0 0 0 0
2015-11-12 230000 234500 230000 232000 210700 222921.74 0 0 0 0
2015-11-13 231000 235000 230000 232000 202700 222921.74 0 0 0 0
2015-11-16 228000 234500 228000 233500 191300 224363.04 0 0 0 0
2015-11-17 233500 234500 230500 231500 215400 222441.3 0 0 0 0
2015-11-18 231000 232000 228000 230000 207000 221000 0 0 0 0
2015-11-19 231500 233500 229500 232000 141900 222921.74 0 0 0 0
2015-11-20 230000 233500 230000 233000 105600 223882.61 0 0 0 0
2015-11-23 232000 232500 231000 232500 98700 223402.17 0 0 0 0
2015-11-24 231000 233500 230500 233000 132100 223882.61 0 0 0 0
My desired result is:
1) Every day, compare the close price with Cut_Off_Price.
2) If the close price is under Cut_Off_Price, set Cut_Off_Signal to 1.
3) From the second appearance under Cut_Off_Price onward, ignore it.
(I haven't coded this part yet, but I plan to remove that price from buying_list.)
The code above doesn't produce any Cut_Off_Signal. Could you advise?
4) I tried the following to solve the problem, but it still fails:

def test(data):
    for i in range(len(list(data.index))):
        data.loc[(data[:i][data['Buying'] == 1])['Cut_Off_Price'] <= data['Close'][i], 'Cut_Off_Signal'] = 1
    return data
The error message I get is:
"IndexingError: Unalignable boolean Series key provided"
I really don't know what the problem is. Your advice is greatly appreciated.

I do not quite understand what you mean under point 3), but for 1) and 2) the code could look something like this:
data.loc[data.Cut_Off_Price>data.Close, 'Cut_Off_Signal'] = 1
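If point 3) means that only the first day the close drops below the cut-off price should be flagged, a minimal sketch could look like the following. It assumes the cut-off price set on a buy day carries forward to later days (forward fill) and that only the first day of each run below the cut-off should keep the signal; both are my reading of the question, not something your code confirms:

import numpy as np

# carry the last nonzero cut-off price forward to the following days (assumption)
cut_off = data['Cut_Off_Price'].replace(0, np.nan).ffill()
below = data['Close'] < cut_off                        # days trading under the cut-off
first_hit = below & ~below.shift(1, fill_value=False)  # keep only the first day of each run
data['Cut_Off_Signal'] = first_hit.astype(int)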

Count the day before and the present day

I have a query that counts every bus route by departure date, but for each day I need the count of route 139 on that day plus the count of route 148 from the day before.
I have a view with the variables DEP_DATE (DATE), DepartureDate (DATETIME) and Routes (NUMERIC), like this:
DEP_DATE DepartureDate Route=139 Route=148 Route=129 Route=61 Route=134 Route=60
08/02/2019 2019-02-08T15:00:00 0 0 0 0 0 0
08/02/2019 2019-02-08T10:45:00 0 0 0 0 0 0
08/02/2019 2019-02-08T08:30:00 0 0 0 0 0 0
08/02/2019 2019-02-08T08:15:00 0 0 0 0 0 0
08/02/2019 2019-02-08T21:00:00 0 0 0 0 0 0
08/02/2019 2019-02-08T13:00:00 0 0 0 0 0 0
08/02/2019 2019-02-08T06:30:00 0 0 0 11 0 0
08/02/2019 2019-02-08T19:00:00 0 0 21 0 0 0
08/02/2019 2019-02-08T06:00:00 0 0 0 0 10 13
08/02/2019 2019-02-08T17:30:00 0 0 2 0 0 0
08/02/2019 2019-02-08T05:30:00 1 0 0 0 0 0
08/02/2019 2019-02-08T14:45:00 0 0 0 0 0 0
08/02/2019 2019-02-08T07:00:00 0 0 0 0 0 0
09/02/2019 2019-02-09T20:15:00 0 0 0 0 0 0
09/02/2019 2019-02-09T22:00:00 0 2 0 0 0 0
09/02/2019 2019-02-09T20:30:00 0 0 8 0 0 0
09/02/2019 2019-02-09T08:30:00 0 0 0 0 0 0
09/02/2019 2019-02-09T07:00:00 0 0 0 12 0 0
09/02/2019 2019-02-09T19:00:00 0 0 12 0 0 0
09/02/2019 2019-02-09T06:00:00 0 0 0 0 20 7
09/02/2019 2019-02-09T15:00:00 0 0 0 0 0 0
09/02/2019 2019-02-09T06:30:00 0 0 0 0 0 0
09/02/2019 2019-02-09T08:15:00 0 0 0 0 0 0
09/02/2019 2019-02-09T18:15:00 0 0 0 0 0 0
09/02/2019 2019-02-09T14:45:00 0 0 0 0 0 0
09/02/2019 2019-02-09T13:00:00 0 0 0 0 0 0
10/02/2019 2019-02-10T21:00:00 0 0 0 0 0 0
10/02/2019 2019-02-10T10:45:00 0 0 0 0 0 0
10/02/2019 2019-02-10T06:00:00 0 0 0 0 11 11
10/02/2019 2019-02-10T13:00:00 0 0 0 0 0 0
10/02/2019 2019-02-10T08:30:00 0 0 0 0 0 0
10/02/2019 2019-02-10T08:15:00 0 0 0 22 0 0
10/02/2019 2019-02-10T19:00:00 0 0 21 0 0 0
10/02/2019 2019-02-10T07:00:00 0 0 0 0 0 0
10/02/2019 2019-02-10T20:15:00 0 0 0 0 0 0
10/02/2019 2019-02-10T15:00:00 0 0 0 0 0 0
10/02/2019 2019-02-10T20:30:00 0 1 2 0 0 0
10/02/2019 2019-02-10T06:30:00 0 0 0 0 0 0
10/02/2019 2019-02-10T18:15:00 0 0 0 10 0 0
11/02/2019 2019-02-11T19:00:00 0 0 32 0 0 0
11/02/2019 2019-02-11T08:30:00 0 0 0 0 0 0
11/02/2019 2019-02-11T06:00:00 0 0 0 0 14 12
11/02/2019 2019-02-11T00:45:00 0 0 0 0 0 0
11/02/2019 2019-02-11T15:00:00 0 0 0 0 0 0
11/02/2019 2019-02-11T08:15:00 0 0 0 0 0 0
11/02/2019 2019-02-11T13:00:00 0 0 0 0 0 0
11/02/2019 2019-02-11T06:30:00 0 0 0 0 0 0
11/02/2019 2019-02-11T07:00:00 0 0 0 0 0 0
11/02/2019 2019-02-11T10:45:00 0 0 0 0 0 0
12/02/2019 2019-02-12T08:30:00 0 0 0 0 0 0
12/02/2019 2019-02-12T13:00:00 0 0 0 0 0 0
12/02/2019 2019-02-12T06:00:00 0 0 0 0 10 8
12/02/2019 2019-02-12T15:00:00 0 0 0 0 0 0
12/02/2019 2019-02-12T10:45:00 0 0 0 0 0 0
12/02/2019 2019-02-12T07:00:00 0 0 0 0 0 0
12/02/2019 2019-02-12T14:45:00 0 0 0 15 0 0
12/02/2019 2019-02-12T19:00:00 0 0 14 0 0 0
12/02/2019 2019-02-12T22:00:00 0 2 0 0 0 0
13/02/2019 2019-02-13T13:00:00 0 0 0 0 0 0
13/02/2019 2019-02-13T18:15:00 0 0 0 0 0 0
13/02/2019 2019-02-13T08:15:00 0 0 0 0 0 0
13/02/2019 2019-02-13T20:15:00 0 1 0 0 0 0
13/02/2019 2019-02-13T15:00:00 0 0 0 0 0 0
13/02/2019 2019-02-13T14:45:00 0 0 0 0 0 0
13/02/2019 2019-02-13T08:30:00 0 0 0 0 0 0
13/02/2019 2019-02-13T07:00:00 0 0 0 0 0 0
13/02/2019 2019-02-13T06:00:00 0 0 0 0 7 7
13/02/2019 2019-02-13T21:00:00 0 0 0 0 0 0
13/02/2019 2019-02-13T06:30:00 0 0 0 3 0 0
13/02/2019 2019-02-13T19:00:00 0 0 24 0 0 0
14/02/2019 2019-02-14T18:15:00 0 0 0 0 0 0
14/02/2019 2019-02-14T20:30:00 0 0 3 0 0 0
14/02/2019 2019-02-14T07:00:00 0 0 0 0 0 0
14/02/2019 2019-02-14T06:00:00 0 0 0 0 4 2
14/02/2019 2019-02-14T15:00:00 0 0 0 10 0 0
14/02/2019 2019-02-14T19:00:00 0 0 10 0 0 0
14/02/2019 2019-02-14T13:00:00 2 0 0 0 0 0
14/02/2019 2019-02-14T08:30:00 0 0 0 0 0 0
And the query I wrote is this:
SELECT
  DEP_DATE,
  COUNTIF(RouteId = 139) + COUNTIF(RouteId = 148 AND DepartureDate = DATETIME_SUB(DepartureDate, INTERVAL 1 DAY)) AS BUS_1,
  COUNTIF(RouteId = 134) + COUNTIF(RouteId = 60) AS BUS_2,
  COUNTIF(RouteId = 134 AND EXTRACT(HOUR FROM DepartureDate) = 6) +
  COUNTIF(RouteId = 60 AND EXTRACT(HOUR FROM DepartureDate) = 6) AS BUS_3
FROM
  `project.dataset.view`
WHERE
  DepartureDate > DATETIME_TRUNC(DATETIME_SUB(CURRENT_DATETIME("America/Lima"), INTERVAL 3 DAY), DAY)
GROUP BY
  DEP_DATE
My results look like this:
DEP_DATE Bus_1 Bus_2 Bus_3 Explanation_Bus_1: Route_139 Route_148
08/02/2019 1 34 23 1 0
09/02/2019 2 32 27 0 2
10/02/2019 1 45 22 0 1
11/02/2019 0 42 26 0 0
12/02/2019 2 29 18 2 0
13/02/2019 0 27 14 0 1
14/02/2019 3 23 6 2 0
But what I expect for the "Bus_1" count is this:
DEP_DATE Bus_1 Bus_2 Bus_3 Explanation_Bus_1: Route_139 Route_148
08/02/2019 1 34 23 1 0
09/02/2019 0 32 27 0 2
10/02/2019 2 45 22 0 1
11/02/2019 1 42 26 0 0
12/02/2019 2 29 18 2 0
13/02/2019 0 27 14 0 1
14/02/2019 3 23 6 2 0
Every count of route 148 has to be counted on the following day in Bus_1.
You would need to check whether the date is a Friday, Wednesday or Saturday. If it is one of those days, then calculate; otherwise default to 0.
BigQuery conditional expressions: https://cloud.google.com/bigquery/docs/reference/standard-sql/conditional_expressions
BigQuery date functions: https://cloud.google.com/bigquery/docs/reference/standard-sql/date_functions
DAYOFWEEK: returns values in the range [1,7] with Sunday as the first day of the week.
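For illustration, a hedged sketch of such a weekday-conditional count against the same view (6 is Friday, since Sunday = 1; the column and view names are taken from the question):

SELECT
  DEP_DATE,
  -- only count route 148 when DEP_DATE falls on a Friday, else default to 0
  IF(EXTRACT(DAYOFWEEK FROM DEP_DATE) = 6, COUNTIF(RouteId = 148), 0) AS FRIDAY_148
FROM
  `project.dataset.view`
GROUP BY
  DEP_DATE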
Try this (route 139 is counted on the day itself, while route 148 is counted in a subquery and shifted to the following day via the join):
SELECT
  T1.DEP_DATE,
  COUNTIF(RouteId = 139) + IFNULL(T2.BUS_148, 0) AS BUS_1,
  COUNTIF(RouteId = 134) + COUNTIF(RouteId = 60) AS BUS_2,
  COUNTIF(RouteId = 134 AND EXTRACT(HOUR FROM DepartureDate) = 6) +
  COUNTIF(RouteId = 60 AND EXTRACT(HOUR FROM DepartureDate) = 6) AS BUS_3
FROM
  `project.dataset.view` T1
LEFT JOIN (
  -- count route 148 per day and attribute it to the following day
  SELECT
    DATE_ADD(DEP_DATE, INTERVAL 1 DAY) AS DEP_DATE,
    COUNTIF(RouteId = 148) AS BUS_148
  FROM `project.dataset.view`
  GROUP BY 1
) T2
ON T1.DEP_DATE = T2.DEP_DATE
WHERE
  DepartureDate > DATETIME_TRUNC(DATETIME_SUB(CURRENT_DATETIME("America/Lima"), INTERVAL 3 DAY), DAY)
GROUP BY
  T1.DEP_DATE, T2.BUS_148

How can I change my index vector into a sparse feature vector that can be used in sklearn?

I am building a news recommendation system and I need to build a table of users and the news they have read. My raw data looks like this:
001436800277225 [12,456,157]
009092130698762 [248]
010003000431538 [361,521,83]
010156461231357 [173,67,244]
010216216021063 [203,97]
010720006581483 [86]
011199797794333 [142,12,86,411,201]
011337201765123 [123,41]
011414545455156 [62,45,621,435]
011425002581540 [341,214,286]
The first column is the user ID and the second column is the news IDs. The news IDs are indices: for example, [12,456,157] in the first row means this user has read the 12th, 456th and 157th news items (in the sparse vector, columns 12, 456 and 157 are 1, while all other columns are 0). I want to convert these data into a sparse vector format that can be used as input to sklearn's KMeans or DBSCAN algorithms.
How can I do that?
One option is to construct the sparse matrix explicitly. I often find it easier to build the matrix in COO format and then cast it to CSR format.
from scipy.sparse import coo_matrix

input_data = [
    ("001436800277225", [12, 456, 157]),
    ("009092130698762", [248]),
    ("010003000431538", [361, 521, 83]),
    ("010156461231357", [173, 67, 244]),
]

NUMBER_MOVIES = 1000            # maximum index of the movies in the data
NUMBER_USERS = len(input_data)  # number of users in the model

# you'll probably want to have a way to look up the row index for a given user id
user_row_map = {}
user_row_index = 0

# structures for COO format
I, J, data = [], [], []
for user, movies in input_data:
    if user not in user_row_map:
        user_row_map[user] = user_row_index
        user_row_index += 1
    for movie in movies:
        I.append(user_row_map[user])
        J.append(movie)
        data.append(1)  # number of times the user watched the movie

# create the matrix in COO format; then cast it to CSR, which is much easier to use
feature_matrix = coo_matrix((data, (I, J)), shape=(NUMBER_USERS, NUMBER_MOVIES)).tocsr()
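The resulting CSR matrix can be passed straight to scikit-learn, since its estimators accept scipy sparse input. A minimal usage sketch (the cluster count here is an arbitrary choice, not something the question fixes):

from sklearn.cluster import KMeans

kmeans = KMeans(n_clusters=2, random_state=0)  # n_clusters chosen arbitrarily
labels = kmeans.fit_predict(feature_matrix)    # one cluster label per user
print(labels)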
Use MultiLabelBinarizer from sklearn.preprocessing
from sklearn.preprocessing import MultiLabelBinarizer
import pandas as pd

# df is a DataFrame holding the raw data, with the list column named 'newsID'
mlb = MultiLabelBinarizer()
pd.DataFrame(mlb.fit_transform(df.newsID), columns=mlb.classes_)
12 41 45 62 67 83 86 97 123 142 ... 244 248 286 341 361 411 435 456 521 621
0 1 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 1 0 0
1 0 0 0 0 0 0 0 0 0 0 ... 0 1 0 0 0 0 0 0 0 0
2 0 0 0 0 0 1 0 0 0 0 ... 0 0 0 0 1 0 0 0 1 0
3 0 0 0 0 1 0 0 0 0 0 ... 1 0 0 0 0 0 0 0 0 0
4 0 0 0 0 0 0 0 1 0 0 ... 0 0 0 0 0 0 0 0 0 0
5 0 0 0 0 0 0 1 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
6 1 0 0 0 0 0 1 0 0 1 ... 0 0 0 0 0 1 0 0 0 0
7 0 1 0 0 0 0 0 0 1 0 ... 0 0 0 0 0 0 0 0 0 0
8 0 0 1 1 0 0 0 0 0 0 ... 0 0 0 0 0 0 1 0 0 1
9 0 0 0 0 0 0 0 0 0 0 ... 0 0 1 1 0 0 0 0 0 0
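Note that fit_transform above returns a dense array (the DataFrame is shown only for readability). If you need a true scipy sparse matrix as KMeans/DBSCAN input, MultiLabelBinarizer also accepts sparse_output=True:

# return a scipy sparse matrix instead of a dense array
mlb = MultiLabelBinarizer(sparse_output=True)
X = mlb.fit_transform(df.newsID)  # sparse matrix usable directly by sklearn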

Pharo FileSystem: How do I write a binary file?

The method TabularResources >> testExcelSheet from this project gives me the binary representation of an Excel file as a literal byte array:
````
testExcelSheet
^ #[80 75 3 4 20 0 6 0 8 0 0 0 33 0 199 122 151 144 120 1 0 0 32 6 0 0 19 0 8 2 91 67 111 110 116 101 110 116 95 84 121 112 101 115 93 46 120 109 108 32 162 4 2 40 160 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 .....
....0 109 108 80 75 1 2 45 0 20 0 6 0 8 0 0 0 33 0 126 148 213 45 209 1 0 0 250 10 0 0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 233 36 0 0 120 108 47 99 97 108 99 67 104 97 105 110 46 120 109 108 80 75 5 6 0 0 0 0 13 0 13 0 74 3 0 0 232 38 0 0 0 0]
````
Question
How do I write this to the disk to see which kind of file it is?
Answer
(by Esteban, edited)
'./TabularATest1.xlsx' asFileReference writeStreamDo: [ :stream |
    stream
        binary;
        nextPutAll: self testExcelSheet ]
The easiest way to do that is something like this:
'./file.bin' asFileReference writeStreamDo: [ :stream |
    stream
        binary;
        nextPutAll: #[1 2 3 4 5 6 7 8 9 0] ]
So the trick is just telling the stream "be a binary file" :)
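To see which kind of file it is, you can then read the first bytes back and compare them against known signatures. A small sketch, assuming Pharo's standard FileReference protocol (binaryReadStreamDo:):

'./TabularATest1.xlsx' asFileReference binaryReadStreamDo: [ :stream |
    stream next: 4 ]  "=> #[80 75 3 4], the 'PK' zip signature, so this is a zip-based .xlsx"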

In a Word macro, delete everything that does not start with one of two strings

I have a data file that contains a lot of extra data. I want to run a Word macro that keeps only 5 kinds of lines (I could live with 6 if that makes it easier).
I found out how to delete a row if it contains a string.
I want to keep the paragraphs that start with:
Record write time
Headband impedance
Headband Packets
Headband RSSI
Headband Status
I could live with keeping
Headband ID
I tried the following macro, based on a sample I saw here, but I am getting an error.
Sub test()
'
' test Macro
    Dim search1 As String
    search1 = "record"
    Dim search2 As String
    search2 = "headb"
    Dim para As Paragraph
    For Each para In ActiveDocument.Paragraphs
        Dim txt As String
        txt = para.Range.Text
        If Not InStr(LCase(txt), search1) Then
            If Not InStr(LCase(txt), search2) Then
                para.Range.Delete
        End If
    Next
End Sub
The error is: "Next without For".
I know there may be a better way, and I am open to any fix.
Sample data:
ZEO Start data record
----------------
Record write time: 10/14/2014 20:32
Factory reset date: 10/14/2014 20:23
Headband ID: 01/01/1970 18:32
Headband impedance: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 255 241 247 190 165 154 150 156 162 177 223 202
Headband Packets: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 3 21 4 30 3 3 3 9 4 46 46 1
Headband RSSI: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 14 0 0 6 254 254 250 5 255 4 3 249
Headband Status: 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 169 170 170
Hardware ID: 2
Software ID: 43
Sensor Life Reset date: Not recorded
sleep Stat Reset date: 10/14/2014 20:18
Awakenings: 0
Awakenings Average: 0
Start of night: 10/14/2014 20:28
End of night: 10/14/2014 20:32
Awakenings: 0
Awakenings Average: 0
Time in deep: 0
Time in deep average: 0
There is an End If missing. Add one immediately after the first End If. Do you get the same error then?
Update:
There is also an error in the If conditions; check the InStr reference for its return values. InStr returns the 1-based position of the match (so 1 means the paragraph starts with the search string), and Not applied to that number is a bitwise operation, not the test you want. You need to use something like If Not InStr(...) = 1 Then on both If statements.
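Putting both fixes together, a corrected version of the macro could look like this (a sketch; it keeps paragraphs starting with "record" or "headb", matching the prefixes in the question):

Sub test()
'
' test Macro: keep only paragraphs starting with "record" or "headb"
    Dim search1 As String
    search1 = "record"
    Dim search2 As String
    search2 = "headb"
    Dim para As Paragraph
    For Each para In ActiveDocument.Paragraphs
        Dim txt As String
        txt = para.Range.Text
        ' InStr returns the 1-based match position, so = 1 means "starts with"
        If Not InStr(LCase(txt), search1) = 1 Then
            If Not InStr(LCase(txt), search2) = 1 Then
                para.Range.Delete
            End If
        End If
    Next
End Sub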

Jasper Report - Subreport only printed first time

My problem is primarily related to subreports. My configuration is the following:
I have a main report (shown in the first image) with its associated Trueness subreport (second image), and each of the 4 last subreports has the same structure: a page header and a detail band.
The main report passes the wavelength parameter to its subreports, together with the DataSources carrying all the information, and the last report's detail band has a conditional print expression:
$F{wavelength}.intValue()==$P{wavelength}.intValue()
Each "Bean" DataSource carries the wavelength as a parameter and the information for each ChX channel.
When executing the application, it generates 6 TruenessReports for the wavelengths (405, 450, ..., 690) and 48 subreports of each type (absorvance, reference, abs_error, rel_error).
The generated report is the following (sorry, I cannot generate one right now):
Wavelength: 405
Absorvances
Ch1 Ch2 Ch3 Ch4 Ch5 Ch6 Ch7 Ch8 Ch9 Ch10 Ch11 Ch12
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
Reference Absorvances
Ch1 Ch2 Ch3 Ch4 Ch5 Ch6 Ch7 Ch8 Ch9 Ch10 Ch11 Ch12
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
Absorvances Error
Ch1 Ch2 Ch3 Ch4 Ch5 Ch6 Ch7 Ch8 Ch9 Ch10 Ch11 Ch12
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
Relative Errors
Ch1 Ch2 Ch3 Ch4 Ch5 Ch6 Ch7 Ch8 Ch9 Ch10 Ch11 Ch12
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
Wavelength: 450
Absorvances
Reference Absorvances
Absorvances Error
Relative Errors
....
Wavelength: 690
Absorvances
Reference Absorvances
Absorvances Error
Relative Errors
So the last 4 subreports are only printed the first time; for the next ones (in my case, the 5 other wavelengths) nothing is printed, even though there is data for each associated wavelength.
Anyone have any idea?