Loss decreasing over epochs but accuracy remains the same for multiple epochs before changing - tensorflow

I am building a neural network model to identify blade sharpness from the cutting force at a given distance after incision. My data is in CSV format and I am using a binary classification model with 2 hidden layers. I only have 45 input data points. When I run my neural network model, the loss decreases but the accuracy stays the same over multiple epochs before changing.
# Imports assumed (standard Keras API)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Initialising the neural network
Classifier = Sequential()
# Adding the input layer and the first hidden layer
Classifier.add(Dense(units=2, kernel_initializer='he_uniform', activation='relu', input_dim=2))
Classifier.add(Dense(units=2, kernel_initializer='he_uniform', activation='relu'))
# Adding the output layer
Classifier.add(Dense(units=1, kernel_initializer='glorot_uniform', activation='sigmoid'))
Classifier.summary()
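The compile and fit calls are not shown in the question; a minimal sketch of what that step presumably looks like for this one-unit sigmoid output (the optimizer, the X_train/y_train names, the validation split and the batch size are assumptions, with the batch size picked to match the single 1/1 step per epoch in the log below):

Classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# a batch size at least as large as the training set yields one step per epoch
history = Classifier.fit(X_train, y_train, validation_split=0.1, epochs=2000, batch_size=45)

The training output then looks like this: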
Epoch 177/2000
1/1 [==============================] - 0s 98ms/step - loss: 0.5921 - accuracy: 0.7222 - val_loss: 0.6642 - val_accuracy: 0.5000
Epoch 178/2000
1/1 [==============================] - 0s 72ms/step - loss: 0.5915 - accuracy: 0.7222 - val_loss: 0.6627 - val_accuracy: 0.5000
Epoch 179/2000
1/1 [==============================] - 0s 83ms/step - loss: 0.5908 - accuracy: 0.7222 - val_loss: 0.6612 - val_accuracy: 0.5000
Epoch 180/2000
1/1 [==============================] - 0s 82ms/step - loss: 0.5902 - accuracy: 0.7222 - val_loss: 0.6597 - val_accuracy: 0.5000
Epoch 181/2000
1/1 [==============================] - 0s 123ms/step - loss: 0.5896 - accuracy: 0.7222 - val_loss: 0.6581 - val_accuracy: 0.5000
Epoch 182/2000
1/1 [==============================] - 0s 77ms/step - loss: 0.5889 - accuracy: 0.7222 - val_loss: 0.6566 - val_accuracy: 0.5000
Epoch 183/2000
1/1 [==============================] - 0s 75ms/step - loss: 0.5883 - accuracy: 0.7500 - val_loss: 0.6550 - val_accuracy: 0.5000
Epoch 184/2000
1/1 [==============================] - 0s 73ms/step - loss: 0.5877 - accuracy: 0.8056 - val_loss: 0.6533 - val_accuracy: 0.5000
Epoch 185/2000
1/1 [==============================] - 0s 83ms/step - loss: 0.5870 - accuracy: 0.8056 - val_loss: 0.6517 - val_accuracy: 0.5000
Epoch 186/2000
1/1 [==============================] - 0s 103ms/step - loss: 0.5864 - accuracy: 0.8056 - val_loss: 0.6500 - val_accuracy: 0.5000
Epoch 187/2000
1/1 [==============================] - 0s 95ms/step - loss: 0.5857 - accuracy: 0.8056 - val_loss: 0.6484 - val_accuracy: 0.5000
Epoch 188/2000
1/1 [==============================] - 0s 69ms/step - loss: 0.5851 - accuracy: 0.8056 - val_loss: 0.6467 - val_accuracy: 0.5000
Epoch 189/2000
1/1 [==============================] - 0s 84ms/step - loss: 0.5845 - accuracy: 0.8056 - val_loss: 0.6450 - val_accuracy: 0.5000
Epoch 190/2000
1/1 [==============================] - 0s 94ms/step - loss: 0.5838 - accuracy: 0.8056 - val_loss: 0.6433 - val_accuracy: 0.5000
Epoch 191/2000
1/1 [==============================] - 0s 86ms/step - loss: 0.5832 - accuracy: 0.8056 - val_loss: 0.6416 - val_accuracy: 0.5000
Epoch 192/2000
1/1 [==============================] - 0s 80ms/step - loss: 0.5825 - accuracy: 0.8056 - val_loss: 0.6399 - val_accuracy: 0.5000
Epoch 193/2000
1/1 [==============================] - 0s 63ms/step - loss: 0.5818 - accuracy: 0.8056 - val_loss: 0.6381 - val_accuracy: 0.5000
Epoch 194/2000
1/1 [==============================] - 0s 79ms/step - loss: 0.5812 - accuracy: 0.8056 - val_loss: 0.6364 - val_accuracy: 0.5000
Epoch 195/2000
1/1 [==============================] - 0s 87ms/step - loss: 0.5805 - accuracy: 0.8056 - val_loss: 0.6347 - val_accuracy: 0.5000
Epoch 196/2000
1/1 [==============================] - 0s 90ms/step - loss: 0.5799 - accuracy: 0.8056 - val_loss: 0.6330 - val_accuracy: 0.5000
Epoch 197/2000
1/1 [==============================] - 0s 83ms/step - loss: 0.5792 - accuracy: 0.8056 - val_loss: 0.6313 - val_accuracy: 0.7500
Epoch 198/2000
1/1 [==============================] - 0s 191ms/step - loss: 0.5785 - accuracy: 0.8333 - val_loss: 0.6296 - val_accuracy: 1.0000
Epoch 199/2000
1/1 [==============================] - 0s 77ms/step - loss: 0.5779 - accuracy: 0.8333 - val_loss: 0.6278 - val_accuracy: 1.0000
Epoch 200/2000
1/1 [==============================] - 0s 122ms/step - loss: 0.5772 - accuracy: 0.8333 - val_loss: 0.6261 - val_accuracy: 1.0000
Epoch 201/2000
1/1 [==============================] - 0s 98ms/step - loss: 0.5765 - accuracy: 0.8333 - val_loss: 0.6244 - val_accuracy: 1.0000
Epoch 202/2000
1/1 [==============================] - 0s 85ms/step - loss: 0.5758 - accuracy: 0.8333 - val_loss: 0.6226 - val_accuracy: 1.0000
Epoch 203/2000
1/1 [==============================] - 0s 107ms/step - loss: 0.5752 - accuracy: 0.8333 - val_loss: 0.6209 - val_accuracy: 1.0000
Epoch 204/2000
1/1 [==============================] - 0s 54ms/step - loss: 0.5745 - accuracy: 0.8333 - val_loss: 0.6192 - val_accuracy: 1.0000
Epoch 205/2000
1/1 [==============================] - 0s 67ms/step - loss: 0.5738 - accuracy: 0.8333 - val_loss: 0.6175 - val_accuracy: 1.0000
Epoch 206/2000
1/1 [==============================] - 0s 125ms/step - loss: 0.5731 - accuracy: 0.8333 - val_loss: 0.6158 - val_accuracy: 1.0000
Epoch 207/2000
1/1 [==============================] - 0s 101ms/step - loss: 0.5725 - accuracy: 0.8333 - val_loss: 0.6140 - val_accuracy: 1.0000
Epoch 208/2000
1/1 [==============================] - 0s 146ms/step - loss: 0.5718 - accuracy: 0.8333 - val_loss: 0.6123 - val_accuracy: 1.0000
Epoch 209/2000
1/1 [==============================] - 0s 218ms/step - loss: 0.5711 - accuracy: 0.8333 - val_loss: 0.6106 - val_accuracy: 1.0000
Epoch 210/2000
1/1 [==============================] - 0s 174ms/step - loss: 0.5704 - accuracy: 0.8333 - val_loss: 0.6088 - val_accuracy: 1.0000

Related

How to use the LAG function and get the numbers as percentages in SQL

I have a table with columns below
Employee(linked_lylty_card_nbr, prod_nbr, tot_amt_incld_gst, start_txn_date, main_total_size, tota_size_uom)
Below is the respective table
linked_lylty_card_nbr, prod_nbr, tot_amt_incld_gst, start_txn_date, main_total_size, tota_size_uom
1100000000006296409 83563-EA 3.1600 2021-11-10 500.0000 ML
1100000000006296409 83563-EA 2.6800 2021-11-20 500.0000 ML
1100000000001959800 83563-EA 2.6900 2021-12-21 500.0000 ML
1100000000006296409 83563-EA 3.1600 2021-12-30 500.0000 ML
1100000000001959800 83563-EA 5.3700 2022-01-14 500.0000 ML
1100000000006296409 83563-EA 2.6800 2022-01-16 500.0000 ML
1100000000001959800 83563-EA 2.4900 2022-01-19 500.0000 ML
1100000000006296409 83563-EA 3.4600 2022-02-26 500.0000 ML
1100000000006296409 607577-EA 3.9800 2022-05-26 500.0000 ML
1100000000006296409 607577-EA 3.9800 2022-06-11 500.0000 ML
1100000000001959800 83563-EA 3.9800 2022-06-14 500.0000 ML
1100000000001959800 83563-EA 3.9800 2022-06-24 500.0000 ML
1100000000006296409 607577-EA 4.4600 2022-07-30 500.0000 ML
1100000000001959800 83563-EA 4.0100 2022-08-02 500.0000 ML
1100000000001959800 83563-EA 4.0100 2022-09-01 500.0000 ML
1100000000006296409 607577-EA 3.9800 2022-09-08 500.0000 ML
I'm trying to get, for each visit, (1) the change in volume as a percentage, and (2) the number of days it takes the linked_lylty_card_nbr to return and buy the product again. For example, for linked_lylty_card_nbr 1100000000006296409, main_total_size is 500 on 2021-11-10 and 500 again on 2021-11-20, so there is no difference in volume. Below is the SQL query I've written:
SELECT
linked_lylty_card_nbr,
prod_nbr,
start_txn_date,
main_total_size,
total_size_uom,
(
main_total_size - LAG(main_total_size, 1) OVER (
PARTITION BY linked_lylty_card_nbr
ORDER BY
start_txn_date
)) / main_total_size AS change_in_volume_per_visit,
(
start_txn_date - LAG(start_txn_date, 1) OVER (
PARTITION BY linked_lylty_card_nbr
ORDER BY
start_txn_date
)) / main_total_size AS change_in_days_per_visit
FROM
Employee
ORDER BY
linked_lylty_card_nbr,
start_txn_date
The output is below
linked_lylty_card_nbr prod_nbr start_txn_date main_total_size tota_size_uomm change_in_volume_per_visit change_in_days_per_visit
1100000000001959800 83563-EA 2021-12-21 500.0 ML
1100000000001959800 83563-EA 2022-01-14 1000.0 ML 0.5 0.024
1100000000001959800 83563-EA 2022-01-19 500.0 ML -1.0 0.01
1100000000001959800 83563-EA 2022-06-14 500.0 ML 0.0 0.292
1100000000001959800 83563-EA 2022-06-24 500.0 ML 0.0 0.02
1100000000001959800 83563-EA 2022-08-02 500.0 ML 0.0 0.078
1100000000001959800 83563-EA 2022-09-01 500.0 ML 0.0 0.06
1100000000006296409 83563-EA 2021-11-10 500.0 ML
1100000000006296409 83563-EA 2021-11-20 500.0 ML 0.0 0.02
1100000000006296409 83563-EA 2021-12-30 500.0 ML 0.0 0.08
1100000000006296409 83563-EA 2022-01-16 500.0 ML 0.0 0.034
1100000000006296409 83563-EA 2022-02-26 500.0 ML 0.0 0.082
1100000000006296409 607577-EA 2022-05-26 500.0 ML 0.0 0.178
1100000000006296409 607577-EA 2022-06-11 500.0 ML 0.0 0.032
1100000000006296409 607577-EA 2022-07-30 500.0 ML 0.0 0.098
1100000000006296409 607577-EA 2022-09-08 500.0 ML 0.0 0.08
From the above output, the 2nd row of change_in_volume_per_visit is 0.5, but it should be 1 if main_total_size jumps from 500 (1st row) to 1000 (2nd row). Can anyone also confirm whether the change_in_days_per_visit values are correct?
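For what it's worth, the posted query divides by the current row's main_total_size in both expressions. That is why the volume change shows 0.5 instead of 1 (the difference of 500 is divided by the new 1000 rather than the old 500), and why change_in_days_per_visit is the day gap divided by the volume rather than a day count (e.g. 0.024 = 24 days / 1000). A sketch of the adjusted window calls, keeping the question's dialect where subtracting two dates yields a number of days; the percentage scaling and the new aliases are my additions:

SELECT
    linked_lylty_card_nbr,
    prod_nbr,
    start_txn_date,
    main_total_size,
    total_size_uom,
    -- percent change relative to the PREVIOUS visit: divide by the lagged value
    (main_total_size
        - LAG(main_total_size) OVER (PARTITION BY linked_lylty_card_nbr ORDER BY start_txn_date))
        * 100.0
        / LAG(main_total_size) OVER (PARTITION BY linked_lylty_card_nbr ORDER BY start_txn_date)
        AS change_in_volume_per_visit_pct,
    -- days since the previous visit, not divided by main_total_size
    start_txn_date
        - LAG(start_txn_date) OVER (PARTITION BY linked_lylty_card_nbr ORDER BY start_txn_date)
        AS days_since_previous_visit
FROM
    Employee
ORDER BY
    linked_lylty_card_nbr,
    start_txn_date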

Date Formatting in SQL for weeks

Select
[fa] as 'CouponName',
[fb] as 'Store',
[fc] as 'DateTime',
[fd] as 'PLU',
[fe] as 'QTY'
FROM [database].[dbo].[table]
where [fd] = '00milecard' and [fc] >= dateadd(dd, -70, getdate())
Order By [fb]
This produces:
CouponName************Store***DateTime*********************PLU**************QTY
CPN: MILE CARD $5*** 747*** 2020-01-10 14:57:26.060*** 00MILECARD*** 1.0000
CPN: MILE CARD $5*** 747*** 2020-01-10 19:21:12.763*** 00MILECARD*** 1.0000
CPN: MILE CARD $5*** 747*** 2020-01-11 18:19:01.093*** 00MILECARD*** 1.0000
CPN: MILE CARD $5*** 747*** 2020-01-12 17:11:29.610*** 00MILECARD*** 1.0000
CPN: MILE CARD $5*** 747*** 2020-01-12 15:33:31.747*** 00MILECARD*** 1.0000
CPN: MILE CARD $5*** 747*** 2020-01-13 13:11:58.243*** 00MILECARD*** 1.0000
CPN: MILE CARD $5*** 747*** 2020-01-08 16:45:41.070*** 00MILECARD*** 1.0000
CPN: MILE CARD $5*** 747*** 2020-01-03 18:11:12.050*** 00MILECARD*** 1.0000
CPN: MILE CARD $5*** 748*** 2020-01-11 15:12:13.370*** 00MILECARD*** 1.0000
CPN: MILE CARD $5*** 748*** 2020-01-10 11:59:28.517*** 00MILECARD*** 1.0000
CPN: MILE CARD $5*** 748*** 2019-12-26 08:17:40.420*** 00MILECARD*** 1.0000
CPN: MILE CARD $5*** 748*** 2019-12-26 15:39:31.900*** 00MILECARD*** 1.0000
CPN: MILE CARD $5*** 748*** 2019-12-27 14:59:12.890*** 00MILECARD*** 1.0000
CPN: MILE CARD $5*** 750*** 2020-01-04 19:08:45.337*** 00MILECARD*** 1.0000
CPN: MILE CARD $5*** 750*** 2020-01-08 06:23:59.963*** 00MILECARD*** 1.0000
I need this to sum the QTY per week, per store number, over a period of 10 weeks (70 days).
Our week runs Monday - Sunday.
I think a "DATEDIFF" will do this, but I don't have any experience with that function.
I think something like this will do what you want:
select min(fc) as WeekStart, fb as Store, sum(fe) as Qty
from [database].[dbo].[table]
where [fd] = '00milecard' and
      datediff(week, fc, getdate()) <= 10
group by datediff(week, fc, getdate()), fb
Order By [fb]
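One caveat: DATEDIFF(week, ...) rolls weeks over on Sunday and ignores the DATEFIRST setting, so it won't line up exactly with a Monday - Sunday week. A sketch of an alternative (SQL Server, column names as in the question) that groups by the Monday starting each week, using 1900-01-01 (a Monday) as the anchor date:

select dateadd(day, (datediff(day, '19000101', [fc]) / 7) * 7, '19000101') as WeekStartMonday,
       [fb] as Store,
       sum([fe]) as Qty
from [database].[dbo].[table]
where [fd] = '00milecard'
  and [fc] >= dateadd(dd, -70, getdate())
group by dateadd(day, (datediff(day, '19000101', [fc]) / 7) * 7, '19000101'), [fb]
order by [fb], WeekStartMonday

Each result row is then one store/week pair, with WeekStartMonday identifying the Monday that opens that week.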

tensorflow Dataset: consuming data becomes slower and slower as epochs increase

In TensorFlow 1.4, the tf.data.Dataset class provides a repeat() function for running multiple epochs, but consuming the data becomes slower and slower as the epochs go on!
Here is the code:
import time
import tensorflow as tf  # TensorFlow 1.4, Python 2

num_data = 1000
num_epoch = 50
batch_size = 32
dataset = tf.data.Dataset.range(num_data)
dataset = dataset.repeat(num_epoch).batch(batch_size)
iterator = dataset.make_one_shot_iterator()
with tf.Session() as sess:
    for epoch in xrange(num_epoch):
        t1 = time.time()
        for i in xrange(num_data / batch_size):
            # calling iterator.get_next() here adds a new op to the graph on every iteration
            a = sess.run(iterator.get_next())
        t2 = time.time()
        print 'epoch %d comsuming_time %.4f' % (epoch, t2 - t1)
and its outputs:
epoch 0 comsuming_time 0.1604
epoch 1 comsuming_time 0.1725
epoch 2 comsuming_time 0.1839
epoch 3 comsuming_time 0.1942
epoch 4 comsuming_time 0.2213
epoch 5 comsuming_time 0.2430
epoch 6 comsuming_time 0.2361
epoch 7 comsuming_time 0.2512
epoch 8 comsuming_time 0.2607
epoch 9 comsuming_time 0.2936
epoch 10 comsuming_time 0.3282
epoch 11 comsuming_time 0.2990
epoch 12 comsuming_time 0.3105
epoch 13 comsuming_time 0.3239
epoch 14 comsuming_time 0.3393
epoch 15 comsuming_time 0.3518
epoch 16 comsuming_time 0.3673
epoch 17 comsuming_time 0.3859
epoch 18 comsuming_time 0.3928
epoch 19 comsuming_time 0.4090
epoch 20 comsuming_time 0.4206
epoch 21 comsuming_time 0.4333
epoch 22 comsuming_time 0.4479
epoch 23 comsuming_time 0.4631
epoch 24 comsuming_time 0.4774
epoch 25 comsuming_time 0.4923
epoch 26 comsuming_time 0.5533
epoch 27 comsuming_time 0.5187
epoch 28 comsuming_time 0.5319
epoch 29 comsuming_time 0.5470
epoch 30 comsuming_time 0.5647
epoch 31 comsuming_time 0.5796
epoch 32 comsuming_time 0.6036
I think I have found the problem: it is sess.run(iterator.get_next()). Every call to iterator.get_next() inside the loop adds a new node to the graph, so the graph keeps growing and each run gets slower. Predefining get_next_op = iterator.get_next() once outside the loop and running sess.run(get_next_op) fixes it.
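A sketch of the fixed loop described above, using the same variables as the snippet in the question:

get_next_op = iterator.get_next()   # build the op once, outside the loops
with tf.Session() as sess:
    for epoch in xrange(num_epoch):
        t1 = time.time()
        for i in xrange(num_data / batch_size):
            a = sess.run(get_next_op)   # reuse the same op, so the graph stops growing
        t2 = time.time()
        print 'epoch %d comsuming_time %.4f' % (epoch, t2 - t1)

With this change the per-epoch time should stay roughly constant.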

WCF : Moving from IIS 7 to IIS 8

I have moved my WCF service from IIS 7 to IIS 8. I can browse to the .svc file, but I cannot reach any of the service methods via GET or POST; it shows the error below:
The server encountered an error processing the request. See server logs for more details.
The log file is shown below:
Software: Microsoft Internet Information Services 8.5
Version: 1.0
Date: 2014-12-17 04:25:48
Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) cs(Referer) sc-status sc-substatus sc-win32-status time-taken
2014-12-17 04:25:48 (ipaddress) GET /service - 786 - (ipaddress) Mozilla/5.0+(Windows+NT+6.3;+WOW64;+Trident/7.0;+rv:11.0)+like+Gecko - 301 0 0 120
2014-12-17 04:25:48 (ipaddress) GET /service/ - 786 - (ipaddress) Mozilla/5.0+(Windows+NT+6.3;+WOW64;+Trident/7.0;+rv:11.0)+like+Gecko - 200 0 0 3
2014-12-17 04:25:53 (ipaddress) GET /service/MposService.svc - 786 - (ipaddress) Mozilla/5.0+(Windows+NT+6.3;+WOW64;+Trident/7.0;+rv:11.0)+like+Gecko (ipaddress):786/service/ 200 0 0 904
2014-12-17 04:27:42 (ipaddress) GET /service/MposService.svc - 786 - publicip Mozilla/5.0+(Windows+NT+6.1;+WOW64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/39.0.2171.95+Safari/537.36 - 200 0 0 628
2014-12-17 04:27:42 (ipaddress) GET /favicon.ico - 786 - public ip Mozilla/5.0+(Windows+NT+6.1;+WOW64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/39.0.2171.95+Safari/537.36 - 404 0 2 470
2014-12-17 04:28:24 (ipaddress) GET /service/MposService.svc/getCustomer section=s1 786 - 117.213.26.161 Mozilla/5.0+(Windows+NT+6.1;+WOW64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/39.0.2171.95+Safari/537.36 - 400 0 0 640

Derived/calculated column on existing table

I have been going nuts over this issue for some time and I am seeking help.
I have SQL Server table with values, as follows:
Account - Date - Amount - Summary
10000 - 2010-1-1 - 50.00 - 0.00
10000 - 2010-2-1 - 50.00 - 0.00
10000 - 2010-3-1 - 50.00 - 0.00
10000 - 2010-4-1 - 50.00 - 0.00
10000 - 2010-5-1 - 50.00 - 0.00
10000 - 2010-6-1 - 50.00 - 0.00
10000 - 2010-7-1 - 50.00 - 0.00
10000 - 2010-8-1 - 50.00 - 0.00
10000 - 2010-9-1 - 50.00 - 0.00
10000 - 2010-10-1 - 50.00 - 0.00
10000 - 2010-11-1 - 50.00 - 0.00
10000 - 2010-12-1 - 50.00 - 600.00
10000 - 2011-1-1 - 25.00 - 0.00
10000 - 2011-2-1 - 25.00 - 0.00
10000 - 2011-3-1 - 50.00 - 0.00
10000 - 2011-4-1 - 50.00 - 0.00
10000 - 2011-5-1 - 50.00 - 0.00
10000 - 2011-12-1 - 25.00 - 825.00
10000 - 2012-1-1 - 100.00 - 0.00
10000 - 2012-2-1 - 200.00 - 0.00
10000 - 2012-3-1 - 100.00 - 0.00
10000 - 2012-5-1 - 100.00 - 0.00
10000 - 2012-6-1 - 100.00 - 0.00
10000 - 2012-8-1 - 100.00 - 0.00
10000 - 2012-12-1 - 100.00 - 1625.00
10001 - 2010-1-1 - 50.00 - 0.00
10001 - 2010-2-1 - 60.00 - 0.00
10001 - 2010-12-1 - 60.00 - 170.00
10001 - 2011-1-1 - 50.00 - 0.00
10001 - 2011-2-1 - 50.00 - 0.00
10001 - 2011-3-1 - 50.00 - 0.00
10001 - 2011-4-1 - 50.00 - 0.00
10001 - 2011-6-1 - 50.00 - 0.00
10001 - 2011-8-1 - 50.00 - 0.00
10001 - 2011-10-1 - 50.00 - 0.00
10001 - 2011-12-1 - 50.00 - 570.00
This is a basic snapshot of the table. The "Summary" column gives the total of the "Amount" values at the end of the year (based on the "Date" column), but only on rows where MONTH(Date) = 12. It goes on this way for hundreds of accounts and about 4 more years. I would like to add a column to this existing table, called "SummaryPreviousYear", which should hold the previous year's Summary value (the December total of the previous year). It should be joined on the account number so that it sits next to the Summary column, but unlike Summary its value needs to be present all the way down the column, not just on the rows where the month is 12. For example, the following rows:
Before:
Account - Date - Amount - Summary
10001 - 2011-10-1 - 50.00 - 0.00
10001 - 2011-12-1 - 50.00 - 570.00
After:
Account - Date - Amount - Summary - SummaryPreviousYear
10001 - 2011-10-1 - 50.00 - 0.00 - 170.00
10001 - 2011-12-1 - 50.00 - 570.00 - 170.00
Can anyone help me with this? I have been pulling my hair out for 2 days and need to get this dataset created so I can proceed with my report development. Unfortunately, the DBA is off site and I am at my wit's end. Any help would be greatly appreciated.
Why are you duplicating this summary and previous-year summary data for each row in your database? This is wasteful and unnecessary. It would be far better to have another table with previous-year summaries, one row per year, that you could join to this table. And I don't see a need for a Summary column at all: why not create a view that calculates the Summary when the month is 12 and returns zero for any other month?
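For illustration, a sketch of that kind of view (the TestTable and view names are assumptions, and the sample data suggests Summary is a running total per account, e.g. 825 = 600 + 225, so the ordered window below accumulates all prior rows and surfaces the result only on December rows; it needs SQL Server 2012 or later):

CREATE VIEW AccountSummaryView AS
SELECT
    Account,
    Date,
    Amount,
    -- running total per account, shown only on the December row, 0.00 otherwise
    CASE WHEN MONTH(Date) = 12
         THEN SUM(Amount) OVER (PARTITION BY Account
                                ORDER BY Date
                                ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
         ELSE 0.00
    END AS Summary
FROM TestTable;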
SELECT l.*,
       q.Summary AS SummaryPreviousYear
FROM lists l
LEFT JOIN
(
    SELECT Account,
           Date,
           Summary
    FROM lists
    WHERE MONTH(Date) = 12
) AS q
    ON q.Account = l.Account          -- match the same account
    AND YEAR(l.Date) = YEAR(q.Date) + 1
SELECT
    t.Account,
    t.Date,
    t.Amount,
    t.Summary,
    s.Summary as SummaryPreviousYear
FROM TestTable t
-- LEFT JOIN keeps rows in the earliest year, which have no previous-year summary
LEFT JOIN (
    -- total Amount per account per year
    SELECT
        Account,
        DATEPART(YEAR, Date) as Year,
        SUM(Amount) as Summary
    FROM TestTable
    GROUP BY Account, DATEPART(YEAR, Date)
) s
    ON s.Account = t.Account
    AND s.Year = DATEPART(YEAR, t.Date) - 1
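If the value really has to be stored on the existing table rather than computed in a view, a sketch of materialising it (table and column names assumed as above; run the ALTER in its own batch before the UPDATE):

ALTER TABLE TestTable ADD SummaryPreviousYear DECIMAL(18, 2) NULL;
GO

UPDATE t
SET t.SummaryPreviousYear = s.Summary
FROM TestTable t
JOIN (
    -- same per-account, per-year totals as in the query above
    SELECT Account, DATEPART(YEAR, Date) as Year, SUM(Amount) as Summary
    FROM TestTable
    GROUP BY Account, DATEPART(YEAR, Date)
) s
    ON s.Account = t.Account
    AND s.Year = DATEPART(YEAR, t.Date) - 1;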