Attachments in Redmine Python REST API - Odoo

dir(attachments) gives the following values:
[u'author', u'content_type', u'content_url', u'created_on', u'description', u'filename', u'filesize', u'id', u'thumbnail_url']
But I cannot read the container_id of the attachments, so I can't map my attachments to their issues.
How do I read the container_id of an attachment, and how do I map attachments to issues?
from redmine import Redmine
conn_red = Redmine('http://localhost:3000', username='admin', password='admin')
issue = conn_red.issue.all()
attachment = {}
att_list = []
for id in issue:
    for att in id.attachments:
        attachment['file_name'] = att.content_url
        attachment['created_date'] = att.created_on
        att_list.append((0, 0, attachment))
    print "issue", id.id, "attachments", att.id, "att_list", att_list
The above code gives the following result:
issue 7 attachments 5 att_list [(0, 0, {'file_name': u'http://localhost:3000/attachments/download/5/3BasicSkills_Eng.pdf', 'created_date': datetime.datetime(2016, 6, 2, 8, 43, 51)})]
issue 6 attachments 5 att_list [(0, 0, {'file_name': u'http://localhost:3000/attachments/download/5/3BasicSkills_Eng.pdf', 'created_date': datetime.datetime(2016, 6, 2, 8, 43, 51)}), (0, 0, {'file_name': u'http://localhost:3000/attachments/download/5/3BasicSkills_Eng.pdf', 'created_date': datetime.datetime(2016, 6, 2, 8, 43, 51)})]
issue 5 attachments 5 att_list [(0, 0, {'file_name': u'http://localhost:3000/attachments/download/5/3BasicSkills_Eng.pdf', 'created_date': datetime.datetime(2016, 6, 2, 8, 43, 51)}), (0, 0, {'file_name': u'http://localhost:3000/attachments/download/5/3BasicSkills_Eng.pdf', 'created_date': datetime.datetime(2016, 6, 2, 8, 43, 51)}), (0, 0, {'file_name': u'http://localhost:3000/attachments/download/5/3BasicSkills_Eng.pdf', 'created_date': datetime.datetime(2016, 6, 2, 8, 43, 51)})]
issue 4 attachments 5 att_list [(0, 0, {'file_name': u'http://localhost:3000/attachments/download/5/3BasicSkills_Eng.pdf', 'created_date': datetime.datetime(2016, 6, 2, 8, 43, 51)}), (0, 0, {'file_name': u'http://localhost:3000/attachments/download/5/3BasicSkills_Eng.pdf', 'created_date': datetime.datetime(2016, 6, 2, 8, 43, 51)}), (0, 0, {'file_name': u'http://localhost:3000/attachments/download/5/3BasicSkills_Eng.pdf', 'created_date': datetime.datetime(2016, 6, 2, 8, 43, 51)}), (0, 0, {'file_name': u'http://localhost:3000/attachments/download/5/3BasicSkills_Eng.pdf', 'created_date': datetime.datetime(2016, 6, 2, 8, 43, 51)})]
issue 3 attachments 5 att_list [(0, 0, {'file_name': u'http://localhost:3000/attachments/download/5/3BasicSkills_Eng.pdf', 'created_date': datetime.datetime(2016, 6, 2, 8, 43, 51)}), (0, 0, {'file_name': u'http://localhost:3000/attachments/download/5/3BasicSkills_Eng.pdf', 'created_date': datetime.datetime(2016, 6, 2, 8, 43, 51)}), (0, 0, {'file_name': u'http://localhost:3000/attachments/download/5/3BasicSkills_Eng.pdf', 'created_date': datetime.datetime(2016, 6, 2, 8, 43, 51)}), (0, 0, {'file_name': u'http://localhost:3000/attachments/download/5/3BasicSkills_Eng.pdf', 'created_date': datetime.datetime(2016, 6, 2, 8, 43, 51)}), (0, 0, {'file_name': u'http://localhost:3000/attachments/download/5/3BasicSkills_Eng.pdf', 'created_date': datetime.datetime(2016, 6, 2, 8, 43, 51)})]
issue 2 attachments 3 att_list [(0, 0, {'file_name': u'http://localhost:3000/attachments/download/3/page.png', 'created_date': datetime.datetime(2016, 5, 30, 8, 15, 42)}), (0, 0, {'file_name': u'http://localhost:3000/attachments/download/3/page.png', 'created_date': datetime.datetime(2016, 5, 30, 8, 15, 42)}), (0, 0, {'file_name': u'http://localhost:3000/attachments/download/3/page.png', 'created_date': datetime.datetime(2016, 5, 30, 8, 15, 42)}), (0, 0, {'file_name': u'http://localhost:3000/attachments/download/3/page.png', 'created_date': datetime.datetime(2016, 5, 30, 8, 15, 42)}), (0, 0, {'file_name': u'http://localhost:3000/attachments/download/3/page.png', 'created_date': datetime.datetime(2016, 5, 30, 8, 15, 42)}), (0, 0, {'file_name': u'http://localhost:3000/attachments/download/3/page.png', 'created_date': datetime.datetime(2016, 5, 30, 8, 15, 42)})]
issue 1 attachments 4 att_list [(0, 0, {'file_name': u'http://localhost:3000/attachments/download/4/page.png', 'created_date': datetime.datetime(2016, 5, 30, 8, 16, 40)}), (0, 0, {'file_name': u'http://localhost:3000/attachments/download/4/page.png', 'created_date': datetime.datetime(2016, 5, 30, 8, 16, 40)}), (0, 0, {'file_name': u'http://localhost:3000/attachments/download/4/page.png', 'created_date': datetime.datetime(2016, 5, 30, 8, 16, 40)}), (0, 0, {'file_name': u'http://localhost:3000/attachments/download/4/page.png', 'created_date': datetime.datetime(2016, 5, 30, 8, 16, 40)}), (0, 0, {'file_name': u'http://localhost:3000/attachments/download/4/page.png', 'created_date': datetime.datetime(2016, 5, 30, 8, 16, 40)}), (0, 0, {'file_name': u'http://localhost:3000/attachments/download/4/page.png', 'created_date': datetime.datetime(2016, 5, 30, 8, 16, 40)}), (0, 0, {'file_name': u'http://localhost:3000/attachments/download/4/page.png', 'created_date': datetime.datetime(2016, 5, 30, 8, 16, 40)})]
The same attachment data is repeated in every list.
EDIT ONE
I hereby attach a screenshot of the attachments table. Here container_id is associated with the issue number, so I want to prepare a list of attachments per issue.
For example, issue no. 1 (i.e. container_id 1) has two attachments (IDs 1 and 4), so I have to prepare a list [(0, 0, {'filename': 'vignesh.png'}), (0, 0, {'filename': 'page.png'})]; issue number 1 has two attachments that should be collected into one list.

So as far as I understand, the problem is the repeated data, and it has nothing to do with Python-Redmine or Redmine's REST API. It happens because your att_list = [] is defined at module level, so it accumulates across issues, and because the single attachment dict is reused and mutated on every iteration, so every appended tuple points to the same data. If you create both inside the loops, everything should work as expected, e.g.:
from redmine import Redmine
conn_red = Redmine('http://localhost:3000', username='admin', password='admin')
issues = conn_red.issue.all()
for issue in issues:
    att_list = []
    for att in issue.attachments:
        attachment = {
            'file_name': att.content_url,
            'created_date': att.created_on
        }
        att_list.append((0, 0, attachment))
    print "issue", issue.id, "att_list", att_list

Related

Bar of proportion of two variables

I have a pandas DataFrame as shown below:
import numpy as np
import pandas as pd

data = {
    'id': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50],
    'baseline': [1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1],
    'endline': [1, 0, np.nan, 1, 0, 0, 1, np.nan, 1, 0, 0, 1, 0, 0, 1, 0, np.nan, np.nan, 1, 0, 1, np.nan, 0, 1, 0, 1, 0, np.nan, 1, 0, np.nan, 0, 0, 0, np.nan, 1, np.nan, 1, np.nan, 0, np.nan, 1, 1, 0, 1, 1, 1, 0, 1, 1],
    'gender': ['male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'male', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female', 'female']
}
df = pd.DataFrame(data)
df.head(n=5)
The challenge is that the endline column may have some missing values. My goal is to have 2 bars for each variable side by side, as shown below.
Thanks in advance!
Seaborn prefers its data in "long form". Pandas' melt can convert the given dataframe to combine the 'baseline' and 'endline' columns.
By default, sns.barplot shows the mean when there are multiple y-values belonging to the same x-value. You can use a different estimator, e.g. summing the values and dividing by the number of values to get a percentage.
Here is some code to get you started:
import matplotlib.pyplot as plt
from matplotlib.ticker import PercentFormatter
import seaborn as sns
import pandas as pd
import numpy as np

data = {
    'id': range(1, 51),
    'baseline': [1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1],
    'endline': [1, 0, np.nan, 1, 0, 0, 1, np.nan, 1, 0, 0, 1, 0, 0, 1, 0, np.nan, np.nan, 1, 0, 1, np.nan, 0, 1, 0, 1, 0, np.nan, 1, 0, np.nan, 0, 0, 0, np.nan, 1, np.nan, 1, np.nan, 0, np.nan, 1, 1, 0, 1, 1, 1, 0, 1, 1]
}
df = pd.DataFrame(data)

sns.set_style('white')
ax = sns.barplot(data=df.melt(value_vars=['baseline', 'endline']),
                 x='variable', y='value',
                 estimator=lambda x: np.sum(x) / np.size(x) * 100, ci=None,
                 color='cornflowerblue')
ax.bar_label(ax.containers[0], fmt='%.1f %%', fontsize=20)
sns.despine(ax=ax, left=True)
ax.grid(True, axis='y')
ax.yaxis.set_major_formatter(PercentFormatter(100))
ax.set_xlabel('')
ax.set_ylabel('')
plt.tight_layout()
plt.show()
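If the male/female split from the question's gender column also matters, a possible extension (a sketch, assuming df is the question's full DataFrame including the gender column) is to carry gender through the melt and let hue draw the bars side by side:
import matplotlib.pyplot as plt
from matplotlib.ticker import PercentFormatter
import seaborn as sns
import numpy as np

# 'df' is assumed to be the question's DataFrame, including 'gender';
# id_vars keeps it through the melt into the long-form table.
long_df = df.melt(id_vars=['gender'], value_vars=['baseline', 'endline'])
ax = sns.barplot(data=long_df, x='variable', y='value', hue='gender',
                 estimator=lambda x: np.sum(x) / np.size(x) * 100, ci=None)
ax.yaxis.set_major_formatter(PercentFormatter(100))
plt.show()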

ggpredict overfitting causing trend line to begin in an odd place

I am not entirely sure what I am doing wrong/what to look up. My objective is to use ggpredict and ggplot to display the relationship between time and the proportion of years burnt. I'm guessing it is something to do with the time variable being log transformed?
library(lme4);library(ggplot2);library(ggeffects);library(dplyr)
data = read.csv('FtmpAllyrs10kmC.csv')
This is what the data looks like:
structure(list(Observ = c(5208, 2828, 1664, 578, 18, 1644, 4741,
751, 689, 3813, 1464, 438, 1553, 4752, 4960, 376, 2482, 1811,
5682, 5441, 4505, 2281, 2103, 2993, 562, 4297, 3592, 5148, 3793,
1621, 1912, 1627, 1737, 4976, 2173, 5132, 5758, 2756, 1789, 5666,
2628, 2593, 794, 5779, 5158, 3123, 4986, 676, 4200, 2442, 2751,
4330, 1802, 2020, 2500, 1056, 959, 3290, 4303, 247, 5586, 922,
1049, 2432, 2076, 2560, 1369, 3636, 3722, 4137, 1561, 4915, 2515,
3034, 5547, 1491, 1247, 4116, 455, 4687, 1697, 5329, 21, 5724,
3701, 5697, 2938, 1721, 61, 998, 4304, 5798, 651, 910, 2689,
3986, 2908, 5753, 2574, 2345, 1940, 4317, 4588, 2179, 665, 4133,
749, 3977, 3134, 4190, 3985, 4937, 2473, 3238, 4987, 3915, 4261,
3521, 2736, 3665, 1797, 5692, 5578, 4087, 2011, 903, 889, 1523,
3396, 2291, 5269, 3644, 3403, 4814, 4618, 16, 77, 5385, 2842,
5816, 2015, 1443, 3183, 3331, 4977, 5380, 989, 4918, 740, 4637,
887, 1557, 4295, 4673, 1918, 5662, 4167, 1384, 3441, 614, 2360,
780, 661, 1267, 2018, 1906, 3402, 677, 5218, 2830, 4979, 3984,
4924, 1125, 2640, 986, 1885, 2573, 5300, 2398, 4832, 4816, 3738,
3276, 3830, 2425, 2054, 4273, 5607, 1678, 378, 1158, 510, 2210,
2399, 1952, 2909, 4945, 2659, 2642), yrblock15 = c(2015, 2010,
2007, 2005, 2004, 2007, 2014, 2005, 2005, 2012, 2007, 2004, 2007,
2014, 2015, 2004, 2009, 2008, 2016, 2016, 2014, 2009, 2008, 2010,
2005, 2013, 2011, 2015, 2012, 2007, 2008, 2007, 2007, 2015, 2008,
2015, 2016, 2010, 2007, 2016, 2009, 2009, 2005, 2016, 2015, 2010,
2015, 2005, 2013, 2009, 2010, 2013, 2008, 2008, 2009, 2006, 2006,
2011, 2013, 2004, 2016, 2006, 2006, 2009, 2008, 2009, 2007, 2012,
2012, 2013, 2007, 2014, 2009, 2010, 2016, 2007, 2006, 2013, 2005,
2014, 2007, 2015, 2004, 2016, 2012, 2016, 2010, 2007, 2004, 2006,
2013, 2016, 2005, 2006, 2009, 2012, 2010, 2016, 2009, 2009, 2008,
2013, 2014, 2008, 2005, 2013, 2005, 2012, 2010, 2013, 2012, 2014,
2009, 2011, 2015, 2012, 2013, 2011, 2010, 2012, 2007, 2016, 2016,
2013, 2008, 2006, 2005, 2007, 2011, 2009, 2015, 2012, 2011, 2014,
2014, 2004, 2004, 2015, 2010, 2016, 2008, 2007, 2011, 2011, 2015,
2015, 2006, 2014, 2005, 2014, 2005, 2007, 2013, 2014, 2008, 2016,
2013, 2007, 2011, 2005, 2009, 2005, 2005, 2006, 2008, 2008, 2011,
2005, 2015, 2010, 2015, 2012, 2014, 2006, 2009, 2006, 2008, 2009,
2015, 2009, 2014, 2014, 2012, 2011, 2012, 2009, 2008, 2013, 2016,
2007, 2004, 2006, 2005, 2008, 2009, 2008, 2010, 2014, 2009, 2009
), circleID = c(258, 128, 314, 128, 18, 294, 241, 301, 239, 213,
114, 438, 203, 252, 10, 376, 232, 11, 282, 41, 5, 31, 303, 293,
112, 247, 442, 198, 193, 271, 112, 277, 387, 26, 373, 182, 358,
56, 439, 266, 378, 343, 344, 379, 208, 423, 36, 226, 150, 192,
51, 280, 2, 220, 250, 156, 59, 140, 253, 247, 186, 22, 149, 182,
276, 310, 19, 36, 122, 87, 211, 415, 265, 334, 147, 141, 347,
66, 5, 187, 347, 379, 21, 324, 101, 297, 238, 371, 61, 98, 254,
398, 201, 10, 439, 386, 208, 353, 324, 95, 140, 267, 88, 379,
215, 83, 299, 377, 434, 140, 385, 437, 223, 88, 37, 315, 211,
371, 36, 65, 447, 292, 178, 37, 211, 3, 439, 173, 246, 41, 319,
44, 253, 314, 118, 16, 77, 435, 142, 416, 215, 93, 33, 181, 27,
430, 89, 418, 290, 137, 437, 207, 245, 173, 118, 262, 117, 34,
291, 164, 110, 330, 211, 367, 218, 106, 252, 227, 268, 130, 29,
384, 424, 225, 390, 86, 85, 323, 350, 148, 332, 316, 138, 126,
230, 175, 254, 223, 207, 328, 378, 258, 60, 410, 149, 152, 209,
445, 409, 392), rain15 = c(347.83, 394.12, 382.2, 382.41, 395.7,
386.08, 383.79, 352.65, 354.31, 366.48, 416.79, 335.17, 409.24,
373, 390.76, 341.35, 387.25, 452.18, 329.14, 365.74, 432.58,
443.36, 375.57, 359.75, 379.14, 386.41, 361.47, 366.1, 382.57,
383.32, 409.56, 390.92, 380.38, 394.94, 366.72, 347.44, 336.88,
410.94, 370.83, 335.88, 368.53, 370.42, 344.56, 323.41, 348.34,
351.07, 382.75, 362.64, 402.7, 396.11, 418.01, 389.14, 462.76,
391.05, 369.47, 399.78, 419.32, 392.97, 389.15, 345.37, 336.22,
405.73, 378.45, 394.7, 388.29, 379.56, 437.29, 415.95, 388.91,
402.43, 397.09, 368.84, 378.54, 361.92, 355.22, 416.46, 361.24,
417.12, 420.92, 386.48, 375.04, 335.03, 385.23, 342.51, 401.27,
341.21, 362.81, 372.85, 396.48, 390.72, 385.06, 343.64, 365.25,
440.76, 364.68, 354.45, 368.7, 324.44, 366.4, 408.43, 405.71,
390.8, 401.09, 364.07, 360.68, 399.39, 348.38, 344.2, 345.23,
401.29, 356.48, 364.21, 376.12, 403.37, 384.1, 355.71, 389.53,
363.28, 417.76, 403.16, 362.28, 333.91, 337.46, 419.51, 389.22,
448.08, 338.46, 397.52, 372.25, 424.25, 349.25, 408.19, 376.68,
375.87, 403.78, 398.73, 386.92, 340.39, 391.58, 335.03, 390.25,
422.05, 423.79, 386.49, 392.97, 334.07, 403.85, 369.54, 348.84,
392.33, 336.68, 399.56, 386.84, 395.97, 409.93, 337.08, 410.27,
450.48, 364.93, 369.08, 413.31, 341.93, 360.06, 362.28, 395.8,
423.56, 376.67, 366.19, 358.88, 390.74, 390.84, 362.84, 370.21,
360.84, 371.9, 410.36, 421.59, 367.48, 355.62, 389.61, 370.81,
374.37, 382.61, 401.78, 373.7, 382.72, 387.56, 388.53, 329.06,
383.78, 336.97, 376.68, 398.57, 370.46, 388.88, 421.66, 369.29,
371.58, 369.01, 369.22), YearsBurnt = c(6, 6, 3.5, 5, 3, 2, 3.5,
2.5, 2, 1.5, 10.5, 3.5, 2.5, 3.5, 4.5, 3, 2, 2.5, 1.5, 3.5, 3.5,
4, 4, 3, 3.5, 2.5, 6, 4.5, 4, 2.5, 3.5, 2, 7, 3, 2.5, 3.5, 13,
3, 3.5, 3.5, 4.5, 3, 1.5, 2, 4, 2, 4.5, 4, 3.5, 2.5, 2, 2, 3,
1, 5, 2.5, 4, 12.5, 2.5, 1.5, 3.5, 1.5, 2.5, 4, 4.5, 10, 3, 3.5,
4.5, 10.5, 1, 4.5, 2, 13.5, 8.5, 10, 1, 4, 3, 3.5, 1.5, 3, 2.5,
2.5, 2.5, 4.5, 4, 1.5, 3, 3.5, 4.5, 1.5, 3, 2.5, 3.5, 8.5, 4,
7, 2.5, 5, 11, 3.5, 11.5, 3, 1.5, 3, 0.5, 4.5, 3.5, 13.5, 7.5,
3.5, 2, 12, 4, 5, 2, 1.5, 3.5, 4.5, 2, 3.5, 3, 4, 1.5, 2, 2.5,
6, 2, 5, 3.5, 4.5, 2, 3.5, 5, 4.5, 3, 4, 14, 3, 1.5, 3.5, 5.5,
3, 4, 3, 7, 4.5, 2.5, 3, 3, 3.5, 3, 9, 5, 6.5, 5, 4, 4, 3.5,
3, 8.5, 1, 4.5, 1.5, 5.5, 3, 2, 2.5, 2.5, 3, 8.5, 2.5, 1, 3.5,
5.5, 5, 1.5, 2, 4.5, 5, 4, 1.5, 3.5, 4.5, 6, 4.5, 3.5, 3, 6.5,
3, 6.5, 3.5, 4.5, 2.5, 2.5, 4, 4, 4, 4.5), YearsNotBurnt = c(9,
9, 11.5, 10, 12, 13, 11.5, 12.5, 13, 13.5, 4.5, 11.5, 12.5, 11.5,
10.5, 12, 13, 12.5, 13.5, 11.5, 11.5, 11, 11, 12, 11.5, 12.5,
9, 10.5, 11, 12.5, 11.5, 13, 8, 12, 12.5, 11.5, 2, 12, 11.5,
11.5, 10.5, 12, 13.5, 13, 11, 13, 10.5, 11, 11.5, 12.5, 13, 13,
12, 14, 10, 12.5, 11, 2.5, 12.5, 13.5, 11.5, 13.5, 12.5, 11,
10.5, 5, 12, 11.5, 10.5, 4.5, 14, 10.5, 13, 1.5, 6.5, 5, 14,
11, 12, 11.5, 13.5, 12, 12.5, 12.5, 12.5, 10.5, 11, 13.5, 12,
11.5, 10.5, 13.5, 12, 12.5, 11.5, 6.5, 11, 8, 12.5, 10, 4, 11.5,
3.5, 12, 13.5, 12, 14.5, 10.5, 11.5, 1.5, 7.5, 11.5, 13, 3, 11,
10, 13, 13.5, 11.5, 10.5, 13, 11.5, 12, 11, 13.5, 13, 12.5, 9,
13, 10, 11.5, 10.5, 13, 11.5, 10, 10.5, 12, 11, 1, 12, 13.5,
11.5, 9.5, 12, 11, 12, 8, 10.5, 12.5, 12, 12, 11.5, 12, 6, 10,
8.5, 10, 11, 11, 11.5, 12, 6.5, 14, 10.5, 13.5, 9.5, 12, 13,
12.5, 12.5, 12, 6.5, 12.5, 14, 11.5, 9.5, 10, 13.5, 13, 10.5,
10, 11, 13.5, 11.5, 10.5, 9, 10.5, 11.5, 12, 8.5, 12, 8.5, 11.5,
10.5, 12.5, 12.5, 11, 11, 11, 10.5), time = c(1.96, 4.94, 3.46,
4.94, 2.73, 6.22, 4.5, 2.67, 4.66, 3.83, 0.38, 2.6, 3.97, 4.18,
3.77, 3.44, 2.9, 3.93, 2.16, 3.51, 2.91, 3.19, 2.73, 6.36, 1.74,
4.39, 4.1, 2.26, 2.36, 5.32, 1.74, 3.66, 1.26, 5.61, 9.04, 4.61,
0.46, 3.98, 2.63, 5.5, 2.56, 5.92, 6.39, 2.26, 3.27, 7.95, 2.93,
4.93, 2.97, 2.43, 5.91, 3.07, 4.27, 3.21, 4.12, 4.72, 1.93, 0.69,
3.51, 4.39, 4.02, 3.18, 2.61, 4.61, 3.67, 0.54, 2.33, 2.93, 2.12,
1.06, 3.95, 2.31, 5.44, 0.17, 1.42, 0.55, 8.35, 2.53, 2.91, 3.26,
8.35, 2.26, 2.23, 7.18, 6.59, 6.36, 4.38, 7.67, 1.93, 3.34, 2.91,
8.54, 5.75, 3.77, 2.63, 0.97, 3.27, 1.58, 7.18, 2.08, 0.69, 5.43,
0.85, 2.26, 3.69, 3.18, 6.18, 2.93, 2.68, 0.69, 0.92, 2.34, 3.26,
0.85, 2.91, 4.3, 3.95, 7.67, 2.93, 2.1, 6.54, 6.31, 3.87, 2.91,
3.95, 3.35, 2.63, 1.49, 4.32, 3.51, 7.06, 2.67, 3.51, 3.46, 1.56,
4.33, 5.64, 2.73, 0.57, 2.87, 3.69, 2.56, 2.33, 4.27, 4.73, 4.02,
0.82, 4.11, 4.88, 2.29, 2.34, 3.72, 4.21, 1.49, 1.56, 3.03, 1.24,
2.65, 5.71, 1.67, 2.71, 1.49, 3.95, 4.51, 3.36, 5.21, 4.18, 4.54,
5.36, 4.25, 3.71, 0.95, 8.92, 3.12, 2.73, 1.36, 1.85, 7.24, 8.11,
2.2, 0.95, 5.16, 1.3, 6.54, 3.01, 1.97, 2.91, 3.26, 3.72, 1.79,
2.56, 1.96, 1.89, 1.89, 2.61, 5.25, 3.25, 5.26, 1.74, 3.73),
claylake = c(0, 0, 0, 0, 0, 17.53, 0.1, 0.59, 0, 9.13, 36.93,
12.75, 0, 0, 0, 0, 0, 0, 0, 0.09, 0.01, 0, 0, 9.43, 74.71,
26.42, 0.23, 0, 0, 35.27, 74.71, 0, 0, 0, 0, 0, 0, 0, 20.81,
9.46, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
1.14, 0, 26.42, 3.62, 0, 0, 0, 0.21, 0, 0, 0, 0.03, 10.43,
0.99, 3.6, 5.32, 0, 0.36, 0, 0, 0.25, 0.01, 0.22, 0, 0, 6.45,
0, 0, 0, 0, 0, 1.71, 0, 0, 0, 0, 0, 20.81, 0, 0, 0.18, 0,
0, 1.14, 0.03, 1.2, 0, 8.97, 0, 0, 0, 0, 1.14, 0, 1.56, 0.22,
1.2, 0, 0, 0.99, 0, 0, 0, 0, 4.14, 0, 0, 0.99, 0, 20.81,
0, 33.61, 0.09, 14.94, 0, 0, 0, 0, 0.41, 0, 2.7, 0, 0.61,
8.97, 0, 0, 0, 0, 1.7, 2.67, 7.71, 0.2, 8.63, 1.56, 0, 0.49,
0, 0, 0, 0, 0, 11.9, 33.08, 0, 0, 0.99, 2.13, 0, 0, 0, 0,
0.03, 0, 0, 0, 0, 0, 0, 2.86, 1.65, 0, 0, 0, 0, 0, 60.14,
0, 0, 0, 0, 0.22, 0, 0, 0, 0, 0.3, 0, 0, 0, 0, 0, 0, 5.57
), spinsandplain = c(81.94, 34.29, 89.55, 34.29, 80.86, 75.92,
81.55, 43.53, 97.3, 87.84, 60.62, 80.81, 11.73, 5.11, 98.67,
79.52, 60.73, 91.65, 2.82, 97.31, 73.65, 72.78, 96.51, 74.02,
25.09, 50.74, 96.62, 88.77, 98.8, 54.04, 25.09, 95.1, 69.85,
99.4, 78.79, 78.77, 48.16, 80.68, 75.79, 66.33, 68.3, 79.11,
91.89, 82.49, 98.33, 90.82, 91.24, 65.01, 69.24, 99.94, 99.75,
18.57, 90.39, 95.56, 71.07, 67.85, 92.37, 85.85, 17.89, 50.74,
79.65, 68.82, 74.05, 78.77, 87.67, 41.11, 91.74, 91.24, 44.8,
86.24, 97.7, 94.17, 85.59, 33.53, 85.23, 94.55, 78.52, 95.49,
73.65, 95.04, 78.52, 82.49, 77.26, 83.4, 98.29, 85.24, 98.78,
87.09, 81.36, 96.62, 3.4, 94.65, 28.6, 98.67, 75.79, 73.34,
98.33, 74.88, 83.4, 88.24, 85.85, 52.44, 95.84, 82.49, 62.11,
98.74, 70.32, 86.18, 95.67, 85.85, 11.42, 85.96, 75.53, 95.84,
95.46, 93.68, 97.7, 87.09, 91.24, 80.03, 87.77, 68.71, 17.51,
95.46, 97.7, 50.7, 75.79, 70.43, 61.06, 97.31, 74.63, 99,
17.89, 89.55, 99.25, 98.08, 97.61, 93.36, 99.03, 38.1, 62.11,
96.9, 88.87, 40.48, 90.21, 73.79, 95.2, 66.53, 96.67, 82.89,
85.96, 97.08, 75.74, 70.43, 99.25, 96.4, 98.88, 98.13, 85.32,
54.19, 99.2, 81.42, 97.7, 82.25, 97.42, 98.1, 5.11, 12.06,
66.14, 52.39, 52.72, 12.32, 87.32, 98.95, 71.55, 90.58, 97.9,
80.62, 93.32, 76, 86.48, 86.42, 39.54, 68.65, 6.05, 86.02,
3.4, 75.53, 97.08, 32.47, 68.3, 81.94, 89.64, 57.4, 74.05,
0.47, 96.76, 86.7, 78.46, 84.81)), row.names = c(5208L, 2828L,
1664L, 578L, 18L, 1644L, 4741L, 751L, 689L, 3813L, 1464L, 438L,
1553L, 4752L, 4960L, 376L, 2482L, 1811L, 5682L, 5441L, 4505L,
2281L, 2103L, 2993L, 562L, 4297L, 3592L, 5148L, 3793L, 1621L,
1912L, 1627L, 1737L, 4976L, 2173L, 5132L, 5758L, 2756L, 1789L,
5666L, 2628L, 2593L, 794L, 5779L, 5158L, 3123L, 4986L, 676L,
4200L, 2442L, 2751L, 4330L, 1802L, 2020L, 2500L, 1056L, 959L,
3290L, 4303L, 247L, 5586L, 922L, 1049L, 2432L, 2076L, 2560L,
1369L, 3636L, 3722L, 4137L, 1561L, 4915L, 2515L, 3034L, 5547L,
1491L, 1247L, 4116L, 455L, 4687L, 1697L, 5329L, 21L, 5724L, 3701L,
5697L, 2938L, 1721L, 61L, 998L, 4304L, 5798L, 651L, 910L, 2689L,
3986L, 2908L, 5753L, 2574L, 2345L, 1940L, 4317L, 4588L, 2179L,
665L, 4133L, 749L, 3977L, 3134L, 4190L, 3985L, 4937L, 2473L,
3238L, 4987L, 3915L, 4261L, 3521L, 2736L, 3665L, 1797L, 5692L,
5578L, 4087L, 2011L, 903L, 889L, 1523L, 3396L, 2291L, 5269L,
3644L, 3403L, 4814L, 4618L, 16L, 77L, 5385L, 2842L, 5816L, 2015L,
1443L, 3183L, 3331L, 4977L, 5380L, 989L, 4918L, 740L, 4637L,
887L, 1557L, 4295L, 4673L, 1918L, 5662L, 4167L, 1384L, 3441L,
614L, 2360L, 780L, 661L, 1267L, 2018L, 1906L, 3402L, 677L, 5218L,
2830L, 4979L, 3984L, 4924L, 1125L, 2640L, 986L, 1885L, 2573L,
5300L, 2398L, 4832L, 4816L, 3738L, 3276L, 3830L, 2425L, 2054L,
4273L, 5607L, 1678L, 378L, 1158L, 510L, 2210L, 2399L, 1952L,
2909L, 4945L, 2659L, 2642L), class = "data.frame")
I create a new variable for the proportion of years burnt out of 15 years (i.e., binomial):
data$fireprop = cbind(data$YearsBurnt, data$YearsNotBurnt)
Model:
mfireprop = glmer(fireprop ~ log(time) + spinsandplain + rain15 + claylake +
                    rain15 * log(time) + (1 | circleID),
                  na.action = na.fail, family = binomial, data = data)
Trend line code:
d = ggpredict(mfireprop, terms = "time[exp]")
d = rename(d, "time" = x, "fireprop" = predicted)
ggplot(d, aes(time, fireprop)) +
  geom_ribbon(aes(ymin = conf.low, ymax = conf.high), alpha = .1) +
  geom_line(size = 2, colour = "black") +
  theme_bw()
And the trend line comes out looking like this:
Why is the x-axis not stopping at 10 hours, where the data stops? Why is it going to 20,000? And why does the y-axis only go to 0.4, when some of the proportions are 1?
When I limit the x and y axis it ends up looking like this:
But when I look at the raw data over the top of that, it seems like the trend line is starting off in a really odd place.
I am unsure of what I am doing wrong.
Okay, so I've figured out the main problem here. In the documentation of the ggpredict() function there is an argument called back.transform that defaults to TRUE, meaning log-transformed data are automatically transformed back to the original response scale. This is why, if you examine the ggpredict object d, you will see that the time variable actually goes to over 8000 in that object. Because you did not set back.transform=FALSE, but also specified time[exp], the function automatically exponentiated your values and then you did it again.
If we look at the logged values:
summary(log(data$time))
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
-1.7720  0.8154  1.1802  1.0904  1.4793  2.2017
When we exponentiate the max value, we get the original max back:
exp(2.2017) # Exponentiated to get back to years
[1] 9.040369
summary(data$time) # The original variable
  Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
  0.170   2.260   3.255   3.519   4.390   9.040
If we exponentiate it again, we end up with the max time being over 8000.
exp(9.040369)
[1] 8436.89
So, to get the plot you want, you just need to leave out the [exp] after calling time in ggpredict():
d = ggpredict(mfireprop, terms = "time")
d = rename(d, "time" = x, "fireprop" = predicted)
ggplot(d, aes(time, fireprop)) +
  geom_ribbon(aes(ymin = conf.low, ymax = conf.high), alpha = .1) +
  geom_line(size = 2, colour = "black") +
  theme_bw()
The time is being cut off because at time 0 there is no variation. YearsNotBurnt is always 0. Therefore, if you look at the object d from ggpredict, you will see NaN in all the columns for time 0. If you simplify the model to the following:
mfireprop2 = glmer(fireprop ~
                     log(time) +
                     (1 | circleID),
                   na.action = na.fail,
                   family = binomial,
                   data = data)
You will be able to get the plot, but because there is very little variation, the confidence interval will span from zero to one. I believe this is an issue related to separation: basically, binomial models can't be fit with frequentist methods if there is no variation, or if something perfectly predicts the outcome.
The only other thing I wanted to mention is that you had a question in the comments about the warning "non-integer counts in a binomial glm!". This appears because the model expects the dependent variable to be counts of trials, which should not have decimals, and you have points in your data that seem to be half-year intervals. I'm not familiar enough with your data to say for sure what a better alternative would be, but creating a proportion and giving the number of observations in the weights= argument might be an option.

Selecting the value at a given date for each lat/lon point in xarray

I have an xr.DataArray object that holds a day of 2015 (as a cftime.DatetimeNoLeap object) for each lat-lon point on the grid.
date_matrix2015
<xarray.DataArray (lat: 160, lon: 320)>
array([[cftime.DatetimeNoLeap(2015, 12, 11, 12, 0, 0, 0),
cftime.DatetimeNoLeap(2015, 12, 11, 12, 0, 0, 0),
cftime.DatetimeNoLeap(2015, 12, 11, 12, 0, 0, 0), ...,
cftime.DatetimeNoLeap(2015, 12, 11, 12, 0, 0, 0),
cftime.DatetimeNoLeap(2015, 12, 11, 12, 0, 0, 0),
cftime.DatetimeNoLeap(2015, 12, 11, 12, 0, 0, 0)],
[cftime.DatetimeNoLeap(2015, 12, 11, 12, 0, 0, 0),
cftime.DatetimeNoLeap(2015, 12, 11, 12, 0, 0, 0),
cftime.DatetimeNoLeap(2015, 12, 11, 12, 0, 0, 0), ...,
cftime.DatetimeNoLeap(2015, 12, 11, 12, 0, 0, 0),
cftime.DatetimeNoLeap(2015, 12, 11, 12, 0, 0, 0),
cftime.DatetimeNoLeap(2015, 12, 11, 12, 0, 0, 0)],
[cftime.DatetimeNoLeap(2015, 12, 11, 12, 0, 0, 0),
cftime.DatetimeNoLeap(2015, 12, 11, 12, 0, 0, 0),
cftime.DatetimeNoLeap(2015, 12, 11, 12, 0, 0, 0), ...,
cftime.DatetimeNoLeap(2015, 12, 11, 12, 0, 0, 0),
cftime.DatetimeNoLeap(2015, 12, 11, 12, 0, 0, 0),
cftime.DatetimeNoLeap(2015, 12, 11, 12, 0, 0, 0)],
...,
[cftime.DatetimeNoLeap(2015, 3, 14, 12, 0, 0, 0),
cftime.DatetimeNoLeap(2015, 3, 14, 12, 0, 0, 0),
cftime.DatetimeNoLeap(2015, 3, 14, 12, 0, 0, 0), ...,
cftime.DatetimeNoLeap(2015, 9, 16, 12, 0, 0, 0),
cftime.DatetimeNoLeap(2015, 9, 16, 12, 0, 0, 0),
cftime.DatetimeNoLeap(2015, 9, 16, 12, 0, 0, 0)],
[cftime.DatetimeNoLeap(2015, 9, 15, 12, 0, 0, 0),
cftime.DatetimeNoLeap(2015, 9, 15, 12, 0, 0, 0),
cftime.DatetimeNoLeap(2015, 9, 15, 12, 0, 0, 0), ...,
cftime.DatetimeNoLeap(2015, 9, 16, 12, 0, 0, 0),
cftime.DatetimeNoLeap(2015, 9, 15, 12, 0, 0, 0),
cftime.DatetimeNoLeap(2015, 9, 15, 12, 0, 0, 0)],
[cftime.DatetimeNoLeap(2015, 9, 16, 12, 0, 0, 0),
cftime.DatetimeNoLeap(2015, 9, 16, 12, 0, 0, 0),
cftime.DatetimeNoLeap(2015, 9, 16, 12, 0, 0, 0), ...,
cftime.DatetimeNoLeap(2015, 9, 16, 12, 0, 0, 0),
cftime.DatetimeNoLeap(2015, 9, 16, 12, 0, 0, 0),
cftime.DatetimeNoLeap(2015, 9, 16, 12, 0, 0, 0)]], dtype=object)
Coordinates:
year int64 2015
* lat (lat) float64 -89.14 -88.03 -86.91 -85.79 ... 86.91 88.03 89.14
* lon (lon) float64 0.0 1.125 2.25 3.375 4.5 ... 355.5 356.6 357.8 358.9
I have another xr.DataArray on the same lat-lon grid for vertical velocity (omega) that has data for every day in 2015. At each lat-lon point I would like to select the velocity value on the corresponding day given in date_matrix2015. Ideally I would like to do something like this:
omega.sel(time=date_matrix2015)
I have tried constructing the new dataarray manually with iteration, but I haven't had much luck.
Does anyone have any ideas? Thank you in advance!
------------EDIT---------------
Here is a minimal reproducible example for the problem. To clarify what I am looking for: I have two DataArrays, one for daily precipitation values, and one for daily omega values. I want to determine for each lat/lon point the day that saw the maximum precipitation (I think I have done this part correctly). From there I want to select at each lat/lon point the omega value that occurred on the day of maximum precipitation. So ultimately I would like to end up with a DataArray of omega values that has two dimensions, lat and lon, where the value at each lat/lon point is the omega value on the day of maximum rainfall at that location.
import numpy as np
import xarray as xr
import pandas as pd
precip = np.abs(8*np.random.randn(10,10,10))
omega = 15*np.random.randn(10,10,10)
lat = np.arange(0,10)
lon = np.arange(0, 10)
##Note: actual data resolution is 160x320
dates = pd.date_range('01-01-2015', '01-10-2015')
precip_da = xr.DataArray(precip).rename({'dim_0':'time', 'dim_1':'lat', 'dim_2':'lon'}).assign_coords({'time':dates, 'lat':lat, 'lon':lon})
omega_da = xr.DataArray(omega).rename({'dim_0':'time', 'dim_1':'lat', 'dim_2':'lon'}).assign_coords({'time':dates, 'lat':lat, 'lon':lon})
#Find Date of maximum precip for each lat lon point and store in an array
maxDateMatrix = precip_da.idxmax(dim='time')
#For each lat lon point, select the value from omega_da on the day of maximum precip (i.e. the date given at that location in the maxDateMatrix)
You can pair da.sel with da.idxmax to select the values at the index of the maxima along any number of dimensions:
In [10]: omega_da.sel(time=precip_da.idxmax(dim='time'))
Out[10]:
<xarray.DataArray (lat: 10, lon: 10)>
array([[ 17.72211193, -16.20781517, 9.65493368, -28.16691093,
18.8756182 , 16.81924325, -20.55251804, -18.36625778,
-19.57938236, -10.77385357],
[ 3.95402784, -5.28478105, -8.6632994 , 2.46787932,
20.53981254, -4.74908659, 9.5274101 , -1.08191372,
9.4637305 , -10.91884369],
[-31.30033085, 6.6284144 , 8.15945444, 5.74849304,
12.49505739, 2.11797825, -18.12861347, 7.27497695,
5.16197504, -32.99882591],
...
[-34.73945635, 24.40515233, 14.56982584, 12.16550083,
-8.3558104 , -20.16328749, -33.89051472, -0.09599935,
2.65689584, 29.54056082],
[-18.8660847 , -7.58120994, 15.57632568, 4.19142695,
8.71046261, 9.05684805, 8.48128361, 0.34166869,
8.41090015, -2.31386572],
[ -4.38999926, 17.00411671, 16.66619606, 24.99390669,
-14.01424591, 19.85606151, -16.87897 , 12.84205521,
-16.78824975, -6.33920671]])
Coordinates:
time (lat, lon) datetime64[ns] 2015-01-01 2015-01-01 ... 2015-01-10
* lat (lat) int64 0 1 2 3 4 5 6 7 8 9
* lon (lon) int64 0 1 2 3 4 5 6 7 8 9
See the great section of the xarray docs on Indexing and Selecting Data for more info, especially the section on Advanced Indexing, which goes into using DataArrays as indexers for powerful reshaping operations.
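Translating that back to the original problem (a sketch, assuming omega shares its lat/lon grid with date_matrix2015 and has a matching cftime-based time coordinate), the same vectorized selection should apply directly:
# 'omega' and 'date_matrix2015' are the arrays from the question;
# indexing with a 2-D DataArray of dates broadcasts over lat/lon,
# returning one omega value per grid point.
omega_at_dates = omega.sel(time=date_matrix2015)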

Increasing the label size in a matplotlib pie chart

I have the following dictionary:
vg_dict = {
    'Electronic Arts': 66,
    'GT Interactive': 1,
    'Palcom': 1,
    'Fox Interactive': 1,
    'LucasArts': 5,
    'Bethesda Softworks': 9,
    'SquareSoft': 3,
    'Nintendo': 142,
    'Virgin Interactive': 4,
    'Atari': 7,
    'Ubisoft': 28,
    'Konami Digital Entertainment': 11,
    'Hasbro Interactive': 1,
    'MTV Games': 1,
    'Sega': 11,
    'Enix Corporation': 4,
    'Capcom': 13,
    'Warner Bros. Interactive Entertainment': 7,
    'Acclaim Entertainment': 1,
    'Universal Interactive': 1,
    'Namco Bandai Games': 7,
    'Eidos Interactive': 9,
    'THQ': 7,
    'RedOctane': 1,
    'Sony Computer Entertainment Europe': 3,
    'Take-Two Interactive': 24,
    'Square Enix': 5,
    'Microsoft Game Studios': 22,
    'Disney Interactive Studios': 2,
    'Vivendi Games': 2,
    'Sony Computer Entertainment': 52,
    'Activision': 45,
    '505 Games': 4}
Now the problem I am facing is viewing the labels: they are extremely small and practically invisible.
Can anyone please suggest how to increase the label size?
I have tried the below code:
plt.figure(figsize=(80,80))
plt.pie(vg_dict.values(),labels=vg_dict.keys())
plt.show()
Add the textprops argument to the plt.pie method:
plt.figure(figsize=(80,80))
plt.pie(vg_dict.values(), labels=vg_dict.keys(), textprops={'fontsize': 30})
plt.show()
You can check all the properties of Text object here.
Updated
I don't know if your label order matters? To avoid overlapping labels, you can try to modify your start angle (matplotlib starts drawing the pie counterclockwise from the x-axis) and re-order the "crowded" labels:
vg_dict = {
    'Palcom': 1,
    'Electronic Arts': 66,
    'GT Interactive': 1,
    'LucasArts': 5,
    'Bethesda Softworks': 9,
    'SquareSoft': 3,
    'Nintendo': 142,
    'Virgin Interactive': 4,
    'Atari': 7,
    'Ubisoft': 28,
    'Hasbro Interactive': 1,
    'Konami Digital Entertainment': 11,
    'MTV Games': 1,
    'Sega': 11,
    'Enix Corporation': 4,
    'Capcom': 13,
    'Acclaim Entertainment': 1,
    'Warner Bros. Interactive Entertainment': 7,
    'Universal Interactive': 1,
    'Namco Bandai Games': 7,
    'Eidos Interactive': 9,
    'THQ': 7,
    'RedOctane': 1,
    'Sony Computer Entertainment Europe': 3,
    'Take-Two Interactive': 24,
    'Vivendi Games': 2,
    'Square Enix': 5,
    'Microsoft Game Studios': 22,
    'Disney Interactive Studios': 2,
    'Sony Computer Entertainment': 52,
    'Fox Interactive': 1,
    'Activision': 45,
    '505 Games': 4}
plt.figure(figsize=(80,80))
plt.pie(vg_dict.values(), labels=vg_dict.keys(), textprops={'fontsize': 35}, startangle=-35)
plt.show()
Result:
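As a further option (a sketch, reusing the same vg_dict), with this many small slices a legend can stay readable where direct labels overlap:
import matplotlib.pyplot as plt

# Draw the pie without per-wedge labels and list the names in a
# legend instead; figsize and font size here are just starting points.
fig, ax = plt.subplots(figsize=(12, 12))
wedges, _ = ax.pie(vg_dict.values(), startangle=-35)
ax.legend(wedges, vg_dict.keys(), loc='center left',
          bbox_to_anchor=(1, 0.5), fontsize=12)
plt.tight_layout()
plt.show()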

PostgreSQL crosstab with array row_name

I have the following SQL statement. The inner query ('SELECT ARRAY ... ORDER BY 1,2') works correctly and gives the correct totals for each row_name. When I run the crosstab, the result is incorrect. Changing the ORDER BY in the inner query doesn't seem to change its result, but it changes the outer query's result. I have verified that the types match for crosstab(text, text) for the column headings.
SELECT
    ct.row_name[1:2] AS zonenumber,
    sum(ct.amount1) AS "sumEmploymentamount",
    sum(ct.amount3) AS "sumExport_Consumersamount"
FROM output.crosstab('
    SELECT
        ARRAY[
            zonenumber::text,
            comTypes.commodity_type_name::text,
            year_run::text
        ] AS row_name,
        tab.activity_type_id AS attribute,
        amount AS value
    FROM
        output.all_zonalmakeuse_3 tab,
        output.activity_numbers actNums,
        output.activity_types actTypes,
        output.commodity_numbers comNums,
        output.commodity_types comTypes
    WHERE
        scenario = ''S03'' AND year_run = ''2005'' AND
        amount != ''-Infinity'' AND moru = ''M'' AND
        actNums.activity_type_id = actTypes.activity_type_id AND
        tab.activity = actNums.activitynumber AND
        comNums.commodity_type_id = comTypes.commodity_type_id AND
        tab.commodity = comNums.commoditynumber AND
        (
            comTypes.commodity_type_name = ''Financial'' OR
            comNums.commodity = ''Financial'' OR
            comTypes.commodity_type_name = ''Goods'' OR
            comNums.commodity = ''Goods''
        ) AND
        (
            actTypes.activity_type_name = ''Employment'' OR
            actNums.activity = ''Employment'' OR
            actTypes.activity_type_name = ''Export Consumers'' OR
            actNums.activity = ''Export Consumers''
        )
    ORDER BY 1, 2
'::text, '
    SELECT activity_type_id AS activity
    FROM output.activity_types
    WHERE activity_type_id = 1 OR activity_type_id = 3
'::text
) ct (row_name text[], amount1 double precision, amount3 double precision)
GROUP BY ct.row_name[1:2]
ORDER BY ct.row_name[1:2]::text;
Tables
CREATE TABLE activity_numbers
("activitynumber" int, "activity" varchar(46), "activity_type_id" int)
;
INSERT INTO activity_numbers
("activitynumber", "activity", "activity_type_id")
VALUES
(0, '"AI01AgMinMan"', 1),
(1, '"AI02AgMinProd"', 1),
(2, '"AI03ConMan"', 1),
(3, '"AI04ConProd"', 1),
(4, '"AI05MfgMan"', 1),
(5, '"AI06MfgProd"', 1),
(6, '"AI07TCUMan"', 1),
(7, '"AI08TCUProd"', 1),
(8, '"AI09Whole"', 1),
(9, '"AI10Retail"', 1),
(10, '"AI11FIRE"', 1),
(11, '"AI12PTSci"', 1),
(12, '"AI13ManServ"', 1),
(13, '"AI14PBSOff"', 1),
(14, '"AI15PBSRet"', 1),
(15, '"AI16PSInd"', 1),
(16, '"AI17Religion"', 1),
(17, '"AI18BSOnsite"', 1),
(18, '"AI19PSOnsite"', 1);
CREATE TABLE activity_types
("activity_type_id" int, "activity_type_name" varchar(18))
;
INSERT INTO activity_types
("activity_type_id", "activity_type_name")
VALUES
(1, '"Employment"'),
(2, '"Households"'),
(3, '"Export Consumers"')
;
CREATE TABLE commodity_numbers
("commoditynumber" int, "commodity" varchar(29), "commodity_type_id" int)
;
INSERT INTO commodity_numbers
("commoditynumber", "commodity", "commodity_type_id")
VALUES
(0, '"CG01AgMinDirection"', 1),
(1, '"CG02AgMinOutput"', 1),
(2, '"CG03ConDirection"', 1),
(3, '"CG04ConOutput"', 1),
(4, '"CG05MfgDirection"', 1),
(5, '"CG06MfgOutput"', 1),
(6, '"CS07TCUDirection"', 2),
(7, '"CS08TCUOutput"', 2),
(8, '"CS09WsOutput"', 2),
(9, '"CS10RetailOutput"', 2),
(10, '"CS11FIREOutput"', 2),
(11, '"CS13OthServOutput"', 2),
(12, '"CS14HealthOutput"', 2),
(13, '"CS15GSEdOutput"', 2),
(14, '"CS16HiEdOutput"', 2),
(15, '"CS17GovOutput"', 2),
(16, '"CF18TaxReceipts"', 4),
(17, '"CF19GovSupReceipts"', 4),
(18, '"CF20InvestReceipts"', 4),
(19, '"CF21ReturnInvestReceipts"', 4),
(20, '"CF22CapitalTransferReceipts"', 4)
;
CREATE TABLE commodity_types
("commodity_type_id" int, "commodity_type_name" varchar(23))
;
INSERT INTO commodity_types
("commodity_type_id", "commodity_type_name")
VALUES
(1, '"Goods"'),
(4, '"Financial"')
;
CREATE TABLE all_zonalmakeuse_3
("year_run" int, "scenario" varchar(6), "activity" int, "zonenumber" int, "commodity" int, "moru" varchar(3), "amount" numeric, "activity_type_id" int, "commodity_type_id" int)
;
INSERT INTO all_zonalmakeuse_3
("year_run", "scenario", "activity", "zonenumber", "commodity", "moru", "amount", "activity_type_id", "commodity_type_id")
VALUES
(2005, '"C11a"', 0, 1, 0, '"M"', 1752708.30900861, 1, 1),
(2005, '"C11a"', 0, 3, 0, '"M"', 2785972.97039016, 1, 1),
(2005, '"C11a"', 0, 4, 0, '"M"', 3847879.45910403, 1, 1),
(2005, '"C11a"', 1, 1, 1, '"M"', 26154618.3893068, 1, 1),
(2005, '"C11a"', 1, 3, 1, '"M"', 1663.49609248196, 1, 1),
(2005, '"C11a"', 1, 4, 1, '"M"', 91727.9065950723, 1, 1),
(2005, '"C11a"', 1, 1, 5, '"M"', 855899.319689473, 1, 1),
(2005, '"C11a"', 1, 3, 5, '"M"', 54.4372375336784, 1, 1),
(2005, '"C11a"', 1, 4, 5, '"M"', 3001.75868302327, 1, 1),
(2005, '"C11a"', 2, 1, 2, '"M"', 150885191.664482, 1, 1),
(2005, '"C11a"', 2, 2, 2, '"M"', 99242746.1181359, 1, 1),
(2005, '"C11a"', 2, 3, 2, '"M"', 90993266.1879518, 1, 1),
(2005, '"C11a"', 2, 4, 2, '"M"', 60169908.2975819, 1, 1),
(2005, '"C11a"', 3, 1, 3, '"M"', 642982844.104623, 1, 1),
(2005, '"C11a"', 3, 2, 3, '"M"', 421379496.576106, 1, 1),
(2005, '"C11a"', 3, 3, 3, '"M"', 592125233.320609, 1, 1),
(2005, '"C11a"', 3, 4, 3, '"M"', 400206994.693349, 1, 1),
(2005, '"C11a"', 4, 1, 4, '"M"', 449206658.578704, 1, 1),
(2005, '"C11a"', 4, 2, 4, '"M"', 103823580.173348, 1, 1),
(2005, '"C11a"', 4, 3, 4, '"M"', 181300924.388112, 1, 1),
(2005, '"C11a"', 4, 4, 4, '"M"', 143113096.547075, 1, 1),
(2005, '"C11a"', 5, 1, 1, '"M"', 83889.8852772168, 1, 1),
(2005, '"C11a"', 5, 2, 1, '"M"', 25716.5837854808, 1, 1),
(2005, '"C11a"', 5, 3, 1, '"M"', 10243.7021847824, 1, 1),
(2005, '"C11a"', 5, 4, 1, '"M"', 22406.3296935502, 1, 1),
(2005, '"C11a"', 5, 1, 5, '"M"', 408669650.696034, 1, 1),
(2005, '"C11a"', 5, 2, 5, '"M"', 125278360.769936, 1, 1),
(2005, '"C11a"', 5, 3, 5, '"M"', 49902204.2985933, 1, 1),
(2005, '"C11a"', 5, 4, 5, '"M"', 109152455.018677, 1, 1),
(2005, '"C11a"', 5, 1, 20, '"M"', 161822.743734245, 1, 4),
(2005, '"C11a"', 5, 2, 20, '"M"', 49607.031096612, 1, 4),
(2005, '"C11a"', 5, 3, 20, '"M"', 19759.998336631, 1, 4),
(2005, '"C11a"', 5, 4, 20, '"M"', 43221.5842952059, 1, 4),
(2005, '"C11a"', 7, 1, 1, '"M"', 122316.017730318, 1, 1),
(2005, '"C11a"', 7, 2, 1, '"M"', 20514.5008361246, 1, 1),
(2005, '"C11a"', 7, 3, 1, '"M"', 8431.33094615992, 1, 1),
(2005, '"C11a"', 7, 4, 1, '"M"', 75842.631567318, 1, 1),
(2005, '"C11a"', 13, 1, 5, '"M"', 1195626.97941868, 1, 1),
(2005, '"C11a"', 13, 2, 5, '"M"', 567002.352487648, 1, 1),
(2005, '"C11a"', 13, 3, 5, '"M"', 1104908.87426762, 1, 1),
(2005, '"C11a"', 13, 4, 5, '"M"', 1071325.74253601, 1, 1),
(2005, '"C11a"', 17, 1, 1, '"M"', 751648.370711072, 1, 1),
(2005, '"C11a"', 17, 2, 1, '"M"', 340439.936040081, 1, 1),
(2005, '"C11a"', 17, 3, 1, '"M"', 800477.767008582, 1, 1),
(2005, '"C11a"', 17, 4, 1, '"M"', 489745.223392316, 1, 1),
(2005, '"C11a"', 17, 1, 20, '"M"', 3154907.39011312, 1, 4),
(2005, '"C11a"', 17, 2, 20, '"M"', 1428934.74123601, 1, 4),
(2005, '"C11a"', 17, 3, 20, '"M"', 3359859.9041298, 1, 4),
(2005, '"C11a"', 17, 4, 20, '"M"', 2055616.54193613, 1, 4),
(2005, '"C11a"', 18, 1, 20, '"M"', 2088003.66854949, 1, 4),
(2005, '"C11a"', 18, 2, 20, '"M"', 1310122.52506653, 1, 4),
(2005, '"C11a"', 18, 3, 20, '"M"', 1481450.29636847, 1, 4),
(2005, '"C11a"', 18, 4, 20, '"M"', 3035710.53213605, 1, 4)
;
I have manipulated the query in several ways (changed type casting, ORDER BY, etc.), but always get incorrect values. The row and column headers are at least consistently correct.