I'm working on a pet project on cryptocurrency and Bollinger Bands, and I'm stuck on a problem I'm not able to solve.
Given this table:
CREATE TABLE public.dataset
(
"From_symbol" character varying(10) COLLATE pg_catalog."default" NOT NULL,
"To_symbol" character varying(10) COLLATE pg_catalog."default" NOT NULL,
"Timestamp" timestamp without time zone NOT NULL,
"Open" numeric(18,9),
"High" numeric(18,9),
"Low" numeric(18,9),
"Close" numeric(18,9),
"Volume_From" numeric(18,9),
"Volume_To" numeric(18,9),
"Weighted_Price" numeric(18,9),
"Id" integer NOT NULL DEFAULT nextval('dataset_id_seq'::regclass),
CONSTRAINT dataset_pkey PRIMARY KEY ("From_symbol", "To_symbol", "Timestamp")
);
If I run the following query
SELECT "From_symbol",
"To_symbol",
"Timestamp",
"Open",
"High",
"Low",
"Close",
"Volume_From",
"Volume_To",
"Weighted_Price",
AVG("Close") OVER
(PARTITION BY "Id"
ORDER BY "Id"
ROWS BETWEEN 19 PRECEDING AND CURRENT ROW) AS SMA20,
AVG("Close") OVER
(PARTITION BY "Id"
ORDER BY "Id"
ROWS BETWEEN 19 PRECEDING AND CURRENT ROW) +
STDDEV_SAMP("Close") OVER
(PARTITION BY "Id"
ORDER BY "Id"
ROWS BETWEEN 19 PRECEDING AND CURRENT ROW) * 2 AS "Upper_Bollinger_Band",
AVG("Close") OVER
(PARTITION BY "Id"
ORDER BY "Id"
ROWS BETWEEN 19 PRECEDING AND CURRENT ROW) -
STDDEV_SAMP("Close") OVER
(PARTITION BY "Id"
ORDER BY "Id"
ROWS BETWEEN 19 PRECEDING AND CURRENT ROW) * 2 AS "Lower_Bollinger_Band"
FROM public.dataset;
I get a NULL result for both the upper and lower Bollinger Bands.
While I have a very large dataset (2012-2020), I provide a sample of 57 rows below. This should be enough in case you wish to test it.
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2011-12-31 08:52:00', 4.390000000, 4.390000000, 4.390000000, 4.390000000, 0.455580870, 2.000000019, 4.390000000, 1);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2011-12-31 16:50:00', 4.390000000, 4.390000000, 4.390000000, 4.390000000, 48.000000000, 210.720000000, 4.390000000, 2);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2011-12-31 17:59:00', 4.500000000, 4.570000000, 4.500000000, 4.570000000, 37.862297230, 171.380337530, 4.526411498, 3);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2011-12-31 18:00:00', 4.580000000, 4.580000000, 4.580000000, 4.580000000, 9.000000000, 41.220000000, 4.580000000, 4);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-01 05:16:00', 4.580000000, 4.580000000, 4.580000000, 4.580000000, 1.502000000, 6.879160000, 4.580000000, 5);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-01 16:28:00', 4.840000000, 4.840000000, 4.840000000, 4.840000000, 10.000000000, 48.400000000, 4.840000000, 6);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-01 23:45:00', 5.000000000, 5.000000000, 5.000000000, 5.000000000, 10.100000000, 50.500000000, 5.000000000, 7);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-02 21:04:00', 5.000000000, 5.000000000, 5.000000000, 5.000000000, 19.048000000, 95.240000000, 5.000000000, 8);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-03 12:45:00', 5.320000000, 5.320000000, 5.320000000, 5.320000000, 2.419172930, 12.869999988, 5.320000000, 9);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-03 15:22:00', 5.140000000, 5.140000000, 5.140000000, 5.140000000, 0.680000000, 3.495200000, 5.140000000, 10);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-03 15:54:00', 5.260000000, 5.260000000, 5.260000000, 5.260000000, 29.319391630, 154.219999970, 5.260000000, 11);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-03 16:32:00', 5.290000000, 5.290000000, 5.290000000, 5.290000000, 29.302457470, 155.010000020, 5.290000000, 12);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-03 18:10:00', 5.290000000, 5.290000000, 5.290000000, 5.290000000, 11.285444230, 59.699999977, 5.290000000, 13);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-03 18:14:00', 5.140000000, 5.140000000, 5.140000000, 5.140000000, 0.020000000, 0.102800000, 5.140000000, 14);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-03 18:26:00', 5.290000000, 5.290000000, 5.290000000, 5.290000000, 11.000000000, 58.190000000, 5.290000000, 15);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-03 18:27:00', 5.290000000, 5.290000000, 5.290000000, 5.290000000, 4.010814660, 21.217209551, 5.290000000, 16);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-04 05:17:00', 4.930000000, 4.930000000, 4.930000000, 4.930000000, 2.320000000, 11.437600000, 4.930000000, 17);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-04 06:05:00', 4.930000000, 4.930000000, 4.930000000, 4.930000000, 9.680000000, 47.722400000, 4.930000000, 18);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-04 13:41:00', 5.190000000, 5.190000000, 5.190000000, 5.190000000, 2.641618500, 13.710000015, 5.190000000, 19);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-04 13:57:00', 5.190000000, 5.190000000, 5.190000000, 5.190000000, 8.724470130, 45.279999975, 5.190000000, 20);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-04 16:39:00', 5.190000000, 5.190000000, 5.190000000, 5.190000000, 16.344726030, 84.829128096, 5.190000000, 21);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-04 16:53:00', 5.320000000, 5.320000000, 5.320000000, 5.320000000, 0.186090230, 0.990000024, 5.320000000, 22);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-04 16:54:00', 5.320000000, 5.320000000, 5.320000000, 5.320000000, 10.394736840, 55.299999989, 5.320000000, 23);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-04 17:00:00', 5.360000000, 5.370000000, 5.360000000, 5.370000000, 13.629422720, 73.060000006, 5.360461812, 24);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-04 18:51:00', 5.370000000, 5.570000000, 5.370000000, 5.570000000, 43.312195780, 235.747069370, 5.442972011, 25);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-05 02:40:00', 5.720000000, 5.720000000, 5.720000000, 5.720000000, 5.000000000, 28.600000000, 5.720000000, 26);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-05 04:52:00', 5.750000000, 5.750000000, 5.750000000, 5.750000000, 5.200000000, 29.900000000, 5.750000000, 27);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-05 08:19:00', 5.750000000, 5.790000000, 5.750000000, 5.790000000, 14.800000000, 85.500000000, 5.777027027, 28);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-05 09:58:00', 6.000000000, 6.000000000, 6.000000000, 6.000000000, 2.236666670, 13.420000020, 6.000000000, 29);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-05 10:03:00', 6.000000000, 6.000000000, 6.000000000, 6.000000000, 0.168482700, 1.010896200, 6.000000000, 30);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-05 10:48:00', 6.150000000, 6.150000000, 6.150000000, 6.150000000, 10.000000000, 61.500000000, 6.150000000, 31);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-05 11:08:00', 6.190000000, 6.190000000, 6.190000000, 6.190000000, 0.571890150, 3.540000029, 6.190000000, 32);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-05 11:10:00', 6.190000000, 6.230000000, 6.190000000, 6.230000000, 16.000000000, 99.285718902, 6.205357431, 33);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-05 11:48:00', 6.230000000, 6.250000000, 6.230000000, 6.250000000, 14.000000000, 87.420000000, 6.244285714, 34);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-05 12:20:00', 6.460000000, 6.460000000, 6.460000000, 6.460000000, 0.773993810, 5.000000013, 6.460000000, 35);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-05 12:21:00', 6.460000000, 6.460000000, 6.460000000, 6.460000000, 0.178018570, 1.149999962, 6.460000000, 36);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-05 12:28:00', 6.430000000, 6.430000000, 6.430000000, 6.430000000, 0.311041990, 1.999999996, 6.430000000, 37);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-05 14:07:00', 6.440000000, 6.440000000, 6.440000000, 6.440000000, 0.310559010, 2.000000024, 6.440000000, 38);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-05 14:38:00', 6.430000000, 6.430000000, 6.430000000, 6.430000000, 0.466562990, 3.000000026, 6.430000000, 39);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-05 15:31:00', 6.420000000, 6.420000000, 6.420000000, 6.420000000, 0.311526480, 2.000000002, 6.420000000, 40);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-05 23:50:00', 6.430000000, 6.430000000, 6.430000000, 6.430000000, 0.311526480, 2.003115266, 6.430000000, 41);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-06 00:35:00', 6.440000000, 6.440000000, 6.440000000, 6.440000000, 0.466562990, 3.004665656, 6.440000000, 42);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-06 00:39:00', 6.470000000, 6.470000000, 6.470000000, 6.470000000, 0.952012380, 6.159520099, 6.470000000, 43);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-06 00:41:00', 6.650000000, 6.650000000, 6.650000000, 6.650000000, 20.777443610, 138.170000010, 6.650000000, 44);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-06 00:43:00', 6.650000000, 6.650000000, 6.650000000, 6.650000000, 1.466275650, 9.750733073, 6.650000000, 45);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-06 00:46:00', 6.650000000, 6.650000000, 6.650000000, 6.650000000, 0.499265780, 3.320117437, 6.650000000, 46);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-06 07:02:00', 6.650000000, 6.650000000, 6.650000000, 6.650000000, 1.425497660, 9.479559439, 6.650000000, 47);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-06 07:04:00', 6.690000000, 6.730000000, 6.690000000, 6.730000000, 6.310000000, 42.363858320, 6.713765186, 48);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-06 14:20:00', 6.800000000, 6.900000000, 6.800000000, 6.900000000, 9.310559010, 63.611801268, 6.832221481, 49);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-06 16:21:00', 6.760000000, 6.760000000, 6.760000000, 6.760000000, 0.295857990, 2.000000012, 6.760000000, 50);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-06 16:36:00', 6.500000000, 6.500000000, 6.500000000, 6.500000000, 0.500000000, 3.250000000, 6.500000000, 51);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-06 16:37:00', 6.490000000, 6.490000000, 6.490000000, 6.490000000, 1.540832050, 10.000000005, 6.490000000, 52);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-06 17:37:00', 6.400000000, 6.400000000, 6.400000000, 6.400000000, 0.500000000, 3.200000000, 6.400000000, 53);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-06 18:59:00', 6.400000000, 6.400000000, 6.400000000, 6.400000000, 1.550387590, 9.922480576, 6.400000000, 54);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-06 19:00:00', 6.400000000, 6.400000000, 6.400000000, 6.400000000, 0.838759680, 5.368061952, 6.400000000, 55);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-06 19:42:00', 6.400000000, 6.400000000, 6.400000000, 6.400000000, 9.110852730, 58.309457472, 6.400000000, 56);
INSERT INTO public.dataset VALUES ('BTC', 'USD', '2012-01-06 21:18:00', 6.300000000, 6.300000000, 6.300000000, 6.300000000, 0.500000000, 3.150000000, 6.300000000, 57);
Would you be so kind as to help me understand what I am doing wrong? I traced the problem to the STDDEV usage, but I have no clue why the PARTITION BY clause works with AVG and fails with STDDEV.
I'm running PostgreSQL 12.2 on Ubuntu:
PostgreSQL 12.2 (Ubuntu 12.2-4) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.3.0-8ubuntu1) 9.3.0, 64-bit
Thanks!
The reason is this logic:
AVG("Close") OVER
(PARTITION BY "Id"
------------------^
ORDER BY "Id"
ROWS BETWEEN 19 PRECEDING AND CURRENT ROW
) AS SMA20,
Your Id is unique on each row, so each partition has only one row -- and the sample standard deviation of a single value is not defined.
Presumably, you intend:
AVG(close) OVER
(PARTITION BY from_symbol, to_symbol
ORDER BY timestamp
ROWS BETWEEN 19 PRECEDING AND CURRENT ROW
) AS SMA20,
Notes:
Do not enclose identifiers in double quotes. That just makes it harder to write queries.
Why are you using a sequence when you can simply use GENERATED ALWAYS AS IDENTITY?
Use the timestamp for ordering rather than the id.
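Putting those pieces together, here is a sketch of the full corrected query (trimmed to the relevant columns; the quoting is kept only because the table was created with mixed-case identifiers, and a named WINDOW avoids repeating the frame three times):
SELECT "From_symbol",
       "To_symbol",
       "Timestamp",
       "Close",
       AVG("Close") OVER w AS "SMA20",
       AVG("Close") OVER w + 2 * STDDEV_SAMP("Close") OVER w AS "Upper_Bollinger_Band",
       AVG("Close") OVER w - 2 * STDDEV_SAMP("Close") OVER w AS "Lower_Bollinger_Band"
FROM public.dataset
WINDOW w AS (PARTITION BY "From_symbol", "To_symbol"
             ORDER BY "Timestamp"
             ROWS BETWEEN 19 PRECEDING AND CURRENT ROW);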
Have you checked your results? Do the averages look right to you?
I say this because your Id is unique, so if you PARTITION BY it you get one-row partitions. While you can average one row, you cannot compute the standard deviation of a single row.
My suggestion would be to remove the PARTITION BY "Id" from all your window functions, since it seems you want to treat the whole table as one partition anyway, or to find the right column to partition by. A good candidate is the From_symbol, To_symbol pair, as you do not want to mix exchange pairs; so my suggestion would be to PARTITION BY "From_symbol", "To_symbol", but you know the data best.
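You can verify the one-row effect directly; this quick check (runnable as-is in psql) shows that the sample standard deviation of a single value is NULL:
SELECT STDDEV_SAMP(x) AS sd
FROM (VALUES (5.0)) AS t(x);
-- sd is NULL: STDDEV_SAMP needs at least two rows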
Related
I have manually populated a table as follows:
INSERT INTO country VALUES
-- columns are countryid, name, continent, population, gdp, lifeexpectancy, founded
(1, 'Argentina', 36.3, 348.2, 70.75, 9/7/1816),
(2, 'Brazil', 183.3, 1004, 65.6, 9/7/1822),
(3, 'Canada', 30.1, 658, 77.08, 1/7/1867),
(4, 'England', 60.8, 1256, 75.15, NULL),
(5, 'France', 60, 1000, 75.15, 14/7/1789),
(6, 'Mexico', 107.5, 694.3, 69.36, 16/9/1821),
(7, 'USA', 270, 8003, 75.75, 4/7/1776),
(8, 'Cuba', 11.7, 16.9, 75.95, 24/2/1895),
(9, 'Guatemala', 20, 200, 60, 15/9/1821),
(10, 'Tanzania', 55.57, 47.43, 60.76, 9/12/1961),
(11, 'India', 1324, 2264, 68.35, NULL),
(12, 'South Africa', 55.91, 294.8, 57.44, 31/05/1910),
(13, 'Costa Rica', 4.86, 57.44, 79.59, 15/9/1821),
(14, 'Uganda', 41.49, 25.53, 59.18, NULL);
but when I query
SELECT lifeexpectancy FROM country;
the query returns a combination of "0" and "[null]" values.
I've tried changing the data type of lifeexpectancy. Neither text nor numeric data types return the values that were entered.
Any suggestions?
You are not inserting the continent. Presumably you want that, but it is not in your sample data.
You should be using standard date formats. In most databases, YYYY-MM-DD works. And you should list the columns. So:
INSERT INTO country (countryid, name, population, gdp, lifeexpectancy, founded)
VALUES (1, 'Argentina', 36.3, 348.2, 70.75, '1816-07-09'),
(2, 'Brazil', 183.3, 1004, 65.6, '1822-07-09'),
(3, 'Canada', 30.1, 658, 77.08, '1867-07-01'),
(4, 'England', 60.8, 1256, 75.15, NULL),
(5, 'France', 60, 1000, 75.15, '1789-07-14'),
(6, 'Mexico', 107.5, 694.3, 69.36, '1821-09-16'),
(7, 'USA', 270, 8003, 75.75, '1776-07-04'),
(8, 'Cuba', 11.7, 16.9, 75.95, '1895-02-24'),
(9, 'Guatemala', 20, 200, 60, '1821-09-15'),
(10, 'Tanzania', 55.57, 47.43, 60.76, '1961-12-09'),
(11, 'India', 1324, 2264, 68.35, NULL),
(12, 'South Africa', 55.91, 294.8, 57.44, '1910-05-31'),
(13, 'Costa Rica', 4.86, 57.44, 79.59, '1821-09-15'),
(14, 'Uganda', 41.49, 25.53, 59.18, NULL);
I believe you're supplying 6 values instead of 7 (maybe because of an auto-increment column); if that's the case, you need to specify the columns explicitly.
Also, pass dates surrounded by single quotes:
INSERT INTO country (countryid, name, population, gdp, lifeexpectancy, founded)
VALUES
(1, 'Argentina', 36.3, 348.2, 70.75, '9/7/1816'),
(2, 'Brazil', 183.3, 1004, 65.6, '9/7/1822'),
(3, 'Canada', 30.1, 658, 77.08, '1/7/1867'),
(4, 'England', 60.8, 1256, 75.15, NULL),
(5, 'France', 60, 1000, 75.15, '14/7/1789'),
(6, 'Mexico', 107.5, 694.3, 69.36, '16/9/1821'),
(7, 'USA', 270, 8003, 75.75, '4/7/1776'),
(8, 'Cuba', 11.7, 16.9, 75.95, '24/2/1895'),
(9, 'Guatemala', 20, 200, 60, '15/9/1821'),
(10, 'Tanzania', 55.57, 47.43, 60.76, '9/12/1961'),
(11, 'India', 1324, 2264, 68.35, NULL),
(12, 'South Africa', 55.91, 294.8, 57.44, '31/05/1910'),
(13, 'Costa Rica', 4.86, 57.44, 79.59, '15/9/1821'),
(14, 'Uganda', 41.49, 25.53, 59.18, NULL);
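One caveat with this variant: day-first strings such as '24/2/1895' only parse correctly when the session expects day/month order. If this is SQL Server, a minimal sketch of making that explicit (SET DATEFORMAT is SQL Server specific):
SET DATEFORMAT dmy; -- interpret 'd/m/yyyy' strings as day-first
INSERT INTO country (countryid, name, population, gdp, lifeexpectancy, founded)
VALUES (1, 'Argentina', 36.3, 348.2, 70.75, '9/7/1816');
-- ...remaining rows as above
The ISO format ('1816-07-09') used in the previous answer avoids the setting entirely.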
I am designing electrical design software that will model an electrical utility system from the incoming power utility right down to the individual circuits, such as computers and coffee machines.
I want to give each component of the system a dedicated table, e.g. transformers, loads, cables, and power panels (called buses in this example).
Each component can be connected to one or many other components. I am using a parent/child table to manage the connections and plan to use a CTE to derive the hierarchical tree structure for a given component.
The voltage supplying any component in the system will be derived by finding the first transformer or utility in the tree.
I have developed a query that handles this, as demonstrated below.
However, it only works for selecting one component in the CTE. I am looking for a way to select all buses and their connected voltage (nearest trafo or utility). The only solution I can come up with is to wrap the query in a table function. Is there a better way of doing this?
CREATE TABLE #componentConnection
(componentConnectionID int, parentComponentID varchar(4), childComponentID int)
;
INSERT INTO #componentConnection
(componentConnectionID, parentComponentID, childComponentID)
VALUES
(1, '13', 18),
(2, '13', 19),
(3, '13', 20),
(4, '13', 21),
(5, '13', 22),
(6, '13', 23),
(7, '14', 24),
(8, '14', 25),
(9, '14', 26),
(10, '14', 27),
(11, '14', 28),
(12, '14', 29),
(13, '15', 30),
(14, '15', 31),
(15, '15', 32),
(16, '15', 33),
(17, '15', 34),
(18, '15', 35),
(19, '16', 36),
(20, '16', 37),
(21, '16', 38),
(22, '16', 39),
(23, '16', 40),
(24, '16', 41),
(25, '1', 5),
(27, '5', 13),
(28, NULL, 1),
(29, '18', 6),
(30, '6', 11),
(31, '11', 7),
(32, '7', 14)
;
CREATE TABLE #component
(componentID int, componentName varchar(8), componentType varchar(7))
;
INSERT INTO #component
(componentID, componentName, componentType)
VALUES
(1, 'Utility1', 'utility'),
(2, 'Utility2', 'utility'),
(3, 'utility3', 'utility'),
(4, 'utility4', 'utility'),
(5, 'Cable1', 'cable'),
(6, 'Cable2', 'cable'),
(7, 'Cable3', 'cable'),
(8, 'Cable4', 'cable'),
(9, 'Cable5', 'cable'),
(10, 'Cable6', 'cable'),
(11, 'Trafo1', 'trafo'),
(12, 'Trafo2', 'trafo'),
(13, 'Bus1', 'bus'),
(14, 'Bus2', 'bus'),
(15, 'Bus3', 'bus'),
(16, 'Bus4', 'bus'),
(17, 'Bus5', 'bus'),
(18, 'cub1', 'cir'),
(19, 'cub2', 'cir'),
(20, 'cub3', 'cir'),
(21, 'cub4', 'cir'),
(22, 'cub5', 'cir'),
(23, 'cub6', 'cir'),
(24, 'cub1', 'cir'),
(25, 'cub2', 'cir'),
(26, 'cub3', 'cir'),
(27, 'cub4', 'cir'),
(28, 'cub5', 'cir'),
(29, 'cub6', 'cir'),
(30, 'cub1', 'cir'),
(31, 'cub2', 'cir'),
(32, 'cub3', 'cir'),
(33, 'cub4', 'cir'),
(34, 'cub5', 'cir'),
(35, 'cub6', 'cir'),
(36, 'cub1', 'cir'),
(37, 'cub2', 'cir'),
(38, 'cub3', 'cir'),
(39, 'cub4', 'cir'),
(40, 'cub5', 'cir'),
(41, 'cub6', 'cir')
;
CREATE TABLE #utility
([utilityID] int, [componentID] int, [utlityKV] float)
;
INSERT INTO #utility
([utilityID], [componentID], [utlityKV])
VALUES
(1, 1, 0.4),
(2, 2, 0.208),
(4, 3, 0.48),
(5, 4, 0.208)
;
CREATE TABLE #transformer
([transformerID] int, [componentID] int, [facilityID] int, [transformerName] varchar(4), [transformerPrimaryTapKv] float, [transformerSecondaryTapKv] float, [transformerPrimaryKv] float, [transformerSecondaryKv] float)
;
INSERT INTO #transformer
([transformerID], [componentID], [facilityID], [transformerName], [transformerPrimaryTapKv], [transformerSecondaryTapKv], [transformerPrimaryKv], [transformerSecondaryKv])
VALUES
(3, 11, 1, NULL, 0.48, 0.208, 0.48, 0.208),
(4, 12, 2, NULL, 0.48, 0.4, 0.48, 0.4)
;
CREATE TABLE #Bus
([busID] int, [busTypeID] int, [componentID] int, [bayID] int, [busName] varchar(4), [busConductorType] varchar(6), [busRatedCurrent] int)
;
INSERT INTO #Bus
([busID], [busTypeID], [componentID], [bayID], [busName], [busConductorType], [busRatedCurrent])
VALUES
(8, 1, 13, 1, 'bus1', 'Copper', 60),
(9, 1, 14, 1, 'bus2', 'copper', 50),
(10, 2, 15, 1, 'bus3', 'copper', 35),
(11, 2, 16, 1, 'bus4', 'copper', 35),
(13, 1, 17, 1, 'bus5', 'copper', 50)
;
WITH CTE AS (SELECT childComponentID AS SourceID, childComponentID, 0 AS depth
FROM #ComponentConnection
UNION ALL
SELECT C1.SourceID, C.childComponentID, c1.depth + 1 AS depth
FROM #ComponentConnection AS C INNER JOIN
CTE AS C1 ON C.parentComponentID = C1.childComponentID)
SELECT childComponentID,b.busName, min(depth)
--,c.componentType
,isnull(t.transformerSecondaryKv,u.utlityKV) kV
FROM CTE AS CTE1
join #Component c
on CTE1.SourceID = c.componentID
left join #Utility u
on CTE1.SourceID = u.componentID
left join #Transformer t
on CTE1.SourceID = t.componentID
LEFT JOIN #Bus b
on cte1.childComponentID = b.componentID
where busName is not null and c.componentType in ('Utility','trafo')
group by childComponentID,b.busName,isnull(t.transformerSecondaryKv,u.utlityKV)
order by depth
The desired result would be as follows for a bus. I want to list all buses and their associated voltage: select everything from the Bus table and derive the voltage from the hierarchical structure.
Result
BusName | Voltage
Bus 1 | 0.4
Bus 2 | 0.208
Bus 3 | etc
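One way to cover every bus in a single statement (a sketch against the sample tables above, not a definitive implementation): seed the recursion from each bus, walk up the parent chain, and keep the nearest trafo or utility per bus with ROW_NUMBER:
WITH Upward AS (
    -- depth 0: each bus's immediate parent
    SELECT b.componentID AS busComponentID,
           CAST(cc.parentComponentID AS int) AS parentComponentID,
           0 AS depth
    FROM #Bus b
    INNER JOIN #componentConnection cc ON cc.childComponentID = b.componentID
    UNION ALL
    -- climb one level per iteration; stops at the NULL root
    SELECT u.busComponentID,
           CAST(cc.parentComponentID AS int),
           u.depth + 1
    FROM Upward u
    INNER JOIN #componentConnection cc ON cc.childComponentID = u.parentComponentID
)
SELECT busName, kV
FROM (SELECT b.busName,
             ISNULL(t.transformerSecondaryKv, u2.utlityKV) AS kV,
             ROW_NUMBER() OVER (PARTITION BY up.busComponentID ORDER BY up.depth) AS rn
      FROM Upward up
      INNER JOIN #component c ON c.componentID = up.parentComponentID
      INNER JOIN #Bus b       ON b.componentID = up.busComponentID
      LEFT JOIN #transformer t ON t.componentID = c.componentID
      LEFT JOIN #utility u2    ON u2.componentID = c.componentID
      WHERE c.componentType IN ('utility', 'trafo')) x
WHERE rn = 1
ORDER BY busName;
On the sample data this resolves Bus1 to 0.4 (Utility1) and Bus2 to 0.208 (Trafo1's secondary); buses with no parent connection simply do not appear.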
I have a situation where at some point in the past some records in a table were modified to have duplicated information.
Consider an example below:
create table #CustomerExample
(
CustomerRecordId int,
CustomerId int,
CustomerName varchar(255),
CurrentCustomerValue varchar(255)
);
create table #CustomerExampleLog
(
LogId int,
CustomerRecordId int,
CustomerId int,
LogCreateDate datetime,
NewCustomerValue varchar(255)
);
insert #CustomerExample
values
(1, 100, 'Customer 1', 'Value X'),
(2, 100, 'Customer 1', 'Value X'),
(3, 200, 'Customer 2', 'Value Z'),
(4, 200, 'Customer 2', 'Value Z'),
(5, 200, 'Customer 2', 'Value Z');
insert #CustomerExampleLog
values
(1, 1, 100, '1/1/2014', 'Value B'),
(2, 1, 100, '2/1/2014', 'Value C'),
(3, 1, 100, '3/1/2014', 'Value B'),
(4, 1, 100, '4/1/2014', 'Value X'),
(5, 1, 100, '5/1/2014', 'Value X'),
(6, 1, 100, '6/1/2014', 'Value X'),
(7, 2, 100, '1/1/2014', 'Value D'),
(8, 2, 100, '2/1/2014', 'Value E'),
(9, 2, 100, '3/1/2014', 'Value F'),
(10, 2, 100, '4/1/2014', 'Value G'),
(11, 2, 100, '5/1/2014', 'Value X'),
(12, 2, 100, '6/1/2014', 'Value X'),
(13, 3, 200, '1/2/2014', 'Value A'),
(14, 3, 200, '1/3/2014', 'Value A'),
(15, 3, 200, '1/4/2014', 'Value B'),
(16, 3, 200, '1/5/2014', 'Value Z'),
(17, 4, 200, '1/2/2014', 'Value A'),
(18, 4, 200, '1/3/2014', 'Value A'),
(19, 4, 200, '1/4/2014', 'Value Z');
Originally "Customer 1" and "Customer 2" had different values in CustomerValue column for each record in [#CustomerExample] table. However, due to lack of a proper unique constraint, a bunch of "bad" UPDATE statements resulted in duplicated info. The updates were logged to [#CustomerExampleLog] table, which contains only the ID of the updated record, the update date, and the new value. My goal is to re-trace the log entries and revert one of the duplicates to it's "last known good" value before it became a dupe.
Ideally, I want to revert the CurrentCustomerValue for one of the dupes to a previous value. In the above example it would be the LogId=3 for CustomerRecordId=1, and LogId=15 for CustomerRecordId=3.
I am completely stumped.
Do you want something like this?
SELECT *
, prev_value = (
SELECT TOP 1 NewCustomerValue
FROM #CustomerExampleLog l
WHERE c.CustomerRecordId = l.CustomerRecordId
AND l.NewCustomerValue <> c.CurrentCustomerValue
ORDER BY LogCreateDate DESC
)
FROM #CustomerExample c
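Against the sample data, this should return (derived by hand from the log):
CustomerRecordId | prev_value
1 | Value B
2 | Value G
3 | Value B
4 | Value A
5 | NULL
which lines up with the LogId=3 and LogId=15 rows called out in the question.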
If you are looking to do it selectively (one record at a time), this would update the value.
UPDATE c
SET CurrentCustomerValue = a.NewCustomerValue
FROM #CustomerExample c
INNER JOIN #CustomerExampleLog a
    ON a.CustomerRecordId = c.CustomerRecordId
WHERE a.LogId IN (SELECT MAX(l.LogId)
                  FROM #CustomerExampleLog l
                  INNER JOIN #CustomerExample ce
                      ON l.CustomerRecordId = ce.CustomerRecordId
                      AND l.NewCustomerValue <> ce.CurrentCustomerValue
                  WHERE l.CustomerRecordId = @custid); -- @custid: the CustomerRecordId to revert
In the Enrollment_Changes table, the phone model listed is the phone the subscriber changed FROM on that date.
If there is no subsequent change in Enrollment_Changes, the phone the subscriber changed TO is listed in the P_Enrollment table.
For example, subscriber 12345678 enrolled on 1/5/2011 with a RAZR. On 11/1/2011 he changed FROM the RAZR. You can see what he changed TO with the next transaction on Enrollment_Changes on 05/19/2012.
How would you find the Count of subs that first enrolled with the iPhone 3?
Here is the code I have for creating the tables
Create Tables: TBL 1
USE [Test2]
GO
/****** Object: Table [dbo].[P_ENROLLMENT] ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[P_ENROLLMENT](
[Subid ] [float] NULL,
[Enrollment_Date] [datetime] NULL,
[Channel] [nvarchar](255) NULL,
[Region] [nvarchar](255) NULL,
[Active_Status] [float] NULL,
[Drop_Date] [datetime] NULL,
[Phone_Model] [nvarchar](255) NULL
) ON [PRIMARY]
GO
TBL 2
USE [Test2]
GO
/****** Object: Table [dbo].[ENROLLMENT_CHANGES] ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[ENROLLMENT_CHANGES](
[Subid] [float] NULL,
[Cdate] [datetime] NULL,
[Phone_Model] [nvarchar](255) NULL
) ON [PRIMARY]
GO
Insert TBL1
INSERT INTO [P_ENROLLMENT]([Subid ], [Enrollment_Date], [Channel], [Region], [Active_Status], [Drop_Date], [Phone_Model])
VALUES(12345678, '2011-01-05 00:00:00', 'Retail', 'Southeast', 1, NULL, 'iPhone 4');
INSERT INTO [P_ENROLLMENT]([Subid ], [Enrollment_Date], [Channel], [Region], [Active_Status], [Drop_Date], [Phone_Model])
VALUES(12346178, '2011-03-13 00:00:00', 'Indirect Dealers', 'West', 1, NULL, 'HTC Hero');
INSERT INTO [P_ENROLLMENT]([Subid ], [Enrollment_Date], [Channel], [Region], [Active_Status], [Drop_Date], [Phone_Model])
VALUES(12346679, '2011-05-19 00:00:00', 'Indirect Dealers', 'Southeast', 0, '2012-03-15 00:00:00', 'Droid 2');
INSERT INTO [P_ENROLLMENT]([Subid ], [Enrollment_Date], [Channel], [Region], [Active_Status], [Drop_Date], [Phone_Model])
VALUES(12347190, '2011-07-25 00:00:00', 'Retail', 'Northeast', 0, '2012-05-21 00:00:00', 'iPhone 4');
INSERT INTO [P_ENROLLMENT]([Subid ], [Enrollment_Date], [Channel], [Region], [Active_Status], [Drop_Date], [Phone_Model])
VALUES(12347701, '2011-08-14 00:00:00', 'Indirect Dealers', 'West', 1, NULL, 'HTC Hero');
INSERT INTO [P_ENROLLMENT]([Subid ], [Enrollment_Date], [Channel], [Region], [Active_Status], [Drop_Date], [Phone_Model])
VALUES(12348212, '2011-09-30 00:00:00', 'Retail', 'West', 1, NULL, 'Droid 2');
INSERT INTO [P_ENROLLMENT]([Subid ], [Enrollment_Date], [Channel], [Region], [Active_Status], [Drop_Date], [Phone_Model])
VALUES(12348723, '2011-10-20 00:00:00', 'Retail', 'Southeast', 1, NULL, 'Southeast');
INSERT INTO [P_ENROLLMENT]([Subid ], [Enrollment_Date], [Channel], [Region], [Active_Status], [Drop_Date], [Phone_Model])
VALUES(12349234, '2012-01-06 00:00:00', 'Indirect Dealers', 'West', 0, '2012-02-14 00:00:00', 'West');
INSERT INTO [P_ENROLLMENT]([Subid ], [Enrollment_Date], [Channel], [Region], [Active_Status], [Drop_Date], [Phone_Model])
VALUES(12349745, '2012-01-26 00:00:00', 'Retail', 'Northeast', 0, '2012-04-15 00:00:00', 'HTC Hero');
INSERT INTO [P_ENROLLMENT]([Subid ], [Enrollment_Date], [Channel], [Region], [Active_Status], [Drop_Date], [Phone_Model])
VALUES(12350256, '2012-02-11 00:00:00', 'Retail', 'Southeast', 1, NULL, 'iPhone 4');
INSERT INTO [P_ENROLLMENT]([Subid ], [Enrollment_Date], [Channel], [Region], [Active_Status], [Drop_Date], [Phone_Model])
VALUES(12350767, '2012-03-02 00:00:00', 'Indirect Dealers', 'West', 1, NULL, 'Sidekick');
INSERT INTO [P_ENROLLMENT]([Subid ], [Enrollment_Date], [Channel], [Region], [Active_Status], [Drop_Date], [Phone_Model])
VALUES(12351278, '2012-04-18 00:00:00', 'Retail', 'Midwest', 1, NULL, 'iPhone 3');
INSERT INTO [P_ENROLLMENT]([Subid ], [Enrollment_Date], [Channel], [Region], [Active_Status], [Drop_Date], [Phone_Model])
VALUES(12351789, '2012-05-08 00:00:00', 'Indirect Dealers', 'West', 0, '2012-07-04 00:00:00', 'iPhone 3');
INSERT INTO [P_ENROLLMENT]([Subid ], [Enrollment_Date], [Channel], [Region], [Active_Status], [Drop_Date], [Phone_Model])
VALUES(12352300, '2012-06-24 00:00:00', 'Retail', 'Midwest', 1, NULL, 'Droid 2');
INSERT INTO [P_ENROLLMENT]([Subid ], [Enrollment_Date], [Channel], [Region], [Active_Status], [Drop_Date], [Phone_Model])
VALUES(12352811, '2012-06-25 00:00:00', 'Retail', 'Southeast', 1, NULL, 'Sidekick');
Insert TBL2
INSERT INTO [ENROLLMENT_CHANGES]([Subid], [Cdate], [Phone_Model])
VALUES(12345678, '2011-11-01 00:00:00', 'RAZR');
INSERT INTO [ENROLLMENT_CHANGES]([Subid], [Cdate], [Phone_Model])
VALUES(12346178, '2012-01-07 00:00:00', 'HTC Hero');
INSERT INTO [ENROLLMENT_CHANGES]([Subid], [Cdate], [Phone_Model])
VALUES(12348723, '2012-01-28 00:00:00', 'RAZR');
INSERT INTO [ENROLLMENT_CHANGES]([Subid], [Cdate], [Phone_Model])
VALUES(12350256, '2012-02-21 00:00:00', 'Blackberry Bold');
INSERT INTO [ENROLLMENT_CHANGES]([Subid], [Cdate], [Phone_Model])
VALUES(12349745, '2012-05-05 00:00:00', 'HTC Hero');
INSERT INTO [ENROLLMENT_CHANGES]([Subid], [Cdate], [Phone_Model])
VALUES(12345678, '2012-05-19 00:00:00', 'Palm Pre');
INSERT INTO [ENROLLMENT_CHANGES]([Subid], [Cdate], [Phone_Model])
VALUES(12347190, '2012-05-20 00:00:00', 'HTC Hero');
INSERT INTO [ENROLLMENT_CHANGES]([Subid], [Cdate], [Phone_Model])
VALUES(12350256, '2012-05-21 00:00:00', 'Blackberry Bold');
INSERT INTO [ENROLLMENT_CHANGES]([Subid], [Cdate], [Phone_Model])
VALUES(12349234, '2012-06-04 00:00:00', 'Palm Pre');
INSERT INTO [ENROLLMENT_CHANGES]([Subid], [Cdate], [Phone_Model])
VALUES(12346178, '2012-06-05 00:00:00', 'iPhone 3');
INSERT INTO [ENROLLMENT_CHANGES]([Subid], [Cdate], [Phone_Model])
VALUES(12350767, '2012-06-10 00:00:00', 'iPhone 3');
For the count
select COUNT(*) Total
from
(
select e.*,
rn = row_number() over (partition by e.subid order by c.cdate), -- earliest change first
first_model = coalesce(c.phone_model, e.phone_model)
from [P_ENROLLMENT] e
left join [ENROLLMENT_CHANGES] c on c.subid = e.subid
) x
where rn=1 and first_model = 'iPhone 3'
For all the records
select *
from
(
select e.*,
rn = row_number() over (partition by e.subid order by c.cdate), -- earliest change first
first_model = coalesce(c.phone_model, e.phone_model)
from [P_ENROLLMENT] e
left join [ENROLLMENT_CHANGES] c on c.subid = e.subid
) x
where rn=1 and first_model = 'iPhone 3'
order by subid
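With the ascending order, the sample data should yield three subscribers: 12351278 and 12351789 (no changes, enrolled with an iPhone 3) plus 12350767 (whose earliest change lists iPhone 3 as the model changed from).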
You want to know if the first record in the table is an iPhone 3. Something like this:
select count(*)
from (select e.*,
row_number() over (partition by subid order by enrollment_date) as seqnum
from p_enrollment e
) e
where seqnum = 1 and phone_model = 'iPhone 3'
Perhaps I'm thinking too simply, but wouldn't either of the following do what you're looking for?
SELECT Phone_Model
, COUNT(*) AS Initially_Enrolled
FROM p_enrollment
GROUP BY Phone_Model
(working SQLFiddle: http://sqlfiddle.com/#!3/68258/4)
or
SELECT COUNT(*) AS Initially_Enrolled
FROM p_enrollment
WHERE Phone_Model = 'iPhone 3'
(working SQLFiddle: http://sqlfiddle.com/#!3/68258/3)
Since you only want initial enrollment, the ENROLLMENT_CHANGES table is irrelevant.
While running the following SQL query:
INSERT INTO Countries ('sno', 'Name' ) VALUES
(1, 'Afghanistan'),
(2, 'Albania'),
(3, 'Algeria'),
(4, 'American Samoa'),
(5, 'Andorra'),
(6, 'Angola'),
(7, 'Anguilla'),
(8, 'Antarctica'),
(9, 'Antigua and Barbuda'),
(10, 'Argentina'),
(11, 'Armenia'),
(12, 'Armenia'),
(13, 'Aruba'),
(14, 'Australia'),
(15, 'Austria'),
(16, 'Azerbaijan'),
(17, 'Azerbaijan'),
(18, 'Bahamas'),
(19, 'Bahrain'),
(20, 'Bangladesh'),
(21, 'Barbados'),
(22, 'Belarus'),
(23, 'Belgium'),
(24, 'Belize'),
(25, 'Benin'),
(26, 'Bermuda'),
(27, 'Bhutan'),
(28, 'Bolivia'),
(29, 'Bosnia and Herzegovina'),
(30, 'Botswana'),
(31, 'Bouvet Island'),
(32, 'Brazil'),
(33, 'British Indian Ocean Territory'),
(34, 'Brunei Darussalam'),
(35, 'Bulgaria'),
(36, 'Burkina Faso'),
(37, 'Burundi'),
(38, 'Cambodia'),
(39, 'Cameroon'),
(40, 'Canada'),
(41, 'Cape Verde'),
(42, 'Cayman Islands'),
(43, 'Central African Republic'),
(44, 'Chad'),
(45, 'Chile'),
(46, 'China'),
(47, 'Christmas Island'),
(48, 'Cocos (Keeling) Islands'),
(49, 'Colombia'),
(50, 'Comoros'),
(51, 'Congo'),
(52, 'Congo, The Democratic Republic of The'),
(53, 'Cook Islands'),
(54, 'Costa Rica'),
(55, 'Cote Divoire'),
(56, 'Croatia'),
(57, 'Cuba'),
(58, 'Cyprus'),
(59, 'Cyprus'),
(60, 'Czech Republic'),
(61, 'Denmark'),
(62, 'Djibouti'),
(63, 'Dominica'),
(64, 'Dominican Republic'),
(65, 'Easter Island'),
(66, 'Ecuador'),
(67, 'Egypt'),
(68, 'El Salvador'),
(69, 'Equatorial Guinea'),
(70, 'Eritrea'),
(71, 'Estonia'),
(72, 'Ethiopia'),
(73, 'Falkland Islands (Malvinas)'),
(74, 'Faroe Islands'),
(75, 'Fiji'),
(76, 'Finland'),
(77, 'France'),
(78, 'French Guiana'),
(79, 'French Polynesia'),
(80, 'French Southern Territories'),
(81, 'Gabon'),
(82, 'Gambia'),
(83, 'Georgia'),
(84, 'Georgia'),
(85, 'Germany'),
(86, 'Ghana'),
(87, 'Gibraltar'),
(88, 'Greece'),
(89, 'Greenland'),
(90, 'Greenland'),
(91, 'Grenada'),
(92, 'Guadeloupe'),
(93, 'Guam'),
(94, 'Guatemala'),
(95, 'Guinea'),
(96, 'Guinea-bissau'),
(97, 'Guyana'),
(98, 'Haiti'),
(99, 'Heard Island and Mcdonald Islands'),
(100, 'Honduras'),
(101, 'Hong Kong'),
(102, 'Hungary'),
(103, 'Iceland'),
(104, 'India'),
(105, 'Indonesia'),
(106, 'Indonesia'),
(107, 'Iran'),
(108, 'Iraq'),
(109, 'Ireland'),
(110, 'Israel'),
(111, 'Italy'),
(112, 'Jamaica'),
(113, 'Japan'),
(114, 'Jordan'),
(115, 'Kazakhstan'),
(116, 'Kazakhstan'),
(117, 'Kenya'),
(118, 'Kiribati'),
(119, 'Korea, North'),
(120, 'Korea, South'),
(121, 'Kosovo'),
(122, 'Kuwait'),
(123, 'Kyrgyzstan'),
(124, 'Laos'),
(125, 'Latvia'),
(126, 'Lebanon'),
(127, 'Lesotho'),
(128, 'Liberia'),
(129, 'Libyan Arab Jamahiriya'),
(130, 'Liechtenstein'),
(131, 'Lithuania'),
(132, 'Luxembourg'),
(133, 'Macau'),
(134, 'Macedonia'),
(135, 'Madagascar'),
(136, 'Malawi'),
(137, 'Malaysia'),
(138, 'Maldives'),
(139, 'Mali'),
(140, 'Malta'),
(141, 'Marshall Islands'),
(142, 'Martinique'),
(143, 'Mauritania'),
(144, 'Mauritius'),
(145, 'Mayotte'),
(146, 'Mexico'),
(147, 'Micronesia, Federated States of'),
(148, 'Moldova, Republic of'),
(149, 'Monaco'),
(150, 'Mongolia'),
(151, 'Montenegro'),
(152, 'Montserrat'),
(153, 'Morocco'),
(154, 'Mozambique'),
(155, 'Myanmar'),
(156, 'Namibia'),
(157, 'Nauru'),
(158, 'Nepal'),
(159, 'Netherlands'),
(160, 'Netherlands Antilles'),
(161, 'New Caledonia'),
(162, 'New Zealand'),
(163, 'Nicaragua'),
(164, 'Niger'),
(165, 'Nigeria'),
(166, 'Niue'),
(167, 'Norfolk Island'),
(168, 'Northern Mariana Islands'),
(169, 'Norway'),
(170, 'Oman'),
(171, 'Pakistan'),
(172, 'Palau'),
(173, 'Palestinian Territory'),
(174, 'Panama'),
(175, 'Papua New Guinea'),
(176, 'Paraguay'),
(177, 'Peru'),
(178, 'Philippines'),
(179, 'Pitcairn'),
(180, 'Poland'),
(181, 'Portugal'),
(182, 'Puerto Rico'),
(183, 'Qatar'),
(184, 'Reunion'),
(185, 'Romania'),
(186, 'Russia'),
(187, 'Russia'),
(188, 'Rwanda'),
(189, 'Saint Helena'),
(190, 'Saint Kitts and Nevis'),
(191, 'Saint Lucia'),
(192, 'Saint Pierre and Miquelon'),
(193, 'Saint Vincent and The Grenadines'),
(194, 'Samoa'),
(195, 'San Marino'),
(196, 'Sao Tome and Principe'),
(197, 'Saudi Arabia'),
(198, 'Senegal'),
(199, 'Serbia and Montenegro'),
(200, 'Seychelles'),
(201, 'Sierra Leone'),
(202, 'Singapore'),
(203, 'Slovakia'),
(204, 'Slovenia'),
(205, 'Solomon Islands'),
(206, 'Somalia'),
(207, 'South Africa'),
(208, 'South Georgia and The South Sandwich Islands'),
(209, 'Spain'),
(210, 'Sri Lanka'),
(211, 'Sudan'),
(212, 'Suriname'),
(213, 'Svalbard and Jan Mayen'),
(214, 'Swaziland'),
(215, 'Sweden'),
(216, 'Switzerland'),
(217, 'Syria'),
(218, 'Taiwan'),
(219, 'Tajikistan'),
(220, 'Tanzania, United Republic of'),
(221, 'Thailand'),
(222, 'Timor-leste'),
(223, 'Togo'),
(224, 'Tokelau'),
(225, 'Tonga'),
(226, 'Trinidad and Tobago'),
(227, 'Tunisia'),
(228, 'Turkey'),
(229, 'Turkey'),
(230, 'Turkmenistan'),
(231, 'Turks and Caicos Islands'),
(232, 'Tuvalu'),
(233, 'Uganda'),
(234, 'Ukraine'),
(235, 'United Arab Emirates'),
(236, 'United Kingdom'),
(237, 'United States'),
(238, 'United States Minor Outlying Islands'),
(239, 'Uruguay'),
(240, 'Uzbekistan'),
(241, 'Vanuatu'),
(242, 'Vatican City'),
(243, 'Venezuela'),
(244, 'Vietnam'),
(245, 'Virgin Islands, British'),
(246, 'Virgin Islands, U.S.'),
(247, 'Wallis and Futuna'),
(248, 'Western Sahara'),
(249, 'Yemen'),
(250, 'Yemen'),
(251, 'Zambia'),
(252, 'Zimbabwe');
the following error is raised by Microsoft SQL Server:
Msg 215, Level 16, State 1, Line 1
Parameters supplied for object 'Countries' which is not a function. If the parameters are intended as a table hint, a WITH keyword is required.
Please help me understand the reason.
Don't use single quotes for column names.
INSERT INTO Countries ([sno], [Name])...
You can use brackets normally.
Or you can use double quotes (") if SET QUOTED_IDENTIFIER is ON (it should be on by default).
Currently you have string literals, which causes SQL Server to interpret the code very differently from what you expect.
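For completeness, the double-quote form mentioned above looks like this (first rows only):
SET QUOTED_IDENTIFIER ON;
INSERT INTO Countries ("sno", "Name") VALUES
(1, 'Afghanistan'),
(2, 'Albania');
-- ...remaining rows unchanged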
You could leave the bracketed section out altogether: SQL Fiddle
INSERT INTO Countries VALUES
This is therefore not explicit and not best practice (see the comment by #AaronBertrand), but it is still an alternative.