I have been trying to gather data using Beautiful Soup and Selenium, but I am unable to get more than the first 20 results of the table (there are over 2,000 in total). I saw some related questions and answers suggesting different parsers, so I tried lxml, html.parser, and html5lib, but none worked. I also saw answers suggesting Selenium and webdriver, but I wasn't able to get the entire page with either. This is my code as of right now:
import time
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
from bs4 import BeautifulSoup

driver = webdriver.Chrome(ChromeDriverManager().install())
driver.get('https://coinmarketcap.com/exchanges/uniswap-v2/')

# supposed to scroll to the bottom of the page
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
time.sleep(5)

page = driver.page_source
soup = BeautifulSoup(page, 'lxml')
pairs = soup.find_all('div', attrs={"class": "hmd6df-0 kCRNNr"})
data = [i.find_all('a')[0] for i in pairs]
Any help would be awesome. Thanks.
I have come up with a purely Selenium solution; the following Python script successfully scrapes all the data from the table. I am just printing the rows as an example; you will want to decide how to organize the data yourself.
Update: to expand and scrape all the rows, I scroll to the bottom of the list and then click the "Load More" button, which loads 100 more rows. I iterate this enough times to get through all 1850 rows of data.
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
import time

driver = webdriver.Chrome(ChromeDriverManager().install())
driver.get('https://coinmarketcap.com/exchanges/uniswap-v2/')

total_height = int(driver.execute_script("return document.body.scrollHeight"))
for x in range(20):
    # scroll through the page in small steps so the lazy-loaded rows render
    for i in range(1, total_height, 130):
        driver.execute_script("window.scrollTo(0, {});".format(i))
    # the cookie banner only needs to be dismissed once
    if x == 0:
        driver.find_element_by_css_selector('div.cmc-cookie-policy-banner__close').click()
    # each click loads 100 more rows
    driver.find_element_by_xpath('//button[text() = "Load More"]').click()
    time.sleep(2)

first_column = driver.find_elements_by_css_selector('td.cmc-table__cell.cmc-table__cell--sticky.cmc-table__cell--sortable.cmc-table__cell--left.cmc-table__cell--sort-by__rank > div')
second_column = driver.find_elements_by_css_selector('div.cwwgik-0.bCvAgC')
third_column = driver.find_elements_by_css_selector('div.hmd6df-0.kCRNNr')
fourth_column = driver.find_elements_by_css_selector('div.cmc-table__column-market-pair-volume-24h')
fifth_column = driver.find_elements_by_css_selector('div.cmc-table__column-market-pair-volume-percent')
sixth_column = driver.find_elements_by_css_selector('td.cmc-table__cell.cmc-table__cell--sortable.cmc-table__cell--right.cmc-table__cell--sort-by__quote-usd-effective-liquidity > div')
seventh_column = driver.find_elements_by_css_selector('td.cmc-table__cell.cmc-table__cell--sortable.cmc-table__cell--right.cmc-table__cell--sort-by__fee-type > div')
eighth_column = driver.find_elements_by_css_selector('div.ghkc60-0.fLaXDt')

columns = (first_column, second_column, third_column, fourth_column,
           fifth_column, sixth_column, seventh_column, eighth_column)
for i in range(len(second_column)):
    print(' '.join(col[i].get_attribute("innerText") for col in columns))
Output:
1 USD Coin USDC/WETH $250,651,977 20.81% 935 Percentage Recently
2 Fei Protocol FEI/WETH $157,435,972 13.07% - Percentage Recently
3 WETH WETH/USDT $153,706,066 12.76% 915 Percentage Recently
4 Dai DAI/WETH $58,995,907 4.90% 850 Percentage Recently
5 Tendies TEND/WETH $41,429,102 3.44% - Percentage 778 hours ago
6 Wrapped Bitcoin WBTC/WETH $37,838,161 3.14% 901 Percentage Recently
7 SHIBA INU SHIB/WETH $30,416,126 2.53% 612 Percentage Recently
...
1770 yplutus yPLT/WETH $? 0.00% - Percentage 671 hours ago
1771 Deflect DEFLCT/RFI $? 0.00% - Percentage 96 days ago
1772 Blaze DeFi BNFI/WETH $? 0.00% - Percentage 781 hours ago
1773 Buy-Sell BSE/WETH $? 0.00% - Percentage Recently
1774 PIRANHAS $PIR/WETH $? 0.00% - Percentage Recently
1775 HLand Token HLAND/USDT $? 0.00% - Percentage Recently
1776 Hub - Human Trust Protocol HUB/WETH $? 0.00% - Percentage 126 days ago
1777 WETH WETH/YVS $? 0.00% - Percentage Recently
1778 Reflector.Finance RFCTR/WETH $? 0.00% - Percentage Recently
1779 WETH WETH/R34P $? 0.00% - Percentage Recently
1780 WETH WETH/RFR $? 0.00% 265 Percentage Recently
1781 Golden Ratio Per Liquidity GRPL/WETH $? 0.00% - Percentage Recently
1782 xETH-G xETH-G/WETH $? 0.00% - Percentage 75 days ago
1783 3XT TOKEN 3XT/WETH $? 0.00% - Percentage 62 days ago
1784 Diffract Finance DFR/WETH $? 0.00% - Percentage Recently
1785 Bitpower BPP/WETH $? 0.00% - Percentage Recently
1786 IDL Token IDL/ELYX $? 0.00% - Percentage 77 days ago
1787 IDL Token IDL/WETH $? 0.00% - Percentage 81 days ago
1788 Dai DAI/BCC $? 0.00% - Percentage 42 days ago
1789 Stand Share SAS/USDT $? 0.00% - Percentage 123 days ago
1790 Definex Dswap/USDT $? 0.00% - Percentage 314 hours ago
1791 Vaultz VAULTZ/WETH $? 0.00% - Percentage 781 hours ago
1792 Fission Cash FCX/WETH $? 0.00% - Percentage Recently
1793 DeltaHub Community DHC/USDT $? 0.00% - Percentage 821 hours ago
1794 Dai DAI/DST $? 0.00% - Percentage 111 days ago
1795 AGAr AGAr/WETH $? 0.00% - Percentage 311 hours ago
1796 Basis Cash BAC/Mars $? 0.00% - Percentage 809 hours ago
1797 Tether USDT/NOW $? 0.00% - Percentage 781 hours ago
1798 USD Coin USDC/wCUSD $? 0.00% - Percentage Recently
1799 zzz.finance v2 ZZZV2/WETH $? 0.00% - Percentage 781 hours ago
1800 Rigel Finance RIGEL/WETH $? 0.00% - Percentage Recently
1801 Bitbot Protocol BBP/WETH $? 0.00% - Percentage Recently
1802 HeroSwap HERO/WETH $? 0.00% - Percentage 108 days ago
1803 XUSD Stable XUSD/LINK $? 0.00% - Percentage 69 days ago
1804 XUSD Stable XUSD/DAI $? 0.00% - Percentage 96 days ago
1805 CURE Farm CURE/WETH $? 0.00% - Percentage Recently
1806 WETH WETH/stETH $? 0.00% 518 Percentage Recently
1807 Xstable.Protocol XST/WETH $? 0.00% - Percentage Recently
1808 ZCore WZCR/WETH $? 0.00% - Percentage 94 days ago
1811 Wrapped BIND wBIND/WETH $? 0.00% - Percentage Recently
1812 USDFreeLiquidity USDFL/USDT $? 0.00% - Percentage Recently
1813 USDFreeLiquidity USDFL/DAI $? 0.00% - Percentage 113 hours ago
1814 USDFreeLiquidity USDFL/USDN $? 0.00% - Percentage Recently
1815 WETH WETH/MCX $? 0.00% - Percentage Recently
1816 Mythic Finance MYTHIC/WETH $? 0.00% - Percentage 781 hours ago
1817 Polkabase PBASE/WETH $? 0.00% - Percentage Recently
1818 WETH WETH/RAC $? 0.00% - Percentage Recently
1819 MiraQle MQL/WETH $? 0.00% - Percentage 68 days ago
1820 MiraQle MQL/USDT $? 0.00% - Percentage 68 days ago
1821 Parsiq Boost PRQBOOST/WETH $? 0.00% - Percentage Recently
1822 WETH WETH/PUX $? 0.00% - Percentage Recently
1823 Previse PRVS/WETH $? 0.00% - Percentage Recently
1824 SIMBA Storage SIMBA/USDT $? 0.00% - Percentage 781 hours ago
1825 CryptoPing PING/USDT $? 0.00% - Percentage Recently
1826 Wrapped Bitcoin WBTC/INSTAR $? 0.00% - Percentage Recently
1827 Vow VOW/WETH $? 0.00% - Percentage Recently
1828 Value Set Dollar VSD/DAI $? 0.00% - Percentage 471 hours ago
1829 Value Set Dollar VSD/USDT $? 0.00% - Percentage 110 hours ago
1830 Value Set Dollar VSD/WETH $? 0.00% - Percentage 44 days ago
1831 WETH WETH/DEGENS $? 0.00% - Percentage Recently
1832 Xriba XRA/WETH $? 0.00% - Percentage 630 hours ago
1833 Tower token TOWER/LYM $? 0.00% - Percentage 469 hours ago
1834 Shadetech SHD/WETH $? 0.00% - Percentage Recently
1835 Wrapped Bitcoin WBTC/DGCL $? 0.00% - Percentage 333 hours ago
1836 Rare Pepe rPepe/WETH $? 0.00% - Percentage Recently
1837 xSigma SIG/USDT $? 0.00% - Percentage 52 days ago
1838 Dollar Protocol USDf/USDC $? 0.00% - Percentage 156 hours ago
1839 MYFinance MYFI/WETH $? 0.00% - Percentage Recently
1840 Delta DELTA/WETH $? 0.00% - Percentage Recently
1841 Kambria Yield Tuning Engine KYTE/WETH $? 0.00% - Percentage Recently
1842 ClinTex CTi CTI/WETH $? 0.00% - Percentage Recently
1843 BasenjiDAO BSJ/WETH $? 0.00% - Percentage Recently
1844 EURxb EURxb/USDT $? 0.00% - Percentage Recently
1845 Landbox LAND/WETH $? 0.00% - Percentage 442 hours ago
1846 Folder Protocol FOL/WETH $? 0.00% - Percentage Recently
1847 Databroker DTX/USDT $? 0.00% - Percentage Recently
1848 STATERA STA/WSTA $? 0.00% - Percentage Recently
1849 Delta Exchange Token DETO/WETH $? 0.00% - Percentage Recently
1850 Elongate Deluxe ELongD/WETH $? 0.00% - Percentage Recently
I am using Gurobi 8.1.0 through its Python API on Python 3.6 to solve MIP problems. I have two models whose global optima I believe should be equal. However, in one of my simulations they are not. I then tried to warm-start the model whose solution I believe is incorrect (model-1) with the solution from the other model (model-2). The problem is a maximization; the objective value of model-1 is 42.3333, but I believe it should be 42.8333, so I used the solution from model-2, which has objective value 42.8333, to warm-start model-1.
What is weird is that the solution from model-2 should not be feasible for model-1, since its objective value is greater than 42.3333 and the problem is a maximization. However, it turns out to be a feasible warm start, and now the optimal value of model-1 is 42.8333. How can the same model have multiple optima?
Changed value of parameter timeLimit to 10800.0
Prev: 1e+100 Min: 0.0 Max: 1e+100 Default: 1e+100
Changed value of parameter LogFile to output/inconsistent_Model-1.log
Prev: gurobi.log Default:
Optimize a model with 11277 rows, 15150 columns and 165637 nonzeros
Model has 5050 general constraints
Variable types: 0 continuous, 15150 integer (5050 binary)
Coefficient statistics:
Matrix range [1e+00, 1e+00]
Objective range [1e-02, 1e+00]
Bounds range [1e+00, 1e+00]
RHS range [1e+00, 5e+01]
Presolve removed 6167 rows and 7008 columns
Presolve time: 0.95s
Presolved: 5110 rows, 8142 columns, 37608 nonzeros
Presolved model has 3058 SOS constraint(s)
Variable types: 0 continuous, 8142 integer (4403 binary)
Warning: Markowitz tolerance tightened to 0.0625
Warning: Markowitz tolerance tightened to 0.125
Warning: Markowitz tolerance tightened to 0.25
Warning: Markowitz tolerance tightened to 0.5
Root relaxation: objective 4.333333e+01, 4856 iterations, 2.15 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 43.33333 0 587 - 43.33333 - - 3s
0 0 43.26667 0 243 - 43.26667 - - 4s
0 0 43.20000 0 1282 - 43.20000 - - 4s
0 0 43.20000 0 567 - 43.20000 - - 4s
0 0 43.18333 0 1114 - 43.18333 - - 5s
0 0 43.16543 0 2419 - 43.16543 - - 5s
0 0 43.15556 0 1575 - 43.15556 - - 5s
0 0 43.15333 0 2271 - 43.15333 - - 5s
0 0 43.13333 0 727 - 43.13333 - - 5s
0 0 43.12778 0 1698 - 43.12778 - - 5s
0 0 43.12500 0 1146 - 43.12500 - - 5s
0 0 43.12500 0 1911 - 43.12500 - - 6s
0 0 43.11927 0 1859 - 43.11927 - - 6s
0 0 43.11845 0 2609 - 43.11845 - - 7s
0 0 43.11845 0 2631 - 43.11845 - - 7s
0 0 43.11845 0 2642 - 43.11845 - - 7s
0 0 43.11845 0 2462 - 43.11845 - - 8s
0 0 43.11845 0 2529 - 43.11845 - - 8s
0 0 43.11845 0 2529 - 43.11845 - - 9s
0 2 43.11845 0 2531 - 43.11845 - - 14s
41 35 43.09874 17 957 - 43.09874 - 29.4 15s
94 84 42.93207 33 716 - 43.09874 - 22.1 31s
117 101 42.91940 40 2568 - 43.09874 - 213 37s
264 175 infeasible 92 - 43.09874 - 133 73s
273 181 infeasible 97 - 43.09874 - 277 77s
293 191 42.42424 17 1828 - 43.09874 - 280 90s
369 249 42.40111 52 2633 - 43.09874 - 311 105s
383 257 42.39608 59 3062 - 43.09874 - 329 152s
408 265 42.39259 65 2819 - 43.09874 - 386 162s
419 274 41.51399 66 2989 - 43.09874 - 401 170s
454 282 41.29938 71 3000 - 43.09874 - 390 182s
462 280 infeasible 74 - 43.09874 - 423 192s
479 287 infeasible 78 - 43.09874 - 419 204s
498 293 40.51287 81 2564 - 43.09874 - 435 207s
526 307 40.16638 86 2619 - 43.09874 - 419 227s
584 330 42.63100 33 621 - 43.09874 - 404 236s
628 333 infeasible 37 - 43.09874 - 394 252s
661 345 42.37500 26 25 - 43.09874 - 396 288s
684 353 infeasible 30 - 43.09874 - 426 290s
842 370 infeasible 69 - 43.09874 - 348 306s
944 379 infeasible 86 - 43.09874 - 321 370s
1009 395 42.36667 22 25 - 43.09874 - 350 409s
* 1031 243 3 42.3333333 43.09874 1.81% 343 409s
1056 203 43.00000 19 141 42.33333 43.09874 1.81% 362 411s
1194 222 cutoff 23 42.33333 43.00000 1.57% 325 430s
1199 219 cutoff 25 42.33333 43.00000 1.57% 349 450s
1202 212 cutoff 29 42.33333 43.00000 1.57% 361 472s
1211 200 infeasible 47 42.33333 42.91851 1.38% 380 498s
1226 169 infeasible 43 42.33333 42.91471 1.37% 395 511s
Cutting planes:
Gomory: 2
Cover: 15
Implied bound: 1
Clique: 26
MIR: 17
Inf proof: 1
Zero half: 8
Explored 1426 nodes (502432 simplex iterations) in 512.68 seconds
Thread count was 4 (of 4 available processors)
Solution count 1: 42.3333
Optimal solution found (tolerance 1.00e-04)
Warning: some integer variables take values larger than the maximum
supported value (2000000000)
Best objective 4.233333333333e+01, best bound 4.233333333333e+01, gap 0.0000%
In addition to the above, I also received this warning:
Optimal solution found (tolerance 1.00e-04)
Warning: some integer variables take values larger than the maximum
supported value (2000000000)
What does it mean? Thank you so much!
It looks like you are encountering numerical trouble. The root relaxation required an increased Markowitz tolerance, which indicates an ill-conditioned matrix. This can lead to inconsistencies like the two different "optimal" solutions you observed.
The warning about overly large values means that some integer variables take solution values so large that the integer feasibility tolerance can no longer be checked reliably. If a variable has a solution value on the order of 1e+9, it probably no longer matters whether it is integer, so you could likely simplify your model by making such variables continuous.
You should check the violations of the two solutions in both models (see here) to see how feasible the solutions actually are.
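As a rough illustration (my sketch, not part of the original answer), the gurobipy API exposes solution-quality information after a solve; assuming a model m that has already been optimized:
# Minimal sketch, assuming a solved gurobipy model `m`.
# printQuality() and the *Vio attributes report how badly the incumbent
# violates bounds, constraints, and integrality within the tolerances.
m.printQuality()
print("max bound violation       :", m.BoundVio)
print("max constraint violation  :", m.ConstrVio)
print("max integrality violation :", m.IntVio)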
I am computing my fuel consumption from an OBD2 parameter, MAF to be specific, and I am receiving data on a per-second basis. Here is a section of my data.
TS RS EngS MAF R MAP EL TD Travel
14:41:22 31 932 1056 98 23978 12130
14:41:23 29 2084 2639 107 23210 12130
14:41:24 32 2154 3867 149 38826 12130
14:41:25 36 2426 4683 184 36266 12130
14:41:26 39 2391 3031 133 682 12130
14:41:27 40 1784 2794 132 30634 12130
14:41:28 42 1864 2853 140 30378 12130
14:41:29 43 1953 2900 132 29098 12130
14:41:30 46 2031 3017 135 29098 12130
14:41:31 45 2027 2969 126 20138 12130
14:41:32 47 2122 4253 174 42154 12130
14:41:33 51 2220 4722 183 20906 12130
Where
TS : Time Stamp,
RS : Road Speed,
EngS : Engine Speed,
MAF R : Mass Air Flow Rate,
MAP : Mass Air Pressure,
EL : Engine Load,
TD Travel : Total Distance Traveled
So basically, from this data I am trying to compute my instantaneous fuel consumption and the mileage in KMPL.
Since the data is per second, I am taking the MAF of each row and using this formula:
Fuel Consumption = MAF / (14.7 * 710),
where 14.7 is the ideal air/fuel ratio
and 710 is the density of gasoline in grams/L.
So this should give my consumption. I am calculating the distance (in km) as RS / 3600, and further dividing distance by fuel consumption to get mileage. However, the calculation is coming out horribly wrong. The mileage of my car is around 14 KMPL. Here are my results.
TS Distance (inKM) Fuel Consum(L) Mileage(KMPL)
14:41:22 0.0086111111 0.1008355216 0.0853975957
14:41:23 0.0080555556 0.2519933158 0.0319673382
14:41:24 0.0088888889 0.369252805 0.0240726374
14:41:25 0.01 0.4471711626 0.0223628016
14:41:26 0.0108333333 0.2894246837 0.0374305785
14:41:27 0.0111111111 0.2667939842 0.0416467828
14:41:28 0.0116666667 0.2724277871 0.0428248043
14:41:29 0.0119444444 0.2769157317 0.0431338602
14:41:30 0.0127777778 0.2880878491 0.0443537546
14:41:31 0.0125 0.2835044163 0.0440910239
14:41:32 0.0130555556 0.4061112437 0.0321477323
14:41:33 0.0141666667 0.4508952017 0.0314189785
Can someone tell me what I am doing so wrong that the computation is this far off? As the formulas are simple, there isn't much scope for error. Thank you.
MAF is in g/s.
MAF (g/s) * (1/14.7) * (1 L / 710 g) = fuel consumption FC in L/s.
Speed V is in KPH (km/hr), so V (km/hr) * (1 hr / 3600 s) = v in km/s.
FC (L/s) / v (km/s) gives L/km; you want km/L, which is v/FC, so your final formula is:
KmPL = V * (1/3600) * (1/MAF) * 14.7 * 710
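A minimal Python sketch of that unit analysis (my illustration, not from the original answer). It assumes MAF is already in g/s and speed in km/h; note that some OBD2 loggers report the raw MAF PID value, which is 100 times the g/s figure, so a reading like 1056 may actually mean 10.56 g/s:
AFR = 14.7    # stoichiometric air/fuel ratio for gasoline
RHO = 710.0   # density of gasoline, g/L

def mileage_kmpl(speed_kmh, maf_gps):
    fuel_lps = maf_gps / (AFR * RHO)  # litres of fuel burned per second
    dist_kmps = speed_kmh / 3600.0    # kilometres covered per second
    return dist_kmps / fuel_lps       # km per litre

# first sample row: 31 km/h; 1056 read as 10.56 g/s (assumed 1/100 scaling)
print(mileage_kmpl(31, 10.56))        # ~8.5 km/L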
Divide the MAF by 14.7 to get grams of fuel per second.
Next, divide by 454 to get lbs of fuel per second.
Next, divide by 6.701 (lbs per gallon of gasoline) to get gallons of fuel per second.
Multiply by 3600 to get gallons per hour.
In other words: GPH = MAF * 0.0805, and then MPG = MPH / GPH.
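The same chain as a quick sketch (my illustration, assuming MAF in g/s and speed in mph):
def mpg(speed_mph, maf_gps):
    gph = maf_gps * 0.0805   # gallons/hour: MAF / 14.7 / 454 / 6.701 * 3600
    return speed_mph / gph   # miles per gallon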
I would like to perform two different calculations across consecutive columns in a pandas or pyspark dataframe.
Columns are weeks and the metrics are displayed as rows.
I want to calculate the actual and percentage differences across the columns.
The input/output tables, including the calculations used in Excel, are shown in the following image.
I want to replicate these calculations on a pandas or pyspark dataframe.
Raw Data Attached:
Metrics Week20 Week21 Week22 Week23 Week24 Week25 Week26 Week27
Sales 20301 21132 20059 23062 19610 22734 22140 20699
TRXs 739 729 690 779 701 736 762 655
Attachment Rate 4.47 4.44 4.28 4.56 4.41 4.58 4.55 4.96
AOV 27.47 28.99 29.07 29.6 27.97 30.89 29.06 31.6
Profit 5177 5389 5115 5881 5001 5797 5646 5278
Profit per TRX 7.01 7.39 7.41 7.55 7.13 7.88 7.41 8.06
In pandas you could use the pct_change(axis=1) and diff(axis=1) methods:
import pandas as pd

df = df.set_index('Metrics')

# list of metrics that get an "actual" (absolute) diff instead of a percentage
actual = ['AOV', 'Attachment Rate']

rep = (df[~df.index.isin(actual)].pct_change(axis=1).round(2) * 100).fillna(0).astype(str).add('%')
rep = pd.concat([rep,
                 df[df.index.isin(actual)].diff(axis=1).fillna(0)])
In [131]: rep
Out[131]:
Week20 Week21 Week22 Week23 Week24 Week25 Week26 Week27
Metrics
Sales 0.0% 4.0% -5.0% 15.0% -15.0% 16.0% -3.0% -7.0%
TRXs 0.0% -1.0% -5.0% 13.0% -10.0% 5.0% 4.0% -14.0%
Profit 0.0% 4.0% -5.0% 15.0% -15.0% 16.0% -3.0% -7.0%
Profit per TRX 0.0% 5.0% 0.0% 2.0% -6.0% 11.0% -6.0% 9.0%
Attachment Rate 0 -0.03 -0.16 0.28 -0.15 0.17 -0.03 0.41
AOV 0 1.52 0.08 0.53 -1.63 2.92 -1.83 2.54
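For the pyspark half of the question, here is a hedged sketch of the same idea (my own illustration, assuming a DataFrame sdf with the columns shown above). Since the weeks are columns, the consecutive differences are plain column arithmetic; this version leaves the results numeric instead of formatting percent strings:
from pyspark.sql import functions as F

weeks = ['Week20', 'Week21', 'Week22', 'Week23',
         'Week24', 'Week25', 'Week26', 'Week27']
actual = ['AOV', 'Attachment Rate']   # metrics that get an absolute diff

diff_cols = [F.lit(0.0).alias(weeks[0])] + [
    F.when(F.col('Metrics').isin(actual), F.col(b) - F.col(a))           # absolute diff
     .otherwise(F.round((F.col(b) - F.col(a)) / F.col(a) * 100, 2))      # percentage diff
     .alias(b)
    for a, b in zip(weeks, weeks[1:])
]
rep = sdf.select('Metrics', *diff_cols)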
Recently I got a new batch of dumps to identify the high memory usage in three of our WCF services, which are hosted in a 64-bit AppPool on Windows Server 2012.
Application one :
ProcessUp Time : 22 days
GC Heap usage : 2.69 Gb
Loaded Modules : 220 Mb
Committed Memory : 3.08 Gb
Native memory : 2 Gb
The issue was identified as large GC heap usage due to unclosed WCF client proxy objects, which account for almost 2.26 Gb; the rest is cache on the GC heap.
Application Two :
ProcessUp Time : 9 Hours
GC Heap usage : 4.43 Gb
Cache size : 2.45 Gb
Loaded Modules : 224 Mb
Committed Memory : 5.13 Gb
Native memory heap : 2 Gb
The issue was identified as most of the objects being System.Web.CacheEntry, due to the large cache size; 2.2 Gb of String objects on the GC heap have roots in CacheRef root objects.
Application Three :
Cache size : 950 Mb
GC heap : 1.2 Gb
Native Heap : 2 Gb
We recently upgraded to Windows Server 2012. I had old dumps as well; those dumps do not show this native heap for the same application, where it was only around 90 Mb.
I also used WinDbg to explore the native heap with the !heap -s command,
which shows very minimal native heap sizes, as shown below.
I am just confused why DebugDiag 2.0 is showing 2 Gb of native heap in every WCF service. My understanding is that !heap -s should dump the same native heaps and match the graphs in the DebugDiag report. The report also shows values in thousands of TBytes.
0:000> !heap -s
LFH Key : 0x53144a890e31e98b
Termination on corruption : ENABLED
Heap Flags Reserv Commit Virt Free List UCR Virt Lock Fast
(k) (k) (k) (k) length blocks cont. heap
-------------------------------------------------------------------------------------
000000fc42c10000 00000002 32656 31260 32552 2885 497 6 2 f LFH
000000fc42a40000 00008000 64 4 64 2 1 1 0 0
000000fc42bf0000 00001002 3228 1612 3124 43 9 3 0 0 LFH
000000fc43400000 00001002 1184 76 1080 1 5 2 0 0 LFH
000000fc43390000 00001002 1184 148 1080 41 7 2 0 0 LFH
000000fc43d80000 00001002 60 8 60 5 1 1 0 0
000000fc433f0000 00001002 60 8 60 5 1 1 0 0
000000fc442a0000 00001002 1184 196 1080 1 6 2 0 0 LFH
000000fc44470000 00041002 60 8 60 5 1 1 0 0
000001008e9f0000 00041002 164 40 60 3 1 1 0 0 LFH
000001008f450000 00001002 3124 1076 3124 1073 3 3 0 0
External fragmentation 99 % (3 free blocks)
-------------------------------------------------------------------------------------
Can anybody explain why the WinDbg command !heap -s and the DebugDiag report vary, or whether I have incorrect knowledge of the above command?
I also used a Pykd script to dump native object stats, which does not show a particularly large number of objects.
Also, what is meant by
External fragmentation 99 % (3 free blocks)
in the above output? I understand that fragmented memory has fewer large blocks of contiguous memory, but I fail to relate that to the percentage.
Edit 1 :
Application 2 :
0:000> !address -summary
Mapping file section regions...
Mapping module regions...
Mapping PEB regions...
Mapping TEB and stack regions...
Mapping heap regions...
Mapping page heap regions...
Mapping other regions...
Mapping stack trace database regions...
Mapping activation context regions...
--- Usage Summary ---------------- RgnCount ----------- Total Size -------- %ofBusy %ofTotal
Free 363 7ffb`04a14000 ( 127.981 Tb) 99.98%
<unknown> 952 4`e8c0c000 ( 19.637 Gb) 98.54% 0.01%
Image 2122 0`0e08d000 ( 224.551 Mb) 1.10% 0.00%
Heap 88 0`03372000 ( 51.445 Mb) 0.25% 0.00%
Stack 124 0`013c0000 ( 19.750 Mb) 0.10% 0.00%
Other 7 0`001be000 ( 1.742 Mb) 0.01% 0.00%
TEB 41 0`00052000 ( 328.000 kb) 0.00% 0.00%
PEB 1 0`00001000 ( 4.000 kb) 0.00% 0.00%
--- Type Summary (for busy) ------ RgnCount ----------- Total Size -------- %ofBusy %ofTotal
MEM_PRIVATE 643 4`ebf44000 ( 19.687 Gb) 98.79% 0.02%
MEM_IMAGE 2655 0`0eb96000 ( 235.586 Mb) 1.15% 0.00%
MEM_MAPPED 37 0`00b02000 ( 11.008 Mb) 0.05% 0.00%
--- State Summary ---------------- RgnCount ----------- Total Size -------- %ofBusy %ofTotal
MEM_FREE 363 7ffb`04a14000 ( 127.981 Tb) 99.98%
MEM_RESERVE 725 3`b300d000 ( 14.797 Gb) 74.25% 0.01%
MEM_COMMIT 2610 1`485cf000 ( 5.131 Gb) 25.75% 0.00%
--- Protect Summary (for commit) - RgnCount ----------- Total Size -------- %ofBusy %ofTotal
PAGE_READWRITE 868 1`3939d000 ( 4.894 Gb) 24.56% 0.00%
PAGE_EXECUTE_READ 157 0`09f10000 ( 159.063 Mb) 0.78% 0.00%
PAGE_READONLY 890 0`035ed000 ( 53.926 Mb) 0.26% 0.00%
PAGE_WRITECOPY 433 0`0149c000 ( 20.609 Mb) 0.10% 0.00%
PAGE_EXECUTE_READWRITE 148 0`0065d000 ( 6.363 Mb) 0.03% 0.00%
PAGE_EXECUTE_WRITECOPY 67 0`0017c000 ( 1.484 Mb) 0.01% 0.00%
PAGE_READWRITE|PAGE_GUARD 41 0`000b9000 ( 740.000 kb) 0.00% 0.00%
PAGE_NOACCESS 4 0`00004000 ( 16.000 kb) 0.00% 0.00%
PAGE_EXECUTE 2 0`00003000 ( 12.000 kb) 0.00% 0.00%
--- Largest Region by Usage ----------- Base Address -------- Region Size ----------
Free 101`070a0000 7ef6`e77b2000 ( 126.964 Tb)
<unknown> fd`72f14000 0`d156c000 ( 3.271 Gb)
Image 7ff9`91344000 0`012e8000 ( 18.906 Mb)
Heap 100`928a0000 0`00544000 ( 5.266 Mb)
Stack fc`43240000 0`0007b000 ( 492.000 kb)
Other fc`42ea0000 0`00181000 ( 1.504 Mb)
TEB 7ff7`ee852000 0`00002000 ( 8.000 kb)
PEB 7ff7`eeaaf000 0`00001000 ( 4.000 kb)
Application Three :
0:000> !address -summary
--- Usage Summary ---------------- RgnCount ----------- Total Size -------- %ofBusy %ofTotal
Free 323 7ffb`9f8ea000 ( 127.983 Tb) 99.99%
<unknown> 832 4`4bbb6000 ( 17.183 Gb) 98.15% 0.01%
Image 2057 0`0e5ab000 ( 229.668 Mb) 1.28% 0.00%
Heap 196 0`04f52000 ( 79.320 Mb) 0.44% 0.00%
Stack 127 0`01440000 ( 20.250 Mb) 0.11% 0.00%
Other 7 0`001be000 ( 1.742 Mb) 0.01% 0.00%
TEB 42 0`00054000 ( 336.000 kb) 0.00% 0.00%
PEB 1 0`00001000 ( 4.000 kb) 0.00% 0.00%
--- Type Summary (for busy) ------ RgnCount ----------- Total Size -------- %ofBusy %ofTotal
MEM_PRIVATE 783 4`51099000 ( 17.266 Gb) 98.63% 0.01%
MEM_IMAGE 2444 0`0ec06000 ( 236.023 Mb) 1.32% 0.00%
MEM_MAPPED 35 0`00a67000 ( 10.402 Mb) 0.06% 0.00%
--- State Summary ---------------- RgnCount ----------- Total Size -------- %ofBusy %ofTotal
MEM_FREE 323 7ffb`9f8ea000 ( 127.983 Tb) 99.99%
MEM_RESERVE 621 3`e3504000 ( 15.552 Gb) 88.83% 0.01%
MEM_COMMIT 2641 0`7d202000 ( 1.955 Gb) 11.17% 0.00%
--- Protect Summary (for commit) - RgnCount ----------- Total Size -------- %ofBusy %ofTotal
PAGE_READWRITE 919 0`6dc07000 ( 1.715 Gb) 9.80% 0.00%
PAGE_EXECUTE_READ 153 0`0a545000 ( 165.270 Mb) 0.92% 0.00%
PAGE_READONLY 734 0`02cf5000 ( 44.957 Mb) 0.25% 0.00%
PAGE_WRITECOPY 470 0`01767000 ( 23.402 Mb) 0.13% 0.00%
PAGE_EXECUTE_READWRITE 240 0`009cf000 ( 9.809 Mb) 0.05% 0.00%
PAGE_EXECUTE_WRITECOPY 76 0`001c5000 ( 1.770 Mb) 0.01% 0.00%
PAGE_READWRITE|PAGE_GUARD 42 0`000be000 ( 760.000 kb) 0.00% 0.00%
PAGE_NOACCESS 5 0`00005000 ( 20.000 kb) 0.00% 0.00%
PAGE_EXECUTE 2 0`00003000 ( 12.000 kb) 0.00% 0.00%
--- Largest Region by Usage ----------- Base Address -------- Region Size ----------
Free 52`892e0000 7fa5`65548000 ( 127.646 Tb)
<unknown> 4f`4ec81000 0`e9c3f000 ( 3.653 Gb)
Image 7ff9`91344000 0`012e8000 ( 18.906 Mb)
Heap 52`8833b000 0`00fa4000 ( 15.641 Mb)
Stack 4e`37a70000 0`0007b000 ( 492.000 kb)
Other 4e`37720000 0`00181000 ( 1.504 Mb)
TEB 7ff7`ee828000 0`00002000 ( 8.000 kb)
PEB 7ff7`eea43000 0`00001000 ( 4.000 kb)
I have a pivot table that is being created like this:
date Services Uptime Services Downtime Centers Downtime Centers Uptime
----- --------- - ------------------ ---------------- ---------------
12/5/14 100.00% 0.00% 100.00% 100.00%
12/12/14 100.00% 0.00% 0.00% 0.00%
12/19/14 100.00% 0.00% 100.00% 0.00%
12/26/14 100.00% 0.00% 100.00% 0.00%
I would like it to come out as a pivot table, like this:
Date Name Uptime Downtime
----- ------ --------- -------------
12/5/14 Services 100.00% 0.00%
12/5/14 Center 100.00% 100.00%
12/12/14 services 100.00% 0.00%
12/12/14 Center 0.00% 0.00%
If you only have those two values, try a UNION:
select [date]
      ,'Services' as Name
      ,[Services Uptime] as Uptime
      ,[Services Downtime] as Downtime
from myTable
union all
select [date]
      ,'Center' as Name
      ,[Centers Uptime] as Uptime
      ,[Centers Downtime] as Downtime
from myTable
Edited to include Jason's suggestion to use "union all".
Maybe you need to unpivot instead of pivot. I would do this using CROSS APPLY with a table-valued constructor.
Performance-wise this will be better than UNION ALL if you have more names.
SELECT [date], Name, Uptime, Downtime
FROM Yourtable
CROSS APPLY (VALUES ('service', Services_Uptime, Services_Downtime),
                    ('center',  Centers_Uptime,  Centers_Downtime)
            ) cs (Name, Uptime, Downtime)