What is the "star" measurement in Expression Blend? - windows-8

I am currently working on a Windows 8 Metro/Modern UI application. Right now, I'm building the interface in Expression Blend for Visual Studio.
My question is this: when sizing UI elements such as grid columns, I can use either pixels, auto, or stars. What is a star in this context? A Google search turns up nothing and I haven't found anything in the Windows 8 developer documentation.
Thank you.

In a Grid, a * means the column (or row) shares the remaining available space with the other * columns (or rows), in proportion to their weights. There are some good WPF examples of how this works here.
From the documentation here:
starSizing
A convention by which you can size rows or columns to take the remaining available space in a Grid. A star sizing always includes the asterisk character (*), and optionally precedes the asterisk with an integer value that specifies a weighted factor versus other possible star sizings (for example, 3*). For more information about star sizing, see Grid.

In a grid with multiple columns, the *-sized columns divide up the remaining space. For example, assume a 300px-wide grid with 3 columns (150px, 120px and 1*).
The calculation is:
remainder = (300 - 150 - 120)
Since the remainder is 30px, the 1* column is 30px wide.
Now add some columns and modify the widths to (35px, 85px, 2*, 1*, 3*)
Redoing the calculation:
remainder = (300 - 35 - 85)
In this scenario the remainder is 180px, so the * columns split the remaining pixels according to their weights.
factor = (180/ (2 + 1 + 3))
factor = 30px
Therefore the 2* column is 60px, the 1* column is 30px and the 3* column is 90px
300 == 35 + 85 + 60 + 30 + 90
Of course, the same principle applies to row sizing.
When the grid is resized, the * columns divvy up the new remainder, but they keep the same size ratio relative to the other *-sized items. In the example, the 3* column will always be 3 times as wide as the 1* column.
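For reference, here is roughly what the second example would look like as markup. This is a minimal XAML sketch; the 300px width and the column values are just the figures from the example above.
<Grid Width="300">
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="35" />  <!-- fixed 35px -->
        <ColumnDefinition Width="85" />  <!-- fixed 85px -->
        <ColumnDefinition Width="2*" />  <!-- 2/6 of the remaining 180px = 60px -->
        <ColumnDefinition Width="1*" />  <!-- 1/6 of the remaining 180px = 30px -->
        <ColumnDefinition Width="3*" />  <!-- 3/6 of the remaining 180px = 90px -->
    </Grid.ColumnDefinitions>
</Grid>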

Related

Pixel issue of characters in concatenation. Cols should be fixed as per the width for all the cols concatenated

These values are stored in a single column. They are actually 3 columns concatenated in Oracle SQL. How can I make these values look aligned, as if they were 3 different columns?
Currently the numbers are 3 characters for the 1st column, 10 for the 2nd, and 7 for the 3rd. For reasons outside my control I had to bring them into one column to use in my RTF template. I need an Oracle SQL solution that sets the width of the concatenated columns. I am using CAST here, but this is the output I am getting:
CAST(TO_CHAR(G_WEIGHTAGE)||'%' as char(3))||' '
||CAST(TO_CHAR(G_ACHIEVE , '990.00')||'%' as char(13)) ||' '
||CAST( TO_CHAR(ROUND((G_ACHIEVE_FACTOR*(G_WEIGHTAGE/100)),2)*(TARGET_BONUS/100), '990.00')||'%' as char(8)) AS G_WAIT
This is for the 1st line of the output; I have written the other lines the same way.
This is how it comes out:
40% 96.79% 9.60%
10% 99.89% 2.70%
20% NA/ 51.42% 0.00%
10% 62.90% 0.00%
10% 112.77% 4.80%
Output I need :
40% 96.79% 9.60%
10% 99.89% 2.70%
20% NA/51.42% 0.00%
10% 62.90% 0.00%
10% 112.77% 4.80%
I believe this is because different characters have different widths; for example, N has a different width than 1.
Yeah, you want a PAD function. They add spaces to make a string a specific width. Try this:
lpad(TO_CHAR(G_WEIGHTAGE)||'%',3) ||' '
||lpad(TO_CHAR(G_ACHIEVE , '990.00')||'%',13) ||' '
||lpad(TO_CHAR(ROUND((G_ACHIEVE_FACTOR*(G_WEIGHTAGE/100)),9)*(TARGET_BONUS/100), '990.00')||'%',7) AS G_WAIT
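As a standalone illustration of what LPAD does (the literal values are just taken from the sample output above), each piece gets padded to a fixed character width:
select lpad('40%', 3) || ' ' || lpad('96.79%', 13) || ' ' || lpad('9.60%', 7) as g_wait
from dual;
-- every row comes out at the same character count (3 + 1 + 13 + 1 + 7 = 25)
Note that space-padding only lines up visually if the RTF template renders the result in a monospaced font; with a proportional font the character-width issue from the question remains.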

How do I split a string into two separate columns in Standard SQL (Google BigQuery)?

I have a column that contains measurements in terms of length and width. Each entry in this column is written in the form lxw. I need to separate this column into two separate columns with one being length and the other being width. Please see below:
This is my original column called "size":
size
930x570
1460x700
4x7
I want to turn "size" into columns "length" and "width" as follows:
length   width
930      570
1460     700
4        7
Use below
select
split(size, 'x')[offset(0)] as length,
split(size, 'x')[offset(1)] as width
from your_table
If applied to the sample data in your question, the output is the desired length and width columns shown above.
You can use split():
select t.*,
split(size, 'x')[ordinal(1)] as length,
split(size, 'x')[ordinal(2)] as width
from t;
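If you need the two new columns as numbers rather than strings, a variant of the same idea (table name your_table assumed, as above) is:
select
  safe_cast(split(size, 'x')[offset(0)] as int64) as length,
  safe_cast(split(size, 'x')[offset(1)] as int64) as width
from your_table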

CR | Copy data to another row using a formula field or variable

Here is my problem:
Raw data 1
If there is a position 105 and 150, I need the material number of position 150. If there is only position 105, I need the material number of position 105.
On the right side of the picture you can see the correctly selected material number.
Now I need to assign this data to position 100 (because I will use a counter later on, which depends on position 100).
Here you can see more of the raw data of the report (I can't insert the complete report here; I use the details area only for testing).
I marked one "group" in which you can see why I can't change the order of the positions. In this case I need to use position 105 to output the material number (the number rightmost on the red border) because there is no position 150.
Raw data 2
Here is another example with position 150 used for the material number (the correct material number will be placed on position 105 every time):
Raw data 3
To use this material number in my following tables, it needs to be assigned to position 100.
Thanks!

48bit RGB single pixel value

In some image processing applications, the 3 RGB values are represented as a single value.
For example: The single value for RGB(2758, 5541, 4055) is 4542.64
There are some questions related on how to obtain single pixel values from 8bit RGB images but none works with 48bit RGB images. How can I obtain that value?
If I do (2758 + 5541 + 4055) / 3, the result is 4118, which is near but not the same.
It appears that you are trying to determine the grayscale formula used to arrive at that given value. I suggest that you read Seven grayscale conversion algorithms by Tanner Helland.
Based on your example of:
The single value for RGB(2758, 5541, 4055) is 4542.64
It appears that value is computed using the formula:
Gray = (Red * 0.3 + Green * 0.59 + Blue * 0.11)
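As a quick check with the question's numbers: 2758 * 0.3 + 5541 * 0.59 + 4055 * 0.11 = 827.4 + 3269.19 + 446.05 = 4542.64, which matches the stated single value exactly.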

Power-law distribution in T-SQL

I basically need the answer to this SO question that provides a power-law distribution, translated to T-SQL for me.
I want to pull a last name, one at a time, from a census provided table of names. I want to get roughly the same distribution as occurs in the population. The table has 88,799 names ranked by frequency. "Smith" is rank 1 with 1.006% frequency, "Alderink" is rank 88,799 with frequency of 1.7 x 10^-6. "Sanders" is rank 75 with a frequency of 0.100%.
The curve doesn't have to fit precisely at all. Just give me about 1% "Smith" and about 1 in a million "Alderink".
Here's what I have so far.
SELECT [LastName]
FROM [LastNames] as LN
WHERE LN.[Rank] = ROUND(88799 * RAND(), 0)
But this of course yields a uniform distribution.
I promise I'll still be trying to figure this out myself by the time a smarter person responds.
Why settle for the power-law distribution when you can draw from the actual distribution?
I suggest you alter the LastNames table to include a numeric column which would contain a value representing the actual number of individuals with a more common name. You'll probably want a number on a smaller but proportional scale, say, 10,000 for each percent of representation.
The list would then look something like:
(other than the 3 names mentioned in the question, I'm guessing about White, Johnson et al)
Smith 0
White 10,060
Johnson 19,123
Williams 28,456
...
Sanders 200,987
..
Alderink 999,997
And the name selection would be
SELECT TOP 1 [LastName]
FROM [LastNames] as LN
WHERE LN.[number_described_above] < ROUND(1000000 * RAND(), 0)
ORDER BY [number_described_above] DESC
That's picking the first name whose number does not exceed the [uniform distribution] random number. Note how the query uses less-than and orders in descending order; this guarantees that the very first entry (Smith) can get picked. The alternative would be to start the series with Smith at 10,060 rather than zero and to discard random draws smaller than this value.
Aside from the matter of boundary management (starting at zero rather than 10,060) mentioned above, this solution, along with the two other responses so far, is the same as the one suggested in dmckee's answer to the question referenced in this question. Essentially the idea is to use the CDF (Cumulative Distribution Function).
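For completeness, a rough sketch of how that cumulative column could be populated. It assumes a [Frequency] column holding each name's percentage share (e.g. 1.006 for Smith) and a SQL Server version with window functions; the column and table names are only illustrative:
-- running total of "individuals with a more common name", scaled at 10,000 per percent
UPDATE LN
SET LN.[number_described_above] = ROUND(10000 * (T.[RunningPct] - T.[Frequency]), 0)
FROM [LastNames] AS LN
JOIN (
    SELECT [LastName], [Frequency],
           SUM([Frequency]) OVER (ORDER BY [Rank]) AS [RunningPct]
    FROM [LastNames]
) AS T ON T.[LastName] = LN.[LastName]
With that in place, Smith gets 0, White gets roughly 10,060, and so on, matching the list above.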
Edit:
If you insist on using a mathematical function rather than the actual distribution, the following should provide a power law function which would somehow convey the "long tail" shape of the real distribution. You may want to tweak the @PwrCoef value (which BTW needn't be an integer); essentially, the bigger the coefficient, the more skewed toward the beginning of the list the function is.
DECLARE @PwrCoef INT
SET @PwrCoef = 2
SELECT 88799 - ROUND(POWER(POWER(88799.0, @PwrCoef) * RAND(), 1.0/@PwrCoef), 0)
Notes:
- the extra ".0" in the function above is important to force SQL to perform float operations rather than integer operations.
- the reason we subtract the power calculation from 88799 is that the calculation's distribution is such that the closer a number is to the end of our scale, the more likely it is to be drawn. Since the list of family names is sorted in the reverse order (most likely names first), we need this subtraction.
Assuming a power of, say, 3, the query would then look something like
SELECT [LastName]
FROM [LastNames] as LN
WHERE LN.[Rank]
= 88799 - ROUND(POWER(POWER(88799.0, 3) * RAND(), 1.0/3), 0)
Which is the query from the question except for the last line.
Re-Edit:
In looking at the actual distribution, as apparent in the Census data, the curve is extremely steep and would require a very big power coefficient, which in turn would cause overflows and/or extreme rounding errors in the naive formula shown above.
A more sensible approach may be to operate in several tiers, i.e. to perform an equal number of draws in each of, say, three thirds (or four quarters or...) of the cumulative distribution; within each of these partial lists, we would draw using a power law function, possibly with the same coefficient, but with different ranges.
For example
Assuming thirds, the list divides as follows:
First third = 425 names, from Smith to Alvarado
Second third = 6,277 names, from to Gainer
Last third = 82,097 names, from Frisby to the end
If we were to need, say, 1,000 names, we'd draw 334 from the top third of the list, 333 from the second third and 333 from the last third.
For each of the thirds we'd use a similar formula, maybe with a bigger power coefficient for the first third (where we are really interested in favoring the earlier names in the list, and also where the relative frequencies are more statistically relevant). The three selection queries could look like the following:
-- Random Drawing of a single Name in top third
-- Power Coef = 12
SELECT [LastName]
FROM [LastNames] as LN
WHERE LN.[Rank]
= 425 - ROUND(POWER(POWER(425.0, 12) * RAND(), 1.0/12), 0)
-- Second third; Power Coef = 7
...
WHERE LN.[Rank]
= (425 + 6277) - ROUND(POWER(POWER(6277.0, 7) * RAND(), 1.0/7), 0)
-- Bottom third; Power Coef = 4
...
WHERE LN.[Rank]
= (425 + 6277 + 82097) - ROUND(POWER(POWER(82097.0, 4) * RAND(), 1.0/4), 0)
Instead of storing the pdf as rank, store the CDF (the sum of all frequencies up to that name, starting from Alderink).
Then modify your select to retrieve the first LN with rank greater than your formula result.
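A minimal sketch of that lookup, assuming a [CumulativeFrequency] column that holds the running sum of frequencies (as a fraction of 1) accumulated from the most common name down; the answer above accumulates from the rarest name instead, which works the same way with the comparison direction flipped:
SELECT TOP 1 [LastName]
FROM [LastNames] AS LN
WHERE LN.[CumulativeFrequency] >= RAND()   -- RAND() is evaluated once per query, giving a single random threshold
ORDER BY LN.[CumulativeFrequency] ASC
If the stored frequencies do not sum to exactly 1 (the census figures don't), a draw above the final cumulative value returns no row, so you may want to normalize the column first.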
I read the question as "I need to get a stream of names which will mirror the frequency of last names from the 1990 US Census"
I might have read the question a bit differently than the other suggestions, and although an answer has been accepted, and a very thorough answer it is, I will contribute my experience with the Census last names.
I had downloaded the same data from the 1990 census. My goal was to produce a large number of names to be submitted for search testing during performance testing of a medical record app. I inserted the last names and the percentage of frequency into a table. I added a column and filled it with an integer which was the product of "total names required * frequency". The frequency data from the census did not add up to exactly 100%, so my total number of names was also a bit short of the requirement. I was able to correct the number by selecting random names from the list and increasing their count until I had exactly the required number; the randomly added count never amounted to more than .05% of the total of 10 million.
I generated 10 million random numbers in the range of 1 to 88799. With each random number I would pick that name from the list and decrement the counter for that name. My approach was to simulate dealing a deck of cards, except my deck had many more distinct cards and a varying number of each card.
Do you store the actual frequencies with the ranks?
Converting the algebra from that accepted answer to T-SQL is no bother, if you know what values to use for n. y would be what you currently have, ROUND(88799 * RAND(), 0), and x0, x1 = 1, 88799 I think, though I might misunderstand it. The only non-standard maths operator involved, from a T-SQL perspective, is ^, which is just POWER(x, y) == x^y.
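If the formula in that accepted answer is the usual inverse-transform for a bounded power law, x = ((x1^(n+1) - x0^(n+1)) * y + x0^(n+1))^(1/(n+1)) with y uniform in [0, 1), a T-SQL translation could look like the following sketch (the exponent value is purely illustrative):
DECLARE @n FLOAT = -1.5     -- assumed power-law exponent; tune to taste
DECLARE @x0 FLOAT = 1, @x1 FLOAT = 88799
DECLARE @y FLOAT = RAND()   -- uniform draw in [0, 1)

SELECT [LastName]
FROM [LastNames] AS LN
WHERE LN.[Rank] = ROUND(
    POWER((POWER(@x1, @n + 1) - POWER(@x0, @n + 1)) * @y + POWER(@x0, @n + 1), 1.0 / (@n + 1)), 0)
Small y values map to low ranks (Smith's end of the list) and values near 1 map to the long tail, so no subtraction from 88799 is needed in this form.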