Results shown in LINQPad are limited. If objects are deeply nested, a red line is shown.
I would like to increase the limit so I can see more nested objects.
Do you know how to do it? (I have not found it in the options.)
Background not in the original question: LINQPad will display a
"limit of graph" error message if the total output exceeds a certain
threshold (total page size).
The suggested answers address how to limit the depth of the individually dumped object
graphs, which may help avoid hitting the total size limit. They do not, however, increase the "limit of graph", which is what the OP is asking about.
To flesh out @lioil's answer:
Dump's overloads include:
T Dump<T>(this T o); //Dump the object and return it (for fluency)
T Dump<T>(this T o, string description); //Dump with label
T Dump<T>(this T o, int maximumDepth); //Dump with given maximum depth
T Dump<T>(this T o, string description, int maximumDepth); //Combine the above
You're looking for the third or fourth option.
Try another overload of the Dump() method.
The requirements:
100k lines
One of the columns is not text - it is custom painted with a wxDC*.
Items are added from another thread using wxThreadEvent.
Up until now I used wxDataViewListCtrl, but it takes too long to AppendItem 100 thousand times.
wxListCtrl (in virtual mode) does not have the ability to use wxDC* - please correct me if I am wrong.
The only thing I can think of is using wxDataViewCtrl + wxDataViewModel. But I can't understand how to add items.
I looked at the samples (https://github.com/wxWidgets/wxWidgets/tree/WX_3_0_BRANCH/samples/dataview), but they are too complex for me; I can't understand them.
I looked at the wiki (https://wiki.wxwidgets.org/WxDataViewCtrl), also too complex for me.
Can somebody please provide a very simple example of a wxDataViewCtrl + wxDataViewModel with one string column and one wxDC* column?
Thanks in advance.
P.S.
Per @HajoKirchhoff's request in the comments, I am posting some code:
// This is called from Rust 100k times.
extern "C" void Add_line_to_data_view_list_control(unsigned int index,
const char* date,
const char* sha1) {
wxThreadEvent evt(wxEVT_THREAD, 44);
evt.SetPayload(ViewListLine{index, std::string(date), std::string(sha1)});
wxQueueEvent(g_this, evt.Clone());
}
void TreeWidget::Add_line_to_data_view_list_control(wxThreadEvent& event) {
ViewListLine view_list_line = event.GetPayload<ViewListLine>();
wxVector<wxVariant> item;
item.push_back(wxVariant(static_cast<int>(view_list_line.index)));
item.push_back(wxVariant(view_list_line.date));
item.push_back(wxVariant(view_list_line.sha1));
AppendItem(item);
}
Appending 100k items to a control will always be slow. That's because it requires moving 100k items from your storage to the control's storage. A much better way for this amount of data is to have a "virtual" list control or wxGrid. In both cases the data is not actually transferred to the control. Instead, when painting occurs, a callback function transfers only the data required to paint. So for a 100k list you will only have "activity" for the 20-30 lines that are visible.
With wxListCtrl (see https://docs.wxwidgets.org/3.0/classwx_list_ctrl.html), specify the wxLC_VIRTUAL flag, call SetItemCount() and then provide/override:
OnGetItemText
OnGetItemImage
OnGetItemColumnImage
Downside: you can only draw items contained in a wxImageList, since OnGetItemImage returns indices into that list. So you cannot draw arbitrary items using a wxDC. Since the human eye would be overwhelmed by 100k different images anyway, this is usually acceptable. You may have to provide 20-30 different images beforehand, but you'll have a fast, flexible list.
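A minimal sketch of that approach (class and member names here are illustrative, not from the post; only wxLC_VIRTUAL, SetItemCount() and OnGetItemText() are the actual wxListCtrl API):

#include <wx/listctrl.h>
#include <string>
#include <vector>

// Illustrative row type; the real data stays in your own storage.
struct Row { std::string date; std::string sha1; };

class VirtualList : public wxListCtrl {
public:
    VirtualList(wxWindow* parent, const std::vector<Row>* rows)
        : wxListCtrl(parent, wxID_ANY, wxDefaultPosition, wxDefaultSize,
                     wxLC_REPORT | wxLC_VIRTUAL),
          m_rows(rows) {
        InsertColumn(0, "Date");
        InsertColumn(1, "SHA1");
        SetItemCount(static_cast<long>(rows->size()));  // no AppendItem at all
    }

protected:
    // Called back only for the rows that are currently visible.
    wxString OnGetItemText(long item, long column) const override {
        const Row& r = (*m_rows)[static_cast<size_t>(item)];
        return column == 0 ? wxString(r.date) : wxString(r.sha1);
    }

private:
    const std::vector<Row>* m_rows;  // the 100k rows live outside the control
};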
That said, it is possible to override the OnPaint function and use that wxDC to draw anything in the list. But that'll get difficult pretty soon.
So an alternative would be to use wxGrid: create a wxGridTableBase-derived class that acts as a bridge between the grid and your actual 100k data, and create wxGridCellRenderer-derived classes to render the actual data on screen. The wxGridCellRenderer class will get a wxDC. This gives you more flexibility but is also much more complex than using a virtual wxListCtrl.
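A rough sketch of the wxGrid bridge idea (again with invented names; a full wxGridCellRenderer subclass, whose Draw() receives the wxDC, is omitted here):

#include <wx/grid.h>
#include <string>
#include <vector>

// Illustrative: the grid pulls values on demand instead of storing 100k rows.
class BigTable : public wxGridTableBase {
public:
    explicit BigTable(const std::vector<std::string>* rows) : m_rows(rows) {}

    int GetNumberRows() override { return static_cast<int>(m_rows->size()); }
    int GetNumberCols() override { return 1; }

    wxString GetValue(int row, int /*col*/) override {
        return wxString((*m_rows)[static_cast<size_t>(row)]);
    }
    void SetValue(int, int, const wxString&) override { /* read-only */ }

private:
    const std::vector<std::string>* m_rows;
};

// Usage sketch: grid->SetTable(new BigTable(&rows), /*takeOwnership=*/true);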
The full example of doing what you want will inevitably be relatively complex. But if you decompose it into simple parts, it's really not that difficult: you do need to define a custom model, but if your list is flat, this basically just means returning the value of the item at the N-th position, as you can trivially implement all model methods related to the tree structure. An example of such a model, although with multiple columns, can be found in the sample, so you just need to simplify it to a one (or two) column version.
Next, you are going to need a custom renderer too, but this is not difficult either and, again, there is an example of it in the sample.
If you have any concrete questions, you should ask them, but it's going to be difficult to do much better than what the sample shows and it does already show exactly what you want to do.
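Not from the sample itself, but a rough sketch of such a flattened model using wxDataViewVirtualListModel (names are invented; the custom-drawn column would additionally use a wxDataViewCustomRenderer, whose Render() method is handed a wxDC):

#include <wx/dataview.h>
#include <string>
#include <vector>

struct Line { std::string date; std::string sha1; };  // illustrative row type

class LineModel : public wxDataViewVirtualListModel {
public:
    explicit LineModel(const std::vector<Line>* lines)
        : wxDataViewVirtualListModel(static_cast<unsigned int>(lines->size())),
          m_lines(lines) {}

    unsigned int GetColumnCount() const override { return 2; }
    wxString GetColumnType(unsigned int) const override { return "string"; }

    // The "model": just return the value of the item at the N-th position.
    void GetValueByRow(wxVariant& value, unsigned int row,
                       unsigned int col) const override {
        const Line& l = (*m_lines)[row];
        value = (col == 0) ? wxString(l.date) : wxString(l.sha1);
    }
    bool SetValueByRow(const wxVariant&, unsigned int, unsigned int) override {
        return false;  // read-only
    }

private:
    const std::vector<Line>* m_lines;
};

// Usage sketch:
//   wxObjectDataPtr<LineModel> model(new LineModel(&lines));
//   dataViewCtrl->AssociateModel(model.get());
//   dataViewCtrl->AppendTextColumn("Date", 0);
//   dataViewCtrl->AppendTextColumn("SHA1", 1);
//   // after the worker thread adds rows: model->Reset(lines.size());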
Thank you to everyone who replied!
@Vz's words "If you have any concrete questions, you should ask them" got me thinking, and I took another look at the wxWidgets samples. The full code can be found here. Look at the following classes:
TreeDataViewModel
TreeWidget
TreeCustomRenderer
Why is it that when I see example code for binary search, there is never an if statement to check whether the start or the end of the array is the target?
import java.util.Arrays;

public class App {
    public static int binary_search(int[] arr, int left, int right, int target) {
        if (left > right) {
            return -1;
        }
        int mid = (left + right) / 2;
        if (target == arr[mid]) {
            return mid;
        }
        if (target < arr[mid]) {
            return binary_search(arr, left, mid - 1, target);
        }
        return binary_search(arr, mid + 1, right, target);
    }

    public static void main(String[] args) {
        int[] arr = { 3, 2, 4, -1, 0, 1, 10, 20, 9, 7 };
        Arrays.sort(arr);
        for (int i = 0; i < arr.length; i++) {
            System.out.println("Index: " + i + " value: " + arr[i]);
        }
        System.out.println(binary_search(arr, 0, arr.length - 1, -1));
    }
}
In this example, if the target was -1 or 20 the search would enter recursion. But an if statement was added to check whether the target is at mid, so why not add two more statements that also check whether it is at left or right?
EDIT:
As pointed out in the comments, I may have misinterpreted the initial question. The answer below assumes that OP meant having the start/end checks as part of each step of the recursion, as opposed to checking once before the recursion even starts.
Since I don't know for sure which interpretation was intended, I'm leaving this post here for now.
Original post:
You seem to be under the impression that "they added an extra check for mid, so surely they should also add an extra check for start and end".
The check "Is mid the target?" is in fact not a mere optimization they added. Recursively checking "mid" is the whole point of a binary search.
When you have a sorted array of elements, a binary search works like this:
Compare the middle element to the target
If the middle element is smaller, throw away the first half
If the middle element is larger, throw away the second half
Otherwise, we found it!
Repeat until we either find the target or there are no more elements.
The act of checking the middle is fundamental to determining which half of the array to continue searching through.
Now, let's say we also add a check for start and end. What does this gain us? Well, if at any point the target happens to be at the very start or end of a segment, we skip a few steps and end slightly sooner. Is this a likely event?
For small toy examples with a few elements, yeah, maybe.
For a massive real-world dataset with billions of entries? Hm, let's think about it. For the sake of simplicity, we assume that we know the target is in the array.
We start with the whole array. Is the first element the target? The odds of that are one in a billion. Pretty unlikely. Is the last element the target? The odds of that are also one in a billion. Pretty unlikely too. You've wasted two extra comparisons to speed up an extremely unlikely case.
We limit ourselves to, say, the first half. We do the same thing again. Is the first element the target? Probably not, since the odds are one in half a billion.
...and so on.
The bigger the dataset, the more useless the start/end "optimization" becomes. In fact, in terms of (maximally optimized) comparisons, each step of the algorithm has three comparisons instead of the usual one. VERY roughly estimated, that suggests that the algorithm on average becomes three times slower.
Even for smaller datasets it is of dubious use, since it basically becomes a quasi-linear search instead of a binary search. Yes, the odds are higher, but on average we can expect a larger number of comparisons before we reach our target.
The whole point of a binary search is to reach the target with as few wasted comparisons as possible. Adding more unlikely-to-succeed comparisons is typically not the way to improve that.
Edit:
The implementation as posted by OP may also confuse the issue slightly. The implementation chooses to make two comparisons between target and mid. A more optimal implementation would instead make a single three-way comparison (i.e. determine ">", "=" or "<" as a single step instead of two separate ones). This is, for instance, how Java's compareTo or C++'s <=> normally works.
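For illustration, a sketch of that single-comparison variant (C++20, not the OP's Java code; the spaceship operator yields one of three outcomes in a single step):

#include <compare>
#include <vector>

// One three-way comparison per step instead of separate == and < checks.
int binarySearch3Way(const std::vector<int>& arr, int target) {
    int left = 0, right = static_cast<int>(arr.size()) - 1;
    while (left <= right) {
        int mid = left + (right - left) / 2;  // avoids overflow on huge arrays
        auto cmp = target <=> arr[mid];       // single comparison, three outcomes
        if (cmp == 0) return mid;             // found
        if (cmp < 0)  right = mid - 1;        // target lies in the left half
        else          left  = mid + 1;        // target lies in the right half
    }
    return -1;                                // not present
}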
BambooleanLogic's answer is correct and comprehensive. I was curious about how much slower this 'optimization' made binary search, so I wrote a short script to test the change in how many comparisons are performed on average:
Given an array of integers 0, ... , N
do a binary search for every integer in the array,
and count the total number of array accesses made.
To be fair to the optimization, I made it so that after checking arr[left] against target, we increase left by 1, and similarly for right, so that every comparison is as useful as possible. You can try this yourself at Try it online
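The linked script is not reproduced here; the following is a rough re-sketch of the same counting idea in C++ (function names are mine, and it will not reproduce the exact numbers below, which come from the original script):

#include <iostream>
#include <vector>

// Counts array accesses (as a proxy for comparisons) for both variants.
static long long g_accesses = 0;

int at(const std::vector<int>& a, int i) { ++g_accesses; return a[i]; }

int searchStandard(const std::vector<int>& a, int lo, int hi, int target) {
    if (lo > hi) return -1;
    int mid = lo + (hi - lo) / 2;
    int v = at(a, mid);
    if (target == v) return mid;
    if (target < v)  return searchStandard(a, lo, mid - 1, target);
    return searchStandard(a, mid + 1, hi, target);
}

// The "optimized" variant: probe both ends first, shrinking the range on a miss.
int searchOptimized(const std::vector<int>& a, int lo, int hi, int target) {
    if (lo > hi) return -1;
    if (at(a, lo) == target) return lo;
    ++lo;
    if (lo > hi) return -1;
    if (at(a, hi) == target) return hi;
    --hi;
    if (lo > hi) return -1;
    int mid = lo + (hi - lo) / 2;
    int v = at(a, mid);
    if (target == v) return mid;
    if (target < v)  return searchOptimized(a, lo, mid - 1, target);
    return searchOptimized(a, mid + 1, hi, target);
}

int main() {
    for (int n : {10, 100, 1000, 10000}) {
        std::vector<int> a(n);
        for (int i = 0; i < n; ++i) a[i] = i;      // integers 0 .. n-1

        g_accesses = 0;
        for (int t = 0; t < n; ++t) searchStandard(a, 0, n - 1, t);
        long long standard = g_accesses;

        g_accesses = 0;
        for (int t = 0; t < n; ++t) searchOptimized(a, 0, n - 1, t);
        long long optimized = g_accesses;

        std::cout << "n=" << n << "  standard=" << standard
                  << "  optimized=" << optimized << "\n";
    }
}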
Results:
Binary search on size 10: Standard 29 Optimized 43 Ratio 1.4828
Binary search on size 100: Standard 580 Optimized 1180 Ratio 2.0345
Binary search on size 1000: Standard 8987 Optimized 21247 Ratio 2.3642
Binary search on size 10000: Standard 123631 Optimized 311205 Ratio 2.5172
Binary search on size 100000: Standard 1568946 Optimized 4108630 Ratio 2.6187
Binary search on size 1000000: Standard 18951445 Optimized 51068017 Ratio 2.6947
Binary search on size 10000000: Standard 223222809 Optimized 610154319 Ratio 2.7334
So the total number of comparisons does seem to tend towards triple the standard number, implying the optimization becomes increasingly unhelpful for larger arrays. I'd be curious whether the limiting ratio is exactly 3.
Adding extra checks for the start and end along with the mid value does not gain much.
In algorithm design the main concern is complexity, whether time complexity or space complexity; most of the time, time complexity is treated as the more important aspect.
To learn more about the binary search algorithm, look at its different use cases, such as:
when the array contains no repeated elements;
when the array has repeated elements, in which case you might
a) return the leftmost index/value, or
b) return the rightmost index/value;
and many more.
I clearly see the need to deepen my knowledge of RenderScript memory allocation and data types. I'm still confused by the sheer number of data types and by finding the correct corresponding types on either side (allocations and elements), or when to point the forEach at the input, the output, or both, etc. Therefore I will read and re-read the documentation, which is really not bad, but it takes some time to build the necessary "intuition" for using it correctly. But for now, please help me with this basic one (and I will return later with hopefully less stupid questions...). I need a very simple kernel that takes an ARGB color Bitmap and returns an integer array of gray values. My attempt was the following:
#pragma version(1)
#pragma rs java_package_name(com.example.xxxx)
#pragma rs_fp_relaxed
uint __attribute__((kernel)) grauInt(uchar4 in) {
    uint gr = (uint)(0.2125*in.r + 0.7154*in.g + 0.0721*in.b);
    return gr;
}
and Java side:
int[] data1 = new int[width*height];
ScriptC_gray graysc;
graysc=new ScriptC_gray(rs);
Type.Builder TypeOut = new Type.Builder(rs, Element.U8(rs));
TypeOut.setX(width).setY(height);
Allocation outAlloc = Allocation.createTyped(rs, TypeOut.create());
Allocation inAlloc = Allocation.createFromBitmap(rs, bmpfoto1,
Allocation.MipmapControl.MIPMAP_NONE, Allocation.USAGE_SCRIPT);
graysc.forEach_grauInt(inAlloc, outAlloc);
outAlloc.copyTo(data1);
This crashed with the message cannot locate symbol "convert_uint". What's wrong with this conversion? Is the code otherwise correct?
UPDATE: Isn't that ridiculous? I can't get this "easy one" to run, even after 2 hours of trying. I still struggle with the different Element and variable types. Let's recap: the input is a Bitmap, the output is an int[] array. So why doesn't it work when I use U8 in the Java-side out-allocation, createFromBitmap for the Java-side in-allocation, uchar4 as the kernel input and uint as the kernel output (RSRuntimeException: Type mismatch with U32)?
There is no convert_uint() function. How about simple casting? Other than that, the code looks alright (assuming width and height have correct values).
UPDATE: I have just noticed that you allocate Element.I32 (i.e. signed integer type), but return uint from the kernel. These should match. And in any case, unless you need more than 8-bit precision, you should be able to fit your result in U8.
UPDATE: If you are changing the output type, make sure you change it in all places, e.g. if the kernel returns a uint, the allocation should use U32. If the kernel returns a char, the allocation should use I8. And so on...
You can't use a Uint[] directly because the input Bitmap is actually 2-dimensional. Can you create the output Allocation with a proper width/height and try that? You should still be able to extract the values into a Java array when you are finished.
I am planning to use tinyGP to train a set of input variables (around 400 or so) against a previously set value. Is there a maximum number of input variables? Do I need to specify the same number of variables each time?
I have a lot of computation power (a 500-core cluster for a weekend), so any thoughts on what parameters to use for such a large problem?
Cheers
In TinyGP your constant and variable pool share the same space. The total of these two spaces cannot exceed FSET_START, which is essentially the opcode of your first operator. By default this is 110, so your 400 variables already exceed it. It should just be a matter of raising the opcode of the first instruction to make enough space. You will also want to make sure you still have a big enough constant pool.
You can see this checked with the following line in TinyGP:
if (varnumber + randomnumber >= FSET_START )
System.out.println("too many variables and constants");
I'm attempting to gauge the percentage difference between two images.
Having done a lot of reading, I seem to have a number of options, but I'm not sure which method is best to follow for:
Ease of coding
Performance.
The methods I've seen are:
Non-language-specific (academic): Image comparison - fast algorithm
Mac-specific: direct pixel access - http://www.markj.net/iphone-uiimage-pixel-color/
Does anyone have any advice about what solutions make most sense for the above two cases and have code samples to show how to apply them?
I've had success calculating the difference between two images using the histogram technique mentioned here. redmoskito's answer in the SO question you linked to was actually my inspiration!
The following is an overview of the algorithm I used:
Convert the images to grayscale—compare one channel instead of three.
Divide each image into an n * n grid of "subimages". Then, for each subimage pair:
Calculate their colour composition histograms.
Calculate the absolute difference between the two histograms.
The maximum difference found between two subimages is a measure of the two images' difference. Other metrics could also be used (e.g. the average difference between subimages).
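A minimal sketch of that pipeline, assuming the two images are already available as same-sized 8-bit grayscale buffers (the GrayImage type and function names are mine, purely for illustration):

#include <array>
#include <cstdint>
#include <cstdlib>
#include <vector>

// Illustrative grayscale image: row-major 8-bit pixels, size == width * height.
struct GrayImage {
    int width = 0, height = 0;
    std::vector<uint8_t> pixels;
};

// 256-bin histogram of one grid cell.
static std::array<int, 256> cellHistogram(const GrayImage& img,
                                          int x0, int y0, int w, int h) {
    std::array<int, 256> hist{};
    for (int y = y0; y < y0 + h; ++y)
        for (int x = x0; x < x0 + w; ++x)
            ++hist[img.pixels[y * img.width + x]];
    return hist;
}

// Maximum absolute histogram difference over an n x n grid of subimages.
long maxGridHistogramDifference(const GrayImage& a, const GrayImage& b, int n) {
    long worst = 0;
    int cw = a.width / n, ch = a.height / n;
    for (int gy = 0; gy < n; ++gy) {
        for (int gx = 0; gx < n; ++gx) {
            auto ha = cellHistogram(a, gx * cw, gy * ch, cw, ch);
            auto hb = cellHistogram(b, gx * cw, gy * ch, cw, ch);
            long diff = 0;
            for (int bin = 0; bin < 256; ++bin)
                diff += std::abs(ha[bin] - hb[bin]);
            if (diff > worst) worst = diff;
        }
    }
    return worst;
}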
As tskuzzy noted in his answer, if your ultimate goal is a binary "yes, these two images are (roughly) the same" or "no, they're not", you need some meaningful threshold value. You could produce such a value by passing images into the algorithm and tweaking the threshold based on its output and how similar you think the images are. A form of machine learning, I suppose.
I recently wrote a blog post on this very topic, albeit as part of a larger goal. I also created a simple iPhone app to demonstrate the algorithm. You can find the source on GitHub; perhaps it will help?
It is really difficult to suggest something when you don't tell us more about the images or the variations. Are they shapes? Are they different objects and you want to know what class of objects they are? Are they the same object and you want to distinguish between object instances? Are they faces? Are they fingerprints? Are the objects in the same pose? Under the same illumination?
When you say performance, what exactly do you mean? How large are the images? All in all it really depends. Given what you've said, if it is only about ease of coding and performance, I would suggest just taking the absolute value of the difference of pixels. That is super easy to code and about as fast as it gets, but it is really unlikely to work for anything other than the most synthetic examples.
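If it helps, a sketch of that pixel-difference idea, assuming two same-sized 8-bit grayscale buffers (the names are mine, not an established API):

#include <cstdint>
#include <cstdlib>
#include <vector>

// Mean absolute pixel difference expressed as a percentage:
// 0 = identical images, 100 = maximally different.
double percentDifference(const std::vector<uint8_t>& a,
                         const std::vector<uint8_t>& b) {
    long long total = 0;
    for (size_t i = 0; i < a.size(); ++i)
        total += std::abs(static_cast<int>(a[i]) - static_cast<int>(b[i]));
    return 100.0 * static_cast<double>(total) / (255.0 * a.size());
}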
That being said I would like to point you to: DHOG, GLOH, SURF and SIFT.
You can use the fairly basic subtraction technique that the lads above suggested. @carlosdc has hit the nail on the head with regard to the type of image this basic technique can be used for. I have attached an example so you can see the results for yourself.
The first shows an image from a simulation at some time t. A second image, taken some (simulation) time later at t + dt, was then subtracted from the first. The resulting image (in black and white for clarity) shows how the simulation has changed in that time. This was done as described above and is very powerful and easy to code.
Hope this aids you in some way
This is some old, nasty FORTRAN, but it should give you the basic approach. It is not that difficult at all. Because I am doing it on a two-colour palette, you would have to repeat this operation for R, G and B: compute the intensities or values in each cell/pixel and store them in an array; do the same for the other image, and subtract one array from the other, which leaves you with a colourful subtraction image. My advice would be to do as the lads suggest above: compute the magnitude of the sum of the R, G and B components so you just get one value per pixel, write that to an array, do the same for the other image, then subtract. Then create a new range for either R, G or B and map the resulting subtracted array to it; this will give a much clearer picture as a result.
* =============================================================
SUBROUTINE SUBTRACT(FNAME1,FNAME2,IOS)
* This routine writes a model to files
* =============================================================
* Common :
INCLUDE 'CONST.CMN'
INCLUDE 'IO.CMN'
INCLUDE 'SYNCH.CMN'
INCLUDE 'PGP.CMN'
* Input :
CHARACTER fname1*(sznam),fname2*(sznam)
* Output :
integer IOS
* Variables:
logical glue
character fullname*(szlin)
character dir*(szlin),ftype*(3)
integer i,j,nxy1,nxy2
real si1(2*maxc,2*maxc),si2(2*maxc,2*maxc)
* =================================================================
IOS = 1
nomap=.true.
ftype='map'
dir='./pictures'
! reading first image
if(.not.glue(dir,fname2,ftype,fullname))then
write(*,31) fullname
return
endif
OPEN(unit2,status='old',name=fullname,form='unformatted',err=10,iostat=ios)
read(unit2,err=11)nxy2
read(unit2,err=11)rad,dxy
do i=1,nxy2
do j=1,nxy2
read(unit2,err=11)si2(i,j)
enddo
enddo
CLOSE(unit2)
! reading second image
if(.not.glue(dir,fname1,ftype,fullname))then
write(*,31) fullname
return
endif
OPEN(unit2,status='old',name=fullname,form='unformatted',err=10,iostat=ios)
read(unit2,err=11)nxy1
read(unit2,err=11)rad,dxy
do i=1,nxy1
do j=1,nxy1
read(unit2,err=11)si1(i,j)
enddo
enddo
CLOSE(unit2)
! substracting images
if(nxy1.eq.nxy2)then
nxy=nxy1
do i=1,nxy1
do j=1,nxy1
si(i,j)=si2(i,j)-si1(i,j)
enddo
enddo
else
print *,'SUBSTRACT: Different sizes of image arrays'
IOS=0
return
endif
* normal finishing
IOS=0
nomap=.false.
return
* exceptional finishing
10 write (*,30) fullname
return
11 write (*,32) fullname
return
30 format('Cannot open file ',72A)
31 format('Improper filename ',72A)
32 format('Error reading from file ',72A)
end
! =============================================================
Hope this is of some use. All the best.
Out of the methods described in your first link, the histogram comparison method is by far the simplest to code and the fastest. However key point matching will provide far more accurate results since you want to know a precise number describing the difference between two images.
To implement the histogram method, I would do the following:
Compute the red, green, and blue histograms of each image
Add up the differences between each bucket
If the difference is above a certain threshold, then the percentage is 0%
Otherwise the colors found in the images are similar. So then do a pixel by pixel comparison and convert the difference into a percentage.
I don't know any precise algorithms for finding the key points of an image. However once you find them for each image you can do a pixel by pixel comparison for each of the key points.