How can I show the full array without double-clicking?

When I am debugging and trying to see a large array, CLion shows only the first 50 elements, and I need to double-click to see the rest. Is there a way to increase this amount to about 1000 elements by default?

As a workaround until we come up with a more convenient solution, there's a registry value that controls the number of children shown by default.
Open the registry (Ctrl+Shift+A -> Registry...) and set cidr.debugger.value.maxChildren to the desired value.
From the related blog post:
While inspecting arrays during debugging, you might have noticed a limit of 50 elements shown by default; to see more, the user had to explicitly expand the next 50 elements. This was done to avoid performance issues. However, sometimes a few elements with big indexes are needed, and it's quite tiresome to click expand several times in a row.
To provide a solution to the problem, we've added a registry value that controls the default number of a composite value's children.

Related

Vue 3: Better performance with lots of DOM elements

I have a big, fat sort-of-table with lots of elements, and it is getting really laggy.
For context: one row per user, each user has X projects, and each project displays three months of days (a sort of Gantt chart).
So I built something cool and it works great, but if I scale to more users it begins to get really laggy.
I'm implementing filters to display fewer users, but at some point it needs to handle the number of rows I have now without lagging.
What I found is that when I update a single day, the whole Gantt re-renders, which is really wasteful.
Here is a minimal reproduction: https://stackblitz.com/edit/vitejs-vite-g6azah?file=src%2FApp.vue&terminal=dev
As you can see, when I update an input bound with v-model:
value.value
the attribute bound to a function,
:test="testRerender()"
triggers for the whole Gantt, and I believe that is the issue here.
I saw v-memo, which looks like what I need, but I can't figure out from the docs how to use it for my case (and there are almost no good articles on it).
Thanks for your help, you would save my life!
I tried filling in proper :key attributes, code optimization, v-memo, etc.
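Regarding v-memo: it takes an array of dependency values and skips re-rendering the element and its subtree unless one of those values has changed since the last render; when combined with v-for, it must sit on the same element as the v-for. A minimal sketch of the idea (the data shape here is made up for illustration, not taken from the reproduction):

<script setup lang="ts">
import { ref } from 'vue'

// Hypothetical stand-in for one Gantt row per user.
const rows = ref([
  { id: 1, name: 'Alice', value: 10 },
  { id: 2, name: 'Bob', value: 20 },
])
</script>

<template>
  <!-- Each row's subtree is only re-rendered when its own value
       changes; editing one row no longer re-renders the others. -->
  <div v-for="row in rows" :key="row.id" v-memo="[row.value]">
    <span>{{ row.name }}</span>
    <input v-model.number="row.value" />
  </div>
</template>

The catch is that anything rendered inside the row but not listed in the dependency array will appear stale, so every reactive value the row displays has to be in the v-memo array.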

DOORS creates Object IDs even though they are not saved

I am adding some 2000 new objects to a DOORS module. I do this by importing a spreadsheet with blank IDs; DOORS is supposed to create IDs for those blank rows.
Now the problem is that while I import the spreadsheet, DOORS hangs. When I then kill the DOORS process, it has created the IDs anyway, so the next time I add a new object, the ID numbering continues from those that were already created but no longer exist. For some reasons I need to continue from my last saved ID. Is there any way I can do this?
Several remarks here:
It works as designed. As soon as an object is created in any DOORS session, the new absolute number is centrally marked as "used". I think the main reason for this design is to make working in shared mode possible: with a different design, you would get into trouble as soon as two developers worked on the module at the same time.
Are you sure that DOORS really hangs? Perhaps it is just not finished yet; at least you can see that the objects really are created. Note that, depending on how the import script is written, the number of imports per second may decrease significantly for bigger files.
You should NEVER give the absolute number any meaning other than uniqueness (perhaps QSS should have used timestamps or UUIDs instead of integers for their absolute numbers when they designed DOORS; that would have made the situation clearer). You will have to rework those "some reasons": perhaps you can use a separate mechanism to assign your own IDs, or you should evaluate whether the requirement "generate consecutive numbers without gaps" is really necessary.

How to implement a stack with a limited number of elements?

I have recently created an elaborate Undo/Redo mechanism for a program of mine. It is an editor that works with specific XML files. However, since a given change may modify any number of nodes in the XML file, I am currently backing up the whole XML document as a clone.
So far, I've been using two System.Collections.Generic.Stack(Of XmlNode) objects to store them, and skipping back and forth works very well. But now I want to limit the number of steps one can undo, i.e. I need to throw out the oldest entries when the number of items on the undo stack exceeds a certain threshold.
How would I do that?
P.S.: It occurred to me that I might use something like a deque, so I already implemented my own DoubleEndedQueue(Of T); I could easily emulate a limited stack with that. It uses a System.Collections.Generic.List(Of T) internally, though, and I don't know whether List.Insert(0, item) is O(1) or O(n).
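To the P.S. first: List(Of T).Insert(0, item) shifts every existing element up by one, so it is O(n), not O(1). A circular (ring) buffer avoids that shift entirely. Below is a minimal sketch of the idea in TypeScript (BoundedStack and everything in it are illustrative, not a standard type); the same layout translates one-to-one to an array-backed .NET class:

class BoundedStack<T> {
  // Ring buffer: push, pop, and "drop the oldest on overflow" are all O(1).
  private buf: (T | undefined)[];
  private start = 0; // index of the oldest element
  private count = 0;

  constructor(private capacity: number) {
    this.buf = new Array<T | undefined>(capacity);
  }

  push(item: T): void {
    if (this.count === this.capacity) {
      // Full: advance the start marker so the oldest slot gets recycled.
      this.start = (this.start + 1) % this.capacity;
      this.count--;
    }
    this.buf[(this.start + this.count) % this.capacity] = item;
    this.count++;
  }

  pop(): T | undefined {
    if (this.count === 0) return undefined;
    this.count--;
    const i = (this.start + this.count) % this.capacity;
    const item = this.buf[i];
    this.buf[i] = undefined; // release the reference for the GC
    return item;
  }

  get size(): number {
    return this.count;
  }
}

Pushing onto a full stack simply recycles the oldest slot, so the "throw out the oldest entry" behavior falls out of the index arithmetic for free; no element is ever moved.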

setFetchBatchSize doesn't seem to work properly

I asked a few questions on this topic but still can't get it to work. I have a Core Data store with 10k+ rows of people's names that I am showing in a table view. I would like to be able to search and update the table with every letter typed, but it's very laggy. As suggested, I watched the WWDC '10 Core Data presentation and tried to implement
[request setFetchBatchSize:50];
It doesn't seem to work. When I use Instruments to check Core Data, it still shows 10k requests when loading the table view, and when I search it also fetches all the results.
Is there anything else that needs to be done to set the batch size, or is that not something that will help me?
The only thing that seems to work is setting the fetch limit to 100 when I search. Do you think that's a good solution?
Thanks in advance!
The batch size just tells it how many objects to fetch at a time. This is probably not going to help you very much. Let's consider your use case a bit...
The user types "F" and you tell the database, "Go find all the names that start with 'F'" and the database looks at all 10k+ records to find the ones that start with 'F'
Then, the user types 'r', so you tell the database to go find all the records that start with "Fr" and it again looks at all 10k+ records to find the ones that start with "Fr."
All fetchBatchSize is doing is telling it "Hey, when you fault in a record, bring in 50 at once because I'm going to probably need all those anyway." That does nothing to limit your search.
However, setting fetchLimit to 100 helps some because the database starts hunting through all 10k+ records, but once it has its 100 records, it does not have to keep looking at the rest of the records because it already has filled its request. It's done, and stops searching as soon as it gets 100 records that satisfy the request.
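In other words (an illustrative TypeScript sketch, not the Core Data API; all names are made up), a fetch limit turns the scan into an early-exit loop:

// Linear scan that stops as soon as it has `limit` matches -- the
// behavior that setting fetchLimit buys you on the store's side.
function searchWithLimit(names: string[], prefix: string, limit: number): string[] {
  const hits: string[] = [];
  for (const name of names) {
    if (name.startsWith(prefix)) {
      hits.push(name);
      if (hits.length === limit) break; // done early; skip the remaining rows
    }
  }
  return hits;
}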
So, there are several things you can do, all depending on your other use cases.
The easiest thing to try is adding an index on the field you are searching. You can set that in the Xcode model editor (the section that says Indexes, right under where you name the entity in the inspector). This lets the database build a special index on that field, and searching will be much faster.
Second, after your initial request you already have an array of names that begin with 'F', so there is no need to go back to the database to ask for names that begin with 'Fr': if a name begins with 'Fr', it also begins with 'F', and you already have NSManagedObject pointers for all of those. Now you can just search the array you got back.
Even better, if you gave the request a sort descriptor, the array is sorted, so you can do a simple binary search on it. Or, if you prefer, you can apply the same predicate to the results array instead of to the database.
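That pruning idea as an illustrative TypeScript sketch (again, not Core Data API): keep the last query and its hits, and when the new query merely extends the previous one, filter the in-memory hits instead of going back to the store:

let lastQuery = '';
let lastHits: string[] = [];

function narrow(allNames: string[], query: string): string[] {
  // If the user just typed one more letter, every match for the new
  // query is already in the previous result set, so reuse it.
  const source =
    lastQuery !== '' && query.startsWith(lastQuery) ? lastHits : allNames;
  lastHits = source.filter((n) => n.startsWith(query));
  lastQuery = query;
  return lastHits;
}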
Even if you don't use the results pruning I just discussed, I think indexing the attribute will help dramatically.
EDIT
Maybe you should run Instruments to see how much time you are spending, and where. Also, a badly formed predicate can bring any index scheme to its knees. Code would help.
Finally, consider how many elements you are bringing into memory. Core Data does not fault all the information in, but it does create shells for everything in the array.
I don't know how SQLite implements its search on an index, but a B-tree lookup has complexity O(log_B N), so even on 30k records that's not a lot of searching. Unless you have another problem, the indexing should give you a pretty big improvement.
Once you have the index, you should not be examining all records. However, you may still get a very large result set; try fetchBatchSize there, because it limits the number of records fetched and creates proxies for the rest.
You can also call countForFetchRequest: instead of executeFetchRequest: to get just the number of items. Then you can use fetchLimit to restrict the number you get.
As far as making all of this work with a fetched results controller... well, that guy has to know the records, so it still has to do the search.
Also, one place to look: are you using sections? If you have a user-defined comparator for anything (like mapping records to sections), it will get called for every single record.
Thus, my big suggestion, after making the index change, is to run Instruments and really study where you are spending your time. It should be pretty obvious, and it will steer you toward the real issue.
My bet is that you are still accessing all of the elements for some reason...

SharePoint 2010 Lists - Totals at Bottom of List Instead of Top of List

Is there a simple way to make the column totals (like Sum) appear at the end of a list instead of at the top? Having them at the top just seems unnatural...
Thanks
Lonnie Tyre
You can move the totals to the bottom as described here.
I'd venture to guess that the totals are shown at the top because some lists are large and span multiple pages, and it could take quite a while to find out how many items are on the list.
However, you do have options. If you are code-savvy, you can enlist the SPList.ItemCount property and get your answer; from there you can put it anywhere you please. I have done this in custom web part development where the count drove certain things. For instance, maybe you would like to fire some event or change the style of something on the page based on how many tasks a user has assigned to them. You can get that information a few different ways, but this is one of them.
I'd also look into creating your own display forms if you have a strong enough desire. I have done that in SharePoint Designer (SPD) a few times.
Good luck!