SharePoint list view threshold maximum limit? [closed] - sharepoint-2010

I have a client who has already exceeded the default SharePoint list view threshold.
I understand that it is not advisable to increase the default limit, but in my case I am left with no option other than to increase the threshold.
Can anyone tell me the maximum value I can set the threshold to? The client is expecting another 50,000 files to go into the document store.
According to this, the max limit is 50 million. How much can I increase it to?
Would I look like a fool if I simply increased it to 5 or 10 million?

50 million, if that's what the article says.
It's really just a question of performance and scalability, but if the client really needs a quarter million items in a list, he'll have to accept a little waiting time.
Read this, where it says that SQL Server escalates to a full table lock if you exceed 5000 items. That means that you're probably going to be making other people wait, if they also require access to the same table at the same time.
Use the smallest possible threshold that will satisfy the client. Warn him that there may be performance penalties.

You don't want to go above 5000 items in a view for performance reasons. Is there a way you can redesign the lists?
This says the max limit is 30 million items in a list, which would be your limiting factor on the view threshold.

Related

Issues while implementing Google BigQuery [closed]

Our company is going to implement BigQuery.
We saw many drawbacks in BigQuery, such as:
1. Only 1000 requests per day are allowed.
2. No updates or deletes are allowed.
and so on...
Can you guys highlight some more drawbacks and also discuss the two above?
Please also share any issues that came up during or after implementing BigQuery.
Thanks in advance.
"Only 1000 requests per day allowed"
Not true, fortunately! There is a limit on how many batch loads you can do to a table per day (1000, so roughly one every 90 seconds), but this is about loading data, not querying it. And if you need to load data more frequently than that, you can use the streaming API for up to 100,000 rows per second per table.
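For the sake of illustration, here is a minimal streaming-insert sketch using today's Python client library (google-cloud-bigquery); the project, dataset, table, and rows are placeholders I made up, not anything from the question:

# Assumes the google-cloud-bigquery package is installed and credentials are
# configured; project/dataset/table names below are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

rows = [
    {"user_id": 1, "event": "signup", "ts": "2015-08-01T12:00:00"},
    {"user_id": 2, "event": "login", "ts": "2015-08-01T12:00:05"},
]

# insert_rows_json streams rows straight into the table: no load job, so the
# daily batch-load quota does not apply.
errors = client.insert_rows_json("my-project.my_dataset.events", rows)
if errors:
    print("streaming insert failed:", errors)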
"No update delete allowed"
BigQuery is an analytical database, and analytical databases are not optimized for updates and deletes of individual rows. The analytical databases that do support these operations usually do so with caveats and performance costs. You can achieve the equivalent of updates and deletes with BigQuery by re-materializing your tables in just a couple of minutes: https://stackoverflow.com/a/31663889/132438
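As a rough sketch of that re-materialization pattern (not the exact code from the linked answer), again with the Python client; the table and the filter column are invented for the example:

# Hedged sketch: "delete" rows by overwriting the table with only the rows you
# want to keep. Assumes google-cloud-bigquery is installed; names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")
table_id = "my-project.my_dataset.events"

job_config = bigquery.QueryJobConfig(
    destination=table_id,  # write the result back over the same table
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

# Select everything except the rows you would have deleted; the query result
# replaces the table in one shot (you could also target a new table).
sql = f"SELECT * FROM `{table_id}` WHERE user_id != 2"
client.query(sql, job_config=job_config).result()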

How does Gmail query 900 million records? With an RDBMS or NoSQL? [closed]

According to this TechCrunch article, Gmail has 900 million users.
When I try to log in to Gmail with my username and password, the lookup happens almost at the speed of light. Do they use an RDBMS (relational) or NoSQL? Is that even possible with an RDBMS?
I'm sure this isn't exactly how it's done, but one billion records at, say, 50 bytes per user name is only 50 gigabytes. They could keep it all in RAM in a sorted tree and just search that tree.
A binary tree of that size is only about thirty levels deep, which takes microseconds to traverse, and I suspect they'd use something that branches more widely than a binary tree, so it would be even flatter.
All in all, there are probably far more amazing things Google does; this part is relatively trivial.
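Just to make the arithmetic concrete, here's a toy sketch of the "sorted structure in RAM" idea, scaled down to a million made-up usernames; it is obviously not how Google actually stores accounts:

# Binary search over a sorted in-memory list: O(log n) lookups, ~20 comparisons
# for a million entries, ~30 for a billion. Everything here is invented for the demo.
import bisect
import time

users = sorted(f"user{i:07d}@example.com" for i in range(1_000_000))

def user_exists(name: str) -> bool:
    i = bisect.bisect_left(users, name)
    return i < len(users) and users[i] == name

start = time.perf_counter()
for _ in range(100_000):
    user_exists("user0123456@example.com")
elapsed = time.perf_counter() - start
print(f"avg lookup: {elapsed / 100_000 * 1e6:.2f} microseconds")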

Get estimated execution time during design phase [closed]

I am going to develop some business logic. During the requirements phase, I need to provide an SLA.
The business always wants everything in less than a second. Sometimes it is very frustrating; they are not bothered about complexity.
My DB is SQL Server 2012, and it is a transactional DB.
Is there any formula that will take the number of tables, columns, etc. and provide an estimate?
No, you won't be able to get an execution time. Not only does the number of tables/joins factor in, but also how much data is returned, network speed, load on the server, and many other factors. What you can get is a query plan. SQL Server generates a query plan for every query it executes, and the execution plan will give you a "cost" value that you can use as a VERY general guideline about the query's performance. Check out these two links for more information:
1) This is a good Stack Overflow question/answer.
2) sys.dm_exec_query_plan
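For example, here is a hedged sketch of pulling that estimated cost programmatically with SET SHOWPLAN_ALL; the connection string, tables, and query are placeholders, and pyodbc plus a SQL Server ODBC driver are assumed:

# With SHOWPLAN_ALL on, SQL Server returns the estimated plan instead of
# executing the statement; TotalSubtreeCost on the first row is the overall
# estimated cost. All names here are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=mydb;Trusted_Connection=yes;",
    autocommit=True,
)
cur = conn.cursor()

cur.execute("SET SHOWPLAN_ALL ON")  # must be its own batch
cur.execute(
    "SELECT c.Name, SUM(o.Total) "
    "FROM dbo.Orders o JOIN dbo.Customers c ON c.Id = o.CustomerId "
    "GROUP BY c.Name"
)

columns = [d[0] for d in cur.description]
rows = cur.fetchall()
print("estimated plan cost:", rows[0][columns.index("TotalSubtreeCost")])

cur.execute("SET SHOWPLAN_ALL OFF")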

website speed performance as it relates to database load [closed]

I am new to this, but I am curious: does the size of a database negatively affect page load speeds? For example, if you had to fetch 20 items from a small database with 20,000 records and then fetch those same 20 items from a database of 2,000,000 records, would it be safe to assume that the latter would be much slower, all else being equal? And would buying more dedicated servers improve the speed? I want to educate myself on this so I can be prepared for future events.
It is not safe to assume that the bigger database is much slower. An intelligently designed database is going to do such page accesses through an index. For most real problems, the index will fit in memory. The cost of any page access is then:
Cost of looking up where the appropriate records are in the database.
Cost of loading the database pages containing those records into memory.
The cost of index lookups varies little (relatively) based on the size of the index. So, the typical worst case scenario is about 20 disk accesses for getting the data. And, for a wide range of overall table sizes, this doesn't change.
If the table is small and fits in memory, then you have the advantage of fully caching it in the in-memory page cache. This will speed up queries in that case. But the upper limit on performance is fixed.
If the index doesn't fit in memory, then the situation is a bit more complicated.
What would typically increase performance for a single request is having more memory. You need more processors if you have many simultaneous requests.
And, if you are seeing linear performance degradation, then you have a database poorly optimized for your application needs. Fixing the database structure in that case will generally be better than throwing more hardware at the problem.
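To make that concrete, here is a toy, self-contained sketch (using SQLite purely as a stand-in, with a made-up table) showing that an indexed 20-row fetch costs roughly the same on 20,000 rows as on 2,000,000, because both go through a B-tree index whose depth grows only logarithmically:

# Self-contained demo: the same indexed query against a small and a large table.
import sqlite3
import time

def build_db(n_rows: int) -> sqlite3.Connection:
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, category INTEGER, payload TEXT)")
    conn.executemany(
        "INSERT INTO items (category, payload) VALUES (?, ?)",
        ((i % 1000, f"row-{i}") for i in range(n_rows)),
    )
    conn.execute("CREATE INDEX idx_category ON items (category)")
    conn.commit()
    return conn

def time_fetch(conn: sqlite3.Connection, repeats: int = 200) -> float:
    start = time.perf_counter()
    for _ in range(repeats):
        conn.execute("SELECT * FROM items WHERE category = ? LIMIT 20", (42,)).fetchall()
    return (time.perf_counter() - start) / repeats

small = build_db(20_000)
large = build_db(2_000_000)
print(f"20k rows: {time_fetch(small) * 1e6:.1f} us per 20-row fetch")
print(f"2M rows:  {time_fetch(large) * 1e6:.1f} us per 20-row fetch")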

Can we restrict our database to not autogrow? [closed]

Can we make our database (whatever its size) not auto-grow at all (data and log files)?
If we proceed with this choice, maybe we will face problems when the database becomes full during business hours.
Typically the way you prevent growth events from occurring during business hours is by pre-allocating the data and log files to a large enough size to minimize or completely eliminate auto-growth events in the first place. This may mean making the files larger than they need to be right now, but large enough to handle all of the data and/or your largest transactions across some time period x.
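As a rough sketch of that pre-allocation (the database name, logical file names, and sizes are placeholders; pyodbc and appropriate permissions are assumed):

# Grow the data and log files once, up front, and set a fixed growth increment
# as a safety net. All names and sizes below are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,  # ALTER DATABASE cannot run inside a user transaction
)
cur = conn.cursor()

# Pre-size the data file for, say, the next year of expected growth.
cur.execute("ALTER DATABASE [MyDb] MODIFY FILE (NAME = MyDb_Data, SIZE = 100GB, FILEGROWTH = 1GB)")

# Pre-size the log file to cover your largest transactions and maintenance jobs.
cur.execute("ALTER DATABASE [MyDb] MODIFY FILE (NAME = MyDb_Log, SIZE = 20GB, FILEGROWTH = 1GB)")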
Other things you can do to minimize the impact of growth events:
balance the growth size so that growth events are rare, but still don't take a lot of time individually. You don't want the default of 10% and 1MB that come from the model database; but there is no one-size-fits-all answer for what your settings should be.
ensure you are in the right recovery model. If you don't need point-in-time recovery, put your database in SIMPLE. If you do, put it in FULL, but make sure you are taking frequent log backups.
ensure you have instant file initialization enabled. This won't help with log files, but when your data file grows, it should be near instantaneous, up to a certain size (again, no one-size-fits-all here).
get off of slow storage.
Much more info here:
How do you clear the SQL Server transaction log?