Is there a limit for the size of a single file in TFVC source control for TFS 2015? - tfs-2015

One of our developers tried to upload a 20 MB SSIS package into TFVC from the web portal. He got the following error: Failed to checkin changes: The maximum request size of 26214400 bytes was exceeded.
What setting do I change to allow that file to be uploaded to TFVC?
Thanks

Yes, there is a limit on the size of a single file when you upload/check in from web access. The maximum request size is 26214400 bytes (25 MB), as the error message indicates. However, this limitation does not apply when you use your favorite IDE or tf.exe for the operation.
The size limit cannot be increased at the moment; you can use Visual Studio with Team Explorer or tf.exe (see the Checkin command) to check in large files.
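For example, from a developer command prompt in a mapped workspace where the file is already pending as an add or edit, a tf.exe check-in might look roughly like this (the file path and comment are placeholders):
tf checkin C:\Workspace\SSIS\LoadWarehouse.dtsx /comment:"Add SSIS package" /noprompt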
The same issue is discussed in this thread: [TFS 2017] how to increase maximum total size of files when checking files from web browser?

Related

JMeter out of memory - file upload test

I am facing out of memory errors during file upload test execution.
I am running the test from an EC2 m4.xlarge instance (16 GB RAM) and have allocated 80% of the memory as the JMeter heap size.
During the test, CPU utilization hits 100%, the whole memory is consumed (around 12 GB), and a huge java_pid***.hprof (heap dump) file is created in the bin folder.
File upload size: a mix of 200 KB, 400 MB, and 1.5 GB files
Total number of threads: 50
JMeter version: 3.3
I have tried the steps below, suggested on different forums, but they didn't work:
Disabled listeners
Running the test in non-gui mode
Increased heap size in jmeter.bat
Running the test from a higher configuration instance (yet to try this)
Has anyone faced this and how did you fix this?
Also, how do I stop the huge (3-5 GB) java_pid***.hprof dump file from being generated?
50 threads * 1.5 GB == 75 GB, while you have only 3 to 5 GB allocated to JMeter, so it is definitely not enough.
You need either something like an m4.10xlarge with 160 GB RAM or an m5d.12xlarge with 192 GB RAM in order to upload files that big with that many threads.
Another option is to switch to Distributed Testing, but you will need to spin up more m4.xlarge instances.
You can also try switching to the HTTP Raw Request sampler, which has a nice feature of streaming the file directly to the server without pre-loading it into memory, so theoretically you should be able to simulate file uploads even on that limited instance; however, it might not fully reflect a real-life scenario. You can install the HTTP Raw Request sampler using the JMeter Plugins Manager.
To disable heap dump creation, remove the DUMP="-XX:+HeapDumpOnOutOfMemoryError" line from the JMeter startup script.
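As a rough sketch, the two startup-script edits discussed in this thread would look something like this in jmeter.bat (exact default values differ between JMeter versions, and the heap numbers are only placeholders):
rem heap dump line commented out to stop java_pid*.hprof files:
rem set DUMP=-XX:+HeapDumpOnOutOfMemoryError
rem heap raised from the default (placeholder values):
set HEAP=-Xms4g -Xmx12g
The Linux jmeter startup script defines equivalent HEAP and DUMP variables.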

How to stop Adobe Experience Manager from storing binaries in the local file system?

We are using Amazon S3 for storing our binaries, and we just want to keep a reference to those binaries in our local file system. Currently, the binaries are stored in both S3 and the local file system.
Assuming that you are referring to the repository/datastore folder when using the S3 data store, that is your S3 cache. You can change the size of the cache and in theory reduce it to some small number, but you cannot completely disable it. Set
cacheSize=<size in bytes>
in your S3 config file.
Note that there is a practical lower limit to this number based on the purge factor parameters. Setting it below about 10% of your S3 bucket size will trigger frequent cache purges and slow down your system, and setting it to zero will cause a configuration error on startup.
For some background, the path property in your S3 config is the path to the data store on the file system. This is because the S3 data store is implemented as a write-through cache: all the S3 data is written to the file system and then asynchronously uploaded to the S3 bucket. The asynchronous uploads are controlled via other settings in the same file (number of retries, threads, etc.).
This write-through cache gives your application a significant performance boost, because write operations do not suffer from S3 network latency. Ideally, you should configure the cache size and purge ratios according to your disk capacity and performance requirements rather than reducing them to a bare minimum.
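For illustration only, the cache-related properties sit alongside the connection settings in the S3 data store config file, which on AEM 6.x is typically something like org.apache.jackrabbit.oak.plugins.blob.datastore.S3DataStore.config under crx-quickstart/install; the values below are placeholders and the exact property names vary between AEM/Oak versions:
s3Bucket="my-aem-binaries"
path="crx-quickstart/repository/datastore"
cacheSize="8589934592"
cachePurgeTrigFactor="0.95"
cachePurgeResizeFactor="0.85"
Here cacheSize is roughly 8 GB, and the purge factors control when the cache starts evicting binaries that have already been uploaded to S3.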
UPDATED 28 March 2017
Improved and updated the answer to reflect latest understanding.

SharePoint 2010 - URL Invalid. It may refer to a nonexistent file or folder

I am trying to upload documents to a SharePoint 2010 site and I am getting the following error:
The URL 'i74 Corridor/book/ADG Book_04-06-09.pdf' is invalid. It may refer to a nonexistent file or folder, or refer to a valid file or folder that is not in the current Web
I found that it could be caused by the database being full, so I cleared the database log files, but I am still facing the same issue. Could you please provide a solution to fix this issue?
Thanks,
Sandeep Manne
I have faced the same issue and made a few changes to the database settings, as below.
1. Check the autogrowth size of that database.
2. Check the maximum file size limit and increase it to around 2048 MB (a T-SQL sketch follows below).
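If you prefer to script the change instead of using the database properties dialog in SQL Server Management Studio, a rough T-SQL sketch would be (WSS_Content is a placeholder for your content database and its logical data file name; check yours first):
ALTER DATABASE [WSS_Content]
MODIFY FILE (NAME = N'WSS_Content', FILEGROWTH = 100MB, MAXSIZE = 2048MB);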

OpenLDAP BDB log file maintenance and auto removal

I have a question about the log files OpenLDAP/BDB creates in the data directory. These files have the form log.XXXXXXXXXX (where X is a digit), and each has the same size (which is configurable in DB_CONFIG).
I have read a lot about checkpointing and log file maintenance in the OpenLDAP and BDB documentation. It seems to be normal that these files grow very fast and need maintenance: normally you should back them up regularly and delete them afterwards. But how do you handle this during a long-running data migration?
In my case, running a test migration for 375 accounts, which triggers 3 write requests per account to the LDAP server, produces 6 log files of 5 MB each. The problem is that there are more than 37000 accounts on the live system that need to be migrated, and creating several gigabytes of log files is not acceptable.
Because of that I tried to configure auto removal of the log files but the suggested solution is not working for me. After reading through the documentation, my conclusion was that I have to enable checkpoints via slapd.conf and set the DB_LOG_AUTOREMOVE flag in the DB_CONFIG file like this:
My settings in slapd.conf:
checkpoint 128 15
My settings in DB_CONFIG:
set_flags DB_LOG_AUTOREMOVE
set_lg_regionmax 262144
set_lg_bsize 2097152
But the log files are still there, even if I decrease the checkpoint settings to checkpoint 1 1. If I run slapd_db_archive -d in the data directory, all of these files except the very last one are removed.
Does anyone have an idea how to get the auto removal working? I am close to giving up and adding a cron job to run slapd_db_archive -d during the migration, but I am not sure whether this may cause problems.
We are using OpenLDAP 2.3.43 with the BDB backend (HDB to be precise) on CentOS.
In BDB (I don't know about HDB), DB_LOG_AUTOREMOVE removes log.* files that do not reference records currently in the database. This is not the same as removing all log files.
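If the flag still has no effect, the cron-based fallback mentioned in the question is a workable stopgap; a sketch for root's crontab (the data directory and tool path are placeholders, and slapd_db_archive must come from the same BDB build that slapd uses):
*/30 * * * * cd /var/lib/ldap && /usr/sbin/slapd_db_archive -d
Keep in mind that removing log files this way limits catastrophic recovery to your most recent backup, so align the schedule with your backup strategy.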

IBM Worklight v5.0.6 Application Center - apk file upload fails

When attempting to upload our .apk file, the server responds with simply:
"File HelloWorld.apk file not uploaded"
Nothing is logged in trace.log in relation to this upload, so I am not able to see any log message to diagnose this further. How do you enable logging for this?
Is there a timeout or a file upload size limit? If so, how/where do you change that? The HelloWorld.apk file is 5.6 MB.
There is indeed a file size limit, but it is imposed by MySQL by default (1 MB). If you are using MySQL 5.1 or 5.5 (5.6 is not supported in Worklight 5.0.x), follow these steps:
Locate the my.ini file belonging to your MySQL installation
In it, find the [mysqld] section
Underneath the section name, paste this: max_allowed_packet=1000M (shown in context below)
Restart the MySQL service
Redeploy the .apk file
You may need to restart the application server running Application Center as well.
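For reference, the relevant section of my.ini ends up looking roughly like this (1000M is simply a generous upper bound; any value larger than your biggest .apk should do):
[mysqld]
max_allowed_packet=1000M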