APC OpCode - apc.filters - Excluding a directory from cache - apc

I have recently implemented APC on my KnownHost SSD VPS and it is working very well (WordPress sites).
One thing I would like to do is exclude a directory (or directories) from the cache.
I have read the documentation on apc.filters, but it is unclear to me whether whole directories can be excluded or only file types. I have also searched the web extensively and have not found a working example of excluding a directory.
I have tried numerous variations for apc.filters, and have yet to find one that will exclude my directory.
So if my directory is located on this path in the server:
/home/my_user/public_html/my_directory
What would the correct value be for apc.filters to exclude the "my_directory" sub-directory?

Moved from the question
UPDATE: I found the answer (with help from KnownHost)
The correct syntax to exclude one directory is:
apc.filters = "-my_directory/.*";
Multiple directories are:
apc.filters = "-my_directory/.*,-my_directory2/.*";
I hope this can help someone out there, as I could not get it right, or find any information on it.
Thanks
In the interest of complete information, here are my APC runtime settings:
apc.cache_by_default 1
apc.canonicalize 1
apc.coredump_unmap 0
apc.enable_cli 0
apc.enabled 1
apc.file_md5 0
apc.file_update_protection 2
apc.filters
apc.gc_ttl 3600
apc.include_once_override 0
apc.lazy_classes 0
apc.lazy_functions 0
apc.max_file_size 2M
apc.mmap_file_mask /tmp/apc.XXXXXX
apc.num_files_hint 3000
apc.preload_path
apc.report_autofilter 0
apc.rfc1867 0
apc.rfc1867_freq 0
apc.rfc1867_name APC_UPLOAD_PROGRESS
apc.rfc1867_prefix upload_
apc.rfc1867_ttl 3600
apc.serializer default
apc.shm_segments 1
apc.shm_size 512M
apc.slam_defense 0
apc.stat 1
apc.stat_ctime 0
apc.ttl 7200
apc.use_request_time 1
apc.user_entries_hint 4096
apc.user_ttl 7200
apc.write_lock 1
Comments from other users suggest that it may also be necessary to set apc.cache_by_default=0
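According to the APC documentation, apc.cache_by_default=0 inverts the filter logic: nothing is cached unless a filter with a leading + matches it. A sketch of that whitelist style (the path is taken from the question and only illustrative):
apc.cache_by_default = 0
apc.filters = "+/home/my_user/public_html/.*"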

Related

How to change the max size for file upload on AOLServer/CentOS 6?

We have a portal for our customers that allows them to start new projects directly on our platform. The problem is that we cannot upload documents bigger than 10 MB.
Every time I try to upload a file bigger than 10 MB, I get a "The connection was reset" error. After some research it seems that I need to change the maximum upload size, but I don't know where to do it.
I'm on CentOS 6.4/RedHat with AOL Server.
Language: TCL.
Anyone has an idea on how to do it?
EDIT
In the end I could solve the problem with the command ns_limits set default -maxupload 500000000.
In your config.tcl, add the following line to the nssock module section:
set max_file_upload_mb 25
# ...
ns_section ns/server/${server}/module/nssock
# ...
ns_param maxinput [expr {$max_file_upload_mb * 1024 * 1024}]
# ...
It is also advised to constrain the upload times, by setting:
set max_file_upload_min 5
# ...
ns_section ns/server/${server}/module/nssock
# ...
ns_param recvwait [expr {$max_file_upload_min * 60}]
If running on top of nsopenssl, you will have to set those configuration values (maxinput, recvwait) in a different section.
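For example, something along these lines (a sketch; adjust the section name to match how the SSL driver is actually loaded in your config.tcl):
ns_section ns/server/${server}/module/nsopenssl
ns_param maxinput [expr {$max_file_upload_mb * 1024 * 1024}]
ns_param recvwait [expr {$max_file_upload_min * 60}]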
I see that you are running Project Open. As well as setting the maxinput value for AOLserver, as described by mrcalvin, you also need to set 2 parameters in the Site Map:
Attachments package: parameter "MaximumFileSize"
File Storage package: parameter "MaximumFileSize"
These should be set to values in bytes, but not larger than the maxinput value for AOLserver. See the Project Open documentation for more info.
If you are running Project Open behind a reverse proxy such as Pound or Nginx, check the proxy's documentation as well; most likely you will need to raise its upload limit there too.
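For Nginx, the directive to raise is client_max_body_size, which can be set at the http, server, or location level; a one-line sketch with an example value (not a recommendation):
client_max_body_size 500m;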

ZFS: Unable to expand pool after increasing disk size in vmware

I have a Centos7 VM with ZFS on linux installed.
The VM has a disk /dev/sdb, that I've added to a pool named 'backup', and in this pool created a dataset.
Now, I wanted to increase the size of the disk in VMware, and then expand the size of the pool, but I'm not getting this to work.
I've tried 'zpool online -e backup sdb', but nothing changes.
I've tried running 'partprobe /dev/sdb' before and after the line above, but nothing changes.
I've tried rebooting + the above, nothing changes.
I've tried "parted /dev/sdb",resizing the partition (it suggests the actual new size of the volume), and then all of the above. But nothing changes
I've tried 'zpool export backup' + 'zpool import backup' in various combinations with all of the above. No luck
And also: 'lsblk' and 'df -h' report the old/wrong size of /dev/sdb, even though parted seems to understand that it has been increased.
PS: autoexpand=on
What to do?
I faced a similar issue today and had to try a lot before finding the solution.
When I tried the known solutions (setting autoexpand=on on the pool and re-running partprobe), the pool would not auto-expand, even after a restart.
Finally, I could solve it using parted instead of getting into zpool at all.
We need to be careful here since wrong partition selections can cause data loss.
What worked for me in this situation:
Step 1: Find which partition backs the pool you are trying to expand. In my case it is number 5, as seen below (the unallocated space sits right after this partition). Use parted -l:
parted -l
Output
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sda: 69.8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 2097kB 1049kB bios_grub
2 2097kB 540MB 538MB fat32 EFI System Partition boot, esp
3 540MB 2009MB 1469MB swap
4 2009MB 3592MB 1583MB zfs
5 3592MB 32.2GB 28.6GB zfs
Step 2: Explicitly instruct parted to expand partition number 5 to 100% of the available space. Note that '5' is not fixed; use the partition number you wish to expand and double-check it. The form is parted /dev/XXX resizepart YY 100%:
parted /dev/sda resizepart 5 100%
After this, I was able to use the entire space in the VM.
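If the pool still reports the old size after the resize, it may also help to re-read the partition table and ask ZFS to expand the device explicitly (a suggestion using the pool and disk names from the question; I did not need this myself):
partprobe /dev/sdb
zpool online -e backup sdb
zpool list backup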
For reference:
lsblk before:
sda 8:0 0 65G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 513M 0 part /boot/grub
│ /boot/efi
├─sda3 8:3 0 1.4G 0 part
│ └─cryptoswap 253:1 0 1.4G 0 crypt [SWAP]
├─sda4 8:4 0 1.5G 0 part
└─sda5 8:5 0 29.5G 0 part
lsblk after:
sda 8:0 0 65G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 513M 0 part /boot/grub
│ /boot/efi
├─sda3 8:3 0 1.4G 0 part
│ └─cryptoswap 253:1 0 1.4G 0 crypt [SWAP]
├─sda4 8:4 0 1.5G 0 part
└─sda5 8:5 0 61.7G 0 part

Redis Booksleeve GetConfig Concern

I'm using Redis MSOpenTech 2.6 with Booksleeve 1.3.38. Whenever I execute
Dictionary<string, string> config = conn.Server.GetConfig("save").Result;
I get the following:
save 900 0 300 0 60 0
which I know is incorrect, since I can read the .conf file and it's set to the standard
900 1 300 10 60 10000
I've tried running the command with admin privileges and without, and it's always the same. Is there something I'm missing, or is this a bug in the MSOpenTech version of Redis?
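A suggestion that is not from the original thread: compare Booksleeve's answer with what the server itself reports from a console, for example:
redis-cli CONFIG GET save
If that also returns 900 0 300 0 60 0, the running instance is most likely not loading the .conf file being read.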

FORTRAN 90 Open file issues

I've been searching through this code for a long time now and can't seem to find the reason this isn't working... Maybe an outsider's view can help.
!I open File 1
!Opening File 1
open(2, File='File1.txt',status='old')
read(2,*)!File 1 header
PRINT*,'File1.txt read'
!Read it
DO b=1,nb
DO i=1,ni(b)
READ(2,*)dum(b,i),Qr(1,xbu(b),i),hr(1,xbu(b),i),Ar(1,xbu(b),i),Pr(1,xbu(b),i),dx(xbu(b),i),sx(xbu(b),i)
END DO
END DO
And it's fine. I've printed it; it's all there. But when I go to File 2, doing the exact same thing:
PRINT*,'Reading File 2 '
open(3, File='File2.txt',status='old') !<- It stays here forever.
PRINT*,'File2.txt read'
The files are plain txt, with real values like this
File 1:
11 0 0 0 0 6500 1.2
File 2
11 0.00 0.00 0.00 0.0
Any thoughts on what could cause the same code to fail the second time?
You should probably add some error checking in there; try putting
open(3, File='File2.txt',status='old',iostat=io_status, err=100)
And somewhere put (with io_status declared as an integer):
100 write(*,*) 'io status = ', io_status
stop
I also recommend writing a function that finds the first available Fortran unit number rather than hard-coding it, something like "getting a free unit number in Fortran".
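Alternatively, if your compiler supports Fortran 2008, newunit= avoids hard-coded unit numbers altogether; a minimal sketch (only the file name is taken from the question, the rest is illustrative):
program check_file2
  implicit none
  integer :: u, io_status
  ! let the runtime pick a free unit and report failures instead of relying on err= jumps
  open(newunit=u, file='File2.txt', status='old', iostat=io_status)
  if (io_status /= 0) then
    write(*,*) 'could not open File2.txt, iostat = ', io_status
    stop
  end if
  close(u)
end program check_file2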

Awstats - LogFormat doesn't match the Amazon S3 log file contents

I'm trying to set up Awstats to parse Amazon S3 log files, but it keeps saying the log doesn't match the LogFormat. Below are the configuration and log content:
LogFormat="%other %extra1 %time1 %host %logname %other %method %url
%otherquot %code %extra2 %bytesd %other %extra3 %extra4 %refererquot
%uaquot %other"
0dfbd34f831f30a30832ff62edcb8a93158c056f27cebd6b746e35309d19039c
looxcie-data1 [18/Dec/2011:04:30:15 +0000] 75.101.241.228
arn:aws:iam::062105025988:user/s3-user E938CC6E4B848BEA
REST.GET.BUCKET - "GET
/?delimiter=/&prefix=data/prod/looxciemp4/0/20/&max-keys=1000
HTTP/1.1" 200 - 672 - 44 41 "-" "-" -
Then I execute the command and get the following result:
root@test:/usr/local/awstats/wwwroot/cgi-bin# perl awstats.pl -update -config=www.awstats.apache.com
Create/Update database for config "/etc/awstats/awstats.www.awstats.apache.com.conf" by AWStats version 7.0 (build 1.971)
From data in log file "/var/log/httpd/access.log"...
Phase 1 : First bypass old records, searching new record...
Searching new records from beginning of log file...
Jumped lines in file: 0
Parsed lines in file: 1
Found 0 dropped records,
Found 0 comments,
Found 0 blank records,
Found 1 corrupted records,
Found 0 old records,
Found 0 new qualified records.
Can anyone help to figure it out?
UPDATE:
I found that the format "%logname" cannot match a name such as
arn:aws:iam::062105025988:user/s3-user
It is weird, but "%lognamequot" is able to match "arn:aws:iam::062105025988:user/s3-user".
This is the cause of the problem.
But our log files do include lognames like arn:aws:iam::062105025988:user/s3-user.
Can anyone help figure out why "%logname" doesn't match it?
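One thing that may help pin this down (a suggestion based on the standard awstats.pl options, not something from the original thread): re-run the update with corrupted-record diagnostics turned on, which prints the rejected lines so you can see where they stop matching the LogFormat:
perl awstats.pl -config=www.awstats.apache.com -update -showcorrupted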