Ranger LDAP integration: user/group sync issue

I am using Ranger version 1.2.0 and am trying to integrate LDAP user/group sync. Below is my Ranger configuration:
Bind User: uid=admin,o=Mobility
Username Attribute: cn
User Object Class: inetOrgPerson
User Search Base: ou=Users,o=Mobility
User Search Filter: (&(objectClass=inetOrgPerson)(cn=?))
User Search Scope: cn
User Group Name Attribute: cn
Group Member Attribute: member
Group Name Attribute: cn
Group Object Class: groupOfNames
Group Search Base: ou=Groups,o=Mobility
Group Search Filter: (&(objectClass=groupOfNames)(cn=?))
(Screenshot of the LDAP config omitted; the settings are listed above.)
Below are the logs I am getting in Ranger's auth.log:
11 Feb 2021 16:51:04 INFO LdapDeltaUserGroupBuilder [UnixUserSyncThread] - LdapDeltaUserGroupBuilder.getGroups() completed with group count: 0
11 Feb 2021 16:51:04 INFO UserGroupSync [UnixUserSyncThread] - End: update user/group from source==>sink
11 Feb 2021 17:51:04 INFO UserGroupSync [UnixUserSyncThread] - Begin: update user/group from source==>sink
11 Feb 2021 17:51:04 INFO LdapDeltaUserGroupBuilder [UnixUserSyncThread] - LdapDeltaUserGroupBuilder updateSink started
11 Feb 2021 17:51:04 INFO LdapDeltaUserGroupBuilder [UnixUserSyncThread] - Performing user search first
11 Feb 2021 17:51:04 INFO LdapDeltaUserGroupBuilder [UnixUserSyncThread] - extendedUserSearchFilter = (&(objectclass=inetOrgPerson)(|(uSNChanged>=0)(modifyTimestamp>=19700101053000Z))(&(objectClass=inetOrgPerson)(cn=?)))
11 Feb 2021 17:51:04 INFO LdapDeltaUserGroupBuilder [UnixUserSyncThread] - LdapDeltaUserGroupBuilder.getUsers() completed with user count: 0
11 Feb 2021 17:51:04 INFO LdapDeltaUserGroupBuilder [UnixUserSyncThread] - extendedAllGroupsSearchFilter = (&(objectclass=groupOfNames)(&(objectClass=groupOfNames)(cn=?))(|(uSNChanged>=0)(modifyTimestamp>=19700101053000Z)))
11 Feb 2021 17:51:04 INFO LdapDeltaUserGroupBuilder [UnixUserSyncThread] - LdapDeltaUserGroupBuilder.getGroups() completed with group count: 0
11 Feb 2021 17:51:05 INFO UserGroupSync [UnixUserSyncThread] - End: update user/group from source==>sink

The usersync search-filter properties are wrongly configured: Ranger passes the search filter to the LDAP server verbatim, so the literal ? in (cn=?) matches no entry, which is why both getUsers() and getGroups() complete with a count of 0. Here is the fix for it (a sketch of the corrected properties follows the links):
https://github.com/apache/ranger/pull/74
https://www.mail-archive.com/user@ranger.apache.org/msg00684.html
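A minimal sketch of the corrected properties, assuming the stock ranger-ugsync-site.xml property names (verify them against your install; the key points are that the filter must be a complete, valid LDAP filter with no ? placeholder, and that the search scope must be base, one, or sub rather than an attribute name):

<property>
  <name>ranger.usersync.ldap.user.searchfilter</name>
  <value>(objectClass=inetOrgPerson)</value> <!-- complete filter, no "?" -->
</property>
<property>
  <name>ranger.usersync.group.searchfilter</name>
  <value>(objectClass=groupOfNames)</value>
</property>
<property>
  <name>ranger.usersync.ldap.user.searchscope</name>
  <value>sub</value> <!-- not "cn": scope must be base, one, or sub -->
</property>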

Related

Get the number of unique days with overlapping dates (in SAS)

I couldn't explain the problem briefly, so I'll try to explain it this way. Let's say I have a table similar to the one below.
How do I get the total number of days in October, per student, on which that student has at least 1 book checked out?
Please note that a single student can check out more than 1 book at a time, which causes the overlapping dates.
|Student|Book                                 |Date_Borrowed|Date_Returned|
|-------|-------------------------------------|-------------|-------------|
|David  |A Thousand Splendid Suns             |01 Oct 2021  |05 Oct 2021  |
|David  |Jane Eyre                            |09 Oct 2021  |13 Oct 2021  |
|David  |Please Look After Mom                |21 Oct 2021  |29 Oct 2021  |
|Fiona  |Sense and Sensibility                |05 Oct 2021  |14 Oct 2021  |
|Fiona  |The Girl Who Saved the King of Sweden|05 Oct 2021  |14 Oct 2021  |
|Fiona  |A Fort of Nine Towers                |02 Oct 2021  |17 Oct 2021  |
|Fiona  |One Hundred Years of Solitude        |20 Oct 2021  |30 Oct 2021  |
|Fiona  |The Unbearable Lightness of Being    |20 Oct 2021  |30 Oct 2021  |
|Greg   |Fahrenheit 451                       |06 Oct 2021  |11 Oct 2021  |
|Greg   |One Hundred Years of Solitude        |10 Oct 2021  |17 Oct 2021  |
|Greg   |Please Look After Mom                |15 Oct 2021  |21 Oct 2021  |
|Greg   |4 3 2 1                              |20 Oct 2021  |27 Oct 2021  |
|Greg   |The Girl Who Saved the King of Sweden|27 Oct 2021  |03 Nov 2021  |
|Marcus |Fahrenheit 451                       |01 Oct 2021  |04 Oct 2021  |
|Marcus |Nectar in a Sieve                    |15 Oct 2021  |15 Oct 2021  |
|Marcus |Please Look After Mom                |30 Oct 2021  |31 Oct 2021  |
|Priya  |Like Water for Chocolate             |02 Oct 2021  |21 Oct 2021  |
|Priya  |Fahrenheit 451                       |21 Oct 2021  |22 Oct 2021  |
|Sasha  |Baudolino                            |03 Oct 2021  |29 Oct 2021  |
|Sasha  |A Thousand Splendid Suns             |07 Oct 2021  |16 Oct 2021  |
|Sasha  |A Fort of Nine Towers                |26 Oct 2021  |01 Nov 2021  |
Thanks in advance!
Using a data step, you can expand each date range into a long format with one row per day. From there, you can use SQL to do a simple count by student after removing overlapping dates.
data foo;
    set have;

    do date = date_borrowed to date_returned;
        output;
    end;

    keep student date;
    format date date9.;
run;
This gets us a long table of all the dates with at least one book checked out for each student.
student date
David 01OCT2021
David 02OCT2021
David 03OCT2021
David 04OCT2021
David 05OCT2021
David 09OCT2021
...
Now we need to remove the overlapping dates.
proc sort data=foo nodupkey;
by student date;
run;
From here, we can do a simple SQL count per student.
proc sql noprint;
    create table want as
        select student
             , intnx('month', date, 0, 'B') as month format=monyy7.
             , count(*) as days_checked_out
        from foo
        where calculated month = '01OCT2021'd
        group by student, calculated month
    ;
quit;
Output:
student month days_checked_out
David OCT2021 19
Fiona OCT2021 27
Greg OCT2021 26
Marcus OCT2021 7
Priya OCT2021 21
Sasha OCT2021 29
An easy way is to make a temporary array with one variable for each day in the time period you want to count. Then use a do loop to set the variables representing those days to 1. When you have reached the last record for a student, take the sum to find the number of days covered.
First let's convert your posted table into a dataset.
data have;
infile cards dsd dlm='|' truncover;
input Student :$20. Book :$100. (Date_Borrowed Date_Returned) (:date.);
format Date_Borrowed Date_Returned date11.;
cards;
David|A Thousand Splendid Suns|01 Oct 2021|05 Oct 2021
David|Jane Eyre|09 Oct 2021|13 Oct 2021
David|Please Look After Mom|21 Oct 2021|29 Oct 2021
Fiona|Sense and Sensibility|05 Oct 2021|14 Oct 2021
Fiona|The Girl Who Saved the King of Sweden|05 Oct 2021|14 Oct 2021
Fiona|A Fort of Nine Towers|02 Oct 2021|17 Oct 2021
Fiona|One Hundred Years of Solitude|20 Oct 2021|30 Oct 2021
Fiona|The Unbearable Lightness of Being|20 Oct 2021|30 Oct 2021
Greg|Fahrenheit 451|06 Oct 2021|11 Oct 2021
Greg|One Hundred Years of Solitude|10 Oct 2021|17 Oct 2021
Greg|Please Look After Mom|15 Oct 2021|21 Oct 2021
Greg|4 3 2 1|20 Oct 2021|27 Oct 2021
Greg|The Girl Who Saved the King of Sweden|27 Oct 2021|03 Nov 2021
Marcus|Fahrenheit 451|01 Oct 2021|04 Oct 2021
Marcus|Nectar in a Sieve|15 Oct 2021|15 Oct 2021
Marcus|Please Look After Mom|30 Oct 2021|31 Oct 2021
Priya|Like Water for Chocolate|02 Oct 2021|21 Oct 2021
Priya|Fahrenheit 451|21 Oct 2021|22 Oct 2021
Sasha|Baudolino|03 Oct 2021|29 Oct 2021
Sasha|A Thousand Splendid Suns|07 Oct 2021|16 Oct 2021
Sasha|A Fort of Nine Towers|26 Oct 2021|01 Nov 2021
;
Now we can use BY-group processing in a data step to aggregate per student. We can set the lower and upper index of the array to the values SAS uses to represent those days. Temporary arrays are automatically retained across observations; we just need to clear the array when we start a new student.
The SAS compiler does not accept a date literal as an array index boundary, so we use %SYSEVALF() to convert the date literal into the integer it represents.
data want;
    set have;
    by student;

    array october [%sysevalf('01oct2021'd):%sysevalf('31oct2021'd)] _temporary_;

    if first.student then call missing(of october[*]);

    do date = max(date_borrowed,'01oct2021'd) to min(date_returned,'31oct2021'd);
        october[date] = 1;
    end;

    if last.student;
    days = sum(0, of october[*]);

    keep student days;
run;
Results:
Obs Student days
1 David 19
2 Fiona 27
3 Greg 26
4 Marcus 7
5 Priya 21
6 Sasha 29
You could also modify it slightly to not only count the number of "covered" (or unique) days, but also the total number of "book" days.
data want;
    set have;
    by student;

    array october [%sysevalf('01oct2021'd):%sysevalf('31oct2021'd)] _temporary_;

    if first.student then call missing(of october[*]);

    do date = max(date_borrowed,'01oct2021'd) to min(date_returned,'31oct2021'd);
        october[date] = sum(october[date], 1);
    end;

    if last.student;
    unique_days = n(of october[*]);
    book_days = sum(0, of october[*]);

    keep student unique_days book_days;
run;
Results:
unique_ book_
Obs Student days days
1 David 19 19
2 Fiona 27 58
3 Greg 26 34
4 Marcus 7 7
5 Priya 21 22
6 Sasha 29 43

How to get the latest values day-wise from a timeseries table?

I want to get the latest value of each SIZE_TYPE day-wise, ordered by TIMESTAMP. So only one value of each SIZE_TYPE must be present for a given day, and it must be the latest value for that day.
How do I get the desired output? I'm using PostgreSQL here.
Input
|TIMESTAMP |SIZE_TYPE|SIZE|
|----------------------------------------|---------|----|
|1595833641356 [Mon Jul 27 2020 07:07:21]|0 |541 |
|1595833641356 [Mon Jul 27 2020 07:07:21]|1 |743 |
|1595833641356 [Mon Jul 27 2020 07:07:21]|2 |912 |
|1595876841356 [Mon Jul 27 2020 19:07:21]|1 |714 |
|1595876841356 [Mon Jul 27 2020 19:07:21]|2 |987 |
|1595963241356 [Tue Jul 28 2020 19:07:21]|0 |498 |
|1595920041356 [Tue Jul 28 2020 07:07:21]|2 |974 |
|1595920041356 [Tue Jul 28 2020 07:07:21]|0 |512 |
Note: the TIMESTAMP values are Unix time in milliseconds; the date-time strings are given for reference.
Output
|TIMESTAMP |SIZE_TYPE|SIZE|
|----------------------------------------|---------|----|
|1595833641356 [Mon Jul 27 2020 07:07:21]|0 |541 |
|1595876841356 [Mon Jul 27 2020 19:07:21]|1 |714 |
|1595876841356 [Mon Jul 27 2020 19:07:21]|2 |987 |
|1595920041356 [Tue Jul 28 2020 07:07:21]|2 |974 |
|1595963241356 [Tue Jul 28 2020 19:07:21]|0 |498 |
Explanation
For July 27, the latest values for
0: 541 (no other entries for the day)
1: 714
2: 987
For July 28, the latest values for
0: 498
1: nothing (ignore)
2: 974 (no other entries for the day)
You can use distinct on:
select distinct on (floor(timestamp / (24 * 60 * 60 * 1000)), size_type) t.*
from input t
order by floor(timestamp / (24 * 60 * 60 * 1000)), size_type,
         timestamp desc;
The arithmetic just converts the millisecond timestamp into a day number (dividing by the number of milliseconds in a day); a more explicit spelling is sketched after the fiddle link below.
Here is a db<>fiddle.
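If you prefer the day boundary to be explicit, here is an equivalent sketch (it assumes the column stores Unix milliseconds, so to_timestamp() needs seconds; note that casting to date groups by day in the session time zone, whereas the integer arithmetic above groups by UTC day):

-- same DISTINCT ON logic, with a readable day expression
select distinct on (to_timestamp(timestamp / 1000)::date, size_type) t.*
from input t
order by to_timestamp(timestamp / 1000)::date, size_type,
         timestamp desc;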

Showing data for full Monday-Sunday weeks only and hiding non-full-week data

Sorry if I'm shooting newbie questions here.
I want to create a weekly report, but for this weekly report I want full data from Monday to Sunday.
Condition:
Last 4 weeks only
Showing full week (Monday - Sunday)
Hide the result if it's not full week
If I use getdate() - 14 and access the data on a Wednesday, it starts counting from the Wednesday two weeks ago instead of from last Monday. Meanwhile, I want the report to show full weeks only.
Can anyone share how to do that in SQL?
Here I provide sample data:
Columns: DATE -- TOTAL_PERSON
- Fri, 1 Jun 2018 -- 10
- Sat, 2 Jun 2018 -- 4
- Sun, 3 Jun 2018 -- 12
- Mon, 4 Jun 2018 -- 15
- Tue, 5 Jun 2018 -- 10
- Wed, 6 Jun 2018 -- 3
- Thu, 7 Jun 2018 -- 1
- Fri, 8 Jun 2018 -- 13
- Sat, 9 Jun 2018 -- 9
- Sun, 10 Jun 2018 -- 23
- Mon, 11 Jun 2018 -- 5
- Tue, 12 Jun 2018 -- 3
- Wed, 13 Jun 2018 -- 1
- Thu, 14 Jun 2018 -- (TODAY)
In this case, if I am accessing the data on Thu, 14 Jun 2018, I want to get TOTAL_PERSON data from Mon, 4 Jun 2018 to Sun, 10 Jun 2018 only, and not show the rest, since those weeks are not full.
Can anyone help me how to do that?
Thanks a lot!
I think you want:
where datediff(week, date, getdate()) <= 2
DATEDIFF(week, ...) counts the number of week boundaries crossed between the two dates rather than elapsed days, so rows fall into whole calendar weeks instead of a rolling 14-day window. Note, though, that SQL Server's week boundary for DATEDIFF is Sunday regardless of the DATEFIRST setting, so a strict Monday-to-Sunday week needs a further adjustment, sketched below.
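A sketch of that adjustment (myReport is a hypothetical table name; '19000101' is a Monday, so the datediff/dateadd pair truncates the current date to the Monday of the current week, independent of DATEFIRST):

-- Monday of the current week
declare @monday date = dateadd(day, datediff(day, '19000101', getdate()) / 7 * 7, '19000101');

select [DATE], TOTAL_PERSON
from myReport
where [DATE] >= dateadd(week, -4, @monday)  -- last 4 full Mon-Sun weeks
  and [DATE] <  @monday;                    -- exclude the current, incomplete week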
For MySQL, you can use a select like this:
SELECT * FROM `myDB` WHERE `Date`
BETWEEN DATE_SUB(NOW()-INTERVAL DATE_FORMAT(CURRENT_DATE, '%w') DAY, INTERVAL 28 DAY)
AND NOW()- INTERVAL DATE_FORMAT(CURRENT_DATE, '%w') DAY
This uses DATE_FORMAT(CURRENT_DATE, '%w') to turn the current day of the week into a number and subtracts that many days to get back to the last Sunday; from there, we select an interval of 28 days.
(Only tested with 14 days and a very limited test dataset, but it should work.)

Duplicity incremental backup taking too long

I have duplicity running an incremental daily backup to S3, about 37 GiB.
For the first month or so it went OK, finishing in about an hour. Then it started taking far too long to complete. Right now, as I type, it is still running the daily backup that started 7 hours ago.
I'm running two commands, first to back up and then to clean up:
duplicity --full-if-older-than 1M LOCAL.SOURCE S3.DEST --volsize 666 --verbosity 8
duplicity remove-older-than 2M S3.DEST
The logs
Temp has 54774476800 available, backup will use approx 907857100.
So the temp has enough space, good. Then it starts with this...
Copying duplicity-full-signatures.20161107T090303Z.sigtar.gpg to local cache.
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-13tylb-2
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-NanCxQ-3
[...]
Copying duplicity-inc.20161110T095624Z.to.20161111T103456Z.manifest.gpg to local cache.
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-VQU2zx-30
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-4Idklo-31
[...]
This continues for each day up to today, taking several minutes per file. Then it continues with this...
Ignoring incremental Backupset (start_time: Thu Nov 10 09:56:24 2016; needed: Mon Nov 7 09:03:03 2016)
Ignoring incremental Backupset (start_time: Thu Nov 10 09:56:24 2016; needed: Wed Nov 9 18:09:07 2016)
Added incremental Backupset (start_time: Thu Nov 10 09:56:24 2016 / end_time: Fri Nov 11 10:34:56 2016)
After a long time...
Warning, found incomplete backup sets, probably left from aborted session
Last full backup date: Sun Mar 12 09:54:00 2017
Collection Status
-----------------
Connecting with backend: BackendWrapper
Archive dir: /home/user/.cache/duplicity/700b5f90ee4a620e649334f96747bd08
Found 6 secondary backup chains.
Secondary chain 1 of 6:
-------------------------
Chain start time: Mon Nov 7 09:03:03 2016
Chain end time: Mon Nov 7 09:03:03 2016
Number of contained backup sets: 1
Total number of contained volumes: 2
Type of backup set: Time: Num volumes:
Full Mon Nov 7 09:03:03 2016 2
-------------------------
Secondary chain 2 of 6:
-------------------------
Chain start time: Wed Nov 9 18:09:07 2016
Chain end time: Wed Nov 9 18:09:07 2016
Number of contained backup sets: 1
Total number of contained volumes: 11
Type of backup set: Time: Num volumes:
Full Wed Nov 9 18:09:07 2016 11
-------------------------
Secondary chain 3 of 6:
-------------------------
Chain start time: Thu Nov 10 09:56:24 2016
Chain end time: Sat Dec 10 09:44:31 2016
Number of contained backup sets: 31
Total number of contained volumes: 41
Type of backup set: Time: Num volumes:
Full Thu Nov 10 09:56:24 2016 11
Incremental Fri Nov 11 10:34:56 2016 1
Incremental Sat Nov 12 09:59:47 2016 1
Incremental Sun Nov 13 09:57:15 2016 1
Incremental Mon Nov 14 09:48:31 2016 1
[...]
After listing all chains:
Also found 0 backup sets not part of any chain, and 1 incomplete backup set.
These may be deleted by running duplicity with the "cleanup" command.
This was only the backup part. It takes hours doing all of this, yet only 10 minutes to upload 37 GiB to S3.
ElapsedTime 639.59 (10 minutes 39.59 seconds)
SourceFiles 288
SourceFileSize 40370795351 (37.6 GB)
Then comes the cleanup, which gives me this:
Cleaning up
Local and Remote metadata are synchronized, no sync needed.
Warning, found incomplete backup sets, probably left from aborted session
Last full backup date: Sun Mar 12 09:54:00 2017
There are backup set(s) at time(s):
Tue Jan 10 09:58:05 2017
Wed Jan 11 09:54:03 2017
Thu Jan 12 09:56:42 2017
Fri Jan 13 10:05:05 2017
Sat Jan 14 10:24:54 2017
Sun Jan 15 09:49:31 2017
Mon Jan 16 09:39:41 2017
Tue Jan 17 09:59:05 2017
Wed Jan 18 09:59:56 2017
Thu Jan 19 10:01:51 2017
Fri Jan 20 09:35:30 2017
Sat Jan 21 09:53:26 2017
Sun Jan 22 09:48:57 2017
Mon Jan 23 09:38:45 2017
Tue Jan 24 09:54:29 2017
Which can't be deleted because newer sets depend on them.
Found old backup chains at the following times:
Mon Nov 7 09:03:03 2016
Wed Nov 9 18:09:07 2016
Sat Dec 10 09:44:31 2016
Mon Jan 9 10:04:51 2017
Rerun command with --force option to actually delete.
I found the problem. Because of an earlier issue, I had followed this answer and added this code to my script:
rm -rf ~/.cache/deja-dup/*
rm -rf ~/.cache/duplicity/*
This was supposed to be a one-time thing, to work around a random bug duplicity had, but the answer didn't mention that. So every day the script was removing the cache right after syncing it, and on the next day duplicity had to download the whole cache (every signature and manifest file) from S3 again, which is what took hours.
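A sketch of the corrected daily script (same options as in the question; the cache purge is kept only as a commented one-off, and --force is added to remove-older-than since, as the log above says, it otherwise only reports what it would delete):

#!/bin/sh
# Daily backup + cleanup. Do NOT purge the duplicity cache here:
# without the local cache, duplicity re-downloads every signature and
# manifest file from the remote on each run, which is what took hours.
duplicity --full-if-older-than 1M --volsize 666 --verbosity 8 LOCAL.SOURCE S3.DEST
duplicity remove-older-than 2M --force S3.DEST

# One-off recovery only; never in the scheduled script:
# rm -rf ~/.cache/deja-dup/* ~/.cache/duplicity/*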

mod_wsgi w/ Apache 2.2 and Python 2.7 installation issues. Miniconda as well

This problem has eluded me thus far. I have a CentOS 6.7 machine running Apache 2.2, with Python 2.7 installed at /opt/home/user/miniconda2/envs/myenv/lib. Python 2.6 is obviously also installed on this system, at /usr/bin/python. At first I installed mod_wsgi with pip and copied the created *.so Apache module to my modules folder. From my perspective it was built with 2.7, but I could not get the stupid ImportError ("package site not found" or whatever) to go away. I uninstalled mod_wsgi and compiled and installed 4.4.21 from source. I put the folder into my /home/user/ directory and started my installation process.
However easy I expected the fine-tuning available to configure to be, it quickly became apparent it was anything but. My first hurdle I surpassed, but my second has continued to stump me. After running configure:
./configure --with-python=/opt/home/user/miniconda2/envs/myenv/bin/python LD_RUN_PATH=/opt/home/user/miniconda2/envs/myenv/lib
(myenv)[user@machine2 mod_wsgi-4.4.21]$ ldd /usr/lib/httpd/modules/mod_wsgi.so
linux-gate.so.1 => (0x002a1000)
libpython2.7.so.1.0 => not found
libpthread.so.0 => /lib/libpthread.so.0 (0x00164000)
libdl.so.2 => /lib/libdl.so.2 (0x00c1f000)
libutil.so.1 => /lib/libutil.so.1 (0x00c2d000)
libm.so.6 => /lib/libm.so.6 (0x007cf000)
libc.so.6 => /lib/libc.so.6 (0x002a2000)
/lib/ld-linux.so.2 (0x00bcc000)
I think we all know that this means the shared library was not found. But I can see it in the directory!
pwd = /opt/home/user/miniconda2/envs/myenv/lib
[user@machine2 lib]$ ls -l
total 20264
drwxrwxr-x 2 user user 4096 Mar 21 13:46 engines
-rw-rw-r-- 2 user user 3066000 Mar 1 12:23 libcrypto.a
lrwxrwxrwx 1 user user 18 Mar 21 13:46 libcrypto.so -> libcrypto.so.1.0.0
-rwxrwxr-x 1 user user 1945963 Mar 21 13:46 libcrypto.so.1.0.0
-rw-r--r-- 3 user user 104318 Jan 3 2014 libhistory.a
lrwxrwxrwx 1 user user 15 Mar 21 13:46 libhistory.so -> libhistory.so.6
lrwxrwxrwx 1 user user 17 Mar 21 13:46 libhistory.so.6 -> libhistory.so.6.2
-rwxr-xr-x 3 user user 78845 Jan 3 2014 libhistory.so.6.2
lrwxrwxrwx 1 user user 19 Mar 21 13:47 libpython2.7.so -> libpython2.7.so.1.0
-rwxrwxr-x 3 user user 4979591 Dec 6 16:09 libpython2.7.so.1.0
-rw-r--r-- 3 user user 715160 Jan 3 2014 libreadline.a
lrwxrwxrwx 1 user user 16 Mar 21 13:46 libreadline.so -> libreadline.so.6
lrwxrwxrwx 1 user user 18 Mar 21 13:46 libreadline.so.6 -> libreadline.so.6.2
-rwxr-xr-x 3 user user 516418 Jan 3 2014 libreadline.so.6.2
-rw-rw-r-- 2 user user 2977926 Jan 11 11:52 libsqlite3.a
-rwxrwxr-x 1 user user 984 Mar 21 13:46 libsqlite3.la
lrwxrwxrwx 1 user user 19 Mar 21 13:46 libsqlite3.so -> libsqlite3.so.0.8.6
lrwxrwxrwx 1 user user 19 Mar 21 13:46 libsqlite3.so.0 -> libsqlite3.so.0.8.6
-rwxrwxr-x 2 user user 2573507 Jan 11 11:52 libsqlite3.so.0.8.6
-rw-rw-r-- 2 user user 613290 Mar 1 12:23 libssl.a
lrwxrwxrwx 1 user user 15 Mar 21 13:46 libssl.so -> libssl.so.1.0.0
-rwxrwxr-x 2 user user 462887 Mar 1 12:23 libssl.so.1.0.0
-rwxr-xr-x 3 user user 1154833 Mar 16 2015 libtcl8.5.so
-rwxr-xr-x 3 user user 3008 Mar 16 2015 libtclstub8.5.a
-rwxr-xr-x 3 user user 1257824 Mar 16 2015 libtk8.5.so
-rwxr-xr-x 3 user user 4446 Mar 16 2015 libtkstub8.5.a
-rw-r--r-- 3 user user 98574 Jan 5 2015 libz.a
lrwxrwxrwx 1 user user 13 Mar 21 13:46 libz.so -> libz.so.1.2.8
lrwxrwxrwx 1 user user 13 Mar 21 13:46 libz.so.1 -> libz.so.1.2.8
-rwxr-xr-x 3 user user 91730 Jan 5 2015 libz.so.1.2.8
drwxrwxr-x 2 user user 4096 Mar 21 13:47 pkgconfig
drwxrwxr-x 26 user user 20480 Mar 21 13:49 python2.7
drwxrwxr-x 4 user user 4096 Mar 21 13:46 tcl8
drwxrwxr-x 6 user user 4096 Mar 21 13:46 tcl8.5
-rw-r--r-- 1 user user 7356 Mar 21 13:46 tclConfig.sh
drwxrwxr-x 6 user user 4096 Mar 21 13:46 tk8.5
-rw-r--r-- 1 user user 4299 Mar 21 13:46 tkConfig.sh
I have been cycling between the above configure, sudo make, sudo make install, and sudo make distclean, but to no avail. Any help is appreciated.
Don't use conda with mod_wsgi and Apache; use virtualenv. Conda's embedded Python install will be obscured from your Apache module AFAIK: its libpython is not on the system linker's search path, which is why ldd reports libpython2.7.so.1.0 as not found.
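A minimal sketch of the virtualenv route (paths are illustrative, not prescriptive; the idea is to build mod_wsgi against a Python whose shared library the system linker can resolve, then point WSGIPythonHome at the virtualenv):

# build mod_wsgi against a system-visible Python 2.7
./configure --with-python=/usr/local/bin/python2.7
make && sudo make install

# create the virtualenv the application will actually run in
virtualenv --python=/usr/local/bin/python2.7 /opt/venvs/myenv

# httpd.conf:
#   LoadModule wsgi_module modules/mod_wsgi.so
#   WSGIPythonHome /opt/venvs/myenv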