LDAP authentication with MoinMoin doesn't work

I'm trying to connect MoinMoin to my LDAP server, but it doesn't work. Am I configuring it properly?
I'm using MoinMoin from the Ubuntu repository.
Here is my farmconfig.py:
from farmconfig import FarmConfig

# now we subclass that config (inherit from it) and change what's different:
class Config(FarmConfig):

    # basic options (you normally need to change these)
    sitename = u'MyWiki'       # [Unicode]
    interwikiname = u'MyWiki'  # [Unicode]

    # name of entry page / front page [Unicode], choose one of those:
    # a) if most wiki content is in a single language
    #page_front_page = u"MyStartingPage"
    # b) if wiki content is maintained in many languages
    page_front_page = u"FrontPage"

    data_dir = '/usr/share/moin/data'
    data_underlay_dir = '/usr/share/moin/underlay'

from MoinMoin.auth.ldap_login import LDAPAuth
ldap_authenticator1 = LDAPAuth(
    server_uri='ldap://192.168.1.196',
    bind_dn='cn=admin,ou=People,dc=company,dc=com',
    bind_pw='secret',
    scope=2,
    referrals=0,
    search_filter='(uid=%(username)s)',
    givenname_attribute='givenName',
    surname_attribute='sn',
    aliasname_attribute='displayName',
    email_attribute='mailRoutingAddress',
    email_callback=None,
    coding='utf-8',
    timeout=10,
    start_tls=0,
    tls_cacertdir=None,
    tls_cacertfile=None,
    tls_certfile=None,
    tls_keyfile=None,
    tls_require_cert=0,
    bind_once=True,
    autocreate=True,
)
auth = [ldap_authenticator1, ]
cookie_lifetime = 1

This is an indentation issue: auth and cookie_lifetime must be inside class Config (so just indent all of that by 4 spaces).
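For illustration, a minimal sketch of one corrected layout (abridged, reusing the question's values; whether the whole LDAP block also needs indenting is an assumption here):

from farmconfig import FarmConfig
from MoinMoin.auth.ldap_login import LDAPAuth

class Config(FarmConfig):
    sitename = u'MyWiki'
    interwikiname = u'MyWiki'
    page_front_page = u"FrontPage"
    data_dir = '/usr/share/moin/data'
    data_underlay_dir = '/usr/share/moin/underlay'

    ldap_authenticator1 = LDAPAuth(
        server_uri='ldap://192.168.1.196',
        bind_dn='cn=admin,ou=People,dc=company,dc=com',
        bind_pw='secret',
        search_filter='(uid=%(username)s)',
        autocreate=True,
    )

    # these two were at module level before; they must be class attributes
    auth = [ldap_authenticator1, ]
    cookie_lifetime = 1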


Is the Mapbox access token still required for Kepler.gl?

We plan to integrate Kepler.gl into our open-source analytics platform, which will be used by many users. We have successfully integrated the library into our code and are able to visualize beautiful maps with our data. The library is awesome and we want to use it, but we aren't sure about the Mapbox integration. Our code works without setting any Mapbox access token. Is the token no longer required if one uses Kepler.gl with default settings? Are there any limitations to using it without a token, e.g. will it stop working if more than 100 users use it?
This is the Kepler.gl related code that we used:
map_1 = KeplerGl(show_docs=False)
map_1.add_data(data=gdf.copy(), name="state")
config = {}
if self.save_config:
    # Save map_1 config to a file
    # config_str = json.dumps(map_1.config)
    # if type(config) == str:
    #     config = config.encode("utf-8")
    with open("kepler_config.json", "w") as f:
        f.write(json.dumps(map_1.config))
if self.load_config:
    with open("kepler_config.json", "r") as f:
        config = json.loads(f.read())
    map_1.config = config
# map_1.add_data(data=data.copy(), name="haha")
html = map_1._repr_html_()
html = html.decode("utf-8")
Thanks for your help
Tobias

Multiple mongoDB related to same django rest framework project

We have one Django REST Framework (DRF) project which should use multiple databases (MongoDB). Each database should be independent. We are able to connect to one database, but when we write to another DB the connection is established, yet the data is stored in the database that was connected first.
We changed the default DB and everything, but no change.
(Note: the solution should be suitable for use with serializers, because we need to use DynamicDocumentSerializer in DRF-mongoengine.)
Thanks in advance.
When calling connect(), just assign an alias for each of your databases, and then for each Document specify a db_alias parameter in meta that points to a specific database alias:
settings.py:
from mongoengine import connect

connect(
    alias='user-db',
    db='test',
    username='user',
    password='12345',
    host='mongodb://admin:qwerty@localhost/production'
)
connect(
    alias='book-db',
    db='test',
    username='user',
    password='12345',
    host='mongodb://admin:qwerty@localhost/production'
)
models.py:
from mongoengine import Document, StringField

class User(Document):
    name = StringField()
    meta = {'db_alias': 'user-db'}

class Book(Document):
    name = StringField()
    meta = {'db_alias': 'book-db'}
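A quick usage sketch (the sample values are purely illustrative): with the aliases wired up this way, each model's save() should land in its own database.

# continuing from models.py above; each save() uses the connection
# registered under the model's db_alias
User(name='alice').save()   # stored via the 'user-db' connection
Book(name='dune').save()    # stored via the 'book-db' connection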
I think I finally get what you need.
What you could do is write a really simple middleware that maps your URL scheme to the database:
from mongoengine import Document

class DBSwitchMiddleware:
    """
    This middleware is supposed to switch the database depending on the request URL.
    """
    def __init__(self, get_response):
        self.get_response = get_response
        # collect all the mongoengine Documents in your project
        import models
        self.documents = [item for item in vars(models).values()
                          if isinstance(item, type) and issubclass(item, Document)]

    def __call__(self, request):
        # depending on the URL, switch documents to the appropriate database
        if request.path.startswith('/main/project1'):
            for document in self.documents:
                document._meta['db_alias'] = 'db1'
        elif request.path.startswith('/main/project2'):
            for document in self.documents:
                document._meta['db_alias'] = 'db2'
        # delegate handling the rest of the response to your views
        response = self.get_response(request)
        return response
Note that this solution might be prone to race conditions. We're modifying the Document classes globally here, so if one request is started and then, in the middle of its execution, a second request is handled by the same Python interpreter, it will overwrite the document._meta['db_alias'] setting, and the first request will start writing to the same database, which will break your database horribly.
The same Python interpreter is used by two request handlers if you're using multithreading. So with this solution you can't start your server with multiple threads, only with multiple processes.
To address the threading issues, you can use threading.local(). If you prefer a context-manager approach, there's also the contextvars module.
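If you'd rather not mutate _meta globally at all, mongoengine also ships a switch_db context manager that scopes the alias to a single block; a minimal sketch, assuming the 'book-db' alias registered with connect() in settings.py above:

from mongoengine.context_managers import switch_db

# inside this block, reads and writes on User go to the 'book-db' connection
with switch_db(User, 'book-db') as UserOnBookDb:
    UserOnBookDb(name='bob').save()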

Assign new ticket to logged in user

In Trac, is it possible to set the "owner" of a new ticket to the logged-in user?
I've tried different values for default_owner in trac.ini, but no luck.
From the authoritative documentation at trac.edgewall.org on Trac tickets (wiki):
default_owner: Name of the default owner. If set to the text "< default >" (the default value), the component owner is used.
So this cannot help you for sure. There are a few more approaches left, depending on your Genshi template knowledge, Python skills, etc. Your requirement is not hard to fulfill, but you cannot get it with stock Trac. You'll need to modify the (new) ticket template or add a relatively small plugin (tested with Trac-1.1.1):
import re
from trac.core import Component, implements
from trac.ticket.api import ITicketManipulator

class DefaultTicketOwnerManipulator(Component):
    """Set ticket owner to logged in user, if available."""

    implements(ITicketManipulator)

    def prepare_ticket(self, req, ticket, fields, actions):
        pass

    def validate_ticket(self, req, ticket):
        if not ticket['owner'] and req.authname:
            ticket['owner'] = req.authname
            # Optionally report back the manipulation, so a second POST is required.
            # return [(None, "Owner set to self (%s)" % req.authname)]
        return []
Hint: use the commented-out return near the end to avoid silently altering the owner right on save; it is good for testing too.
The following applies to Trac 1.1.3 or later, which is scheduled for release on 1st Jan 2015. It is implemented on the Trac trunk, which we do a pretty good job of keeping stable. The default create workflow actions are:
create = <none> -> new
create.default = 1
create_and_assign = <none> -> assigned
create_and_assign.label = assign
create_and_assign.operations = may_set_owner
create_and_assign.permissions = TICKET_MODIFY
To have the create action assign to the current user, just add:
create.operations = set_owner_to_self
Renaming the action might be appropriate, putting the ticket into either the assigned or accepted state:
create_and_accept = <none> -> accepted
create_and_accept.label = accept
create_and_accept.default = 1
create_and_accept.operations = set_owner_to_self

How to protect my e-mail address from spambots

I was wondering what Rails offers to obfuscate e-mail addresses in order to protect them from crawlers, spambots and mail harvesters gathering addresses to send spam.
Maybe I used the wrong keywords, but I wasn't really able to find a gem.
I found a statistic comparing different methods to mask the mail address:
http://techblog.tilllate.com/2008/07/20/ten-methods-to-obfuscate-e-mail-addresses-compared/
I wrote a snippet that combines the top two methods.
The snippet isn't mature yet, but I'd like to share it anyway; it might be a starting point for others facing the same issue.
(One next step would be to replace already-linked addresses with obscured plain text.)
Before heading on, I would like to know what the best practice in Rails is. This is a common problem, and I must have missed a gem dealing with it!?
If I use my approach, what is the best way to integrate/trigger it in my app?
Some kind of before_filter? Before rendering? Something like that?
Or, like I do it currently, calling it in the view as a helper method?
It could even be added to the String class…
In my application_helper.rb
def obfuscate_emails(content, domain_prefix = 'nirvana', clss = 'maildecode')
  # This shall protect emails from spam spiders/crawlers gathering emails from webpages.
  # Add the following SASS to your stylesheets:
  #
  #   span.maildecode
  #     direction: rtl
  #     unicode-bidi: bidi-override
  #
  # Furthermore, you might want to use JavaScript(.erb) to add links to the email addresses like this:
  #
  #   $(document).ready(function() {
  #     function link_emails(subdomain){
  #       console.log("Find and replace reversed emails, fake subdomain is "+subdomain);
  #       $(".maildecode").each(function() {
  #         email = $(this).text().replace('.'+subdomain,'').split("").reverse().join("");
  #         console.log("- clean email is "+email);
  #         // $(this).html($(this).text().replace('.'+subdomain,'')); // uncomment if you like to clean up the html a bit
  #         $(this).wrap('<a href="mailto:'+email+'">');
  #       });
  #     }
  #
  #     link_emails('<%= ENV['OBFUSCATE_EMAIL_SUBDOMAIN'] %>');
  #   });
  #
  # Thanks to
  # http://techblog.tilllate.com/2008/07/20/ten-methods-to-obfuscate-e-mail-addresses-compared/
  email_re = /[\w.!#\$%+-]+@[\w-]+(?:\.[\w-]+)+/
  content.scan(email_re).each do |mail|
    obfuscate_mail = "<span class='#{clss}'>#{mail.reverse.split('@')[0]}<span style='display: none;'>.#{domain_prefix}</span>@#{mail.reverse.split('@')[1]}</span>"
    content = content.sub(mail, obfuscate_mail)
  end
  content # use raw(obfuscate_emails(content)), otherwise Rails will escape the HTML
end
Just use the built-in mail_to helper that Rails has:
http://api.rubyonrails.org/classes/ActionView/Helpers/UrlHelper.html#method-i-mail_to
mail_to 'email@here.com', 'click to email', :encode => ....  # couple of encoding options
NOTE: This does not work in Rails 4 anymore. From the docs: Prior to Rails 4.0, mail_to provided options for encoding the address in order to hinder email harvesters. To take advantage of these options, install the actionview-encoded_mail_to gem. (Thanks to @zwippie)
You could simply replace the @-sign as a simple solution:
"example@example.com".sub("@", "-at-")  #=> example-at-example.com
"example@example.org".sub("@", "{at}")  #=> example{at}example.org
see obfuscate emails with ruby to protect against harvesters

How to write a Python script that uses the OpenERP ORM to directly upload to Postgres Database

I need to write a "standalone" script in Python to upload sales taxes to the account_tax table in the database using ONLY the ORM module of OpenERP. What I would like to do is something like the pseudo code below.
Can someone provide me more details on the following:
1) What sys.path entries do I need to set?
2) What modules do I need to import before importing the "account" module? Currently, when I import the "account" module I get the following error:
AssertionError: The report "report.custom" already exists!
3) What is the proper way to get my database cursor? In the code below I am simply calling psycopg2 directly to get a cursor.
If this approach cannot work, can anyone suggest an alternative approach other than writing XML files to load the data from the OpenERP application itself? This process needs to run outside of the standard OpenERP application.
PSEUDO CODE:
import sys
import psycopg2

# set Python paths to access openerp modules
sys.path.append("./openerp")
sys.path.append("./openerp/addons")

# import OpenERP
import openerp

# import the account addon module that contains the tables
# to be populated.
import account

# define connection string
conn_string2 = "dbname='test2' user='xyz' password='password'"

# get a db connection
conn = psycopg2.connect(conn_string2)

# conn.cursor() will return a cursor object
cursor = conn.cursor()

# and finally use the ORM to insert data into the table.
If you want to do it via web services, then have a look at the OpenERP XML-RPC web services.
Example code to work with the OpenERP web services:
import xmlrpclib

username = 'admin'  # the user
pwd = 'admin'       # the password of the user
dbname = 'test'     # the database

# OpenERP Common login Service proxy object
# (replace localhost with the address of the server)
sock_common = xmlrpclib.ServerProxy('http://localhost:8069/xmlrpc/common')
uid = sock_common.login(dbname, username, pwd)

# OpenERP Object manipulation service
sock = xmlrpclib.ServerProxy('http://localhost:8069/xmlrpc/object')

partner = {
    'name': 'Fabien Pinckaers',
    'lang': 'fr_FR',
}

# calling the remote ORM create method to create a record
partner_id = sock.execute(dbname, uid, pwd, 'res.partner', 'create', partner)
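Since the question is about loading sales taxes, the same execute pattern could be pointed at the account.tax model; a sketch under the assumption that the field names below ('name', 'amount', 'type') match your tax definitions:

# reuses sock, dbname, uid and pwd from the login above
tax = {
    'name': 'Sales Tax 8%',  # assumed fields -- verify against your account.tax model
    'amount': 0.08,
    'type': 'percent',
}
tax_id = sock.execute(dbname, uid, pwd, 'account.tax', 'create', tax)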
Alternatively, you can also use the OpenERP client lib.
Example code with the client lib:
import openerplib

connection = openerplib.get_connection(hostname="localhost", database="test",
                                       login="admin", password="admin")
user_model = connection.get_model("res.users")
ids = user_model.search([("login", "=", "admin")])
user_info = user_model.read(ids[0], ["name"])
print user_info["name"]
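The create call should be reachable through the client lib's model proxy as well; a short sketch with the same assumed account.tax field names as in the XML-RPC example:

# assumed field names; mirrors the XML-RPC sketch above
tax_model = connection.get_model("account.tax")
tax_id = tax_model.create({'name': 'Sales Tax 8%', 'amount': 0.08, 'type': 'percent'})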
You see, both ways are good, but when you use the client lib the code is shorter and easier to understand, while the XML-RPC proxy means lower-level calls that you have to handle yourself.
Hope this will help you.
In my view, one should go for the XML-RPC or NETSVC services provided by OpenERP for such needs.
You don't need to import the account module of OpenERP; there is a possibility that other modules have inherited the account.tax object and altered its behaviour to suit your business needs.
Consequently, if you feed data by calling those methods manually without using the OpenERP web services, it is possible you'll get undesired results / unexpected failures / an inconsistent database state.
You can use ERPpeek to browse data, but I'm not sure if you can really upload data to the DB; personally I use/prefer XML-RPC.
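For completeness, a minimal ERPpeek browsing sketch (server address, database and credentials are placeholders; whether writing through it is advisable is the open question above):

import erppeek

# assumed local server and stock admin credentials
client = erppeek.Client('http://localhost:8069', 'test', 'admin', 'admin')
# list the names of the existing taxes
for tax in client.model('account.tax').browse([]):
    print tax.name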
Why don't you use the XML-RPC calls of OpenERP?
They don't require importing account or openerp, and you still get all the ORM functionality.
You can use a Python library to access the OpenERP server via the XML-RPC service.
Please check https://github.com/OpenERP/openerp-client-lib
It is officially supported by OpenERP SA.
If you want to interact directly with the DB, you could just import psycopg2 and:
import psycopg2

conn = psycopg2.connect(dbname='dbname', user='dbuser', password='dbpassword', host='dbhost')
cur = conn.cursor()
cur.execute('select * from table where id = %s', (table_id,))
cur.execute('insert into table(column1, column2) values(%s, %s)', (value1, value2))
conn.commit()  # the insert is not persisted without a commit
cur.close()
conn.close()
Why do you want to do it like that?! You should create a localization module and define the data in XML files. This is the standard way to handle such a problem in OpenERP.
For which country do you want to insert sales taxes? Please explain a bit more.
from openerp.modules.registry import RegistryManager

registry = RegistryManager.get("databasename")
with registry.cursor() as cr:
    user = registry.get('res.users').browse(cr, userid, listids)
    print user
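In the same spirit, the registry could be used server-side to create the tax records the question asks about; a sketch assuming the old-style create(cr, uid, vals) ORM signature and the same assumed field names as in the XML-RPC example:

from openerp import SUPERUSER_ID

with registry.cursor() as cr:
    tax_obj = registry.get('account.tax')
    # assumed field names; adjust to your account.tax definition
    tax_id = tax_obj.create(cr, SUPERUSER_ID, {
        'name': 'Sales Tax 8%',
        'amount': 0.08,
        'type': 'percent',
    })
    cr.commit()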