Moqui: getting "This request requires HTTP authentication" - moqui

In a web browser, the URL below works fine:
http://localhost:8080/moqui/rest/s1/User/UserRoleMap/100301
but when the mobile app requests the same URL, it gets an error like:
ionic.bundle.js:24977 GET http://localhost:8080/moqui/rest/s1/User/UserRoleMap/100301 401 (Unauthorized)
    (anonymous function) @ ionic.bundle.js:24977
    sendReq @ ionic.bundle.js:24770
    serverRequest @ ionic.bundle.js:24480
    processQueue @ ionic.bundle.js:29104
    (anonymous function) @ ionic.bundle.js:29120
    $eval @ ionic.bundle.js:30372
    $digest @ ionic.bundle.js:30188
    $apply @ ionic.bundle.js:30480
    (anonymous function) @ ionic.bundle.js:65289
    defaultHandlerWrapper @ ionic.bundle.js:16764
    eventHandler @ ionic.bundle.js:16752
    triggerMouseEvent @ ionic.bundle.js:2953
    tapClick @ ionic.bundle.js:2942
    tapMouseUp @ ionic.bundle.js:3018
app.js:116 Apache Tomcat/7.0.63 - Error report: HTTP Status 401 - User must be logged in for operation list on entity UserRoleMaps
    description: This request requires HTTP authentication.
Even when the user is logged in, I get the same error.
This is my UserRoleMap entity:
<entity entity-name="UserRoleMap" package-name="moqui.security" short-alias="UserRoleMaps" allow-user-field="false" allow-remote="true">
    <field name="gropMapId" type="id-long" is-pk="true"/>
    <field name="persontId" type="text-medium"/>
    <field name="child_Id" type="text-medium"/>
    <!-- In the relationship below, parentId means userId, so it maps child_Id to the related data for that parent. -->
    <relationship type="one" title="UserType" related-entity-name="moqui.security.UserAccount" short-alias="childUserDetails">
        <key-map field-name="child_Id" related-field-name="userId"/>
    </relationship>
    <master>
        <detail relationship="childUserDetails"/>
    </master>
</entity>
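The browser request presumably succeeds because the session cookie from an earlier login is sent along automatically, while the Ionic app's request carries no credentials, so the server treats it as anonymous. As a point of comparison only, here is a minimal sketch (Python requests, hypothetical username and password) of the same call with HTTP Basic authentication, which is one way to satisfy the "This request requires HTTP authentication" response:

import requests

# Hypothetical credentials -- replace with a real Moqui user that is
# allowed to list UserRoleMap records.
USERNAME = "john.doe"
PASSWORD = "moqui"

url = "http://localhost:8080/moqui/rest/s1/User/UserRoleMap/100301"

# requests adds an "Authorization: Basic ..." header for us.
resp = requests.get(url, auth=(USERNAME, PASSWORD))
print(resp.status_code)
print(resp.json() if resp.ok else resp.text)

If the app is meant to reuse a login session instead, the equivalent fix on the Ionic side is to make sure the session cookie (or whatever token your login call returns) is actually attached to this request.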

Related

GCP DataTransferServiceClient, how to set up the procedure

# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
#   client as shown in:
#   https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.cloud import bigquery_datatransfer_v1


def sample_start_manual_transfer_runs():
    # Create a client
    client = bigquery_datatransfer_v1.DataTransferServiceClient()

    # Initialize request argument(s)
    request = bigquery_datatransfer_v1.StartManualTransferRunsRequest(
    )

    # Make the request
    response = client.start_manual_transfer_runs(request=request)

    # Handle the response
    print(response)
I am trying to write a script that manually triggers an S3-to-BigQuery transfer. However, I am unsure how to use the other types and functions provided by DataTransferServiceClient.
For example, how do I integrate transfer_config into the script above? And how do I get the config_id from a transfer_config once I have it?
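A minimal sketch of one way to wire these together, assuming the transfer config already exists and that the project ID, location and display name below are placeholders: list the transfer configs, pick the one you want, and pass its resource name as the parent of the manual-run request. The config_id is simply the last path segment of transfer_config.name.

from datetime import datetime, timezone

from google.cloud import bigquery_datatransfer_v1

client = bigquery_datatransfer_v1.DataTransferServiceClient()

# Placeholder project/location -- replace with your own.
parent = "projects/my-project/locations/us"

# Find the S3 transfer config to trigger, e.g. by display name (placeholder).
transfer_config = next(
    tc
    for tc in client.list_transfer_configs(parent=parent)
    if tc.display_name == "my-s3-to-bigquery-transfer"
)

# config_id is the last segment of the resource name
# "projects/.../locations/.../transferConfigs/<config_id>".
config_id = transfer_config.name.split("/")[-1]
print("config_id:", config_id)

# Trigger a manual run; requested_run_time takes a timestamp (a timezone-aware
# datetime is accepted by the proto-plus request type).
request = bigquery_datatransfer_v1.StartManualTransferRunsRequest(
    parent=transfer_config.name,
    requested_run_time=datetime.now(timezone.utc),
)
response = client.start_manual_transfer_runs(request=request)
print(response)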

Verdaccio - Tarball data seems to be corrupted. Code EINTEGRITY with any random package

I have configured Verdaccio on my local machine for testing. Below is my configuration:
#
# This is the default configuration file. It allows all users to do anything,
# please read carefully the documentation and best practices to
# improve security.
#
# Look here for more config file examples:
# https://github.com/verdaccio/verdaccio/tree/5.x/conf
#
# Read about the best practices
# https://verdaccio.org/docs/best

# path to a directory with all packages
storage: /verdaccio/storage/data
# path to a directory with plugins to include
plugins: /verdaccio/plugins

# https://verdaccio.org/docs/webui

# https://verdaccio.org/docs/configuration#uplinks
# a list of other known repositories we can talk to
uplinks:
  npmjs:
    url: https://registry.npmjs.org/
    cache: false

# https://verdaccio.org/docs/configuration#authentication
auth:
  htpasswd:
    file: /verdaccio/htpasswd

# Learn how to protect your packages
# https://verdaccio.org/docs/protect-your-dependencies/
# https://verdaccio.org/docs/configuration#packages
packages:
  '@mycompany/*':
    access: $authenticated
    publish: $authenticated
    unpublish: $authenticated

  '@*/*':
    # scoped packages
    access: $all
    publish: $authenticated
    unpublish: $authenticated
    proxy: npmjs

  '**':
    access: $all
    publish: $authenticated
    unpublish: $authenticated
    # publish: azuread
    # unpublish: azuread
    # if package is not available locally, proxy requests to 'npmjs' registry
    proxy: npmjs

# To improve your security configuration and avoid dependency confusion
# consider removing the proxy property for private packages
# https://verdaccio.org/docs/best#remove-proxy-to-increase-security-at-private-packages

# https://verdaccio.org/docs/configuration#server
# You can specify HTTP/1.1 server keep alive timeout in seconds for incoming connections.
# A value of 0 makes the http server behave similarly to Node.js versions prior to 8.0.0, which did not have a keep-alive timeout.
# WORKAROUND: Through given configuration you can workaround following issue https://github.com/verdaccio/verdaccio/issues/301. Set to 0 in case 60 is not enough.
server:
  keepAliveTimeout: 60
  # Allow `req.ip` to resolve properly when Verdaccio is behind a proxy or load-balancer
  # See: https://expressjs.com/en/guide/behind-proxies.html
  # trustProxy: '127.0.0.1'

# https://verdaccio.org/docs/configuration#offline-publish
# publish:
#   allow_offline: false

# https://verdaccio.org/docs/configuration#url-prefix
# url_prefix: /verdaccio/
# VERDACCIO_PUBLIC_URL='https://somedomain.org';
#   url_prefix: '/my_prefix'
#   // url -> https://somedomain.org/my_prefix/
# VERDACCIO_PUBLIC_URL='https://somedomain.org';
#   url_prefix: '/'
#   // url -> https://somedomain.org/
# VERDACCIO_PUBLIC_URL='https://somedomain.org/first_prefix';
#   url_prefix: '/second_prefix'
#   // url -> https://somedomain.org/second_prefix/'

# https://verdaccio.org/docs/configuration#security
# security:
#   api:
#     legacy: true
#     jwt:
#       sign:
#         expiresIn: 29d
#       verify:
#         someProp: [value]
#   web:
#     sign:
#       expiresIn: 1h # 1 hour by default
#     verify:
#       someProp: [value]

# https://verdaccio.org/docs/configuration#user-rate-limit
# userRateLimit:
#   windowMs: 50000
#   max: 1000

# https://verdaccio.org/docs/configuration#max-body-size
# max_body_size: 10mb

# https://verdaccio.org/docs/configuration#listen-port
# listen:
#   - localhost:4873            # default value
#   - http://localhost:4873     # same thing
#   - 0.0.0.0:4873              # listen on all addresses (INADDR_ANY)
#   - https://example.org:4873  # if you want to use https
#   - "[::1]:4873"              # ipv6
#   - unix:/tmp/verdaccio.sock  # unix socket

# The HTTPS configuration is useful if you do not consider use a HTTP Proxy
# https://verdaccio.org/docs/configuration#https
# https:
#   key: ./path/verdaccio-key.pem
#   cert: ./path/verdaccio-cert.pem
#   ca: ./path/verdaccio-csr.pem

# https://verdaccio.org/docs/configuration#proxy
# http_proxy: http://something.local/
# https_proxy: https://something.local/

# https://verdaccio.org/docs/configuration#notifications
# notify:
#   method: POST
#   headers: [{ "Content-Type": "application/json" }]
#   endpoint: https://usagge.hipchat.com/v2/room/3729485/notification?auth_token=mySecretToken
#   content: '{"color":"green","message":"New package published: * {{ name }}*","notify":true,"message_format":"text"}'

middlewares:
  audit:
    enabled: true

# https://verdaccio.org/docs/logger
# log settings
logs: { type: stdout, format: pretty, level: http }

#experiments:
#  # support for npm token command
#  token: false
#  # disable writing body size to logs, read more on ticket 1912
#  bytesin_off: false
#  # enable tarball URL redirect for hosting tarball with a different server, the tarball_url_redirect can be a template string
#  tarball_url_redirect: 'https://mycdn.com/verdaccio/${packageName}/${filename}'
#  # the tarball_url_redirect can be a function, takes packageName and filename and returns the url, when working with a js configuration file
#  tarball_url_redirect(packageName, filename) {
#    const signedUrl = // generate a signed url
#    return signedUrl;
#  }

# translate your registry, api i18n not available yet
# i18n:
# list of the available translations https://github.com/verdaccio/verdaccio/blob/master/packages/plugins/ui-theme/src/i18n/ABOUT_TRANSLATIONS.md
#   web: en-US

# minio configuration
store:
  minio:
    # The HTTP port of your minio instance
    port: 9000
    # The endpoint on which verdaccio will access minio (without scheme)
    endPoint: 172.17.0.4
    # The minio access key
    accessKey: ***
    # The minio secret key
    secretKey: *****
    # Disable SSL if you're accessing minio directly through HTTP
    useSSL: false
    # The region used by your minio instance (optional, default to "us-east-1")
    # region: eu-west-1
    # A bucket where verdaccio will store it's database & packages (optional, default to "verdaccio")
    bucket: 'npm'
    # Number of retry when a request to minio fails (optional, default to 10)
    retries: 3
    # Delay between retries (optional, default to 100)
    delay: 50
I am able to log in, and I can publish and pull private packages. However, whenever I try to pull a package that is not present on my machine and it gets proxied from registry.npmjs.org, I get a warning that says "tarball data seems to be corrupted. Trying again." for any random package, and then the command crashes with ERR: CODE EINTEGRITY, sha256:****
I am not able to figure this out.
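To narrow down whether the tarball bytes really are corrupted or whether the integrity metadata and the bytes simply disagree, here is a hypothetical debugging sketch (plain HTTP against your Verdaccio, placeholder package name): it fetches the packument, downloads the tarball Verdaccio serves, and recomputes the sha512 integrity value that npm checks.

import base64
import hashlib

import requests

REGISTRY = "http://localhost:4873"   # your Verdaccio instance
PKG, VERSION = "left-pad", "1.3.0"   # placeholder: any package that fails for you

# Packument as served by Verdaccio, then the tarball it points to.
meta = requests.get(f"{REGISTRY}/{PKG}").json()["versions"][VERSION]
expected = meta["dist"]["integrity"]          # e.g. "sha512-..."
tarball = requests.get(meta["dist"]["tarball"]).content

# npm's integrity string is "sha512-" plus the base64-encoded raw digest.
digest = hashlib.sha512(tarball).digest()
actual = "sha512-" + base64.b64encode(digest).decode()

print("expected:", expected)
print("actual:  ", actual)
print("match:   ", expected == actual)

If the recomputed value does not match what the packument advertises, the bytes are being altered somewhere between the uplink and the client, which would be consistent with the storage/proxy path (for example the minio store plugin or the cache: false uplink) rather than with npm itself.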

Use both JDBC and SAML2 IdPs in Apereo CAS 5.3.16

I am trying to set up Apereo CAS 5.3.16 to use both a SAML2 IdP and a JDBC (PostgreSQL) database IdP. We need CAS to try to authenticate against the SAML IdP first and then, if that fails, against the JDBC IdP.
Unfortunately, over the past weekend the documentation for v5.3.16 was removed from the Apereo website, so I am now working from the markdown source documents in the codebase. I have consulted the manual extensively and read these posts - https://fawnoos.com/2017/03/22/cas51-delauthn-tutorial/ and CAS delegate authentication to Azure SAML - and still can't get the app to do what we need.
CAS creates its SAML metadata and keys, and obtains metadata from the SAML IdP (Okta).
The logs show the following entry:
DEBUG [org.apereo.cas.authentication.PolicyBasedAuthenticationManager] -
<Resolved and finalized authentication handlers to carry out this authentication transaction are
[[org.apereo.cas.authentication.handler.support.HttpBasedServiceCredentialsAuthenticationHandler@301ed37a,
org.apereo.cas.adaptors.jdbc.QueryDatabaseAuthenticationHandler@b48d4df,
org.apereo.cas.support.pac4j.authentication.handler.support.ClientAuthenticationHandler@6d3bc620]
This looks right to me, except that I want the pac4j handler executed before the JDBC one. I don't know what HttpBasedServiceCredentialsAuthenticationHandler is, but it is part of the CAS core source code, so I assume it is supposed to be there.
The authentication request goes to the JDBC handler first and, if that fails, it does not fall through to the SAML handler; the request is immediately rejected.
Here is (the relevant part of) our properties file (standalone.properties).
Can some kind soul please tell me what I am missing or doing wrong?
# --- UTS Library --- #
server.port=8080
server.ssl.enabled=false
server.use-forward-headers=true
server.session.cookie.http-only=true
server.session.tracking-modes=cookie
cas.server.name=${CAS_SERVER_NAME:}
cas.server.prefix=${cas.server.name}/cas
cas.host.name=
# Default theme name
cas.theme.defaultThemeName=ourtheme
# CAS session persistence
cas.ticket.tgt.rememberMe.enabled=true
cas.ticket.tgt.rememberMe.timeToKillInSeconds=604800
##
# CAS endpoint security
#
...
# logging settings
# Stacktrace settings, possible values: NEVER|ALWAYS|ON_TRACE_PARAM
server.error.include-stacktrace=${CAS_INCLUDE_STACKTRACE:ALWAYS}
##
# Database settings
#
database.driverClass=org.postgresql.Driver
database.url=jdbc:postgresql://${CAS_DB_HOST:127.0.0.1}:${CAS_DB_PORT:5432}/${CAS_DB_NAME:our_db}
database.dialect=org.hibernate.dialect.PostgreSQL82Dialect
database.user=${CAS_DB_USER:}
database.password=${CAS_DB_PASS:}
database.pool.initialSize=2
database.pool.minSize=2
database.pool.maxSize=12
database.pool.acquireIncrement=2
# kills persistent connections that have been idle for > 60 seconds
database.pool.maxIdleTime=60
# keys
cas.tgc.crypto.encryption.key=${CAS_TGC_ENCRYPTION_KEY:}
cas.tgc.crypto.signing.key=${CAS_TGC_SIGNING_KEY:}
cas.webflow.crypto.encryption.key=${CAS_WEBFLOW_ENCRYPTION_KEY:}
cas.webflow.crypto.signing.key=${CAS_WEBFLOW_SIGNING_KEY:}
##
# CAS Authentication Policy
#
cas.authn.policy.any.enabled=true
cas.authn.policy.any.tryAll=false
# Attribute release policy
cas.authn.attributeRepository.defaultAttributesToRelease=username,givenname,familyname,mail,[others]
# Disable default authenticators
cas.authn.accept.users=
#cas.sso.proxyAuthnEnabled=false
##
# Okta SAML IdP delegation integration
cas.authn.pac4j.saml[0].keystorePassword=our_passwd
cas.authn.pac4j.saml[0].privateKeyPassword=our_key
cas.authn.pac4j.saml[0].serviceProviderEntityId=urn:cas:saml:our.url
cas.authn.pac4j.saml[0].serviceProviderMetadataPath=/etc/cas/config/sp-metadata.xml
cas.authn.pac4j.saml[0].keystorePath=/etc/cas/config/samlKeystore.jks
cas.authn.pac4j.saml[0].identityProviderMetadataPath=https://our.okta.vanity.domain/app/our_okta_sp_id/sso/saml/metadata
##
# PostgreSQL authentication
cas.authn.jdbc.query[0].name=ourdb
cas.authn.jdbc.query[0].order=1
cas.authn.jdbc.query[0].sql=SELECT ...
cas.authn.jdbc.query[0].fieldPassword=password
cas.authn.jdbc.query[0].fieldDisabled=disabled
cas.authn.jdbc.query[0].url=${database.url}
cas.authn.jdbc.query[0].dialect=${database.dialect}
cas.authn.jdbc.query[0].user=${database.user}
cas.authn.jdbc.query[0].password=${database.password}
cas.authn.jdbc.query[0].driverClass=${database.driverClass}
cas.authn.jdbc.query[0].passwordEncoder.type=DEFAULT
cas.authn.jdbc.query[0].passwordEncoder.encodingAlgorithm=...
##
# Attributes
#
cas.authn.attributeRepository.jdbc[0].sql=SELECT ...
cas.authn.attributeRepository.jdbc[0].username=username,univid
...
cas.authn.attributeRepository.jdbc[0].singleRow=true
cas.authn.attributeRepository.jdbc[0].order=0
cas.authn.attributeRepository.jdbc[0].queryType=OR
cas.authn.attributeRepository.jdbc[0].url=${database.url}
cas.authn.attributeRepository.jdbc[0].dialect=${database.dialect}
cas.authn.attributeRepository.jdbc[0].user=${database.user}
cas.authn.attributeRepository.jdbc[0].password=${database.password}
cas.authn.attributeRepository.jdbc[0].driverClass=${database.driverClass}
# Specify whether CAS should redirect to the specified service parameter on /logout requests
cas.logout.followServiceRedirects=true
# Specify how CAS should respond and validate incoming HTTP requests
# X-Frame-Options - default setting is DENY
cas.httpWebRequest.header.xframe=true
cas.httpWebRequest.header.xframeOptions=ALLOWALL
##
# CAS PersonDirectory Principal Resolution
#
...
##
# CAS Authentication Throttling
#
...
##
# CAS Health Monitoring
#
...
##
# SAML
#
# Indicates the SAML response issuer
#cas.samlCore.issuer=sso.lib.uts.edu.au
#
# Indicates the skew allowance which controls the issue instant of the SAML response
#cas.samlCore.skewAllowance=60
#
# Indicates whether SAML ticket id generation should be saml2-compliant.
#cas.samlCore.ticketidSaml2=false
##
# CORS handling
#
...
##
# Memcached
#
...
# Monitoring
cas.monitor.memcached.daemon=false
##
# Service ticket behaviour
#
cas.ticket.st.timeToKillInSeconds=60
##
# Service registry
cas.serviceRegistry.json.location=file:/etc/cas/services
# -- / -- #
Background:
Our organisation plans to retire CAS in favour of Okta in a phased transition. The first phase is to use Okta as an IdP for CAS, replacing a bespoke Azure AD/MSAL module. We are not keen to upgrade to CAS 6 given that our CAS will be retired. The org's CAS expert has left and the system has been handed to me, as I'm a Java programmer and CAS is written in Java, so at least I can debug it. I am most certainly not a CAS expert, and I find the manual vague, incomplete and lacking in concrete examples.

rails 4 devise ldap_authenticatable current_user not set

I'm fairly new to Rails 4 and am experimenting with Devise and ldap_authenticatable, and I see something that I'm not sure is right. When I authenticate against my Active Directory, Devise works fine and stores the user in the MySQL database as expected. However, I seem to lose the user params and can't tell which user just authenticated: user_signed_in? returns false, but if I hit the login link I get the message "already signed in"; current_user is nil and set_user fails because params[:id] is nil. Something seems broken here, but I'm not sure what the norm is for how Devise sets or keeps the user params alive.
Any ideas or helpful information?
User Model:
class User < ActiveRecord::Base
  # Include default devise modules. Others available are:
  # :confirmable, :lockable, :timeoutable and :omniauthable
  devise :ldap_authenticatable, :trackable, :validatable

  before_save :get_ldap_attrs

  def get_ldap_attrs
    self.firstname = Devise::LDAP::Adapter.get_ldap_param(self.email, 'givenName')
    self.lastname  = Devise::LDAP::Adapter.get_ldap_param(self.email, 'sn')
    self.login     = Devise::LDAP::Adapter.get_ldap_param(self.email, 'sAMAccountName')
    self.email     = Devise::LDAP::Adapter.get_ldap_param(self.email, 'mail').first
    self.studentid = Devise::LDAP::Adapter.get_ldap_param(self.email, 'title')
  end
end
----
ldap.yaml
## Authorizations
# Uncomment out the merging for each environment that you'd like to include.
# You can also just copy and paste the tree (do not include the "authorizations") to each
# environment if you need something different per environment.
authorizations: &AUTHORIZATIONS
  allow_unauthenticated_bind: false
  group_base: ou=groups,dc=kentshill,dc=org
  ## Requires config.ldap_check_group_membership in devise.rb be true
  # Can have multiple values, must match all to be authorized
  required_groups:
    # If only a group name is given, membership will be checked against "uniqueMember"
    #- ########################
    #- #######################
    # If an array is given, the first element will be the attribute to check against, the second the group name
    #- ["moreMembers", "cn=users,ou=groups,dc=test,dc=com"]
  ## Requires config.ldap_check_attributes in devise.rb to be true
  ## Can have multiple attributes and values, must match all to be authorized
  require_attribute:
    objectClass: inetOrgPerson
    authorizationRole: postsAdmin

## Environment
development:
  host: address
  port: 636
  attribute: mail
  base: DN
  admin_user: fqn user with privs
  admin_password: password
  ssl: true
  # <<: *AUTHORIZATIONS

test:
  host: localhost
  port: 3389
  attribute: cn
  base: ou=people,dc=test,dc=com
  admin_user: cn=admin,dc=test,dc=com
  admin_password: admin_password
  ssl: simple_tls
  # <<: *AUTHORIZATIONS

production:
  host: localhost
  port: 636
  attribute: cn
  base: ou=people,dc=test,dc=com
  admin_user: cn=admin,dc=test,dc=com
  admin_password: admin_password
  ssl: start_tls
  # <<: *AUTHORIZATIONS
----------------
Devise initializer
# Use this hook to configure devise mailer, warden hooks and so forth.
# Many of these configuration options can be set straight in your model.
Devise.setup do |config|
  # ==> LDAP Configuration
  config.ldap_logger = true
  config.ldap_create_user = true
  config.ldap_update_password = true
  #config.ldap_config = "#{Rails.root}/config/ldap.yml"
  config.ldap_check_group_membership = false
  #config.ldap_check_group_membership_without_admin = false
  config.ldap_check_attributes = false
  config.ldap_use_admin_to_bind = true
  config.ldap_ad_group_check = false

  # The secret key used by Devise. Devise uses this key to generate
  # random tokens. Changing this key will render invalid all existing
  # confirmation, reset password and unlock tokens in the database.
  # Devise will use the `secret_key_base` on Rails 4+ applications as its `secret_key`
  # by default. You can change it below and use your own secret key.
  # config.secret_key = 'ead157a98cc1402f93c717c537225a807971f381bdb51063b22d9979b39e0db385493e0d392999152597ce52baf327d97ffc9a59371ea3258cd8f5fc6d158b75'

  # ==> Mailer Configuration
  # Configure the e-mail address which will be shown in Devise::Mailer,
  # note that it will be overwritten if you use your own mailer class
  # with default "from" parameter.
  config.mailer_sender = 'please-change-me-at-config-initializers-devise@example.com'

  # Configure the class responsible to send e-mails.
  # config.mailer = 'Devise::Mailer'

  # ==> ORM configuration
  # Load and configure the ORM. Supports :active_record (default) and
  # :mongoid (bson_ext recommended) by default. Other ORMs may be
  # available as additional gems.
  require 'devise/orm/active_record'

  config.ldap_auth_username_builder = Proc.new() { |attribute, login, ldap| login }

  # config.warden do |manager|
  #   manager.default_strategies(:scope => :user).unshift :ldap_authenticatable
  # end

  # ==> Configuration for any authentication mechanism
  # Configure which keys are used when authenticating a user. The default is
  # just :email. You can configure it to use [:username, :subdomain], so for
  # authenticating a user, both parameters are required. Remember that those
  # parameters are used only when authenticating and not when retrieving from
  # session. If you need permissions, you should implement that in a before filter.
  # You can also supply a hash where the value is a boolean determining whether
  # or not authentication should be aborted when the value is not present.
  config.authentication_keys = [:email]

  # Configure parameters from the request object used for authentication. Each entry
  # given should be a request method and it will automatically be passed to the
  # find_for_authentication method and considered in your model lookup. For instance,
  # if you set :request_keys to [:subdomain], :subdomain will be used on authentication.
  # The same considerations mentioned for authentication_keys also apply to request_keys.
  # config.request_keys = []

  # Configure which authentication keys should be case-insensitive.
  # These keys will be downcased upon creating or modifying a user and when used
  # to authenticate or find a user. Default is :email.
  config.case_insensitive_keys = [:email]
"config/initializers/devise.rb" 280L, 13721C

Running buildbot behind cherokee reverse proxy

I am attempting to run my buildbot master server behind a cherokee reverse proxy with the buildbot instance as cherokee's information source in a round robin reverse proxy layout.
This is the buildbot master.cfg configuration file:
# -*- python -*-
# ex: set syntax=python:

# This is a sample buildmaster config file. It must be installed as
# 'master.cfg' in your buildmaster's base directory.

# This is the dictionary that the buildmaster pays attention to. We also use
# a shorter alias to save typing.
c = BuildmasterConfig = {}

####### BUILDSLAVES

# The 'slaves' list defines the set of recognized buildslaves. Each element is
# a BuildSlave object, specifying a unique slave name and password. The same
# slave name and password must be configured on the slave.
from buildbot.buildslave import BuildSlave
c['slaves'] = [BuildSlave("example-slave", "pass")]

# 'slavePortnum' defines the TCP port to listen on for connections from slaves.
# This must match the value configured into the buildslaves (with their
# --master option)
c['slavePortnum'] = 9989

####### CHANGESOURCES

# the 'change_source' setting tells the buildmaster how it should find out
# about source code changes. Here we point to the buildbot clone of pyflakes.
from buildbot.changes.gitpoller import GitPoller
c['change_source'] = []
c['change_source'].append(GitPoller(
    'git://github.com/buildbot/pyflakes.git',
    workdir='gitpoller-workdir', branch='master',
    pollinterval=300))

####### SCHEDULERS

# Configure the Schedulers, which decide how to react to incoming changes. In this
# case, just kick off a 'runtests' build
from buildbot.schedulers.basic import SingleBranchScheduler
from buildbot.schedulers.forcesched import ForceScheduler
from buildbot.changes import filter
c['schedulers'] = []
c['schedulers'].append(SingleBranchScheduler(
    name="all",
    change_filter=filter.ChangeFilter(branch='master'),
    treeStableTimer=None,
    builderNames=["runtests"]))
c['schedulers'].append(ForceScheduler(
    name="force",
    builderNames=["runtests"]))

####### BUILDERS

# The 'builders' list defines the Builders, which tell Buildbot how to perform a build:
# what steps, and which slaves can execute them. Note that any particular build will
# only take place on one slave.
from buildbot.process.factory import BuildFactory
from buildbot.steps.source import Git
from buildbot.steps.shell import ShellCommand

factory = BuildFactory()
# check out the source
factory.addStep(Git(repourl='git://github.com/buildbot/pyflakes.git', mode='copy'))
# run the tests (note that this will require that 'trial' is installed)
factory.addStep(ShellCommand(command=["trial", "pyflakes"]))

from buildbot.config import BuilderConfig

c['builders'] = []
c['builders'].append(
    BuilderConfig(name="runtests",
                  slavenames=["example-slave"],
                  factory=factory))

####### STATUS TARGETS

# 'status' is a list of Status Targets. The results of each build will be
# pushed to these targets. buildbot/status/*.py has a variety to choose from,
# including web pages, email senders, and IRC bots.
c['status'] = []

from buildbot.status import html
from buildbot.status.web import authz, auth

authz_cfg = authz.Authz(
    # change any of these to True to enable; see the manual for more
    # options
    auth=auth.BasicAuth([("pyflakes", "pyflakes")]),
    gracefulShutdown=False,
    forceBuild='auth',  # use this to test your slave once it is set up
    forceAllBuilds=False,
    pingBuilder=False,
    stopBuild=False,
    stopAllBuilds=False,
    cancelPendingBuild=False,
)
c['status'].append(html.WebStatus(http_port=8010, authz=authz_cfg))

####### PROJECT IDENTITY

# the 'title' string will appear at the top of this buildbot
# installation's html.WebStatus home page (linked to the
# 'titleURL') and is embedded in the title of the waterfall HTML page.
c['title'] = "Pyflakes"
c['titleURL'] = "http://divmod.org/trac/wiki/DivmodPyflakes"

# the 'buildbotURL' string should point to the location where the buildbot's
# internal web server (usually the html.WebStatus page) is visible. This
# typically uses the port number set in the Waterfall 'status' entry, but
# with an externally-visible host name which the buildbot cannot figure out
# without some help.
c['buildbotURL'] = "http://localhost:8010/"

####### DB URL

c['db'] = {
    # This specifies what database buildbot uses to store its state. You can leave
    # this at its default for all but the largest installations.
    'db_url': "sqlite:///state.sqlite",
}
And this is the cherokee configuration:
Unfortunately, I get 502 Bad Gateway when I go to my web URL. On the other hand, I know that my buildbot master instance is working correctly, because going to the same URL with :8010 appended gives me the "Welcome to the Buildbot ..." page.
Is your proxy on the same machine as the buildbot? If not, you will need to adjust the URL in cherokee to point to the machine running buildbot (localhost points to the machine cherokee is running on).
In any case, c['buildbotURL'] should be changed to point to the public URL that the buildbot is available under (i.e. what cherokee exposes, rather than the URL being proxied).
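For example, in master.cfg (the hostname below is a placeholder for whatever public address cherokee serves):

# Point buildbotURL at the URL cherokee exposes, not the proxied :8010 address.
c['buildbotURL'] = "http://buildbot.example.org/"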