My postgresql.conf has the following content:
# -----------------------------
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form:
#
# name = value
#
# (The "=" is optional.) Whitespace may be used. Comments are introduced with
# "#" anywhere on a line. The complete list of parameter names and allowed
# values can be found in the PostgreSQL documentation.
#
# The commented-out settings shown in this file represent the default values.
# Re-commenting a setting is NOT sufficient to revert it to the default value;
# you need to reload the server.
#
# This file is read on server startup and when the server receives a SIGHUP
# signal. If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, or use "pg_ctl reload". Some
# parameters, which are marked below, require a server shutdown and restart to
# take effect.
#
# Any parameter can also be given as a command-line option to the server, e.g.,
# "postgres -c log_connections=on". Some parameters can be changed at run time
# with the "SET" SQL command.
#
# Memory units:  kB = kilobytes        Time units:  ms  = milliseconds
#                MB = megabytes                     s   = seconds
#                GB = gigabytes                     min = minutes
#                                                   h   = hours
#                                                   d   = days
#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------
# The default values of these variables are driven from the -D command-line
# option or PGDATA environment variable, represented here as ConfigDir.
#data_directory = 'ConfigDir' # use data in another directory
# (change requires restart)
#hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file
# (change requires restart)
#ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file
# (change requires restart)
# If external_pid_file is not explicitly set, no extra PID file is written.
#external_pid_file = '(none)' # write an extra PID file
# (change requires restart)
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
listen_addresses = '*' # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost', '*' = all
# (change requires restart)
port = 5432 # (change requires restart)
max_connections = 100 # (change requires restart)
# Note: Increasing max_connections costs ~400 bytes of shared memory per
# connection slot, plus lock space (see max_locks_per_transaction).
#superuser_reserved_connections = 3 # (change requires restart)
#unix_socket_directory = '' # (change requires restart)
#unix_socket_group = '' # (change requires restart)
#unix_socket_permissions = 0777 # begin with 0 to use octal notation
# (change requires restart)
#bonjour = off # advertise server via Bonjour
# (change requires restart)
#bonjour_name = '' # defaults to the computer name
# (change requires restart)
# - Security and Authentication -
#authentication_timeout = 1min # 1s-600s
#ssl = off # (change requires restart)
#ssl_ciphers = 'ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH' # allowed SSL ciphers
# (change requires restart)
#ssl_renegotiation_limit = 512MB # amount of data between renegotiations
#password_encryption = on
#db_user_namespace = off
# Kerberos and GSSAPI
#krb_server_keyfile = ''
#krb_srvname = 'postgres' # (Kerberos only)
#krb_caseins_users = off
# - TCP Keepalives -
# see "man 7 tcp" for details
#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;
# 0 selects the system default
#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;
# 0 selects the system default
#tcp_keepalives_count = 0 # TCP_KEEPCNT;
# 0 selects the system default
#------------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#------------------------------------------------------------------------------
# - Memory -
shared_buffers = 32MB # min 128kB
# (change requires restart)
#temp_buffers = 8MB # min 800kB
#max_prepared_transactions = 0 # zero disables the feature
# (change requires restart)
# Note: Increasing max_prepared_transactions costs ~600 bytes of shared memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
# It is not advisable to set max_prepared_transactions nonzero unless you
# actively intend to use prepared transactions.
#work_mem = 1MB # min 64kB
#maintenance_work_mem = 16MB # min 1MB
#max_stack_depth = 2MB # min 100kB
# - Kernel Resource Usage -
#max_files_per_process = 1000 # min 25
# (change requires restart)
#shared_preload_libraries = '' # (change requires restart)
# - Cost-Based Vacuum Delay -
#vacuum_cost_delay = 0ms # 0-100 milliseconds
#vacuum_cost_page_hit = 1 # 0-10000 credits
#vacuum_cost_page_miss = 10 # 0-10000 credits
#vacuum_cost_page_dirty = 20 # 0-10000 credits
#vacuum_cost_limit = 200 # 1-10000 credits
# - Background Writer -
#bgwriter_delay = 200ms # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100 # 0-1000 max buffers written/round
#bgwriter_lru_multiplier = 2.0 # 0-10.0 multiplier on buffers scanned/round
# - Asynchronous Behavior -
#effective_io_concurrency = 1 # 1-1000. 0 disables prefetching
#------------------------------------------------------------------------------
# WRITE AHEAD LOG
#------------------------------------------------------------------------------
# - Settings -
#wal_level = minimal # minimal, archive, or hot_standby
# (change requires restart)
#fsync = on # turns forced synchronization on or off
#synchronous_commit = on # synchronization level; on, off, or local
#wal_sync_method = fsync # the default is the first option
# supported by the operating system:
# open_datasync
# fdatasync (default on Linux)
# fsync
# fsync_writethrough
# open_sync
#full_page_writes = on # recover from partial page writes
#wal_buffers = -1 # min 32kB, -1 sets based on shared_buffers
# (change requires restart)
#wal_writer_delay = 200ms # 1-10000 milliseconds
#commit_delay = 0 # range 0-100000, in microseconds
#commit_siblings = 5 # range 1-1000
# - Checkpoints -
#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each
#checkpoint_timeout = 5min # range 30s-1h
#checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0
#checkpoint_warning = 30s # 0 disables
# - Archiving -
#archive_mode = off # allows archiving to be done
# (change requires restart)
#archive_command = '' # command to use to archive a logfile segment
#archive_timeout = 0 # force a logfile segment switch after this
# number of seconds; 0 disables
#------------------------------------------------------------------------------
# REPLICATION
#------------------------------------------------------------------------------
# - Master Server -
# These settings are ignored on a standby server
#max_wal_senders = 0 # max number of walsender processes
# (change requires restart)
#wal_sender_delay = 1s # walsender cycle time, 1-10000 milliseconds
#wal_keep_segments = 0 # in logfile segments, 16MB each; 0 disables
#vacuum_defer_cleanup_age = 0 # number of xacts by which cleanup is delayed
#replication_timeout = 60s # in milliseconds; 0 disables
#synchronous_standby_names = '' # standby servers that provide sync rep
# comma-separated list of application_name
# from standby(s); '*' = all
# - Standby Servers -
# These settings are ignored on a master server
#hot_standby = off # "on" allows queries during recovery
# (change requires restart)
#max_standby_archive_delay = 30s # max delay before canceling queries
# when reading WAL from archive;
# -1 allows indefinite delay
#max_standby_streaming_delay = 30s # max delay before canceling queries
# when reading streaming WAL;
# -1 allows indefinite delay
#wal_receiver_status_interval = 10s # send replies at least this often
# 0 disables
#hot_standby_feedback = off # send info from standby to prevent
# query conflicts
#------------------------------------------------------------------------------
# QUERY TUNING
#------------------------------------------------------------------------------
# - Planner Method Configuration -
#enable_bitmapscan = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_indexscan = on
#enable_material = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on
# - Planner Cost Constants -
#seq_page_cost = 1.0 # measured on an arbitrary scale
#random_page_cost = 4.0 # same scale as above
#cpu_tuple_cost = 0.01 # same scale as above
#cpu_index_tuple_cost = 0.005 # same scale as above
#cpu_operator_cost = 0.0025 # same scale as above
#effective_cache_size = 128MB
# - Genetic Query Optimizer -
#geqo = on
#geqo_threshold = 12
#geqo_effort = 5 # range 1-10
#geqo_pool_size = 0 # selects default based on effort
#geqo_generations = 0 # selects default based on effort
#geqo_selection_bias = 2.0 # range 1.5-2.0
#geqo_seed = 0.0 # range 0.0-1.0
# - Other Planner Options -
#default_statistics_target = 100 # range 1-10000
#constraint_exclusion = partition # on, off, or partition
#cursor_tuple_fraction = 0.1 # range 0.0-1.0
#from_collapse_limit = 8
#join_collapse_limit = 8 # 1 disables collapsing of explicit
# JOIN clauses
#------------------------------------------------------------------------------
# ERROR REPORTING AND LOGGING
#------------------------------------------------------------------------------
# - Where to Log -
log_destination = 'stderr' # Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# depending on platform. csvlog
# requires logging_collector to be on.
# This is used when logging to stderr:
logging_collector = on # Enable capturing of stderr and csvlog
# into log files. Required to be on for
# csvlogs.
# (change requires restart)
# These are only used if logging_collector is on:
#log_directory = 'pg_log' # directory where log files are written,
# can be absolute or relative to PGDATA
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern,
# can include strftime() escapes
#log_file_mode = 0600 # creation mode for log files,
# begin with 0 to use octal notation
#log_truncate_on_rotation = off # If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
#log_rotation_age = 1d # Automatic rotation of logfiles will
# happen after that time. 0 disables.
#log_rotation_size = 10MB # Automatic rotation of logfiles will
# happen after that much log output.
# 0 disables.
# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
#silent_mode = off # Run server silently.
# DO NOT USE without syslog or
# logging_collector
# (change requires restart)
# - When to Log -
#client_min_messages = notice # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# log
# notice
# warning
# error
#log_min_messages = warning # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic
#log_min_error_statement = error # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic (effectively off)
#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
# and their durations, > 0 logs only
# statements running at least this number
# of milliseconds
# - What to Log -
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_checkpoints = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default # terse, default, or verbose messages
#log_hostname = off
log_line_prefix = '%t ' # special values:
# %a = application name
# %u = user name
# %d = database name
# %r = remote host and port
# %h = remote host
# %p = process ID
# %t = timestamp without milliseconds
# %m = timestamp with milliseconds
# %i = command tag
# %e = SQL state
# %c = session ID
# %l = session line number
# %s = session start timestamp
# %v = virtual transaction ID
# %x = transaction ID (0 if none)
# %q = stop here in non-session
# processes
# %% = '%'
# e.g. '<%u%%%d> '
#log_lock_waits = off # log lock waits >= deadlock_timeout
#log_statement = 'all' # none, ddl, mod, all
#log_temp_files = -1 # log temporary files equal or larger
# than the specified size in kilobytes;
# -1 disables, 0 logs all temp files
#log_timezone = '(defaults to server environment setting)'
#------------------------------------------------------------------------------
# RUNTIME STATISTICS
#------------------------------------------------------------------------------
# - Query/Index Statistics Collector -
#track_activities = on
#track_counts = on
#track_functions = none # none, pl, all
#track_activity_query_size = 1024 # (change requires restart)
#update_process_title = on
#stats_temp_directory = 'pg_stat_tmp'
# - Statistics Monitoring -
#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#log_statement_stats = off
#------------------------------------------------------------------------------
# AUTOVACUUM PARAMETERS
#------------------------------------------------------------------------------
#autovacuum = on # Enable autovacuum subprocess? 'on'
# requires track_counts to also be on.
#log_autovacuum_min_duration = -1 # -1 disables, 0 logs all actions and
# their durations, > 0 logs only
# actions running at least this number
# of milliseconds.
#autovacuum_max_workers = 3 # max number of autovacuum subprocesses
# (change requires restart)
#autovacuum_naptime = 1min # time between autovacuum runs
#autovacuum_vacuum_threshold = 50 # min number of row updates before
# vacuum
#autovacuum_analyze_threshold = 50 # min number of row updates before
# analyze
#autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum
#autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum
# (change requires restart)
#autovacuum_vacuum_cost_delay = 20ms # default vacuum cost delay for
# autovacuum, in milliseconds;
# -1 means use vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for
# autovacuum, -1 means use
# vacuum_cost_limit
#------------------------------------------------------------------------------
# CLIENT CONNECTION DEFAULTS
#------------------------------------------------------------------------------
# - Statement Behavior -
#search_path = '"$user",public' # schema names
#default_tablespace = '' # a tablespace name, '' uses the default
#temp_tablespaces = '' # a list of tablespace names, '' uses
# only default tablespace
#check_function_bodies = on
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = off
#default_transaction_deferrable = off
#session_replication_role = 'origin'
#statement_timeout = 0 # in milliseconds, 0 is disabled
#vacuum_freeze_min_age = 50000000
#vacuum_freeze_table_age = 150000000
#bytea_output = 'hex' # hex, escape
#xmlbinary = 'base64'
#xmloption = 'content'
# - Locale and Formatting -
datestyle = 'iso, mdy'
#intervalstyle = 'postgres'
#timezone = '(defaults to server environment setting)'
#timezone_abbreviations = 'Default' # Select the set of available time zone
# abbreviations. Currently, there are
# Default
# Australia
# India
# You can create your own file in
# share/timezonesets/.
#extra_float_digits = 0 # min -15, max 3
#client_encoding = sql_ascii # actually, defaults to database
# encoding
# These settings are initialized by initdb, but they can be changed.
lc_messages = 'English_United States.1252' # locale for system error message
# strings
lc_monetary = 'English_United States.1252' # locale for monetary formatting
lc_numeric = 'English_United States.1252' # locale for number formatting
lc_time = 'English_United States.1252' # locale for time formatting
# default configuration for text search
default_text_search_config = 'pg_catalog.english'
# - Other Defaults -
#dynamic_library_path = '$libdir'
#local_preload_libraries = ''
#------------------------------------------------------------------------------
# LOCK MANAGEMENT
#------------------------------------------------------------------------------
#deadlock_timeout = 1s
#max_locks_per_transaction = 64 # min 10
# (change requires restart)
# Note: Each lock table slot uses ~270 bytes of shared memory, and there are
# max_locks_per_transaction * (max_connections + max_prepared_transactions)
# lock table slots.
#max_pred_locks_per_transaction = 64 # min 10
# (change requires restart)
#------------------------------------------------------------------------------
# VERSION/PLATFORM COMPATIBILITY
#------------------------------------------------------------------------------
# - Previous PostgreSQL Versions -
#array_nulls = on
#backslash_quote = safe_encoding # on, off, or safe_encoding
#default_with_oids = off
#escape_string_warning = on
#lo_compat_privileges = off
#quote_all_identifiers = off
#sql_inheritance = on
#standard_conforming_strings = on
#synchronize_seqscans = on
# - Other Platforms and Clients -
#transform_null_equals = off
#------------------------------------------------------------------------------
# ERROR HANDLING
#------------------------------------------------------------------------------
#exit_on_error = off # terminate session on any error?
#restart_after_crash = on # reinitialize after backend crash?
#------------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#------------------------------------------------------------------------------
#custom_variable_classes = '' # list of custom variable class names
Where can I find the file where I can see all SQL statements that the server has executed?
Compared with the default file, I have changed the following row:
#log_statement = 'all'
After that I restarted my PC.
You left log_statement commented out, so it's still at its default.
In postgresql.conf, set:
log_statement = 'all'
(note the lack of the leading '#').
Then restart PostgreSQL. You don't have to restart the whole computer, just the PostgreSQL server. On Windows, that's a service managed by the Services control panel.
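For example, from an administrator command prompt you can restart the Windows service (a sketch only; the service name postgresql-x64-9.3 is an illustration and varies with your PostgreSQL version and installer):
net stop postgresql-x64-9.3
net start postgresql-x64-9.3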
Then you need to check the PostgreSQL logs. Their location depends on how you installed PostgreSQL and on your OS. On Windows they're in the pg_log directory inside your data directory, which you can find with SHOW data_directory; in psql or PgAdmin-III.
On Windows, look in %PROGRAMFILES%\PostgreSQL\9.3\data\pg_log (adjusting for your PostgreSQL version and where you installed PostgreSQL if you didn't use the defaults).
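A quick way to confirm the relevant locations and settings from psql or PgAdmin-III is to run these standard SQL commands:
SHOW data_directory;
SHOW log_directory;
SHOW logging_collector;
SHOW log_filename;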
Just as a small addition to the accepted answer, this should at least be mentioned so that some readers can avoid having to search through and edit postgresql.conf.
You can also change the setting for your current session by logging in as a superuser and then running:
SET log_statement = 'all';
or (for transaction level)
SET LOCAL log_statement = 'all';
From a comment at: postgresql log queries single database
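Since SET LOCAL only takes effect inside an explicit transaction block, a minimal sketch of using it would be:
BEGIN;
SET LOCAL log_statement = 'all';
SELECT 1;  -- statements in this transaction are logged
COMMIT;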
Mind that for the changes to work, the settings must already be enabled in postgresql.conf; otherwise they will not take effect after the server restart. Therefore, you will have to change the config first by uncommenting and changing
#log_statement = 'none'
to
log_statement = 'all'
and
#logging_collector = off
to
logging_collector = on
and
uncomment log_directory = ... and log_filename = ... as well.
From: How to log PostgreSQL queries?
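Put together, the logging-related lines in postgresql.conf would then read as follows (a minimal sketch that simply repeats the values shown earlier in this question):
logging_collector = on
log_statement = 'all'
log_directory = 'pg_log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'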
But if the settings are already enabled there, you can use the Postgres commands.
Side note: you can also enable statement logging for a single chosen database by logging in as a superuser and then running:
ALTER DATABASE your_database_name
SET log_statement = 'all';
From: postgresql log queries single database
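To undo such a per-database override later, the parameter can be reset the same way:
ALTER DATABASE your_database_name RESET log_statement;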
Related
I have configured Verdaccio on my local machine for testing. Below is my configuration:
#
# This is the default configuration file. It allows all users to do anything,
# please read carefully the documentation and best practices to
# improve security.
#
# Look here for more config file examples:
# https://github.com/verdaccio/verdaccio/tree/5.x/conf
#
# Read about the best practices
# https://verdaccio.org/docs/best
# path to a directory with all packages
storage: /verdaccio/storage/data
# path to a directory with plugins to include
plugins: /verdaccio/plugins
# https://verdaccio.org/docs/webui
# https://verdaccio.org/docs/configuration#uplinks
# a list of other known repositories we can talk to
uplinks:
  npmjs:
    url: https://registry.npmjs.org/
    cache: false
# https://verdaccio.org/docs/configuration#authentication
auth:
  htpasswd:
    file: /verdaccio/htpasswd
# Learn how to protect your packages
# https://verdaccio.org/docs/protect-your-dependencies/
# https://verdaccio.org/docs/configuration#packages
packages:
  '@mycompany/*':
    access: $authenticated
    publish: $authenticated
    unpublish: $authenticated
  '@*/*':
    # scoped packages
    access: $all
    publish: $authenticated
    unpublish: $authenticated
    proxy: npmjs
  '**':
    access: $all
    publish: $authenticated
    unpublish: $authenticated
    # publish: azuread
    # unpublish: azuread
    # if package is not available locally, proxy requests to 'npmjs' registry
    proxy: npmjs
# To improve your security configuration and avoid dependency confusion
# consider removing the proxy property for private packages
# https://verdaccio.org/docs/best#remove-proxy-to-increase-security-at-private-packages
# https://verdaccio.org/docs/configuration#server
# You can specify HTTP/1.1 server keep alive timeout in seconds for incoming connections.
# A value of 0 makes the http server behave similarly to Node.js versions prior to 8.0.0, which did not have a keep-alive timeout.
# WORKAROUND: Through given configuration you can workaround following issue https://github.com/verdaccio/verdaccio/issues/301. Set to 0 in case 60 is not enough.
server:
  keepAliveTimeout: 60
# Allow `req.ip` to resolve properly when Verdaccio is behind a proxy or load-balancer
# See: https://expressjs.com/en/guide/behind-proxies.html
# trustProxy: '127.0.0.1'
# https://verdaccio.org/docs/configuration#offline-publish
# publish:
# allow_offline: false
# https://verdaccio.org/docs/configuration#url-prefix
# url_prefix: /verdaccio/
# VERDACCIO_PUBLIC_URL='https://somedomain.org';
# url_prefix: '/my_prefix'
# // url -> https://somedomain.org/my_prefix/
# VERDACCIO_PUBLIC_URL='https://somedomain.org';
# url_prefix: '/'
# // url -> https://somedomain.org/
# VERDACCIO_PUBLIC_URL='https://somedomain.org/first_prefix';
# url_prefix: '/second_prefix'
# // url -> https://somedomain.org/second_prefix/'
# https://verdaccio.org/docs/configuration#security
# security:
# api:
# legacy: true
# jwt:
# sign:
# expiresIn: 29d
# verify:
# someProp: [value]
# web:
# sign:
# expiresIn: 1h # 1 hour by default
# verify:
# someProp: [value]
# https://verdaccio.org/docs/configuration#user-rate-limit
# userRateLimit:
# windowMs: 50000
# max: 1000
# https://verdaccio.org/docs/configuration#max-body-size
# max_body_size: 10mb
# https://verdaccio.org/docs/configuration#listen-port
# listen:
# - localhost:4873 # default value
# - http://localhost:4873 # same thing
# - 0.0.0.0:4873 # listen on all addresses (INADDR_ANY)
# - https://example.org:4873 # if you want to use https
# - "[::1]:4873" # ipv6
# - unix:/tmp/verdaccio.sock # unix socket
# The HTTPS configuration is useful if you do not consider use a HTTP Proxy
# https://verdaccio.org/docs/configuration#https
# https:
# key: ./path/verdaccio-key.pem
# cert: ./path/verdaccio-cert.pem
# ca: ./path/verdaccio-csr.pem
# https://verdaccio.org/docs/configuration#proxy
# http_proxy: http://something.local/
# https_proxy: https://something.local/
# https://verdaccio.org/docs/configuration#notifications
# notify:
# method: POST
# headers: [{ "Content-Type": "application/json" }]
# endpoint: https://usagge.hipchat.com/v2/room/3729485/notification?auth_token=mySecretToken
# content: '{"color":"green","message":"New package published: * {{ name }}*","notify":true,"message_format":"text"}'
middlewares:
  audit:
    enabled: true
# https://verdaccio.org/docs/logger
# log settings
logs: { type: stdout, format: pretty, level: http }
#experiments:
# # support for npm token command
# token: false
# # disable writing body size to logs, read more on ticket 1912
# bytesin_off: false
# # enable tarball URL redirect for hosting tarball with a different server, the tarball_url_redirect can be a template string
# tarball_url_redirect: 'https://mycdn.com/verdaccio/${packageName}/${filename}'
# # the tarball_url_redirect can be a function, takes packageName and filename and returns the url, when working with a js configuration file
# tarball_url_redirect(packageName, filename) {
# const signedUrl = // generate a signed url
# return signedUrl;
# }
# translate your registry, api i18n not available yet
# i18n:
# list of the available translations https://github.com/verdaccio/verdaccio/blob/master/packages/plugins/ui-theme/src/i18n/ABOUT_TRANSLATIONS.md
# web: en-US
# minio configuration
store:
  minio:
    # The HTTP port of your minio instance
    port: 9000
    # The endpoint on which verdaccio will access minio (without scheme)
    endPoint: 172.17.0.4
    # The minio access key
    accessKey: ***
    # The minio secret key
    secretKey: *****
    # Disable SSL if you're accessing minio directly through HTTP
    useSSL: false
    # The region used by your minio instance (optional, defaults to "us-east-1")
    # region: eu-west-1
    # A bucket where verdaccio will store its database & packages (optional, defaults to "verdaccio")
    bucket: 'npm'
    # Number of retries when a request to minio fails (optional, defaults to 10)
    retries: 3
    # Delay between retries (optional, defaults to 100)
    delay: 50
I am able to log in, and I can publish and pull private packages. However, whenever I try to pull a package that is not present on my machine and it gets pulled from registry.npmjs.org, I get the warning "tarball data seems to be corrupted. Trying again." for any random package, and then the command crashes with ERR: CODE EINTEGRITY, sha256:****
I am not able to figure this out.
Can anyone help me set up streaming with Telegraf into InfluxDB Cloud? I am using this tutorial; the Python script runs on my local machine and pushes notifications into RabbitMQ. Telegraf is subscribed to RabbitMQ with this config.
# Configuration for telegraf agent
[agent]
## Default data collection interval for all inputs
interval = "10s"
## Rounds collection interval to 'interval'
## ie, if interval="10s" then always collect on :00, :10, :20, etc.
round_interval = true
## Telegraf will send metrics to outputs in batches of at most
## metric_batch_size metrics.
## This controls the size of writes that Telegraf sends to output plugins.
metric_batch_size = 1000
## For failed writes, telegraf will cache metric_buffer_limit metrics for each
## output, and will flush this buffer on a successful write. Oldest metrics
## are dropped first when this buffer fills.
## This buffer only fills when writes fail to output plugin(s).
metric_buffer_limit = 10000
## Collection jitter is used to jitter the collection by a random amount.
## Each plugin will sleep for a random time within jitter before collecting.
## This can be used to avoid many plugins querying things like sysfs at the
## same time, which can have a measurable effect on the system.
collection_jitter = "0s"
## Default flushing interval for all outputs. Maximum flush_interval will be
## flush_interval + flush_jitter
flush_interval = "10s"
## Jitter the flush interval by a random amount. This is primarily to avoid
## large write spikes for users running a large number of telegraf instances.
## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
flush_jitter = "0s"
## By default or when set to "0s", precision will be set to the same
## timestamp order as the collection interval, with the maximum being 1s.
## ie, when interval = "10s", precision will be "1s"
## when interval = "250ms", precision will be "1ms"
## Precision will NOT be used for service inputs. It is up to each individual
## service input to set the timestamp at the appropriate precision.
## Valid time units are "ns", "us" (or "µs"), "ms", "s".
precision = ""
## Logging configuration:
## Run telegraf with debug log messages.
debug = true
## Run telegraf in quiet mode (error log messages only).
quiet = false
## Specify the log file name. The empty string means to log to stderr.
logfile = ""
## Override default hostname, if empty use os.Hostname()
hostname = ""
## If set to true, do not set the "host" tag in the telegraf agent.
omit_hostname = false
[[outputs.influxdb_v2]]
## The URLs of the InfluxDB cluster nodes.
##
## Multiple URLs can be specified for a single cluster, only ONE of the
## urls will be written to each interval.
## urls exp: http://127.0.0.1:9999
urls = ["https://eu-central-1-1.aws.cloud2.influxdata.com"]
## Token for authentication.
token = "$INFLUX_TOKEN"
## Organization is the name of the organization you wish to write to; must exist.
organization = "some@gmail.com"
## Destination bucket to write into.
bucket = "two"
[[inputs.cpu]]
## Whether to report per-cpu stats or not
percpu = true
## Whether to report total system cpu stats or not
totalcpu = true
## If true, collect raw CPU time metrics.
collect_cpu_time = false
## If true, compute and report the sum of all non-idle CPU states.
report_active = false
[[inputs.disk]]
## By default stats will be gathered for all mount points.
## Set mount_points will restrict the stats to only the specified mount points.
# mount_points = ["/"]
## Ignore mount points by filesystem type.
ignore_fs = ["tmpfs", "devtmpfs", "devfs", "overlay", "aufs", "squashfs"]
[[inputs.diskio]]
[[inputs.mem]]
[[inputs.net]]
[[inputs.processes]]
[[inputs.swap]]
[[inputs.system]]
# # Reads metrics from RabbitMQ servers via the Management Plugin
[[inputs.rabbitmq]]
# ## Management Plugin url. (default: http://localhost:15672)
url = "http://localhost:15672"
# ## Tag added to rabbitmq_overview series; deprecated: use tags
# # name = "rmq-server-1"
# ## Credentials
username = "guest"
password = "guest"
#
# ## Optional TLS Config
# # tls_ca = "/etc/telegraf/ca.pem"
# # tls_cert = "/etc/telegraf/cert.pem"
# # tls_key = "/etc/telegraf/key.pem"
# ## Use TLS but skip chain & host verification
# # insecure_skip_verify = false
#
# ## Optional request timeouts
# ##
# ## ResponseHeaderTimeout, if non-zero, specifies the amount of time to wait
# ## for a server's response headers after fully writing the request.
header_timeout = "3s"
# ##
# ## client_timeout specifies a time limit for requests made by this client.
# ## Includes connection time, any redirects, and reading the response body.
client_timeout = "4s"
#
# ## A list of nodes to gather as the rabbitmq_node measurement. If not
# ## specified, metrics for all nodes are gathered.
# # nodes = ["rabbit@node1", "rabbit@node2"]
#
# ## A list of queues to gather as the rabbitmq_queue measurement. If not
# ## specified, metrics for all queues are gathered.
# # queues = ["telegraf"]
#
# ## A list of exchanges to gather as the rabbitmq_exchange measurement. If no
[[inputs.mqtt_consumer]]
name_prefix = "influx"
servers = ["tcp://rabbitmq:1883"]
qos = 0
connection_timeout = "30s"
topics = [
"crypto/btc",
# "crypto/eth",
]
persistent_session = false
client_id = ""
data_format = "json"
json_string_fields
The logs show that data is being written into InfluxDB Cloud:
2020-02-25T16:01:53Z I! Starting Telegraf 1.13.3
2020-02-25T16:01:53Z I! Loaded inputs: mqtt_consumer disk diskio net system rabbitmq cpu mem processes swap
2020-02-25T16:01:53Z I! Loaded aggregators:
2020-02-25T16:01:53Z I! Loaded processors:
2020-02-25T16:01:53Z I! Loaded outputs: influxdb_v2
2020-02-25T16:01:53Z I! Tags enabled: host=dos4dev
2020-02-25T16:01:53Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"dos4dev", Flush Interval:10s
2020-02-25T16:01:53Z D! [agent] Initializing plugins
2020-02-25T16:01:53Z D! [agent] Connecting outputs
2020-02-25T16:01:53Z D! [agent] Attempting connection to [outputs.influxdb_v2]
2020-02-25T16:01:53Z D! [agent] Successfully connected to outputs.influxdb_v2
2020-02-25T16:01:53Z D! [agent] Starting service inputs
2020-02-25T16:02:00Z D! [inputs.mqtt_consumer] Connecting [tcp://rabbitmq:1883]
2020-02-25T16:02:10Z D! [inputs.mqtt_consumer] Connecting [tcp://rabbitmq:1883]
2020-02-25T16:02:10Z D! [outputs.influxdb_v2] Wrote batch of 78 metrics in 595.462779ms
2020-02-25T16:02:10Z D! [outputs.influxdb_v2] Buffer fullness: 83 / 10000 metrics
2020-02-25T16:02:20Z D! [inputs.mqtt_consumer] Connecting [tcp://rabbitmq:1883]
2020-02-25T16:02:20Z D! [outputs.influxdb_v2] Wrote batch of 83 metrics in 344.265787ms
2020-02-25T16:02:20Z D! [outputs.influxdb_v2] Buffer fullness: 83 / 10000 metrics
2020-02-25T16:02:30Z D! [inputs.mqtt_consumer] Connecting [tcp://rabbitmq:1883]
But I can't find the data in InfluxDB Cloud.
Based on the log messages from Telegraf, it looks like the data is being written. Have you tried following the docs for exploring your data? https://v2.docs.influxdata.com/v2.0/visualize-data/explore-metrics/
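For example, in the Data Explorer's script editor, a minimal Flux query against the bucket named in your output config would be (a sketch; it assumes the cpu input is the measurement you expect to see):
from(bucket: "two")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "cpu")
Also make sure you are looking at the organization named in your [[outputs.influxdb_v2]] section, since buckets are scoped to an organization.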
We are monitoring several servers with Monit, version 5.25.1.
Some are dedicated Apache servers, and the monitoring itself works fine.
But the Monit log (/var/log/monit) looks like this:
[CET Mar 18 03:12:03] info : Starting Monit 5.25.1 daemon with http interface at [0.0.0.0]:3353
[CET Mar 18 03:12:03] info : Monit start delay set to 180s
[CET Mar 18 03:15:03] info : 'xxxxx.localhost' Monit 5.25.1 started
[CET Mar 18 03:15:03] error : 'apache' error -- unknown resource ID: [5]
[CET Mar 18 03:16:08] error : 'apache' error -- unknown resource ID: [5]
[CET Mar 18 03:17:08] error : 'apache' error -- unknown resource ID: [5]
[CET Mar 18 03:18:08] error : 'apache' error -- unknown resource ID: [5]
The configuration file /etc/monit.conf looks like this:
###############################################################################
## Monit control file
###############################################################################
###############################################################################
## Global section
###############################################################################
##
## Start Monit in the background (run as a daemon):
# check services at 2-minute intervals
# with start delay 240 # optional: delay the first check by 4-minutes (by
# # default Monit check immediately after Monit start)
set daemon 60
with start delay 180
### Set the location of the Monit id file which stores the unique id for the
### Monit instance. The id is generated and stored on first Monit start. By
### default the file is placed in $HOME/.monit.id.
#
set idfile /var/.monit.id
## Set the list of mail servers for alert delivery. Multiple servers may be
## specified using a comma separator. By default Monit uses port 25 - it is
## possible to override this with the PORT option.
#
# set mailserver mail.bar.baz, # primary mailserver
# backup.bar.baz port 10025, # backup mailserver on port 10025
# localhost # fallback relay
set mailserver localhost
## By default Monit will drop alert events if no mail servers are available.
## If you want to keep the alerts for later delivery retry, you can use the
## EVENTQUEUE statement. The base directory where undelivered alerts will be
## stored is specified by the BASEDIR option. You can limit the maximal queue
## size using the SLOTS option (if omitted, the queue is limited by space
## available in the back end filesystem).
#
set eventqueue
basedir /var/monit # set the base directory where events will be stored
slots 100 # optionally limit the queue size
## Send status and events to M/Monit (for more informations about M/Monit
## see http://mmonit.com/).
#
# set mmonit http://monit:monit@192.168.1.10:8080/collector
#
#
## Monit by default uses the following alert mail format:
##
##
## You can override this message format or parts of it, such as subject
## or sender using the MAIL-FORMAT statement. Macros such as $DATE, etc.
## are expanded at runtime. For example, to override the sender, use:
#
# set mail-format { from: monit@foo.bar }
#
#
## You can set alert recipients whom will receive alerts if/when a
## service defined in this file has errors. Alerts may be restricted on
## events by using a filter as in the second example below.
#
set alert fake@mail.com not on { instance }
# receive all alerts
# set alert manager@foo.bar only on { timeout } # receive just service-
# # timeout alert
#
mail-format {
from: xxxxxxx@monit.localhost
subject: $SERVICE => $EVENT
message:
DESCRIPTION : $DESCRIPTION
ACTION : $ACTION
DATE : $DATE
HOST : $HOST
Sorry for the spam.
Monit
}
## Monit has an embedded web server which can be used to view status of
## services monitored and manage services from a web interface. See the
## Monit Wiki if you want to enable SSL for the web server.
#
set httpd port 3353 and
use address 0.0.0.0
allow yyyyy:zzzz
###############################################################################
## Services
###############################################################################
##
## Check general system resources such as load average, cpu and memory
## usage. Each test specifies a resource, conditions and the action to be
## performed should a test fail.
#
check system xxxxxx.localhost
if loadavg (1min) > 8 for 5 cycles then alert
if loadavg (5min) > 4 for 5 cycles then alert
if memory usage > 75% for 5 cycles then alert
if cpu usage (user) > 70% for 5 cycles then alert
if cpu usage (system) > 50% for 5 cycles then alert
if cpu usage (wait) > 50% for 5 cycles then alert
check process apache with pidfile /var/run/httpd/httpd.pid
group www
start program = "/etc/init.d/httpd start" with timeout 60 seconds
stop program = "/etc/init.d/httpd stop"
if failed host localhost port 80 then restart
if cpu > 60% for 2 cycles then alert
if cpu > 80% for 5 cycles then restart
if loadavg(5min) greater than 10 for 8 cycles then restart
if 3 restarts within 5 cycles then timeout
###############################################################################
## Includes
###############################################################################
##
## It is possible to include additional configuration parts from other files or
## directories.
#
# include /etc/monit.d/*
#
#
# Include all files from /etc/monit.d/
include /etc/monit.d/*
In the Monit web UI everything is OK and the monitoring works perfectly: we can stop and restart the service as we want.
So I don't understand the message "error : 'apache' error -- unknown resource ID: [5]" that we find in the Monit log.
Does anyone have an idea about it?
Thanks for your help.
I had the same problem.
M/Monit support said that loadavg is for "check system" only; it used to work for the apache check, but not anymore:
"The loadavg statement can be used in "check system" context only (load average is system property, not process'). Please remove the following statement and reload monit"
So disable this line by adding a # at the beginning of it:
# if loadavg(5min) greater than 10 for 8 cycles then restart
then restart monit
service monit restart
You will no longer receive the apache error.
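For reference, the apache check then looks like this (identical to the original apart from the commented-out loadavg line):
check process apache with pidfile /var/run/httpd/httpd.pid
group www
start program = "/etc/init.d/httpd start" with timeout 60 seconds
stop program = "/etc/init.d/httpd stop"
if failed host localhost port 80 then restart
if cpu > 60% for 2 cycles then alert
if cpu > 80% for 5 cycles then restart
# if loadavg(5min) greater than 10 for 8 cycles then restart
if 3 restarts within 5 cycles then timeout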
I have got an Edimax USB-to-Ethernet adapter based on the well-known ASIX AX88772B chip, and I want to make it work with my board, an ARM-based board running embedded Linux with kernel 2.6.32.17.
According to the ASIX documentation this chip is supposed to work with this kernel without problems, but it's not working in my case. I have selected the necessary components in the kernel as modules (asix, mii, usbnet), and after compiling the kernel I have the mii.ko, usbnet.ko and asix.ko files, which I copied to the right place.
After inserting the adapter into the hardware I can see that lsusb has recognized a new ASIX USB device with its PID and VID. After this I ran:
root@dm368-evm:~# modprobe asix
usbcore: registered new interface driver asix
You can see that the modules have been loaded into memory without any errors or problems (apparently modprobe manages dependencies automatically, and I didn't need to insert mii.ko and usbnet.ko manually):
root@dm368-evm:~# lsmod
Module                  Size  Used by
asix                   11444  0
usbnet                 11657  1 asix
mii                     3392  2 asix,usbnet
But I don't see an Ethernet interface after running the command below:
ifconfig -a
I also don't see any messages in dmesg. I would have expected a message like "eth0 is registered" or something like that, based on what I saw in Ubuntu when I plugged in the adapter. I also used this adapter with another ARM board that had kernel 3.50, and it worked fine, so I have no idea why it's not working here.
I also tried building the necessary drivers statically into the kernel, but that didn't make a difference either.
I really need to make this work because the board doesn't have an Ethernet connection, and I want one to use as a debug interface and for FTP file transfers.
Snippets of an sdiff of a 2.6.28 configuration (left side) versus your .config (right side) follow:
# #
# USB Network Adapters # USB Network Adapters
# #
# CONFIG_USB_CATC is not set # CONFIG_USB_CATC is not set
# CONFIG_USB_KAWETH is not set # CONFIG_USB_KAWETH is not set
# CONFIG_USB_PEGASUS is not set # CONFIG_USB_PEGASUS is not set
# CONFIG_USB_RTL8150 is not set # CONFIG_USB_RTL8150 is not set
CONFIG_USB_USBNET=y | CONFIG_USB_USBNET=m
CONFIG_USB_NET_AX8817X=y | CONFIG_USB_NET_AX8817X=m
# CONFIG_USB_NET_CDCETHER is not set | CONFIG_USB_NET_CDCETHER=m
> CONFIG_USB_NET_CDC_EEM=m
# CONFIG_USB_NET_DM9601 is not set # CONFIG_USB_NET_DM9601 is not set
# CONFIG_USB_NET_SMSC95XX is not set # CONFIG_USB_NET_SMSC95XX is not set
# CONFIG_USB_NET_GL620A is not set # CONFIG_USB_NET_GL620A is not set
# CONFIG_USB_NET_NET1080 is not set | CONFIG_USB_NET_NET1080=m
# CONFIG_USB_NET_PLUSB is not set # CONFIG_USB_NET_PLUSB is not set
# CONFIG_USB_NET_MCS7830 is not set # CONFIG_USB_NET_MCS7830 is not set
# CONFIG_USB_NET_RNDIS_HOST is not set # CONFIG_USB_NET_RNDIS_HOST is not set
# CONFIG_USB_NET_CDC_SUBSET is not set | CONFIG_USB_NET_CDC_SUBSET=m
# CONFIG_USB_NET_ZAURUS is not set | # CONFIG_USB_ALI_M5632 is not set
> # CONFIG_USB_AN2720 is not set
> CONFIG_USB_BELKIN=y
> CONFIG_USB_ARMLINUX=y
> # CONFIG_USB_EPSON2888 is not set
> # CONFIG_USB_KC2190 is not set
> CONFIG_USB_NET_ZAURUS=m
> # CONFIG_USB_NET_INT51X1 is not set
# CONFIG_WAN is not set # CONFIG_WAN is not set
# CONFIG_PPP is not set | CONFIG_PPP=m
> # CONFIG_PPP_MULTILINK is not set
> # CONFIG_PPP_FILTER is not set
> CONFIG_PPP_ASYNC=m
> CONFIG_PPP_SYNC_TTY=m
> CONFIG_PPP_DEFLATE=m
> # CONFIG_PPP_BSDCOMP is not set
> # CONFIG_PPP_MPPE is not set
> # CONFIG_PPPOE is not set
> # CONFIG_PPPOL2TP is not set
# CONFIG_SLIP is not set # CONFIG_SLIP is not set
# CONFIG_NETCONSOLE is not set | CONFIG_SLHC=m
# CONFIG_NETPOLL is not set | CONFIG_NETCONSOLE=y
# CONFIG_NET_POLL_CONTROLLER is not set | # CONFIG_NETCONSOLE_DYNAMIC is not set
> CONFIG_NETPOLL=y
> CONFIG_NETPOLL_TRAP=y
> CONFIG_NET_POLL_CONTROLLER=y
# CONFIG_ISDN is not set # CONFIG_ISDN is not set
> # CONFIG_PHONE is not set
...
CONFIG_USB_SUPPORT=y CONFIG_USB_SUPPORT=y
CONFIG_USB_ARCH_HAS_HCD=y CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB_ARCH_HAS_OHCI=y | # CONFIG_USB_ARCH_HAS_OHCI is not set
# CONFIG_USB_ARCH_HAS_EHCI is not set # CONFIG_USB_ARCH_HAS_EHCI is not set
CONFIG_USB=y CONFIG_USB=y
CONFIG_USB_DEBUG=y | # CONFIG_USB_DEBUG is not set
CONFIG_USB_ANNOUNCE_NEW_DEVICES=y | # CONFIG_USB_ANNOUNCE_NEW_DEVICES is not set
# #
# Miscellaneous USB options # Miscellaneous USB options
# #
# CONFIG_USB_DEVICEFS is not set | CONFIG_USB_DEVICEFS=y
# CONFIG_USB_DEVICE_CLASS is not set | CONFIG_USB_DEVICE_CLASS=y
CONFIG_USB_DYNAMIC_MINORS=y | # CONFIG_USB_DYNAMIC_MINORS is not set
# CONFIG_USB_OTG is not set # CONFIG_USB_OTG is not set
# CONFIG_USB_OTG_WHITELIST is not set # CONFIG_USB_OTG_WHITELIST is not set
# CONFIG_USB_OTG_BLACKLIST_HUB is not set # CONFIG_USB_OTG_BLACKLIST_HUB is not set
# CONFIG_USB_MON is not set # CONFIG_USB_MON is not set
# CONFIG_USB_WUSB is not set # CONFIG_USB_WUSB is not set
# CONFIG_USB_WUSB_CBAF is not set # CONFIG_USB_WUSB_CBAF is not set
# #
# USB Host Controller Drivers # USB Host Controller Drivers
# #
# CONFIG_USB_C67X00_HCD is not set # CONFIG_USB_C67X00_HCD is not set
> # CONFIG_USB_OXU210HP_HCD is not set
# CONFIG_USB_ISP116X_HCD is not set # CONFIG_USB_ISP116X_HCD is not set
CONFIG_USB_OHCI_HCD=y | # CONFIG_USB_ISP1760_HCD is not set
# CONFIG_USB_OHCI_BIG_ENDIAN_DESC is not set | # CONFIG_USB_ISP1362_HCD is not set
# CONFIG_USB_OHCI_BIG_ENDIAN_MMIO is not set <
CONFIG_USB_OHCI_LITTLE_ENDIAN=y <
# CONFIG_USB_SL811_HCD is not set # CONFIG_USB_SL811_HCD is not set
# CONFIG_USB_R8A66597_HCD is not set # CONFIG_USB_R8A66597_HCD is not set
# CONFIG_USB_HWA_HCD is not set # CONFIG_USB_HWA_HCD is not set
# CONFIG_USB_MUSB_HDRC is not set | CONFIG_USB_MUSB_HDRC=y
> CONFIG_USB_MUSB_SOC=y
>
You should investigate whether your SoC has support for CONFIG_USB_ARCH_HAS_OHCI and CONFIG_USB_OHCI_HCD (see the .config sketch after the list below).
I suggest you use make menuconfig to enable:
Drivers
  USB support
    Support for Host-side USB
    USB verbose debug messages
    USB announce new devices
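If your SoC does have OHCI support, the relevant .config fragment would end up looking roughly like this (a sketch based on the reference column of the sdiff above; CONFIG_USB_ARCH_HAS_OHCI is selected by the platform, so its availability depends on your SoC):
CONFIG_USB_ARCH_HAS_OHCI=y
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_DEBUG=y
CONFIG_USB_ANNOUNCE_NEW_DEVICES=y
CONFIG_USB_USBNET=m
CONFIG_USB_NET_AX8817X=m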
I am attempting to run my Buildbot master server behind a Cherokee reverse proxy, with the Buildbot instance as Cherokee's information source in a round-robin reverse proxy layout.
This is the buildbot master.cfg configuration file:-
# -*- python -*-
# ex: set syntax=python:
# This is a sample buildmaster config file. It must be installed as
# 'master.cfg' in your buildmaster's base directory.
# This is the dictionary that the buildmaster pays attention to. We also use
# a shorter alias to save typing.
c = BuildmasterConfig = {}
####### BUILDSLAVES
# The 'slaves' list defines the set of recognized buildslaves. Each element is
# a BuildSlave object, specifying a unique slave name and password. The same
# slave name and password must be configured on the slave.
from buildbot.buildslave import BuildSlave
c['slaves'] = [BuildSlave("example-slave", "pass")]
# 'slavePortnum' defines the TCP port to listen on for connections from slaves.
# This must match the value configured into the buildslaves (with their
# --master option)
c['slavePortnum'] = 9989
####### CHANGESOURCES
# the 'change_source' setting tells the buildmaster how it should find out
# about source code changes. Here we point to the buildbot clone of pyflakes.
from buildbot.changes.gitpoller import GitPoller
c['change_source'] = []
c['change_source'].append(GitPoller(
'git://github.com/buildbot/pyflakes.git',
workdir='gitpoller-workdir', branch='master',
pollinterval=300))
####### SCHEDULERS
# Configure the Schedulers, which decide how to react to incoming changes. In this
# case, just kick off a 'runtests' build
from buildbot.schedulers.basic import SingleBranchScheduler
from buildbot.schedulers.forcesched import ForceScheduler
from buildbot.changes import filter
c['schedulers'] = []
c['schedulers'].append(SingleBranchScheduler(
name="all",
change_filter=filter.ChangeFilter(branch='master'),
treeStableTimer=None,
builderNames=["runtests"]))
c['schedulers'].append(ForceScheduler(
name="force",
builderNames=["runtests"]))
####### BUILDERS
# The 'builders' list defines the Builders, which tell Buildbot how to perform a build:
# what steps, and which slaves can execute them. Note that any particular build will
# only take place on one slave.
from buildbot.process.factory import BuildFactory
from buildbot.steps.source import Git
from buildbot.steps.shell import ShellCommand
factory = BuildFactory()
# check out the source
factory.addStep(Git(repourl='git://github.com/buildbot/pyflakes.git', mode='copy'))
# run the tests (note that this will require that 'trial' is installed)
factory.addStep(ShellCommand(command=["trial", "pyflakes"]))
from buildbot.config import BuilderConfig
c['builders'] = []
c['builders'].append(
BuilderConfig(name="runtests",
slavenames=["example-slave"],
factory=factory))
####### STATUS TARGETS
# 'status' is a list of Status Targets. The results of each build will be
# pushed to these targets. buildbot/status/*.py has a variety to choose from,
# including web pages, email senders, and IRC bots.
c['status'] = []
from buildbot.status import html
from buildbot.status.web import authz, auth
authz_cfg=authz.Authz(
# change any of these to True to enable; see the manual for more
# options
auth=auth.BasicAuth([("pyflakes","pyflakes")]),
gracefulShutdown = False,
forceBuild = 'auth', # use this to test your slave once it is set up
forceAllBuilds = False,
pingBuilder = False,
stopBuild = False,
stopAllBuilds = False,
cancelPendingBuild = False,
)
c['status'].append(html.WebStatus(http_port=8010, authz=authz_cfg))
####### PROJECT IDENTITY
# the 'title' string will appear at the top of this buildbot
# installation's html.WebStatus home page (linked to the
# 'titleURL') and is embedded in the title of the waterfall HTML page.
c['title'] = "Pyflakes"
c['titleURL'] = "http://divmod.org/trac/wiki/DivmodPyflakes"
# the 'buildbotURL' string should point to the location where the buildbot's
# internal web server (usually the html.WebStatus page) is visible. This
# typically uses the port number set in the Waterfall 'status' entry, but
# with an externally-visible host name which the buildbot cannot figure out
# without some help.
c['buildbotURL'] = "http://localhost:8010/"
####### DB URL
c['db'] = {
# This specifies what database buildbot uses to store its state. You can leave
# this at its default for all but the largest installations.
'db_url' : "sqlite:///state.sqlite",
}
And this is the cherokee configuration:-
Unfortunately, I get a 502 Bad Gateway when I go to my web URL. On the other hand, I know that my Buildbot master instance is working correctly, because going to the same web URL with :8010 appended gives me the "Welcome to the Buildbot ..." page.
Is your proxy on the same machine as the buildbot? If not, you will need to adjust the URL in cherokee, to point to the machine running buildbot (localhost points to the machine cherokee is running on).
In any case, c['buildbotURL'] should be changed to point to the public URL that the buildbot is available under (i.e. what cherokee exposes, rather than the URL being proxied).
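For example, if Cherokee serves the Buildbot UI at http://build.example.com/ (a placeholder host used only for illustration), master.cfg would be changed like this:
# Advertise the public, proxied address rather than the internal host:port
# that Cherokee forwards to (which stays at http://localhost:8010/).
c['buildbotURL'] = "http://build.example.com/"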